+ 1

Why is the sigmoid function used in machine learning?

Sorry, I don't speak English very well.

27th Apr 2018, 5:11 PM
BenTwo
10 Answers
+ 3
The sigmoid function (or logistic function) is used in logistic regression. Unlike linear regression, which looks to predict a VALUE (from a continuous range), logistic regression looks for a CLASS (a discrete category). In other words, in linear regression you expect to find a function which would "calculate" the ultimate value, while logistic regression aims to classify the object as either being in a given state or not (in binary classification) or as belonging to a given category. In order for the classifier to predict the category, you use the sigmoid function, which takes another function as its argument (in logistic regression, a linear combination of the features). Its result is a value between 0 and 1, and (usually) you attribute a "positive" prediction to sigmoid values greater than 0.5 and a "negative" one to those below. A sigmoid value of exactly 0.5 marks the so-called "decision boundary", where the probability that your hypothesis (usually that the observation belongs to the "positive" class) is true equals the probability that it is false. Check out my code on logistic regression, I tried to describe it in as much detail as possible: https://code.sololearn.com/cwstTU4ohOr9/?ref=app
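As a minimal sketch of this in plain Python (not the linked code), with the 0.5 decision boundary made explicit:

import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1 / (1 + math.exp(-z))

def classify(z, threshold=0.5):
    # "Positive" class above the decision boundary, "negative" below
    return 1 if sigmoid(z) >= threshold else 0

for z in (-3, -0.5, 0, 0.5, 3):
    print(z, round(sigmoid(z), 4), classify(z))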
27th Apr 2018, 7:31 PM
Kuba Siekierzyński
+ 3
No problem :) In case you'd be interested in how the properties of three functions used as activations in neural networks differ (a similar case to classification, as the network's neuron has to make a decision based on the activation function's output), here's the comparison below. Notice how the sigmoid favors values more distant from zero: the farther the value is from zero, the more confident the sigmoid is, saturating toward either zero or one more quickly. https://code.sololearn.com/cXLVvxXZbCDE/?ref=app
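A rough side-by-side of the three activations (a sketch, not the linked comparison code):

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def relu(z):
    return max(0.0, z)

print(f"{'z':>4} {'sigmoid':>8} {'tanh':>8} {'relu':>6}")
for z in (-4, -1, 0, 1, 4):
    print(f"{z:>4} {sigmoid(z):>8.4f} {math.tanh(z):>8.4f} {relu(z):>6.1f}")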
27th Apr 2018, 8:21 PM
Kuba Siekierzyński
+ 2
It quickly sides with one of the options and favors strong indications (or signals), which either activate a particular neuron or not. There are some papers, though, that recommend using ReLU instead, as in practice it works similarly well and is computationally much cheaper.
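A crude micro-benchmark hints at the cost difference (absolute timings vary by machine, so this is only illustrative):

import timeit

# ReLU is a single comparison; the sigmoid needs an exponential
t_sig = timeit.timeit('1 / (1 + math.exp(-0.5))', setup='import math', number=1_000_000)
t_relu = timeit.timeit('max(0.0, 0.5)', number=1_000_000)
print(f"sigmoid: {t_sig:.3f}s, relu: {t_relu:.3f}s")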
27th Apr 2018, 8:36 PM
Kuba Siekierzyński
+ 2
You can use any activation function you want, as long as it is supported by the library you are using. It's just that the sigmoid is very good for binary classification, as it gives you a definite indication in a normalized range (0, 1), regardless of the inner function's value. It is thus more intuitive to interpret: we predict everything less than 0.5 as 0 and everything greater than 0.5 as 1.
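A small sketch of that interpretation, with made-up raw scores standing in for the inner function's values:

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

raw_scores = [-2.0, -0.1, 0.3, 1.7]           # hypothetical outputs of the inner function
probs = [sigmoid(z) for z in raw_scores]      # normalized into (0, 1)
preds = [1 if p > 0.5 else 0 for p in probs]  # thresholded at the decision boundary
print(preds)  # [0, 0, 1, 1]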
27th Apr 2018, 8:53 PM
Kuba Siekierzyński
+ 1
Ok
27th Apr 2018, 8:37 PM
BenTwo
+ 1
Is there another way to code a neural network without the sigmoid function?
27th Apr 2018, 8:44 PM
BenTwo
+ 1
Ok thanks!!!!
27th Apr 2018, 8:54 PM
BenTwo
0
Thanks!!!!
27th Apr 2018, 7:39 PM
BenTwo
0
I made some code that prints the value of the sigmoid function following the equation. I think that if it runs inside a while loop, it can have a strong effect on the output of a neural network.
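(The exact code isn't shown, but a loop like that might look something like this sketch:)

import math

z = -5.0
while z <= 5.0:
    # Evaluate the sigmoid equation at each step
    print(f"sigmoid({z:+.1f}) = {1 / (1 + math.exp(-z)):.4f}")
    z += 1.0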
27th Apr 2018, 8:30 PM
BenTwo
0
Please, I need help with this: training the following perceptron (somehow I can't post it on the board, I get a "something went wrong" error). You are training the following perceptron. The neuron in this perceptron has a sigmoid activation function, represented by the equation in the diagram; see the full details here: https://ibb.co/wytpqbq. Using the update function for the weights (also in the diagram), with a learning rate of η = 1, and assuming that the current weights are w1 = 0.2 and w2 = 0.3, compute one iteration of the new weights by computing the error and applying it back to the inputs. How do I solve this with Python, or any other way? Thanks
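One way to set up a single iteration in Python, using the common gradient-descent update for a sigmoid neuron, Δwᵢ = η · (t − o) · o · (1 − o) · xᵢ; the inputs and target live in the linked diagram and aren't reproduced here, so the values below are placeholders (and the update rule should be checked against the one in the diagram):

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

eta = 1.0        # learning rate, given in the question
w = [0.2, 0.3]   # current weights w1, w2, given in the question
x = [1.0, 0.5]   # PLACEHOLDER inputs x1, x2 (the real ones are in the diagram)
t = 1.0          # PLACEHOLDER target output (also in the diagram)

o = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))    # forward pass
delta = (t - o) * o * (1 - o)                        # error times sigmoid derivative
w = [wi + eta * delta * xi for wi, xi in zip(w, x)]  # update each weight
print("output:", round(o, 4), "new weights:", [round(wi, 4) for wi in w])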
6th Jan 2021, 1:03 PM
Chris Owen