+ 4

Why does relu train faster than sigmoid?

6th May 2020, 3:26 AM
Moses Odhiambo
2 Answers
+ 2
Efficiency: ReLU is cheaper to compute than the sigmoid function (a max against zero versus an exponential and a division), and its derivative is cheaper still (just a 0/1 indicator). This makes a significant difference to training and inference time for neural networks: only a constant factor, but constants can matter.
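To make the cost comparison above concrete, here is a minimal NumPy sketch (the function names are my own, not from any particular framework) of both activations and their derivatives; you can time them yourself to see the constant-factor gap:

```python
import timeit
import numpy as np

def relu(x):
    # Forward pass: elementwise max(x, 0) -- no transcendental functions.
    return np.maximum(x, 0.0)

def relu_grad(x):
    # Derivative: 1 where x > 0, else 0 -- a single comparison.
    return (x > 0).astype(x.dtype)

def sigmoid(x):
    # Forward pass: requires an exponential and a division per element.
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative: sigma(x) * (1 - sigma(x)) -- needs the forward value too.
    s = sigmoid(x)
    return s * (1.0 - s)

if __name__ == "__main__":
    x = np.random.randn(1_000_000)
    for name, fn in [("relu", relu), ("relu_grad", relu_grad),
                     ("sigmoid", sigmoid), ("sigmoid_grad", sigmoid_grad)]:
        t = timeit.timeit(lambda: fn(x), number=50)
        print(f"{name:13s} {t:.3f} s for 50 runs")
```

The exact speedup depends on hardware and the NumPy build, so the timings are illustrative rather than a fixed ratio.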
6th May 2020, 6:31 AM
SITHU Nyein
0
SITHU Nyein does it also have to do with the fact that ReLU has less noise (it deactivates neurons completely for inputs below zero, unlike sigmoid)?
6th May 2020, 6:36 AM
Moses Odhiambo