Why does MATLAB automatically set the activation functions for a neural network like this?
I am wondering why MATLAB always automatically chooses tan-sigmoid as the activation function for the hidden layer and purelin for the output layer. If this choice is based on a study showing that these activation functions are more efficient than others, please let me know.
0 comments
Answers (3)
Greg Heath on 29 Jun 2019
That is a standard configuration for a neural net. Its operation is explained in every elementary text.
Thank you for formally accepting my answer
Greg
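For reference, a minimal sketch (assuming the shallow-network functions of the Deep Learning Toolbox, formerly the Neural Network Toolbox) of where these defaults live and how to query or override them:

```matlab
% Sketch: inspect the default transfer functions MATLAB assigns
% to a shallow fitting network created with fitnet.
net = fitnet(10);               % 10 hidden neurons, default settings

net.layers{1}.transferFcn       % returns 'tansig'  (hidden layer)
net.layers{2}.transferFcn       % returns 'purelin' (output layer)

% The defaults can be overridden before training, e.g.:
net.layers{1}.transferFcn = 'logsig';   % log-sigmoid hidden layer instead
```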
3 comments
Greg Heath on 30 Jul 2019 (edited: Greg Heath on 30 Jul 2019)
Sorry, I lost all of my several hundred books via a moving-van error.
See your library.
Greg
Greg Heath on 30 Jul 2019
The simplest useful approximation is a series of blocks with different heights and widths.
The simplest useful DIFFERENTIABLE approximation is a series of ROUNDED blocks with different heights and widths.
Combining sigmoids fits the bill!
GREG
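To illustrate this idea, a small sketch in plain MATLAB (using the built-in tanh, which is the same curve as the toolbox tansig): the difference of two shifted sigmoids forms one rounded block, and sums of such blocks can approximate an arbitrary curve.

```matlab
% One "rounded block" built from two shifted, scaled sigmoids.
x = linspace(-6, 6, 500);

% Block centred at 0, roughly 2 wide and 1 high:
block = 0.5 * (tanh(2*(x + 1)) - tanh(2*(x - 1)));

% Adding a second, shifted and half-height block gives a crude
% piecewise approximation of a more general target curve:
approx = block + 0.25 * (tanh(2*(x - 2.5)) - tanh(2*(x - 4.5)));

plot(x, block, x, approx);
legend('single rounded block', 'sum of two rounded blocks');
```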
Sai Bhargav Avula on 16 Aug 2019
As mentioned by others, that's the default setup in MATLAB.
Coming to the comparison between different activation functions:
It is generally recommended to use ReLU as the activation function. If your model suffers from dead neurons during training, use leaky ReLU or the Maxout function instead.
Sigmoid and tanh are generally not preferred because they suffer from the vanishing gradient problem, which makes training difficult and degrades the accuracy and performance of a deep neural network model.
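To make this concrete, a minimal sketch (assuming the Deep Learning Toolbox; the layer-array style and featureInputLayer require a reasonably recent release) of how ReLU-style activations can be used in both frameworks:

```matlab
% Shallow-network framework: 'poslin' is the ReLU-shaped transfer function.
net = fitnet(10);
net.layers{1}.transferFcn = 'poslin';   % ReLU hidden layer instead of tansig

% Deep-network framework: relu / leaky-ReLU layers, e.g. for a
% regression problem with 8 input features:
layers = [
    featureInputLayer(8)
    fullyConnectedLayer(32)
    leakyReluLayer(0.01)        % leaky ReLU helps avoid dead neurons
    fullyConnectedLayer(1)
    regressionLayer];
```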