Neural Networks | Conclusions

  • Neural networks were originally composed of perceptrons, which output a binary value (0 or 1)

  • Contemporary neural networks instead use nodes with non-linear activation functions, which output continuous rather than binary values

  • Common activation functions include sigmoid, TanH, Rectified Linear Unit (ReLU), Leaky ReLU, Exponential Linear Unit (ELU), and SoftPlus; a code sketch of each appears after this list

  • Neural networks are trained by adjusting their weights and biases with gradient descent; the required gradients are computed through a process called backpropagation

  • Backpropagation starts at the last layer and iterates backward one layer at a time. Using the chain rule, it computes the gradient of the loss function with respect to each weight and bias (see the worked sketch below)
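
The activation functions listed above are simple enough to implement directly. Below is a minimal NumPy sketch of each; the `alpha` values for Leaky ReLU and ELU are conventional defaults assumed here, not values taken from this article.

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes inputs into the range (-1, 1)
    return np.tanh(x)

def relu(x):
    # Zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but with a small slope (alpha) for negative inputs
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential curve for negative inputs, identity for positive
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def softplus(x):
    # A smooth approximation of ReLU: log(1 + e^x)
    return np.log1p(np.exp(x))
```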
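
To make the backward pass concrete, here is a minimal sketch of backpropagation with gradient descent for a tiny two-layer network. The layer sizes, sigmoid hidden activation, mean-squared-error loss, and learning rate are all illustrative assumptions, not details from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target output
X = rng.standard_normal((4, 3))
y = rng.standard_normal((4, 1))

# Weights and biases for a 3 -> 5 -> 1 network
W1, b1 = rng.standard_normal((3, 5)), np.zeros((1, 5))
W2, b2 = rng.standard_normal((5, 1)), np.zeros((1, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(100):
    # Forward pass
    z1 = X @ W1 + b1
    a1 = sigmoid(z1)
    y_hat = a1 @ W2 + b2             # linear output layer
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: start at the last layer, apply the chain rule,
    # and move back one layer at a time.
    dz2 = 2 * (y_hat - y) / len(X)   # dLoss/d(output pre-activation)
    dW2 = a1.T @ dz2                 # dLoss/dW2
    db2 = dz2.sum(axis=0, keepdims=True)

    da1 = dz2 @ W2.T                 # propagate gradient to hidden layer
    dz1 = da1 * a1 * (1 - a1)        # sigmoid derivative: a * (1 - a)
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient-descent update of every weight and bias
    lr = 0.1
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

Note how `dz2` is computed first and each earlier gradient reuses the one after it; that reuse of intermediate results is what makes backpropagation efficient.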
