Implementation of Multi-Layer Perceptron Artificial Neural Network in Python 3. Heavily inspired by Rising Odegua's MLP articles and Patrick David's All the Backpropagation derivatives
A neuron accepts a list of variables as input. Each variable is multiplied by its corresponding weight, the products are summed, and the sum is passed to an activation function, which calculates the output number
A perceptron is a neural network built from a single neuron
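A minimal sketch of a single neuron in Python, assuming a sigmoid activation; the names `inputs`, `weights`, and `bias` are illustrative, not this repo's API:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # Multiply inputs by their weights, sum the products, add the bias,
    # then pass the result through the activation function
    z = np.dot(inputs, weights) + bias
    return sigmoid(z)

print(neuron(np.array([0.5, 0.2]), np.array([0.4, -0.6]), 0.1))
```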
A multilayer perceptron consists of multiple neurons
A layer is a group of neurons
Layers between the input and output layers are called hidden layers
The input layer is not counted as a layer of the network
The number of nodes in the input layer equals the number of features in the input data
The number of nodes (variables) in the output layer depends on the type of desired prediction
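For example, a minimal sketch of choosing layer sizes, assuming a binary classification task on data with four features (the sizes below are illustrative):

```python
n_features = 4    # input layer: one node per input feature
n_hidden = 5      # hidden layer size is a free design choice
n_outputs = 1     # one output node for a binary (0/1) prediction
layer_sizes = [n_features, n_hidden, n_outputs]
```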
A weight represents how important a given feature is. It is multiplied by the feature's value
A bias is a starting value for a given neuron. It is added to the sum of the products of weights and feature values
An activation function determines whether a neuron's contribution to the neural network should be taken into account or not
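As a sketch, assuming the sigmoid activation used throughout this document, small weighted sums are pushed toward 0 and large ones toward 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-5.0, 0.0, 5.0])))  # ~[0.007, 0.5, 0.993]
```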
Forward propagation is the first part of the training process for a network. It consists of the following steps (a code sketch follows the list):
- Multiply each input feature by its corresponding, randomly initialized weight of the first layer, sum the products, and add the bias
- Pass this result as input to the activation function
- Use the output of the activation function as the features for the next layer's weights, and repeat this step until the last layer
- Pass the last result to the output activation function
- Compute the loss function from the last result (the prediction) and the actual (true) values
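A minimal sketch of these steps for a network with one hidden layer, assuming sigmoid activations and a cross-entropy loss; the shapes and names (`W1`, `b1`, ...) are illustrative assumptions, not this repo's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    z1 = W1 @ x + b1      # weighted sum of input features plus bias
    a1 = sigmoid(z1)      # hidden layer activations
    z2 = W2 @ a1 + b2     # weighted sum of hidden activations plus bias
    a2 = sigmoid(z2)      # output activation (the prediction)
    return z1, a1, z2, a2

def cross_entropy(y_true, y_pred):
    # Negative log likelihood for a single binary target
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.default_rng(0)
x = np.array([0.5, 0.2, 0.1])                    # 3 input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # 4 hidden neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # 1 output neuron
_, _, _, prediction = forward(x, W1, b1, W2, b2)
print(prediction, cross_entropy(1.0, prediction))
```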
Backpropagation is the learning process of a neural network: it improves the network's weights and biases.
The network evaluates the output obtained with various weights using the loss function. A decrease in loss means that the weights are getting better
Backpropagation uses derivatives of the loss with respect to all previously calculated values: weights, biases, and activation function results
Input values are not differentiated
Backward propagation steps, based on Patrick David's All the Backpropagation derivatives (a code sketch of the full backward pass follows the list):
- Derivative with respect to (wrt) activation function, $\frac{\partial L}{\partial a}$:
The negative log likelihood (cross-entropy) loss is $L = -\big(y \log a + (1 - y) \log (1 - a)\big)$
Its derivative is $\frac{\partial L}{\partial a} = -\frac{y}{a} + \frac{1 - y}{1 - a}$
- Derivative of sigmoid, $\frac{\partial a}{\partial z}$:
The sigmoid is $a = \sigma(z) = \frac{1}{1 + e^{-z}}$
Its derivative is $\frac{\partial a}{\partial z} = a (1 - a)$
- Derivative wrt linear function, $\frac{\partial L}{\partial z}$:
The linear function is $z = w x + b$
By the chain rule, the derivative is $\frac{\partial L}{\partial z} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} = a - y$
- Derivative wrt weights, $\frac{\partial z}{\partial w}$:
The derivative is $\frac{\partial z}{\partial w} = x$ (the input to the layer), so $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial z} \cdot x$
- Derivative wrt bias, $\frac{\partial L}{\partial b}$:
Since $\frac{\partial z}{\partial b} = 1$, the derivative is $\frac{\partial L}{\partial b} = \frac{\partial L}{\partial z}$
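A minimal sketch of the backward pass for the two-layer network from the forward-propagation sketch above, assuming sigmoid activations and cross-entropy loss; the names mirror that sketch and are illustrative assumptions:

```python
import numpy as np

def backward(x, y_true, z1, a1, z2, a2, W2):
    # Output layer: dL/dz2 = a2 - y (cross-entropy combined with sigmoid)
    dz2 = a2 - y_true
    dW2 = np.outer(dz2, a1)       # dL/dW2 = dL/dz2 * dz2/dW2, with dz2/dW2 = a1
    db2 = dz2                     # dz2/db2 = 1
    # Hidden layer: propagate the error back through W2 and the sigmoid
    da1 = W2.T @ dz2
    dz1 = da1 * a1 * (1 - a1)     # sigmoid derivative a * (1 - a)
    dW1 = np.outer(dz1, x)
    db1 = dz1
    return dW1, db1, dW2, db2
```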
Training (optimization) means looking for the best possible weights and biases in the network
Repeat these three steps (a training-loop sketch follows the list):
- Forward propagation
- Backward propagation
- Update weights with calculated gradients
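A minimal sketch of that loop, assuming the `forward` and `backward` helpers sketched above; the learning rate and number of epochs are illustrative assumptions:

```python
learning_rate = 0.1
for epoch in range(1000):
    # 1. Forward propagation
    z1, a1, z2, a2 = forward(x, W1, b1, W2, b2)
    # 2. Backward propagation (y_true = 1.0 is an example target)
    dW1, db1, dW2, db2 = backward(x, 1.0, z1, a1, z2, a2, W2)
    # 3. Update weights and biases with the calculated gradients
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2
```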
When implementing the network, I relied on these great online resources:
- https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html
- https://medium.com/@pdquant/all-the-backpropagation-derivatives-d5275f727f60
- https://heartbeat.comet.ml/building-a-neural-network-from-scratch-using-python-part-1-6d399df8d432
- https://heartbeat.comet.ml/building-a-neural-network-from-scratch-using-python-part-2-testing-the-network-c1f0c1c9cbb0
- https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
- https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a
© Copyright Jędrzej Paweł Maczan. Made in Poland, 2022 - 2023