sm823zw / Neural-Network-Backprop-using-C

In this repo, the backpropagation algorithm for feedforward neural networks is implemented from scratch in C.

Neural-Network-using-C

In this project, the backpropagation algorithm for feedforward neural networks is implemented in C. A data structure for storing and manipulating the network's weights is defined, along with routines for forward propagation, backpropagation, and weight updates. The user can specify the number of hidden layers, the number of neurons in each hidden layer, the loss function, the optimizer, and the activation function.

Backpropagation updates the weights of an artificial neural network using gradient descent optimization. Given a network and a loss function, the algorithm computes the gradient of the loss with respect to each of the network's weights. The errors are computed in a backward direction, from the output layer, through the hidden layers, to the input layer, and the weights are then updated using the resulting gradients.

For demonstration, a neural network model was trained and tested on the MNIST dataset for handwritten digit recognition.

