pierregarreau / play-neuralnetwork

A toy library implementing feed forward neural networks

Home Page: http://pierre.garreau.de/blog/nn-datastructure

Neural Network Playground

This repository contains the implementation of a feed forward neural network library. It is intended for educational purposes and was written to play with (1) the concepts of the forward pass and backpropagation and (2) data structures, as discussed in the blog post linked above.
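
As a quick refresher on those two concepts, here is a minimal NumPy sketch (not code from this library) of one forward pass through a single sigmoid layer and the backpropagated gradients of a squared-error loss:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: 4 samples with 2 features each; weights chosen at random
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W = np.random.randn(2, 1)
b = np.zeros(1)

# forward pass: affine transform followed by the activation
z = X @ W + b
a = sigmoid(z)

# backward pass: chain rule through the loss and the activation
d_loss = a - y                 # d(0.5 * ||a - y||^2) / da
d_z = d_loss * a * (1.0 - a)   # sigmoid'(z) = a * (1 - a)
grad_W = X.T @ d_z             # gradient of the loss w.r.t. the weights
grad_b = d_z.sum(axis=0)       # gradient of the loss w.r.t. the bias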

To start the project, make sure you have pipenv installed. You can then sync the dependencies and run main.py:

cd app
pipenv sync
pipenv run python main.py

Not surprisingly, this toy library sets up a neural network model in a fashion similar to Keras. You first define an architecture for your network:

layers = [(2, 'sigmoid'), (2, 'sigmoid'), (1, 'sigmoid')]
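# each tuple is assumed to describe one layer as (number of units, activation name)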
neural_net = NeuralNet(layers)

Currently, only sigmoid activations are available. You can, however, add your own in the activation.py factory; a hypothetical sketch follows.
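
The exact interface of the activation.py factory is not shown here, so the snippet below is only an assumption: it imagines the factory as a mapping from an activation name (the string used in the layer tuples) to a pair of functions, the activation and its derivative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

# hypothetical factory shape: name -> (activation, derivative);
# the real activation.py may organise this differently
ACTIVATIONS = {
    'sigmoid': (sigmoid, lambda z: sigmoid(z) * (1.0 - sigmoid(z))),
    'relu': (relu, lambda z: (z > 0).astype(float)),
}

Then you specify your optimizer: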

optimizer = GradientDescent(options={
    'optimizer': '',
    'maxiter': 1000,
    'tol': 1e-7,
    'jac': True,
    'learning_rate': 1.0
})
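
The options above read like the knobs of a plain gradient-descent loop. The sketch below is not the library's implementation, just the general idea under that assumption: take at most maxiter steps against the gradient, scale each step by learning_rate, treat the objective as returning both the loss and its gradient (which is what 'jac': True usually signals), and stop early once an update drops below tol.

import numpy as np

def gradient_descent(objective, params, learning_rate=1.0, maxiter=1000, tol=1e-7):
    """Minimise an objective that returns (loss, gradient), as 'jac': True suggests."""
    loss = None
    for _ in range(maxiter):
        loss, grad = objective(params)     # objective supplies its own gradient
        step = learning_rate * grad        # scale the descent direction
        params = params - step
        if np.linalg.norm(step) < tol:     # stop once updates become negligible
            break
    return params, loss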

Finally, you choose the loss function you wish to use:

loss = Loss.crossentropy
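
For reference, binary cross-entropy compares predicted probabilities with the true labels. A minimal NumPy version (not necessarily identical to Loss.crossentropy) could look like this, using the same (predicted, target) argument order as the evaluation call further below:

import numpy as np

def crossentropy(predicted, target, eps=1e-12):
    # clip to avoid log(0), then average the negative log-likelihood over samples
    p = np.clip(predicted, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))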

One can then fit and predict in a straightforward fashion:

res = neural_net.fit(X_train, y_train, optimizer, loss)
predicted = neural_net.predict(X_test)
for p, y in zip(predicted, y_test):
    print(p, y)
test_loss = loss(predicted, y_test)
print('Loss: ', test_loss)
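
For a concrete end-to-end run, the 2-2-1 architecture defined earlier is just large enough to learn XOR, so one hypothetical way to build the X_train, y_train, X_test and y_test used above is:

import numpy as np

# XOR truth table: two binary inputs, one binary target
X_train = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y_train = np.array([[0.], [1.], [1.], [0.]])

# with only four possible inputs, the usual toy check is to evaluate on the same points
X_test, y_test = X_train, y_train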
