isaac868 / SENG475-Neural-Network-Project

A C++ implementation of a classification neural network made for the final project of SENG 475 at UVic.

Compiling:
Let $INSTALL_DIR denote the directory into which this software is to be installed.
Commands to compile (must be within project directory):
cmake -H. -Btmp_cmake -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR -DCMAKE_BUILD_TYPE=Release
cmake --build tmp_cmake --clean-first --target install

To run the demo, run the command:
$INSTALL_DIR/bin/demo

Note: the project must be compiled in Release mode, as it will be quite slow otherwise.

Data format:
Training and test data are expected to be formatted as a series of comma-separated value (CSV) rows.
The last element in each row is treated as the label. Therefore, if a network is being trained with a
topology that specifies a 10-node input layer, the training data should have 11 elements per row: 10 for
the input data and one for the label.
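The row layout above can be sketched as follows. This is an illustrative Python snippet, not part of the project; the 3-node topology and the sample values are made up for the example.

```python
import csv
import io

def load_rows(csv_text, num_inputs):
    """Split each CSV row into an input vector and a label.

    Assumes the format described above: num_inputs feature values
    followed by one label as the final element of each row.
    """
    samples = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue
        if len(row) != num_inputs + 1:
            raise ValueError(f"expected {num_inputs + 1} elements, got {len(row)}")
        features = [float(x) for x in row[:num_inputs]]
        label = row[-1]  # the last element is the label
        samples.append((features, label))
    return samples

# Example: a network with a 3-node input layer needs 4 elements per row.
rows = "0.1,0.2,0.3,cat\n0.4,0.5,0.6,dog\n"
print(load_rows(rows, 3))
```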

The format of the trained network data is undocumented, as it is not intended to be parsed or generated by
the user.

The output of test_network is also formatted as CSV, which enables easy parsing by other command-line
utilities such as awk or column.


Details on test_network output:

test_network has two output types, one when it is given labeled test data and one
when it is given unlabeled test data.

When given unlabeled test data, test_network simply outputs a CSV list of all of the inputs and their
classifications. The precise format of this data is input_data,prediction,confidence. For models that have
large input data vectors (such as MNIST), utilities like awk are recommended for extracting only the
prediction and the confidence.
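The extraction described above can also be done in a few lines of Python: since the prediction and confidence are the last two fields of each row, they can be pulled out regardless of the input vector's size. The sample output below is hypothetical, invented for the example (the real field values depend on the model).

```python
import csv
import io

def predictions(csv_text):
    """Keep only the last two fields (prediction, confidence) of each row."""
    return [(row[-2], float(row[-1]))
            for row in csv.reader(io.StringIO(csv_text)) if row]

# Hypothetical test_network output for two 4-element input vectors:
out = "0.0,0.1,0.9,0.0,2,0.87\n0.8,0.1,0.0,0.1,0,0.91\n"
print(predictions(out))  # [('2', 0.87), ('0', 0.91)]
```

An equivalent awk one-liner, relying only on the fields being comma-separated, would be along the lines of `awk -F, '{print $(NF-1) "," $NF}'`.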

When given labeled data, test_network generates a confusion matrix that can be useful for diagnosing the
performance of the model at a very granular level. Rows of this table are actual values, and columns are
predicted values. The overall accuracy of the model is located in the top-left cell of the confusion
matrix, and correct classifications lie along the diagonal. In the following example, a is correctly
classified 4 times and misclassified as b once; b is misclassified as a 50% of the time and correctly
classified the other 50%; c was correctly classified every time.

78	a	b	c
a	4	1	0
b	2	2	0
c	0	0	5
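The numbers in the example check out as follows (a Python sketch, assuming the top-left value is the overall accuracy expressed as a truncated percentage):

```python
# Confusion matrix from the example: rows are actual, columns are predicted.
matrix = {
    "a": {"a": 4, "b": 1, "c": 0},
    "b": {"a": 2, "b": 2, "c": 0},
    "c": {"a": 0, "b": 0, "c": 5},
}

correct = sum(matrix[k][k] for k in matrix)                # diagonal: 4 + 2 + 5 = 11
total = sum(sum(row.values()) for row in matrix.values())  # 14 samples in total
print(int(100 * correct / total))  # prints 78, the top-left value
```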

Project presentation video:
https://www.youtube.com/watch?v=IY_OtZ9Xe_I&t

References:
https://ml4a.github.io/ml4a/how_neural_networks_are_trained/
http://neuralnetworksanddeeplearning.com/chap2.html
https://ruder.io/optimizing-gradient-descent/
https://stats.stackexchange.com/questions/154879/a-list-of-cost-functions-used-in-neural-networks-alongside-applications
https://machinelearningmastery.com/choose-an-activation-function-for-deep-learning/
https://en.wikipedia.org/wiki/Activation_function
https://en.wikipedia.org/wiki/Backpropagation
https://www.youtube.com/watch?v=tIeHLnjs5U8&t=532s
https://www.youtube.com/watch?v=aircAruvnKk&t=873s
http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf


License: MIT License

