ivan-vasilev / neuralnetworks

java deep learning algorithms and deep neural networks with gpu acceleration


Deep Neural Networks with GPU support

Update: This is a newer version of the framework, which I developed while working at ExB Research. Currently you can build the project, but some of the tests are not working. If you want to access the previous version, it is available in the old branch.

This is a Java implementation of some of the algorithms for training deep neural networks. GPU support is provided via OpenCL and Aparapi. The architecture is designed with modularity, extensibility and pluggability in mind.

Git structure

I'm using the git-flow model. The most stable (but older) sources are available in the master branch, while the latest ones are in the develop branch.

If you want to use the previous Java 7 compatible version you can check out this release.

Neural network types

  • Multilayer perceptron
  • Convolutional networks with max pooling, average pooling and stochastic pooling.
  • Restricted Boltzmann Machine
  • Autoencoder
  • Deep belief network
  • Stacked autoencoder

Training algorithms

  • Backpropagation - supports multilayer perceptrons, convolutional networks and dropout (a minimal weight-update sketch follows this list).
  • Contrastive divergence and persistent contrastive divergence implemented using these and these guidelines.
  • Greedy layer-wise training for deep networks - works for stacked autoencoders and DBNs, and each layer can be pre-trained with any of the supported training algorithms.
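
The backpropagation item above ultimately boils down to a gradient-based weight update. Below is a minimal, self-contained sketch of a stochastic gradient descent step with momentum over flat weight arrays; it is only an illustration of the math (with arbitrary example values) and does not use the library's trainer classes.

```java
// Illustrative SGD-with-momentum update over flat arrays (not the library's API).
public class SgdMomentumSketch {
    public static void main(String[] args) {
        float[] weights = {0.1f, -0.2f, 0.05f};
        float[] gradients = {0.01f, -0.03f, 0.02f};   // dE/dw from backpropagation
        float[] previousDeltas = new float[weights.length];
        float learningRate = 0.01f;                    // arbitrary example values
        float momentum = 0.9f;

        for (int i = 0; i < weights.length; i++) {
            // classic momentum update: delta = -lr * gradient + momentum * previous delta
            float delta = -learningRate * gradients[i] + momentum * previousDeltas[i];
            weights[i] += delta;
            previousDeltas[i] = delta;
        }

        System.out.println(java.util.Arrays.toString(weights));
    }
}
```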

All the algorithms support GPU execution.

Out of the box, the supported datasets are MNIST, CIFAR-10/CIFAR-100, IRIS and XOR, but you can easily implement your own.

Experimental support for RGB image preprocessing operations - affine transformations, cropping, and color scaling (see Generaltest.java -> testImageInputProvider).

Activation functions

  • Sigmoid
  • Tanh
  • ReLU
  • LRN
  • Softplus
  • Softmax

All the functions support GPU execution. They can be applied to all types of networks and all training algorithms. You can also implement new activations.
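
As an illustration of what these functions compute, here is a small standalone sketch of sigmoid, ReLU and softmax applied elementwise to a flat float array. This is not the library's GPU code, just the underlying math.

```java
// Elementwise activation functions applied to a flat array (illustration only).
public class ActivationSketch {
    static void sigmoid(float[] v) {
        for (int i = 0; i < v.length; i++) v[i] = 1f / (1f + (float) Math.exp(-v[i]));
    }

    static void relu(float[] v) {
        for (int i = 0; i < v.length; i++) v[i] = Math.max(0f, v[i]);
    }

    static void softmax(float[] v) {
        float max = Float.NEGATIVE_INFINITY, sum = 0f;
        for (float x : v) max = Math.max(max, x);          // subtract the max for numerical stability
        for (int i = 0; i < v.length; i++) { v[i] = (float) Math.exp(v[i] - max); sum += v[i]; }
        for (int i = 0; i < v.length; i++) v[i] /= sum;
    }

    public static void main(String[] args) {
        float[] a = {-1f, 0f, 2f};
        sigmoid(a);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```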

How to build the library

  • Java 8 is required.
  • To build the project you need Maven.
  • Depending on your environment you might need to download the relevant Aparapi .dll or .so file (located in the root of each Aparapi archive) and add its location to the system PATH variable. A guide on how to set up OpenCL in a Linux environment is available at https://code.google.com/p/aparapi/wiki/DevelopersGuideLinux.

How to run the samples

The samples are organized as unit tests. If you want to see examples on various popular datasets, you can go to nn-samples/src/test/java/com/github/neuralnetworks/samples/.

Library structure

The library consists of four projects:

  • nn-core - contains the full implementation.
  • nn-samples - contains implementations of popular datasets and the accompanying samples (organized as unit tests).
  • nn-performance - some performance metrics.
  • nn-userinterface - unfinished work on visual network representation.

The software design is tiered, each tier depending on the previous ones.

Network architecture

This is the first "tier". Each network is defined by a list of layers. Each layer has a set of connections that link it to the other layers of the network, making the network a directed acyclic graph. This structure can accommodate simple feedforward nets, but also more complex architectures like the one described in http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf. You can also build your own specific networks.
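
To make the structure concrete, below is a minimal sketch of a directed acyclic graph of layers and connections. The LayerNode and Edge types are hypothetical illustrations, not the library's actual layer and connection classes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative representation of a network as a graph of layers and connections
// (hypothetical types, not the library's classes).
class LayerNode {
    final String name;
    final List<Edge> connections = new ArrayList<>();   // links to other layers
    LayerNode(String name) { this.name = name; }
}

class Edge {
    final LayerNode input, output;
    final float[] weights;                               // flat weight array between the two layers
    Edge(LayerNode input, LayerNode output, int size) {
        this.input = input;
        this.output = output;
        this.weights = new float[size];
        input.connections.add(this);
        output.connections.add(this);
    }
}

public class NetworkGraphSketch {
    public static void main(String[] args) {
        LayerNode in = new LayerNode("input");
        LayerNode hidden = new LayerNode("hidden");
        LayerNode out = new LayerNode("output");
        new Edge(in, hidden, 4 * 3);
        new Edge(hidden, out, 3 * 2);
        System.out.println("hidden layer has " + hidden.connections.size() + " connections");
    }
}
```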

Data propagation

This tier propagates data through the network, taking advantage of its graph structure. There are two main base components:

  • LayerCalculator - propagates data through the graph. It receives a target layer and input data clamped to a given layer (considered the input layer). It ensures that the data is propagated through the layers in the correct order and that all the connections in the graph are calculated. For example, during the feedforward phase of backpropagation the training data is clamped to the input layer and is propagated to the target layer (the output layer of the network). In the backpropagation phase the output error derivative is clamped as "input" to the output layer and the weights are updated using breadth-first graph traversal starting from the output layer. Essentially, the role of the LayerCalculator is to provide the order in which the network layers are calculated (see the sketch after this list).
  • ConnectionCalculator - the base class for all neuron types (sigmoid, rectifiers, convolutional, etc.). After the LayerCalculator determines the order in which the layers are calculated, the ConnectionCalculator computes the list of input connections for each layer.
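
Below is a minimal sketch of the kind of breadth-first ordering described in the LayerCalculator item: starting from the target (output) layer, layers are visited in the order in which they should be calculated. The adjacency map and layer names are hypothetical; this is not the library's implementation.

```java
import java.util.*;

// Breadth-first ordering of layers starting from the target layer
// (illustration of the LayerCalculator's role, not the library's code).
public class LayerOrderSketch {
    public static void main(String[] args) {
        // adjacency list of an example network: output <- hidden <- input
        Map<String, List<String>> connections = new HashMap<>();
        connections.put("output", Arrays.asList("hidden"));
        connections.put("hidden", Arrays.asList("input"));
        connections.put("input", Collections.emptyList());

        List<String> order = new ArrayList<>();
        Deque<String> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        queue.add("output");                       // start from the target (output) layer
        while (!queue.isEmpty()) {
            String layer = queue.poll();
            if (!visited.add(layer)) continue;
            order.add(layer);
            queue.addAll(connections.get(layer));  // enqueue the layers feeding into this one
        }
        System.out.println(order);                 // [output, hidden, input]
    }
}
```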

GPU

Most of the ConnectionCalculator implementations are optimized for GPU execution. There are two implementations - Native OpenCL and Aparapi. Aparapi imposes some important restrictions on the code that can be executed on the GPU. The most significant are:

  • only one-dimensional arrays (and variables) of primitive data types are allowed. It is not possible to use complex objects.
  • only member-methods of the Aparapi Kernel class itself are allowed to be called from the GPU executable code.

Therefore, before each GPU calculation all data is converted to one-dimensional arrays and primitive-type variables. Because of this, all Aparapi neuron types use either AparapiWeightedSum (for fully connected layers and weighted sum input functions), AparapiSubsampling2D (for subsampling layers) or AparapiConv2D (for convolutional layers). Most of the data is represented as a one-dimensional array by default (for example Matrix).
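
As an illustration of these restrictions, here is a minimal weighted-sum kernel written directly against the Aparapi API (assuming the com.amd.aparapi packages from the Google Code distribution referenced above). All data lives in one-dimensional primitive arrays and only methods of the Kernel itself are called from the GPU code. It is a simplified stand-in, not the library's AparapiWeightedSum.

```java
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

// Minimal Aparapi weighted-sum kernel: one output neuron per work item,
// all data kept in one-dimensional primitive arrays (illustration only).
public class WeightedSumKernelSketch {
    public static void main(String[] args) {
        final int inputs = 4, outputs = 2;
        final float[] input = {1f, 2f, 3f, 4f};
        final float[] weights = new float[inputs * outputs];   // row-major [output][input]
        final float[] output = new float[outputs];
        for (int i = 0; i < weights.length; i++) weights[i] = 0.1f;

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int o = getGlobalId();                          // one work item per output neuron
                float sum = 0f;
                for (int i = 0; i < inputs; i++) {
                    sum += weights[o * inputs + i] * input[i];
                }
                output[o] = sum;
            }
        };
        kernel.execute(Range.create(outputs));                  // falls back to a Java thread pool if no GPU is available
        kernel.dispose();

        System.out.println(java.util.Arrays.toString(output));
    }
}
```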

The native OpenCL implementation does not have these restrictions.

Training

All trainers use the Trainer base class. They are optimized to run on the GPU, but you can plug in other implementations and new training algorithms. The training procedure has training and testing phases. Each Trainer receives parameters (for example learning rate, momentum, etc.) via Properties (a HashMap). For the properties supported by each trainer, please check the TrainerFactory class.
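
As a sketch of the idea, the snippet below shows hyperparameters passed around in a plain map; the key names here are hypothetical placeholders, so check TrainerFactory for the properties each trainer actually supports.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example of map-based trainer configuration; the real key names live in
// the library's TrainerFactory, these are placeholders.
public class TrainerPropertiesSketch {
    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<>();
        properties.put("learningRate", 0.01f);   // hypothetical key
        properties.put("momentum", 0.5f);        // hypothetical key
        properties.put("weightDecay", 0.0001f);  // hypothetical key

        // a trainer would read its hyperparameters from the map before training
        float learningRate = (Float) properties.get("learningRate");
        System.out.println("learning rate = " + learningRate);
    }
}
```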

Input data

Input is provided to the neural network by the trainers via the TrainingInputProvider interface. Each TrainingInputProvider provides training samples in the form of TrainingInputData (the default implementation is TrainingInputDataImpl). The input can be modified by a list of modifiers - for example MeanInputFunction (for subtracting the mean value) and ScalingInputFunction (for scaling within a range). Currently MnistInputProvider and IrisInputProvider are implemented.
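
As a sketch of how a custom dataset can be fed to the network, here is a tiny in-memory XOR data source. The class and method names are hypothetical stand-ins; a real implementation would implement the TrainingInputProvider interface instead.

```java
// Hypothetical in-memory XOR data source; a real provider would implement the library's
// TrainingInputProvider / TrainingInputData interfaces instead of these stand-ins.
public class XorInputSketch {
    static final float[][] INPUTS  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    static final float[][] TARGETS = {{0}, {1}, {1}, {0}};

    private int next;

    float[] nextInput()  { return INPUTS[next % INPUTS.length]; }
    float[] nextTarget() { return TARGETS[next++ % TARGETS.length]; }

    public static void main(String[] args) {
        XorInputSketch provider = new XorInputSketch();
        for (int i = 0; i < 4; i++) {
            float[] in = provider.nextInput();
            float[] target = provider.nextTarget();
            System.out.println(java.util.Arrays.toString(in) + " -> " + java.util.Arrays.toString(target));
        }
    }
}
```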

Author

Ivan Vasilev (ivanvasilev [at] gmail (dot) com)

License

MIT License
