MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix different flavours of deep learning programs to maximize efficiency and productivity.
- MXNet on Android
- Distributed Training
- Guide to Creating New Operators (Layers)
- Minimum MXNet Library in One File
- Training Deep Net on 14 Million Images on A Single Machine
- MXNet.jl Julia binding initial release
- Design Note: Squeeze the Memory Consumption of Deep Learning
- Documentation and Tutorials
- Open Source Design Notes
- Code Examples
- Pretrained Models
- Installation
- Features
- Contribute to MXNet
- License
- To Mix and Maximize
  - Mix all flavours of programming models to maximize flexibility and efficiency.
- Lightweight, scalable and memory efficient
  - Minimal build dependencies; scales to multiple GPUs with very low memory usage.
- Auto parallelization
  - Write numpy-style NDArray GPU programs, which are automatically parallelized (see the sketch after this list).
- Language agnostic
  - With support for Python, C++ and R, and more to come.
- Cloud friendly
  - Directly load/save from S3, HDFS and Azure.
- Easy extensibility
  - Extending MXNet requires no GPU programming.
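The auto-parallelization and cloud-friendly points above are easiest to see in code. The snippet below is a minimal Python sketch rather than an official tutorial; the array shapes, file name and the S3 path mentioned in the comments are placeholder assumptions.

```python
import mxnet as mx

# Pick a device; switch to mx.gpu(0) to run the exact same code on a GPU.
ctx = mx.cpu()

a = mx.nd.ones((1000, 1000), ctx=ctx)
b = mx.nd.ones((1000, 1000), ctx=ctx) * 2

# The two results below do not depend on each other, so MXNet's dependency
# engine is free to schedule them in parallel; the Python calls return
# immediately and the computation runs asynchronously.
c = mx.nd.dot(a, b)
d = a + b

# asnumpy() waits for the results and copies them back to NumPy arrays.
print(c.asnumpy().sum(), d.asnumpy().sum())

# NDArrays can be saved and loaded directly; with an S3/HDFS-enabled build,
# a URI such as "s3://bucket/params.nd" (placeholder bucket) can be used
# here instead of a local path.
mx.nd.save("params.nd", {"c": c, "d": d})
restored = mx.nd.load("params.nd")
```

The dependency tracking behind these calls is what lets the same numpy-style code be parallelized automatically and run across devices with low memory overhead, as described in the feature list above.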
- To report bugs, please use the mxnet/issues page.
© Contributors, 2015. Licensed under an Apache-2.0 license.
MXNet was initiated and designed in collaboration by authors from cxxnet, minerva and purine2. The project reflects what we have learnt from those past projects: it combines their important flavours, aiming to be efficient, flexible and memory efficient.