Mobile
This section describes how to deploy PaddlePaddle to mobile devices, along with some deployment optimization methods and benchmarks.
How to build PaddlePaddle for mobile
- Build PaddlePaddle for Android
- Build PaddlePaddle for iOS
- Build PaddlePaddle for Raspberry Pi 3
- Build PaddlePaddle for PX2
- Build the PaddlePaddle mobile inference library with minimum size
Demo
Deployment optimization methods
- Merge batch normalization layers before deploying the model to mobile
- Compress the model before deploying it to mobile
- Merge multiple model parameter files into one file
- Deploy an int8 model for mobile inference with PaddlePaddle
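The batch normalization merge above folds the BN parameters into the preceding layer's weights, so inference runs one layer instead of two. A minimal NumPy sketch of the idea (illustrative only; the variable names are hypothetical and this is not PaddlePaddle's API):

```python
import numpy as np

# A layer followed by batch norm computes
#   y = gamma * (W @ x + b - mean) / sqrt(var + eps) + beta,
# which is equivalent to a single layer with folded parameters:
#   W' = (gamma / sqrt(var + eps)) * W
#   b' = gamma * (b - mean) / sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.standard_normal(4)              # input features
W = rng.standard_normal((3, 4))         # layer weights (3 output channels)
b = rng.standard_normal(3)              # layer bias
gamma = rng.standard_normal(3)          # BN scale
beta = rng.standard_normal(3)           # BN shift
mean = rng.standard_normal(3)           # BN running mean
var = rng.random(3)                     # BN running variance
eps = 1e-5

scale = gamma / np.sqrt(var + eps)
y_bn = scale * (W @ x + b - mean) + beta   # layer + batch norm

W_folded = scale[:, None] * W
b_folded = scale * (b - mean) + beta
y_folded = W_folded @ x + b_folded         # merged layer only

assert np.allclose(y_bn, y_folded)
```

The merged model produces identical outputs while saving the per-inference cost and the separate BN parameter storage, which is why it is done before mobile deployment.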
Model compression
PaddlePaddle mobile benchmark
- Benchmark of MobileNet
- Benchmark of ENet
- Benchmark of DepthwiseConvolution in PaddlePaddle