NCNN deployment on mobile. Supported models: YOLOv5s, YOLOv4-tiny, MobileNetV2-YOLOv3-nano, Simple-Pose, Yolact, ChineseOCR-lite, ENet, Landmark106, DBFace, MBNv2-FCN and MBNv3-Seg-small, all running on the camera.
iOS:
Xcode 12.4
macOS 11.2.3
iPhone 6s Plus, iOS 13.5.1
Android:
Android Studio 4.1
Win10 20H2
CPU: Qualcomm Snapdragon 710, GPU: Adreno 616
iOS:
Select the model to be tested directly in the app interface.
Android:
Select the model to be tested directly in the app interface.
Copy the .param and .bin files from "android_YOLOV5_NCNN\app\src\main\assets" to "iOS_YOLOv5NCNN\YOLOv5NCNN\res".
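For reference, the sketch below shows the generic way a copied .param/.bin pair is loaded with ncnn's C++ API; the file names, the 640x640 input size, and the blob names ("images"/"output") are placeholders, not the exact values used by this project.

```cpp
#include "net.h"  // ncnn

// Minimal sketch, assuming ncnn's standard Net/Extractor API.
// File names, the 640x640 input size, and blob names are placeholders.
static bool run_model(const unsigned char* rgb, int img_w, int img_h)
{
    ncnn::Net net;
    if (net.load_param("yolov5s.param") != 0)   // copied into assets/res
        return false;
    if (net.load_model("yolov5s.bin") != 0)
        return false;

    // Wrap the RGB buffer in an ncnn::Mat resized to the network input.
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        rgb, ncnn::Mat::PIXEL_RGB, img_w, img_h, 640, 640);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("images", in);     // placeholder input blob name
    ncnn::Mat out;
    ex.extract("output", out);  // placeholder output blob name
    return true;
}
```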
If it prompts that net.h cannot be found, download ncnn.framework (20201208) from the official ncnn site or compile it yourself, and replace it in the project. If opencv2.framework (4.3.0) is used, you also need to download it again and replace it in the project.
The default libraries used on iOS do not include Vulkan or bitcode.
Normally you need to re-download ncnn.framework/glslang.framework/openmp.framework/opencv2.framework and replace them in the project.
For the Vulkan configuration, please refer to the general setup mentioned in the Issues.
Android:
Due to factors such as phone performance and image size, FPS varies greatly across devices. This project mainly demonstrates the use of the NCNN framework; for converting specific models, see the official NCNN conversion tutorials.
Because the OpenCV library is too large, only the arm64-v8a and armeabi-v7a ABIs are kept. If you need other ABIs, download them from the official OpenCV site.
ncnn currently uses the Vulkan build, and GPU acceleration needs to be turned on before the model is loaded; this project does not turn it on (see the sketch below). If you want to use the non-Vulkan ncnn build instead, you need to modify the CMakeLists.txt configuration.
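For reference, here is a minimal sketch of enabling ncnn's Vulkan GPU acceleration before loading a model; it assumes the Vulkan build of ncnn, and the function name and paths are placeholders.

```cpp
#include "net.h"
#include "gpu.h"  // ncnn::get_gpu_count(), available in the Vulkan build only

// Minimal sketch, assuming the Vulkan build of ncnn.
// use_vulkan_compute must be set before load_param()/load_model(),
// otherwise it has no effect.
void init_net(ncnn::Net& net, const char* param_path, const char* bin_path)
{
    if (ncnn::get_gpu_count() > 0)
        net.opt.use_vulkan_compute = true;  // enable GPU acceleration

    net.load_param(param_path);
    net.load_model(bin_path);
}
```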
Different Android Studio versions may run into various compilation problems. If a compilation error cannot be resolved, try AS 4.0 or higher.
ncnn has been updated to a newer version, and the project now uses ncnn's official CMake import method.
This project is mainly about practicing the use and deployment of various models, and does not do much in terms of speed. If you need more speed, you can obtain data such as YUV directly as input, or use textures, OpenGL, and similar methods for data input, to reduce intermediate data transfer and conversion; a sketch of the YUV approach follows.
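As one example of cutting out intermediate conversions, the sketch below feeds an NV21 (YUV420SP) camera frame to ncnn directly on the native side; it assumes ncnn's yuv420sp2rgb helper, and the buffer and size names are placeholders for the caller's data.

```cpp
#include <vector>
#include "mat.h"  // ncnn::Mat, ncnn::yuv420sp2rgb

// Minimal sketch, assuming an NV21 (YUV420SP) frame passed straight from
// the camera; nv21, frame_w and frame_h are placeholders for the caller's data.
ncnn::Mat frame_to_input(const unsigned char* nv21, int frame_w, int frame_h,
                         int target_w, int target_h)
{
    // Convert YUV420SP to RGB once, without going through a Java Bitmap.
    std::vector<unsigned char> rgb(frame_w * frame_h * 3);
    ncnn::yuv420sp2rgb(nv21, frame_w, frame_h, rgb.data());

    // Build the network input directly from the RGB buffer.
    return ncnn::Mat::from_pixels_resize(
        rgb.data(), ncnn::Mat::PIXEL_RGB, frame_w, frame_h, target_w, target_h);
}
```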
Convert locally (the model will not be uploaded): xxxx -> ncnn