This project demonstrates how to identify faces using Apple's new Vision, CoreML, and ARKit APIs. The code is written in Objective-C.
- Xcode 9
- iPhone 6s or newer
- A pre-trained CoreML model
- Camera Hacking: Since ARKit uses a fixed-lens camera to render the screen, the camera cannot auto-focus on its own. To tune the camera you need access to the `AVCaptureDevice` or the `AVCaptureSession`, but ARKit exposes neither, as described here. I solved this by reading the private `availableSensors` property of `ARSession` at runtime and finding the `ARImageSensor` object, which holds references to the `AVCaptureDevice` and `AVCaptureSession` instances (see the first sketch after this list).
- Machine Learning: To identify different people we need a pre-trained CoreML model. You can use `caffe` or another neural network framework to train your own. For this demo, I use Microsoft's Custom Vision Service, which is free and makes it convenient to train on images online; you can then download the result in CoreML model format (see the second sketch after this list).
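A minimal sketch of the camera hack described above, assuming the private `availableSensors` property is readable via key-value coding and that `ARImageSensor` keeps its device under a `captureDevice` key (that key name, and the choice of focus mode, are my assumptions; private API like this can break between iOS releases and will not pass App Store review):

```objective-c
#import <ARKit/ARKit.h>
#import <AVFoundation/AVFoundation.h>

// Dig the AVCaptureDevice out of a running ARSession and re-enable
// continuous auto-focus. `availableSensors` and ARImageSensor are
// private API; the `captureDevice` key is an assumption about
// ARImageSensor's internals.
static void EnableAutoFocus(ARSession *session) {
    @try {
        NSArray *sensors = [session valueForKey:@"availableSensors"];
        for (id sensor in sensors) {
            if (![sensor isKindOfClass:NSClassFromString(@"ARImageSensor")]) {
                continue;
            }
            AVCaptureDevice *device = [sensor valueForKey:@"captureDevice"];
            NSError *error = nil;
            if ([device lockForConfiguration:&error]) {
                if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
                    device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
                }
                [device unlockForConfiguration];
            }
        }
    } @catch (NSException *exception) {
        // Key-value coding throws if Apple renames the private property.
        NSLog(@"Camera hack failed: %@", exception);
    }
}
```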
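Once the Custom Vision model is downloaded and added to Xcode, classifying a camera frame is a few lines of Vision boilerplate. A minimal sketch, assuming Xcode generated a `FaceClassifier` class from the .mlmodel file (the class name and labels depend on your own model):

```objective-c
#import <Vision/Vision.h>
#import "FaceClassifier.h" // class generated by Xcode from the .mlmodel

// Run the Custom Vision classifier on one camera frame.
static void ClassifyFrame(CVPixelBufferRef pixelBuffer) {
    NSError *error = nil;
    VNCoreMLModel *model =
        [VNCoreMLModel modelForMLModel:[[FaceClassifier new] model] error:&error];
    if (!model) {
        NSLog(@"Failed to load model: %@", error);
        return;
    }

    VNCoreMLRequest *request = [[VNCoreMLRequest alloc] initWithModel:model
        completionHandler:^(VNRequest *req, NSError *err) {
            // Classification results come back sorted by confidence,
            // so the first observation is the best guess.
            VNClassificationObservation *top = req.results.firstObject;
            NSLog(@"Identified %@ (confidence %.2f)", top.identifier, top.confidence);
        }];

    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer options:@{}];
    [handler performRequests:@[request] error:&error];
}
```

In an `ARSessionDelegate`, you can feed this with `frame.capturedImage` from `session:didUpdateFrame:`, which is already a `CVPixelBufferRef`.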
- Link to CoreML documentation
- Link to Apple WWDC videos, samples, and materials on CoreML and the Vision framework
- Link to Custom Vision Service Documentation