There are 26 repositories under the coreml-vision topic.
Try CoreML models on multiple images and videos easily and quickly
CoreML Vision Text Data & Animal Detector iOS App
Face recognition application using FaceNet and CoreML.
Live Text Command Line Tool
This app recognises three hand signs (fist, high five, and victory hand; essentially rock, paper, scissors) from a live camera feed. It uses a HandSigns.mlmodel trained with Microsoft's Custom Vision service.
This project shows how to use CoreML and Vision with a pre-trained deep learning SSD (Single Shot MultiBox Detector) model. There are many variations of SSD; this one uses MobileNetV2 as the backbone and separable convolutions for the SSD layers, a combination known as SSDLite. The app can find the locations of several different types of objects in an image: detections are described by bounding boxes, and for each bounding box the model also predicts a class.
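The detection flow described above (run a CoreML model through Vision, then read back bounding boxes and class labels) can be sketched as follows. This is a minimal sketch, not the repo's actual code; the model class name `MobileNetV2_SSDLite` is an assumption standing in for whatever compiled model the project bundles.

```swift
import Vision
import CoreML
import UIKit

// Minimal sketch, assuming a compiled SSDLite MobileNetV2 model whose
// generated class is named MobileNetV2_SSDLite (name is hypothetical).
func detectObjects(in image: CGImage) throws {
    let mlModel = try MobileNetV2_SSDLite(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // Each observation carries a bounding box (normalized image
            // coordinates) and a ranked list of class labels.
            if let best = observation.labels.first {
                print("\(best.identifier) (\(best.confidence)) at \(observation.boundingBox)")
            }
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}
```

For a live camera feed, the same request would be run per frame against the `CVPixelBuffer` delivered by `AVCaptureVideoDataOutput`.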
Recipe recognition with CreateML
An iOS app that shows soundproof mats laid on the floor and lets users order samples.
An iOS application that implements the rock-paper-scissors game.
Core ML and Vision object classifier with a lightweight trained model. The model is trained and tested with Create ML straight from Xcode Playgrounds with the dataset I provided.
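Training a classifier with Create ML from a Playground, as this repo describes, can be sketched like this. The dataset paths are placeholders for illustration, not the repo's actual layout; Create ML requires macOS.

```swift
import CreateML
import Foundation

// Minimal sketch of training an image classifier in a macOS Playground.
// Directory paths are hypothetical; Create ML expects one subdirectory
// per class label, each containing that class's images.
let trainingDir = URL(fileURLWithPath: "/path/to/dataset/train")
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Evaluate on a held-out set, then export the model for use in the app.
let testDir = URL(fileURLWithPath: "/path/to/dataset/test")
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testDir))
print("Test classification error: \(evaluation.classificationError)")

try classifier.write(to: URL(fileURLWithPath: "/path/to/ObjectClassifier.mlmodel"))
```

The exported `.mlmodel` can then be dragged into an Xcode project and driven through Vision with a `VNCoreMLRequest`.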
Play MIDI chords using Apple's Vision hand pose ML model.
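Detecting a hand pose with Vision, which an app like this would then map to MIDI chords, can be sketched as below. This is a minimal sketch of the detection half only; the pose-to-chord mapping is app-specific and omitted.

```swift
import Vision
import CoreGraphics

// Minimal sketch: detect one hand and read a fingertip position using
// Apple's built-in VNDetectHumanHandPoseRequest (iOS 14+ / macOS 11+).
func detectHandPose(in image: CGImage) throws {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return }
    // Joint locations are returned in normalized image coordinates.
    let joints = try observation.recognizedPoints(.all)
    if let indexTip = joints[.indexTip], indexTip.confidence > 0.3 {
        print("Index fingertip at (\(indexTip.location.x), \(indexTip.location.y))")
    }
}
```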
Rhythm trainer based on the user snapping their fingers. The Xcode project is the WWDC23 template; the final submission playground is in the zip.
Real-time camera object detection with machine learning in Swift. A basic introduction to Core ML, Vision, and ARKit.
Sample project for on-device text recognition
Combining the power of MobileNetV2 with the privacy of on-device learning. Benefit from real-time updates and efficient image processing, all while ensuring your data remains securely on your device. Experience precision, speed, and trust with PixeLearner.
Simple Swift projects to get started with iOS app development
A real-time iOS app for recognizing the alphabet in Libras (Brazilian Sign Language).
An iOS app that detects Kacchi Biriyani in images.
Simple application using Vision framework to detect the main object in an image.
Take a photo of your food and let the code guess what it is.
iOS app that guesses what you have drawn. It is built with SwiftUI and CoreML; the model is trained on Google's Quick Draw dataset.
Machine Learning iOS applications.
Rewrite of the SeeFood app using CoreML and Vision.
A SwiftUI app using CoreML and Vision to extract text from images and answer queries about the content. Under the hood it uses Google's BERT model converted to a CoreML model.
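The text-extraction half of an app like this can be sketched with Vision's built-in text recognizer. This is a minimal sketch under that assumption; feeding the extracted text into a CoreML-converted BERT model for question answering is a separate step not shown here.

```swift
import Vision
import CoreGraphics

// Minimal sketch: recognize text in an image with VNRecognizeTextRequest
// and return one string per detected text region.
func extractText(from image: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    // Take the top recognition candidate for each detected region.
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```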
Detect corn (sweet corn) or cone (road cones, traffic cones or pylons).
Demo app for gender classification of facial images using GenderNet, Vision and CoreML.