This repository contains sample code for the Google Cloud Vision API.
The samples are organized by language and mobile platform.
This sample identifies a landmark within an image stored on Google Cloud Storage.
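As a minimal sketch of what that request looks like with the google-cloud-vision Python client (the bucket and file names below are placeholders):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
# Reference an image already stored in Cloud Storage (placeholder URI).
image = vision.Image(source=vision.ImageSource(image_uri="gs://my-bucket/eiffel.jpg"))

response = client.landmark_detection(image=image)
for landmark in response.landmark_annotations:
    # Each annotation carries a description and candidate coordinates.
    latlng = landmark.locations[0].lat_lng
    print(landmark.description, latlng.latitude, latlng.longitude)
```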
See the face detection tutorial in the docs.
See the label detection tutorial in the docs.
Awwvision is a Kubernetes sample that uses the Cloud Vision API to classify (label) images from Reddit's /r/aww subreddit and display the labelled results in a web application.
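The classification step boils down to one label detection request per image. Here is a hedged sketch using the Python client; the local file read is a stand-in for Awwvision's actual Reddit fetcher:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
# Placeholder: Awwvision fetches these bytes from Reddit posts instead.
with open("aww.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each label comes with a confidence score between 0 and 1.
    print(f"{label.description}: {label.score:.2f}")
```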
This sample uses TEXT_DETECTION Vision API requests to build an inverted index from the stemmed words found in the images, and stores that index in a Redis database. The resulting index can be queried to find images that match a given set of words, and to list text that was found in each matching image.
For stopword removal and stemming, the Python example uses the nltk (Natural Language Toolkit) library; the Java example uses the OpenNLP library.
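The sketch below shows one plausible shape for the indexing flow, assuming a local Redis instance; the helper name `index_image` and the image URIs are hypothetical, not the sample's actual code:

```python
import redis
from google.cloud import vision
from nltk.corpus import stopwords  # requires: nltk.download("stopwords")
from nltk.stem import PorterStemmer

def index_image(image_uri, r):
    """Hypothetical helper: OCR one image and add its words to the index."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=image_uri))
    response = client.text_detection(image=image)
    # The first annotation, if present, is the full detected text block.
    text = response.text_annotations[0].description if response.text_annotations else ""

    stemmer = PorterStemmer()
    stop = set(stopwords.words("english"))
    for word in text.split():
        word = word.lower().strip(".,!?\"'")
        if word and word not in stop:
            # Inverted index: stemmed word -> set of image URIs.
            r.sadd(stemmer.stem(word), image_uri)

r = redis.Redis()
index_image("gs://my-bucket/sign.jpg", r)  # placeholder URI
# Query: images whose text contains all of the given words.
stemmer = PorterStemmer()
print(r.sinter([stemmer.stem(w) for w in ["hello", "world"]]))
```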
This simple single-activity sample shows you how to make a call to the Cloud Vision API with an image picked from your device’s gallery.
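The Android code itself is in Java, but the underlying REST request is easy to illustrate. Below is a hedged Python sketch of the same `images:annotate` call; the API key and image path are placeholders:

```python
import base64
import json

import requests

# Read a local image and base64-encode it, as the Android sample does
# with the picked gallery image.
with open("photo.jpg", "rb") as f:  # placeholder path
    content = base64.b64encode(f.read()).decode("utf-8")

body = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
}]}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": "YOUR_API_KEY"},  # placeholder API key
    json=body,
)
print(json.dumps(resp.json(), indent=2))
```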
The Swift and Objective-C versions of this app use the Vision API to run label and face detection on an image from the device's photo library. The resulting labels and face metadata from the API response are displayed in the UI.
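To give a feel for the metadata those apps display, here is a hedged Python sketch of equivalent label and face requests (the file path is a placeholder; the apps themselves make these calls from iOS code):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:  # placeholder image path
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
faces = client.face_detection(image=image).face_annotations

print([label.description for label in labels])
for face in faces:
    # Likelihood fields are enums such as VERY_LIKELY or UNLIKELY.
    print(face.joy_likelihood, face.sorrow_likelihood, face.detection_confidence)
```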
Check out the Swift or Objective-C READMEs for language-specific getting-started instructions.