awesome-openai-vision-api-experiments

(Demo video: ssstwitter.com_1699453288672.mp4)

👋 hello

A set of examples showing how to use the OpenAI vision API to run inference on images, video files and webcam streams.

💻 Install

```bash
# create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# install dependencies
pip install -r requirements.txt
```

🔑 Keys

Experimenting with the OpenAI API requires an API key. You can generate one in your OpenAI account dashboard.
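To avoid hard-coding the key, you can read it from an environment variable. A minimal sketch, assuming the official `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable you have set yourself:

```python
import os

from openai import OpenAI

# Read the API key from the environment rather than hard-coding it.
# Assumes you have run: export OPENAI_API_KEY="sk-..."
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```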

🧪 Experiments

| Experiment | Description | Code | HF Space |
| --- | --- | --- | --- |
| WebcamGPT | chat with video stream | GitHub | HuggingFace |
| Grounding DINO + GPT-4V | label images with Grounding DINO and GPT-4V | GitHub | |
| GPT-4V Classification | classify images with GPT-4V | GitHub | |
| GPT-4V vs. CLIP | label images with Grounding DINO and Autodistill | GitHub | |
| Hot Dog or not Hot Dog | simple image classification | GitHub | HuggingFace |
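At their core, these experiments send an image to the vision model through the chat completions endpoint. A minimal sketch of such a call, assuming the `openai` Python package (v1+), the `gpt-4-vision-preview` model name, and a local `image.jpg` (both the model name and the file path are placeholders, not taken from the repository's code):

```python
import base64
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Encode a local image as a base64 data URL so it can be sent inline.
with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```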

🦸 Contribution

I would love your help in making this repository even better! Whether you want to correct a typo, add a new experiment, or suggest an improvement, feel free to open an issue or pull request.
