xyz1o2 / screenshot-to-code

Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)

Home Page: https://screenshottocode.com

screenshot-to-code

This simple app converts a screenshot to code (HTML + Tailwind CSS, React, Bootstrap, or Vue). It uses GPT-4 Vision (or Claude 3) to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website.

🆕 Now supporting Claude 3!

Demo video: Youtube.Clone.mp4

See the Examples section below for more demos.

Follow me on Twitter for updates.

🚀 Try It Out!

🆕 Try it here (bring your own OpenAI key; it must have access to GPT-4 Vision. See the FAQ section below for details). Or see Getting Started below for local install instructions.

🌟 Recent Updates

  • Mar 8 - 🔥🎉🎁 Video-to-app: turn videos/screen recordings into functional apps
  • Mar 5 - Added support for Claude 3 Sonnet (as capable as or better than GPT-4 Vision, and faster!)

🛠 Getting Started

The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API.

Run the backend (I use Poetry for package management - pip install poetry if you don't have it):

cd backend
echo "OPENAI_API_KEY=sk-your-key" > .env
poetry install
poetry shell
poetry run uvicorn main:app --reload --port 7001

Run the frontend:

cd frontend
yarn
yarn dev

Open http://localhost:5173 to use the app.

If you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in frontend/.env.local.
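For example, to point the frontend at a backend on port 7002, frontend/.env.local could contain a line like the following (the ws:// URL format is an assumption based on the variable name):

VITE_WS_BACKEND_URL=ws://127.0.0.1:7002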

For debugging purposes, if you don't want to waste GPT-4 Vision credits, you can run the backend in mock mode (which streams a pre-recorded response):

MOCK=true poetry run uvicorn main:app --reload --port 7001

Video to app (experimental)

Demo video: output3.mp4

Record yourself using any website, app, or even a Figma prototype, then drag and drop the video in; in a few minutes, you get a functional, similar-looking app.

You need an Anthropic API key for this functionality. Follow instructions here.

Configuration

  • You can configure the OpenAI base URL if you need to use a proxy: set OPENAI_BASE_URL in backend/.env or directly in the UI in the settings dialog.
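For example, in backend/.env (the URL below is a hypothetical placeholder; substitute your own OpenAI-compatible proxy endpoint):

OPENAI_BASE_URL=https://my-proxy.example.com/v1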

Using Claude 3

We recently added support for Claude 3 Sonnet. It performs well, on par with or better than GPT-4 Vision for many inputs, and it tends to be faster.

  1. Add an ANTHROPIC_API_KEY env var to backend/.env with your API key from Anthropic (see the example below)
  2. When using the front-end, select "Claude 3 Sonnet" from the model dropdown
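After step 1, backend/.env would contain both keys (placeholder values shown):

OPENAI_API_KEY=sk-your-key
ANTHROPIC_API_KEY=your-anthropic-key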

Docker

If you have Docker installed on your system, in the root directory, run:

echo "OPENAI_API_KEY=sk-your-key" > .env
docker-compose up -d --build

The app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup, as file changes won't trigger a rebuild.
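To stop the containers, standard Docker Compose usage applies:

docker-compose down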

🙋‍♂️ FAQs

📚 Examples

NYTimes

Original and replica screenshots (Nov 20, 2023)

Instagram page (with non-Taylor Swift pics)

Demo video: instagram.taylor.swift.take.1.mp4

Hacker News, but it gets the colors wrong at first, so we nudge it

Demo video: hacker.news.with.edits.mp4

🌍 Hosted Version

🆕 Try it here (bring your own OpenAI key; it must have access to GPT-4 Vision. See the FAQ section for details). Or see Getting Started for local install instructions.

"Buy Me A Coffee"

About

Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)

https://screenshottocode.com

License: MIT License


Languages

  • Python 54.5%
  • TypeScript 42.7%
  • JavaScript 1.0%
  • CSS 0.8%
  • HTML 0.6%
  • Dockerfile 0.3%
  • Shell 0.1%