jfcherng / fastapi-streaming-response

I struggled to figure out how to make LLM generation work in a ChatGPT-like UI. Here's a working example!


Fastapi-Streamingresponse-Demo

FastAPI StreamingResponse demo

I had to do some digging to figure out how to properly visualize LLM-generated text in a ChatGPT-like UI. Here's a minimal example of how it could work.

Installation

Install dependencies using Poetry:

cd app
poetry install

Running locally

./start-wrapper.sh
# or
fastapi-streamingresponse-demo start-service --port 8000
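Once the service is running, a browser client can consume the response incrementally with the Fetch API's `ReadableStream` reader. A minimal sketch — the `/generate` path is an assumption for illustration, not necessarily this repo's route:

```javascript
// Read a streaming text response chunk by chunk and hand each
// decoded piece to a callback (e.g. to append it to the chat UI).
async function readTextStream(response, onChunk) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters intact across chunks.
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage in the browser (hypothetical endpoint path):
// const response = await fetch("http://localhost:8000/generate");
// await readTextStream(response, (chunk) => { output.textContent += chunk; });
```

This is what gives the ChatGPT-like typing effect: each chunk is rendered as it arrives instead of waiting for the full response.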

Development

Add instructions for future devs here.

Run pre-commit (if hooks are not installed) before raising a merge request.

pre-commit run --all-files

Every once in a while, update the pre-commit config:

pre-commit autoupdate

About


License: MIT License


Languages

Python 75.1%, JavaScript 14.5%, HTML 9.1%, Shell 1.4%