FastAPI StreamingResponse demo
I had to do some digging to figure out how to properly visualize LLM-generated text in a ChatGPT-like UI. Here's a minimal example of how it can work.
Install the dependencies with Poetry, then start the service:
cd app
poetry install
./start-wrapper.sh
# or
fastapi-streamingresponse-demo start-service --port 8000
Instructions for future developers
Run pre-commit (if it is not already installed as a git hook) before raising a merge request:
pre-commit run --all-files
Every once in a while, update the pre-commit config:
pre-commit autoupdate
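For reference, a `.pre-commit-config.yaml` for such a project might look like the sketch below. The hook selection is an assumption for illustration, not the config actually used in this repo; `pre-commit autoupdate` bumps the `rev` pins in this file.

```yaml
# Sketch of a minimal .pre-commit-config.yaml (assumed hooks,
# not taken from this repo).
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0  # pin updated by `pre-commit autoupdate`
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```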