Norod / writing-with-hebrew-gpt_neo

Reproducing "Writing with Transformer" demo, using aitextgen/FastAPI in backend, Quill/React in frontend


Writing with GPT-2

Development

Python Backend

Make sure you are in the backend folder:

cd backend/

Install a virtual environment:

# If using venv
python3 -m venv venv
. venv/bin/activate

# If using conda
conda create -n write-with-gpt2 python=3.7
conda activate write-with-gpt2

# On Windows, I use Conda to install PyTorch separately
conda install pytorch cpuonly -c pytorch

# When environment is activated
pip install -r requirements.txt
python aitextgen_app.py

To run in hot-reloading mode:

uvicorn aitextgen_app:app --host 0.0.0.0 --reload

To run with multiple workers:

uvicorn aitextgen_app:app --host 0.0.0.0 --workers 4

The app runs on http://localhost:8000. You can consult the interactive API docs at http://localhost:8000/docs.
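
As a quick smoke test you can call the API from Python. The endpoint path and payload below are assumptions for illustration only; check http://localhost:8000/docs for the actual route and schema.

# Hypothetical request against the backend; verify the real route
# and request body at http://localhost:8000/docs before relying on this.
import requests

resp = requests.post(
    "http://localhost:8000/generate",                  # assumed endpoint
    json={"prompt": "Hello world", "max_length": 50},  # assumed payload
)
resp.raise_for_status()
print(resp.json())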

Configuration is done via environment variables or a .env file. The available variables are:

  • MODEL_NAME:
    • to use a custom model, point it at the location of the pytorch_model.bin file. You will also need to pass the config.json through CONFIG_FILE.
    • otherwise, the name of a model from Hugging Face's model repository; defaults to distilgpt2.
  • CONFIG_FILE: path to the JSON file describing the model architecture.
  • USE_GPU: set to True to generate text on the GPU.
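
For example, a .env file pointing at a locally converted model could look like this (the paths are illustrative):

MODEL_NAME=pytorch/pytorch_model.bin
CONFIG_FILE=pytorch/config.json
USE_GPU=False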

From gpt-2-simple to PyTorch

To convert a gpt-2-simple model to PyTorch, see Importing from gpt-2-simple:

transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json

This will put a pytorch_model.bin and config.json in the pytorch folder, which is what you'll need to reference in the .env file to load the model.
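
To sanity-check the conversion, you can load the folder directly with transformers (a minimal sketch, assuming the pytorch output folder from the command above):

# Load the converted checkpoint to make sure it is readable.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("pytorch")  # reads pytorch_model.bin + config.json
print(sum(p.numel() for p in model.parameters()), "parameters loaded")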

Running the gpt-2-simple version

The older gpt-2-simple version has been added back in backend/gpt2_app.

To download a model:

import gpt_2_simple as gpt2
gpt2.download_gpt2(model_name='124M')

To run the app:

set MODEL_NAME=124M
uvicorn gpt2_app:app --host 0.0.0.0

Set MODEL_NAME to any model folder inside models/, or edit the .env file.

Streamlit Debug

You can run the Streamlit app to debug the model.

streamlit run st_app.py
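
For reference, a debug app of this kind can be as small as the following sketch (illustrative only, not the repo's actual st_app.py; it uses transformers' pipeline with distilgpt2 as a stand-in model):

# Minimal Streamlit sketch of a text-generation debug UI (hypothetical).
import streamlit as st
from transformers import pipeline

st.title("Model debug")
prompt = st.text_area("Prompt", "Hello")
max_length = st.slider("Max length", 10, 200, 50)

if st.button("Generate"):
    generator = pipeline("text-generation", model="distilgpt2")
    result = generator(prompt, max_length=max_length)
    st.write(result[0]["generated_text"])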

React Frontend

Make sure you are in the frontend folder and that the backend API is running.

cd frontend/
npm install # Install npm dependencies
npm run start # Start Webpack dev server

The web app is now available at http://localhost:3000.

Building the frontend

To create a production build:

npm run build

Your built React app will now be statically served by FastAPI at http://localhost:8000/app, alongside the other API routes. You no longer need to run the Webpack dev server.
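
Serving a static build from FastAPI is typically a one-line StaticFiles mount; here is a sketch of how such wiring might look (the build directory path is an assumption):

# Hypothetical wiring; the backend already does the equivalent.
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()
# html=True serves index.html as the fallback, so client-side routes work.
app.mount("/app", StaticFiles(directory="frontend/build", html=True), name="app")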

Using GPU

Miniconda/Anaconda is recommended on Windows.

Conda command: conda install pytorch cudatoolkit=10.2 -c pytorch

If you install manually, you can check the currently installed CUDA toolkit version with nvcc --version. Once the CUDA toolkit is installed, you can verify that the driver sees your GPU by running nvidia-smi.
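
Once PyTorch is installed, you can confirm from Python that it actually sees the GPU:

import torch

print(torch.cuda.is_available())  # True if a usable CUDA device is found
print(torch.version.cuda)         # CUDA version PyTorch was built against
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU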

Beware: after installing CUDA, it seems you shouldn't update the GPU driver through GeForce Experience, or else you may have to reinstall the CUDA toolkit.


License: MIT

