This Streamlit application provides a user-friendly, chat-style interface to the Code Llama 70B Instruct model. It is designed to help users by answering Python-related questions and carrying out coding tasks they request.
- Interactive chat interface for querying the Code Llama 70B Instruct model.
- Customizable parameters for model responses.
- Session history to keep track of the conversation.
- Responsive layout suitable for various screen sizes.
To run this application on your local machine, follow these steps:
1. Clone the repository:

   ```bash
   git clone [your-repository-link]
   cd [repository-name]
   ```
2. Install the required packages:

   Ensure that you have Python installed on your system, then install the dependencies with:

   ```bash
   pip install -r requirements.txt
   ```
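The actual `requirements.txt` in the repository is authoritative; a minimal version for an app like this would likely contain just the two direct dependencies:

```text
streamlit
replicate
```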
3. Set up the API token:

   The app requires a valid Replicate API token to interact with the Code Llama 70B Instruct model. Add your API token to the `.streamlit/secrets.toml` file as follows:

   ```toml
   # .streamlit/secrets.toml
   REPLICATE_API_TOKEN = "your_replicate_token"
   ```
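As an alternative to `secrets.toml`, the Replicate Python client typically reads the token from the `REPLICATE_API_TOKEN` environment variable, so you can export it in your shell before launching the app (shown here with the placeholder value; substitute your real token):

```shell
# Alternative to .streamlit/secrets.toml: export the token so the
# replicate client can pick it up from the environment.
export REPLICATE_API_TOKEN="your_replicate_token"
```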
4. Run the app:

   Start the Streamlit server by executing:

   ```bash
   streamlit run streamlit_app.py
   ```
The app should now be running on your local server (usually http://localhost:8501).
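If port 8501 is already in use, the port can be changed through Streamlit's standard configuration file (this is a general Streamlit setting, not specific to this app; 8502 below is just an example):

```toml
# .streamlit/config.toml
[server]
port = 8502
```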
After launching the app, you can interact with it as follows:
- Enter your Python-related queries or tasks into the chat input box.
- Customize the model behavior using the Customize expander, where you can adjust the max tokens to return and the system prompt.
- View the model's responses in a chat-like format, with the history of the conversation maintained within the session.
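The session history described above can be pictured as a list of role/content messages that gets flattened into a single prompt for the model. The sketch below is a hypothetical illustration of that idea, not the app's actual code; the helper name `build_prompt` and the message format are assumptions:

```python
# Hypothetical sketch: flatten a chat history into one prompt string.
# The real app's prompt format for Code Llama may differ.

def build_prompt(system_prompt, history):
    """Join a system prompt and alternating user/assistant turns."""
    lines = [f"System: {system_prompt}"]
    for message in history:
        role = "User" if message["role"] == "user" else "Assistant"
        lines.append(f"{role}: {message['content']}")
    lines.append("Assistant:")  # cue the model to produce the next reply
    return "\n".join(lines)

history = [
    {"role": "user", "content": "Reverse a list in Python?"},
    {"role": "assistant", "content": "Use my_list[::-1]."},
]
prompt = build_prompt("You are a helpful Python assistant.", history)
```

Keeping the history as structured messages (rather than one growing string) makes it easy to trim old turns when the prompt approaches the model's context limit.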