This is a basic skeleton for building an interface to OpenAI's GPT-3.5 language model. The code lets you hold a conversation with the model by sending prompts and receiving responses.
- Python (version 3.6 or higher)
- OpenAI Python package (`openai`)
- `python-dotenv` Python package
- Install the required Python packages by running the following command:

  ```shell
  pip install openai python-dotenv
  ```

- Obtain an API key from OpenAI. Visit the OpenAI website for more information on how to get an API key.
- Create a file named `.env` in the same directory as the script and add the following line:

  ```
  OpenAIKey=YOUR_API_KEY
  ```

  Replace `YOUR_API_KEY` with your actual OpenAI API key.
- Import the required modules:

  ```python
  import os
  from dotenv import load_dotenv
  import openai
  ```

- Load the API key from the `.env` file:

  ```python
  load_dotenv()
  openai.api_key = os.getenv('OpenAIKey')
  ```
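For context, `load_dotenv()` simply reads `KEY=VALUE` pairs from the `.env` file into the process environment, where `os.getenv()` can find them. A minimal stdlib-only sketch of that behavior (an illustration, not the actual `python-dotenv` implementation):

```python
import os

def load_env_file(path=".env"):
    """Simplified sketch: read KEY=VALUE lines from a file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()
```

The real library handles more cases (quoting, variable expansion, not overriding existing variables), which is why the tutorial installs `python-dotenv` rather than hand-rolling this.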
- Define the `chat` function:

  ```python
  def chat(prompt):
      """
      Function for generating a chat-based completion using the OpenAI API.

      Args:
          prompt (str): The user's message or prompt.

      Returns:
          str: The assistant's reply.
      """
      response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo-0613",
          messages=[
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": prompt}
          ]
      )
      reply = response['choices'][0]['message']['content']
      return reply
  ```
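To see what the extraction line at the end of `chat` is doing, here is the general shape of a chat completion response from the pre-1.0 `openai` library, written out as a plain dict. The values below are invented for illustration, not real API output:

```python
# Illustrative shape of a ChatCompletion response (values made up)
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I help you today?",
            },
            "finish_reason": "stop",
        }
    ],
}

# The same extraction chat() performs: first choice, then the message text
reply = sample_response["choices"][0]["message"]["content"]
print(reply)  # Hello! How can I help you today?
```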
- Start the conversation loop:

  ```python
  while True:
      user_input = input("User: ")
      response = chat(user_input)
      print("Assistant:", response)
  ```

- Run the script and start interacting with the assistant.
- The script begins by importing the necessary modules: `os` for working with environment variables, `dotenv` for loading the API key from the `.env` file, and `openai` for using the OpenAI API.
- The API key is loaded from the `.env` file using the `load_dotenv()` function and assigned to the `openai.api_key` variable.
- The `chat` function is defined, which takes a user's message or prompt as input and returns the assistant's reply. The function uses the `openai.ChatCompletion.create()` method to generate a chat-based completion based on the provided prompt and messages. The `model` parameter specifies the version of the GPT-3.5 model to use, and the `messages` parameter is a list of messages exchanged between the system and the user. The function extracts the assistant's reply from the API response and returns it.
- The script enters a loop where it prompts the user for input, calls the `chat` function to get the assistant's reply, and then prints the assistant's reply.
- The loop continues indefinitely until the program is terminated.
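Because the loop never breaks, you have to kill the process (e.g. with Ctrl+C) to stop the script. One common variant is to add an explicit exit command. The sketch below does this with the completion function passed in as a parameter (`get_reply` is a hypothetical stand-in, so the loop logic can be exercised without calling the API):

```python
def run_conversation(get_reply, read_input=input, write=print):
    """Prompt the user repeatedly; typing 'quit' ends the conversation."""
    while True:
        user_input = read_input("User: ")
        if user_input.strip().lower() == "quit":
            write("Assistant: Goodbye!")
            break
        write("Assistant:", get_reply(user_input))
```

With the script above, this would be invoked as `run_conversation(chat)`; injecting `read_input` and `write` is only there to make the loop easy to test.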
Note: This code uses the GPT-3.5 Turbo model (`gpt-3.5-turbo-0613`). You can change the model to a different version if desired, but keep in mind that different models may have different capabilities and cost structures.