Add Ollama integration instructions to the README
hemangjoshi37a opened this issue · comments
Could anyone with write access to this repo please provide documentation on how to replace OpenAI models with Ollama models? Thanks.
Hi :) You can adjust the relevant settings in `model_backend.py`, including the model type, your API key, etc.
If you want to use Ollama in local setup, you can follow my steps:
- Set environment variables:
  export OPENAI_API_KEY=ollama  # any value
  export BASE_URL=http://localhost:11434/v1  # your Ollama API server
- Replace the `model` parameter with your Ollama model in `ChatDev/camel/model_backend.py` (line 100 at commit bbb1450). Example:
  response = client.chat.completions.create(*args, **kwargs, model="gemma:2b-instruct", **self.model_config_dict)
- Run:
python3 run.py --task "[description_of_your_idea]" --name "[project_name]"
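The environment variables above are what an OpenAI-compatible client picks up when it is pointed at Ollama. A minimal sketch of that wiring (the helper name `ollama_client_config` is hypothetical, not part of ChatDev; the resulting dict would be passed as keyword arguments to the `openai` client constructor):

```python
import os

def ollama_client_config(env=None):
    """Build OpenAI-client kwargs pointing at a local Ollama server.

    Falls back to the values from the steps above when the
    OPENAI_API_KEY / BASE_URL environment variables are not set.
    """
    env = os.environ if env is None else env
    return {
        "api_key": env.get("OPENAI_API_KEY", "ollama"),   # any value works
        "base_url": env.get("BASE_URL", "http://localhost:11434/v1"),
    }
```

With the `openai` package installed, you would then create the client as `OpenAI(**ollama_client_config())` and call `client.chat.completions.create(...)` as shown above.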
@thinh9e OK, thanks. But this should be added to the README file so that it is accessible to everyone.
Hi. I get the error below; it seems the model name has to be supported by tiktoken. Is there any way to bypass this so that open models can be used?
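One possible workaround, sketched below (this is not ChatDev's actual code, and the helper name `encoding_name_for` is hypothetical): fall back to a generic tiktoken encoding when the model name is not one tiktoken recognizes. The token counts for open models will only be approximate, but it avoids the lookup error.

```python
def encoding_name_for(model: str) -> str:
    """Return a tiktoken encoding name, with a fallback for unknown models.

    tiktoken only knows OpenAI model names, so any Ollama/open model
    (e.g. "gemma:2b-instruct") falls through to the generic encoding.
    """
    known = {
        "gpt-4": "cl100k_base",
        "gpt-3.5-turbo": "cl100k_base",
    }
    for prefix, encoding in known.items():
        if model.startswith(prefix):
            return encoding
    return "cl100k_base"  # fallback for models tiktoken does not recognize
```

In the token-counting code you would then call `tiktoken.get_encoding(encoding_name_for(model))` instead of `tiktoken.encoding_for_model(model)`; an equivalent approach is wrapping `encoding_for_model` in a `try/except KeyError` that falls back to `tiktoken.get_encoding("cl100k_base")`.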