
🐚 OpenDevin: Code Less, Make More

Home Page: https://opendevin.github.io/OpenDevin/


Document how to use specific LLMs

rbren opened this issue · comments

What problem or use case are you trying to solve?
Lots of folks are struggling to get OpenDevin working with non-OpenAI models. Local Ollama seems to be particularly hard to set up.

Describe the UX of the solution you'd like
We should have a doc that lists 3-4 major providers, explains how to get an API key, and shows how to configure OpenDevin.

Do you have thoughts on the technical implementation?
Just a Models.md that we can link to from README.md

+1 for ollama support and documentation.

I get `ModuleNotFoundError: No module named 'llama_index.embeddings.ollama'` when I use `LLM_EMBEDDING_MODEL="llama2"`, and `ModuleNotFoundError: No module named 'llama_index.embeddings.huggingface'` when I use `LLM_EMBEDDING_MODEL="local"`.
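Those modules live in optional llama-index integration packages that the base install doesn't pull in. A small sketch of a pre-flight check; the package names follow llama-index's naming convention and are my assumption, not taken from OpenDevin's docs:

```python
import importlib

# Map each LLM_EMBEDDING_MODEL value to the optional llama-index
# integration it needs. Package names are assumed from llama-index's
# naming convention (llama-index-embeddings-<backend>).
EMBEDDING_DEPS = {
    "llama2": ("llama_index.embeddings.ollama", "llama-index-embeddings-ollama"),
    "local": ("llama_index.embeddings.huggingface", "llama-index-embeddings-huggingface"),
}

def missing_dep(embedding_model):
    """Return the pip package to install, or None if the module already imports."""
    module, package = EMBEDDING_DEPS[embedding_model]
    try:
        importlib.import_module(module)
    except ImportError:
        return package
    return None
```

If it returns a package name, `pip install` that name inside the same environment OpenDevin runs in.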


I see GitHub still has the wiki feature. I don't see it on OpenDevin, but IIRC that's because it would need to be enabled for the project. What if we use that for a few model-specific pages? I don't remember if access can be opened up widely, but it's a wiki, so I'd assume so.

That seems reasonable to me

Can you update the supported tags for the .env with commands to connect to open LLMs (OpenAI-compatible servers, Ollama WebUI, Oobabooga, etc.)?

IMO, Ollama can be a nuisance for people who already have GGUF files, because it requires them to be converted. It also has no GUI, in contrast to koboldcpp and Ooba.

I recommend using llama.cpp server, koboldcpp, or Ooba instead. They're really easy-to-use inference programs.

Agreed. I also made a PR (#616) for setting LLM_BASE_URL with make setup-config.
It should be enough for any OpenAI-compatible server.

For example, it works with Koboldcpp when set like this:

```toml
LLM_API_KEY="."
LLM_MODEL="gpt-4-0125-preview"
LLM_BASE_URL="http://localhost:5001/v1/"
WORKSPACE_DIR="./workspace"
```
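Before wiring values like these into OpenDevin, it can help to confirm the server actually speaks the OpenAI wire format. A minimal sketch using only the standard library; the default port here matches the Koboldcpp config above:

```python
import json
import urllib.request

def list_models(base_url="http://localhost:5001/v1"):
    """GET <base_url>/models -- any OpenAI-compatible server should answer this."""
    url = base_url.rstrip("/") + "/models"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)
```

If this returns JSON, the LLM_BASE_URL value is good; a connection error means the server isn't running or the port is wrong.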

I made a markdown file for Ollama #615.

I use oobabooga with the openai compatible api. And currently trying OpenCodeInterpreter-DS-33B (exl2) as model first. I'm not aware which non-openai/claude model works best yet...

config.toml:

```toml
WORKSPACE_DIR="./workspace"
LLM_API_KEY="sk-111111111111111111111111111111111111111111111111"
LLM_BASE_URL="http://127.0.0.1:5000"
LLM_EMBEDDING_MODEL="local"
LLM_MODEL="oobabooga/OpenCodeInterpreter-DS-33B-4.65bpw-h6-exl2"
```

After starting a task, I currently get these kinds of errors sometimes during the process:

ERROR:

```
'str' object has no attribute 'copy'
Traceback (most recent call last):
  File "/opt/llm/OpenDevin/opendevin/controller/agent_controller.py", line 113, in step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/agenthub/monologue_agent/agent.py", line 166, in step
    action = prompts.parse_action_response(action_resp)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/agenthub/monologue_agent/utils/prompts.py", line 135, in parse_action_response
    return action_from_dict(action_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/opendevin/action/__init__.py", line 24, in action_from_dict
    action = action.copy()
             ^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'copy'
```

ERROR:

```
"'action' key is not found in action={}"
Traceback (most recent call last):
  File "/opt/llm/OpenDevin/opendevin/controller/agent_controller.py", line 113, in step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/agenthub/monologue_agent/agent.py", line 166, in step
    action = prompts.parse_action_response(action_resp)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/agenthub/monologue_agent/utils/prompts.py", line 135, in parse_action_response
    return action_from_dict(action_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/llm/OpenDevin/opendevin/action/__init__.py", line 26, in action_from_dict
    raise KeyError(f"'action' key is not found in {action=}")
KeyError: "'action' key is not found in action={}"
```
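Both tracebacks come down to the model's reply not parsing to the JSON object the agent expects: in the first case the parsed value is a bare string (so `.copy()` fails), in the second an empty dict with no `"action"` key. Smaller local models drift off the prompted format like this fairly often. A hedged sketch of the kind of validation involved (a hypothetical helper, not OpenDevin's actual code):

```python
import json

def parse_action(response_text):
    """Check that an LLM reply parses to a dict with an 'action' key.

    Raises ValueError with a readable message instead of letting an
    AttributeError or KeyError surface from deeper in the agent loop.
    """
    try:
        parsed = json.loads(response_text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model reply is not valid JSON: {exc}") from exc
    if not isinstance(parsed, dict):
        raise ValueError(f"expected a JSON object, got {type(parsed).__name__}")
    if "action" not in parsed:
        raise ValueError(f"'action' key is not found in action={parsed}")
    return parsed
```

In practice the fix on the user side is usually a more capable model or a retry; the check above just makes the failure mode legible.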


here is a guide for oobabooga webui
08a2dfb#commitcomment-140559598


Getting a similar error using the following config on macOS.

config.toml:

```toml
LLM_MODEL="ollama/deepseek-coder:instruct"
LLM_API_KEY="ollama"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
```

```
Oops. Something went wrong: 'str' object has no attribute 'copy'
Oops. Something went wrong: "'action['action']=None' is not defined. Available actions: dict_keys(['kill', 'run', 'browse', 'read', 'write', 'recall', 'think', 'finish', 'add_task', 'modify_task'])"
```

hey, use `openai/modelname`, like:

```toml
LLM_API_KEY="na"
LLM_BASE_URL="http://0.0.0.0:5000/v1"
LLM_MODEL="openai/alpindale_Mistral-7B-v0.2-hf"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"
MAX_ITERATIONS=20000
```

Although I don't think the model name really matters for oobabooga: as long as it's prefixed with `openai/`, it can be anything. At least that works for me; I can switch models without changing the config and it keeps working.
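That matches my understanding of the routing: the `openai/` prefix tells the client to speak the OpenAI wire protocol against `LLM_BASE_URL` and pass whatever follows the slash as the `model` field, which single-model backends like oobabooga typically ignore. A standard-library sketch of the request shape (URL and key mirror the config above; the "ignores model" behavior is backend-specific, not guaranteed):

```python
import json
import urllib.request

def chat(base_url, model, prompt, api_key="na"):
    """POST an OpenAI-style chat completion; single-model backends ignore `model`."""
    payload = json.dumps({
        "model": model,  # arbitrary for oobabooga; it serves whatever is loaded
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If this returns text for any made-up model name, OpenDevin's `openai/anything` config should work against the same base URL.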

@Rags100 this isn't really the right place for your question. Please follow the README, and search our issues for related problems (there's an existing one for uvloop--you'll need WSL to make it work).

If you continue to have trouble, feel free to file a new issue with the template filled out. Thanks!

Hey all--lots of unrelated comments in this thread. Please try to keep this about LLM Model documentation.

I'm going to delete the unrelated comments--feel free to open new issues if you're having trouble

Documentation for Azure: #1035

Added documentation for using Google's Gemini model through AI studio as well as VertexAI through GCP #1321

We've made a lot of progress on this one, so I'm going to close it. More docs welcome!