jekalmin / extended_openai_conversation

Home Assistant custom component of conversation agent. It uses OpenAI to control your devices.

What "expected_output" for open interpreter?

tholonia opened this issue · comments

Not sure if this is the right place to ask, but I have exhausted all other sources. The existing docs online covering crewai + local server + open interpreter make no mention of the required "expected_output" field when defining a task. When it is set to "Report" or "Ignore", any simple command results in:

OPENAI_API_BASE_URL = http://localhost:3100/v1
OPENAI_API_KEY      = lm-studio
OPENAI_MODEL_NAME   = CodeLlama3-8B-Python.Q3_K_S.gguf

 [DEBUG]: == Working Agent: Software Engineer
 [INFO]: == Starting Task: Identify the current user id and user name and user home directory
> Entering new CrewAgentExecutor chain...
> Finished chain.
 [DEBUG]: == [Software Engineer] Task output: Agent stopped due to iteration limit or time limit.

When assigned to "" , it never gets past "Entering new CrewAgentExecutor chain..."

When assigned to "status of last instruction", it gives:

> Entering new CrewAgentExecutor chain...
Final Answer: [/IN]
</s>

> Finished chain.
 [DEBUG]: == [Software Engineer] Task output: [/IN]
</s>

So, whatever goes in "expected_output" seems critical... but I have no idea what to put there, because I have no idea what the output of whatever command it decides to run will be. In this case, it only needs to run the Linux commands 'id', 'whoami', and 'echo $HOME'.

Any suggestions on what to put there to get open interpreter to work?
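
For context, the setup I'm describing is roughly the following (a minimal sketch, not my exact script; the open interpreter tool wiring is omitted and the agent goal/backstory wording is illustrative):

import os
from crewai import Agent, Task, Crew

# Environment variables as listed above, pointing at the local LM Studio server.
os.environ["OPENAI_API_BASE_URL"] = "http://localhost:3100/v1"
os.environ["OPENAI_API_KEY"] = "lm-studio"
os.environ["OPENAI_MODEL_NAME"] = "CodeLlama3-8B-Python.Q3_K_S.gguf"

engineer = Agent(
    role="Software Engineer",
    goal="Run shell commands and report their output",
    backstory="An engineer with shell access via open interpreter.",
    verbose=True,
)

task = Task(
    description="Identify the current user id and user name and user home directory",
    expected_output="???",  # <-- this is the field I don't know how to fill in
    agent=engineer,
)

crew = Crew(agents=[engineer], tasks=[task], verbose=True)
print(crew.kickoff())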