SeungyounShin / Llama2-Code-Interpreter

Make Llama2 use Code Execution, Debug, Save Code, Reuse it, Access to Internet

working principle

lucasjinreal opened this issue

When does it generate code? From my side, the output doesn't contain any code.
(screenshot attached)

This is actually the expected behavior of the current version of my project. Code generation relies on in-context learning, and code execution is only triggered when the large language model (LLM) actually generates a code block.
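For illustration, here is a minimal sketch of that trigger mechanism, assuming a regex-based detector and a subprocess runner; the `CODE_BLOCK_RE` pattern and the `extract_and_run` helper are hypothetical and not the repository's actual implementation.

```python
import re
import subprocess
import sys

# A fenced Python block in the model's reply, e.g. ```python ... ```
CODE_BLOCK_RE = re.compile(r"```(?:python)?\n(.*?)```", re.DOTALL)

def extract_and_run(llm_output: str):
    """Run the first fenced Python block in the reply, if any.

    Returns the captured stdout/stderr, or None when the reply contains
    no code block (in that case nothing is executed).
    """
    match = CODE_BLOCK_RE.search(llm_output)
    if match is None:
        return None  # the model produced plain text only -> no execution
    code = match.group(1)
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout or result.stderr

# Execution is triggered only because this reply actually contains code.
reply = (
    "Sure, let's compute 10!:\n"
    "```python\nimport math\nprint(math.factorial(10))\n```"
)
print(extract_and_run(reply))  # -> 3628800
```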

I'm working on enhancing this feature by training on data collected from GPT-4. This is an ongoing effort, and you can follow its progress in Issue #1.

For now, you may want to start with simple tasks like calculating "10!" and make sure to explicitly state "use code" or specify the library (for example, "use beautifulsoup") in your instructions.
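As a concrete illustration, the snippet below shows instructions phrased according to that advice; the exact wording and the example URL are my own assumptions, not taken from the repository's documentation.

```python
# Instructions that explicitly ask for code (phrasing is illustrative only):
prompts = [
    "Calculate 10!. Use code.",
    "Use beautifulsoup to extract the title of https://example.com. Use code.",
]
```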

In short, include "use code" in your instructions and start with easier tasks. Please note that the current model is optimized for chat-based applications. I'm in the process of gathering data to support supervised fine-tuning (SFT).

Also, I plan to add examples of best practices for prompting soon.

I hope this clears up your query. Stay tuned for updates on supervised fine-tuning (SFT) and data collection. Let me know if you have any more questions. 😀😀

@SeungyounShin So code execution is actually triggered on certain questions. How?
What kinds of questions will trigger code generation?

Closed: this is resolved by the released fine-tuned model.