langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications

Home Page: https://python.langchain.com

bind_tools NotImplementedError when using ChatOllama

hyhzl opened this issue · comments

commented

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

# Imports for the snippet below; helpers such as get_Tavily_Search,
# get_milvus_vector_retriver, and get_webLoader_docs (and the
# global_model / fallbacks values) are defined elsewhere in my project.
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_community.chat_models import ChatOllama

def init_ollama(model_name: str = global_model):
    # llm = Ollama(model=model_name)
    llm = ChatOllama(model=model_name)
    return llm

llm = init_ollama()
llama2 = init_ollama(model_name=fallbacks)
llm_with_fallbacks = llm.with_fallbacks([llama2])

def agent_search():
    search = get_Tavily_Search()
    retriver = get_milvus_vector_retriver(
        get_webLoader_docs("https://docs.smith.langchain.com/overview"),
        global_model,
    )
    retriver_tool = create_retriever_tool(
        retriver,
        "langsmith_search",
        "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
    )
    tools = [search, retriver_tool]
    # llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)  # money required
    prompt = hub.pull("hwchase17/openai-functions-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)  # raises NotImplementedError
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
    agent_executor.invoke({"input": "hi!"})

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "agent.py", line 72, in <module>
    agent = create_tool_calling_agent(llm, tools, prompt)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain/agents/tool_calling_agent/base.py", line 88, in create_tool_calling_agent
    llm_with_tools = llm.bind_tools(tools)
  File "/home/anaconda3/envs/languagechain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 912, in bind_tools
    raise NotImplementedError()
NotImplementedError
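For context on where the exception comes from: `bind_tools` has a default stub on the base chat model in langchain_core that every provider integration must override, and the `ChatOllama` shipped with this version does not. A simplified sketch of that default behavior (not the actual source, just the shape the traceback points at):

from langchain_core.runnables import Runnable

class BaseChatModel:
    def bind_tools(self, tools, **kwargs) -> Runnable:
        # Default implementation in langchain_core: integrations that
        # support tool calling override this; everything else raises.
        raise NotImplementedError()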

Description

Because Ollama is very convenient for developers building and experimenting with LLM apps, I hope this issue can be handled as soon as possible.
Sincerely appreciated!

System Info

langchain==0.1.19
platform: CentOS
Python version: 3.8.19

@hyhzl, no random mention, please.

Anyone got the solution for that?


Even structured output is not working.

Error: [screenshot]

You can use Ollama's OpenAI-compatible API, like this:

from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    api_key="ollama",
    model="llama3",
    base_url="http://localhost:11434/v1",
)
llm = llm.bind_tools(tools)

Treat the Ollama model as if it were OpenAI and have fun developing with the LLM!
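A quick way to check whether the endpoint actually honors the bound tools is to inspect `tool_calls` on the returned message. A minimal sketch, assuming a local Ollama server on the default port; the `add` tool is a hypothetical example, not something from this thread:

from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

llm_with_tools = llm.bind_tools([add])
msg = llm_with_tools.invoke("What is 2 + 3? Use the add tool.")
# If the endpoint supports tool calling, this prints the requested
# call(s); an empty list means the tools were silently ignored.
print(msg.tool_calls)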

@subhash137 . According to the Ollama docs, their Chat Completions API does not support function calling yet. Did you have any success?
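If the OpenAI-compatible endpoint does not honor tools at this version, one alternative is the experimental `OllamaFunctions` wrapper, which emulates function calling through prompting rather than a native API. A sketch, assuming `langchain-experimental` is installed; the weather schema is just an illustration:

from langchain_experimental.llms.ollama_functions import OllamaFunctions

llm = OllamaFunctions(model="llama3", format="json")
llm_with_tools = llm.bind_tools(
    tools=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                },
                "required": ["location"],
            },
        }
    ],
)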

@subhash137 would you please show how you achieved function calling in that way?

commented

> @subhash137 would you please show how you achieved function calling in that way?

tcztzy's comment should work

commented

> @subhash137 would you please show how you achieved function calling in that way?

Oh sorry, I just tried it; it seems the tools are not invoked this way. Has anyone successfully made the model use the provided tools?
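For anyone hitting this later: native tool calling for Ollama has since landed in the dedicated partner package. A sketch, assuming the newer `langchain-ollama` package, a recent Ollama server, and a tool-capable model such as llama3.1 (all assumptions beyond this thread, not part of langchain 0.1.19):

# pip install -U langchain-ollama
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")  # a model trained for tool calling
llm_with_tools = llm.bind_tools(tools)  # no longer raises NotImplementedError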