langchain-ai / langchainjs

πŸ¦œπŸ”— Build context-aware reasoning applications πŸ¦œπŸ”—

Home Page: https://js.langchain.com/docs/

No tools_call in message error on ChatVertexAI

talhaFayyazfolio opened this issue

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

```typescript
// Imports added for completeness; ChatVertexAI is assumed to come from @langchain/google-vertexai.
import { z } from 'zod';
import { ChatVertexAI } from '@langchain/google-vertexai';

const calculatorSchema = z.object({
  operation: z
    .enum(['add', 'subtract', 'multiply', 'divide'])
    .describe('The type of operation to execute'),
  number1: z.number().describe('The first number to operate on.'),
  number2: z.number().describe('The second number to operate on.'),
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.0-pro',
}).withStructuredOutput(calculatorSchema);

const response = await model.invoke('What is 1628253239 times 81623836?');
console.log(response);
```

Error Message and Stack Trace (if applicable)

Error: No tools_call in message [{"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"","additional_kwargs":{},"tool_call_chunks":[],"tool_calls":[],"invalid_tool_calls":[],"response_metadata":{"usage_metadata":{},"safety_ratings":[{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE","probability_score":0.12973313,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.09912086},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE","probability_score":0.22592856,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.13591877},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE","probability_score":0.16545822,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.08632348},{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE","probability_score":0.08299415,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.05155819}]}}},"text":""}]
    at JsonOutputToolsParser.parseResult (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@langchain/core/dist/output_parsers/openai_tools/json_output_tools_parsers.cjs:110:19)
    at JsonOutputKeyToolsParser.parseResult (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@langchain/core/dist/output_parsers/openai_tools/json_output_tools_parsers.cjs:215:50)
    at JsonOutputKeyToolsParser._callWithConfig (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@langchain/core/dist/output_parsers/base.cjs:45:72)
    at JsonOutputKeyToolsParser._callWithConfig (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@langchain/core/dist/runnables/base.cjs:208:33)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at RunnableSequence.invoke (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@langchain/core/dist/runnables/base.cjs:1075:27)
    at OpenAiService.langChain (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/src/open-ai/open-ai.service.ts:146:22)
    at FoliosService.langChain (/Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/src/folios/folios.service.ts:181:22)
    at /Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@nestjs/core/router/router-execution-context.js:46:28
    at /Users/talhafayyaz/Documents/workArea/soraUnion/folio/folio-backend/node_modules/@nestjs/core/router/router-proxy.js:9:17

Description

I am trying to integrate ChatVertexAI with withStructuredOutput.
This works fine without withStructuredOutput.
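
For comparison, here is a minimal sketch of the call that works, i.e. the same model invoked without withStructuredOutput (assuming the @langchain/google-vertexai package and application-default credentials):

```typescript
// Hedged sketch: same ChatVertexAI setup as above, minus withStructuredOutput.
// The package name (@langchain/google-vertexai) is an assumption; credentials
// are picked up from the environment.
import { ChatVertexAI } from "@langchain/google-vertexai";

const plainModel = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.0-pro",
});

// Returns an AIMessage whose content is free-form text rather than a parsed object.
const plainResponse = await plainModel.invoke("What is 1628253239 times 81623836?");
console.log(plainResponse.content);
```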

System Info

langchain@0.2.0 | MIT | deps: 16 | versions: 271
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.2.0.tgz
.shasum: 555d84538962720cd7223f6c3ca4bd060978ebf3
.integrity: sha512-8c7Dg9OIPk4lFIQGyfOytXbUGLLSsxs9MV53cLODspkOGzaUpwy5FGBie30SrOxIEFJo+FDaJgpDAFO3Xi4NMw==
.unpackedSize: 3.4 MB

dependencies:
@langchain/core: ~0.2.0
@langchain/openai: ~0.0.28
@langchain/textsplitters: ~0.0.0
binary-extensions: ^2.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langchainhub: ~0.0.8
langsmith: ~0.1.7
ml-distance: ^4.0.0
openapi-types: ^12.1.3
p-retry: 4
uuid: ^9.0.0
yaml: ^2.2.1
zod: ^3.22.4
zod-to-json-schema: ^3.22.3

maintainers:
- nfcampos <nuno@boringbits.io>
- jacoblee93 <jacoblee93@gmail.com>
- andrewnguonly <andrewnguonly@gmail.com>
- davidduong <david@duong.cz>
- hwchase17 <hw.chase.17@gmail.com>
- basproul <braceasproul@gmail.com>

dist-tags:
latest: 0.2.0  next: 0.2.0    

published 2 days ago by basproul <braceasproul@gmail.com>

The error you're encountering, "No tools_call in message," suggests that the model's response did not include any tool calls, likely because the input doesn't trigger them or because the response doesn't match the expected structure. To address this, make sure your input message is designed to trigger tool calls and that the schema passed to withStructuredOutput accurately represents the structure you expect back from ChatVertexAI.

Here's a step-by-step approach to troubleshoot and potentially resolve the issue:

  1. Verify Schema Compatibility: Make sure the schema you're using with withStructuredOutput matches the expected output from ChatVertexAI. The error might be due to a mismatch between the actual output and the expected schema defined by your Zod object.

  2. Check Configuration and Usage: Ensure that the withStructuredOutput function is correctly configured. The name used in the configuration should match the expected function name in your ChatVertexAI setup, and the model should be capable of producing output that conforms to your schema.

  3. Debugging and Error Handling: Implement error handling around your withStructuredOutput call to catch any issues during execution. Logging the raw output from ChatVertexAI can help you understand how it differs from your schema expectations.

  4. Adjust According to Raw Output: If necessary, adjust your schema or processing logic based on the raw output from ChatVertexAI to ensure compatibility.

Here is a simplified example of how you might adjust your code:

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

// Any tool-calling chat model works here; ChatOpenAI is used for illustration.
const chatOpenAIInstance = new ChatOpenAI({ temperature: 0 });

const modelWithStructuredOutput = chatOpenAIInstance.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true, // Optional: include the raw output for debugging
});
```

Ensure your input message and schema are correctly aligned with what ChatVertexAI is expected to produce. If the issue persists, providing more details about when the error occurs could help further diagnose the problem.
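
As a concrete illustration of steps 3 and 4, here is a hedged sketch (not a verified fix) that wraps the structured call in error handling and uses includeRaw to surface the raw model message for comparison against the schema:

```typescript
import { z } from "zod";
import { ChatVertexAI } from "@langchain/google-vertexai";

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

// Assumption: ChatVertexAI comes from the @langchain/google-vertexai package.
const model = new ChatVertexAI({ temperature: 0.7, model: "gemini-1.0-pro" }).withStructuredOutput(
  calculatorSchema,
  { name: "calculator", includeRaw: true }
);

try {
  // With includeRaw: true the result carries both the parsed object and the raw AIMessage.
  const result = await model.invoke("What is 1628253239 times 81623836?");
  console.log("parsed:", result.parsed);
  console.log("raw:", result.raw);
} catch (err) {
  // Log the failure instead of crashing; comparing the raw output against the
  // schema shows where the model response diverges.
  console.error("Structured output failed:", err);
}
```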

About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


This seems odd. But looking into it.

The system info dependencies list doesn't show any of the @langchain/google-* modules. Can you confirm which ones are installed and their versions?

Same problem here. When running in verbose mode, it looks like the response when using withStructuredOutput is an empty string.

[llm/start] [1:llm:ChatGoogleStructuredOutput] Entering LLM run with input: {
  "messages": [
    [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "What is 1628253239 times 81623836?",
          "additional_kwargs": {},
          "response_metadata": {}
        }
      }
    ]
  ]
}
[llm/end] [1:llm:ChatGoogleStructuredOutput] [1.39s] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain_core",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "",
            "additional_kwargs": {},
            "tool_call_chunks": [],
            "tool_calls": [],
            "invalid_tool_calls": [],
            "response_metadata": {
              "usage_metadata": {},
              "safety_ratings": [
                {
                  "category": "HARM_CATEGORY_HATE_SPEECH",
                  "probability": "NEGLIGIBLE",
                  "probability_score": 0.12973313,
                  "severity": "HARM_SEVERITY_NEGLIGIBLE",
                  "severity_score": 0.09912086
                },
                {
                  "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                  "probability": "NEGLIGIBLE",
                  "probability_score": 0.22592856,
                  "severity": "HARM_SEVERITY_NEGLIGIBLE",
                  "severity_score": 0.13591877
                },
                {
                  "category": "HARM_CATEGORY_HARASSMENT",
                  "probability": "NEGLIGIBLE",
                  "probability_score": 0.16545822,
                  "severity": "HARM_SEVERITY_NEGLIGIBLE",
                  "severity_score": 0.08632348
                },
                {
                  "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                  "probability": "NEGLIGIBLE",
                  "probability_score": 0.08299415,
                  "severity": "HARM_SEVERITY_NEGLIGIBLE",
                  "severity_score": 0.05155819
                }
              ]
            }
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "usage_metadata": {},
    "safety_ratings": [
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE",
        "probability_score": 0.12973313,
        "severity": "HARM_SEVERITY_NEGLIGIBLE",
        "severity_score": 0.09912086
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE",
        "probability_score": 0.22592856,
        "severity": "HARM_SEVERITY_NEGLIGIBLE",
        "severity_score": 0.13591877
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE",
        "probability_score": 0.16545822,
        "severity": "HARM_SEVERITY_NEGLIGIBLE",
        "severity_score": 0.08632348
      },
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE",
        "probability_score": 0.08299415,
        "severity": "HARM_SEVERITY_NEGLIGIBLE",
        "severity_score": 0.05155819
      }
    ]
  }
}
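
For anyone trying to reproduce a trace like the one above, a small sketch of one way to do it (assuming @langchain/google-vertexai): passing verbose: true to the model prints the [llm/start] / [llm/end] events to the console.

```typescript
import { z } from "zod";
import { ChatVertexAI } from "@langchain/google-vertexai";

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

// verbose: true enables console logging of run start/end events.
const verboseModel = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.0-pro",
  verbose: true,
}).withStructuredOutput(calculatorSchema);

await verboseModel.invoke("What is 1628253239 times 81623836?");
```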

same issue here

Same issue here as well. It works fine without using model.withStructuredOutput. I get the following error message:

 β¨― Error: No tools_call in message [{"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"","additional_kwargs":{},"tool_call_chunks":[],"tool_calls":[],"invalid_tool_calls":[],"response_metadata":{"usage_metadata":{},"safety_ratings":[{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE","probability_score":0.05738754,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.049681153},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE","probability_score":0.09877259,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.09301681},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE","probability_score":0.089136936,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.046378203},{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE","probability_score":0.116965584,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.043528143}]}}},"text":""}]

While the name of the fix doesn't look like it should address it, #5571 should fix the issue. Thanks @talhaFayyazfolio for the test case, which made it much easier to verify.

If you can, check out the patch to see if it looks good to you before it's merged.

@afirstenberg thanks for this fix. It looks good to me.

Thank you so much @afirstenberg!

Fix is live in 0.0.17 of your @langchain/google-* package!

Tested with the new live version, 0.0.17. If I do not attach a file input, withStructuredOutput works great! However, if I attach a file, I get an error when I invoke the model. Example code below:


```typescript
import { z } from 'zod';
import { ChatVertexAI } from '@langchain/google-vertexai';
import { HumanMessage, type MessageContent } from '@langchain/core/messages';

const zodChatMessage = z.object({
  chatMessage: z.string().optional().describe("the chat message answer responding to the user's question"),
  chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  chatPage: z.number().optional().describe('the page number where this answer was extracted from')
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
}).withStructuredOutput(zodChatMessage);

// pdfFileString holds the Base64-encoded contents of the attached PDF.
const inputContent: MessageContent = [{ type: 'text', text: 'What is the powerhouse of the cell?' }];
inputContent.push({ type: 'image_url', image_url: `data:application/pdf;base64,${pdfFileString}` });
const input = [new HumanMessage({ content: inputContent })];

const response = await model.invoke(input);
console.log(response);
```

This looks to be the relevant error message:

errors: [
    {
      message: 'Unable to submit request because Function Calling is not supported with non-text input. Remove the function declarations or remove inline_data/file_data from contents. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling',
      domain: 'global',
      reason: 'badRequest'
    }
  ],

From the sound of it, the issue seems to be with Vertex AI / Gemini itself. Still, it would be a little odd if that were the case, since this seems like a fairly important capability.

I was hoping we could confirm that this is expected behavior for withStructuredOutput as well. It would be really nice if you could use the same chain with and without input files!
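
One possible interim workaround, sketched below under the assumption that Vertex AI simply refuses to combine function declarations with inline_data/file_data: split the request into a plain multimodal call (no tools) followed by a text-only structured call. zodChatMessage and pdfFileString refer to the snippet above.

```typescript
import { ChatVertexAI } from "@langchain/google-vertexai";
import { HumanMessage } from "@langchain/core/messages";

const plainModel = new ChatVertexAI({ temperature: 0.7, model: "gemini-1.5-pro" });
// withStructuredOutput returns a new runnable; plainModel itself keeps no tools attached.
const structuredModel = plainModel.withStructuredOutput(zodChatMessage);

// Step 1: multimodal call with no function declarations, so the file is accepted.
const fileAnswer = await plainModel.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What is the powerhouse of the cell?" },
      { type: "image_url", image_url: `data:application/pdf;base64,${pdfFileString}` },
    ],
  }),
]);

// Step 2: text-only call, so the structured-output tool call is allowed.
const response = await structuredModel.invoke(
  `Turn this answer into the requested structure: ${fileAnswer.content}`
);
console.log(response);
```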

Okay, it actually looks like this needs to be reopened. I am getting the "No tools_call in message" error once again when attaching plain-text file types, which should be supported by Gemini. The following code results in a "No tools_call in message" error. Note the change of file format from PDF to CSV.


```typescript
import { readFileSync } from 'node:fs';
import { z } from 'zod';
import { ChatVertexAI } from '@langchain/google-vertexai';
import { HumanMessage, type MessageContent } from '@langchain/core/messages';

const zodChatMessage = z.object({
  chatMessage: z.string().describe('your response to the input message.'),
  // chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  // chatPage: z.number().optional().describe('the page number or file where this answer was extracted from')
});

// googleServiceAccountDevelopment is a service-account credentials object loaded elsewhere.
const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
  authOptions: { credentials: googleServiceAccountDevelopment, projectId: 'datum-development-403722' }
}).withStructuredOutput(zodChatMessage);

const inputContent: MessageContent = [{ type: 'text', text: 'Can you read this file? If so, please describe whats happening here.' }];
const inputCSV = readFileSync('/path/to/my/file.csv').toString('base64');

inputContent.push({ type: 'image_url', image_url: `data:text/csv;base64,${inputCSV}` });
const input = [new HumanMessage({ content: inputContent })];
```

The error message from the comment above is different from the error message in this one (which is expected, since CSVs are plain text).

https://ai.google.dev/gemini-api/docs/prompting_with_media?lang=python

Will check it out - I wonder if it's not forcing the tool call. What if you prompt more strongly?
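
To illustrate "prompting more strongly", a hedged sketch (reusing model and inputCSV from the snippet above): prepend a system message that insists on answering through the extraction function.

```typescript
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const input = [
  new SystemMessage(
    "You must answer by calling the provided extraction function. " +
      "The user message includes the raw contents of a CSV file as context."
  ),
  new HumanMessage({
    content: [
      { type: "text", text: "Can you read this file? If so, please describe what is happening here." },
      { type: "image_url", image_url: `data:text/csv;base64,${inputCSV}` },
    ],
  }),
];

const response = await model.invoke(input);
console.log(response);
```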

I'm investigating, but I wanted to point a few things out while I do:

Oh yes, sorry about that πŸ˜‚ it was late last night. I mostly just wanted to point out that the CSV is interpreted as text by its file format definition.

I tried with a regular text file as well and ran into the same issue.

I'm actually pretty surprised that Vertex AI doesn't support it - perhaps it just interprets it as text/plain instead?

I can't duplicate the problem.

It looks like Vertex AI is ok with "text/csv", so the documentation seems to be wrong (big shock).

My test code:

```typescript
import { z } from 'zod';
import { ChatVertexAI } from '@langchain/google-vertexai';
import { HumanMessage, type MessageContent } from '@langchain/core/messages';

const zodChatMessage = z.object({
  chatMessage: z.string().optional().describe("the chat message answer responding to the user's question"),
  chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  chatPage: z.number().optional().describe('the page number where this answer was extracted from')
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
}).withStructuredOutput(zodChatMessage);

const filePath = "src/tests/data/structuredOutput.csv";
// loadFile is a local helper (described below): reads the file and returns it Base64-encoded.
const fileString = loadFile(filePath);

const inputContent: MessageContent = [{ type: 'text', text: 'What is the price of a shrimp cocktail?' }];
inputContent.push({ type: 'image_url', image_url: `data:text/csv;base64,${fileString}` });
const input = [new HumanMessage({ content: inputContent })];

const response = await model.invoke(input);
console.log(response);
```

Gives a response:

{ chatMessage: 'The price of the Shrimp Cocktail is $13.50.' }

When I run the test and inspect the raw JSON, it gives me:

{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "text": "The price of the Shrimp Cocktail is $13.50. \n"
          },
          {
            "functionCall": {
              "name": "extract",
              "args": {
                "chatMessage": "The price of the Shrimp Cocktail is $13.50."
              }
            }
          }
        ]
      },
      "finishReason": "STOP",
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_HATE_SPEECH",
          "probability": "NEGLIGIBLE",
          "probabilityScore": 0.111627996,
          "severity": "HARM_SEVERITY_NEGLIGIBLE",
          "severityScore": 0.1362632
        },
        {
          "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
          "probability": "NEGLIGIBLE",
          "probabilityScore": 0.34620965,
          "severity": "HARM_SEVERITY_NEGLIGIBLE",
          "severityScore": 0.1468197
        },
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE",
          "probabilityScore": 0.24526574,
          "severity": "HARM_SEVERITY_NEGLIGIBLE",
          "severityScore": 0.08787644
        },
        {
          "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
          "probability": "NEGLIGIBLE",
          "probabilityScore": 0.17009404,
          "severity": "HARM_SEVERITY_NEGLIGIBLE",
          "severityScore": 0.07356305
        }
      ]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 421,
    "candidatesTokenCount": 33,
    "totalTokenCount": 454
  }
}

which seems correct.

The only thing I can guess about why my prompt is working and yours isn't is that by the time Gemini gets the tokens, it doesn't see a "file" anywhere. It just sees more tokens. So a prompt that asks about a "file" doesn't make any sense to it unless there is something that explicitly says "here is a file" or something.

The CSV file is a pretty straightforward CSV file that contains a menu item, description, and price.
The loadFile function you see there just loads the file and turns it into Base 64.
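
For completeness, a sketch of what that loadFile helper presumably looks like (it is not part of LangChain; the earlier comment builds the same string inline with readFileSync(...).toString('base64')):

```typescript
import { readFileSync } from "node:fs";

// Read the file and return its bytes Base64-encoded, ready for a data: URL.
function loadFile(filePath: string): string {
  return readFileSync(filePath).toString("base64");
}
```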

Okay, I am using a different computer from the one last night, but it seems like the output is still quite inconsistent. When attaching a .csv, I will sometimes get the proper output structure, whereas other times it will not be correct (i.e. null instead of a number or a missing optional field), resulting in an error. For example:

 OutputParserException [Error]: Failed to parse. Text: "{
  "chatCitation": "The length of the lease, in months",
  "chatMessage": "63 months",
  "chatPage": null
}". Error: [{"code":"invalid_type","expected":"number","received":"null","path":["chatPage"],"message":"Expected number, received null"}]

It seems to like putting "null" as the response for optional fields, but if you use Zod's .nullable(), you get an error. This is an issue with or without the file attachment.

Lastly, it looks like you can still get a "No tools_call in message" error occasionally if you play around with the input prompt (i.e. ask it something irrelevant to the attached document), but this is much less common.
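
One client-side mitigation worth sketching, with the caveat that the comment above reports .nullable() alone erroring: Zod's .nullish() accepts both null and undefined, and a .transform() can map null back to undefined so downstream code keeps a single "missing" value. This is only a parsing-side workaround sketch, not a fix for the model ignoring the declared type, and whether the Gemini tool-schema conversion accepts it is an assumption.

```typescript
import { z } from "zod";

// Hypothetical schema variant for illustration only.
const zodChatMessage = z.object({
  chatMessage: z.string().nullish(),
  chatCitation: z.string().nullish(),
  chatPage: z
    .number()
    .nullish()
    // Map a model-supplied null back to undefined.
    .transform((v) => v ?? undefined),
});
```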

I think the only way to avoid any possibility of "no tools_call" is to add a "fallback" tool and to force it to reply with a tool using the ANY function calling mode (which we haven't implemented support for yet - see #5072).

Gemini ignoring the type seems odd, however. I haven't encountered that before.

I'm not sure there is anything that LangChain.js can do differently aside from implementing #5072, but I'm open to suggestions.
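
Until something like #5072 lands, one interim pattern worth sketching (purely illustrative; structuredModel and plainModel are hypothetical names for a withStructuredOutput-wrapped ChatVertexAI and the same model without tools): catch the parse failure and fall back to the plain text answer.

```typescript
async function askWithFallback(question: string) {
  try {
    // Preferred path: structured output via the tool call.
    return await structuredModel.invoke(question);
  } catch (err) {
    // Fallback path for "No tools_call in message": return the plain text answer
    // wrapped in the same shape the schema expects.
    console.warn("Structured output failed, falling back to plain text:", err);
    const plain = await plainModel.invoke(question);
    return { chatMessage: String(plain.content) };
  }
}
```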