mistralai / client-js

JS Client library for Mistral AI platform

Error with Cloudflare Workers

xezpeleta opened this issue

Hi!

When I use this JS client with the docs example to retrieve a chat response from the Mistral API inside a Cloudflare Worker, I get the following error:

  "logs": [
    {
      "message": [
        {
          "message": "There is no suitable adapter to dispatch the request since :\n- adapter xhr is not supported by the environment\n- adapter http is not available in the build",
          "name": "AxiosError",
          "stack": "AxiosError: There is no suitable adapter to dispatch the request since :\n- adapter xhr is not supported by the environment\n- adapter http is not available in the build\n    at Object.getAdapter (worker.js:1724:15)\n    at Axios.dispatchRequest (worker.js:1753:38)\n    at async MistralClient._request (worker.js:2460:24)\n    at async MistralClient.chat (worker.js:2537:24)\n    at async mistralChat (worker.js:2669:26)\n    at async onUpdate (worker.js:2643:7)",
          "code": "ERR_NOT_SUPPORT",
          "status": null
        }
      ],
      "level": "error"

Fixed thanks to https://github.com/haverstack/axios-fetch-adapter/

Instructions

  1. Install the Mistral AI JS client: npm install @mistralai/mistralai
  2. Install axios-fetch-adapter: npm install @haverstack/axios-fetch-adapter
  3. Edit the file node_modules/@mistralai/mistralai/src/client.js (note that edits under node_modules are lost whenever dependencies are reinstalled):

Import the module:

import fetchAdapter from "@haverstack/axios-fetch-adapter"; // import fetchAdapter (line 3 of client.js)

Specify the adapter:

    ...
    const response = await axios({
      adapter: fetchAdapter, // use the fetchAdapter (line 47 of client.js)
      method: method,
      ...
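
If you'd rather not patch node_modules (the edit disappears on every reinstall), it may also be possible to inject the adapter globally before constructing the client. This is an untested sketch; it assumes your bundler resolves a single shared copy of axios between your code and the client:

import axios from "axios";
import fetchAdapter from "@haverstack/axios-fetch-adapter";
import MistralClient from "@mistralai/mistralai";

// axios({...}) falls back to defaults.adapter when no adapter is set per request
axios.defaults.adapter = fetchAdapter;

const client = new MistralClient("<your-api-key>"); // placeholder key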

I was also facing this issue and couldn't work out how to get the adapter to work, so I just wrote my own client for the REST API.

Might be useful for some people:

interface Message {
  role: "user" | "system" | "assistant";
  content: string;
}

interface MistralConfig {
  model: "mistral-tiny" | "mistral-small" | "mistral-medium";
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  randomSeed?: number;
  safeMode?: boolean;
}

export async function* streamMistralChat(
  messages: Message[],
  config: MistralConfig,
  apiKey: string | undefined = undefined,
): AsyncGenerator<string, void, void> {
  const r = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "post",
    headers: {
      Authorization: `Bearer ${apiKey ?? process.env.MISTRAL_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: config.model,
      messages: messages,
      temperature: config.temperature,
      max_tokens: config.maxTokens,
      top_p: config.topP,
      random_seed: config.randomSeed,
      stream: true,
      safe_prompt: config.safeMode,
    }),
  });

  if (!r.ok) {
    console.error(`Error fetching from Mistral API ${r.status}`);
    console.error(await r.text());
    throw new Error(`Mistral API error, status: ${r.status}`);
  }

  if (r.body === null) {
    throw new Error("Mistral API response has no body to stream");
  }

  const reader = r.body.getReader();
  const decoder = new TextDecoder();
  let buffer: string[] = [];

  while (true) {
    const { done, value: bytes } = await reader.read();
    if (done) {
      break;
    }

    const chunk = decoder.decode(bytes, { stream: true });

    for (let i = 0, len = chunk.length; i < len; ++i) {
      // The API streams newline-delimited SSE-style data: a blank line
      // (double newline) separates one chunk from the next
      const isChunkSeparator = chunk[i] === "\n" && buffer[buffer.length - 1] === "\n";

      // Keep buffering unless we've hit the end of a data chunk
      if (!isChunkSeparator) {
        buffer.push(chunk[i]);
        continue;
      }

      const chunkLine = buffer.join("");
      if (chunkLine.trim() === "data: [DONE]") {
        // End-of-stream marker: return (not break) so the generator finishes
        // instead of only exiting the inner loop with a stale buffer
        return;
      }

      if (chunkLine.startsWith("data:")) {
        const chunkData = chunkLine.substring(6).trim();
        if (chunkData !== "[DONE]") {
          const chunkObject = JSON.parse(chunkData);
          // We just stream the completion text
          yield chunkObject.choices[0].delta.content ?? "";
        }
      } else {
        throw Error(`Invalid chunk line encountered: ${chunkLine}`);
      }

      buffer = [];
    }
  }
}

Just put your API key in the MISTRAL_API_KEY env var (or pass it as the third argument) and run:

const response = streamMistralChat([{ role: "user", content: "Hello!" }], {
  model: "mistral-tiny",
  temperature: 0.7,
  // etc
});

for await (const chunk of response) {
  // Do whatever you want with the streamed text
  console.log(chunk);
}
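
On Workers, where process.env doesn't exist, pass the key explicitly as the third argument; the generator can then feed a streaming Response directly. A rough sketch (the MISTRAL_API_KEY binding name is a placeholder):

export default {
  async fetch(_request: Request, env: { MISTRAL_API_KEY: string }): Promise<Response> {
    const chunks = streamMistralChat(
      [{ role: "user", content: "Hello!" }],
      { model: "mistral-tiny" },
      env.MISTRAL_API_KEY,
    );
    const encoder = new TextEncoder();
    // Bridge the async generator into a ReadableStream for the Response body
    const stream = new ReadableStream({
      async start(controller) {
        for await (const text of chunks) {
          controller.enqueue(encoder.encode(text));
        }
        controller.close();
      },
    });
    return new Response(stream, { headers: { "Content-Type": "text/plain" } });
  },
};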