run-llama / tool

@llamaindex/tool

⚠️ Not published yet; you can try it locally.

Transform JS functions into LLM tool calls.

  • ✅OpenAI
  • 🚧ClaudeAI
  • ✅LlamaIndexTS
  • 🚧LangChainJS

Usage

In your code

// @file: index.llama.ts

// You can write JSDoc to give the LLM a better description of the tool
/**
 * @name getWeather
 * @description Get the weather of a city
 * @param city City name
 * @returns The weather in the city
 */
export function getWeather(city: string) {
  return `The weather in ${city} is sunny.`
}
// You don't need to worry about the schema for different LLM tools
export function getTemperature(city: string) {
  return `The temperature in ${city} is 25°C.`
}
export function getCurrentCity() {
  return 'New York'
}
// @file: app.ts
import Tools from './index.llama'
import { registerTools, convertTools } from '@llamaindex/tool'
// Register tools at the top level
registerTools(Tools)

import { OpenAI } from 'openai'
const openai = new OpenAI()
const completion = await openai.chat.completions.create({
  // `model` is required by the API; any tool-capable model works here
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: 'What is the weather in the current city?'
    }
  ],
  tools: convertTools('openai')
})

// Or use the LlamaIndex OpenAI agent
import { OpenAIAgent } from 'llamaindex'
const agent = new OpenAIAgent({
  tools: convertTools('llamaindex')
})
const { response } = await agent.chat({
  message: 'What is the temperature in the current city?'
})
console.log('Response:', response)

Run with Node.js

node --import tsx --import @llamaindex/tool/register ./app.ts
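
The two --import flags load tsx (for TypeScript) and the package's register hook, which presumably rewrites the *.llama.ts modules as they load. A script in package.json keeps the flags in one place (the script name is illustrative):

{
  "scripts": {
    "start": "node --import tsx --import @llamaindex/tool/register ./app.ts"
  }
}
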

Vite (WIP)

import { defineConfig } from 'vite'
import tool from '@llamaindex/tool/vite'

export default defineConfig({
  plugins: [
    tool()
  ]
})

Next.js (WIP)

// next.config.js
const withTool = require('@llamaindex/tool/next')

const config = {
  // Your Next.js config
}
module.exports = withTool(config)
