JavaScript implementation of LiteLLM.

## Installation

```shell
npm install litellm
```
## Usage

```js
import { completion } from 'litellm';

process.env['OPENAI_API_KEY'] = 'your-openai-key';

const response = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
});
```
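The resolved value mirrors the OpenAI chat-completion shape, so the reply text can be read from `response.choices[0].message.content`. A minimal sketch with a hard-coded response object (no API call is made; the content string is illustrative):

```javascript
// Hard-coded stand-in for a resolved completion() result,
// assuming the OpenAI-style response shape:
const response = {
  choices: [{ message: { role: 'assistant', content: 'I am doing well!' } }],
};

// The assistant's reply lives on the first choice's message.
const text = response.choices[0].message.content;
console.log(text); // I am doing well!
```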
```js
// or stream the results
const stream = await completion({
  model: 'gpt-3.5-turbo',
  messages: [{ content: 'Hello, how are you?', role: 'user' }],
  stream: true,
});

for await (const part of stream) {
  process.stdout.write(part.choices[0]?.delta?.content || '');
}
```
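The streaming loop above concatenates the `delta.content` fragments into the full reply. A self-contained sketch of that pattern, using a mock async generator in place of a real stream (the chunk shape is the OpenAI-style shape assumed above):

```javascript
// Mock stream for illustration only -- a real stream comes from
// completion({ ..., stream: true }).
async function* mockStream() {
  for (const text of ['Hello', ', ', 'world', '!']) {
    yield { choices: [{ delta: { content: text } }] };
  }
}

// Collect streamed deltas into one string, like the loop above.
async function collectStream(stream) {
  let reply = '';
  for await (const part of stream) {
    reply += part.choices[0]?.delta?.content || '';
  }
  return reply;
}

collectStream(mockStream()).then((reply) => console.log(reply)); // prints "Hello, world!"
```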
## Features

We aim to support all the features that the LiteLLM Python package supports.
- Standardised completions
- Standardised embeddings
- Standardised input params 🚧 - List is here
- Caching ❌
- Proxy ❌
## Supported Providers

| Provider    | Completion | Streaming | Embedding |
| ----------- | ---------- | --------- | --------- |
| openai      | ✅         | ✅        | ✅        |
| cohere      | ✅         | ✅        | ❌        |
| anthropic   | ✅         | ✅        | ❌        |
| ollama      | ✅         | ✅        | ✅        |
| ai21        | ✅         | ✅        | ❌        |
| replicate   | ✅         | ✅        | ❌        |
| deepinfra   | ✅         | ✅        | ❌        |
| mistral     | ✅         | ✅        | ✅        |
| huggingface | ❌         | ❌        | ❌        |
| together_ai | ❌         | ❌        | ❌        |
| openrouter  | ❌         | ❌        | ❌        |
| vertex_ai   | ❌         | ❌        | ❌        |
| palm        | ❌         | ❌        | ❌        |
| baseten     | ❌         | ❌        | ❌        |
| azure       | ❌         | ❌        | ❌        |
| sagemaker   | ❌         | ❌        | ❌        |
| bedrock     | ❌         | ❌        | ❌        |
| vllm        | ❌         | ❌        | ❌        |
| nlp_cloud   | ❌         | ❌        | ❌        |
| aleph alpha | ❌         | ❌        | ❌        |
| petals      | ❌         | ❌        | ❌        |
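LiteLLM selects the backend from the model name alone. As a toy illustration of that kind of model-to-provider routing (a hypothetical helper, not the library's actual implementation; the prefixes are examples):

```javascript
// Toy illustration of model-based routing (NOT litellm's real code):
// map a model name to a provider, the way a unified client decides
// which backend to call for a given request.
function providerFor(model) {
  if (model.startsWith('gpt-')) return 'openai';
  if (model.startsWith('command')) return 'cohere';
  if (model.startsWith('claude')) return 'anthropic';
  if (model.startsWith('ollama/')) return 'ollama';
  return 'unknown';
}

console.log(providerFor('gpt-3.5-turbo')); // openai
console.log(providerFor('command-nightly')); // cohere
console.log(providerFor('ollama/llama2')); // ollama
```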
## Development

Clone the repo and install the dependencies:

```shell
git clone https://github.com/zya/litellmjs.git
cd litellmjs
npm install
```

Run the unit tests:

```shell
npm t
```
To run the E2E tests, first copy the example env file:

```shell
cp .example.env .env
```

Then fill in the variables with your API keys:

```
OPENAI_API_KEY=<Your OpenAI API key>
....
```

Then run the command below to run the tests:

```shell
npm run test:e2e
```