Andres77872 / magic-llm

Simple LLM router

Magic LLM

Magic LLM is a simplified wrapper designed to facilitate connections with various LLM providers.

Tested LLM Providers with OpenAI Compatibility

The providers listed in the table below have been tested for compatibility with the OpenAI API.

Note: Other LLM providers have not been tested.

Features

  • Chat completion
  • Text completion
  • Embeddings
  • Non-streaming responses
  • Streaming responses reshaped to the OpenAI response format
  • Function calling (tested with OpenAI only)
  • Streaming yields chunk objects
  • Usage reported in the response
  • Vision adapter to the OpenAI schema
    • OpenAI
    • Anthropic
    • Google AI Studio
  • Text to Speech
    • OpenAI
  • Fallback client
    • Stream completion
    • Completion
Per-provider support covers streaming completion, completion, embedding, and their async variants (async streaming, async completion, async embedding):

  • OpenAI
  • Cloudflare
  • AWS Bedrock
  • Google AI Studio
  • Cohere
  • Anthropic
  • Perplexity AI
  • Together.AI
  • OpenRouter
  • DeepInfra
  • Fireworks.AI
  • Mistral
  • Deepseek
  • Groq
  • LeptonAI
  • OctoAI
  • NovitaAI
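The "streaming responses reshaped to the OpenAI response format" feature can be illustrated with a plain-Python sketch. The function below is hypothetical (not magic-llm's actual implementation): it maps an Anthropic-style `content_block_delta` stream event into an OpenAI-style chat-completion chunk dict, which is the general shape of what such an adapter does.

```python
# Hypothetical sketch of a provider-to-OpenAI stream-chunk adapter.
# Not magic-llm internals; names and event shapes are illustrative.

def to_openai_chunk(provider_event: dict, model: str) -> dict:
    """Map an Anthropic-style 'content_block_delta' event to an
    OpenAI chat-completion chunk shape."""
    text = provider_event.get("delta", {}).get("text", "")
    return {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [
            {
                "index": 0,
                "delta": {"content": text},
                "finish_reason": None,
            }
        ],
    }

event = {"type": "content_block_delta", "delta": {"text": "Hello"}}
chunk = to_openai_chunk(event, "claude-3-haiku")
print(chunk["choices"][0]["delta"]["content"])  # Hello
```

Downstream code can then iterate chunks from any provider as if they came from OpenAI, which is what lets one client interface sit in front of many backends.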

Purpose

This client is not intended to replace the full functionality of the OpenAI client. Instead, it has been developed as the core component for another project, Magic UI, which is currently under development. The goal of Magic UI is to create a robust application generator (RAG).

Clients

This client is built to be compatible with OpenAI's client, aiming to unify multiple LLM providers under the same framework.

OpenAI and any other provider with a compatible API

from magic_llm import MagicLLM

client = MagicLLM(
    engine='openai',
    model='gpt-3.5-turbo-0125',
    private_key='sk-',
)

Cloudflare

client = MagicLLM(
    engine='cloudflare',
    model='@cf/mistral/mistral-7b-instruct-v0.1',
    private_key='a...b',
    account_id='c...1',
)

Amazon Bedrock

client = MagicLLM(
    engine='amazon',
    model='amazon.titan-text-express-v1',
    aws_access_key_id='A...B',
    aws_secret_access_key='a...b',
    region_name='us-east-1',
)

Google AI Studio

client = MagicLLM(
    engine='google',
    model='gemini-pro',
    private_key='A...B',
)

Cohere

client = MagicLLM(
    engine='cohere',
    model='command-light',
    private_key='a...b',
)

Usage (same for all clients)

from magic_llm import MagicLLM
from magic_llm.model import ModelChat

client_fallback = MagicLLM(
    engine='openai',
    model='gpt-3.5-turbo-0125',
    private_key='sk-',
    # base_url='API'
)

client = MagicLLM(
    engine='openai',
    model='model_fail',
    private_key='sk-',
    # base_url='API',
    fallback=client_fallback
)

chat = ModelChat(system="You are an assistant who responds sarcastically.")
chat.add_user_message("Hello, my name is Andres.")
chat.add_assistant_message("What an original name. 🙄")
chat.add_user_message("Thanks, you're also as original as an ant in an anthill.")

for i in client.llm.stream_generate(chat):
    print(i)
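The fallback behavior configured above (a failing primary model falling back to `client_fallback`) follows a common pattern that can be sketched in plain Python. The class and function names below are hypothetical stand-ins, not magic-llm internals:

```python
# Illustrative sketch of the fallback-client pattern; hypothetical names,
# not magic-llm's actual implementation.

class FallbackClient:
    def __init__(self, primary, fallback=None):
        self.primary = primary      # callable standing in for the main model
        self.fallback = fallback    # optional backup, tried on failure

    def generate(self, prompt: str) -> str:
        try:
            return self.primary(prompt)
        except Exception:
            if self.fallback is None:
                raise               # no backup configured: propagate the error
            return self.fallback(prompt)

def failing_model(prompt):          # stands in for model='model_fail'
    raise RuntimeError("model not found")

def working_model(prompt):          # stands in for the working fallback client
    return f"echo: {prompt}"

client = FallbackClient(failing_model, working_model)
print(client.generate("hi"))  # echo: hi
```

The design keeps error handling out of the calling code: the caller always invokes one client, and rerouting to the backup provider happens transparently.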

About

Simple LLM router

License: Apache License 2.0


Languages

Language: Python 100.0%