Helicone

Open-source observability platform for LLMs

  • 📝 Logs all of your requests to OpenAI in a user-friendly UI

  • 💾 Caching, custom rate limits, and retries

  • 📊 Track costs and latencies by users and custom properties

  • 🎮 Every log is a playground: iterate on prompts and chat conversations in a UI

  • 🚀 Share results and collaborate with your friends or teammates

  • 🔜 (Coming soon) APIs to log feedback and evaluate results

Quick Use ⚡️

Get your API key by signing up here.

export HELICONE_API_KEY=<your API key>
pip install helicone
from helicone import openai

response = openai.Completion.create(
	model="text-davinci-003",
	prompt="What is Helicone?",
	user="alice@bob.com",
	# Optional Helicone features:
	cache=True,
	properties={"conversation_id": 12},
	rate_limit_policy={"quota": 100, "time_window": 60, "segment": "user"}
)
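
If you prefer not to install the helicone package, you can also route the standard openai client through Helicone's proxy. The sketch below is a minimal example assuming the 0.x openai SDK (the same Completion API used above), the oai.hconeai.com proxy endpoint, and the Helicone-Auth header described in Helicone's docs; adjust it if your setup differs.

import os
import openai

# Route requests through the Helicone proxy instead of api.openai.com;
# the SDK still reads OPENAI_API_KEY from the environment as usual.
openai.api_base = "https://oai.hconeai.com/v1"

response = openai.Completion.create(
	model="text-davinci-003",
	prompt="What is Helicone?",
	# Extra request headers forwarded by the 0.x SDK; this authenticates you to Helicone
	headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
print(response["choices"][0]["text"])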

👉 Then view your logs at Helicone.

More resources

Local Setup 💻

Helicone's cloud offering is deployed on Cloudflare Workers to keep the latency it adds to your API requests as low as possible.

To get started locally, you need to run Helicone's four services:

  • Frontend (Node)
  • The proxy worker (Wrangler)
  • Application database (Supabase)
  • Analytics database (ClickHouse)

If you have any questions, contact help@helicone.ai or join our Discord.

Install Wrangler and Yarn

nvm install 18.11.0
nvm use 18.11.0
npm install -g wrangler
npm install -g yarn

Install Supabase

brew install supabase/tap/supabase

Install and set up ClickHouse

# https://clickhouse.com/docs/en/install
curl https://clickhouse.com/ | sh


# This will start ClickHouse locally
python3 clickhouse/ch_hcone.py --start

Run all services

cd web

# start Supabase (the application database)
supabase start

# start frontend
yarn
yarn dev

# start worker (simulates oai.hconeai.com)
# in another terminal
cd worker
yarn
wrangler dev --local

# Make your request to localhost
curl --request POST \
  --url http://127.0.0.1:8787/v1/chat/completions \
  --header 'Authorization: Bearer <KEY>' \
  --data '{
	"model": "gpt-3.5-turbo",
	"messages": [
		{
			"role": "user",
			"content": "Can you give me a random number?"
		}
	],
	"temperature": 1,
	"max_tokens": 7
}'

# Now you can go to localhost:3000 and create an account and see your request.
# When creating an account on localhost, you will automatically be signed in.
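
If you prefer Python over curl for testing the local worker, here is a minimal equivalent of the request above using the requests library (this assumes the worker is running on 127.0.0.1:8787 as shown and that requests is installed):

import requests

# Same request as the curl example above, sent to the local proxy worker
response = requests.post(
	"http://127.0.0.1:8787/v1/chat/completions",
	headers={"Authorization": "Bearer <KEY>"},  # your OpenAI API key
	json={
		"model": "gpt-3.5-turbo",
		"messages": [{"role": "user", "content": "Can you give me a random number?"}],
		"temperature": 1,
		"max_tokens": 7,
	},
)
print(response.status_code, response.json())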

Set up the .env file

Make sure your .env file is located at web/.env. Here is an example:

NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY=""
STRIPE_SECRET_KEY=""
NEXT_PUBLIC_HELICONE_BILLING_PORTAL_LINK=""
NEXT_PUBLIC_HELICONE_CONTACT_LINK="https://calendly.com/d/x5d-9q9-v7x/helicone-discovery-call"
STRIPE_PRICE_ID=""
STRIPE_STARTER_PRICE_ID=""
STRIPE_ENTERPRISE_PRODUCT_ID=""
STRIPE_STARTER_PRODUCT_ID=""
DATABASE_URL="postgresql://postgres:postgres@localhost:54322/postgres"
NEXT_PUBLIC_SUPABASE_ANON_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0"
NEXT_PUBLIC_SUPABASE_URL="http://localhost:54321"
SUPABASE_URL="http://localhost:54321"
SUPABASE_SERVICE_KEY="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU"

Community 🌍

Packages that use Helicone

  • nextjs-chat-app (Docs)
  • langchain (Docs)
  • langchainjs (Docs)

Contributing

We welcome contributions to documentation and integrations, as well as feature requests.

About

https://www.helicone.ai

License: Apache License 2.0


Languages

TypeScript 94.0%, Python 4.0%, PLpgSQL 1.7%, JavaScript 0.1%, Shell 0.1%, CSS 0.1%