LangSmith Cookbook

Welcome to the LangSmith Cookbook — your practical guide to mastering LangSmith. While our standard documentation covers the basics, this repository delves into common patterns and some real-world use-cases, empowering you to optimize your LLM applications further.

As a tool, LangSmith empowers you to debug, evaluate, test, and continuously improve your LLM applications. These recipes dive deeper than the standard documentation, presenting real-world scenarios for you to adapt and implement.

Your Input Matters

Help us make the cookbook better! If there's a use-case we missed, or if you have insights to share, please raise a GitHub issue (feel free to tag Will) or contact the LangChain development team. Your expertise shapes this community.

Tracing your code

Tracing allows for seamless debugging and improvement of your LLM applications. Here's how:

  • Tracing without LangChain: learn to trace applications independent of LangChain using the Python SDK's @traceable decorator (a minimal sketch follows this list).
  • REST API: get acquainted with the REST API's features for logging LLM and chat model runs, and understand nested runs. The run logging spec can be found in the LangSmith SDK repository; a hedged request sketch also follows this list.
  • Customizing Run Names: improve UI clarity by assigning bespoke names to LangSmith chain runs; includes examples for chains, lambda functions, and agents.
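
To make the first item concrete, here is a minimal sketch of the @traceable decorator, assuming the langsmith Python SDK is installed and the LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY environment variables are set; the function bodies are hypothetical placeholders. The name argument also shows one way to assign a custom run name, per the third item above:

```python
from langsmith import traceable


@traceable(run_type="llm")  # nested calls appear as child runs in the trace
def call_model(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an OpenAI chat completion).
    return f"Echo: {prompt}"


@traceable(run_type="chain", name="My Custom Pipeline")  # bespoke run name
def pipeline(question: str) -> str:
    # Each invocation is logged to LangSmith under the custom name above.
    return call_model(question)


print(pipeline("What is LangSmith?"))
```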
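
And a hedged sketch of the REST flow for logging a run: POST to create it when the call starts, then PATCH it with outputs when it finishes. Endpoint and field names follow the run logging spec linked above, but treat the exact payload shape as an assumption to verify against that spec:

```python
import os
import uuid
from datetime import datetime, timezone

import requests

API_URL = "https://api.smith.langchain.com"
HEADERS = {"x-api-key": os.environ["LANGCHAIN_API_KEY"]}
run_id = str(uuid.uuid4())

# Create the run when the LLM call starts.
requests.post(
    f"{API_URL}/runs",
    json={
        "id": run_id,
        "name": "my_llm_call",
        "run_type": "llm",
        "inputs": {"prompt": "Hello, world"},
        "start_time": datetime.now(timezone.utc).isoformat(),
    },
    headers=HEADERS,
)

# Patch the same run with outputs when the call finishes.
requests.patch(
    f"{API_URL}/runs/{run_id}",
    json={
        "outputs": {"text": "Hi there!"},
        "end_time": datetime.now(timezone.utc).isoformat(),
    },
    headers=HEADERS,
)
```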

LangChain Hub

Efficiently manage your LLM components with the LangChain Hub. For dedicated documentation, please see the hub docs.

  • RetrievalQA Chain: use prompts from the hub in an example RAG pipeline.
  • Prompt Versioning: ensure deployment stability by selecting specific prompt versions over the 'latest' (see the sketch after this list).
  • Runnable PromptTemplate: streamline the process of saving prompts to the hub from the playground and integrating them into runnable chains.
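
As a taste of the versioning recipe, here is a minimal sketch of pulling a hub prompt pinned to a specific version, assuming a langchain install with hub support; the commit hash below is a hypothetical placeholder:

```python
from langchain import hub

# Pulling without a version fetches 'latest', which can change underneath
# a deployment:
prompt = hub.pull("rlm/rag-prompt")

# Appending ':<commit hash>' pins the prompt to an exact version
# (the hash here is hypothetical):
pinned = hub.pull("rlm/rag-prompt:50442af1")

print(pinned.format(context="LangSmith docs...", question="What is LangSmith?"))
```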

Testing & Evaluation

Test and benchmark your LLM systems using methods in these evaluation recipes:

Python Examples
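
The core loop these recipes build on is: create a dataset of reference examples, run your system over it, and score the results. A minimal sketch, assuming a recent langsmith SDK; the dataset name, target function, and evaluator are all hypothetical:

```python
from langsmith import Client
from langsmith.evaluation import evaluate

client = Client()

# A tiny dataset of reference examples (names here are hypothetical).
dataset = client.create_dataset("my-qa-dataset")
client.create_example(
    inputs={"question": "What does LangSmith do?"},
    outputs={"answer": "It helps debug, test, and evaluate LLM apps."},
    dataset_id=dataset.id,
)


def my_app(inputs: dict) -> dict:
    # The system under test; swap in your real chain or agent.
    return {"answer": "It helps debug, test, and evaluate LLM apps."}


def exact_match(run, example) -> dict:
    # A trivial custom evaluator comparing output to the reference answer.
    match = run.outputs["answer"] == example.outputs["answer"]
    return {"key": "exact_match", "score": int(match)}


evaluate(
    my_app,
    data="my-qa-dataset",
    evaluators=[exact_match],
    experiment_prefix="cookbook-sketch",
)
```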

TypeScript / JavaScript Testing Examples

Incorporate LangSmith into your TS/JS testing and evaluation workflow:

Using Feedback

Harness user feedback and other signals to improve, monitor, and personalize your applications:
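
For instance, logging a user's thumbs-up against a traced run is a single SDK call. A minimal sketch, assuming the langsmith SDK; the run_id and feedback key are hypothetical placeholders:

```python
from langsmith import Client

client = Client()

# The run_id would normally come from a tracing callback or the run object.
client.create_feedback(
    run_id="9f1a0c2e-1234-5678-9abc-def012345678",  # hypothetical
    key="user_score",   # any string key, e.g. for thumbs up/down
    score=1,            # 1 = positive, 0 = negative
    comment="Helpful answer",
)
```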

Exploratory Data Analysis

Turn your trace data into actionable insights:

  • Exporting LLM Runs and Feedback: extract and interpret LangSmith LLM run data, making it ready for various analytical platforms (see the sketch after this list).
  • Lilac: enrich datasets using the open-source analytics tool, Lilac, to detect near-duplicates, check for PII, and more.
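
A minimal export sketch, assuming the langsmith SDK and pandas; the project name is hypothetical:

```python
import pandas as pd
from langsmith import Client

client = Client()
runs = client.list_runs(project_name="my-project", run_type="llm")

# Flatten each run into a row for analysis.
df = pd.DataFrame(
    {
        "name": run.name,
        "inputs": run.inputs,
        "outputs": run.outputs,
        "latency_s": (run.end_time - run.start_time).total_seconds()
        if run.end_time
        else None,
    }
    for run in runs
)
print(df.head())
```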

Exporting data for fine-tuning

Fine-tune an LLM on collected run data using these recipes:

  • OpenAI Fine-Tuning: list LLM runs and convert them to OpenAI's fine-tuning format efficiently (a conversion sketch follows).
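
A hedged sketch of that conversion, assuming the langsmith SDK, a hypothetical project name, and runs whose inputs/outputs are stored as OpenAI-style message dicts; real runs may need different extraction logic:

```python
import json

from langsmith import Client

client = Client()
runs = client.list_runs(project_name="my-project", run_type="llm")

# OpenAI's chat fine-tuning format is one {"messages": [...]} object per line.
with open("finetune.jsonl", "w") as f:
    for run in runs:
        if not run.outputs:
            continue
        # Assumes inputs hold OpenAI-style message dicts; adjust to match
        # how your application actually records prompts and completions.
        messages = list(run.inputs.get("messages", []))
        messages.append(
            {"role": "assistant", "content": run.outputs.get("content", "")}
        )
        f.write(json.dumps({"messages": messages}) + "\n")
```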
