yandex-chain

LangChain-compatible integrations with YandexGPT and YandexGPT Embeddings


This library is a community-maintained Python package that provides YandexGPT LLM and Embeddings support for the LangChain framework.

Currently, YandexGPT is in the preview stage, so this library may occasionally break. Please use it at your own risk!

What's Included

The library includes the following two main classes:

  • YandexLLM, a LangChain LLM interface to the YandexGPT text generation model
  • YandexEmbeddings, a LangChain Embeddings interface to the YandexGPT text embedding model

Usage

You can use YandexLLM in the following manner:

from yandex_chain import YandexLLM

llm = YandexLLM(folder_id="...", api_key="...")
print(llm("How are you today?"))

You can use YandexEmbeddings to compute embedding vectors:

from yandex_chain import YandexEmbeddings

embeddings = YandexEmbeddings(...)
print(embeddings.embed_query("How are you today?"))
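
YandexEmbeddings follows the standard LangChain Embeddings interface, so batches of texts can be embedded in one call (a minimal sketch using the generic embed_documents method from that interface):

# Embed several texts at once; returns one vector per input text
vectors = embeddings.embed_documents(["First text", "Second text"])
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimensionality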

Authentication

In order to use YandexGPT, you need to provide one of the following authentication methods, which you can pass as parameters to the YandexLLM and YandexEmbeddings classes:

  • A pair of folder_id and api_key
  • A pair of folder_id and iam_token
  • A path to a config.json file, which in turn contains the parameters listed above in convenient JSON format (see the sketch below).
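
For instance, instantiation with each method might look like this (a sketch; credential values are placeholders, and the config.json keys should follow config_sample.json from this repository):

from yandex_chain import YandexLLM

llm = YandexLLM(folder_id="...", api_key="...")    # folder ID + API key
llm = YandexLLM(folder_id="...", iam_token="...")  # folder ID + IAM token
llm = YandexLLM(config="config.json")              # path to a JSON config file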

Complete Example

An LLM and Embeddings together are a good combination for creating problem-oriented chatbots using Retrieval-Augmented Generation (RAG). Here is a short example of this approach, inspired by a LangChain tutorial.

To begin with, we have a set of documents docs (for simplicity, let's assume it is just a list of strings), which we store in a vector store. We can use YandexEmbeddings to compute embedding vectors and index them with FAISS:

from yandex_chain import YandexLLM, YandexEmbeddings
from langchain.vectorstores import FAISS

# docs is a list of strings defined elsewhere in your code
embeddings = YandexEmbeddings(config="config.json")
vectorstore = FAISS.from_texts(docs, embedding=embeddings)
retriever = vectorstore.as_retriever()

We can now retrieve a set of documents relevant to a query:

query = "Which library can be used to work with Yandex GPT?"
res = retriever.get_relevant_documents(query)
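
Each retrieved result is a LangChain Document, whose text is available via its page_content attribute:

for doc in res:
    print(doc.page_content)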

Now, to produce a full-text answer to the query, we can use the LLM. We prompt it with the retrieved documents as context together with the input query, and ask it to answer the question. This can be done using LangChain chains:

from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = YandexLLM(config="config.json")

chain = (
    {"context": retriever, "question": RunnablePassthrough()} 
    | prompt 
    | model 
    | StrOutputParser()
)

This chain can now answer our questions:

chain.invoke(query)

Lite vs. Full Models

The YandexGPT model comes in two flavours: YandexGPT Lite and the full YandexGPT. YandexGPT Lite is used by default. To use the full model, pass the use_lite=False parameter when instantiating the YandexLLM language model class, as shown below.
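
For example (a minimal sketch, reusing the configuration file from the examples above):

from yandex_chain import YandexLLM

# use_lite=False selects the full YandexGPT model instead of YandexGPT Lite
llm = YandexLLM(config="config.json", use_lite=False)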

Testing

This repository contains some basic unit tests. To run them, place a configuration file config.json with your credentials into the tests folder (use config_sample.json as a reference), then run the following from the repository root:

python -m unittest discover -s tests

Credits

License

MIT License