SamurAIGPT / EmbedAI

An app to interact with your documents using the power of GPT, 100% privately, no data leaks

Home Page: https://www.thesamur.ai/?utm_source=github&utm_medium=link&utm_campaign=github_privategpt


Implementing LangChain CustomLLM Class for use with other Models

zfreeman32 opened this issue

The interface currently seems to be compatible only with GPT4All or LlamaCpp models. I have fine-tuned a Vicuna-7B base model and want to use it in the interface. How do I integrate a custom LLM into privategpt.py?
LangChain claims to support custom models via the class below, but how do I implement their CustomLLM class in this particular interface?

from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    """Toy custom LLM: echoes the first n characters of the prompt."""

    n: int

    @property
    def _llm_type(self) -> str:
        # Identifier LangChain uses for this model type in logs.
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # A real implementation would run model inference here;
        # this example just truncates the prompt to n characters.
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}

llm = CustomLLM(n=10)
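
For concreteness, here is a minimal sketch of what such a wrapper might look like for a fine-tuned Vicuna-7B, assuming the checkpoint loads through the Hugging Face transformers "text-generation" pipeline. VicunaLLM, pipe, max_new_tokens, and the model path are all illustrative names, not part of EmbedAI or LangChain:

from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from transformers import pipeline

class VicunaLLM(LLM):
    # The transformers pipeline is passed in as a field so the heavyweight
    # model is loaded once, outside the class.
    pipe: Any  # a transformers "text-generation" pipeline
    max_new_tokens: int = 256

    @property
    def _llm_type(self) -> str:
        return "vicuna-custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Generate and return only the newly produced text, not the prompt.
        out = self.pipe(
            prompt,
            max_new_tokens=self.max_new_tokens,
            return_full_text=False,
        )
        return out[0]["generated_text"]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"max_new_tokens": self.max_new_tokens}

# Load the fine-tuned checkpoint once (path is a placeholder) and wrap it.
pipe = pipeline("text-generation", model="path/to/your/vicuna-7b")
llm = VicunaLLM(pipe=pipe)

If privategpt.py follows upstream privateGPT, the GPT4All/LlamaCpp object it builds in its model_type branch is simply handed to RetrievalQA.from_chain_type(llm=llm, ...), so substituting the wrapped Vicuna instance at that point should be the only change the chain needs; handling stop tokens inside _call would be the main refinement to add.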