EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.

Home Page: https://www.eleuther.ai


Add support for Azure OpenAI deployed models

bcarvalho-via opened this issue

Apparently, Azure OpenAI deployed models can't be reached with the openai.OpenAI client; instead, they require the openai.AzureOpenAI client, which has a slightly different interface (see the link for further details).
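The interface difference boils down to the constructor arguments each client expects. A minimal sketch of the two shapes, with the argument sets expressed as plain dicts (the helper function here is hypothetical; the env-var names match the openai SDK's documented defaults, but verify them against your deployment):

```python
import os


def client_kwargs(azure: bool) -> dict:
    """Return the constructor arguments each openai client variant expects.

    Hypothetical helper for illustration only -- not part of the harness.
    """
    if azure:
        # openai.AzureOpenAI reads AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT
        # from the environment by default, and additionally requires an
        # api_version; the deployment name plays the role of the model name.
        return {
            "azure_endpoint": os.getenv(
                "AZURE_OPENAI_ENDPOINT", "https://example.openai.azure.com"
            ),
            "api_version": "2023-12-01-preview",
        }
    # openai.OpenAI only needs an api_key (read from OPENAI_API_KEY by
    # default) and optionally a base_url.
    return {"base_url": "https://api.openai.com/v1"}
```

So a subclass that only swaps the client object (as below) is enough to bridge the two.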

I was able to do what I needed by adding this rough solution to `lm_eval/models/openai_completions.py`:

@register_model("azure-openai-chat-completions")
class AzureOpenaiChatCompletionsLM(OpenaiChatCompletionsLM):
    def __init__(
        self,
        model: str = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
        base_url: str = os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_version: str = "2023-12-01-preview",
        truncate: bool = False,
        **kwargs,
    ) -> None:
        super().__init__()
        try:
            import openai  # noqa: E401
        except ModuleNotFoundError:
            raise Exception(
                "attempted to use 'openai' LM type, but package `openai` or `tiktoken` is not installed. "
                "Please install these via `pip install lm-eval[openai]` or `pip install -e .[openai]`",
            )
        self.model = model
        self.base_url = base_url
        self.truncate = truncate
        # Replace the parent's openai.OpenAI client with the Azure variant,
        # which authenticates against the deployment endpoint instead.
        self.client = openai.AzureOpenAI(
            azure_endpoint=base_url, api_version=api_version
        )

Therefore, I believe a more general, better-thought-out solution wouldn't require much effort.

Thanks Bruno. Which task were you evaluating? I get this error when I run it with lambada_openai:
NotImplementedError: No support for logits.
I've heard that the Azure code slightly lags the OpenAI code, so it may not be a surprise that something hasn't been implemented yet.
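That particular error likely isn't Azure-specific: lambada_openai is a loglikelihood-scored task, and chat-completions backends in the harness can't score prompts because the chat API doesn't expose per-token logprobs over the input. A simplified, hypothetical sketch of that split (the class and method bodies here are illustrative, not the harness's actual code):

```python
class ChatCompletionsLMSketch:
    """Simplified stand-in for a chat-completions LM backend."""

    def generate_until(self, requests):
        # Free-form generation works: the chat endpoint returns text.
        return ["<generated text>" for _ in requests]

    def loglikelihood(self, requests):
        # The chat completions endpoint does not return logprobs for the
        # prompt tokens, so scoring tasks like lambada_openai cannot run.
        raise NotImplementedError("No support for logits.")
```

So any loglikelihood task would raise the same error even against the non-Azure chat backend; generation-style tasks should still work.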