FauxPilot - an open-source alternative to GitHub Copilot server

Will the OpenAI Codex library work with FauxPilot?

smith-co opened this issue

Thanks for this fantastic work. I plan to do a quick comparison of code completion between FauxPilot and Copilot.

I will be making calls with the following API invocation:

        response = openai.Completion.create(
            model="code-davinci-002",
            prompt=input_prompt,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=1,
            n=number_of_suggestions,
            frequency_penalty=frequency_penalty,
            presence_penalty=0,
            stop="###",
        )
        # Collect the text of each suggestion; `result` ends up holding the
        # last one, or '' if the response contains no choices.
        suggestions = [choice['text'] for choice in response.get('choices', [])]
        result = suggestions[-1] if suggestions else ''

        # are these metrics present?
        response_completion_tokens = response["usage"]["completion_tokens"]
        response_prompt_tokens = response["usage"]["prompt_tokens"]
        response_total_tokens = response["usage"]["total_tokens"]

Can you please let me know whether this API invocation would work with FauxPilot?

The usage metrics are computed and returned by FauxPilot, so yes, this should work. One thing to be aware of is that code-davinci-002 supports a context length of 8,000 tokens, whereas the CodeGen models only support up to 2,048, so you won't be able to directly compare prompts longer than that.
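
In case it helps with the comparison, here is a rough sketch of pointing the openai client at a local FauxPilot instance and trimming prompts to fit the CodeGen context window. The host/port, the tokenizer checkpoint, and the trim_prompt helper are illustrative assumptions, not something FauxPilot ships:

    import openai
    from transformers import AutoTokenizer

    # Point the OpenAI client at the FauxPilot server instead of api.openai.com.
    # Adjust the host/port to wherever your FauxPilot instance is listening.
    openai.api_key = "dummy"  # placeholder; no real OpenAI key is used
    openai.api_base = "http://localhost:5000/v1"

    # Use a CodeGen tokenizer to count prompt tokens (checkpoint name is an
    # example; pick the one matching the model you deployed).
    tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")

    CONTEXT_LENGTH = 2048  # maximum context supported by the CodeGen models

    def trim_prompt(prompt: str, max_tokens: int) -> str:
        """Drop tokens from the front of the prompt so prompt + completion fits."""
        budget = CONTEXT_LENGTH - max_tokens
        if budget <= 0:
            raise ValueError("max_tokens leaves no room for the prompt")
        ids = tokenizer.encode(prompt)
        if len(ids) <= budget:
            return prompt
        # Keep the tail of the prompt: the most recent code is usually the most
        # relevant context for a completion.
        return tokenizer.decode(ids[-budget:])

With the prompt trimmed this way, the openai.Completion.create call above can be pointed at either backend without changes.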

I'll close this for now but let me know if you have other questions!