mgallo / openai.ex

Community-maintained OpenAI API wrapper written in Elixir.

Bug: http_options configuration is not used

APB9785 opened this issue · comments

I'm using OpenAI chat completion with Stream.

In runtime.exs I have the config set as documented:

if config_env() in [:prod, :dev] do
  config :openai,
    # find it at https://platform.openai.com/account/api-keys
    api_key: System.get_env("OPENAI_API_KEY"),
    # find it at https://platform.openai.com/account/org-settings under "Organization ID"
    organization_key: System.get_env("OPENAI_ORG_KEY"),
    # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
    http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]
end

And then running the example from the documentation:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

But nothing happens: the process hangs indefinitely with no inspect output.

When creating the stream with inline config, it works OK:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true
  ],
  %OpenAI.Config{http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]}
)

But I would prefer not to use inline config, and instead use application config as shown in the documentation.
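
In the meantime, a possible workaround (just a sketch; MyApp.OpenAIClient is a hypothetical wrapper of your own, not part of the library) would be to keep the static options in application config and merge stream_to: self() in at call time, so the pid belongs to the process that actually consumes the stream:

defmodule MyApp.OpenAIClient do
  # Hypothetical helper: read http_options from application config and add
  # stream_to: self() at call time, so streamed chunks go to the calling
  # process instead of the pid captured when runtime.exs was evaluated.
  def stream_chat(params) do
    base_opts = Application.get_env(:openai, :http_options, [])

    config = %OpenAI.Config{
      http_options: Keyword.merge(base_opts, stream_to: self(), async: :once)
    }

    OpenAI.chat_completion(params, config)
  end
end

Called from the same process that pipes the result into Stream.each/2 and Stream.run/0, self() then resolves to the right mailbox.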

I'm encountering the same problem.
I believe the stream_to config is causing the issue.
Currently, self() captures the pid of the process that evaluates the configuration at application startup. However, what we actually need is for stream_to to point at the pid of the process calling chat_completion and consuming the stream.
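
To illustrate (the pids below are made up): the value stored under http_options is whatever self() returned when runtime.exs was evaluated, which you can check at runtime:

# Illustration only; the pids shown are hypothetical.
Application.get_env(:openai, :http_options)[:stream_to]
#=> #PID<0.95.0>    (the process that evaluated runtime.exs at boot)

self()
#=> #PID<0.412.0>   (the process calling chat_completion and consuming the stream)

HTTPoison sends its async messages to the first pid, so the process waiting on the stream never receives anything.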

Sorry for the super late response, I don't have much time to follow the library these days. By the way, what @zengbo says is correct: I'm going to push a commit to make the function use self() by default, so it is easier to use the library without passing an inline configuration every time. Of course, it will still be possible to change the stream_to parameter from the inline configuration, so if you are already using it, it will continue to work.
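
Something along these lines, just to sketch the idea (this is not the actual commit, and the module/function names are made up):

defmodule StreamToDefaultSketch do
  # Rough sketch only, not the library's actual code: default stream_to to
  # the process making the request unless the configuration already sets it.
  def put_default(http_options) do
    Keyword.put_new(http_options, :stream_to, self())
  end
end

# StreamToDefaultSketch.put_default(recv_timeout: :infinity)
#=> [stream_to: #PID<...>, recv_timeout: :infinity]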