`invalid_request_error`
0x4007 opened this issue · comments
```
m1:ubiquity-dollar nv$ auto-commit
Loading Data...
π Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
π Analyzing Codebase...m1:ubiquity-dollar nv$
m1:ubiquity-dollar nv$ auto-commit --verbose
There are no staged files to commit.
Try running `git add` to stage some files.
Loading Data...
π Analyzing Codebase...[2022-11-01T08:55:12Z DEBUG openai_api] Request: Request { method: Post, url: Url { scheme: "https", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("api.openai.com")), port: None, path: "/v1/engines/code-davinci-002/completions", query: None, fragment: None }, headers: {"content-type": "application/json"}, version: None, body: Body { reader: "<hidden>", length: Some(2686), bytes_read: 0 }, local_addr: None, peer_addr: None, ext: Extensions, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), has_trailers: false }
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::dns] resolving host="api.openai.com"
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::http] connecting to 52.152.96.252:443
[2022-11-01T08:55:12Z DEBUG hyper::client::connect::http] connected to 52.152.96.252:443
π Analyzing Codebase...[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] flushed 209 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] flushed 2686 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] read 439 bytes
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::io] parsed 7 headers
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::conn] incoming body is content-length (158 bytes)
[2022-11-01T08:55:13Z DEBUG hyper::proto::h1::conn] incoming body completed
[2022-11-01T08:55:13Z DEBUG hyper::client::pool] pooling idle connection for ("https", api.openai.com)
[2022-11-01T08:55:13Z DEBUG openai_api] Response: Response { response: Response { status: NotFound, headers: {"connection": "keep-alive", "strict-transport-security": "max-age=15724800; includeSubDomains", "content-length": "158", "date": "Tue, 01 Nov 2022 08:55:13 GMT", "content-type": "application/json; charset=utf-8", "vary": "Origin", "x-request-id": "7dd1ef5d977c0dae864d89fbcbeeaa37"}, version: Some(Http1_1), has_trailers: false, trailers_sender: Some(Sender { .. }), trailers_receiver: Some(Receiver { .. }), upgrade_sender: Some(Sender { .. }), upgrade_receiver: Some(Receiver { .. }), has_upgrade: false, body: Body { reader: "<hidden>", length: Some(158), bytes_read: 0 }, ext: Extensions, local_addr: None, peer_addr: None } }
thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
π Analyzing Codebase...m1:ubiquity-dollar nv$
```
With `RUST_BACKTRACE=full`:
```
m1:ubiquity-dollar nv$ auto-commit
Loading Data...
π Analyzing Codebase...thread 'main' panicked at 'Couldn't complete prompt.: Api(ErrorMessage { message: "That model does not exist", error_type: "invalid_request_error" })', src/main.rs:137:10
stack backtrace:
0: 0x100803cb8 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h1543c132bc4e188c
1: 0x10082077c - core::fmt::write::hda8e8eb84b49cbfc
2: 0x1007fe498 - std::io::Write::write_fmt::hb84c8996aec7120c
3: 0x1008054c4 - std::panicking::default_hook::{{closure}}::hdf06011cb093de6a
4: 0x100805228 - std::panicking::default_hook::hd7ceb942fff7b170
5: 0x10080595c - std::panicking::rust_panic_with_hook::h053d4067a63a6fcb
6: 0x100805890 - std::panicking::begin_panic_handler::{{closure}}::hea9e6c546a23e8ff
7: 0x100804194 - std::sys_common::backtrace::__rust_end_short_backtrace::hd64e012cf32134c6
8: 0x1008055e8 - _rust_begin_unwind
9: 0x10083422c - core::panicking::panic_fmt::hbfde5533e1c0592e
10: 0x100834318 - core::result::unwrap_failed::h68832e989a8867c1
11: 0x1005f5544 - auto_commit::main::{{closure}}::hde9ffac744f15d7c
12: 0x1005d8a78 - std::thread::local::LocalKey<T>::with::h8299edffc48b47fb
13: 0x1005e1498 - tokio::runtime::enter::Enter::block_on::h8cd42799fe53fdaa
14: 0x1005eb9d4 - tokio::runtime::context::enter::hd622a04884cced71
15: 0x1005dc83c - tokio::runtime::handle::Handle::enter::hb597e6521843e9f1
16: 0x1005e26a4 - auto_commit::main::h54076be9b6549311
17: 0x1005fd8c8 - std::sys_common::backtrace::__rust_begin_short_backtrace::h27a8d6ce065a0fc5
18: 0x1005e84cc - std::rt::lang_start::{{closure}}::h901a2890ae649abe
19: 0x1007f976c - std::rt::lang_start_internal::hef2161f9571a51d7
20: 0x1005e2768 - _main
π Analyzing Codebase...m1:ubiquity-dollar nv$
```
Looks like only GPT-3 is public and the others require beta invitations. I think this could also be added to the README.
I've been trying to access the engine named code-davinci-002, which is a private-beta engine, so without beta access it's not reachable. It seems only the GPT-3 models are available for public usage. We need to join the OpenAI Codex Private Beta Waitlist in order to access the Codex models through the API.
This is already in the README! See
Line 25 in 64cc176
Though I agree a better error message could be presented.
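One hedged way the panic at `src/main.rs:137` could be turned into a friendlier message is sketched below. The `Error` and `ErrorMessage` types here are simplified stand-ins modeled on the panic output above, not the actual definitions from the openai_api crate, and `explain` is a hypothetical helper:

```rust
// Hypothetical sketch: instead of unwrapping the completion result with
// .expect("Couldn't complete prompt."), match on the API error and print a hint.
// These types are stand-ins mirroring the error shown in the logs above.
#[derive(Debug)]
struct ErrorMessage {
    message: String,
    error_type: String,
}

#[derive(Debug)]
enum Error {
    Api(ErrorMessage),
}

// Turn an API error into a human-readable explanation with a suggested fix.
fn explain(err: &Error) -> String {
    match err {
        Error::Api(e) if e.error_type == "invalid_request_error" => format!(
            "OpenAI rejected the request ({}). code-davinci-002 is a private-beta \
             Codex model; join the Codex waitlist or use a public GPT-3 model.",
            e.message
        ),
        Error::Api(e) => format!("OpenAI API error: {}", e.message),
    }
}

fn main() {
    // Simulate the exact failure seen in the logs above.
    let err = Error::Api(ErrorMessage {
        message: "That model does not exist".into(),
        error_type: "invalid_request_error".into(),
    });
    eprintln!("{}", explain(&err));
}
```

With something like this, the user would see an actionable hint instead of a panic and backtrace.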
> This is already in the README! See
> Line 25 in 64cc176
I didn't realize that they had one public model and other private-beta models. I signed up for the beta some time ago and assumed that gave me access to all of the OpenAI APIs.