How to set max tokens?
eallion opened this issue
Bug description
```
│
◇ Changes analyzed
│
└ ✖ OpenAI API Error: 400 - Bad Request
{
  "error": {
    "message": "This model's maximum context length is 4097 tokens. However, your messages resulted in 35569 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}
```
aicommits version
1.11.0
Environment
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 13th Gen Intel(R) Core(TM) i5-13490F
Memory: 20.50 GB / 31.83 GB
Binaries:
Node: 20.10.0 - ~\scoop\apps\nodejs-lts\current\node.EXE
npm: 10.2.3 - ~\scoop\apps\nodejs-lts\current\npm.CMD
pnpm: 8.12.0 - ~\scoop\shims\pnpm.EXE
Can you contribute a fix?
- I’m interested in opening a pull request for this issue.
Please use a model with a longer context length. The default model is currently gpt-3.5-turbo, which has a 4k-token context window, so set gpt-3.5-turbo-16k in your config as follows:

```sh
$ aicommits config set model=gpt-3.5-turbo-16k
```
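To confirm the change took effect, you can read the value back (assuming your aicommits version supports `config get`; the exact output format below is illustrative):

```sh
$ aicommits config get model
model=gpt-3.5-turbo-16k
```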
The max_tokens option only affects generation; it cannot limit the input tokens.

https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens

> The maximum number of tokens that can be generated in the chat completion.
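To make the distinction concrete, here is a minimal sketch using the official openai Node SDK. The `diff` variable and the prompt wording are illustrative, not aicommits' actual code: the point is that max_tokens caps only the completion, while the prompt still has to fit inside the model's total context window.

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical prompt built from a staged diff; an oversized diff here is
// what triggers the 400 above, no matter how small max_tokens is.
const diff = "diff --git a/src/index.ts b/src/index.ts\n...";

const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo-16k", // 16k window shared by input + output tokens
  max_tokens: 200, // caps *generated* tokens only; does not shrink the input
  messages: [
    { role: "user", content: `Write a commit message for this diff:\n${diff}` },
  ],
});

console.log(completion.choices[0].message.content);
```

So the only ways to stay under the window are to shrink the input (e.g. stage fewer files) or to pick a model with a larger context window, as above.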
Thanks!