joonspk-research / generative_agents

Generative Agents: Interactive Simulacra of Human Behavior

Can't run simulation "TOKEN LIMIT EXCEEDED" error

justtiberio opened this issue · comments

I installed everything correctly and it all seems to work, but when I try to run it, even with the step count set to 1, it just prints "TOKEN LIMIT EXCEEDED". I'm not sure where the problem is. Did OpenAI change something that broke the code? I have a paid API key on usage tier 3, so it should work. I tracked the API key the code is using, and it seems it's never calling GPT-3 at all; it only makes calls to "Text-embedding-ada-002-v2". If the OpenAI usage tracking is accurate, it made no calls to the GPT completion API.

Has anyone been able to run this recently? Does anyone have any idea what could be causing this error?

Edit: Also, it always ends with an error before prompting for an option again:

Today is February 13, 2023. From 00:00AM ~ 00:00AM, Isabella Rodriguez is planning on TOKEN LIMIT EXCEEDED.
In 5 min increments, list the subtasks Isabella does when Isabella is TOKEN LIMIT EXCEEDED from  (total duration in minutes 1440): 
1) Isabella is
TOKEN LIMIT EXCEEDED
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==- 
TOODOOOOOO
TOKEN LIMIT EXCEEDED
-==- -==- -==- 
Traceback (most recent call last):
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 468, in open_server
    rs.start_server(int_count)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/reverie.py", line 379, in start_server
    next_tile, pronunciatio, description = persona.move(
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 222, in move
    plan = self.plan(maze, personas, new_day, retrieved)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/persona.py", line 148, in plan
    return plan(self, maze, personas, new_day, retrieved)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 959, in plan
    _determine_action(persona, maze)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 573, in _determine_action
    generate_task_decomp(persona, act_desp, act_dura))
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/cognitive_modules/plan.py", line 164, in generate_task_decomp
    return run_gpt_prompt_task_decomp(persona, task, duration)[0]
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 439, in run_gpt_prompt_task_decomp
    output = safe_generate_response(prompt, gpt_param, 5, get_fail_safe(),
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/gpt_structure.py", line 268, in safe_generate_response
    return func_clean_up(curr_gpt_response, prompt=prompt)
  File "/Users/tiberio/Documents/Cesar School (Mac)/TCC/generative_agents/reverie/backend_server/persona/prompt_template/run_gpt_prompt.py", line 378, in __func_clean_up
    duration = int(k[1].split(",")[0].strip())
IndexError: list index out of range
Error.
Enter option: 

Mine runs for a little while. I will re-run it in the next few days and can show you how far mine gets. It seems less like a bug and more like something tied to OpenAI limits I could change. But I'm glad to hear I'm not the only one with this issue, because I thought there might have been something wrong with my code.

I have the exact same issue

I solved the problem. I will list the solution here and make a PR later (although the author seems to be inactive nowadays, so PRs may not get reviewed).

The issue is simple: on January 4, 2024, OpenAI shut down all models with names like "text-davinci-00x" (x = 1, 2, 3). However, the model names are hard coded all over Generative Agents instead of living in a single setup file, so you need to search the entire project for the keyword "davinci" and replace every occurrence with "gpt-3.5-turbo-instruct" (see: https://platform.openai.com/docs/deprecations).

A note on why this error is hard to spot: the original code assumes that the only possible failure is exceeding the (hard coded) token bounds; it did not anticipate the models being retired. I found the real error by changing the try/except structure to print the exception message directly instead of the hard coded "TOKEN LIMIT EXCEEDED" message.
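
For anyone who wants to see the real error on their own machine, here is a minimal sketch of that debugging change in gpt_structure.py. It is written from memory of the repo and assumes the legacy openai<1.0 Completion API the project uses, so your local copy may differ slightly:

import openai

def GPT_request(prompt, gpt_parameter):
  """Call the legacy Completion endpoint and surface the real exception."""
  try:
    response = openai.Completion.create(
        model=gpt_parameter["engine"],
        prompt=prompt,
        temperature=gpt_parameter["temperature"],
        max_tokens=gpt_parameter["max_tokens"],
        top_p=gpt_parameter["top_p"],
        frequency_penalty=gpt_parameter["frequency_penalty"],
        presence_penalty=gpt_parameter["presence_penalty"],
        stream=gpt_parameter["stream"],
        stop=gpt_parameter["stop"])
    return response.choices[0].text
  except Exception as e:
    # The original bare except printed "TOKEN LIMIT EXCEEDED" for every
    # failure; printing the exception shows the actual cause, e.g. a
    # "model has been deprecated" error returned by the API.
    print("GPT_request error:", repr(e))
    return "TOKEN LIMIT EXCEEDED"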

I have changed all of the "text-davinci-00x" models to "gpt-3.5-turbo-instruct", but I still receive the TOKEN LIMIT EXCEEDED message.
Why is that?