princeton-nlp / tree-of-thought-llm

[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models

Home Page: https://arxiv.org/abs/2305.10601

Errors when running sh scripts/game24/bfs.sh and when directly running run.py

Weroupolmitneir opened this issue · comments

Working with the release version from GitHub, on Windows 10, with all Python requirements installed (plus pandas).

When running sh scripts/game24/bfs.sh from a bash terminal, the process gets stuck in the OpenAI library code and fails to make an API request. Trace:

$ sh scripts/game24/bfs.sh
Namespace(backend='gpt-3.5-turbo', temperature=0.7, task='game24', task_file_path='24.csv', task_start_index=900, task_end_index=1000, naive_run=False, prompt_sample=None, method_generate='propose', method_evaluate='value', method_select='greedy', n_generate_sample=1, n_evaluate_sample=3, n_select_sample=5)
functools.partial(<function gpt at 0x0000021440E7E200>, model='gpt-3.5-turbo', temperature=0.7)
-- new_ys --: ('6 * 5 = 30 (left: 4 6 30)\n', '10 - 6 = 4 (left: 4 5 4)\n', '10 - 4 = 6 (left: 5 6 10)\n', '6 / 4 = 1.5 (left: 5 1.5 10)\n', '10 - 5 = 5 (left: 4 6 5)\n', '4 + 5 = 9 (left: 6 9 10)\n', '5 + 6 = 11 (left: 4 11 10)\n', '6 - 4 = 2 (left: 2 6 10)\n')
-- sol values --: (22.0, 21.001, 3.0, 3.0, 3.0, 1.002, 1.002, 1.002)
-- choices --: ['6 * 5 = 30 (left: 4 6 30)\n', '10 - 6 = 4 (left: 4 5 4)\n', '10 - 4 = 6 (left: 5 6 10)\n', '6 / 4 = 1.5 (left: 5 1.5 10)\n', '10 - 5 = 5 (left: 4 6 5)\n']

Traceback (most recent call last):
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 160, in <module>
run(args)
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 113, in run
ys, info = solve(args, task, i)
^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 70, in solve
values = get_values(task, x, new_ys, args.n_evaluate_sample)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 27, in get_values
value = get_value(task, x, y, n_evaluate_sample, cache_value=cache_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 14, in get_value
value_outputs = gpt(value_prompt, n=n_evaluate_sample, stop=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\models.py", line 24, in gpt
return chatgpt(messages, model=model, temperature=temperature, max_tokens=max_tokens, n=n, stop=stop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\models.py", line 32, in chatgpt
res = completions_with_backoff(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens, n=cnt, stop=stop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\backoff\_sync.py", line 105, in retry
ret = target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\models.py", line 20, in completions_with_backoff
return openai.ChatCompletion.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 220, in request
result = self.request_raw(
^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 520, in request_raw
result = _thread_context.session.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\urllib3\connection.py", line 454, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\http\client.py", line 1375, in getresponse
response.begin()
File "C:\Python311\Lib\http\client.py", line 318, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\http\client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\ssl.py", line 1278, in recv_into
return self.read(nbytes, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\ssl.py", line 1134, in read
return self._sslobj.read(len, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When running py run.py --task=game24 --task_file_path=24.csv ... with various choices of the n_* parameters, the process successfully makes API requests, but fails in the solve method (called from run.py line 113) at the select_ids lookup. Trace:

$ py run.py --task=game24 --task_file_path=24.csv --prompt_sample=cot --method_generate=propose --method_evaluate=value --n_generate_sample=1 --n_evaluate_sample=1 --n_select_sample=1
Namespace(backend='gpt-3.5-turbo', temperature=0.7, task='game24', task_file_path='24.csv', task_start_index=900, task_end_index=1000, naive_run=False, prompt_sample='cot', method_generate='propose', method_evaluate='value', method_select=None, n_generate_sample=1, n_evaluate_sample=1, n_select_sample=1)
functools.partial(<function gpt at 0x0000021DAA4EE0C0>, model='gpt-3.5-turbo', temperature=0.7)
Traceback (most recent call last):
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 160, in <module>
run(args)
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 113, in run
ys, info = solve(args, task, i)
^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\tree-of-thought-llm-publish\run.py", line 78, in solve
select_new_ys = [new_ys[select_id] for select_id in select_ids]
^^^^^^^^^^
UnboundLocalError: cannot access local variable 'select_ids' where it is not associated with a value

Hi, for the first case, you can try to remove

@backoff.on_exception(backoff.expo, openai.error.OpenAIError)

from models.py and see what kind of OpenAIError you get.
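Why removing the decorator helps: backoff.on_exception(backoff.expo, openai.error.OpenAIError) silently retries every OpenAIError (auth, rate limit, connection), which hides the root cause while debugging. A toy illustration of that masking behavior (no OpenAI dependency; all names here are hypothetical stand-ins, not the repo's code):

```python
import itertools

calls = itertools.count()

def flaky():
    # Fails on the first two calls, then succeeds. With a blanket retry
    # wrapper the caller never sees the first two errors.
    if next(calls) < 2:
        raise RuntimeError("transient API error")
    return "ok"

def retry(fn, attempts=5):
    # Minimal stand-in for backoff.on_exception: swallow the error and retry.
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError:
            continue
    raise RuntimeError("retries exhausted")

result = retry(flaky)  # the two transient errors are invisible to the caller
```

Stripping the wrapper (here, calling flaky() directly) surfaces the real error immediately, which is the point of the suggestion above.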

For the second case, you got the error because you did not specify --method_select greedy. It'd be safer to stick to the commands in scripts/.
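The failure mode is that select_ids is only assigned inside branches keyed on args.method_select, so with method_select=None no branch runs and the later read raises UnboundLocalError. A minimal reproduction (a hypothetical simplification of solve(), not the repo's exact code):

```python
def select(method_select, values, n_select_sample=1):
    ids = list(range(len(values)))
    if method_select == 'greedy':
        # keep the n highest-valued candidates
        select_ids = sorted(ids, key=lambda i: values[i],
                            reverse=True)[:n_select_sample]
    elif method_select == 'sample':
        import random
        select_ids = random.choices(ids, k=n_select_sample)
    # with method_select=None neither branch assigns select_ids,
    # so the next line raises UnboundLocalError
    return select_ids

select('greedy', [22.0, 21.001, 3.0])  # -> [0]
# select(None, [22.0, 21.001, 3.0])    # UnboundLocalError
```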

I'll close this for now; feel free to open new issues.