run-llama / create_llama_projects


APITimeoutError

yigit353 opened this issue · comments

While indexing files, I get the error below. What do you think could be done to fix this?

/.cache/pypoetry/virtualenvs/llamaindex-fastapi-streaming-2J97Ot5s-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 887, in _request
    raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.

When I restart it, it's back to normal. But after a while (after about 20 steps) the request can time out again.

Maybe cache the intermediate results and restart from the cache when a timeout happens. Have you come across this?
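The resume-from-cache idea above can be sketched with nothing but the standard library: key each chunk's embedding by a hash of its text and persist it to disk, so a rerun after a timeout skips everything already computed. This is a hypothetical sketch, not the project's actual indexing code; `embed_fn` stands in for whatever OpenAI call is timing out.

```python
import hashlib
import json
from pathlib import Path

def cached_embed(text, embed_fn, cache_dir=Path(".embed_cache")):
    """Return a cached embedding for `text` if one exists on disk;
    otherwise compute it with embed_fn and store it for later runs."""
    cache_dir.mkdir(exist_ok=True)
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    path = cache_dir / f"{key}.json"
    if path.exists():
        return json.loads(path.read_text())
    vector = embed_fn(text)  # the expensive, timeout-prone call
    path.write_text(json.dumps(vector))
    return vector
```

On a restart, every chunk that made it to disk before the timeout is served from the cache, so only the remaining chunks hit the API.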

Or is this related to something else?
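Whatever the root cause, one low-effort mitigation is to retry the failing call with exponential backoff. A minimal stdlib sketch (the wrapped function is a hypothetical stand-in for the embedding request; the attempt count and delays are arbitrary):

```python
import time

def with_retries(fn, *, attempts=5, base_delay=1.0, retry_on=(TimeoutError,)):
    """Call fn(), retrying with exponential backoff on timeout-like errors.

    Raises the last error if every attempt fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

For what it's worth, the openai Python client (v1+) also accepts `timeout` and `max_retries` arguments at construction time, so raising those may be enough without any wrapper.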

hm, I haven't personally run into this; I can stress test it a bit more

I tried it on another computer. Since it's API-related, the error persisted.

It always happens at 20/105 while indexing the first file.

Maybe it's a very large chunk? But why wouldn't anyone else have this problem? I just enabled billing today. Do I have an extra rate limit or something?
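If an oversized chunk is the suspect, it's cheap to measure chunk lengths before sending them. A hypothetical check (assuming the chunks are plain strings; the character limit here is arbitrary, not an OpenAI limit):

```python
def oversized(chunks, limit=8000):
    """Return (index, length) pairs for chunks longer than `limit` characters,
    so suspiciously large chunks can be inspected before embedding."""
    return [(i, len(c)) for i, c in enumerate(chunks) if len(c) > limit]
```

Running this over the first file's chunks would show whether chunk 20 stands out.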

I removed the first file from indexing and got the following warning for the first time:

 Creating hierarchical node tree from tesla 10k documents
~/.conda/envs/llama/lib/python3.11/site-packages/unstructured/documents/html.py:498: FutureWarning: The behavior of this method will change in future versions. Use specific 'len(elem)' or 'elem is not None' test instead.
  rows = body.findall("tr") if body else []

When I indexed only the 2020 file, it took a long time but did not time out and worked as intended.