ollama / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.

Home Page: https://ollama.com

langchain-python-rag-privategpt "Cannot submit more than 5,461 embeddings at once"

dcasota opened this issue · comments

What is the issue?

In langchain-python-rag-privategpt, there is a bug 'Cannot submit more than x embeddings at once' which has already been reported in various forms; most recently, see #2572.

With Ollama version 0.1.38, the bundled chromadb version has been updated to 0.47, but the max_batch_size calculation still seems to cause problems; see the current issue at chroma-core/chroma#2181.

In the meantime, is there a workaround for Ollama?
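One possible workaround, independent of the chromadb fix, is to split the documents into batches no larger than the limit reported in the error before handing them to Chroma. The sketch below is a hypothetical approach, not code from ingest.py; BATCH_LIMIT mirrors the 5,461 figure from the error message and would need to match whatever limit your chromadb build enforces.

```python
# Hypothetical batching helper: yield slices of at most batch_size items
# so no single upsert exceeds chromadb's embedding batch limit.
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

BATCH_LIMIT = 5461  # taken from the error message; an assumption, not an API constant

# Usage sketch inside ingest.py (texts, embeddings, persist_directory as defined there):
#   batches = list(batched(texts, BATCH_LIMIT))
#   db = Chroma.from_documents(batches[0], embeddings, persist_directory=persist_directory)
#   for batch in batches[1:]:
#       db.add_documents(batch)
```

With 478,012 chunks this would issue roughly 88 calls instead of one oversized upsert.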

(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$ python ./ingest.py
Creating new vectorstore
Loading documents from source_documents
Loading new documents: 100%|████████████████| 1355/1355 [00:15<00:00, 88.77it/s]
Loaded 80043 new documents from source_documents
Split into 478012 chunks of text (max. 500 tokens each)
Creating embeddings. May take some minutes...
Traceback (most recent call last):
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 161, in <module>
    main()
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/./ingest.py", line 153, in main
    db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 612, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 576, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 222, in add_texts
    raise e
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 208, in add_texts
    self._collection.upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 298, in upsert
    self._client._upsert(
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/api/segment.py", line 290, in _upsert
    self._producer.submit_embeddings(coll["topic"], records_to_submit)
  File "/home/dcaso/ollama/examples/langchain-python-rag-privategpt/.venv/lib/python3.11/site-packages/chromadb/db/mixins/embeddings_queue.py", line 127, in submit_embeddings
    raise ValueError(
ValueError:
                Cannot submit more than 5,461 embeddings at once.
                Please submit your embeddings in batches of size
                5,461 or less.

(.venv) dcaso [ ~/ollama/examples/langchain-python-rag-privategpt ]$

OS

WSL2

GPU

Nvidia

CPU

Intel

Ollama version

0.1.38

Research findings

In ingest.py, in def main(), I've modified the else condition as follows, but it didn't help (same issue).
(screenshot of the modified code not reproduced here)

It may be yet another subcomponent issue. With v0.1.38, langchain version is 0.0.274.

pip3 list | grep langchain
langchain                0.0.274

There is no use of e.g. langchain_community.

As a workaround, I've updated all components. This is not recommended, since it usually creates more side effects and makes issues harder to reproduce.

pip --disable-pip-version-check list --outdated --format=json | python -c "import json, sys; print('\n'.join([x['name'] for x in json.load(sys.stdin)]))" | sudo xargs -n1 pip install -U

Afterwards, the following langchain packages are installed:

pip3 list |grep langchain
langchain                0.1.20
langchain-community      0.0.38
langchain-core           0.1.52
langchain-text-splitters 0.0.2

python ingest.py and python privateGPT.py now run successfully, but the output contains warnings about various deprecated langchain components. Based on the findings so far, a curated requirements.txt would be helpful.
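A starting point for such a curated list, pinning only the langchain packages to the versions observed working above (the remaining dependencies are untested as a canonical set), might look like:

```
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.2
```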

python ingest.py always starts with "Creating new vectorstore"; it does not preserve already loaded documents. Why?
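One plausible explanation (an assumption, not verified against this ingest.py): the script decides between appending and recreating based on whether the persist directory already contains index files, and if that check looks for legacy file names that newer chromadb versions no longer write, the existing store is never detected. A simplified sketch of such a check, with hypothetical file names:

```python
import os

def vectorstore_exists(persist_directory: str) -> bool:
    """Return True if the persist directory holds recognizable Chroma data files.

    File names are assumptions: newer chromadb releases persist a SQLite
    database, older ones wrote parquet files. If only legacy names are
    checked, a store written by a newer chromadb is silently ignored and
    ingestion starts from scratch every run.
    """
    if not os.path.isdir(persist_directory):
        return False
    candidates = ("chroma.sqlite3", "chroma-collections.parquet")
    return any(name in candidates for name in os.listdir(persist_directory))
```

Inspecting which files your chromadb version actually writes into persist_directory, and comparing that against the check in ingest.py, would confirm or rule this out.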

Same issue with v0.1.39. Fortunately the workaround still works, with Nvidia drivers 552 (see #4563).

edited June 5th: Same with v0.1.41.