ollama / ollama

Get up and running with Llama 3.2, Mistral, Gemma 2, and other large language models.

Home page: https://ollama.com


HTTP code 500, {"error":"failed to generate embedding"} when embedding with LangChain

buaa39055211 opened this issue

What is the issue?

Since version 0.1.32 of Ollama, there has been a bug in the embeddings API.
The embedding models I used are `smartcreation/bge-large-zh-v1.5` and `dztech/bge-large-zh:v1.5`, both pulled from Ollama.

```python
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import (
    CSVLoader,
    UnstructuredWordDocumentLoader,
)
from langchain_community.vectorstores import Qdrant
from qdrant_client import QdrantClient

base_url = "http://127.0.0.1:11434"
embeddings = OllamaEmbeddings(model="dztech/bge-large-zh:v1.5", base_url=base_url)

# Map file extensions to the loader class and its keyword arguments
LOADER_MAPPING = {
    ".csv": (CSVLoader, {}),
    # ".docx": (Docx2txtLoader, {}),
    ".doc": (UnstructuredWordDocumentLoader, {"mode": "elements"}),
    ".docx": (UnstructuredWordDocumentLoader, {}),
}

def split(uploaded_file_name):
    # Create embeddings
    print("Creating new vectorstore")
    texts = process_documents(uploaded_file_name)
    print("Creating embeddings. May take some minutes...")
    db = Qdrant.from_documents(
        texts,
        embedding=embeddings,
        url="localhost:7541",
        collection_name=uploaded_file_name,
    )
    print(uploaded_file_name)
    query = "insert"
    docs = db.similarity_search(query)
    print(docs[0].page_content)
```
```
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/vectorstores/qdrant.py", line 2037, in _embed_texts
    embeddings = self.embeddings.embed_documents(list(texts))
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 204, in embed_documents
    embeddings = self._embed(instruction_pairs)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in _embed
    return [self.process_emb_response(prompt) for prompt in iter]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in <listcomp>
    return [self.process_emb_response(prompt) for prompt in iter]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/anaconda3/envs/ag2/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 166, in _process_emb_response
    raise ValueError(
ValueError: Error raised by inference API HTTP code: 500, {"error":"failed to generate embedding"}
```
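One way to narrow this down is to call Ollama's `/api/embeddings` endpoint directly, bypassing LangChain: if a bare request also returns a 500, the failure is in the server (or the model) rather than in `OllamaEmbeddings`. A minimal sketch, reusing the model name and default port from the report above (the server must be running for `embed` to succeed):

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    # Request body for Ollama's /api/embeddings endpoint.
    return {"model": model, "prompt": prompt}

def embed(prompt: str,
          model: str = "dztech/bge-large-zh:v1.5",
          base_url: str = "http://127.0.0.1:11434") -> list:
    """POST one prompt to /api/embeddings and return the embedding vector.

    Raises urllib.error.HTTPError on a 500 response, which is what
    LangChain surfaces as the ValueError in the traceback above.
    """
    req = urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

if __name__ == "__main__":
    # Prints the embedding dimensionality if the server responds.
    print(len(embed("hello world")))
```

The equivalent check from the shell is `curl http://127.0.0.1:11434/api/embeddings -d '{"model": "dztech/bge-large-zh:v1.5", "prompt": "hello"}'`; comparing its result across Ollama 0.1.32 and later versions would isolate the regression.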

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.33-0.1.38