langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications

Home Page: https://python.langchain.com

Incorrect deprecation instructions given for ChatOpenAI class

lineality opened this issue · comments

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain_openai import ChatOpenAI
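
For scale, a minimal timing sketch around this import (standard library only; it assumes the reported lag is attributable to the import itself, which the issue does not state explicitly):

import time

start = time.perf_counter()
from langchain_openai import ChatOpenAI  # the import path reported to cause the lag
elapsed = time.perf_counter() - start

# The report cites ~80-90 seconds of heavy CPU usage when the problem reproduces.
print(f"from langchain_openai import ChatOpenAI took {elapsed:.1f}s")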

Error Message and Stack Trace (if applicable)

There is no error message; the problem is roughly 80 seconds of extreme CPU usage and lag.

Description

An incorrect message may be printed to terminal:
"langchain_core/_api/deprecation.py:119: LangChainDeprecationWarning:
The class ChatOpenAI was deprecated in LangChain 0.0.10 and will be
removed in 0.3.0. An updated version of the class exists in the
langchain-openai package and should be used instead. To use it
run pip install -U langchain-openai
and import as from langchain_openai import ChatOpenAI."

when there is no actual problem, only the erroneous deprecation warning. The line
actually used in the code is:
from langchain_community.chat_models import ChatOpenAI

The warning says "from LangChain import ChatOpenAI" is deprecated, but that is
a non sequitur, as "from LangChain import ChatOpenAI" is NOT BEING USED.
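
One way to pin down which line actually triggers the warning (a debugging sketch using only the standard library, not part of the original report) is to escalate warnings to errors and read the traceback:

import warnings

# Turn every warning into an exception so the traceback shows exactly where
# the LangChainDeprecationWarning is raised.
warnings.simplefilter("error")

from langchain_community.chat_models import ChatOpenAI

# If the warning is emitted at instantiation rather than import time, constructing
# the class will surface it instead (the API key below is a placeholder):
# llm = ChatOpenAI(openai_api_key="sk-placeholder")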

And the suggestion given in the warning is itself broken. Do NOT use this:
# from langchain_openai import ChatOpenAI  # do NOT use this, it is broken or wrong or both
This causes a massive ramp-up in CPU usage for ~80-90 seconds before the process completes.
It may or may not happen, seemingly at random, for the exact same task.

The working solution is:
from langchain_community.chat_models import ChatOpenAI  # correct source

"langchain_community" is a correct source.

System Info

System Details Report


Report details

  • Date generated: 2024-05-15 17:34:49

Hardware Information:

  • Hardware Model: Dell Inc. Inspiron 3501
  • Memory: 12.0 GiB
  • Processor: 11th Gen Intel® Core™ i5-1135G7 × 8
  • Graphics: Intel® Xe Graphics (TGL GT2)
  • Disk Capacity: 256.1 GB

Software Information:

  • Firmware Version: 1.29.0
  • OS Name: Fedora Linux 40 (Workstation Edition)
  • OS Build: (null)
  • OS Type: 64-bit
  • GNOME Version: 46
  • Windowing System: Wayland
  • Kernel Version: Linux 6.8.9-300.fc40.x86_64

pip freeze
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
asttokens==2.4.1
attrs==23.2.0
blinker==1.8.2
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.6
decorator==5.1.1
distro==1.9.0
dnspython==2.6.1
elevenlabs==0.2.27
executing==2.0.1
filelock==3.14.0
Flask==2.3.2
Flask-Cors==4.0.0
Flask-JWT-Extended==4.5.2
frozenlist==1.4.1
fsspec==2024.5.0
greenlet==3.0.3
gunicorn==21.2.0
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.23.0
idna==3.7
ipython==8.24.0
itsdangerous==2.2.0
jedi==0.19.1
Jinja2==3.1.4
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-openai==0.1.7
langchain-text-splitters==0.0.1
langsmith==0.1.58
lxml==5.2.2
MarkupSafe==2.1.5
marshmallow==3.21.2
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.30.1
orjson==3.10.3
packaging==23.2
pandas==2.2.0
parso==0.8.4
pexpect==4.9.0
prompt-toolkit==3.0.43
ptyprocess==0.7.0
pure-eval==0.2.2
pydantic==2.7.1
pydantic_core==2.18.2
Pygments==2.18.0
PyJWT==2.8.0
pymongo==4.4.0
pypdf==4.0.1
python-dateutil==2.9.0.post0
python-docx==1.1.0
python-dotenv==0.21.0
pytz==2024.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.31.0
safetensors==0.4.3
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.30
stack-data==0.6.3
tenacity==8.3.0
tiktoken==0.7.0
tokenizers==0.19.1
tqdm==4.66.4
traitlets==5.14.3
transformers==4.40.2
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
wcwidth==0.2.13
websockets==12.0
Werkzeug==3.0.3
yarl==1.9.4

from langchain_openai import ChatOpenAI is the correct way to use this; see
https://pypi.org/project/langchain-openai/

Do you have a flame chart for the "massive ramp-up in CPU usage"?
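
In case it helps, a sketch of how such profiling data could be captured with the standard library (the output file name is arbitrary; a tool such as snakeviz or flameprof can then render the .prof file as a flame chart):

import cProfile
import pstats

# Profile the import that reportedly ramps CPU for ~80-90 seconds and dump raw stats.
cProfile.run("from langchain_openai import ChatOpenAI", "chatopenai_import.prof")

# Quick text summary: the 20 biggest hotspots by cumulative time.
pstats.Stats("chatopenai_import.prof").sort_stats("cumulative").print_stats(20)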