Tele-AI / Telechat


12B模型ValueError: Tokenizer class TelechatTokenizer does not exist or is not currently imported.

wuxiulike opened this issue · comments

Running python telechat_service.py produces the following error:
Traceback (most recent call last):
File "/root/Telechat/service/telechat_service.py", line 16, in
tokenizer = AutoTokenizer.from_pretrained(PATH)
File "/root/anaconda3/envs/telechat/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 688, in from_pretrained
raise ValueError(
ValueError: Tokenizer class TelechatTokenizer does not exist or is not currently imported.

The service runs fine with the 7B model; after changing the path to the 12B model, it fails with the error above.

###############telechat_service.py####################
import uvicorn
import os

from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
from fastapi.encoders import jsonable_encoder
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
PATH = '/opt/models/TeleChat-12B'
tokenizer = AutoTokenizer.from_pretrained(PATH)
model = AutoModelForCausalLM.from_pretrained(PATH, trust_remote_code=True, device_map="auto",
                                             torch_dtype=torch.float16)
generate_config = GenerationConfig.from_pretrained(PATH)
model.eval()

Found the problem. Change line 16 of telechat_service.py from:
tokenizer = AutoTokenizer.from_pretrained(PATH)
to:
tokenizer = AutoTokenizer.from_pretrained(PATH, trust_remote_code=True)
Problem solved.
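Why the fix works: TelechatTokenizer is not one of the tokenizer classes bundled with the transformers library; it is defined in a Python file shipped inside the checkpoint directory, and AutoTokenizer will only import such local code when trust_remote_code=True is passed. A sketch of the mechanism, using an illustrative (not verbatim) excerpt of what a tokenizer_config.json with a custom class looks like:

```python
import json

# Illustrative excerpt of a tokenizer_config.json that points AutoTokenizer
# at a custom class (field values here are assumptions, not copied from the
# actual TeleChat-12B checkpoint):
config = json.loads("""
{
  "tokenizer_class": "TelechatTokenizer",
  "auto_map": {
    "AutoTokenizer": ["tokenization_telechat.TelechatTokenizer", null]
  }
}
""")

# "TelechatTokenizer" does not exist inside transformers itself, so without
# trust_remote_code=True, AutoTokenizer.from_pretrained() raises the
# ValueError seen above. With trust_remote_code=True, the auto_map entry
# tells it to import the class from tokenization_telechat.py in the model
# directory instead.
print(config["tokenizer_class"])               # -> TelechatTokenizer
print(config["auto_map"]["AutoTokenizer"][0])  # -> tokenization_telechat.TelechatTokenizer
```

This also explains why the model itself loaded fine: the AutoModelForCausalLM call in telechat_service.py already passed trust_remote_code=True, while the tokenizer call did not.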

commented

Thanks, I ran into the same problem; following your fix resolved it.