[THUDM/ChatGLM-6B] Error when running python web_demo.py after installing Gradio. Could an expert please advise?

2024-05-20 668 views
5

After installing Gradio, running python web_demo.py fails with the error below. Could an expert please advise?

E:\chatGLM\ChatGLM-6B>python web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 209, in __init__
    self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
  File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 61, in __init__
    self.text_tokenizer = TextTokenizer(vocab_file)
  File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 22, in __init__
    self.sp.Load(model_path)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\sentencepiece\__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "C:\Users\中文用户名/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\658202d88ac4bb782b99e99ac3adff58b4d0b813\ice_text.model": No such file or directory Error #2

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\chatGLM\ChatGLM-6B\web_demo.py", line 4, in <module>
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1967, in _from_pretrained
    raise OSError(
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.

Windows 11; after installing PyTorch (CUDA 11.8), CUDA, and Gradio, running python web_demo.py produces the error above.

Environment
- OS: Windows 11
- Python: 3.10
- Transformers:
- PyTorch: 11.8
- CUDA Support: true

Answers

1

The THUDM/chatglm-6b model most likely failed to download completely. Try downloading it to a local directory and pointing the code at the correct local path.
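The OSError above points at a missing ice_text.model in the Hugging Face cache, which usually means an interrupted download. As a minimal sketch, you can check whether the key files actually exist in your local model directory before pointing the code at it (the file list and the example path here are assumptions based on the error and the chatglm-6b repo, not an official manifest):

```python
from pathlib import Path

# Files the load path appears to need -- treat this list as an assumption.
REQUIRED_FILES = [
    "ice_text.model",          # the SentencePiece vocab the traceback says is missing
    "tokenization_chatglm.py", # custom tokenizer code loaded via trust_remote_code
    "config.json",
]

def check_model_dir(model_dir: str) -> list[str]:
    """Return the names of required files missing from model_dir."""
    root = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

if __name__ == "__main__":
    missing = check_model_dir(r"E:\models\chatglm-6b")  # hypothetical local path
    if missing:
        print("Incomplete download, missing:", missing)
```

If ice_text.model shows up as missing, re-download it from the model repo before retrying.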

5

It's already on my local disk, and it still fails to run.

7

Before changing any code, check whether it runs with the default download location. If it does, then either the files you downloaded locally are incomplete, or you edited the code incorrectly.
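When retrying the default download, one thing worth ruling out is the non-ASCII username in the cache path (C:\Users\中文用户名\...). A hedged workaround is to redirect the Hugging Face cache to an ASCII-only directory before transformers is imported; the directory E:\hf_cache is just an example:

```python
import os

# Must be set before importing transformers / huggingface_hub.
# HF_HOME is the current umbrella variable; TRANSFORMERS_CACHE is the
# older transformers-specific one -- setting both covers older versions.
os.environ["HF_HOME"] = r"E:\hf_cache"
os.environ["TRANSFORMERS_CACHE"] = r"E:\hf_cache"

# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
```

With the cache relocated, a fresh run will re-download the model files into the new directory, so an earlier partial download can no longer shadow them.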

4

I have roughly the same problem. The model is already downloaded.

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device='cuda')

I changed these to a local path. I've tried everything, and it still doesn't work. Is there some problem with running this on Windows?
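One Windows-specific pitfall when swapping in a local path: in a plain Python string, backslash sequences like \t or \n are interpreted as escape characters and silently corrupt the path. A small sketch of the gotcha (the paths are made up for illustration):

```python
# "E:\themodel\new" contains \t (tab) and \n (newline) escapes:
bad = "E:\themodel\new"         # actually "E:<TAB>hemodel<NEWLINE>ew"
good = r"E:\themodel\new"       # raw string keeps backslashes literal
also_good = "E:/themodel/new"   # forward slashes also work on Windows

assert "\t" in bad and "\t" not in good

# Hypothetical usage with a raw-string local path:
# tokenizer = AutoTokenizer.from_pretrained(r"E:\models\chatglm-6b", trust_remote_code=True)
```

If the local path was pasted in as a plain string, this alone can turn a valid directory into "not found".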

6

Post the error you're getting so we can take a look. I've deployed this on several Windows machines, so it should work.