After installing Gradio, running python web_demo.py fails with the error below.
Could someone please point me in the right direction?
E:\chatGLM\ChatGLM-6B>python web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1965, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 209, in init
self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 61, in init
self.text_tokenizer = TextTokenizer(vocab_file)
File "C:\Users\中文用户名/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\658202d88ac4bb782b99e99ac3adff58b4d0b813\tokenization_chatglm.py", line 22, in init
self.sp.Load(model_path)
File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\sentencepiece__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\sentencepiece__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "C:\Users\中文用户名/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\658202d88ac4bb782b99e99ac3adff58b4d0b813\ice_text.model": No such file or directory Error #2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\chatGLM\ChatGLM-6B\web_demo.py", line 4, in <module>
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "C:\Users\中文用户名\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1967, in _from_pretrained
    raise OSError(
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
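For reference, the failing statement is line 4 of web_demo.py. Below is only a diagnostic sketch (it assumes the huggingface_hub package installed alongside transformers): it re-fetches the ice_text.model file that the traceback reports as missing and then retries the same tokenizer load.

# Diagnostic sketch, not a confirmed fix.
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# Download the SentencePiece vocabulary into the local Hugging Face cache
# and print where it ended up.
vocab_path = hf_hub_download(repo_id="THUDM/chatglm-6b", filename="ice_text.model")
print("ice_text.model cached at:", vocab_path)

# Same call as web_demo.py line 4; it should now be able to find the vocabulary file.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)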
Win11: after installing PyTorch 11.8, CUDA, and Gradio, I ran python web_demo.py.
Environment
- OS: Win11
- Python: 3.10
- Transformers:
- PyTorch: 11.8
- CUDA Support: true
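The Transformers version above is still blank; this short sketch just prints the values the environment list asks for, using the standard version attributes.

# Sketch: print the versions for the environment list.
import transformers
import torch

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())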