SamurAIGPT / EmbedAI

An app to interact privately with your documents using the power of GPT, 100% privately, no data leaks

Home Page: https://www.thesamur.ai/?utm_source=github&utm_medium=link&utm_campaign=github_privategpt


Model download error at 100% progress

cmshot opened this issue

commented

I tried to download a model and got this error:

Download Progress: 100.0%
Found model file.
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
[2023-06-03 18:09:31,688] ERROR in app: Exception on /download_model [GET]
Traceback (most recent call last):
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask_cors\extension.py", line 165, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\micha\privateGPT\server\privateGPT.py", line 197, in download_and_save
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.init
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\llms\gpt4all.py", line 139, in validate_environment values["client"] = GPT4AllModel(
^^^^^^^^^^^^^
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt4all\gpt4all.py", line 49, in init
self.model.load_model(model_dest)
File "C:\Users\micha\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d
127.0.0.1 - - [03/Jun/2023 18:09:31] "GET /download_model HTTP/1.1" 500 -

Is it possible to fix that?
Thank you and have a great day.
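
For context, WinError 0xc000001d is STATUS_ILLEGAL_INSTRUCTION: the native llmodel library crashed on a CPU instruction (typically AVX/AVX2) that the processor doesn't support, so the model file itself is likely fine. A minimal sketch to reproduce the failing call outside Flask, using the same constructor arguments as line 197 of privateGPT.py in the traceback (the model path is taken from the log above; adjust it to your checkout):

# Reproduce the failing GPT4All load outside the Flask app.
from langchain.llms import GPT4All

model_path = "models/ggml-gpt4all-j-v1.3-groovy.bin"

# Same call as in the traceback, minus the callbacks. If this also
# crashes with 0xc000001d, the problem is the native backend binary
# versus your CPU, not the web app or the download.
llm = GPT4All(model=model_path, n_ctx=2048, backend="gptj", verbose=False)
print(llm("Hello"))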

I faced an error when downloading as well. I recommend just going to https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin directly, downloading it, and putting it in server/model/. It's the same file.
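
If you'd rather script that manual route, here is a sketch using requests. The destination directory server/model/ comes from the comment above, while the traceback loads from models/, so match whichever directory your server actually reads:

# Stream the model to disk instead of using the /download_model endpoint.
import os
import requests

url = "https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin"
dest = os.path.join("server", "model", "ggml-gpt4all-j-v1.3-groovy.bin")

os.makedirs(os.path.dirname(dest), exist_ok=True)
with requests.get(url, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)

print(f"Saved {os.path.getsize(dest)} bytes to {dest}")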

commented

I had this same error. I tried downloading the model directly as you said, but I'm still getting the error.

Yes, I have always encountered this problem too.

> I faced an error when downloading as well. I recommend just going to https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin directly, downloading it, and putting it in server/model/. It's the same file.

Dude, I have this problem too, and your method does work, but is there a link to download other models?
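
On the question of other models: GPT4All published a machine-readable model list alongside the .bin downloads. The registry URL below is an assumption based on that convention and worth verifying; a sketch:

# List other models hosted by GPT4All. The models.json URL is an
# assumption: it sat next to the .bin downloads at the time of this thread.
import requests

registry = "https://gpt4all.io/models/models.json"
for entry in requests.get(registry, timeout=30).json():
    # Each filename can be appended to https://gpt4all.io/models/ to fetch it.
    print(entry.get("filename"))

Bear in mind the server loads models with backend='gptj' (see the traceback), so only GPT4All-J-compatible .bin files will work with this setup.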