ollama / ollama

Get up and running with Llama 3, Mistral, Gemma, and other large language models.

Home Page: https://ollama.com

How to load a model from local disk path?

quzhixue-Kimi opened this issue · comments

hi there,

I have two Ubuntu 20.04 servers (one is a local machine, the other is a production server), both with the latest Ollama binary installed following the documentation at https://github.com/ollama/ollama/blob/main/docs/linux.md

My local Ubuntu 20.04 machine has internet access, so I ran the commands to download the llama3 and llama3:70b models, which are stored in /usr/share/ollama/.ollama/models
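
For reference, the two models were pulled on the online machine with the standard pull command:

ollama pull llama3
ollama pull llama3:70b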

The other Ubuntu 20.04 server is the production server and has no internet access!
I copied all the model files from my local machine to the production server, using the same model path: /usr/share/ollama/.ollama/models
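
For that copy step, a minimal sketch assuming SSH access from the local machine (the hostname prod-server is a placeholder):

# -a preserves the blobs/manifests layout and file metadata
rsync -a /usr/share/ollama/.ollama/models/ prod-server:/usr/share/ollama/.ollama/models/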

After starting the Ollama process, I ran 'ollama list' on the production server, but no models were listed. And when I ran 'ollama run llama3', I got this error: 'pulling manifest Error: pull model manifest: Get https://registry.ollama.ai/v2/library/llama3/manifests/latest: dial tcp: lookup registry.ollama.ai on 127.0.0.53:53 server misbehaving'

The above error was caused by the lack of internet access on my production server.

I would appreciate it if you could tell me whether there is an environment variable I can set so the models work without internet access.

BR
Kimi

This method of transforming the names of the .gguf files into hash names is terrible: LLM models are large and take up a lot of space, so at a certain point it is not practical to duplicate them just to use them with other LLM runners, and the hash names also make the models very difficult to identify.

Furthermore, the same hash files (or the .ollama folder) cannot be shared between Windows and Linux, because the blob for a given model is named, for example, sha256-b9a918323fcb82484b5a51ecd08c251821a16920c4b57263dc8a2f8fc3348923 on Windows and sha256:b9a918323fcb82484b5a51ecd08c251821a16920c4b57263dc8a2f8fc3348923 on Linux.

This makes it complicated to share models on a single external disk.
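
As a workaround for sharing a disk, a minimal sketch that renames blobs from the dash form to the colon form described above, assuming the separator character is the only difference between the two naming schemes (/mnt/external is a placeholder mount point):

# placeholder path; adjust to wherever the shared models live
cd /mnt/external/models/blobs || exit 1
# rename sha256-<hash> (Windows form) to sha256:<hash> (Linux form)
for f in sha256-*; do
  mv -n "$f" "${f/-/:}"
done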

The issue has been fixed by editing the /etc/systemd/system/ollama.service file:

[Service]
Environment="OLLAMA_MODELS=my_model_path"

systemctl daemon-reload
systemctl restart ollama.service
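
To double-check that the override was picked up after the restart, the unit's environment can be inspected with systemctl:

systemctl show ollama.service --property=Environment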

my_model_path is just /home/kimi/.ollama/models, and this models folder contains just two subfolders, named blobs and manifests.
The blobs folder contains the sha256-XXXXXXXXXX files; do not add any other model folders!
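
For illustration, the expected layout under that directory looks roughly like this (hash and tag names will vary):

/home/kimi/.ollama/models
├── blobs
│   └── sha256-XXXXXXXXXX
└── manifests
    └── registry.ollama.ai
        └── library
            └── llama3
                └── latest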

Once the configuration has been corrected, running 'ollama list' will show the models.

BR
Kimi