Unable to use `ollama create` to load a custom GGUF model
ngwarrencinyen opened this issue
Issue: Error When Using `ollama create` to Load a Model
Description
When attempting to use the `/api/create` endpoint to load a model, the following error is returned:
Error:

```
{"error":"path or Modelfile are required"}
```
Environment
- Ollama version: 0.5.4
- Docker Compose service: `ollama`, running on `localhost:8015`
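The compose file itself is not included in this report; for orientation, a hypothetical equivalent `docker run` invocation is sketched below, assuming Ollama's default in-container port 11434 is published on host port 8015 (the image tag and port mapping are assumptions, not taken from the report):

```shell
# Hypothetical sketch: expose Ollama 0.5.4 on host port 8015
# (11434 is Ollama's default listening port inside the container).
docker run -d --name ollama -p 8015:11434 ollama/ollama:0.5.4
```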
Steps to Reproduce

1. Generate the SHA-256 checksum of the model file (a consolidated sketch of steps 1-3 follows this list):

   ```shell
   sha256sum {model path}
   ```

   Example:

   ```shell
   sha256sum Llama-3.2-3B-Instruct-Q4_K_M.gguf
   ```

   Output:

   ```
   6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff  Llama-3.2-3B-Instruct-Q4_K_M.gguf
   ```

2. Push the blob to Ollama:

   ```shell
   curl -T Llama-3.2-3B-Instruct-Q4_K_M.gguf -X POST http://localhost:8015/api/blobs/sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff
   ```

3. Verify the blob exists:

   ```shell
   curl -I http://localhost:8015/api/blobs/sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff
   ```

   Output:

   ```
   HTTP/1.1 200 OK
   Date: Thu, 03 Apr 2025 06:43:44 GMT
   ```

4. Use `ollama create` to load the model:

   ```shell
   curl http://localhost:8015/api/create -d '{
     "model": "onepiece",
     "files": {
       "Llama-3.2-3B-Instruct-Q4_K_M.gguf": "sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff"
     },
     "template": "{{- if .System }}\n<|system|>\n{{ .System }}\n</s>\n{{- end }}\n<|user|>\n{{ .Prompt }}\n</s>\n<|assistant|>",
     "parameters": {
       "temperature": 0.2,
       "num_ctx": 8192,
       "stop": ["<|system|>", "<|user|>", "<|assistant|>", "</s>"]
     },
     "system": "You are Luffy from One Piece, acting as an assistant."
   }'
   ```

5. Error output:

   ```
   {"error":"path or Modelfile are required"}
   ```
Expected Behavior
The model should be successfully created and loaded into the Ollama server.
Actual Behavior
The server returns the following error:

```
{"error":"path or Modelfile are required"}
```
Additional Information
- Command used: the same `curl http://localhost:8015/api/create ...` request shown in step 4 above.
- Model reference: the model used for this process is Llama-3.2-3B-Instruct-Q4_K_M.gguf.
Questions
- Is the `path` or `modelfile` field required in the `/api/create` payload? If so, how should it be structured?
- Are there additional steps or configurations needed to successfully create a model from a GGUF file?
Request for Assistance
Any guidance on how to resolve this issue and successfully create a model using the `/api/create` endpoint would be greatly appreciated!
OS: Docker
GPU: Intel
CPU: Intel
Ollama version: 0.5.4
Hi, the request format you are using is supported by recent versions of Ollama, but not by 0.5.4: https://github.com/ollama/ollama/blob/v0.5.4/api/types.go#L297
It seems that on 0.5.4 you can only create a model from a Modelfile, following the sample command here: https://github.com/ollama/ollama/blob/v0.5.4/docs/api.md#request-20
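A hedged sketch of what such a 0.5.4-style request might look like for this model follows. The `modelfile` field carries the Modelfile content inline, and `FROM @sha256:<digest>` is assumed to reference the blob pushed earlier; verify both the field names and the `FROM` syntax against the linked 0.5.4 docs before relying on this:

```shell
# Hedged sketch for ollama 0.5.4: an inline Modelfile instead of the newer "files" field.
# "FROM @sha256:<digest>" is an assumption about how to reference the uploaded blob;
# TEMPLATE and stop parameters could be appended to the Modelfile string the same way.
curl http://localhost:8015/api/create -d '{
  "model": "onepiece",
  "modelfile": "FROM @sha256:6c1a2b41161032677be168d354123594c0e6e67d2b9227c84f296ad037c728ff\nPARAMETER temperature 0.2\nPARAMETER num_ctx 8192\nSYSTEM You are Luffy from One Piece, acting as an assistant."
}'
```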
We are actively upgrading our Ollama deployment to v0.6.x, and your command will work once the upgrade is complete. We will keep you updated on the progress, thanks!
Alright, thanks for the information. Cheers!
