sobelio / llm-chain

`llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarize text and complete complex tasks.

Home Page: https://llm-chain.xyz

LLAMA model paths are mishandled before being sent to C++

williamhogman opened this issue

This works:

```
cargo run --example alpaca -- /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4.bin
    Finished dev [unoptimized + debuginfo] target(s) in 0.08s
     Running `/workspace/llm-chain/target/debug/examples/alpaca /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4.bin`
llama.cpp: loading model from /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4_.bin
```

But this does not work:

```
cargo run --example alpaca -- /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin
    Finished dev [unoptimized + debuginfo] target(s) in 0.08s
     Running `/workspace/llm-chain/target/debug/examples/alpaca /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin`
error loading model: failed to open /workspace/llama.cpp/models/gpt4-x-alpaca-13b-native-ggml-model-q4_0.binq: No such file or directory
```

Both files exist; in the failing case a stray `q` is appended to the path.
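For illustration, here is a minimal, self-contained sketch of the likely mechanism (the path and assertions are just for demonstration, not the actual llm-chain code): Rust strings are length-delimited, not NUL-terminated, so a C function handed a bare `as_ptr()` keeps reading past the end of the buffer until it hits a zero byte — which is exactly how a stray trailing character can appear.

```rust
use std::ffi::CString;

fn main() {
    // Hypothetical path, for demonstration only.
    let path = "/workspace/llama.cpp/models/model-q4_0.bin";

    // A Rust &str carries its length out-of-band and is NOT NUL-terminated,
    // so C code receiving `path.as_ptr()` would read whatever bytes happen
    // to follow the string in memory — e.g. a stray `q`.
    assert!(!path.as_bytes().contains(&0));

    // CString copies the bytes and appends the terminating NUL that C expects.
    let c_path = CString::new(path).expect("path contained an interior NUL");
    assert_eq!(c_path.as_bytes_with_nul().last(), Some(&0u8));
}
```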

Still a problem?

The code has not changed, so the ticket should still be relevant. To fix it, the Rust string needs to be converted to a `std::ffi::CString`, like this:

```rust
let path = CString::new(path).expect("could not convert to CString");
```

and then pass `path.into_raw()`.
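For context, a sketch of how that fix might look at the FFI boundary. The `extern` declaration below is a hypothetical stand-in for the real llama.cpp binding (the actual function name and signature in llm-chain differ); the point is the `CString` round-trip. Note that `into_raw()` transfers ownership of the buffer to the raw pointer, so it should be reclaimed with `CString::from_raw` after the call, or the allocation leaks.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

extern "C" {
    // Hypothetical stand-in for the real llama.cpp binding.
    fn llama_load_model(path: *const c_char);
}

fn load_model(path: &str) {
    let c_path = CString::new(path).expect("could not convert to CString");
    let raw = c_path.into_raw(); // ownership moves to the raw pointer
    unsafe {
        llama_load_model(raw);
        // Reclaim ownership so the buffer is freed on the Rust side;
        // without this, `into_raw` leaks it.
        drop(CString::from_raw(raw));
    }
}
```

If the C function only borrows the path for the duration of the call, `c_path.as_ptr()` is simpler than the `into_raw`/`from_raw` round-trip, since the `CString` then frees itself when it goes out of scope.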

That fixed it for me.