rustformers / llm

[Unmaintained, see README] An ecosystem of Rust libraries for working with large language models

Home Page: https://docs.rs/llm/latest/llm/


Currently in dev, any inference is broken

gadLinux opened this issue

warning: llm (lib) generated 1 warning (run cargo fix --lib -p llm to apply 1 suggestion)
Finished release [optimized] target(s) in 0.26s
Running target/release/llm infer -m ../models/vicuna-13b-v1.5.Q4_K_M.gguf -p 'Write a long story' -r mistralai/Mistral-7B-v0.1
⣻ Loading model...2024-02-08T17:56:25.386579Z INFO infer: cached_path::cache: Cached version of https://huggingface.co/mistralai/Mistral-7B-v0.1/resolve/main/tokenizer.json is up-to-date
✓ Loaded 363 tensors (7.9 GB) after 292ms
The application panicked (crashed).
Message: not yet implemented
Location: crates/llm-base/src/inference_session.rs:120

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
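For context, "not yet implemented" is the panic message emitted by Rust's todo!() macro, so the crash most likely comes from a stubbed-out code path in inference_session.rs rather than from a problem with the model file itself. A minimal illustration of how such a panic arises (this is not the actual llm-base code, just a sketch):

```rust
// Hypothetical sketch: a stubbed-out method panics with
// "not yet implemented", the same message seen in the report above.
struct InferenceSession;

impl InferenceSession {
    fn infer(&self) {
        // `todo!()` panics with the message "not yet implemented".
        todo!()
    }
}

fn main() {
    let session = InferenceSession;
    session.infer();
}
```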

The inference code is commented out, so it isn't possible to run inference at all. Is there an ETA for resolving this?
Can we know the current status?
Where is help needed?

Hi, apologies - I realised that updating to the latest llama.cpp would require a rewrite, and it's been hard to find the motivation to do so. I have a few ideas for a redesign / reimplementation, but I haven't made the time to attend to them.

In the meantime, I'd suggest sticking to the gguf branch (which uses GGML from an older version of llama.cpp and supports LLaMA/Mistral) or https://github.com/edgenai/llama_cpp-rs.
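As a sketch of what pinning to that branch might look like when consuming the crate as a git dependency (the branch name `gguf` is taken from the suggestion above; adjust to your project):

```toml
# Hypothetical Cargo.toml entry: pull the `llm` crate from the `gguf`
# branch of the rustformers/llm repository instead of the crates.io release.
[dependencies]
llm = { git = "https://github.com/rustformers/llm", branch = "gguf" }
```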

> I have a few ideas for a redesign / reimplementation, but I haven't made the time to attend to them.

If you can share those, I could give it a try; I've wanted to familiarize myself with the GGML library anyway.