rustformers / llm

[Unmaintained, see README] An ecosystem of Rust libraries for working with large language models

Home page: https://docs.rs/llm/latest/llm/

Implement SuperHOT/interpolated RoPE support

philpax opened this issue

Another llama.cpp feature that seems to have shrunk the paper-to-implementation pipeline to less than one week!

This allows for a much longer context (assuming you have the (V)RAM for it)

We can probably close out #77 if this is done.

To do this, we only need a new `rope_scaling` model parameter. Or am I missing something?
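For context, the interpolation itself is a small change to the RoPE rotation: positions are divided by the scaling factor before computing the rotation angles, so longer sequences are mapped back into the trained position range. A minimal Rust sketch, assuming a hypothetical `rope_scaling` parameter (this is not the crate's actual API):

```rust
// Interpolated RoPE applied in place to one head's embedding vector.
// `scale` is the hypothetical `rope_scaling` factor: with scale = 2.0,
// position 4096 is compressed into the original 0..2048 range.
fn rope_rotate(x: &mut [f32], pos: usize, scale: f32) {
    let d = x.len();
    for i in (0..d).step_by(2) {
        // Standard RoPE frequency for this dimension pair.
        let theta = 10000f32.powf(-(i as f32) / d as f32);
        // Position interpolation: divide the position index by `scale`.
        let angle = (pos as f32 / scale) * theta;
        let (sin, cos) = angle.sin_cos();
        let (x0, x1) = (x[i], x[i + 1]);
        x[i] = x0 * cos - x1 * sin;
        x[i + 1] = x0 * sin + x1 * cos;
    }
}

fn main() {
    // Sanity check: position 100 at scale 2.0 should match position 50 unscaled.
    let mut a = vec![1.0f32; 8];
    let mut b = vec![1.0f32; 8];
    rope_rotate(&mut a, 100, 2.0);
    rope_rotate(&mut b, 50, 1.0);
    for (x, y) in a.iter().zip(&b) {
        assert!((x - y).abs() < 1e-5);
    }
    println!("scaled position matches unscaled half-position");
}
```

Since the rotation is recomputed from the position index at inference time, no retraining of the checkpoint format is needed; only the scaling factor has to be plumbed through as a model parameter.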