unslothai / unsloth

Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory

Home Page: https://unsloth.ai


Phi-3-medium-128k-instruct support

win4r opened this issue · comments


Yep! It's a bit more complex due to the different long-context RoPE scaling mechanism used in the 128K version
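As a rough illustration of why a long-context variant complicates things: standard RoPE derives one fixed set of rotary frequencies from the head dimension, while long-context schemes rescale those frequencies per dimension. The sketch below is a minimal, hypothetical example of that idea; the factor values are made up for illustration and are not Phi-3's actual parameters.

```python
def rope_inv_freq(dim, base=10000.0, rescale_factors=None):
    """Compute RoPE inverse frequencies theta_i = base^(-2i/dim).

    If rescale_factors is given, divide each frequency by a
    per-dimension factor, as long-context RoPE variants do.
    """
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    if rescale_factors is not None:
        inv_freq = [f / s for f, s in zip(inv_freq, rescale_factors)]
    return inv_freq

dim = 8
# Hypothetical factors: leave high frequencies alone, stretch low ones
# more aggressively so positions beyond the original window stay distinct.
long_factors = [1.0, 1.5, 2.0, 4.0]

base_freqs = rope_inv_freq(dim)
long_freqs = rope_inv_freq(dim, rescale_factors=long_factors)
```

Because the rescaled frequencies differ from the base model's, a finetuning framework has to detect and reproduce the variant's exact scaling rather than reuse the standard RoPE path.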

commented

Can #620 be implemented?

@rezzie-rich It'll be supported once we add automatic support for all models!