armbues / SiLLM

SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.

Modifications to sillm.chat

magnusviri opened this issue

I've modified sillm.chat a lot. Let me know what you would be interested in adding to your repo and I'll put those changes in my fork and do a pull request. Here's a list of changes.

  • Arrow key support, including history. This requires the cmd module.
  • Pressing esc while generating a reply stops it. This requires tty, termios, select, and sys modules.
  • Debug messages, user chat, and assistant chat are each colored differently. The colors are not configurable at this point, nor can color output be disabled.
  • Uses "/" as the command prefix. Everything else (including "." and the empty string "") is sent to the assistant.
  • /exit, /help - kind of obvious.
  • /seed, /temperature, /max_tokens, /system_prompt - set the corresponding values.
  • /clear - resets the conversation.
  • /conversation - print the conversation.
  • /rewrite - rewrite the last assistant reply and put that in the conversation (I have a bug in this somewhere because it doesn't always work).
  • /settings - print the settings (seed, temp, max tokens).
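The ESC-to-stop behavior above can be approached by polling stdin with a zero timeout between tokens. A minimal sketch using select, tty, and termios (the function names here are illustrative, not the fork's actual code):

```python
import select
import sys
import termios
import tty

def key_available(fd):
    """Poll fd with a zero timeout; True if a byte can be read right now."""
    ready, _, _ = select.select([fd], [], [], 0)
    return bool(ready)

def generate_with_interrupt(token_iter):
    """Yield tokens from token_iter until it ends or the user presses ESC."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        # cbreak mode delivers single keypresses without waiting for Enter.
        tty.setcbreak(fd)
        for token in token_iter:
            if key_available(sys.stdin) and sys.stdin.read(1) == "\x1b":
                break
            yield token
    finally:
        # Restore the terminal even if generation raises.
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
```

Polling with a zero timeout keeps generation non-blocking, and the terminal settings are restored in the finally block regardless of how the loop exits.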

This sounds great! Most of the improvements really make sense.

Are you using any external dependencies? That would be my only reason to be hesitant and double-check what is used and if it's maintained etc.

I checked your fork of the repo and the modified version of chat is not in there yet, right?

Does changing the assistant response also evaluate it in the model?

Are you using any external dependencies? That would be my only reason to be hesitant and double-check what is used and if it's maintained etc.

These are the new dependencies; all of them come from the Python standard library.

import sys
import select
import tty
import termios
from cmd import Cmd
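For context, cmd.Cmd provides line editing and arrow-key history automatically when readline is available, which is where the history support comes from. Since Cmd normally dispatches on bare words rather than slash commands, the "/" prefix can be rerouted in default(). A rough sketch (class and method names are illustrative, not the fork's actual code):

```python
from cmd import Cmd

class ChatShell(Cmd):
    """Minimal sketch of a slash-command chat loop."""
    prompt = "> "

    def __init__(self):
        super().__init__()
        self.settings = {"seed": 0, "temperature": 0.7, "max_tokens": 1024}

    def default(self, line):
        # Anything not starting with "/" goes to the assistant;
        # "/foo bar" is rerouted to do_foo("bar") if it exists.
        if line.startswith("/"):
            name, _, arg = line[1:].partition(" ")
            func = getattr(self, "do_" + name, None)
            if func is not None:
                return func(arg)
            print(f"Unknown command: /{name}")
        else:
            self.send_to_assistant(line)

    def do_settings(self, arg):
        for key, value in self.settings.items():
            print(f"{key}: {value}")

    def do_exit(self, arg):
        return True  # returning True ends Cmd.cmdloop()

    def send_to_assistant(self, text):
        # Placeholder for the actual model call.
        print(f"[assistant would receive: {text}]")
```

Cmd routes "/exit" through default() because "/" is not an identifier character, so the slash commands and the plain-text-to-assistant path coexist cleanly.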

I checked your fork of the repo and the modified version of chat is not in there yet, right?

It wasn't there. I've uploaded it now.

Does changing the assistant response also evaluate it in the model?

The intent is that it will be evaluated after the user enters the next reply. This is my easy way of trying to jailbreak the model.
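The mechanism described, editing the stored reply so the modified text is re-fed to the model with the rest of the history on the next turn, might look roughly like this (the message format is an assumption for illustration, not SiLLM's actual internal structure):

```python
def rewrite_last_assistant(conversation, new_text):
    """Replace the last assistant message in place.

    The edited text is not evaluated immediately; it is picked up the
    next time the full conversation is sent to the model.
    """
    for message in reversed(conversation):
        if message["role"] == "assistant":
            message["content"] = new_text
            return True
    return False  # no assistant message to rewrite yet
```

Nothing else needs to change: the next user turn already re-sends the whole conversation, so the rewritten reply is evaluated as if the model had produced it.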