bakks / butterfish

A shell with AI superpowers

Home Page: https://butterfi.sh

getting into a strange state after modifying zshrc

samjhecht opened this issue · comments

I started the Butterfish shell and the little fish showed up. Then I added some Butterfish-related aliases to my zshrc, and the fish went away after `source ~/.zshrc`. Does the presence or absence of the fish emoji indicate the state of the wrapper?

➜  code brew install bakks/bakks/butterfish && butterfish shell
Running `brew update --auto-update`...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
ansible@7              fastgron               libint                 openfga                tern
apko                   git-credential-oauth   libomemo-c             procps@3               votca
aws-amplify            joshuto                libpaho-mqtt           shodan                 wzprof
bashate                libecpint              melange                spotify_player         xbyak
ddns-go                libfastjson            nexttrace              swift-outdated
==> New Casks
chatbox                dintch                 eusamanager            lasso                  processmonitor
copilot                engine-dj              filemonitor            loupedeck              tea
craft                  eu                     firefly-shimmer        motu-m-series          yealink-meeting

You have 35 outdated formulae and 2 outdated casks installed.

Warning: bakks/bakks/butterfish 0.0.31 is already installed and up-to-date.
To reinstall 0.0.31, run:
  brew reinstall butterfish
Logging to /var/tmp/butterfish.log

➜  code export PS1="$PS1🐠 "
➜  code 🐠
➜  code 🐠 butterfish prompt "please add `alias bf="butterfish"` to my zshrc file"
I'm sorry, but I cannot add anything to your zshrc file as I am an AI language model and do not have access to your computer's file system. However, I can provide you with instructions on how to edit your zshrc file.

To edit your zshrc file, follow these steps:

1. Open your terminal.
2. Type `nano ~/.zshrc` and press Enter. This will open your zshrc file in the Nano text editor.
3. Make the necessary changes to your zshrc^C
➜  code 🐠 code ~/.zshrc
➜  code 🐠 source ~/.zshrc
➜  code bf
Usage: butterfish <command>

Do useful things with LLMs from the command line, with a bent towards software engineering.

Butterfish is a command line tool for working with LLMs. It has two modes: CLI command mode, used to prompt LLMs,
summarize files, and manage embeddings, and Shell mode: Wraps your local shell to provide easy prompting and
autocomplete.

Butterfish stores an OpenAI auth token at ~/.config/butterfish/butterfish.env and the prompt wrappers it uses at
~/.config/butterfish/prompts.yaml.

To print the full prompts and responses from the OpenAI API, use the --verbose flag. Support can be found at
https://github.com/bakks/butterfish.

If you don't have OpenAI free credits then you'll need a subscription and you'll need to pay for OpenAI API use. If
you're using Shell Mode, autosuggest will probably be the most expensive part. You can reduce spend here by disabling
shell autosuggest (-A) or increasing the autosuggest timeout (e.g. -t 2000). See "butterfish shell --help".

v0.0.31 darwin arm64 (commit 8cc7f94) (built 2023-04-21T02:11:22Z) MIT License - Copyright (c) 2023 Peter Bakkum

Flags:
  -h, --help       Show context-sensitive help.
  -v, --verbose    Verbose mode, prints full LLM prompts.

Commands:
  shell
    Start the Butterfish shell wrapper. This wraps your existing shell, giving you access to LLM prompting by
    starting your command with a capital letter. LLM calls include prior shell context. This is great for keeping a
    chat-like terminal open, sending written prompts, debugging commands, and iterating on past actions.

    Use:

      - Type a normal command, like 'ls -l' and press enter to execute it

      - Start a command with a capital letter to send it to GPT, like 'How do I find local .py files?'

      - Autosuggest will print command completions, press tab to fill them in

      - Type 'Status' to show the current Butterfish configuration

      - GPT will be able to see your shell history, so you can ask contextual questions like 'why didn't my last
        command work?'

        Here are special Butterfish commands:

      - Status : Show the current Butterfish configuration

      - Help : Give hints about usage

    If you don't have OpenAI free credits then you'll need a subscription and you'll need to pay for OpenAI API use.
    If you're using Shell Mode, autosuggest will probably be the most expensive part. You can reduce spend here by
    disabling shell autosuggest (-A) or increasing the autosuggest timeout (e.g. -t 2000).

  prompt [<prompt> ...]
    Run an LLM prompt without wrapping, stream results back. This is a straight-through call to the LLM from the
    command line with a given prompt. This accepts piped input, if there is both piped input and a prompt then they
    will be concatenated together (prompt first). It is recommended that you wrap the prompt with quotes. The default
    GPT model is gpt-3.5-turbo.

  summarize [<files> ...]
    Semantically summarize a list of files (or piped input). We read in the file, if it is short then we hand it
    directly to the LLM and ask for a summary. If it is longer then we break it into chunks and ask for a list of
    facts from each chunk (max 8 chunks), then concatenate facts and ask GPT for an overall summary.

  gencmd <prompt> ...
    Generate a shell command from a prompt, i.e. pass in what you want, a shell command will be generated. Accepts
    piped input. You can use the -f flag to execute it sight-unseen.

  rewrite <prompt>
    Rewrite a file using a prompt, must specify either a file path or provide piped input, and can output to stdout,
    output to a given file, or edit the input file in-place. This command uses the OpenAI edit API rather than the
    completion API.

  exec [<command> ...]
    Execute a command and try to debug problems. The command can either be passed in or taken from the command
    register (if you have run gencmd in Console Mode).

  index [<paths> ...]
    Recursively index the current directory using embeddings. This will read each file, split it into chunks,
    embed the chunks, and write a .butterfish_index file to each directory caching the embeddings. If you re-run this
    it will skip over previously embedded files unless you force a re-index. This implements an exponential backoff
    if you hit OpenAI API rate limits.

  clearindex [<paths> ...]
    Clear paths from the index, both from the in-memory index (if in Console Mode) and to delete .butterfish_index
    files. Defaults to loading from the current directory but allows you to pass in paths to load.

  loadindex [<paths> ...]
    Load paths into the index. This is specifically for Console Mode when you want to load a set of cached indexes
    into memory. Defaults to loading from the current directory but allows you to pass in paths to load.

  showindex [<paths> ...]
    Show which files are present in the loaded index. You can pass in a path but it defaults to the current
    directory.

  indexsearch <query>
    Search embedding index and return relevant file snippets. This uses the embedding API to embed the search string,
    then does a brute-force cosine similarity against every indexed chunk of text, returning those chunks and their
    scores.

  indexquestion <question>
    Ask a question using the embeddings index. This fetches text snippets from the index and passes them to the LLM
    to generate an answer, thus you need to run the index command first.

Run "butterfish <command> --help" for more information on a command.

butterfish: error: expected one of "shell",  "prompt",  "summarize",  "gencmd",  "rewrite",  ...
➜  code code ~/.zshrc
➜  code source ~/.zshrc
➜  code source ~/.zshrc
➜  code bfs
Logging to /var/tmp/butterfish.log
Butterfish shell is already running, cannot wrap shell again (detected with BUTTERFISH_SHELL env var).
➜  code bfp
[same full usage output as above]

butterfish: error: unknown flag -p, did you mean one of "-h", "-v"?
➜  code bfp "what's going on"
[same full usage output as above]

butterfish: error: unknown flag -p, did you mean one of "-h", "-v"?

Ahh! So when you run shell mode it edits the PS1 variable, which is what sets the command prompt. If you `source ~/.zshrc`, that resets the prompt, so you would have to exit and restart the Butterfish shell after sourcing your shell profile. After a bunch of experimenting I haven't found a better way to add the emoji. The reason I can't add it only at the Butterfish layer is that zsh itself does calculations based on the width of the prompt.
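Note that the wrapper itself keeps running after sourcing; the `BUTTERFISH_SHELL` check later in the log confirms it. So the vanishing fish is cosmetic, and a minimal sketch of restoring the marker by hand, assuming Butterfish only modifies PS1 once at startup, is:

```shell
# The fish vanishing after `source ~/.zshrc` only affects the prompt
# string; the wrapper is still active. Re-append the marker manually
# (cosmetic only; this does not re-run Butterfish's prompt setup).
export PS1="${PS1}🐠 "
```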

A potential workaround is to turn off prompt editing in the Butterfish shell and set the marker yourself when you start it, but the idea is for it to work automatically without you having to change your own config.
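For reference, the aliases implied by the session above might look like this in ~/.zshrc. The names bf/bfs/bfp are guesses from the commands typed; the "unknown flag -p" error suggests bfp had been aliased to `butterfish -p` rather than to the `prompt` subcommand:

```shell
# Hypothetical zshrc aliases matching the names used in this thread.
# `prompt` is a subcommand, so the alias must spell it out; there is
# no -p flag (hence the "unknown flag -p" error above).
alias bf='butterfish'
alias bfs='butterfish shell'
alias bfp='butterfish prompt'
```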

Super open to ideas here
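One small idea in that direction: since Butterfish exports BUTTERFISH_SHELL while wrapping (that is how the "already running" error above is detected), a guarded function in the zshrc could refuse to double-wrap. A sketch, assuming the variable stays set for the wrapped shell's lifetime:

```shell
# Sketch: start the Butterfish shell only when not already wrapped,
# using the BUTTERFISH_SHELL env var mentioned in the error above.
bfs() {
  if [ -n "$BUTTERFISH_SHELL" ]; then
    echo "already inside a Butterfish shell" >&2
    return 1
  fi
  butterfish shell
}
```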