bakks / butterfish

A shell with AI superpowers

Home Page: https://butterfi.sh

Feature Request: Enhanced Multiline and Clipboard Input in `butterfish prompt`

tomlue opened this issue

Problem:
When pasting multiline content from the clipboard into butterfish prompt, the input isn't processed correctly.

Proposed Solution:
Allow butterfish prompt to recognize when it is invoked without any arguments and automatically open the user's default text editor (falling back to vim) for input. This would mirror the behavior of:

apt-get install moreutils
vipe | butterfish prompt

Benefit Over Existing Solutions:
Using the vipe approach, the context written in the text editor disappears from the terminal once the input is submitted. With a custom method, butterfish could retain this context internally, allowing users to reference it later without cluttering the terminal output.

@tomlue Thanks for this suggestion! Can you say more about the multiline paste not working? This is controlled by the shell and in some cases it works for me, i.e. I can do something like butterfish prompt "<paste>" and the shell will extend down several lines, though more esoteric characters can break this I think.

I think it will be difficult to get a solution that works in every shell. One immediate problem is just with escaping quotes. The vipe | butterfish prompt approach works really well and almost makes this unnecessary. With vipe the context of the text written into the text editor is lost in the butterfish shell though.

Maybe there are better feature requests though. I will keep thinking.

Got it, yep quotes may present a problem. My initial thought is that the best pattern is to edit a local file and then pipe that file in, e.g.

vim context.txt
butterfish prompt < context.txt
butterfish prompt "here is a prompt that uses the context" < context.txt

In some ways I think that is simpler behavior than popping an editor open but I'm curious about your opinion on the pros/cons.

The vipe approach works pretty well for me; I'm satisfied enough with it that you can close this issue if you like. On Ubuntu:

apt-get install moreutils
alias bpc="vipe | butterfish prompt"

the bpc alias drops you into a command-line text editor and feeds the result to butterfish prompt.

Makes sense but I want to make sure I understand your workflow - is that something you would use frequently or only if you're having trouble with a multiline prompt or some other weirdly shaped prompt?

It is something I find myself using very frequently. I am often constructing prompts with a flow like:

  1. run some code I'm working on
  2. get a stack trace
  3. copy the stack trace to clipboard
  4. run vipe | butterfish prompt
  5. create a prompt like "can you help me with this stacktrace? {paste stacktrace} 'it came from this code' {paste code}"

In those scenarios some multistep text editing and copy/paste is happening.
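Assuming vipe and butterfish are on the path, that copy/paste workflow can sometimes be collapsed into a single pipeline; trace_prompt below is a hypothetical wrapper, not an existing command:

```shell
# Hypothetical sketch: run a failing command and feed its stack trace
# straight into the prompt editor, skipping the clipboard round-trip.
trace_prompt() {
  # Merge stderr into stdout, keep the tail (where the trace usually
  # is), then hand it to vipe for editing before prompting.
  "$@" 2>&1 | tail -n 40 | vipe | butterfish prompt
}
```

For example, `trace_prompt python my_script.py` would drop the last 40 lines of output into your editor for annotation before sending them to the LLM.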

A separate feature request might be to parse the prompt for things that look like file paths, cat each path, and add its contents to the prompt context. That way you could just paste a stack trace into butterfish and it would handle the rest. This would give butterfish a capability the ChatGPT web app doesn't have: reading local file paths to build custom prompts.
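A minimal sketch of that idea, assuming whitespace-separated paths in the prompt; expand_paths is a hypothetical helper, not a butterfish command:

```shell
# Hypothetical sketch: scan a prompt for tokens that are existing file
# paths and append each file's contents as extra context.
expand_paths() {
  prompt="$1"
  printf '%s\n' "$prompt"
  # Intentionally unquoted so the prompt splits into words.
  for word in $prompt; do
    if [ -f "$word" ]; then
      printf '\n--- %s ---\n' "$word"
      cat "$word"
    fi
  done
}
```

Usage would look like `expand_paths "can you explain /tmp/trace.txt" | butterfish prompt`. A real implementation would need to handle paths with spaces and quoting, which simple word-splitting cannot.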

Ok I think I see -- Please try out Shell Mode and see if that works for you, I use it for that kind of workflow really frequently. Basically the shell history becomes the prompt context. I'm really curious if it solves your problem, please give feedback if it doesn't quite work.

Here's an example:
[Screenshot: Shell Mode example, 2023-08-25]

2 more thoughts:

  • The common problem with the pattern above (and with working with code in general) is that you fill up the context window pretty quickly. One way I've been working around that is https://github.com/bakks/tako, which is pretty underdeveloped, but the basic idea is that you can pull meaningful pieces out of code files to fit into the context window, for example fetching a specific function by name.
  • I really like the idea of being able to paste in a stack trace or other text that has file paths and be able to pull open those paths automatically, let me think more about that.

cating the file so that it gets added to the shell context mostly works. It does seem like sometimes butterfish is losing context. I guess it takes the most recent context?

I agree, thought 2 seems like a nice feature, and it's something butterfish can do that the ChatGPT web app can't.

Using tree sitter to parse code also seems smart. Something like:

  1. Tree-sit all the code files and extract expressions
  2. Embed the expressions in a vector store
  3. Get a prompt, embed it, and use the vector store to find the most relevant embedded expressions
  4. Build a new prompt with the relevant expressions
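Real embeddings need a model and a vector store, but the retrieval step ("find the files most relevant to a prompt") can be illustrated with a crude keyword-overlap stand-in scored with grep; rank_files is a hypothetical name:

```shell
# Toy stand-in for the retrieval step: score each file by how many of
# its lines contain a prompt keyword (case-insensitive, fixed-string
# match), then sort by score. A real system would use embeddings.
rank_files() {
  prompt="$1"; shift
  for f in "$@"; do
    score=0
    for word in $prompt; do
      n=$(grep -Fic "$word" "$f") || true
      score=$((score + n))
    done
    printf '%s %s\n' "$score" "$f"
  done | sort -rn
}
```

For example, `rank_files "parse tree" src/*.go | head -n 3` would surface the three files with the most keyword hits.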

You start wanting a local LLM though, because that's a lot of embeddings. That's another feature request (maybe better filed as separate issues): we should be able to use LLaMA 2 or other local models.

In terms of the terminal context getting too long, you could also chunk the terminal context and do the same vector store approach.

It would be interesting to keep a local vector store running with access to a wider context, plugins for email and the like would be handy.

> cating the file so that it gets added to the shell context mostly works. It does seem like sometimes butterfish is losing context. I guess it takes the most recent context?

Yeah, it fits as much history into the context window as it can, but it truncates individual line items (like a specific command's output) as a strategy to manage this; plus, the context window fills up quickly if a lot of output is printed. So the history is a quick way to build context.

If you want to see what's actually going into the prompt, run shell mode with the -v flag (e.g. butterfish shell -v) and watch the log file; that can help you tell whether you're fitting what you want into the prompt.

I'm less bullish on the vector store / RAG approach because 1) I think that strategy is more effective for written text than for code / terminal commands, and 2) like all search mechanisms it isn't guaranteed to surface exactly what you need, and unless you show the vector store response to the user it's hard to know whether bad results come from the LLM or from the vector store.

The question I've been pondering lately is how much coding-agent stuff I want to try / put into Butterfish, and I've pretty much decided that it doesn't make sense to go very far down that road because I think the best stuff will be partly built into the editor (like sourcegraph cody or cursor.so).

So here's what I'm concluding - I think the shell history is a good/cheap/stupid/simple way to manage context in most cases, but one good addition might be to have a command like butterfish promptedit which opens your command line editor to a buffer that is then sent as the prompt. I don't want to do this on butterfish prompt because I think that might be confusing if someone didn't know about that functionality and forgot to pass in a prompt, or the stdin pipe didn't work or something. So two final questions:

  • Do you think that would be useful / would you use that immediately?
  • Would you want that to always open up the same file/buffer every time you run it or should it be a new buffer every time?

Sorry if this is a lot of back and forth but it's super helpful for me to understand how people are using this tool and what features might be good, thanks for your help!
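For what it's worth, the described behavior can be approximated today with a small shell function; promptedit here is just a hypothetical name for the sketch, and it assumes vim as the fallback editor:

```shell
# Rough sketch of the proposed behavior: open $EDITOR (falling back to
# vim) on a cached prompt file, then send its contents as the prompt.
promptedit() {
  f="${1:-$HOME/.config/butterfish/prompt.txt}"
  mkdir -p "$(dirname "$f")"     # ensure the cache directory exists
  "${EDITOR:-vim}" "$f"          # edit the buffer interactively
  butterfish prompt < "$f"       # pipe the saved buffer as the prompt
}
```

Reusing the same file every run answers the second question one way: the previous prompt is still in the buffer when the editor opens, which doubles as a lightweight history.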

If you built that, I would use it instead of my bpc alias (vipe | butterfish prompt), which I use very frequently.

I commonly use it to copy in bits of code. Enabling references to expressions/paths in the prompts would maybe be a better way of doing this.

Added this command, will release soon. Note that this isn't doing anything to deal with a buffer that extends past the token limit of the model you're using.

butterfish promptedit --help
Usage: butterfish promptedit

Like the prompt command, but this opens a local file with your
default editor (set with the EDITOR env var) that will then be
passed as a prompt in the LLM call.

Flags:
  -h, --help                     Show context-sensitive help.
  -v, --verbose                  Verbose mode, prints full LLM
                                 prompts (sometimes to log file).
                                 Use multiple times for more
                                 verbosity, e.g. -vv.
  -V, --version                  Print version information and
                                 exit.

  -f, --file="~/.config/butterfish/prompt.txt"
                                 Cached prompt file to use.
  -e, --editor=""                Editor to use for the prompt.
  -m, --model="gpt-3.5-turbo"    GPT model to use for the prompt.
  -n, --num-tokens=1024          Maximum number of tokens to
                                 generate.
  -T, --temperature=0.7          Temperature to use for the prompt,
                                 higher temperature indicates more
                                 freedom/randomness when generating
                                 each token.

OK! This is deployed now in v0.1.8, please give it a try. What I did for myself is add a line like this in ~/.zshrc:

export EDITOR=nvim

works for me, very nice!