neuma is a minimalistic ChatGPT interface for the command line.
- Conversation management (create, save, copy, delete)
- Modes (normal, table, code, translate, impersonate, summarize, csv, image, terminal)
- Personae profiles with custom starting prompts
- Embeddings management (embed documents, create vector dbs)
- Voice input / output
- and a few other things...
These instructions are for Linux; they may vary on other systems.
Make sure recent versions of the following packages are installed on your system:
git python python-pip python-virtualenv portaudio19-dev
You can launch the install script with the following command:
bash <(wget -qO- https://raw.githubusercontent.com/mwmdev/neuma/main/install.sh)
During the installation process you will be prompted for a ChatGPT API key.
If you prefer a manual install, do the following:
Clone this repository to your local machine using:
git clone https://github.com/mwmdev/neuma.git
Navigate to the directory where the repository was cloned:
cd neuma
Create a virtual environment with:
python -m venv env
Activate the virtual environment with:
source env/bin/activate
Install the required dependencies by running:
pip install -r requirements.txt
Rename .env_example to .env with:
mv .env_example .env
Edit .env and add your ChatGPT API key.
Move all config files to your ~/.config/neuma/ folder with:
mkdir ~/.config/neuma && mv .env config.toml persona.toml ~/.config/neuma/
Finally, run the script with:
python neuma.py
To make it easier to run neuma, you can create an alias in your .bashrc or .zshrc file by adding the following line:
alias n='source /path/to/neuma/env/bin/activate && python /path/to/neuma.py'
Use neuma as an interactive chat: write your prompt and press Enter. Wait for the answer, then continue the discussion.
Press h followed by Enter to list all the commands.
> h
┌─────────────────────┬──────────────────────────────────────────────────────┐
│ Command             │ Description                                          │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ h                   │ Display this help section                            │
│ r                   │ Restart                                              │
│ c                   │ List saved conversations                             │
│ c [conversation]    │ Open conversation [conversation]                     │
│ cc                  │ Create a new conversation                            │
│ cs [conversation]   │ Save the current conversation as [conversation]      │
│ ct [conversation]   │ Trash conversation [conversation]                    │
│ cy                  │ Copy current conversation to clipboard               │
│ m                   │ List available modes                                 │
│ m [mode]            │ Switch to mode [mode]                                │
│ p                   │ List available personae                              │
│ p [persona]         │ Switch to persona [persona]                          │
│ vi                  │ Switch to voice input                                │
│ vo                  │ Switch on voice output                               │
│ d                   │ List available vector dbs                            │
│ d [db]              │ Create or switch to vector db [db]                   │
│ dt [db]             │ Trash vector db [db]                                 │
│ e [/path/to/folder] │ Embed all files in [/path/to/folder] into current db │
│ y                   │ Copy last answer to clipboard                        │
│ t                   │ Get the current temperature                          │
│ t [temp]            │ Set the temperature to [temp]                        │
│ mt                  │ Get the current max_tokens value                     │
│ mt [max_tokens]     │ Set the max_tokens to [max_tokens]                   │
│ g                   │ List available GPT models                            │
│ g [model]           │ Set GPT model to [model]                             │
│ lm                  │ List available microphones                           │
│ cls                 │ Clear the screen                                     │
│ q                   │ Quit                                                 │
└─────────────────────┴──────────────────────────────────────────────────────┘
A conversation is a series of prompts and answers. Conversations are stored as .neu text files in the data folder defined in config.toml.
c : List all saved conversations
c [conversation] : Open conversation [conversation]
cc : Create a new conversation
cs [conversation] : Save the current conversation as [conversation]
ct [conversation] : Trash the conversation [conversation]
cy : Copy the current conversation to the clipboard
Modes define specific expected output behaviors. Custom modes are added by editing the [modes] section in the config.toml file.
m : List available modes
m [mode] : Switch to mode [mode]
Here are some of the built-in modes:
m table
Displays the response in a table. Works best when column headers are defined explicitly in the prompt and temperature is set to 0.
Example:
> Five Hugo prize winners by : Name, Book, Year
Output:
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━┓
┃ Name ┃ Book ┃ Year ┃
┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━┩
│ Isaac Asimov │ Foundation’s Edge │ 1983 │
├────────────────────┼───────────────────────────────────────┼──────┤
│ Orson Scott Card │ Ender’s Game │ 1986 │
├────────────────────┼───────────────────────────────────────┼──────┤
│ Ursula K. Le Guin │ The Dispossessed: An Ambiguous Utopia │ 1975 │
├────────────────────┼───────────────────────────────────────┼──────┤
│ Arthur C. Clarke │ Rendezvous with Rama │ 1974 │
├────────────────────┼───────────────────────────────────────┼──────┤
│ Robert A. Heinlein │ Double Star │ 1956 │
└────────────────────┴───────────────────────────────────────┴──────┘
m code
Displays syntax-highlighted code. Works best when temperature is set to 0.
Start with # followed by the name of the language and the prompt.
Example:
> #html simple login form
Output:
<!DOCTYPE html>
<html>
<head>
<title>Login Form</title>
</head>
<body>
<!-- Login form starts here -->
<form action="#" method="post">
<h2>Login</h2>
<label for="username">Username:</label><br>
<input type="text" id="username" name="username"><br><br>
<label for="password">Password:</label><br>
<input type="password" id="password" name="password"><br><br>
<input type="submit" value="Submit">
</form>
<!-- Login form ends here -->
</body>
</html>
m trans
Translates text into another language. Works best when temperature is set to 0.
Start with # followed by the name of the language to translate into and the word or phrase to translate.
Example:
> #german What's the carbon footprint of nuclear energy ?
Output:
Wie groß ist der CO2-Fußabdruck von Kernenergie?
m char
Impersonates a character.
Start with # followed by the name of the character to impersonate and your prompt.
Example:
> #Bob_Marley Write the chorus to a new song.
Output:
"Rise up and stand tall,
Embrace the love that's all,
Let your heart blaze and brawl,
As we rock to the beat of this call."
m csv
Generates a CSV table. Works best when temperature is set to 0.
Start with # followed by the separator you want to use and your prompt.
Example:
> #; Five economics nobel prize winners by name, year, country and school of thought
Output:
1; Milton Friedman; 1976; USA; Monetarism;
2; Amartya Sen; 1998; India; Welfare economics;
3; Joseph Stiglitz; 2001; USA; Information economics;
4; Paul Krugman; 2008; USA; New trade theory;
5; Esther Duflo; 2019; France; Development economics
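Separator-delimited output like this is easy to post-process with standard Unix tools. As an illustrative sketch (operating on the sample row above, not a real neuma invocation), the name column can be extracted by splitting on the chosen "; " separator:

```shell
# Split one csv-mode sample line on the '; ' separator and
# print the second field (the name column).
echo '1; Milton Friedman; 1976; USA; Monetarism;' | awk -F'; ' '{print $2}'
# prints: Milton Friedman
```

In practice you would pipe or redirect neuma's non-interactive output (see the command-line arguments section) into such a filter.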
m img
Generates images with dall-e.
Example:
> a peaceful lake scenery
Output:
Image generated and saved to : ./img/a-peaceful-lake-scenery-20240328175639.png
Image settings are available in the config.toml file:
[images]
model = "dall-e-2" # either "dall-e-2" or "dall-e-3"
size = "1024x1024" # for available sizes see https://platform.openai.com/docs/api-reference/images/create
quality = "standard" # either "standard" or "hd" (only for "dall-e-3")
path = "./img/" # path to save the generated images
open = false # open the generated image automatically
open_command = "feh" # the command to open the image
m term
Generates terminal commands. Works best when temperature is set to 0.
Describe what you want to achieve and it will return a corresponding terminal command.
Example:
> find all files in this directory modified in the last 7 days
Output:
find . -type f -mtime -7
You can then copy-paste the command into your terminal and run it (use with caution!).
Personae are profiles defined by a specific starting prompt and temperature; they are configured in the personae.toml file.
p : List available personae
p [persona] : Switch to persona [persona]
The default persona has this starting prompt:
[[persona]]
name = "default"
temp = 0.5
[[persona.messages]]
role = "system"
content = "You are a helpful assistant."
[[persona.messages]]
role = "user"
content = "What is the capital of Mexico?"
[[persona.messages]]
role = "assistant"
content = "The capital of Mexico is Mexico City"
To add new personae, copy-paste the default persona, give it a new name, then edit the system prompt.
The user and assistant messages are optional, but they help with accuracy. You can add as many user/assistant message pairs as you like (each one increases the token count).
Here are some examples of personae:
[[persona]]
name = "teacher"
temp = 0.5
[[persona.messages]]
role = "system"
content = "Teach me how # works by asking questions about my level of understanding of necessary concepts. With each response, fill in gaps in my understanding, then recursively ask me more questions to check my understanding."
[[persona]]
name = "handyman"
temp = 0.65
[[persona.messages]]
role = "system"
content = "You are a helpful handyman and a DIY expert. You will teach me to complete simple home improvement and maintenance projects using lists of necessary tools and simple step-by-step instructions."
[[persona.messages]]
role = "user"
content = "My lightbulb is broken."
[[persona.messages]]
role = "assistant"
content = "I can help you replace your lightbulb. You will need : a ladder, a new lightbulb, and a screwdriver. 1. First, turn off the light switch. For more security you can also turn off the electricity at the circuit breaker. 2. Then, climb the ladder and unscrew the lightbulb. 3. Finally, screw in the new lightbulb and turn the light switch back on."
Voice output is configured in config.toml; here's a list of supported voices.
vo : Toggle voice output
Voice input can be used to transcribe voice to text.
vi : Switch to voice input
Saying "Disable voice input" will switch back to text input mode.
You can list available microphones with lm
and set the one you want to use in the audio
section of the config file.
[audio]
input_device = 4 # the device for voice input (list devices with "lm")
input_timeout = 5 # the number of seconds after which listening stops and transcription starts
input_limit = 20 # the maximum number of seconds that can be listened to in one go
Embeddings allow you to embed documents into the discussion to serve as context for the answers.
d : List all available vector dbs
d [db] : Create or switch to vector db [db]
dt [db] : Trash vector db [db] (this deletes all files and folders related to the vector db)
e [/path/to/files] : Embed all files in /path/to/files/ and store them in the current vector db
To chat with documents, you can do the following:
- Create a persona with a profile that restricts answers to the context; here's an example:
[[persona]]
name = "docs"
temp = 0.2
[[persona.messages]]
role = "system"
content = "Answer the question based only on the following context: \n\n {context} \n\n---\n\n Answer the question based on the above context: "
- Switch to that persona with p docs
- Create a vector db with d mydb
- Embed the documents with e /path/to/files
- Ask a question
You can also reference documents directly (without embedding), using the ~{f: }~ notation.
> Refactor the following code : ~{f:example.py}~
Use the ~{w: }~ notation to insert the content of a URL into the prompt.
> Summarize the following article : ~{w:https://www.freethink.com/health/lsd-mindmed-phase-2}~
Note: This can greatly increase the number of tokens, so use with caution. For large content, use embeddings instead.
You can switch between different GPT models. The default model is defined in the config.toml file.
g : List available GPT models
> g
GPT Models
gpt-3.5-turbo-0125
gpt-4-turbo-preview
gpt-4-0125-preview
gpt-3.5-turbo-1106
gpt-4-1106-preview
gpt-4-vision-preview
gpt-3.5-turbo-instruct-0914
gpt-3.5-turbo-instruct
gpt-4
gpt-4-0613
gpt-3.5-turbo-0613
gpt-3.5-turbo-16k-0613
gpt-3.5-turbo-16k
gpt-3.5-turbo-0301
gpt-3.5-turbo <
g [model] : Set GPT model to [model]
> g gpt-3.5-turbo
Model set to gpt-3.5-turbo.
> when is your knowledge cutoff
My training data includes information up until September 2021.
> g gpt-4-turbo-preview
Model set to gpt-4-turbo-preview.
> when is your knowledge cutoff
My knowledge is up to date until April 2023.
y : Copy the last answer to the clipboard
t [temperature] : Set the ChatGPT model's temperature
tp [top_p] : Set the ChatGPT model's top_p
mt [max_tokens] : Set the ChatGPT model's max_tokens
cls : Clear the screen
r : Restart the application
q : Quit
By default, neuma starts in interactive mode, but you can also use command-line arguments to return an answer right away, which can be useful for output redirection or piping.
> python neuma.py -h
usage: neuma.py [-h] [-i INPUT] [-p PERSONAE] [-m MODE] [-t TEMP]
neuma is a minimalistic ChatGPT interface for the command line.
options:
-h, --help Show this help message and exit
-i INPUT, --input INPUT Input prompt
-p PERSONA, --persona PERSONA Set persona
-m MODE, --mode MODE Set mode
-t TEMP, --temp TEMP Set temperature
-vo, --voice-output Enable voice output
Examples:
> python neuma.py -t 1.2 -i "Write a haiku about the moon"
Silver orb casts light,
Guiding night journeys below
Moon’s tranquil, bright glow.
> python neuma.py -t 0 -m "table" -i "Five US National parks by : name, size, climate"
┏━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━┓
┃ ┃ National Park ┃ Size (acres) ┃ Climate ┃ ┃
┡━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━┩
│ │ Yellowstone │ 2,219,791 │ Continental │ │
├──┼────────────────────┼────────────────┼───────────────────────────┼──┤
│ │ Yosemite │ 761,747 │ Mediterranean │ │
├──┼────────────────────┼────────────────┼───────────────────────────┼──┤
│ │ Grand Canyon │ 1,217,262 │ Arid │ │
├──┼────────────────────┼────────────────┼───────────────────────────┼──┤
│ │ Glacier │ 1,013,125 │ Continental │ │
├──┼────────────────────┼────────────────┼───────────────────────────┼──┤
│ │ Rocky Mountain │ 265,807 │ Alpine │ │
└──┴────────────────────┴────────────────┴───────────────────────────┴──┘
> python neuma.py -m img -i "Escher's lost masterpiece"
Image generated and saved to : ./img/escher-s-lost-masterpiece-20240411203242.png
> python neuma.py -m term -i "join all PDFs in this directory ordered by name into presentation.pdf"
pdfunite $(ls -1v *.pdf) presentation.pdf
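Because non-interactive answers are written to standard output, they also compose with shell redirection. A hypothetical example (assumes a working neuma install with a configured API key; the generated file contents will vary):

```shell
# Ask for code in one shot and redirect the answer straight into a file
# (illustrative; requires a configured neuma setup and API key).
python neuma.py -t 0 -m code -i "#python fizzbuzz" > fizzbuzz.py
```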
The colors of each type of text (prompt, answer, info message, etc.) are defined in the config.toml file (the default theme is gruvbox dark).
[theme]
section = "#d3869b" # pink
info = "#8ec07c" # aqua
success = "#b8bb26" # green
warning = "#fabd2f" # yellow
error = "#fb4934" # red
prompt = "#928374" # grey
answer = "#83a598" # blue
If you get an ImportError: GLIBCXX_3.4.30 not found error during install, run the following command:
conda install -c conda-forge gcc=12.1.0
neuma is derived from the Greek πνεῦμα, meaning breath or spirit.