pluja / maestro

Turn natural language into commands. Your CLI tasks, now as easy as a conversation. Run it 100% offline, or use OpenAI's models.


[maestro banner]

maestro converts natural language instructions into CLI commands. It is designed for both offline use with Ollama and online use with the OpenAI API.

Key Features

  • Ease of Use: Simply type your instructions and press enter.
  • Direct Execution: Use the -e flag to execute commands directly, with a confirmation prompt for safety (see the example after this list).
  • Context Awareness: Maestro understands your system's context, including the current working directory, operating system, and user.
  • Support for Multiple LLM Models: Choose from a variety of models for offline and online usage.
  • Lightweight: Maestro is a single small binary with no dependencies.
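
A quick example of what a session can look like (the prompts below are illustrative, not actual maestro output):

  # Ask maestro to suggest a command
  ./maestro "find all .log files larger than 100MB"

  # Ask maestro to run the suggested command, after a confirmation prompt
  ./maestro -e "find all .log files larger than 100MB"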

Installation

  1. Download the latest binary from the releases page.
  2. Run ./maestro -h to see the available options.

Tip: Place the binary in a directory within your $PATH and rename it to maestro for global access, e.g., sudo mv ./maestro /usr/local/bin/maestro.
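
On Linux, the whole installation might look like this (the download URL and asset name are assumptions; check the releases page for the exact file for your platform):

  # Download the binary (adjust the URL to the actual release asset)
  wget https://github.com/pluja/maestro/releases/latest/download/maestro
  chmod +x ./maestro

  # Optional: make it available globally
  sudo mv ./maestro /usr/local/bin/maestro
  maestro -h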

Offline Usage with Ollama

Important

You need Ollama v0.1.24 or newer.

  1. Install Ollama from https://ollama.com (or use Ollama's official Docker image).
  2. Download models using ollama pull <model-name>.
    • Note: Unless you have configured a different model, pull the default one: ollama pull dolphin-mistral:latest
  3. Start the Ollama server with ollama serve.
  4. Configure Maestro to use Ollama with ./maestro -set-ollama-url <ollama-url>, for example, ./maestro -set-ollama-url http://localhost:8080.
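
Putting the steps together, a typical offline setup looks like this (the final prompt is illustrative; the URL should point at wherever your Ollama server is listening):

  ollama pull dolphin-mistral:latest

  # Run the Ollama server in a separate terminal (or as a background service)
  ollama serve

  ./maestro -set-ollama-url http://localhost:8080
  ./maestro "show the 10 largest files in this directory"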

Online Usage with OpenAI's API

  1. Obtain an API token from OpenAI.
  2. Set the token using ./maestro -set-openai-token <your-token>.
  3. Choose between GPT-4 Turbo with the -4 flag and GPT-3.5 Turbo with the -3 flag.
    • Example: ./maestro -4 <prompt>
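
For example, to set the token once and then ask GPT-4 Turbo for a command (the prompt is illustrative):

  ./maestro -set-openai-token <your-token>
  ./maestro -4 "compress this directory into a tar.gz archive"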
