SecureAI Tools

Private and secure AI tools for everyone's productivity.

Discord

Highlights

  • Chat with AI: Allows you to chat with AI models (e.g. ChatGPT).
  • Chat with Documents: Allows you to chat with documents (PDFs for now). Demo videos are below.
  • Local inference: Runs AI models locally. Supports 100+ open-source (and semi-open-source) AI models through Ollama.
  • Built-in authentication: Simple email/password authentication, so the app can be exposed to the internet and accessed from anywhere.
  • Built-in user management: Family members or coworkers can use it as well if desired.
  • Self-hosting optimized: Comes with the necessary scripts and docker-compose files to get started in under 5 minutes.
  • Lightweight: A simple web app with a SQLite DB, avoiding the need to run a separate database container. Data is persisted on the host machine through Docker volumes.

Demos

Chat with documents demo: OpenAI's GPT-3.5

Chat with documents demo: Locally running Mistral (M2 MacBook)

Install

Docker Compose [Recommended]

1. Create a directory

mkdir secure-ai-tools && cd secure-ai-tools

2. Run the set-up script

The script downloads docker-compose.yml and generates a .env file with sensible defaults.

curl -sL https://github.com/SecureAI-Tools/SecureAI-Tools/releases/latest/download/set-up.sh | sh
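If you prefer to review the script before executing it (a good habit with curl-pipe-to-shell installs), you can download it first and then run it:

```shell
# Download the set-up script without executing it
curl -sL -o set-up.sh https://github.com/SecureAI-Tools/SecureAI-Tools/releases/latest/download/set-up.sh

# Inspect the contents before running
cat set-up.sh

# Run it once you are satisfied; it generates docker-compose.yml and .env
sh set-up.sh
```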

3. [Optional] Edit .env file

Customize the .env file created in the above step to your liking.
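Before editing, it can be handy to keep a copy of the generated defaults so you can diff against them later. A minimal sketch:

```shell
# Preserve the defaults generated by the set-up script
cp .env .env.default

# Edit with your editor of choice
${EDITOR:-nano} .env
```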

4. [Optional] On Linux machines with Nvidia GPUs, enable GPU support

To accelerate inference on Linux machines, you will need to enable GPUs. This is not strictly required, since the inference service can also run in CPU-only mode, but inference on a CPU is slow. If your machine has an Nvidia GPU, this step is recommended.

  1. Install the Nvidia container toolkit if not already installed.
  2. Uncomment the deploy: block in the docker-compose.yml file. It gives the inference service access to Nvidia GPUs.
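For reference, GPU access in Docker Compose uses the standard device-reservation syntax shown below. This is a generic sketch, not the project's exact file — the service name here is illustrative, so match it to the inference service defined in the downloaded docker-compose.yml:

```yaml
services:
  inference:
    # ...
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all          # or an integer to limit the number of GPUs
              capabilities: [gpu]
```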

5. Run docker compose

docker compose up -d
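Since `-d` detaches the containers, it is worth confirming that they actually came up:

```shell
# List the services and their current state
docker compose ps

# Tail the logs if a service is restarting or unhealthy (Ctrl-C to stop)
docker compose logs -f
```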

6. Post-installation set-up

  1. Log in at http://localhost:28669/log-in using the initial credentials below, and change the password.

    • Email

      bruce@wayne-enterprises.com
      
    • Password

      SecureAIToolsFTW!
      
  2. Set up the AI model by going to http://localhost:28669/-/settings?tab=ai

  3. Navigate to http://localhost:28669/- and start using AI tools.
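To quickly check from the command line that the web app is reachable before opening a browser, you can probe the log-in page:

```shell
# Print only the HTTP status line; anything in the 2xx/3xx range means the app is up
curl -sI http://localhost:28669/log-in | head -n 1
```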

Features wishlist

A set of features on our todo list (in no particular order).

  • ✅ Chat with documents
  • ✅ Support for OpenAI, Claude, etc. APIs
  • Support for markdown rendering
  • Chat sharing
  • Mobile friendly UI
  • Specify AI model at chat-creation time
  • Prompt templates library


License: GNU Affero General Public License v3.0


Languages

TypeScript 98.1%, JavaScript 0.6%, Dockerfile 0.6%, Shell 0.5%, CSS 0.2%