

llmail

🤖 LLM + ✉️ email = 🔥 llmail

A collection of experiments for running local LLMs to label incoming emails.

Setup

  1. conda create -n llmail python=3.11
  2. conda activate llmail
  3. pip install -r requirements.txt
  4. Create an app password for your Gmail account
  5. cp .env.example .env and fill in your credentials (see the sketch below)
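
If you are unsure what goes into .env, the sketch below is one way to check that the app password works over IMAP. The variable names (GMAIL_ADDRESS, GMAIL_APP_PASSWORD) are only guesses; use whatever names .env.example actually defines.

```python
# Minimal sketch, assuming .env holds a Gmail address and an app password;
# the variable names here are placeholders, check .env.example for the real ones.
import imaplib
import os

from dotenv import load_dotenv

load_dotenv()  # read .env into the process environment

user = os.environ["GMAIL_ADDRESS"]               # assumed name
app_password = os.environ["GMAIL_APP_PASSWORD"]  # assumed name

with imaplib.IMAP4_SSL("imap.gmail.com") as imap:
    imap.login(user, app_password)
    status, data = imap.select("INBOX", readonly=True)
    print(f"Login OK, {int(data[0])} messages in INBOX")
```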

Run

You can try the work-in-progress notebooks in the experiments folder.
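
The notebooks define the actual pipelines; purely as an illustration of the idea, here is a hedged sketch of labelling a single email with a local Llama 2 model through llama-cpp-python. The model path and label set are assumptions, not taken from this repo.

```python
# Illustrative only: label one email with a local Llama 2 model via llama-cpp-python.
# The GGUF path and the label set are assumptions, not this repo's implementation.
from llama_cpp import Llama

LABELS = ["work", "personal", "newsletter", "spam"]  # hypothetical label set

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

def label_email(subject: str, body: str) -> str:
    prompt = (
        "Classify the email below into exactly one of these labels: "
        f"{', '.join(LABELS)}.\n\n"
        f"Subject: {subject}\n\n{body[:2000]}\n\nLabel:"
    )
    out = llm(prompt, max_tokens=8, temperature=0.0, stop=["\n"])
    return out["choices"][0]["text"].strip().lower()

print(label_email("Weekly team sync", "Hi all, moving our sync to Thursday 10am."))
```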

Looking for 🔎

I am looking to reduce Llama 2 CPU latency as much as possible. Let me know if you have a good solution. I am currently exploring FHE/MPC, speculative sampling, and MoE.
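
Of those, speculative sampling is the easiest to try without new infrastructure. The sketch below uses Hugging Face assisted generation (a small draft model proposes tokens that the larger target model verifies); the model choices are examples, not anything this repo ships.

```python
# Sketch of speculative decoding via Hugging Face "assisted generation":
# a small draft model proposes tokens, the target model verifies them in one pass.
# Model names are examples; TinyLlama shares the Llama 2 tokenizer, which is required here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-2-7b-chat-hf"       # assumed target model (gated on the Hub)
draft_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # assumed draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.float32)
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.float32)

inputs = tokenizer("Label this email: 'Your invoice is attached.'\nLabel:", return_tensors="pt")
out = target.generate(
    **inputs,
    assistant_model=draft,   # enables assisted / speculative decoding
    max_new_tokens=8,
    do_sample=False,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```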

About

Experiments with an LLM for your inbox

License: GNU General Public License v3.0


Languages

JavaScript 45.4% · Jupyter Notebook 33.9% · Python 18.4% · HTML 2.1% · CSS 0.2%