Travis James's repositories
AI-tamago
A local-ready LLM-generated and LLM-driven virtual pet with thoughts and feelings. 100% JavaScript.
candle-vllm
Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server.
carta
A lightweight, fast and extensible Svelte Markdown editor and viewer.
electric
Local-first sync layer for web and mobile apps. Build reactive, realtime, local-first apps directly on Postgres.
gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
immich
Self-hosted photo and video backup solution directly from your mobile phone.
iroh
Sync anywhere
lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
llama-recipes
Examples and recipes for the Llama 2 model.
make-real
Draw a UI and make it real.
manas
The Manas project aims to create a modular framework and ecosystem for building robust storage servers adhering to the Solid protocol in Rust.
Memory-Cache
MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent.
mistral-go
Mistral API client in Go.
mlx-examples
Examples in the MLX framework
node-solid-server
A Solid server on top of the file system in Node.js.
openrouter-runner
Inference engine powering open source models on OpenRouter
OrchardCore
Orchard Core is an open-source modular and multi-tenant application framework built with ASP.NET Core, and a content management system (CMS) built on top of that framework.
postgres_lsp
A Language Server for Postgres
privateGPT
Interact with your documents using the power of GPT, 100% privately, no data leaks
promptbase
All things prompt engineering
rags
Build ChatGPT over your data, all with natural language
refactor-platform-fe
Refactor Software Engineering Coaching Platform (Frontend)
refactor-platform-rs
Refactor Software Engineering Coaching Platform (Backend)
revm
Ethereum Virtual Machine written in Rust that is fast and simple to use.
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
svelte-exmarkdown
Svelte component to render markdown.
tensorrtllm_backend
The Triton TensorRT-LLM Backend
WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, and WizardMath.