Jan (janhq)

An open source alternative to OpenAI that runs on your own computer or server

Home Page: https://jan.ai

Twitter: @janframework


Jan's repositories

jan

Jan is an open source alternative to ChatGPT that runs 100% offline on your computer, with support for multiple engines (llama.cpp, TensorRT-LLM).

Language: TypeScript · License: AGPL-3.0 · Stargazers: 18755 · Issues: 101 · Issues: 1412
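Since Jan exposes a local OpenAI-compatible API, existing OpenAI client code can be pointed at it. The sketch below builds and sends an OpenAI-style chat-completion request using only the standard library; the port (1337) and model id are illustrative assumptions, not confirmed defaults.

```python
import json
import urllib.request

# Assumed local endpoint for illustration -- check your Jan settings
# for the actual host, port, and installed model id.
BASE_URL = "http://localhost:1337/v1"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }


def send_chat_request(payload: dict) -> dict:
    """POST the payload to the local server (requires Jan to be running)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the request and response shapes match OpenAI's, swapping the base URL is the only change an existing integration needs.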

cortex

Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan

Language: C++ · License: AGPL-3.0 · Stargazers: 1651 · Issues: 15 · Issues: 269

awesome-local-ai

An awesome repository of local AI tools

nitro-tensorrt-llm

Nitro is a C++ inference server built on TensorRT-LLM with an OpenAI-compatible API. Runs blazing-fast inference on NVIDIA GPUs. Used in Jan.

Language: C++ · License: Apache-2.0 · Stargazers: 27 · Issues: 1 · Issues: 15
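OpenAI-compatible servers such as Nitro typically stream tokens as Server-Sent Events: each chunk arrives on a `data: {...}` line, and the stream ends with the sentinel `data: [DONE]`. A minimal parser sketch for that framing, assuming the standard OpenAI streaming format:

```python
import json


def parse_sse_stream(raw: str) -> list:
    """Collect the JSON chunks from an OpenAI-style SSE streaming response.

    Each event arrives as a line of the form `data: {...}`; the stream
    is terminated by the sentinel `data: [DONE]`.
    """
    chunks = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separator lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunks.append(json.loads(payload))
    return chunks
```

Concatenating each chunk's `choices[0]["delta"]["content"]` reassembles the streamed completion text.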

docs

Jan.ai Website & Documentation

Language: JavaScript · License: MIT · Stargazers: 4 · Issues: 2 · Issues: 2

cortex.python-runtime

C++ code that runs an embedded Python runtime

Language: C++ · License: AGPL-3.0 · Stargazers: 2 · Issues: 0 · Issues: 0

thinking-machines

Thinking Machines

Language: C++ · License: AGPL-3.0 · Stargazers: 1 · Issues: 0 · Issues: 0

open-foundry

R&D experiments

Language: Jupyter Notebook · License: AGPL-3.0 · Stargazers: 1 · Issues: 1 · Issues: 0

Real-ESRGAN

Real-ESRGAN aims at developing practical algorithms for general image/video restoration.

Language: Python · License: BSD-3-Clause · Stargazers: 1 · Issues: 1 · Issues: 0

TensorRT

NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.

Language: C++ · License: Apache-2.0 · Stargazers: 1 · Issues: 1 · Issues: 0

charts

This repository contains Helm charts for our team.

Language: Smarty · Stargazers: 0 · Issues: 4 · Issues: 0

infinity

The AI-native database built for LLM applications, providing incredibly fast vector and full-text search

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

llama.cpp-avx-vnni

Port of Facebook's LLaMA model in C/C++

Language: C++ · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

openai_trtllm

OpenAI-compatible API for the TensorRT-LLM Triton backend

Language: Rust · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

pymaker

Make the py

Stargazers: 0 · Issues: 0 · Issues: 0

tensorrtllm_backend

The Triton TensorRT-LLM Backend

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

trt-llm-as-openai-windows

This reference implementation lets existing OpenAI-integrated apps run TRT-LLM inference locally on a GeForce GPU on Windows instead of in the cloud.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0