Sigrid Jin (ง'̀-'́)ง oO (sigridjineth)

sigridjineth

Company: Machine Learning & Backend Engineer

Location: Seoul • .°• Bay Area (SF)

Home Page: sigridjin.medium.com

Twitter: @sigridjin_eth

Organizations
30th-THE-SOPT-Server-Part
angelhackseoul
Code-for-Korea
DSRV-DevGuild
NullFull
postech-dao
sullivanproject

Sigrid Jin (ง'̀-'́)ง oO's repositories

candle-vllm

Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server.

License: MIT · Stargazers: 1 · Issues: 0
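
A minimal client-side sketch of how an OpenAI-compatible server like the one candle-vllm provides can be queried, using the official openai Python client. The base URL, port, and model name below are placeholders chosen for illustration, not defaults documented by the project.

    # Sketch: query an OpenAI-compatible endpoint with the openai client.
    # The base_url, port, and model name are placeholders, not candle-vllm defaults.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:2000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # whatever model the server was launched with
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=64,
    )
    print(response.choices[0].message.content)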

mpc-uniqueness-check

MPC Uniqueness Check

Language: Rust · License: Apache-2.0 · Stargazers: 1 · Issues: 0

rm25

BM25 implementation in Rust

Stargazers: 1 · Issues: 0
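
For context on what a BM25 scorer like rm25 computes, here is a small Python sketch of the standard Okapi BM25 formula with its usual defaults (k1 = 1.5, b = 0.75). It illustrates the algorithm only; it is not code from the repository, which is written in Rust.

    # Sketch of Okapi BM25 scoring, illustrating the ranking function rm25 implements.
    import math
    from collections import Counter

    def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
        N = len(corpus)
        avgdl = sum(len(d) for d in corpus) / N
        tf = Counter(doc)
        score = 0.0
        for term in query:
            n_q = sum(1 for d in corpus if term in d)          # docs containing the term
            idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # smoothed IDF
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        return score

    corpus = [["rust", "is", "fast"], ["bm25", "ranks", "documents"], ["hello", "world"]]
    print(bm25_score(["bm25", "documents"], corpus[1], corpus))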

smol-vision

Recipes for shrinking, optimizing, and customizing cutting-edge vision models. 💜

License: Apache-2.0 · Stargazers: 1 · Issues: 0

cuda_practice

CUDA Playground

Language: C++ · Stargazers: 0 · Issues: 0

1.5-Pints

A compact LLM pretrained in 9 days using high-quality data

License: MIT · Stargazers: 0 · Issues: 0

chatbot-starter

Minimal Next.js chatbot starter template

Stargazers: 0 · Issues: 0

ComfyUI-Docker

🐳 Dockerfile for 🎨 ComfyUI | Container image and startup scripts

License: NOASSERTION · Stargazers: 0 · Issues: 0

dom-to-semantic-markdown

DOM to Semantic-Markdown for use in LLMs

License: MIT · Stargazers: 0 · Issues: 0

ebpf_exporter

Prometheus exporter for custom eBPF metrics

License: MIT · Stargazers: 0 · Issues: 0

freezegun

Let your Python tests travel through time

License: Apache-2.0 · Stargazers: 0 · Issues: 0
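
freezegun's main entry point is freeze_time, which pins the clock for the decorated scope. A short usage example (the pinned date is arbitrary):

    # freeze_time pins datetime.now()/date.today() inside the decorated scope.
    import datetime
    from freezegun import freeze_time

    @freeze_time("2012-01-14")
    def test_time_travel():
        assert datetime.date.today() == datetime.date(2012, 1, 14)

    test_time_travel()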

gpt_server

gpt_server is an open-source framework for production-grade deployment of LLMs or embedding models.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

Liger-Kernel

Efficient Triton Kernels for LLM Training

License: BSD-2-Clause · Stargazers: 0 · Issues: 0

llamatutor

An AI personal tutor built with Llama 3.1

Stargazers: 0 · Issues: 0

llm-search

Querying local documents, powered by LLMs

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0

mako

An extremely fast, production-grade web bundler based on Rust.

Language: Rust · License: MIT · Stargazers: 0 · Issues: 0

marlin

FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0
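
To illustrate the FP16xINT4 idea behind marlin (not its actual kernel, which is fused GPU code), here is a toy NumPy sketch of group-wise INT4 weight quantization with FP16 scales, followed by a dequantize-then-matmul step; the group size and shapes are arbitrary assumptions for the example.

    # Toy illustration of W4A16 numerics (FP16 activations x INT4 weights with per-group scales).
    # Marlin's real kernel fuses dequantization and matmul on the GPU; this only sketches the math.
    import numpy as np

    def quantize_int4(w, group_size=128):
        w = w.reshape(-1, group_size)
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0        # symmetric 4-bit range [-8, 7]
        q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)   # stored as packed int4 in practice
        return q, scale.astype(np.float16)

    def dequant_matmul(x, q, scale, out_features, in_features):
        w = (q.astype(np.float16) * scale).reshape(out_features, in_features)
        return x.astype(np.float16) @ w.T

    out_f, in_f = 64, 256
    w = np.random.randn(out_f, in_f).astype(np.float16)
    q, s = quantize_int4(w)
    x = np.random.randn(16, in_f).astype(np.float16)   # a "medium" batch of 16 tokens
    y = dequant_matmul(x, q, s, out_f, in_f)
    print(y.shape)  # (16, 64)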

Minitron

A family of compressed models obtained via pruning and knowledge distillation

Stargazers: 0 · Issues: 0

rank_llm

RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking.

License: Apache-2.0 · Stargazers: 0 · Issues: 0

semantic-grep

grep for words with similar meaning to the query

Language: Go · License: MIT · Stargazers: 0 · Issues: 0
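
The idea behind semantic-grep, matching lines by embedding similarity rather than literal text, can be sketched in a few lines of Python. The tiny hand-made vectors and the threshold below are stand-ins; the actual tool is written in Go and loads real word vectors.

    # Sketch of "semantic grep": keep lines containing a word whose embedding is
    # close to the query word. The vectors here are a toy stand-in for word2vec/GloVe.
    import numpy as np

    vectors = {
        "error":   np.array([0.9, 0.1, 0.0]),
        "failure": np.array([0.85, 0.15, 0.05]),
        "banana":  np.array([0.0, 0.2, 0.95]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def semantic_grep(query, lines, threshold=0.95):
        q = vectors[query]
        for line in lines:
            if any(w in vectors and cosine(q, vectors[w]) >= threshold for w in line.split()):
                yield line

    print(list(semantic_grep("error", ["disk failure detected", "banana bread recipe"])))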

sglang

SGLang is yet another fast serving framework for large language models and vision language models.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

SmoothMQ

A drop-in replacement for SQS designed for great developer experience and efficiency.

License: AGPL-3.0 · Stargazers: 0 · Issues: 0

spark-instructor

A library for building structured LLM responses with Spark

License: MIT · Stargazers: 0 · Issues: 0

stable-diffusion.cpp

Stable Diffusion in pure C/C++

Language: C++ · License: MIT · Stargazers: 0 · Issues: 0

swiftide

Fast, streaming indexing and query library for AI (RAG) applications, written in Rust

License: MIT · Stargazers: 0 · Issues: 0

tevatron

Tevatron - A flexible toolkit for neural retrieval research and development.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0

text-embeddings-inference

A blazing-fast inference solution for text embedding models

Language: Rust · License: Apache-2.0 · Stargazers: 0 · Issues: 0
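
A minimal client-side sketch for text-embeddings-inference, assuming a TEI server has already been launched separately and is listening on localhost:8080; the port and the response handling below reflect that assumption rather than anything fixed by the project.

    # Sketch: call a locally running text-embeddings-inference server over HTTP.
    # Assumes the server was started separately and listens on localhost:8080.
    import requests

    resp = requests.post(
        "http://localhost:8080/embed",
        json={"inputs": ["What is deep learning?", "A fast embedding server"]},
        timeout=30,
    )
    resp.raise_for_status()
    embeddings = resp.json()        # list of float vectors, one per input
    print(len(embeddings), len(embeddings[0]))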