TJ (tjy-dev)

Company: Tokyo Institute of Technology / Jelly Inc / CoeFont

Twitter: @TitaniumJely

TJ's starred repositories

rolldown

Fast Rust bundler for JavaScript with Rollup-compatible API.

Language: Rust · License: MIT · Stargazers: 7219 · Issues: 0

jp-postal-code-api

A free Japanese postal code API that can also be used commercially.

Language: PHP · License: MIT · Stargazers: 372 · Issues: 0

ovrdrive

Security-focused USB drive.

Language: TeX · License: MIT · Stargazers: 221 · Issues: 0

perf-book

The book "Performance Analysis and Tuning on Modern CPU"

Language: TeX · License: CC0-1.0 · Stargazers: 1979 · Issues: 0

gpt-2

Code for the paper "Language Models are Unsupervised Multitask Learners"

Language: Python · License: NOASSERTION · Stargazers: 21880 · Issues: 0
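
The repository itself contains the original TensorFlow 1.x release code; as a quick illustration of what the model does, here is a minimal sampling sketch that uses the Hugging Face transformers port of GPT-2 rather than this repo's own scripts:

    # Minimal GPT-2 sampling sketch via the Hugging Face transformers port,
    # not the TensorFlow code in this repository.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("Language models are", return_tensors="pt")
    # Sample a short continuation; do_sample enables stochastic (top-k) decoding.
    outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))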

gpt-2-output-dataset

Dataset of GPT-2 outputs for research in detection, biases, and more

Language: Python · License: MIT · Stargazers: 1916 · Issues: 0

corenet

CoreNet: A library for training deep neural networks

Language: Python · License: NOASSERTION · Stargazers: 6727 · Issues: 0

mlx-swift-examples

Examples using MLX Swift

Language: Swift · License: MIT · Stargazers: 457 · Issues: 0

OpenVoice

Instant voice cloning by MyShell.

Language: Python · License: MIT · Stargazers: 27079 · Issues: 0

continue

⏩ Continue is the leading open-source AI code assistant. You can connect any model and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.

Language: TypeScript · License: Apache-2.0 · Stargazers: 13100 · Issues: 0

llm.c

LLM training in simple, raw C/CUDA

Language: Cuda · License: MIT · Stargazers: 21349 · Issues: 0

SciencePlots

Matplotlib styles for scientific plotting

Language: Python · License: MIT · Stargazers: 6713 · Issues: 0
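
The styles plug into Matplotlib's standard style mechanism; a minimal sketch (the "science" and "ieee" style names come from the package's README) might be:

    # Minimal SciencePlots sketch: activate the bundled Matplotlib styles.
    import numpy as np
    import matplotlib.pyplot as plt
    import scienceplots  # registers the "science", "ieee", etc. styles with Matplotlib

    # The default "science" style uses LaTeX text rendering; append "no-latex"
    # if a LaTeX toolchain is not installed.
    plt.style.use(["science", "ieee"])

    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.xlabel("x")
    plt.ylabel("sin(x)")
    plt.legend()
    plt.savefig("example.pdf")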

langchain

🦜🔗 Build context-aware reasoning applications

Language: Python · License: MIT · Stargazers: 88487 · Issues: 0
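
As a rough illustration of the library's composition style, here is a minimal sketch using the LCEL pipe syntax; it assumes the langchain-openai integration package is installed, an OPENAI_API_KEY is set, and the model name is an arbitrary placeholder:

    # Minimal LangChain sketch: compose a prompt and a chat model with the "|" operator.
    # Assumes langchain-openai is installed and OPENAI_API_KEY is set in the environment.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
    llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice

    chain = prompt | llm
    result = chain.invoke({"text": "LangChain composes prompts, models, and tools into chains."})
    print(result.content)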

RAD-MMM-phonemizer

Text processing helpers for RAD-MMM.

Language: Python · License: GPL-3.0 · Stargazers: 4 · Issues: 0

RAD-MMM

A TTS model that makes a speaker speak new languages

Language: Roff · License: MIT · Stargazers: 73 · Issues: 0

livekit

End-to-end stack for WebRTC. SFU media server and SDKs.

Language: Go · License: Apache-2.0 · Stargazers: 8919 · Issues: 0

lightning-thunder

Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch that enables using different hardware executors at once, across one or thousands of GPUs.

Language: Python · License: Apache-2.0 · Stargazers: 1072 · Issues: 0
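
The compiler is applied to an existing module through the thunder.jit entry point; a minimal sketch (the toy model and shapes are arbitrary assumptions) could look like this:

    # Minimal lightning-thunder sketch: compile a small PyTorch module with thunder.jit
    # and call it like the original. The model and tensor shapes are toy choices.
    import torch
    import thunder

    model = torch.nn.Sequential(
        torch.nn.Linear(64, 128),
        torch.nn.GELU(),
        torch.nn.Linear(128, 64),
    )

    thunder_model = thunder.jit(model)   # source-to-source compiled version of the module
    x = torch.randn(16, 64)
    y = thunder_model(x)                 # same call signature as the original module
    print(y.shape)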

pytorch-lightning

Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.

Language: Python · License: Apache-2.0 · Stargazers: 27444 · Issues: 0
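
The core pattern is a LightningModule plus a Trainer; a minimal sketch on random toy data (model, data, and trainer settings are placeholder assumptions) might be:

    # Minimal PyTorch Lightning sketch: a LightningModule plus a Trainer on random data.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    class LitRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(10, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.net(x), y)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    # Random toy dataset; the Trainer handles the loop, devices, and checkpointing.
    data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    trainer = pl.Trainer(max_epochs=1, accelerator="auto")
    trainer.fit(LitRegressor(), DataLoader(data, batch_size=32))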

thrust

[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl

Language: C++ · License: NOASSERTION · Stargazers: 4879 · Issues: 0

gtc-2023-SE52140

Developer Breakout - Accelerating Enterprise Workflows With Triton Server and DALI

Language: Jupyter Notebook · Stargazers: 1 · Issues: 0

DALI

A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.

Language: C++ · License: Apache-2.0 · Stargazers: 4982 · Issues: 0
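
Pipelines are declared with the pipeline_def decorator and the fn operator namespace; a minimal JPEG-loading sketch (the data directory, batch size, and image size are placeholder assumptions) might look like this:

    # Minimal DALI sketch: declare an image-loading pipeline with @pipeline_def
    # and the fn operator namespace. "/data/images" is a placeholder directory.
    from nvidia.dali import pipeline_def, fn

    @pipeline_def(batch_size=32, num_threads=4, device_id=0)
    def image_pipeline():
        jpegs, labels = fn.readers.file(file_root="/data/images", random_shuffle=True)
        images = fn.decoders.image(jpegs, device="mixed")   # decode JPEGs on the GPU
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels

    pipe = image_pipeline()
    pipe.build()
    images, labels = pipe.run()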

TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Language: C++ · License: Apache-2.0 · Stargazers: 10151 · Issues: 0

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stargazers: 7372 · Issues: 0
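
As a rough sketch of the high-level Python API described above, assuming the LLM/SamplingParams interface from recent releases (the exact import path and parameter names vary by version, so treat this as an assumption, not the definitive usage):

    # Sketch of the TensorRT-LLM high-level Python API as documented in recent releases;
    # older versions expose the LLM class under tensorrt_llm.hlapi, and SamplingParams
    # fields may differ. Model name is a placeholder assumption.
    from tensorrt_llm import LLM, SamplingParams

    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")   # builds/loads a TensorRT engine
    params = SamplingParams(temperature=0.8)

    outputs = llm.generate(["The key idea behind TensorRT-LLM is"], params)
    print(outputs[0].outputs[0].text)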

open-gpu-kernel-modules

NVIDIA Linux open GPU kernel module source

Language: C · License: NOASSERTION · Stargazers: 14169 · Issues: 0

FasterTransformer

Transformer-related optimizations, including BERT and GPT.

Language: C++ · License: Apache-2.0 · Stargazers: 5629 · Issues: 0

whisperX

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Language: Python · License: BSD-4-Clause · Stargazers: 9949 · Issues: 0
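
The typical flow is transcription followed by word-level alignment; a minimal sketch following the README-style usage (the audio path, device, model size, and batch size are placeholder assumptions) might be:

    # Minimal WhisperX sketch: transcribe, then align for word-level timestamps.
    # "audio.wav", the device, and the model size are placeholder assumptions.
    import whisperx

    device = "cuda"
    model = whisperx.load_model("large-v2", device, compute_type="float16")

    audio = whisperx.load_audio("audio.wav")
    result = model.transcribe(audio, batch_size=16)

    # Align the transcript to obtain word-level timestamps.
    align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
    result = whisperx.align(result["segments"], align_model, metadata, audio, device)

    for segment in result["segments"]:
        print(segment["start"], segment["end"], segment["text"])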

DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.

Language: Python · License: Apache-2.0 · Stargazers: 1755 · Issues: 0
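
A non-persistent pipeline is the quickest way to try it; a minimal sketch following the README-style usage (the model name is a placeholder assumption) could look like this:

    # Minimal DeepSpeed-MII sketch: a non-persistent text-generation pipeline.
    # The model name is a placeholder; any supported Hugging Face causal LM works.
    import mii

    pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")
    responses = pipe(["DeepSpeed-MII makes inference"], max_new_tokens=64)
    for r in responses:
        print(r.generated_text)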

uv

An extremely fast Python package installer and resolver, written in Rust.

Language: Rust · License: Apache-2.0 · Stargazers: 14580 · Issues: 0