Ke Yan (0x1DA9430)

Company: UoE AI

Location: Edinburgh

Ke Yan's starred repositories

LLM101n

LLM101n: Let's build a Storyteller

Stars: 15251

decision-transformer

Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling.

Language: Python · License: MIT · Stars: 2256
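As a toy illustration (not from the official codebase) of how Decision Transformer casts RL as sequence modeling: each trajectory is tokenized as (return-to-go, state, action) triples, where the return-to-go at step t is the sum of rewards from t to the end of the episode. A minimal sketch of the return-to-go computation:

```python
def returns_to_go(rewards):
    """Suffix sums of rewards: R_t = r_t + r_{t+1} + ... + r_T."""
    rtg = []
    total = 0.0
    for r in reversed(rewards):  # accumulate from the episode's end
        total += r
        rtg.append(total)
    return list(reversed(rtg))

# A 3-step episode with rewards 1, 0, 2:
print(returns_to_go([1.0, 0.0, 2.0]))  # → [3.0, 2.0, 2.0]
```

At inference time the model is conditioned on a desired return-to-go, and actions are sampled autoregressively, which is what lets a plain causal transformer act as a policy.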

decision-mamba

Decision Mamba: Reinforcement Learning via Sequence Modeling with Selective State Spaces

Language: Python · License: MIT · Stars: 20

Awesome-state-space-models

Collection of papers on state-space models

Stars: 477

MambaOut

MambaOut: Do We Really Need Mamba for Vision?

Language: Python · License: Apache-2.0 · Stars: 1877

reinforcement-learning

Implementation of Reinforcement Learning Algorithms. Python, OpenAI Gym, Tensorflow. Exercises and Solutions to accompany Sutton's Book and David Silver's course.

Language: Jupyter Notebook · License: MIT · Stars: 20249

awesome-rl

Curated reinforcement learning resources

Stars: 8707

Reinforcement-Learning-2nd-Edition-by-Sutton-Exercise-Solutions

Solutions to the exercises in Reinforcement Learning: An Introduction (2nd edition)

Language: Jupyter Notebook · License: MIT · Stars: 1930

GPT-SoVITS

Just 1 minute of voice data can be used to train a good TTS model! (few-shot voice cloning)

Language: Python · License: MIT · Stars: 28936

Awesome-Mamba-Papers

Awesome Papers related to Mamba.

Stars: 960

easy-rl

A Chinese-language reinforcement learning tutorial (the "Mushroom Book" 🍄); read online at https://datawhalechina.github.io/easy-rl/

Language: Jupyter Notebook · License: NOASSERTION · Stars: 8649

aiXcoder-7B

Official repository of the aiXcoder-7B code large language model

Language: Python · License: Apache-2.0 · Stars: 2153

Jamba

PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model"

Language: Python · License: MIT · Stars: 99

lightning-whisper-mlx

An extremely fast implementation of Whisper, optimized for Apple Silicon using MLX.

Language: Python · Stars: 469

devika

Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.

Language: Python · License: MIT · Stars: 17886

ComfyUI

The most powerful and modular Stable Diffusion GUI, API, and backend, with a graph/nodes interface.

Language: Python · License: GPL-3.0 · Stars: 41690

Mamba-ND

Official implementation of Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data

Language: Python · Stars: 33

WeatherBench

A benchmark dataset for data-driven weather forecasting

Language: Jupyter Notebook · License: MIT · Stars: 679

mamba-chat

Mamba-Chat: A chat LLM based on the state-space model architecture 🐍

Language: Python · License: Apache-2.0 · Stars: 878

Open-Sora

Open-Sora: Democratizing Efficient Video Production for All

Language: Python · License: Apache-2.0 · Stars: 20429

grok-1

Grok open release

Language: Python · License: Apache-2.0 · Stars: 49157

annotated_deep_learning_paper_implementations

🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

Language: Python · License: MIT · Stars: 51576

mamba-notes

Notes on Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces)

Stars: 131

DRL

Deep Reinforcement Learning

License: NOASSERTION · Stars: 3046

RWKV-LM

RWKV is an RNN with transformer-level LLM performance that can be trained directly like a GPT (parallelizable). It combines the best of the RNN and the transformer: great performance, fast inference, low VRAM usage, fast training, "infinite" context length, and free sentence embeddings.

Language: Python · License: Apache-2.0 · Stars: 11984

rm-hacks

Small improvements and tweaks for reMarkable devices, covering both the rM1 and rM2.

Language: Shell · License: NOASSERTION · Stars: 446

mamba

Mamba SSM architecture

Language: Python · License: Apache-2.0 · Stars: 11595