Michael Hu (michaelnny)

Location: Shanghai

Home Page: www.vectortheta.com

Michael Hu's repositories

deep_rl_zoo

A collection of Deep Reinforcement Learning algorithms implemented with PyTorch to solve Atari games and classic control tasks like CartPole, LunarLander, and MountainCar.

Language: Python · License: Apache-2.0 · Stargazers: 93 · Issues: 4 · Issues: 19
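
As a flavor of what these agents involve, here is a minimal sketch of a DQN-style temporal-difference update in PyTorch; all names are illustrative and not taken from the repo.

```python
# Minimal sketch of a DQN-style TD update; names are illustrative only.
import torch
import torch.nn.functional as F

def dqn_update(online_net, target_net, optimizer, batch, gamma=0.99):
    """One gradient step on the TD error for a batch of transitions."""
    obs, action, reward, next_obs, done = batch  # tensors; action is int64
    q = online_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1).values
        target = reward + gamma * (1.0 - done) * next_q
    loss = F.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```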

alpha_zero

A PyTorch implementation of DeepMind's AlphaZero agent to play the board games Go and Gomoku.

Language: Python · License: MIT · Stargazers: 44 · Issues: 3 · Issues: 5
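
For context, move selection in AlphaZero-style MCTS follows the PUCT rule; a minimal sketch, assuming a hypothetical Node type rather than the repo's actual classes:

```python
# PUCT child selection at the core of AlphaZero-style MCTS.
# A hypothetical Node with .children, .visit_count, .total_value,
# and .prior attributes is assumed; not the repo's real classes.
import math

def select_child(node, c_puct=1.25):
    """Pick the child maximizing Q + U, where U is the exploration bonus."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_score, best_action, best_child = -float("inf"), None, None
    for action, child in node.children.items():
        q = child.total_value / child.visit_count if child.visit_count else 0.0
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        if q + u > best_score:
            best_score, best_action, best_child = q + u, action, child
    return best_action, best_child
```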

InstructLLaMA

Implements pre-training, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF) to train and fine-tune the LLaMA 2 model to follow human instructions, similar to InstructGPT or ChatGPT, but on a much smaller scale.

Language: Jupyter Notebook · License: MIT · Stargazers: 33 · Issues: 0 · Issues: 8
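
The RLHF stage of such a pipeline typically optimizes a clipped PPO objective; a minimal sketch with illustrative names:

```python
# Clipped PPO policy objective used in InstructGPT-style RLHF.
# Variable names are illustrative, not taken from the repo.
import torch

def ppo_policy_loss(logprobs, old_logprobs, advantages, clip_eps=0.2):
    """Clipped surrogate objective (maximized, so we return its negation)."""
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```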

muzero

A PyTorch implementation of DeepMind's MuZero agent.

Language: Python · License: Apache-2.0 · Stargazers: 23 · Issues: 1 · Issues: 1
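
MuZero learns three functions (representation, dynamics, prediction) and plans by unrolling them in latent space; a minimal sketch with placeholder networks:

```python
# Sketch of MuZero's three learned functions and a short latent-space
# unroll; the three nets are placeholders, not the repo's models.
def muzero_unroll(representation, dynamics, prediction, observation, actions):
    """Roll the learned model forward for a sequence of actions."""
    state = representation(observation)           # h: obs -> latent state
    outputs = []
    for action in actions:
        policy_logits, value = prediction(state)  # f: state -> (policy, value)
        outputs.append((policy_logits, value))
        state, reward = dynamics(state, action)   # g: (state, a) -> (state', reward)
    return outputs
```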

SAP-UI5-Development-Re-Introduction

The official source code for the Udemy course SAP UI5 Development Re-Introduction.

Language: JavaScript · Stargazers: 12 · Issues: 0 · Issues: 0

miniGPT

An implementation of GPT-2 pre-training and fine-tuning, for research and educational purposes.

Language: Jupyter Notebook · License: MIT · Stargazers: 8 · Issues: 0 · Issues: 3
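
The core of GPT-2 is causal self-attention, which masks out future positions; a minimal sketch, with shapes and names that are illustrative rather than taken from miniGPT:

```python
# Causal self-attention with a future-position mask; illustrative only.
import torch
import torch.nn.functional as F

def causal_attention(q, k, v):
    """q, k, v: (batch, heads, seq, head_dim). Future positions are masked."""
    seq = q.size(-2)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool, device=q.device),
                      diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```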

MM-LLaMA

Bring multimodality to the LLaMA model by leveraging ImageBind as the modality encoder. This project supports vision input (both images and short videos) to the LLaMA model, with text output generated by LLaMA.

Language: Python · License: MIT · Stargazers: 3 · Issues: 1 · Issues: 0
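
The usual recipe for such projects is to freeze the modality encoder and learn a small projection into the LLM's embedding space; a minimal sketch with illustrative dimensions:

```python
# Sketch of a modality-to-LLM projection; dimensions and names are
# illustrative assumptions, not the repo's actual architecture.
import torch.nn as nn

class ModalityProjector(nn.Module):
    def __init__(self, encoder_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(encoder_dim, llm_dim)

    def forward(self, modality_embedding):
        # (batch, encoder_dim) -> (batch, 1, llm_dim) prefix token
        # to be prepended to the LLM's text token embeddings.
        return self.proj(modality_embedding).unsqueeze(1)
```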

DPO-LLaMA

A clean implementation of direct preference optimization (DPO) to train the LLaMA 2 model to align with human preferences.

Language: Python · License: MIT · Stargazers: 2 · Issues: 1 · Issues: 0
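
For reference, the DPO loss (Rafailov et al., 2023) operates on per-sequence log-probabilities under the policy and a frozen reference model; a minimal sketch:

```python
# DPO loss on summed per-sequence log-probabilities of chosen and
# rejected responses under the policy and the frozen reference model.
import torch.nn.functional as F

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Prefer chosen over rejected, regularized toward the reference."""
    logits = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    return -F.logsigmoid(logits).mean()
```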

QLoRA-LLM

A simple custom QLoRA implementation for fine-tuning a large language model (LLM) with basic tools such as PyTorch and Bitsandbytes, completely decoupled from Hugging Face.

Language: Python · License: MIT · Stargazers: 2 · Issues: 0 · Issues: 0
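
The LoRA part of QLoRA can be sketched in plain PyTorch as a frozen base layer plus a trainable low-rank update; in QLoRA the base weights would additionally be 4-bit quantized (e.g. via bitsandbytes):

```python
# Minimal LoRA adapter around a frozen base linear layer. In QLoRA the
# frozen base would also be 4-bit quantized; only the low-rank update
# is shown here, and the class is an illustrative sketch.
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)         # freeze pretrained weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))
```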

RAG-LLaMA

A clean and simple implementation of Retrieval Augmented Generation (RAG) to enhance a LLaMA chat model so it can answer questions from a private knowledge base. We use Tesla user manuals to build the knowledge base, and open-source embedding and cross-encoder reranking models from Sentence Transformers.

Language: Jupyter Notebook · License: MIT · Stargazers: 2 · Issues: 0 · Issues: 0
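
The retrieve-then-rerank step can be sketched with the Sentence Transformers library; the model names below are common defaults and may differ from what the repo actually uses:

```python
# Embed-retrieve-rerank sketch with Sentence Transformers; model names
# are common defaults, assumed rather than taken from the repo.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def retrieve(query, chunks, top_k=20, rerank_k=3):
    """Retrieve candidate chunks by cosine similarity, then rerank."""
    scores = util.cos_sim(embedder.encode(query, convert_to_tensor=True),
                          embedder.encode(chunks, convert_to_tensor=True))[0]
    candidates = [chunks[i] for i in scores.topk(min(top_k, len(chunks))).indices]
    reranked = reranker.predict([(query, c) for c in candidates])
    order = sorted(range(len(candidates)), key=lambda i: -reranked[i])
    return [candidates[i] for i in order[:rerank_k]]
```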

art-of-reinforcement-learning

Original source code for the book The Art of Reinforcement Learning by Michael Hu.

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

ReservoirComputing

An implementation of reservoir computing networks for predicting dynamical systems.

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0
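
The classic reservoir computing model is the echo state network: a fixed random recurrent reservoir plus a ridge-regression readout; a minimal sketch with illustrative hyperparameters:

```python
# Echo state network sketch; sizes, scales, and seed are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

def fit_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: W_out = Y S^T (S S^T + lambda*I)^-1."""
    S = states.T                                  # (n_res, T)
    return targets.T @ S.T @ np.linalg.inv(S @ S.T + ridge * np.eye(n_res))
```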

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0
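
A minimal usage sketch, assuming the high-level LLM API available in recent TensorRT-LLM releases; the model name is only an example:

```python
# Sketch of TensorRT-LLM's high-level Python LLM API; the exact
# signatures may vary by release, and the model name is an example.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # builds/loads an engine
params = SamplingParams(temperature=0.8, max_tokens=64)

for out in llm.generate(["What is TensorRT-LLM?"], params):
    print(out.outputs[0].text)
```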

tensorrtllm_backend

The Triton TensorRT-LLM Backend

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0