Hao Wu's starred repositories

llama3

The official Meta Llama 3 GitHub site

Language: Python · License: NOASSERTION · Stargazers: 24,842
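As a quick smoke test of the released model, the sketch below loads the Llama 3 8B Instruct weights through Hugging Face transformers rather than the repo's own torchrun examples; the model id "meta-llama/Meta-Llama-3-8B-Instruct", the bf16 dtype, and the device settings are illustrative assumptions, not part of the Meta repository itself.

    # Hedged sketch: assumes the gated Hugging Face checkpoint has been requested
    # and downloaded (accept the Llama 3 license on the Hub first).
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        torch_dtype=torch.bfloat16,   # half precision so the 8B model fits a single modern GPU
        device_map="auto",            # requires the accelerate package
    )
    print(generator("Low-rank adaptation in one sentence:", max_new_tokens=64)[0]["generated_text"])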

leedl-tutorial

The Hung-yi Lee Deep Learning Tutorial (《李宏毅深度学习教程》), recommended by Prof. Hung-yi Lee 👍. PDF download: https://github.com/datawhalechina/leedl-tutorial/releases

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 11,422

LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Language: Python · License: MIT · Stargazers: 9,932
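The repo ships loralib as a pip-installable package with drop-in replacements for common layers. Below is a minimal sketch of that workflow, with illustrative layer sizes and rank: swap in lora.Linear, freeze everything except the low-rank factors, and checkpoint only the small LoRA delta.

    # Minimal loralib sketch; dimensions and rank r are illustrative.
    import torch
    import loralib as lora

    # lora.Linear behaves like nn.Linear, but the weight update is factored as B @ A
    # with rank r, so only r * (in + out) extra parameters are trained per layer.
    model = torch.nn.Sequential(
        lora.Linear(768, 768, r=8),
        torch.nn.ReLU(),
        lora.Linear(768, 10, r=8),
    )
    lora.mark_only_lora_as_trainable(model)                    # freeze pretrained weights, train A/B only
    out = model(torch.randn(4, 768))                           # forward pass is unchanged
    torch.save(lora.lora_state_dict(model), "lora_delta.pt")   # checkpoint just the LoRA update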

LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

Language: Python · License: Apache-2.0 · Stargazers: 8,151

MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone

Language: Python · License: Apache-2.0 · Stargazers: 8,095

awesome-self-supervised-learning

A curated list of awesome self-supervised methods

InternVL

[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. A commercially usable open-source multimodal chat model with performance approaching GPT-4o.

Language: Python · License: MIT · Stargazers: 4,442

torchtune

A Native-PyTorch Library for LLM Fine-tuning

Language: Python · License: BSD-3-Clause · Stargazers: 3,678

dive-into-llms

Dive into LLMs (《动手学大模型》): a series of hands-on programming tutorials for large models.

jepa

PyTorch code and models for V-JEPA self-supervised learning from video.

Language: Python · License: NOASSERTION · Stargazers: 2,569

cambrian

Cambrian-1 is a family of multimodal LLMs with a vision-centric design.

Language: Python · License: Apache-2.0 · Stargazers: 1,607

clash_singbox-tutorials

A collection of Clash and sing-box tutorials: installation, configuration, custom rules, and DNS split routing.

LLM-Agents-Papers

A repo that lists papers related to LLM-based agents.

RMT

(CVPR 2024) RMT: Retentive Networks Meet Vision Transformer

Language: Python · License: Apache-2.0 · Stargazers: 209

awesome-foundation-model-leaderboards

A curated list of awesome leaderboards for foundation models

MIM-Depth-Estimation

This is an official implementation of our CVPR 2023 paper "Revealing the Dark Secrets of Masked Image Modeling", applied to depth estimation.

Language: Python · License: MIT · Stargazers: 159

ODTrack

The official implementation of the paper "ODTrack: Online Dense Temporal Token Learning for Visual Tracking".

Language: Python · License: MIT · Stargazers: 89

GRM

[CVPR 2023] The official PyTorch implementation of our paper "Generalized Relation Modeling for Transformer Tracking".

Language: Python · License: MIT · Stargazers: 67

TC-MoA

Task-Customized Mixture of Adapters for General Image Fusion (CVPR 2024)

Language: Python · License: BSD-3-Clause · Stargazers: 49

ROMTrack

[ICCV 2023] Official implementation of "Robust Object Modeling for Visual Tracking".

Language: Python · License: MIT · Stargazers: 38

OmniTrackFast

Official code for "Track Everything Everywhere Fast and Robustly".

TGSR

This is the official implementation with training code for “Trajectory Guided Robust Visual Object Tracking with Selective Remedy”.

Language: Python · License: Apache-2.0 · Stargazers: 27

TTL

PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States".

Language: Python · License: MIT · Stargazers: 21

learning_research

My personal research experience.

Stargazers: 8

unofficial-SiameseMAE

Unofficial PyTorch implementation of the Siamese Masked Autoencoder.