Chenting Wang (Revliter)

Company: Nanjing University

Location: Nanjing

Chenting Wang's starred repositories

LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Language: Python | License: Apache-2.0 | Stargazers: 18348 | Issues: 0

NaViT

My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution"

Language: Python | License: MIT | Stargazers: 150 | Issues: 0

MyGO

[Paper][Preprint 2024] MyGO: Discrete Modality Information as Fine-Grained Tokens for Multi-modal Knowledge Graph Completion

Language: Python | Stargazers: 202 | Issues: 0

NJUThesis

LaTeX template for Nanjing University theses and dissertations (南京大学学位论文模板)

Language: TeX | License: LPPL-1.3c | Stargazers: 439 | Issues: 0

whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Language: Python | License: MIT | Stargazers: 65069 | Issues: 0
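The whisper package exposes a small Python API; below is a minimal transcription sketch following the repository's README, where the model size ("base") and the audio file name are placeholders.

```python
import whisper

# Load a pretrained checkpoint; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe a local audio file (placeholder path) and print the text.
result = model.transcribe("audio.mp3")
print(result["text"])
```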

MoViNet-pytorch

PyTorch implementation of MoViNets: Mobile Video Networks for Efficient Video Recognition.

Language: Jupyter Notebook | License: MIT | Stargazers: 255 | Issues: 0

DCI

Densely Captioned Images (DCI) dataset repository.

Language: Python | License: NOASSERTION | Stargazers: 148 | Issues: 0

Open-Sora

Open-Sora: Democratizing Efficient Video Production for All

Language: Python | License: Apache-2.0 | Stargazers: 20884 | Issues: 0

Awesome-Dataset-Distillation

A curated list of awesome papers on dataset distillation and related applications.

Language: HTML | License: MIT | Stargazers: 1277 | Issues: 0

VideoMamba

[ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding

Language: Python | License: Apache-2.0 | Stargazers: 714 | Issues: 0

math

The MATH Dataset (NeurIPS 2021)

Language: Python | License: MIT | Stargazers: 786 | Issues: 0

test

Measuring Massive Multitask Language Understanding | ICLR 2021

Language: Python | License: MIT | Stargazers: 1088 | Issues: 0

OpenDiT

OpenDiT: An Easy, Fast and Memory-Efficient System for DiT Training and Inference

Language: Python | License: Apache-2.0 | Stargazers: 1364 | Issues: 0

maws

Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496

Language: Jupyter Notebook | License: NOASSERTION | Stargazers: 70 | Issues: 0

VideoMAE

[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training

Language: Python | License: NOASSERTION | Stargazers: 1275 | Issues: 0

Vim

[ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

Language: Python | License: Apache-2.0 | Stargazers: 2653 | Issues: 0

CLIP_benchmark

CLIP-like model evaluation

Language: Jupyter Notebook | License: MIT | Stargazers: 545 | Issues: 0

bagel

A bagel, with everything.

Language: Python | Stargazers: 300 | Issues: 0

lm-evaluation-harness

A framework for few-shot evaluation of language models.

Language: Python | License: MIT | Stargazers: 5951 | Issues: 0
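A hedged sketch of driving the harness from Python via lm_eval.simple_evaluate (available in the 0.4.x releases); the Hugging Face model name, task list, and batch size are illustrative placeholders.

```python
import lm_eval

# Evaluate a Hugging Face causal LM on a single task, zero-shot.
# "pretrained=gpt2" and ["hellaswag"] are placeholders, not recommendations.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```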

flash-attention

Fast and memory-efficient exact attention

Language: Python | License: BSD-3-Clause | Stargazers: 12602 | Issues: 0
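A small sketch of calling the library's flash_attn_func kernel; it assumes a CUDA device, half-precision tensors, and a (batch, seqlen, heads, head_dim) layout, with illustrative shapes.

```python
import torch
from flash_attn import flash_attn_func

# FlashAttention expects (batch, seqlen, num_heads, head_dim) tensors
# in fp16/bf16 on a CUDA device; the shapes below are placeholders.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

out = flash_attn_func(q, k, v, causal=True)  # same shape as q
print(out.shape)
```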

unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python | License: MIT | Stargazers: 19242 | Issues: 0

unmasked_teacher

[ICCV2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models

Language: Python | License: MIT | Stargazers: 269 | Issues: 0

moco

PyTorch implementation of MoCo: https://arxiv.org/abs/1911.05722

Stargazers: 4656 | Issues: 0
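Not code from the repository, just a minimal sketch of the momentum-encoder update at the heart of MoCo: the key encoder is kept as an exponential moving average of the query encoder (momentum m = 0.999 as in the paper); the ResNet-50 backbones and 128-d output are assumptions matching the paper's setup.

```python
import torch
import torchvision

# Query and key encoders share an architecture; only the query encoder
# receives gradients, while the key encoder tracks it by momentum.
encoder_q = torchvision.models.resnet50(num_classes=128)
encoder_k = torchvision.models.resnet50(num_classes=128)
encoder_k.load_state_dict(encoder_q.state_dict())

m = 0.999  # momentum coefficient from the paper

@torch.no_grad()
def momentum_update():
    # theta_k <- m * theta_k + (1 - m) * theta_q
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```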

dino

PyTorch code for training Vision Transformers with the self-supervised learning method DINO

Language: Python | License: Apache-2.0 | Stargazers: 6106 | Issues: 0
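A minimal sketch of pulling the pretrained DINO ViT-S/16 backbone through torch.hub, as described in the repository's README, and extracting a CLS embedding from a dummy image batch; the input is random and unnormalized, purely for shape illustration.

```python
import torch

# Load the self-supervised ViT-S/16 backbone published by the DINO repo.
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

# Dummy batch of one 224x224 RGB image (real inputs should use ImageNet
# normalization); the forward pass returns the CLS token embedding.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(x)
print(features.shape)  # (1, 384) for ViT-S/16
```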

omnivore

Omnivore: A Single Model for Many Visual Modalities

Language: Python | License: NOASSERTION | Stargazers: 548 | Issues: 0

Awesome-Multimodal-Large-Language-Models

Latest Advances on Multimodal Large Language Models

Stargazers: 10835 | Issues: 0

Awesome-Knowledge-Distillation

Awesome Knowledge-Distillation. A categorized collection of knowledge distillation papers (2014–2021).

Stargazers: 2442 | Issues: 0

awesome-knowledge-distillation

Awesome Knowledge Distillation

License: Apache-2.0 | Stargazers: 3378 | Issues: 0