BaddHabits (TAEYOUNG-SYG)


Company: PCL

Location: China


BaddHabits's starred repositories

LanguageBind

【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

Language: Python | License: MIT | Stars: 635 | Issues: 0

ModelCompose

Official code for our paper "Model Composition for Multimodal Large Language Models"

Language: Python | License: Apache-2.0 | Stars: 11 | Issues: 0

Libra

Simple PyTorch implementation of "Libra: Building Decoupled Vision System on Large Language Models" (accepted by ICML 2024)

Language: Python | License: Apache-2.0 | Stars: 36 | Issues: 0

CityDreamer

The official implementation of "CityDreamer: Compositional Generative Model of Unbounded 3D Cities". (Xie et al., CVPR 2024)

Language: Python | License: NOASSERTION | Stars: 582 | Issues: 0

hello-algo

"Hello 算法" (Hello Algorithms): a data structures and algorithms tutorial with animated illustrations and one-click runnable code. Code is provided in Python, Java, C++, C, C#, JS, Go, Swift, Rust, Ruby, Kotlin, TS, and Dart. The Simplified and Traditional Chinese editions are updated in sync; an English version is in progress.

Language: Java | License: NOASSERTION | Stars: 89593 | Issues: 0

Vim

[ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

Language: Python | License: Apache-2.0 | Stars: 2653 | Issues: 0

mamba-chat

Mamba-Chat: A chat LLM based on the state-space model architecture 🐍

Language: Python | License: Apache-2.0 | Stars: 882 | Issues: 0

mamba

The Mamba state-space model (SSM) architecture (see the usage sketch below)

Language: Python | License: Apache-2.0 | Stars: 11895 | Issues: 0
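As a rough illustration of how a Mamba block is used, here is a minimal sketch in the spirit of the repository's README. The constructor arguments shown (d_model, d_state, d_conv, expand) follow the commonly documented interface and may differ between releases; the fused kernels assume a CUDA device.

```python
import torch
from mamba_ssm import Mamba  # pip install mamba-ssm (CUDA required for the fused kernels)

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim).to("cuda")

block = Mamba(
    d_model=dim,  # model (channel) dimension
    d_state=16,   # SSM state expansion factor
    d_conv=4,     # local convolution width
    expand=2,     # block expansion factor
).to("cuda")

y = block(x)  # output has the same shape as the input: (batch, length, dim)
assert y.shape == x.shape
```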

VMamba

VMamba: Visual State Space Models; the code is based on Mamba

Language: Python | License: MIT | Stars: 1894 | Issues: 0

LLM-Agent-Paper-List

The paper list accompanying the 86-page survey "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.

Stars: 5859 | Issues: 0

ml-aim

This repository provides the code and model checkpoints for the research paper "Scalable Pre-training of Large Autoregressive Image Models"

Language: Python | License: NOASSERTION | Stars: 668 | Issues: 0

img2dataset

Easily turn large sets of image URLs into an image dataset. It can download, resize, and package 100M URLs in 20 hours on a single machine (see the usage sketch below).

Language: Python | License: MIT | Stars: 3472 | Issues: 0
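As a hedged usage sketch, assuming the project's Python entry point `download` and its commonly documented parameters (the file `myimglist.txt` and the directory `output_dir` are placeholders), a small download job might look like:

```python
from img2dataset import download  # pip install img2dataset

# Download and resize every image listed in a plain-text file of URLs,
# writing the results as shards under output_dir.
download(
    url_list="myimglist.txt",    # placeholder: one image URL per line
    output_folder="output_dir",  # placeholder output directory
    image_size=256,              # resize target in pixels
    processes_count=4,
    thread_count=16,
)
```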

AdaLoRA

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).

Language: Python | License: MIT | Stars: 239 | Issues: 0

LLaVA-RLHF

Aligning LMMs with Factually Augmented RLHF

Language: Python | License: GPL-3.0 | Stars: 286 | Issues: 0

groundingLMM

[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.

Language: Python | Stars: 698 | Issues: 0

ScienceQA

Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering".

Language: Python | License: MIT | Stars: 575 | Issues: 0

DataOptim

A collection of visual instruction tuning datasets.

Language: Python | License: MIT | Stars: 73 | Issues: 0

LLaMA-VID

Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models

Language: Python | License: Apache-2.0 | Stars: 651 | Issues: 0

LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Language: Python | License: Apache-2.0 | Stars: 18345 | Issues: 0

open_flamingo

An open-source framework for training large multimodal models.

Language: Python | License: MIT | Stars: 3591 | Issues: 0

AlignLLMHumanSurvey

Aligning Large Language Models with Human: A Survey

Stars: 646 | Issues: 0

MiniGPT-5

Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"

Language: Python | License: Apache-2.0 | Stars: 833 | Issues: 0

PointLLM

[ECCV 2024] PointLLM: Empowering Large Language Models to Understand Point Clouds

Language: Python | Stars: 446 | Issues: 0

LRV-Instruction

[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning

Language: Python | License: BSD-3-Clause | Stars: 237 | Issues: 0

llama-moe

⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training

Language: Python | License: Apache-2.0 | Stars: 810 | Issues: 0

avalanche

Avalanche: an End-to-End Library for Continual Learning based on PyTorch.

Language: Python | License: MIT | Stars: 1722 | Issues: 0