Renrui Zhang's starred repositories
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles: Latest papers and datasets on Multimodal Large Language Models and their evaluation.
Transformers-Tutorials
This repository contains demos I made with the Transformers library by HuggingFace.
LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour using 1.2M parameters
Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
LLaMA2-Accessory
An Open-source Toolkit for LLM Development
Personalize-SAM
Personalize Segment Anything Model (SAM) with 1 shot in 10 seconds
prolificdreamer
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation (NeurIPS 2023 Spotlight)
LLaVA-Plus-Codebase
LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
Point-Bind_Point-LLM
Aligning 3D point clouds with multiple modalities for Large Language Models
lightning-GPT
Train and run GPTs with Lightning
ViewRefer3D
Official implementation of "ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding with GPT and Prototype Guidance" (ICCV 2023)
Point-PEFT
Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models (AAAI 2024)