ZhuoZHI-UCL's starred repositories


Vim

[ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

Language: Python · License: Apache-2.0 · Stars: 2575

Ask-Anything

[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding, plus support for many more LMs such as miniGPT4, StableLM, and MOSS.

Language: Python · License: MIT · Stars: 2865

qformer

Implementation of Qformer from BLIP2 in Zeta Lego blocks.

Language: Python · License: MIT · Stars: 22

LLaMA-VID

Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models

Language: Python · License: Apache-2.0 · Stars: 628

attention-is-all-you-need-pytorch

A PyTorch implementation of the Transformer model in "Attention is All You Need".

Language: Python · License: MIT · Stars: 8608

Awesome-LLM

Awesome-LLM: a curated list of Large Language Model resources

License: CC0-1.0 · Stars: 16017

PID

[NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions

Language: Python · License: MIT · Stars: 44

ICML2024-AT-UR

Code for the ICML 2024 paper "The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks"

Language: Python · Stars: 4

ICL_multimodal

Code for the paper "Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity"

Language: Python · Stars: 9

HighMMT

[TMLR 2022] High-Modality Multimodal Transformer

Language: Python · License: MIT · Stars: 99

perceiver-multi-modality-pytorch

Implementation of Perceiver, General Perception with Iterative Attention, in PyTorch

Language: Python · License: MIT · Stars: 36

MultiModalSA

MultiModal Sentiment Analysis architectures for CMU-MOSEI.

Language: Python · Stars: 33

EHR-X-Ray-by-ViLT

Multimodal learning based on EHR and X-ray images, for CVPR 2024

Language: Python · Stars: 3

MultiBench

[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning

Language: HTML · License: MIT · Stars: 457

ALAS

ALAS: Active Learning for Autoconversion Rates Prediction from Satellite Data

Language: Python · License: MIT · Stars: 8

paper-reading

Paragraph-by-paragraph close readings of classic and new deep learning papers

License: Apache-2.0 · Stars: 24954

cs-self-learning

A self-study guide to computer science

Language: HTML · License: MIT · Stars: 52410

missing_aware_prompts

Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23

Language: Python · Stars: 146

yolov7-main

Uses YOLOv7 for dam/landfill detection from Google Maps imagery

Language: Jupyter Notebook · License: GPL-3.0 · Stars: 1

LeetcodeTop

A compilation of high-frequency LeetCode problems commonly asked at major internet companies 🔥

Stars: 18311

machine-learning-interview

Machine learning interviews from FAANG, Snapchat, LinkedIn. The author reports offers from Snapchat, Coupang, Stitchfix, etc. Blog: mlengineer.io.

Stars: 8557

ICL_PaperList

Paper List for In-context Learning 🌷

Stars: 767

pytorch-template

PyTorch deep learning projects made easy.

Language: Python · License: MIT · Stars: 4635