linzhiqiu's repositories

cross_modal_adaptation

Cross-modal few-shot adaptation with CLIP

Language: Python · License: MIT · Stargazers: 240 · Issues: 5 · Issues: 18

digital_chirality

Testing the chirality of digital imaging operations.

Language: Jupyter Notebook · Stargazers: 93 · Issues: 4 · Issues: 0

t2v_metrics

Evaluating text-to-image/video/3D models with VQAScore

Language: Python · License: Apache-2.0 · Stargazers: 58 · Issues: 3 · Issues: 0
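
A minimal usage sketch for t2v_metrics, following the VQAScore interface described in the repository's README; the model name and file path below are illustrative assumptions.

    import t2v_metrics

    # Load a VQAScore model (model name assumed from the README's examples)
    clip_flant5_score = t2v_metrics.VQAScore(model='clip-flant5-xxl')

    # Score how well a generated image matches its text prompt
    # ('image.png' and the prompt are placeholder inputs)
    score = clip_flant5_score(images=['image.png'],
                              texts=['a photo of a red cube on top of a blue sphere'])
    print(score)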

visual_gpt_score

VisualGPTScore for visio-linguistic reasoning

Language: Jupyter Notebook · Stargazers: 13 · Issues: 5 · Issues: 0

open_active

Open World Active Learning

Language: Python · Stargazers: 5 · Issues: 5 · Issues: 0

modern-resume-theme

A modern static resume template and theme. Powered by Jekyll and GitHub pages.

Language: HTML · License: MIT · Stargazers: 1 · Issues: 2 · Issues: 0

16-811

Math Fundamentals for Robotics - CMU

Language: Python · License: Unlicense · Stargazers: 0 · Issues: 2 · Issues: 0

avalanche

Avalanche: an End-to-End Library for Continual Learning.

Language: Python · License: MIT · Stargazers: 0 · Issues: 2 · Issues: 0

debiased-pseudo-labeling

[CVPR 2022] Debiased Learning from Naturally Imbalanced Pseudo-Labels

Language: Jupyter Notebook · License: MIT · Stargazers: 0 · Issues: 2 · Issues: 0

dino

PyTorch code for training Vision Transformers with the self-supervised learning method DINO

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 2 · Issues: 0
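
As a quick illustration, the pretrained DINO backbones can be pulled in via torch.hub, following the usage shown in the upstream facebookresearch/dino README; the entry-point name and expected output shape below are a sketch, not a guarantee.

    import torch

    # Load a DINO-pretrained ViT-S/16 backbone via torch.hub
    vits16 = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
    vits16.eval()

    # Extract features for a dummy image batch (1 x 3 x 224 x 224)
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        features = vits16(x)
    print(features.shape)  # expected: torch.Size([1, 384]) for ViT-S/16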

examples

A set of PyTorch examples in Vision, Text, Reinforcement Learning, etc.

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 3 · Issues: 0

HRNet-Semantic-Segmentation

The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919

Language: Python · License: NOASSERTION · Stargazers: 0 · Issues: 2 · Issues: 0

HTML4Vision

A simple HTML visualization tool for computer vision research 🛠️

Language: Python · License: MIT · Stargazers: 0 · Issues: 2 · Issues: 0

linzhiqiu.github.io

Zhiqiu Lin's site

Language: JavaScript · License: MIT · Stargazers: 0 · Issues: 2 · Issues: 0

LLaVA

[NeurIPS 2023 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards GPT-4V level capabilities.

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

mmselfsup

OpenMMLab Self-Supervised Learning Toolbox and Benchmark

Language: Python · License: Apache-2.0 · Stargazers: 0 · Issues: 2 · Issues: 0

MobileNet-Caffe

Caffe Implementation of Google's MobileNets (v1 and v2)

Language: Python · License: BSD-3-Clause · Stargazers: 0 · Issues: 3 · Issues: 0

nips_policy_learning

NeurIPS Policy Learning Scripts

Language: Python · Stargazers: 0 · Issues: 3 · Issues: 0

PerceptualSimilarity

LPIPS metric. pip install lpips

Language: Python · License: BSD-2-Clause · Stargazers: 0 · Issues: 1 · Issues: 0
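
A minimal sketch of the pip-installable lpips interface noted in the description above; the backbone choice and tensor shapes here are illustrative.

    import torch
    import lpips

    # LPIPS with an AlexNet backbone ('vgg' and 'squeeze' are also available)
    loss_fn = lpips.LPIPS(net='alex')

    # Inputs are NCHW tensors scaled to [-1, 1]
    img0 = torch.rand(1, 3, 64, 64) * 2 - 1
    img1 = torch.rand(1, 3, 64, 64) * 2 - 1

    d = loss_fn(img0, img1)  # perceptual distance between the two images
    print(d.item())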

vision-language-models-are-bows

Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

vl_finetuning

Few-shot Finetuning of CLIP

Language: Python · License: MIT · Stargazers: 0 · Issues: 3 · Issues: 0

why-winoground-hard

Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0