Michal Shlapentokh-Rothman's repositories
AWS-OHL-AutoAug
An integration of several popular automatic augmentation methods, including OHL (Online Hyper-Parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto-Augment via Augmentation-Wise Weight Sharing) by SenseTime Research.
CoOp_augment
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
grit_official
Official repository for the General Robust Image Task (GRIT) Benchmark
hello-world
Repository for a tutorial
LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
learning_to_teach_pytroch
A PyTorch implementation of the paper Fan, Yang, et al. "Learning to Teach." ICLR (2018).
michalsr.github.io
Personal website
ml_notes
Notes related to machine learning, focusing on summarizing main results, key ideas, and definitions.
MUST
PyTorch code for MUST
segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
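As a minimal sketch of what prompted inference with this repository looks like, assuming the standard segment_anything package API and a downloaded ViT-H checkpoint (file path, image path, and the point prompt below are placeholders):

```python
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

# Load SAM from a checkpoint file (example placeholder path).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed an RGB image once; prompts can then be evaluated cheaply.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point; SAM returns candidate masks.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground, 0 = background
    multimask_output=True,
)
print(masks.shape, scores)  # boolean masks of shape (N, H, W) with quality scores
```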
viper
Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"
vision-language-models-are-bows
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023