Michal Shlapentokh-Rothman (michalsr)


Michal Shlapentokh-Rothman's repositories


AWS-OHL-AutoAug

An integration of several popular automatic augmentation methods, including OHL (Online Hyper-Parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto Augment via Augmentation Wise Weight Sharing) by Sensetime Research.

Language: Python · Stargazers: 0 · Issues: 0 · Issues: 0

CoOp_augment

Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

gpv-1

A task-agnostic vision-language architecture as a step towards General Purpose Vision

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

grit_official

Official repository for the General Robust Image Task (GRIT) Benchmark

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 1 · Issues: 0

hello-world

Repository for a tutorial

Stargazers: 0 · Issues: 0 · Issues: 0

LAVIS

LAVIS - A One-stop Library for Language-Vision Intelligence

Language: Jupyter Notebook · License: BSD-3-Clause · Stargazers: 0 · Issues: 0 · Issues: 0

learning_to_teach_pytroch

A PyTorch implementation of the paper: Fan, Yang, et al. "Learning to Teach." ICLR (2018).

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

michalsr.github.io

Personal website

Language: HTML · License: NOASSERTION · Stargazers: 0 · Issues: 2 · Issues: 0

ml_notes

Notes related to machine learning, focusing on summaries of main results, key ideas, and definitions.

Language: TeX · Stargazers: 0 · Issues: 0 · Issues: 0

MUST

PyTorch code for MUST

Language: Jupyter Notebook · License: BSD-3-Clause · Stargazers: 0 · Issues: 0 · Issues: 0

segment-anything

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.

Language: Jupyter Notebook · License: Apache-2.0 · Stargazers: 0 · Issues: 0 · Issues: 0

viper

Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"

Language: Jupyter Notebook · License: NOASSERTION · Stargazers: 0 · Issues: 0 · Issues: 0

vision-language-models-are-bows

Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023

Language: Python · License: MIT · Stargazers: 0 · Issues: 0 · Issues: 0

VL-T5

PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021)

Language: Python · License: MIT · Stargazers: 0 · Issues: 1 · Issues: 0

word-tree

A Unity app designed to help children learn English letter-sound correspondence, sound blending, and sight word recognition.

Language: C# · License: MIT · Stargazers: 0 · Issues: 6 · Issues: 0