Johanna Karras's repositories
consistent_depth
We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example from hand-held cell phone video.
Language: Python · License: MIT
CSE517-Final-Project
A reproducibility experiment of the paper "Does Vision-and-Language Pretraining Improve Lexical Grounding?" by Tian Yun, Chen Sun, and Ellie Pavlick, published at EMNLP 2021.
EE148-Detecting-Red-Traffic-Lights
Caltech CNS/EE/CS 148 HW 1
Language: Jupyter Notebook · License: MIT
EE148-Detecting-Red-Traffic-Lights-pt2
Caltech CNS/EE/CS 148 HW 2
Language: Jupyter Notebook · License: MIT
EE148-MNIST-Classification-CNN
Classification of MNIST handwritten digits using convolutional neural networks.
Language: Jupyter Notebook
pytorch-CycleGAN-and-pix2pix
Image-to-Image Translation in PyTorch
Language: Python · License: other (NOASSERTION)
reproducibility-vlm-lexical-grounding
PyTorch code for the Findings of EMNLP 2021 paper "Does Vision-and-Language Pretraining Improve Lexical Grounding?"
Language: Jupyter Notebook