Anna Berger's starred repositories
awesome-public-datasets
A topic-centric list of high-quality open datasets.
hummingbird
Hummingbird compiles trained ML models into tensor computation for faster inference.
nsfw_data_scraper
Collection of scripts to aggregate image data for training an NSFW image classifier
dfdc_deepfake_challenge
A prize-winning solution for the DFDC challenge
pytorch-domain-adaptation
A collection of implementations of adversarial domain adaptation algorithms
mean_average_precision
Mean Average Precision for Object Detection
face-id-with-medical-masks
Face ID recognition with medical masks
FaceParsing
EHANet: An effective hierarchical aggregation network for face parsing
3D-ResNets-PyTorch
3D ResNets for Action Recognition (CVPR 2018)
awesome-action-recognition
A curated list of action recognition and related area resources
flask_gunicorn_nginx_docker
Template for deploying ML models using Flask + Gunicorn + Nginx inside Docker
ActivityNet-Entities
A Dataset for Grounded Video Description
fairseq-image-captioning
Transformer-based image captioning extension for pytorch/fairseq
the-art-of-command-line
Master the command line, in one page
pytorch_bn_fusion
Batch normalization fusion for PyTorch
dostoevsky
Sentiment analysis library for the Russian language
TGIF-Release
Animated GIF Description Dataset
first-order-model
This repository contains the source code for the paper "First Order Motion Model for Image Animation"
demo-self-driving
Streamlit app demonstrating an image browser for the Udacity self-driving-car dataset with real-time object detection using YOLO.
pytorch-gradual-warmup-lr
Gradually-Warmup Learning Rate Scheduler for PyTorch
pytorch-metric-learning
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
meshed-memory-transformer
Meshed-Memory Transformer for Image Captioning. CVPR 2020
Up-Down-Captioner
Automatic image captioning model based on Caffe, using features from bottom-up attention.
bottom-up-attention
Bottom-up attention model for image captioning and VQA, based on Faster R-CNN and Visual Genome