Naoto Usuyama's repositories
pytorch-unet
Simple PyTorch implementations of U-Net/FullyConvNet (FCN) for image segmentation
ePillID-benchmark
ePillID Dataset: A Low-Shot Fine-Grained Benchmark for Pill Identification (CVPR 2020 VL3)
emacs-like-key-bindings-windows
System-wide Emacs-like/Mac-like key-bindings on Windows 10 using AutoHotKey
AzureML-BERT
End-to-end recipes for pre-training and fine-tuning BERT using Azure Machine Learning service
blue_benchmark_with_transformers
Implementation of the BLUE benchmark with Transformers. The details of our pre-training procedure can be found at https://arxiv.org/abs/2005.07202.
CLAP
Contrastive Language-Audio Pretraining
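Contrastive language-audio pretraining aligns an audio encoder and a text encoder so that matched (audio, text) pairs score higher than mismatched ones, typically via a symmetric InfoNCE objective. The following is a minimal dependency-free sketch of that objective, not the repository's actual implementation; the function names and the default temperature are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (audio, text) pairs.

    audio_emb[i] and text_emb[i] form a positive pair; every other
    combination in the batch serves as a negative.
    """
    n = len(audio_emb)
    # Pairwise similarity logits, scaled by the temperature.
    logits = [[cosine(a, t) / temperature for t in text_emb] for a in audio_emb]

    def row_nll(row, target):
        # Cross-entropy of softmax(row) against index `target`,
        # computed with the max-shift trick for numerical stability.
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        return log_z - row[target]

    # Average the audio->text and text->audio directions.
    loss_a2t = sum(row_nll(logits[i], i) for i in range(n)) / n
    cols = [[logits[i][j] for i in range(n)] for j in range(n)]
    loss_t2a = sum(row_nll(cols[j], j) for j in range(n)) / n
    return (loss_a2t + loss_t2a) / 2
```

With aligned embeddings the loss approaches zero; shuffling the text side of the batch drives it up, which is the signal that pulls matched pairs together during training.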
clinicalBERT
Repository for Publicly Available Clinical BERT Embeddings
DeepLearningExamples
Deep Learning Examples
fast-MPN-COV
@CVPR2018: Efficient unrolling of iterative matrix square-root normalized ConvNets, implemented in PyTorch (including code for B-CNN, compact bilinear pooling, etc.) for training from scratch and fine-tuning.
gradio
Create UIs for your machine learning model in Python in 3 minutes
hi-ml
HI-ML toolbox for deep learning for medical imaging and Azure integration
HIPT
Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral)
Humpback-Whale-Identification-Challenge-2019_2nd_palce_solution
Kaggle Humpback Whale Identification Challenge 2019 2nd place code
Megatron-LM
Ongoing research training transformer language models at scale, including: BERT & GPT-2
open_clip
An open source implementation of CLIP.
powerful-benchmarker
A PyTorch library for benchmarking deep metric learning. It's powerful.
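Deep metric learning libraries like this one benchmark losses that shape an embedding space so that same-class samples sit closer together than different-class ones. As a minimal illustration (not this library's API), here is a sketch of the classic margin-based triplet loss on raw Python vectors:

```python
def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss.

    Penalizes the model unless the positive is closer to the anchor
    than the negative by at least `margin`; once that constraint is
    satisfied, the loss is exactly zero (the hinge).
    """
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)
```

In practice such losses are computed over mined triplets within a mini-batch; the benchmarking question is how much the choice of loss and miner actually matters once hyperparameters are tuned fairly.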
sentencepiece
Unsupervised text tokenizer for Neural Network-based text generation.
Survey_of_Deep_Metric_Learning
A comprehensive survey of deep metric learning and related works
TennisVideoAnalysis
Video analysis tool for tennis
transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
trl
Train transformer language models with reinforcement learning.
tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.