Yujie Qian's starred repositories
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
pytorch-image-models
PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
albumentations
Fast image augmentation library and an easy-to-use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
attention-is-all-you-need-pytorch
A PyTorch implementation of the Transformer model in "Attention is All You Need".
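The core operation of the paper this repo implements, scaled dot-product attention, can be sketched in a few lines of plain PyTorch (a simplified illustration, not this repository's code; masking and multi-head projection are omitted):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = torch.randn(2, 5, 16)  # (batch, num_queries, d_k)
k = torch.randn(2, 7, 16)  # (batch, num_keys, d_k)
v = torch.randn(2, 7, 32)  # (batch, num_keys, d_v)

out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 32])
```

Each query attends over all seven keys, so the output keeps the query length (5) but takes the value dimension (32).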
Distributional-Signatures
Code for the paper "Few-shot Text Classification with Distributional Signatures" (ICLR 2020)
DeepRL-InformationExtraction
Code for the paper "Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning" http://arxiv.org/abs/1603.07954
OChemR
From a chemical reaction image, detect and classify molecules, text, and arrows using the vision-transformer-based detector DETR; comparisons with well-established CNN detectors (RetinaNet and Faster R-CNN) are also provided. The detections are then translated into text via OCR or into SMILES, and the direction of the reaction is learned and preserved in the output files.
Clipboard-to-SMILES-Converter
Converts clipboard content to SMILES, and much more
DLaaS-Getting-StartedTutorial
This repo is for the MIT-IBM Watson AI Lab to use DLaaS in IBM Watson Studio. The demo uses PyTorch to train a VGG network on CIFAR-10. The code is forked from kuangliu (https://github.com/kuangliu/pytorch-cifar) and adapted for submitting the model to IBM Watson Machine Learning on Watson Studio for training. It is meant to get you started quickly. We hope you have some fun running your first models in IBM Cloud.