Edwin Arkel Rios's starred repositories
pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Includes training, evaluation, inference, and export scripts, plus pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
pytorch-grad-cam
Advanced AI explainability for computer vision. Supports CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more.
pytorch-summary
Model summary in PyTorch similar to `model.summary()` in Keras
awesome-ai-residency
List of AI Residency Programs
ml-surveys
📋 Survey papers summarizing advances in deep learning, NLP, CV, graphs, reinforcement learning, recommendations, etc.
clip-retrieval
Easily compute CLIP embeddings and build a CLIP retrieval system with them
Transformer-Explainability
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
lightning-bolts
Toolbox of models, callbacks, and datasets for AI/ML researchers.
Transformer-in-Vision
Recent Transformer-based CV and related works.
awesome-image-translation
A collection of awesome resources on image-to-image translation.
awesome-vision-language-pretraining-papers
Recent Advances in Vision-and-Language Pre-trained Models (VL-PTMs)
AwesomeAnimeResearch
Papers, repositories, and other data about anime or manga research. Please let me know if you have information that the list does not include.
English-for-Programmers
"English for Programmers": using English to improve code readability
nlp-phd-global-equality
A repo of open resources & information to help people succeed in a CS PhD and a career in AI / NLP
vit-explain
Explainability for Vision Transformers
PyTorch-Pretrained-ViT
Vision Transformer (ViT) in PyTorch
csGraduateFellowships
A curated list of fellowships for graduate students in Computer Science and related fields.
wsolevaluation
Evaluating Weakly Supervised Object Localization Methods Right (CVPR 2020)
Danbooru2018AnimeCharacterRecognitionDataset
An open-source dataset, based on the Danbooru2018 dataset, for anime character recognition, with 1M images and 70k characters.
danbooru-pretrained
Pretrained PyTorch models for the Danbooru2018 dataset
dl-eeg-review
Supplementary material for a systematic literature review on deep learning and EEG.
pytorch-fgvc-dataset
PyTorch custom dataset APIs -- CUB-200-2011, Stanford Dogs, Stanford Cars, FGVC Aircraft, NABirds, Tiny ImageNet, iNaturalist2017
prim-benchmarks
PrIM (Processing-In-Memory benchmarks) is the first benchmark suite for a real-world processing-in-memory (PIM) architecture. PrIM was developed to evaluate, analyze, and characterize the first publicly available real-world PIM architecture, the UPMEM PIM architecture. Described by Gómez-Luna et al. (https://arxiv.org/abs/2105.03814).