Jirasak Buranathawornsom's repositories
AT82.01
Repository containing work related to the class AT82.01 Computer Programming for Data Science and Artificial Intelligence from AIT
AT82.05-NLU-Project
Repository containing the code to reproduce the experiments for AIT's AT82.05 Artificial Intelligence: Natural Language Understanding
Best-README-Template
An awesome README template to jumpstart your projects!
DevToys
A Swiss Army knife for developers.
Efficient-AI-Backbones
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
health-checks
Scripts that check the health of my computers.
HyperbolicImageSegmentation
Hyperbolic Image Segmentation, CVPR 2022
JuliaTutorials
Learn Julia via interactive tutorials!
MICCAI22_ADN
The implementation of our MICCAI22 paper "Asymmetry Disentanglement Network for Interpretable Acute Ischemic Stroke Infarct Segmentation in Non-Contrast CT Scans".
mildlyoverfitted
Paper implementations from scratch and machine learning tutorials
model-vs-human
Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral)
NLP
Repository for the course Natural Language Processing at the Asian Institute of Technology, mostly covering theoretical aspects of NLP, with some coding assignments using PyTorch
pytorch-toolbelt
PyTorch extensions for fast R&D prototyping and Kaggle farming
RobustViT
[NeurIPS 2022] Official PyTorch implementation of Optimizing Relevance Maps of Vision Transformers Improves Robustness. This code allows finetuning the explainability maps of Vision Transformers to enhance robustness.
ToMe
A method to increase the speed and lower the memory footprint of existing vision transformers.
Transformer-Explainability
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network, including examples for DETR and VQA.
validations
blue kale validation repo for training