
MLRF

Machine Learning Research Flashcards (for Anki)

Description

MLRF is a collection of machine learning flashcards that can be used with Anki. The flashcards in this repository are associated with scientific research papers in the field of machine learning.

As a machine learning researcher, I read a lot of papers to keep up with the state of the art. However, for many papers, all I could recall months later when a related topic came up was "Oh, I read a paper about that", without being able to give much more detail. Intrigued by Michael Nielsen's article "Augmenting Long-term Memory", I started using Anki.

The flashcards in this repository are not a replacement for reading the actual papers, but rather an additional resource to retain the knowledge from them. Initially, the papers covered here are mainly selected based on my own interests and research topics. By open-sourcing this repository, however, I invite everyone interested in using Anki for machine learning papers to collaborate on these flashcards.

Preview

[Preview image: example MLRF flashcards as they appear in Anki]

Papers

Title | URL | Flashcards
3D Gaussian Splatting for Real-Time Radiance Field Rendering [arXiv] [gaussian_splatting.csv]
A Simple Framework for Contrastive Learning of Visual Representations [arXiv] [simclr.csv]
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour [arXiv] [accurate_large_minibatch_sgd.csv]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [arXiv] [vision_transformers.csv]
Anchor Pruning for Object Detection [arXiv] [anchor_pruning.csv]
Attention Is All You Need [arXiv] [attention_is_all_you_need.csv]
Auto-Encoding Variational Bayes [arXiv] [vae.csv]
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers [arXiv] [autoslim.csv]
Barlow Twins: Self-Supervised Learning via Redundancy Reduction [arXiv] [barlow_twins.csv]
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift [arXiv] [batchnorm.csv]
DETR: End-to-End Object Detection with Transformers [arXiv] [detr.csv]
DUSt3R: Geometric 3D Vision Made Easy [arXiv] [dust3r.csv]
Deep Reinforcement Learning with Double Q-learning [arXiv] [double_qlearning.csv]
Denoising Diffusion Probabilistic Models [arXiv] [diffusion_models.csv]
Distilling the Knowledge in a Neural Network [arXiv] [knowledge_distillation.csv]
Extracting and Composing Robust Features with Denoising Autoencoders [ICML] [denoising_autoencoder.csv]
FaceNet: A Unified Embedding for Face Recognition and Clustering [arXiv] [facenet.csv]
Focal Loss for Dense Object Detection [arXiv] [retinanet.csv]
High-Resolution Image Synthesis with Latent Diffusion Models [arXiv] [stable_diffusion.csv]
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [arXiv] [instant_ngp.csv]
Learned Thresholds Token Merging and Pruning for Vision Transformers [arXiv] [ltmp.csv]
Learning Transferable Visual Models From Natural Language Supervision [arXiv] [clip.csv]
LoRA: Low-Rank Adaptation of Large Language Models [arXiv] [lora.csv]
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields [arXiv] [mipnerf360.csv]
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields [arXiv] [mipnerf.csv]
Mixture-of-Experts with Expert Choice Routing [arXiv] [expert_choice.csv]
MobileNetV2: Inverted Residuals and Linear Bottlenecks [arXiv] [mobilenetv2.csv]
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [arXiv] [mobilenetv1.csv]
Multi-Scale Context Aggregation by Dilated Convolutions [arXiv] [multi_scale_context_dilated_convolutions.csv]
Multiple Choice Learning: Learning to Produce Multiple Structured Outputs [NeurIPS] [multiple_choice_learning.csv]
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [arXiv] [nerf.csv]
On Network Design Spaces for Visual Recognition [arXiv] [network_design_spaces.csv]
Once-for-All: Train One Network and Specialize it for Efficient Deployment [arXiv] [once_for_all.csv]
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer [arXiv] [moe.csv]
Playing Atari with Deep Reinforcement Learning [arXiv] [deep_rl.csv]
Proximal Policy Optimization Algorithms [arXiv] [ppo.csv]
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers [arXiv] [setr.csv]
SSD: Single Shot MultiBox Detector [arXiv] [ssd.csv]
Segment Anything [arXiv] [segment_anything.csv]
Slimmable Neural Networks [arXiv] [slimmable_neural_networks.csv]
Squeeze-and-Excitation Networks [arXiv] [squeeze_and_excitation.csv]
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity [arXiv] [switch_transformer.csv]
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks [arXiv] [lottery_ticket.csv]
Token Merging: Your ViT But Faster [arXiv] [tome.csv]
Understanding the Effective Receptive Field in Deep Convolutional Neural Networks [arXiv] [understanding_receptive_field.csv]
Universally Slimmable Networks and Improved Training Techniques [arXiv] [universally_slimmable_networks.csv]
V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation [arXiv] [dice_loss.csv]

Usage

The flashcards in this repository are made for Anki, but they are stored here as CSV files, so you can also use them as input to a different flashcard system. If you edit the raw CSVs and want to sync them back here, you can leave the guid column of new cards empty. To import the cards into Anki, you also need to install the Anki add-on CrowdAnki.
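As an illustration of what adding a new card to the source format might look like, here is a minimal pandas sketch that appends a card with an empty guid to one of the source CSVs. The column names used here (guid, front, back) are assumptions for illustration only; check the existing CSVs in this repository for the actual layout.

    # Minimal sketch (assumed column names): append a new card with an empty guid.
    import pandas as pd

    cards = pd.read_csv("nerf.csv")
    new_card = {
        "guid": "",  # leave the guid empty for new cards; it is filled in when syncing
        "front": "What does NeRF use to represent a scene?",
        "back": "A continuous radiance field parameterized by an MLP, queried at 5D coordinates (3D position + 2D viewing direction).",
    }
    cards = pd.concat([cards, pd.DataFrame([new_card])], ignore_index=True)
    cards.to_csv("nerf.csv", index=False)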

Once you have cloned this repository, you can use tools/source_to_anki.py to create a deck that can be imported into Anki. tools/anki_to_source.py can be used to update or add your own cards to this repository.

In order to run these scripts you need to install Brain Brew and pandas: `pip install brain-brew pandas`.

source_to_anki

usage: python tools/source_to_anki.py [-h] [--include INCLUDE [INCLUDE ...]] [--exclude EXCLUDE [EXCLUDE ...]]

Tool to convert the source format of this repository to a crowdAnki folder that can be imported into Anki.

optional arguments:
  -h, --help            show this help message and exit
  --include INCLUDE [INCLUDE ...]
                        You can convert only part of this repository by using this argument with a list of the csv files to convert. E.g. `--include ofa.csv mobilenetv2.csv`
  --exclude EXCLUDE [EXCLUDE ...]
                        Exclude certain papers in the crowdAnki export folder. E.g. `--exclude ofa.csv mobilenetv2.csv`

The resulting export folder will be created in MLRF/build/. To add the cards to Anki, do the following:

  • Open Anki and make sure your devices are all synchronised.
  • In the File menu, select CrowdAnki: Import from disk.
  • Browse for and select MLRF/build/

Recommended next steps:

  • Review all cards in the MLRF deck and delete the cards you're not interested in (see also TODO).
  • Move the cards to a deck of your own. (This allows you to use your own card scheduling steps.)

anki_to_source

  • Open Anki and make sure your devices are all synchronised.
  • In the File menu, select CrowdAnki: Snapshot, and remember the location where it is stored.

usage: python tools/anki_to_source.py [-h] crowdanki_folder

Tool to convert crowdAnki export folder to the format of this repository.

positional arguments:
  crowdanki_folder  Location of the crowdAnki export folder.
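For example, assuming the CrowdAnki snapshot was written to ~/AnkiSnapshots/MLRF (a hypothetical location; use the folder you noted when taking the snapshot):

    python tools/anki_to_source.py ~/AnkiSnapshots/MLRF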

Important notes:

  • This tool only extracts cards that use the paper_basic note model from this repository. This means that you can export a deck that contains more than just your machine learning research flashcards.
  • paper_basic cards that are tagged with DoNotSync are ignored.
  • Tags are not copied to this repository.

About

Machine Learning Research Flashcards (for Anki)

https://maxim.bonnaerens.com/mlrf

