
O.D.I.N

Organized Digital Intelligent Network (O.D.I.N)

O.D.I.N is a framework for building "Organized Digital Intelligent Networks".

End-to-end design: versatile, plug-and-play, and built to minimize repetitive work.

This repo contains the most comprehensive collection of variational autoencoder implementations, together with a benchmark for disentangled representation learning.

from odin.fuel import MNIST
from odin.networks import get_networks
from odin.bay.vi import VariationalAutoencoder

# load the dataset and create the training partition
ds = MNIST()
train = ds.create_dataset(partition='train')
# retrieve the optimized network architectures for MNIST
networks = get_networks(ds, is_hierarchical=False, is_semi_supervised=False)

# create, build, and train the VAE
vae = VariationalAutoencoder(**networks)
vae.build(ds.full_shape)
vae.fit(train, max_iter=10000)

TOC

  1. VAE
  2. Hierarchical VAE
  3. Semi-supervised VAE
  4. Disentanglement Gym
  5. Faster Classical ML (automatically select GPU implementation)

Variational Autoencoder (VAE)

Model | Reference/Description | Implementation
  1. Vanilla VAE
(Kingma et al. 2014) "Auto-Encoding Variational Bayes" [Paper] [Code][Example]
  2. Beta-VAE
(Higgins et al. 2016) "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" [Paper] [Code][Example]
  3. BetaGamma-VAE
Customized version of Beta-VAE that supports re-weighting both the reconstruction and the regularization terms: \(\mathrm{ELBO}=\gamma \cdot \mathbb{E}_q[\log p(x|z)] - \beta \cdot \mathrm{KL}(q(z|x)\,\|\,p(z))\) (see the sketch after this table) [Code][Example]
  4. Annealing VAE
(Sønderby et al. 2016) "Ladder Variational Autoencoders" [Code][Example]
  5. CyclicalAnnealing VAE
(Fu et al. 2019) "Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing" [Code][Example]
  6. BetaTC-VAE
(Chen et al. 2019) "Isolating Sources of Disentanglement in Variational Autoencoders" (regularizes the latents' Total Correlation) [Code][Example]
  7. Controlled Capacity Beta-VAE
(Burgess et al. 2018) "Understanding disentangling in beta-VAE" [Code][Example]
  8. FactorVAE
(Kim et al. 2018) "Disentangling by Factorising" [Code][Example]
  9. AuxiliaryVAE
(Maaløe et al. 2016) "Auxiliary Deep Generative Models" [Code][Example]
  10. HypersphericalVAE
(Davidson et al. 2018) "Hyperspherical Variational Auto-Encoders" [Code][Example]
  11. PowersphericalVAE
(De Cao et al. 2020) "The Power Spherical distribution" [Code][Example]
  12. DIPVAE
(Kumar et al. 2018) "Variational Inference of Disentangled Latent Concepts from Unlabeled Observations" (I: only_mean=True; II: only_mean=False) [Code][Example]
  13. InfoVAE
(Zhao et al. 2018) "InfoVAE: Balancing Learning and Inference in Variational Autoencoders" [Code][Example]
  14. MIVAE
(Ducau et al. 2017) "Mutual Information in Variational Autoencoders" (maximizes the mutual information I(X;Z)) [Code][Example]
  15. irmVAE
(Jing et al. 2020) "Implicit Rank-Minimizing Autoencoder" (implicit rank minimizer) [Code][Example]
  16. ALDA
(Figurnov et al. 2018) "Implicit Reparameterization Gradients" (Amortized Latent Dirichlet Allocation: a VAE with Dirichlet latents for topic modeling) [Code][Example]
  17. TwoStageVAE
(Dai et al. 2019) "Diagnosing and Enhancing VAE Models" [Code][Example]
  18. VampriorVAE
(Tomczak et al. 2018) "VAE with a VampPrior" [Code][Example]
  19. VQVAE
(Oord et al. 2017) "Neural Discrete Representation Learning" [Code][Example]
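
As a concrete reading of the BetaGamma-VAE objective above (entry 3), here is a minimal sketch of the weighted ELBO; the function and tensor names are illustrative, not odin's internals:

import tensorflow as tf

# dummy per-sample terms standing in for a model's outputs
log_px_z = tf.constant([-112.3, -98.7])  # reconstruction term E_q[log p(x|z)]
kl_qz_pz = tf.constant([13.1, 11.4])     # regularization term KL(q(z|x) || p(z))

def beta_gamma_elbo(llk, kl, gamma=1.0, beta=1.0):
    # ELBO = gamma * E_q[log p(x|z)] - beta * KL(q(z|x) || p(z))
    return gamma * llk - beta * kl

# training minimizes the negative ELBO, averaged over the batch
loss = -tf.reduce_mean(beta_gamma_elbo(log_px_z, kl_qz_pz, gamma=10.0, beta=2.0))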

Hierarchical VAE

Model | Reference/Description | Implementation
  1. LadderVAE
(Sønderby et al. 2016) "Ladder Variational Autoencoders" (see the sketch after this table) [Code][Example]
  2. BidirectionalVAE
(Kingma et al. 2016) "Improved Variational Inference with Inverse Autoregressive Flow" (hierarchical VAE with bidirectional inference) [Code][Example]
  3. ParallelVAE
(Zhao et al. 2017) "Learning Hierarchical Features from Generative Models" (multiple latents connect the encoder and decoder in parallel, from bottom to top) [Code][Example]
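
Following the quickstart above, a hierarchical model can be trained by flipping the is_hierarchical flag of get_networks. A minimal sketch; the LadderVAE import mirrors the VariationalAutoencoder import above and is an assumption, not a confirmed export:

from odin.fuel import MNIST
from odin.networks import get_networks
from odin.bay.vi import LadderVAE  # assumed export, by analogy with VariationalAutoencoder

ds = MNIST()
train = ds.create_dataset(partition='train')
# request hierarchical architectures for MNIST
networks = get_networks(ds, is_hierarchical=True, is_semi_supervised=False)

vae = LadderVAE(**networks)
vae.build(ds.full_shape)
vae.fit(train, max_iter=10000)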

Semi-supervised VAE

Model | Reference/Description | Implementation
  1. Semi-supervised FactorVAE
Same as FactorVAE, but the discriminator also estimates the density of the labeled and unlabeled data (as in semi-supervised GAN) [Code][Example]
  2. MultiheadVAE
VAE with multiple decoders for different tasks [Code][Example]
  3. SkiptaskVAE
VAE in which multiple task objectives directly constrain the latents [Code][Example]
  4. ConditionalM2VAE
(Kingma et al. 2014) "Semi-supervised Learning with Deep Generative Models" [Paper] [Code][Example]
  5. CCVAE (capture characteristic VAE)
(Joy et al. 2021) "Capturing Label Characteristics in VAEs" [Paper] [Code][Example]
  6. SemafoVAE
(Trung et al. 2021) "The transitive information theory and its application to deep generative models" [Paper] [Code][Example]

Disentanglement Gym

DisentanglementGym: a fast API for benchmarking on popular datasets with widely used disentanglement metrics (a hypothetical usage sketch follows the metric list below).

Supported datasets: ['shapes3d', 'dsprites', 'celeba', 'fashionmnist', 'mnist', 'cifar10', 'cifar100', 'svhn', 'cortex', 'pbmc', 'halfmoons']

Supported metrics:

  • Correlation: 'spearman', 'pearson', 'lasso'
  • BetaVAE score
  • FactorVAE score
  • Estimated Mutual Information
  • MIG (Mutual Information Gap)
  • SAP (Separated Attribute Prediction)
  • RDS (relative disentanglement strength)
  • DCI (Disentanglement, Completeness, Informativeness)
  • FID (Frechet Inception Distance)
  • Total Correlation
  • Clustering scores: Adjusted Rand Index, Adjusted Mutual Info, Normalized Mutual Info, Silhouette score.
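
A hypothetical usage sketch (the import path, constructor signature, and method names below are assumptions for illustration; consult the documentation for the actual API):

from odin.bay.vi import DisentanglementGym  # assumed import path

# hypothetical API: attach a trained VAE (e.g. from the quickstart above)
# to a supported dataset, then request a subset of the metrics listed above
gym = DisentanglementGym(dataset='shapes3d', vae=vae)
scores = gym.evaluate(metrics=['mig', 'sap', 'dci'])  # hypothetical method
print(scores)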

Fast API for classical ML

Automatically accelerated by RAPIDS.ai (i.e., the GPU implementation is selected automatically when available), as in the sketch below.
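
A minimal sketch of this dispatch pattern (not odin's actual internals): try the RAPIDS implementation first, and fall back to scikit-learn on CPU:

import numpy as np

try:
    from cuml.manifold import TSNE  # RAPIDS GPU implementation
except ImportError:
    from sklearn.manifold import TSNE  # CPU fallback

X = np.random.rand(1000, 64).astype('float32')
X_2d = TSNE(n_components=2).fit_transform(X)  # same call either way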

Dimension Reduction

  • t-SNE [Code]
  • UMAP [Code]
  • PCA, Probabilistic PCA, Supervised Probabilistic PCA, MiniBatch PCA, Randomized PCA [Code]
  • Probabilistic Linear Discriminant Analysis (PLDA) [Code]
  • iVector (GPU accelerated) [Code]

GMM

  • GMM classifier [Code]
  • Probabilistic embedding with GMM [Code]
  • Universal Background Model (GMM-Tmatrix) [Code]

Clustering