Repositories under the vae-implementation topic:
A Collection of Variational Autoencoders (VAE) in PyTorch.
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution. The accompanying book also covers the continuous Binomial and continuous Trinomial distributions.
TensorFlow 2.x implementation of the beta-TCVAE (arXiv:1802.04942).
An official repository for a VAE tutorial of Probabilistic Modelling and Reasoning (2023/2024) - a University of Edinburgh master's course.
PyTorch implementation of the Gaussian Mixture Variational Autoencoder (GMVAE)
Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), and Generative Normalizing Flows (NFs) are among the best-known and most powerful deep generative models.
Towards Generative Modeling from (variational) Autoencoder to DCGAN
Implementation of the variational autoencoder with PyTorch and Fastai
Python implementation of N-gram Models, Log-Linear and Neural Linear Models, Back-propagation and Self-Attention, HMM, PCFG, CRF, EM, and VAE
A re-implementation of the Sentence VAE paper, Generating Sentences from a Continuous Space
Autoencoders (Standard, Convolutional, Variational), implemented in TensorFlow
Probabilistic framework for solving Visual Dialog
PyTorch implementations of VAE and CVAE, trained on MNIST
Classical Machine Learning Algorithms
ColorVAE is a Vanilla Auto-Encoder (V.A.E.) that can be used to add colour to black-and-white images.
This repository contains the code, data and scripts used to write the Bachelor Thesis "Latent representations for traditional music analysis and generation".
This repo is devoted to the practicals of the Deep Learning course (5204DLFV6Y) taught at the University of Amsterdam, Fall 2020.
An implementation of a Variational Auto-encoder with t-SNE visualization on the MNIST dataset.
Solutions for Advanced Image Analysis course assignments, featuring model designs for image summation and generation with MNIST, and style transfer using CycleGAN with MNIST and SVHN datasets.
Testing the reproducibility of the paper MixSeq. Under the assumption that macroscopic time series follow a mixture distribution, the authors hypothesise that a lower variance of the constituent latent mixture components could improve the estimation of macroscopic time series.
Variational Autoencoder (VAE) trained on MNIST
Running VAEs on mobile and IoT devices using TFLite.
A CNN implementation to classify images. This repo also contains Japanese coin validation (with binaries) and an MNIST detection challenge.
A repository for generating synthetic data (images) using various DL/ML models.
Basic implementation of VAE
Utilized VAE (Variational Autoencoder) and CGAN (Conditional Generative Adversarial Network) models to generate synthetic chatter signals, addressing the challenge of imbalanced data in turning operations, and compared the performance of the synthetic chatter signals.
Handwritten digit generation with VAE and GAN.
Convolutional Variational Autoencoder on VizdoomTakeCover
Topics include function approximation, learning dynamics, using learned dynamics in control and planning, handling uncertainty in learned models, learning from demonstration, and model-based and model-free reinforcement learning.
A simple implementation of variational autoencoders on the MNIST dataset in TensorFlow.
A variational autoencoder can be defined as an autoencoder whose training is regularised to avoid overfitting and to ensure that the latent space has good properties enabling the generative process.
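The regularisation mentioned above is usually a KL-divergence term that pulls each latent posterior N(μ, σ²) towards the standard normal prior N(0, 1). A minimal, framework-free sketch of that closed-form term (the example values are hypothetical, not from any repository above):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL divergence KL(N(mu, sigma^2) || N(0, 1)) for a
    diagonal Gaussian posterior, summed over latent dimensions:
    0.5 * (sigma^2 + mu^2 - 1 - log sigma^2) per dimension."""
    return sum(
        0.5 * (math.exp(lv) + m * m - 1.0 - lv)
        for m, lv in zip(mu, log_var)
    )

# A posterior that already matches the prior incurs zero penalty...
assert kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]) == 0.0

# ...while a shifted or rescaled posterior is penalised.
penalty = kl_to_standard_normal([1.0, -0.5], [0.5, 0.0])
assert penalty > 0.0
```

In a full VAE this penalty is added to the reconstruction loss, trading off fidelity against a well-behaved latent space.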
A Variational Autoencoder (VAE) that generates human faces, trained on the CelebA dataset. A VAE is a generative model that learns to represent high-dimensional data (such as images) in a lower-dimensional latent space and then generates new data from that space.
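The generation step described above boils down to sampling a latent code from the prior and passing it through the decoder. A hedged sketch of that pipeline, where the decoder is a stand-in affine map with made-up weights rather than a CelebA-scale network:

```python
import math
import random

random.seed(0)

def sample_latent(dim):
    """Draw z ~ N(0, I) from the latent prior -- the starting point
    of VAE generation."""
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def toy_decoder(z, weights, bias):
    """Stand-in decoder: a single affine map plus sigmoid, mapping a
    latent code to pixel intensities in (0, 1). A real face VAE would
    use a deep convolutional decoder here."""
    out = []
    for row, b in zip(weights, bias):
        pre = sum(w * zi for w, zi in zip(row, z)) + b
        out.append(1.0 / (1.0 + math.exp(-pre)))
    return out

z = sample_latent(2)  # sample from the prior
pixels = toy_decoder(z, weights=[[0.5, -0.3], [0.2, 0.8]], bias=[0.0, 0.1])
assert all(0.0 < p < 1.0 for p in pixels)  # sigmoid keeps outputs in (0, 1)
```

Every fresh draw of `z` decodes to a different output, which is what makes the latent space usable for generation.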
Simple VAE face generator