Repositories under the vae-implementation topic:
A Collection of Variational Autoencoders (VAE) in PyTorch.
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Implementation of mutual learning model between VAE and GMM.
C programs for the simulator, transformation, and test statistic of the continuous Bernoulli distribution; also covers the continuous binomial and continuous trinomial distributions.
Dirichlet Variational Auto-Encoder in PyTorch
PyTorch implementation of the Gaussian Mixture Variational Autoencoder (GMVAE)
TensorFlow 2.x implementation of the beta-TCVAE (arXiv:1802.04942).
Implementation of LiteVAE
An official repository for a VAE tutorial of Probabilistic Modelling and Reasoning - a University of Edinburgh master's course.
Symbol emergence using a Variational Auto-Encoder and Gaussian Mixture Model (Inter-GMM-VAE): symbol emergence from real images using VAEs
Topics include function approximation, learning dynamics, using learned dynamics in control and planning, handling uncertainty in learned models, learning from demonstration, and model-based and model-free reinforcement learning.
Python implementation of N-gram Models, Log linear and Neural Linear Models, Back-propagation and Self-Attention, HMM, PCFG, CRF, EM, VAE
Implementation of the variational autoencoder with PyTorch and Fastai
Variational Auto Encoders (VAEs), Generative Adversarial Networks (GANs), and Generative Normalizing Flows (NFs) are among the best-known and most powerful deep generative models.
Implementation of a CVAE, trained on faces from the UTKFace dataset to produce synthetic faces with a given degree of happiness/smiling.
Towards Generative Modeling from (variational) Autoencoder to DCGAN
A re-implementation of the Sentence VAE paper, Generating Sentences from a Continuous Space
Used a VAE (Variational Autoencoder) and a CGAN (Conditional Generative Adversarial Network) to generate synthetic chatter signals, addressing the challenge of imbalanced data in turning operations, and compared the performance of the synthetic chatter signals.
Autoencoders (standard, convolutional, variational), implemented in TensorFlow
The GUI for RaptGen developed with React and FastAPI
Probabilistic framework for solving Visual Dialog
VAE and CVAE implementations in PyTorch, trained on MNIST
ColorVAE is a Vanilla Auto Encoder (V.A.E.) which can be used to add colours to black and white images.
Implementation of VAEs (Variational Autoencoders) with PyTorch.
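Many of the entries above implement the same core VAE mechanism. As a loose illustration (not taken from any listed repo), the reparameterization trick and the closed-form Gaussian KL term can be sketched in NumPy; the shapes and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); moves sampling outside the
    # computation graph so gradients can flow through mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

mu = np.zeros((4, 2))        # batch of 4, latent dimension 2
log_var = np.zeros((4, 2))   # sigma = 1 everywhere
z = reparameterize(mu, log_var)
kl = kl_divergence(mu, log_var)
# KL is exactly 0 when the approximate posterior equals the N(0, I) prior
```

In a full VAE the ELBO loss adds a reconstruction term (e.g. binary cross-entropy of the decoder output) to this KL term.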
Testing the reproducibility of the MixSeq paper. Under the assumption that macroscopic time series follow a mixture distribution, the authors hypothesise that lower variance of the constituent latent mixture components improves estimation of the macroscopic time series.
Built a model to create highlights/summaries of a given video. The results of this study show that, with a structural similarity index (SSIM) of 98%, the proposed technique is quite successful in choosing keyframes that are both informative and distinct from the original video.
Running VAEs on mobile and IoT devices using TFLite.
PyTorch-based pipeline that trains a convolutional variational autoencoder on cat images, optionally tunes hyperparameters with Ray Tune, and samples new images by fitting a Gaussian Mixture Model in the latent space.
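The latent-space sampling step this pipeline describes, drawing new codes from a Gaussian Mixture Model fitted on encoded images, can be sketched as follows. The mixture parameters here are hypothetical stand-ins for values a fitting routine (e.g. scikit-learn's `GaussianMixture`) would estimate from real latents:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical GMM parameters over a 2-D latent space (2 components)
weights = np.array([0.3, 0.7])                    # mixture weights, sum to 1
means = np.array([[-2.0, 0.0], [2.0, 0.0]])       # component means
covs = np.array([np.eye(2) * 0.5, np.eye(2) * 0.5])  # component covariances

def sample_gmm(n):
    # Pick a mixture component per sample, then draw from that Gaussian
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])

z = sample_gmm(16)  # latent codes; a trained decoder would map these to images
```

Sampling from a fitted GMM rather than the unit-Gaussian prior tends to avoid low-density regions of the aggregate posterior, which is why pipelines like this one use it.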
Leveraging low-dimensional variational autoencoders to identify latent representations as dimensionality-reduction embeddings of single-cell data
A PyTorch implementation of multimodal VRNN and VAE.
Unsupervised MRD detection in flow cytometry data using Variational AutoEncoder (VAE) and Gaussian Mixture Model (GMM).
AstroVAE uses a Variational Autoencoder (VAE) to compress high-dimensional data from CAMELS cosmological simulations. It creates a compact, meaningful latent-space representation of complex astrophysical data, proving more effective at preserving critical scientific information than traditional methods.