bampt's starred repositories
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
deepmind-research
This repository contains implementations and illustrative code to accompany DeepMind publications
PyTorch-VAE
A Collection of Variational Autoencoders (VAE) in PyTorch.
lightning-hydra-template
PyTorch Lightning + Hydra. A very user-friendly template for ML experimentation. ⚡🔥⚡
Self-Attention-GAN
PyTorch implementation of Self-Attention Generative Adversarial Networks (SAGAN)
TimeSformer
The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?"
mpc.pytorch
A fast and differentiable model predictive control (MPC) solver for PyTorch.
LaPreprint
📝 A nicely formatted LaTeX preprint template
geometric-gnn-dojo
Geometric GNN Dojo provides unified implementations and experiments to explore the design space of Geometric Graph Neural Networks.
Efficient-VDVAE
Official PyTorch and JAX implementation of "Efficient-VDVAE: Less is more"
Stabilizing_GANs
Code for the NIPS17 paper "Stabilizing Training of Generative Adversarial Networks through Regularization"
Frechet-Inception-Distance
CPU/GPU/TPU implementation of the Fréchet Inception Distance
TrafficBots
TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction. ICRA 2023. Code is now available at https://github.com/zhejz/TrafficBots
ComplexAutoEncoder
Code for the paper: Complex-Valued Autoencoders for Object Discovery
Coupled-VAE-Improved-Robustness-and-Accuracy-of-a-Variational-Autoencoder
We present a coupled Variational Auto-Encoder (VAE) method that improves the accuracy and robustness of probabilistic inference on represented data. The new method models the dependency between input feature vectors (images) and weighs outliers with a higher penalty by generalizing the original loss function to the coupled entropy function, using the principles of nonlinear statistical coupling. We evaluate the performance of the coupled VAE model on the MNIST dataset. Compared with the traditional VAE algorithm, the output images generated by the coupled VAE method are clearer and less blurry. Visualizing the input images embedded in the 2D latent variable space gives deeper insight into the structure of the new model with the coupled loss function: the latent variable has a smaller deviation, and the outputs are generated from a more compact latent space. We analyze the histograms of probabilities for the input images using generalized mean metrics: an increased geometric mean shows that the average likelihood of the input data is improved, and an increase in the -2/3 mean, which is sensitive to outliers, indicates improved robustness. The decisiveness, measured by the arithmetic mean of the likelihoods, is unchanged.
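The generalized mean metrics this description relies on (arithmetic mean for decisiveness, geometric mean for average likelihood, and the outlier-sensitive -2/3 mean for robustness) are all instances of the power mean. A minimal sketch, with a hypothetical set of per-image likelihoods for illustration (the values are not from the paper):

```python
import math

def generalized_mean(values, p):
    """Power mean M_p of positive values.

    p = 1 gives the arithmetic mean, p -> 0 the geometric mean,
    and negative p weighs small values (outliers) more heavily.
    """
    n = len(values)
    if p == 0:
        # Limit p -> 0: geometric mean, computed via logs for stability.
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

# Hypothetical per-image likelihoods, including one outlier (0.05).
likelihoods = [0.9, 0.8, 0.05, 0.7]

decisiveness = generalized_mean(likelihoods, 1)       # arithmetic mean
avg_likelihood = generalized_mean(likelihoods, 0)     # geometric mean
robustness = generalized_mean(likelihoods, -2 / 3)    # sensitive to outliers
```

By the power mean inequality, M_{-2/3} ≤ M_0 ≤ M_1, so the robustness metric is dragged down by the outlier far more than the arithmetic mean is, which is why an increase in the -2/3 mean signals improved robustness.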