PELS-VAE: Physics-Enhanced Latent Space Variational Autoencoder

This repository contains the code used for training and testing a conditional Variational Autoencoder (cVAE) that generates physically informed light curves of periodic variable stars. It accompanies the article Martínez-Palomera et al. 2020.

Astronomical Time Series

Light curves are taken from OGLE 3, which contains the following variability classes:

  • Eclipsing Binaries
  • Anomalous Cepheids
  • Cepheids
  • Type II Cepheids
  • RR Lyrae
  • Long Period Variables
  • Ellipsoidal Variables
  • Delta Scuti

Training data is available here.

Samples

Light Curve samples

Gaia DR2 parameters

Joint distribution

Usage

Use vae_main.py to train a cVAE model with the following parameters:

  --dry-run             Only load data and initialize model [False]
  --machine             where the code is running ([Jorges-MBP], colab, exalearn)
  --data                data used for training (OGLE3)
  --use-err             use magnitude errors ([T],F)
  --cls                 drop or select only one class
                        ([all],drop_"vartype",only_"vartype")
  --lr                  learning rate [1e-4]
  --lr-sch              learning rate scheduler ([None], step, exp, cosine,
                        plateau)
  --beta                beta factor for latent KL div ([1],step)
  --batch-size          batch size [128]
  --num-epochs          total number of training epochs [150]
  --cond                label conditional VAE (F,[T])
  --phy                 physical parameters to use for conditioning ([],[tm])
  --latent-dim          dimension of latent space [6]
  --latent-mode         whether to sample from a 3d or 2d tensor
                        ([repeat],linear,convt)
  --arch                architecture for Enc & Dec ([tcn],lstm,gru)
  --transpose           use transpose convolution in Dec ([F],T)
  --units               number of hidden units [32]
  --layers              number of layers/levels for lstm/tcn [5]
  --dropout             dropout for lstm/tcn layers [0.2]
  --kernel-size         kernel size for tcn conv, use odd ints [5]
  --comment             extra comments

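For example, a typical training run on the OGLE3 data could look like the following (the flag values are illustrative, taken from the defaults and options listed above, not a prescribed configuration):

  python vae_main.py --data OGLE3 --arch tcn --latent-dim 6 --latent-mode repeat --cond T --phy tm --batch-size 128 --num-epochs 150 --lr 1e-4 --lr-sch cosine --beta 1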
Available architectures are TCN, LSTM, and GRU. The encoder and decoder each contain a sequential architecture followed by a set of dense layers.
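For orientation, the snippet below sketches how such an encoder-decoder can be wired together. It assumes a PyTorch-style implementation, uses a GRU as the sequential stage and the "repeat" latent mode, and the layer sizes, conditioning scheme, and loss are illustrative assumptions, not the repository's exact model.

  # Minimal sketch of a cVAE with a GRU encoder/decoder followed by dense layers.
  # Shapes, conditioning, and hyperparameters are illustrative assumptions only.
  import torch
  import torch.nn as nn

  class ConditionalVAE(nn.Module):
      def __init__(self, seq_len=100, n_feat=2, n_cond=3,
                   latent_dim=6, units=32, layers=5, dropout=0.2):
          super().__init__()
          self.seq_len = seq_len
          # sequential encoder (GRU) followed by dense layers -> mean / log-variance
          self.encoder_rnn = nn.GRU(n_feat, units, num_layers=layers,
                                    batch_first=True, dropout=dropout)
          self.fc_mu = nn.Linear(units + n_cond, latent_dim)
          self.fc_logvar = nn.Linear(units + n_cond, latent_dim)
          # decoder: dense layer, then a sequential (GRU) stage back to the light curve
          self.fc_dec = nn.Linear(latent_dim + n_cond, units)
          self.decoder_rnn = nn.GRU(units, units, num_layers=layers,
                                    batch_first=True, dropout=dropout)
          self.out = nn.Linear(units, n_feat)

      def encode(self, x, cond):
          _, h = self.encoder_rnn(x)                 # h: (layers, batch, units)
          h = torch.cat([h[-1], cond], dim=-1)       # condition on labels / physical params
          return self.fc_mu(h), self.fc_logvar(h)

      def reparameterize(self, mu, logvar):
          std = torch.exp(0.5 * logvar)
          return mu + std * torch.randn_like(std)

      def decode(self, z, cond):
          h = self.fc_dec(torch.cat([z, cond], dim=-1))
          h = h.unsqueeze(1).repeat(1, self.seq_len, 1)   # "repeat" latent mode
          h, _ = self.decoder_rnn(h)
          return self.out(h)

      def forward(self, x, cond):
          mu, logvar = self.encode(x, cond)
          z = self.reparameterize(mu, logvar)
          return self.decode(z, cond), mu, logvar

  def vae_loss(recon, x, mu, logvar, beta=1.0):
      # reconstruction term plus beta-weighted KL divergence of the latent space
      recon_term = nn.functional.mse_loss(recon, x, reduction="mean")
      kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
      return recon_term + beta * kld

  # usage: x is a batch of folded light curves (e.g. magnitude and magnitude error),
  # cond is a batch of conditioning vectors (labels / physical parameters)
  model = ConditionalVAE()
  x, cond = torch.randn(8, 100, 2), torch.randn(8, 3)
  recon, mu, logvar = model(x, cond)
  loss = vae_loss(recon, x, mu, logvar, beta=1.0)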

This trains the model and generates a TensorBoard event log (located in ./logs) of the training progress.
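Training progress can then be inspected by pointing TensorBoard at that directory:

  tensorboard --logdir ./logs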

Reconstruction examples

Light Curve reconstruction

Sources and inspiration


License: MIT

