TimyadNyda / Variational-Lstm-Autoencoder

LSTM variational auto-encoder for time series anomaly detection and feature extraction


Code Explanation

muhmmadzs opened this issue · comments

Hi, thank you very much for your contribution. Can you provide some detail about "intermediate_dim = 10, z_dim = 3"? An example dataset would be ideal; I mainly want some insight into how these values are selected.

Hi,

intermediate_dim is the dimension of the LSTM layer (its output has shape (batch_size, timesteps, intermediate_dim)).
z is the latent space, and 3 is its dimension. You can freely change these parameters :)
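
To make that concrete, here is a minimal Keras sketch (not the exact code from this repository) of an LSTM-VAE encoder showing where the two hyper-parameters appear; the input shape (50 timesteps, one feature) is just an illustrative assumption:

```python
# Minimal encoder sketch, assuming a Keras-style LSTM VAE (not this repo's exact code).
from tensorflow.keras import layers, Model

timesteps, n_features = 50, 1   # example input shape, assumed for illustration
intermediate_dim = 10           # LSTM output: (batch_size, timesteps, intermediate_dim)
z_dim = 3                       # dimension of the latent space z

x = layers.Input(shape=(timesteps, n_features))
h = layers.LSTM(intermediate_dim, return_sequences=True)(x)  # (batch, timesteps, 10)
h_last = layers.LSTM(intermediate_dim)(h)                    # summarise the sequence
z_mean = layers.Dense(z_dim)(h_last)                         # mean of q(z|x)
z_log_sigma = layers.Dense(z_dim)(h_last)                    # log-variance of q(z|x)

encoder = Model(x, [z_mean, z_log_sigma])
encoder.summary()
```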

Every time series dataset could fit here. The more complex your data are, the more you will want a latent space (z) of a bigger dimension (a simple rule of thumb to get some insight). On the other hand, if you want a simpler representation of your data, a smaller z may be better. There is no fixed rule here; you have to try several values :), and see what you get on a test set (reconstruction loss).

Same for the intermediate dim!
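
For what it's worth, here is a self-contained sketch of that trial-and-error loop: train one model per candidate z_dim and compare reconstruction error on a held-out set. A plain LSTM autoencoder on a toy sine wave stands in for the VAE just to keep the example short; the epochs, candidate dimensions and noise level are arbitrary assumptions:

```python
# Sketch of selecting z_dim by test-set reconstruction error (simplified stand-in model).
import numpy as np
from tensorflow.keras import layers, Model

def make_data(n=500, timesteps=50):
    t = np.linspace(0, 4 * np.pi, timesteps)
    x = np.sin(t)[None, :, None] + 0.1 * np.random.randn(n, timesteps, 1)
    return x.astype("float32")

x_train, x_test = make_data(400), make_data(100)

def build_ae(timesteps, intermediate_dim, z_dim):
    inp = layers.Input(shape=(timesteps, 1))
    h = layers.LSTM(intermediate_dim)(inp)         # encode the sequence
    z = layers.Dense(z_dim)(h)                     # bottleneck of size z_dim
    h_dec = layers.RepeatVector(timesteps)(z)      # feed z back to the decoder
    out = layers.LSTM(intermediate_dim, return_sequences=True)(h_dec)
    out = layers.TimeDistributed(layers.Dense(1))(out)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

for z_dim in (1, 2, 3, 5, 8):
    model = build_ae(timesteps=50, intermediate_dim=10, z_dim=z_dim)
    model.fit(x_train, x_train, epochs=20, batch_size=32, verbose=0)
    test_loss = model.evaluate(x_test, x_test, verbose=0)
    print(f"z_dim={z_dim}: test reconstruction MSE = {test_loss:.4f}")
```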

Hint: try it with an artificial dataset (e.g. sin/cos waves with Gaussian noise) or http://odds.cs.stonybrook.edu/#table1 for some real-world problems.
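
For example, a toy dataset along those lines could be generated like this (the shapes, noise level and spike-style anomalies are arbitrary choices, not anything from this repository):

```python
# Toy sine/cosine series with Gaussian noise and a few injected spike anomalies.
import numpy as np

def make_toy_series(n_series=200, timesteps=100, noise=0.1, anomaly_frac=0.05):
    t = np.linspace(0, 6 * np.pi, timesteps)
    base = np.where(np.random.rand(n_series, 1) < 0.5, np.sin(t), np.cos(t))
    data = base + noise * np.random.randn(n_series, timesteps)
    labels = np.zeros(n_series, dtype=bool)
    n_anom = int(anomaly_frac * n_series)
    idx = np.random.choice(n_series, n_anom, replace=False)
    data[idx, np.random.randint(0, timesteps, n_anom)] += 3.0  # large spikes
    labels[idx] = True
    return data[..., None].astype("float32"), labels  # (n_series, timesteps, 1)

x, y = make_toy_series()
print(x.shape, y.sum(), "anomalous series")
```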