idearc / Transformer_Time_Series

Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019)

Transformer_Time_Series

DISCLAIMER: THIS IS NOT THE PAPER'S CODE. IT DOES NOT IMPLEMENT SPARSE ATTENTION, AND TRAINING USES TEACHER FORCING. This only attempts to replicate the simple example without sparsity from Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019).
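Teacher forcing here means the model is always fed the ground-truth history during training, never its own previous predictions. A minimal sketch (assumed for illustration, not this repository's code) of building one-step-ahead input/target pairs for such training:

```python
import numpy as np

# Hypothetical sketch: teacher-forced supervision for one-step-ahead
# forecasting. Each input window is ground-truth history x[t-context..t-1],
# and the target is x[t]; the model's own predictions are never fed back.
def teacher_forcing_pairs(series, context=24):
    X = np.stack([series[i:i + context] for i in range(len(series) - context)])
    y = series[context:]
    return X, y

t = np.arange(200)
series = np.sin(2 * np.pi * t / 24)          # toy periodic signal
X, y = teacher_forcing_pairs(series)          # X: (176, 24), y: (176,)
```

At inference time, by contrast, the model would have to roll forward on its own outputs, which is why teacher-forced training alone can overstate multi-step accuracy.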

This implementation is able to match the paper's results on the synthetic dataset, as shown in the R_p table below.

The synthetic dataset was constructed as shown below.
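The exact construction is given in the figure and in the paper; as an illustration only, here is a hypothetical stand-in: a periodic sinusoid with a per-series random amplitude plus Gaussian noise.

```python
import numpy as np

# Illustrative stand-in for a synthetic forecasting dataset (assumed,
# not the paper's exact recipe): one sinusoid per series with a random
# amplitude, corrupted by Gaussian noise.
rng = np.random.default_rng(0)

def make_series(length=192, period=24, noise=0.1):
    amplitude = rng.uniform(1.0, 5.0)     # random amplitude per series
    t = np.arange(length)
    return amplitude * np.sin(2 * np.pi * t / period) + rng.normal(0, noise, length)

data = np.stack([make_series() for _ in range(8)])   # (8, 192)
```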

A visualization of how the attention layers attend to the signal when predicting the last timestep, t = t0 + 24 - 1, is shown below.

Learning curve (MSE) and a validation example are shown below.


Languages

Jupyter Notebook 99.7%, Python 0.3%