Yanjun-Zhao / GCformer

When I try to use 1440 input steps to predict a single time step, it shows this error.

ooOPPPPPS opened this issue

parser.add_argument('--seq_len', type=int, default=1440, help='input sequence length for global_model')
parser.add_argument('--label_len', type=int, default=10, help='start token length') #720
parser.add_argument('--pred_len', type=int, default=1, help='prediction sequence length')

File "/home/panhaokang/Documents/GCformer-main/layers/SelfAttention_Family.py", line 67, in _prob_QK
M = Q_K_sample.max(-1)[0] - torch.div(Q_K_sample.sum(-1), L_K)
IndexError: max(): Expected reduction dim 2 to have non-zero size.

The model does not support a prediction length of 1 because of the attention structure in the decoder.
You can keep the default settings in the code and take the first value of the prediction result.
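A minimal sketch of that workaround, assuming an Informer-style output of shape [batch, pred_len, channels]; the tensor shapes and names here are illustrative stand-ins, not taken from the repo:

```python
# Hypothetical workaround: since pred_len=1 breaks the ProbSparse attention
# in the decoder, predict a longer horizon and keep only the first step.
import torch

pred_len = 24  # any horizon long enough for the attention sampling to work

# Stand-in for the model's forecast output: [batch, pred_len, channels]
outputs = torch.randn(32, pred_len, 7)

# Keep only the first predicted time step as the 1-step forecast.
one_step = outputs[:, :1, :]
print(one_step.shape)  # torch.Size([32, 1, 7])
```

This leaves the decoder untouched; only the evaluation slice changes.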

Dear Owner:

What if I want --features to be S and set
parser.add_argument('--enc_in', type=int, default=1, help='encoder input size')
parser.add_argument('--enc_raw', type=int, default=1, help='encoder input size')
parser.add_argument('--dec_in', type=int, default=1, help='decoder input size')

It produces this error:

return x[:, :, channel_sample], y[:, :, channel_sample]
IndexError: index 332 is out of bounds for dimension 0 with size 1

Is that also caused by the attention structure?

I used the same settings with FEDformer, Autoformer, and Informer, and all of them produce correct output.

Best,

Yeah, maybe you can try to modify the decoder part of the code to predict a univariate time series.
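For the second error, a minimal sketch of what appears to go wrong; this is an assumption based on the traceback, not a confirmed reading of the repo. The channel indices seem to be sampled for a multivariate dataset, so with --features S the single input channel cannot satisfy an index like 332. Clamping the sampling range to the actual channel count avoids the crash:

```python
import torch

# Univariate input as produced by --features S: only channel index 0 exists.
x = torch.randn(32, 96, 1)
y = torch.randn(32, 96, 1)

# A channel index sampled for a multivariate dataset reproduces the crash.
channel_sample = torch.tensor([332])
try:
    _ = x[:, :, channel_sample]
except IndexError as err:
    print(type(err).__name__)  # IndexError

# Defensive fix: sample only from channels that actually exist.
num_channels = x.shape[-1]
safe_sample = torch.randint(0, num_channels, (num_channels,))
x_s, y_s = x[:, :, safe_sample], y[:, :, safe_sample]
print(x_s.shape)  # torch.Size([32, 96, 1])
```

With a single channel the "sampling" degenerates to selecting channel 0, which matches the behavior of the other Informer-family models that work out of the box here.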