Unfolding sometimes results in concatenated channels
markusMM opened this issue
asteroid/asteroid/dsp/overlap_add.py
Line 92 in c72227e
We can see how torch 1.10.2 concatenates the windows of all channels after unfold.
The expected behavior, in the code, would be to handle (batch, chans, win_size) per chunk, i.e. (batch, chans, win_size, n_chunks).
Thus, from my perspective, it has to be reshaped before handing it to the NN:
unfolded = unfolded.reshape(batch, channels, self.window_size, -1)
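A minimal sketch of the behaviour described above (shapes and the exact F.unfold call are illustrative, not copied from overlap_add.py): F.unfold flattens the channel and window dimensions together, and the reshape separates them again.

```python
import torch

batch, channels, time = 2, 3, 16
window_size, hop = 8, 4
x = torch.arange(batch * channels * time, dtype=torch.float32).reshape(
    batch, channels, time
)

# F.unfold concatenates channels and window samples along dim 1:
# output is (batch, channels * window_size, n_chunks)
unfolded = torch.nn.functional.unfold(
    x.unsqueeze(-1), kernel_size=(window_size, 1), stride=(hop, 1)
)
print(unfolded.shape)  # torch.Size([2, 24, 3])

# Reshape to separate channels from window samples:
unfolded = unfolded.reshape(batch, channels, window_size, -1)
print(unfolded.shape)  # torch.Size([2, 3, 8, 3])
```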
Thanks for the issue, that's very informative!
Did you search PyTorch's changelog to see whether they note this change? Do you think it's intended behavior, or a bug? Has it been fixed in newer versions?
So, right now there are two simple ways of unfolding a tensor.
nn.Unfold (and its functional wrapper) will always show the behaviour above on recent versions (since v0.4.1).
The built-in torch.Tensor.unfold, on the other hand, always unfolds a specified dimension and outputs size (..., n_windows, win_size).
This seems to be the better solution to avoid the reshape (note that Tensor.unfold takes (dim, size, step), and that the windowed dimension ends up last):
unfolded = frame.unfold(-1, window_size, stride)
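A short sketch of the Tensor.unfold alternative, with the same made-up shapes as above: it keeps channels separate and yields (batch, channels, n_chunks, window_size), matching the F.unfold + reshape route up to a swap of the last two dims.

```python
import torch

batch, channels, time = 2, 3, 16
window_size, hop = 8, 4
frame = torch.arange(batch * channels * time, dtype=torch.float32).reshape(
    batch, channels, time
)

# Tensor.unfold(dimension, size, step) windows the last dim directly,
# keeping channels separate: (batch, channels, n_chunks, window_size)
unfolded = frame.unfold(-1, window_size, hop)
print(unfolded.shape)  # torch.Size([2, 3, 3, 8])

# Same values as F.unfold + reshape, with the last two dims swapped:
ref = torch.nn.functional.unfold(
    frame.unsqueeze(-1), kernel_size=(window_size, 1), stride=(hop, 1)
).reshape(batch, channels, window_size, -1)
assert torch.equal(unfolded, ref.permute(0, 1, 3, 2))
```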
cheers
Thanks for the explanation @markusMM!
Could you submit a PR to fix the problem, please? 🙃 Thanks in advance!