lyn1874 / memAE

Unofficial implementation of the paper "Memorizing Normality to Detect Anomaly: Memory-augmented Deep Autoencoder (MemAE) for Unsupervised Anomaly Detection"


The problem of shuffle and len(train_batch)?

huyi1998 opened this issue · comments

commented

Traceback (most recent call last):
  File "E:/PythonProjects/memAE/Train.py", line 375, in <module>
    main()
  File "E:/PythonProjects/memAE/Train.py", line 256, in main
    shuffle=True, num_workers=args.num_workers, drop_last=True)
  File "D:\DPFS\DeepLearning\anaconda3\envs\MemAE\lib\site-packages\torch\utils\data\dataloader.py", line 213, in __init__
    sampler = RandomSampler(dataset)
  File "D:\DPFS\DeepLearning\anaconda3\envs\MemAE\lib\site-packages\torch\utils\data\sampler.py", line 94, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0

Then I set shuffle=False, and got this instead:
0it [00:03, ?it/s]
Traceback (most recent call last):
  File "E:/PythonProjects/memAE/Train.py", line 375, in <module>
    main()
  File "E:/PythonProjects/memAE/Train.py", line 333, in main
    train_writer.add_scalar("model/train-recons-loss", tr_re_loss / len(train_batch), epoch)
ZeroDivisionError: float division by zero
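Both tracebacks point to the same root cause: the training dataset is empty. With shuffle=True, RandomSampler refuses a dataset of length 0 (num_samples=0); with shuffle=False, the loader simply yields zero batches, so len(train_batch) is 0 and averaging the loss divides by zero. The usual culprit is a dataset path or config that does not point at the prepared data. Below is a minimal sketch of a sanity check, assuming the dataset is a standard map-style PyTorch dataset; the helper name make_train_loader is hypothetical and not part of this repository.

```python
import torch
from torch.utils.data import DataLoader

# Hypothetical helper: fail early with a clear message if the dataset is empty.
# Once len(dataset) > 0, both the RandomSampler ValueError (shuffle=True) and
# the later ZeroDivisionError (shuffle=False) go away.
def make_train_loader(dataset, batch_size, num_workers):
    if len(dataset) == 0:
        raise RuntimeError(
            "Training dataset is empty - check that the dataset path/arguments "
            "point at the extracted training frames."
        )
    return DataLoader(dataset, batch_size=batch_size, shuffle=True,
                      num_workers=num_workers, drop_last=True)
```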

commented

done