zhengliz / natural-adversary

Generating Natural Adversarial Examples, ICLR 2018


generate.py throwing issue

munaAchyuta opened this issue · comments

Hi zhengliz,

Thanks for sharing code.

I just followed the README and was trying to see the demo results, but couldn't, because generate.py throws the error below.

$ python generate.py --load_path ./output/1535710044
{'ninterpolations': 5, 'temp': 1, 'load_path': './output/1535710044', 'sample': False, 'seed': 1111, 'steps': 5, 'noprint': False, 'outf': './generated.txt', 'ngenerations': 10}
Loading models from./output/1535710044/models
Traceback (most recent call last):
  File "generate.py", line 135, in <module>
    main(args)
  File "generate.py", line 74, in main
    maxlen=model_args['maxlen'])
  File "/natural-adversary/text/models.py", line 654, in generate
    sample=sample)
  File "/natural-adversary/text/models.py", line 303, in generate
    embedding = self.embedding_decoder(self.start_symbols)
  File "/venv/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/venv/local/lib/python2.7/site-packages/torch/nn/modules/sparse.py", line 103, in forward
    self.scale_grad_by_freq, self.sparse
  File "/venv/local/lib/python2.7/site-packages/torch/nn/_functions/thnn/sparse.py", line 59, in forward
    output = torch.index_select(weight, 0, indices.view(-1))
TypeError: torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, torch.LongTensor), but expected (torch.cuda.FloatTensor source, int dim, torch.cuda.LongTensor index)

Could you please tell me why this error occurs?

Thanks in advance.

Using the debugger, this is the exact location of the issue:

TypeError: 'torch.index_select received an invalid combination of arguments - got (torch.cuda.Float...Tensor), but expected (torch.cuda.FloatTensor source, int dim, torch.cuda.LongTensor index)'
> /natural-adversary/text/models.py(303)generate()
-> embedding = self.embedding_decoder(self.start_symbols)
(Pdb) l
298  	            self.start_symbols = self.start_symbols.cpu()
299  	        # <sos>
300  	        self.start_symbols.data.resize_(batch_size, 1)
301  	        self.start_symbols.data.fill_(1)
302  	
303  ->	        embedding = self.embedding_decoder(self.start_symbols)
304  	        inputs = torch.cat([embedding, hidden.unsqueeze(1)], 2)
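For what it's worth, the traceback points to a CPU/GPU device mismatch: the embedding weights are a `torch.cuda.FloatTensor` (the model was loaded onto the GPU), while `self.start_symbols` is a plain CPU `torch.LongTensor` (note the `.cpu()` call at line 298 of the pdb listing). A minimal sketch of the pattern that avoids this, using a toy embedding rather than the repo's actual model (the names here are stand-ins, not the repo's exact code):

```python
import torch
import torch.nn as nn

# Toy embedding standing in for self.embedding_decoder.
embedding_decoder = nn.Embedding(num_embeddings=10, embedding_dim=4)

# Index tensor standing in for self.start_symbols (<sos> token id = 1).
start_symbols = torch.LongTensor(2, 1).fill_(1)

# The embedding lookup only works when the weights and the indices live on
# the same device, so move the indices to wherever the weights are.
if embedding_decoder.weight.is_cuda:      # hypothetical guard; mirrors the
    start_symbols = start_symbols.cuda()  # cpu()/cuda() switch in models.py

embedding = embedding_decoder(start_symbols)
print(embedding.shape)  # batch_size x 1 x embedding_dim
```

If that is indeed the cause here, making sure `self.start_symbols` ends up on the same device as the decoder's embedding weights (e.g. calling `.cuda()` on it when the model is on the GPU, instead of `.cpu()`) should resolve the TypeError.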