donglixp / coarse2fine

error in torch.stack(outputs)

prezaei85 opened this issue

Here is what I get when I run "python train.py" for WikiSQL. Any idea?

Traceback (most recent call last):
  File "train.py", line 205, in <module>
    main()
  File "train.py", line 201, in main
    train_model(model, train, valid, fields, optim)
  File "train.py", line 106, in train_model
    train_stats = trainer.train(epoch, report_func)
  File "/home/ubuntu/seq2sql/coarse2fine/wikisql/table/Trainer.py", line 143, in train
    loss, batch_stats = self.forward(batch, self.train_loss)
  File "/home/ubuntu/seq2sql/coarse2fine/wikisql/table/Trainer.py", line 107, in forward
    q, q_len, batch.ent, tbl, tbl_len, batch.tbl_split, batch.tbl_mask, cond_op, cond_op_len, batch.cond_col, batch.cond_span_l, batch.cond_span_r, batch.lay)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36_copy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/seq2sql/coarse2fine/wikisql/table/Models.py", line 459, in forward
    cond_context, _, _ = self.cond_decoder(emb, q_all, q_state)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36_copy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/seq2sql/coarse2fine/wikisql/table/Models.py", line 239, in forward
    outputs = torch.stack(outputs)
TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not Tensor
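
The TypeError itself is easy to reproduce in isolation: torch.stack expects a sequence (list or tuple) of Tensors, and passing a single Tensor triggers exactly this message. A standalone sketch, with an arbitrary tensor shape chosen purely for illustration:

import torch

t = torch.zeros(4, 2, 8)
try:
    torch.stack(t)                    # a single Tensor is not a sequence of Tensors
except TypeError as e:
    print(e)                          # the message seen in the traceback above
print(torch.stack([t, t]).shape)      # works: torch.Size([2, 4, 2, 8])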

The error happens in the following function: outputs is itself a tensor, not a list of tensors. I am using PyTorch 0.4, but I can't see how that would be the cause, since everything else matches the required versions.

def forward(self, emb, context, state):
    """
    Forward through the decoder.
    Args:
        emb (LongTensor): a sequence of input token tensors
                          of size (len x batch x nfeats).
        context (FloatTensor): output (tensor sequence) from the encoder
                               RNN of size (src_len x batch x hidden_size).
        state (RNNDecoderState): hidden state from the encoder RNN for
                                 initializing the decoder.
    Returns:
        outputs (FloatTensor): a Tensor sequence of output from the decoder
                               of shape (len x batch x hidden_size).
        state (RNNDecoderState): final hidden state from the decoder.
        attns (dict of (str, FloatTensor)): a dictionary of different
                            types of attention Tensors from the decoder
                            of shape (src_len x batch).
    """
    # Args Check
    assert isinstance(state, RNNDecoderState)
    # END Args Check

    # Run the forward pass of the RNN.
    hidden, outputs, attns = self._run_forward_pass(emb, context, state)

    # Update the state with the result.
    state.update_state(hidden)

    # Concatenate the sequence of tensors along a new dimension.
    outputs = torch.stack(outputs)
    for k in attns:
        attns[k] = torch.stack(attns[k])

    return outputs, state, attns
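
One way to make the function tolerant of both conventions could be a small guard around the two stack calls. This is only a sketch: the helper name maybe_stack is made up for illustration, and it assumes the only difference under PyTorch 0.4 is that _run_forward_pass already returns stacked Tensors instead of lists.

import torch

def maybe_stack(x):
    # Stack a list/tuple of Tensors along a new first dimension;
    # pass an already-stacked Tensor through unchanged.
    return x if isinstance(x, torch.Tensor) else torch.stack(x)

Inside forward(), the stacking step would then become maybe_stack(outputs) and maybe_stack(attns[k]) instead of the bare torch.stack calls.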

I installed PyTorch 0.2.0.post3 and it works now!

I will leave the issue open in case there is interest in making this work with PyTorch 0.4.0.

Requirements

Install Python dependencies

pip install -r requirements.txt

When I type "pip install -r requirements.txt", it says "Could not find a version that satisfies the requirement torch==0.2.0.post3".

use "conda install pytorch==0.2.0" works !!