Add rnn args check by zou3519 · Pull Request #3925 · pytorch/pytorch · GitHub

Conversation

@zou3519
Contributor

@zou3519 zou3519 commented Nov 28, 2017

Fixes #3851, #3259

Added a high-level check for arguments to RNNBase (these were moved from arg checks in cudnn/rnn.py)

Test Plan

python test/test_nn.py

Contributor

@apaszke apaszke left a comment


Looks good, but please check everything for LSTM.

        mini_batch, self.hidden_size)
    if self.mode == 'LSTM':
        hidden = hidden[0]
    if tuple(hidden.size()) != expected_hidden_size:
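The hunk above only checks `hidden[0]` for LSTMs, which is what this review asks to fix. A minimal runnable sketch of the kind of check being discussed (`check_hidden_size` is an illustrative helper, not the PR's actual code):

```python
import torch
import torch.nn as nn

# Illustrative sketch, not the PR's actual code: an RNN's initial hidden
# state must have shape (num_layers * num_directions, mini_batch,
# hidden_size); for an LSTM, BOTH members of the (h_0, c_0) tuple need
# to be checked, not just hidden[0].
def check_hidden_size(hidden, expected_hidden_size):
    if tuple(hidden.size()) != expected_hidden_size:
        raise RuntimeError('Expected hidden size {}, got {}'.format(
            expected_hidden_size, tuple(hidden.size())))

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1)
expected = (1, 4, 16)              # (num_layers, mini_batch, hidden_size)
h0 = torch.zeros(1, 4, 16)
c0 = torch.zeros(1, 4, 16)
for state in (h0, c0):             # LSTM: check both hidden states
    check_hidden_size(state, expected)
out, _ = lstm(torch.zeros(5, 4, 8), (h0, c0))   # passes the shape check
```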

Contributor

@apaszke apaszke left a comment


Didn't this PR have a test for these cases as well? It would be nice to add it back before merging.

@soumith soumith merged commit 28890b2 into pytorch:master Dec 13, 2017
@zou3519 zou3519 deleted the rnn-args branch January 3, 2018 19:59
@soumith soumith added the 0.3.1 label Feb 4, 2018
soumith pushed a commit that referenced this pull request Feb 7, 2018
* Add rnn args check

* Check both hidden sizes for LSTM

* RNN args check test
@Shandilya21

@zou3519 I have a similar problem:

RuntimeError: Expected hidden[0] size (1, 64, 256), got (64, 256)

I tried different approaches but could not resolve it. Here is the code snippet:
    def forward(self, input_src, input_trg, courteous_template, ctx_mask=None, trg_mask=None):
        src_emb = self.src_embedding(input_src)
        trg_emb = self.trg_embedding(input_trg)
        temp_emb = self.temp_embedding(courteous_template)

        self.h0_encoder, self.c0_encoder = self.get_state(input_src)
        self.h1_encoder, self.c1_encoder = self.get_courteous(courteous_template)

        src_h, (src_h_t, src_c_t) = self.encoder(
            src_emb, (self.h0_encoder, self.c0_encoder)
        )

        temp_h, (tmp_h_t, tmp_c_t) = self.temp_encoder(
            temp_emb, (self.h1_encoder, self.c1_encoder)
        )

        out = torch.cat((src_h, temp_h), 1)
        out = out.reshape(out.size(1), out.size(0), out.size(2))

        h_t = out[-1]
        h = h_t.view(h_t.size(0), h_t.size(1))

        trg_h, (_, _) = self.decoder(
            trg_emb, h_t.view(h_t.size(0), h_t.size(1))
        )

Please help me rectify this issue.
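The error in the snippet above comes from passing a 2-D tensor as the decoder's initial hidden state. A minimal sketch of the shape requirement (the `nn.LSTM` decoder, batch size 64, and hidden size 256 are assumptions inferred from the error message, not code from this thread):

```python
import torch
import torch.nn as nn

# Assumed from the error message: the decoder is an nn.LSTM with
# hidden_size=256 and a batch of 64.
batch, hidden_size = 64, 256
h_t = torch.zeros(batch, hidden_size)   # 2-D (64, 256): what the snippet passes

# An LSTM expects a (h_0, c_0) tuple, each shaped
# (num_layers * num_directions, batch, hidden_size) -> here (1, 64, 256).
h_0 = h_t.unsqueeze(0)                  # add the leading num_layers dimension
c_0 = torch.zeros_like(h_0)

decoder = nn.LSTM(input_size=256, hidden_size=256, num_layers=1)
trg_emb = torch.zeros(10, batch, 256)   # (seq_len, batch, input_size), assumed
trg_h, _ = decoder(trg_emb, (h_0, c_0))  # no shape mismatch now
```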

@zou3519
Contributor Author

zou3519 commented Jun 10, 2019

@Shandilya21 Please ask a question on the forums or open a (new) issue if you think there is a bug.



Development

Successfully merging this pull request may close these issues.

Improve error message for RuntimeError: inconsistent tensor size in RNN

4 participants