Allow target.requires_grad in l1_loss and mse_loss by szagoruyko · Pull Request #3876 · pytorch/pytorch · GitHub

Conversation

@szagoruyko (Contributor)

This bug cost me a lot of time in several projects: 0.2 would silently accept a target with requires_grad=True and backpropagate zeros through it in l1_loss and mse_loss. Master already fixes this by raising an exception; this PR instead allows target.requires_grad by defining the loss functions explicitly.
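For context, the approach is roughly: when the target requires gradients, compute the loss from plain differentiable tensor operations so autograd can flow into the target as well; otherwise dispatch to the optimized backend kernel as before. A minimal sketch of such a helper, with the backend callable treated as a placeholder rather than the exact code in this PR:

```python
def _pointwise_loss(lambd, lambd_optimized, input, target,
                    size_average=True, reduce=True):
    # Explicit elementwise formulation: built only from differentiable
    # tensor ops, so gradients can also propagate into `target`.
    if target.requires_grad:
        d = lambd(input, target)
        if not reduce:
            return d
        return d.mean() if size_average else d.sum()
    # Fast path: the fused backend kernel (placeholder callable, not the
    # actual internal symbol), which does not support target.requires_grad.
    return lambd_optimized(input, target, size_average, reduce)
```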

@apaszke (Contributor) left a comment


Looks good, but please fix the typo

return loss.sum()


def _poinwise_loss(lambd, lambd_optimized, input, target, size_average=True, reduce=True):
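The typo being flagged is presumably `_poinwise_loss` → `_pointwise_loss`. Once corrected, the public losses would wrap the helper along these lines (a sketch; `backend_mse_loss` and `backend_l1_loss` are hypothetical stand-ins for the optimized backend functions):

```python
# backend_mse_loss / backend_l1_loss: hypothetical optimized backend callables.

def mse_loss(input, target, size_average=True, reduce=True):
    # Squared error, computed elementwise and reduced by the shared helper.
    return _pointwise_loss(lambda a, b: (a - b) ** 2, backend_mse_loss,
                           input, target, size_average, reduce)


def l1_loss(input, target, size_average=True, reduce=True):
    # Absolute error, computed elementwise and reduced by the shared helper.
    return _pointwise_loss(lambda a, b: (a - b).abs(), backend_l1_loss,
                           input, target, size_average, reduce)
```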


@apaszke (Contributor) commented Nov 26, 2017

linter still has a few complaints

@apaszke (Contributor) commented Nov 27, 2017

@pytorchbot add to whitelist

@ezyang merged commit 11c9bd6 into pytorch:master on Nov 27, 2017
@soumith added the 0.3.1 label on Feb 4, 2018