Fix CUDA Multinomial checks by ssnl · Pull Request #4009 · pytorch/pytorch

Conversation

@ssnl (Collaborator) commented Dec 4, 2017

Fixes #3475.
Also adds a probability non-negativity check on the CPU path.
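
For illustration, the CPU-side part of the change amounts to validating the probability vector before sampling. The snippet below is a rough standalone sketch of that kind of check, not the actual TH code; the function name, container type, and error message are assumptions.

#include <stdexcept>
#include <vector>

// Hypothetical helper (not the PR's implementation): reject distributions
// that contain a negative probability before any sampling is attempted.
void checkProbabilitiesNonNegative(const std::vector<double>& probs) {
  for (double p : probs) {
    if (p < 0.0) {
      throw std::invalid_argument("multinomial: probability entry < 0");
    }
  }
}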

sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd<T, T>(), ScalarConvert<int, T>::to(0));
sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd<T, T>(), zero);
if (threadIdx.x == 0) {
  assert(THCNumerics<T>::gt(sum, zero));
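
The hunk above reduces each thread's partial sum of the distribution across the block and then has thread 0 assert that the total is strictly positive. A minimal standalone sketch of that pattern follows; it is illustrative only (the kernel name and fixed block size are assumptions, not the THC code).

#include <cassert>

__global__ void checkDistributionSum(const float* probs, int n) {
  __shared__ float smem[256];            // assumes blockDim.x == 256
  float local = 0.0f;
  // Each thread accumulates a strided slice of the distribution.
  for (int i = threadIdx.x; i < n; i += blockDim.x) {
    local += probs[i];
  }
  smem[threadIdx.x] = local;
  __syncthreads();
  // Tree reduction in shared memory (power-of-two block size assumed).
  for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
    if (threadIdx.x < stride) {
      smem[threadIdx.x] += smem[threadIdx.x + stride];
    }
    __syncthreads();
  }
  if (threadIdx.x == 0) {
    // Device-side assert, analogous to the THCNumerics<T>::gt(sum, zero)
    // check above: trap if the distribution does not sum to a positive value.
    assert(smem[0] > 0.0f);
  }
}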


if (threadIdx.x == 0) {
  // Make sure the sum of our distribution didn't overflow
  assert(!isinf(sum));
  assert(THCNumerics<AccT>::gt(sum, accZero));
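
The checks in this hunk accumulate in the wider accumulator type (AccT) and also reject an overflowed sum, since a total that overflows to infinity would still pass a plain greater-than-zero test. A hedged illustration of that idea follows (the kernel name and half-precision input are assumptions, not the PR's code).

#include <cassert>
#include <cuda_fp16.h>
#include <math.h>

__global__ void validateHalfDistribution(const __half* probs, int n) {
  if (threadIdx.x == 0 && blockIdx.x == 0) {
    float sum = 0.0f;                    // accumulate in a wider type than the storage type
    for (int i = 0; i < n; ++i) {
      sum += __half2float(probs[i]);
    }
    assert(!isinf(sum));                 // the sum must not have overflowed
    assert(sum > 0.0f);                  // and must be strictly positive
  }
}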


@killeent (Contributor) left a comment


LGTM.

@soumith merged commit 390b7af into pytorch:master Dec 18, 2017
@ssnl deleted the cuda_multinomial branch December 18, 2017 16:26
@soumith added the 0.3.1 label Feb 4, 2018
soumith pushed a commit that referenced this pull request Feb 7, 2018


Development

Successfully merging this pull request may close these issues.

CUDA tensor allows negative values in torch.multinomial

4 participants