Labels: high priority, module: complex, module: tests, tracker, triaged
Description
OpInfos are Python classes containing structured metadata, test directives, and sample inputs for PyTorch's operators. This data is used to automatically generate a variety of tests, including tests that verify operators work correctly with systems like autograd, forward-mode autograd, TorchScript, FX, and NNC.
For more details on OpInfos and how to add them, see the GitHub wiki article here. If you have more questions, ping @mruberry directly (for example, in a comment responding to this issue).
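For orientation, here is a minimal sketch of what a new entry roughly looks like. It assumes the internal helpers in torch/testing/_internal/common_methods_invocations.py (OpInfo, SampleInput) behave as in recent PyTorch versions; the op chosen, the sample-inputs function name, and the exact constructor arguments are illustrative, not the canonical entry for any item on this list.

```python
# Illustrative only: a pared-down OpInfo entry for nn.functional.cosine_similarity.
# Real entries live in torch/testing/_internal/common_methods_invocations.py and
# carry more metadata; constructor arguments vary across PyTorch versions.
import torch
from torch.testing._internal.common_methods_invocations import OpInfo, SampleInput

def sample_inputs_cosine_similarity(op_info, device, dtype, requires_grad, **kwargs):
    # Each SampleInput bundles an input tensor with the extra args/kwargs the op needs.
    def make(shape):
        return torch.randn(shape, device=device, dtype=dtype, requires_grad=requires_grad)

    return [
        SampleInput(make((2, 3)), args=(make((2, 3)),), kwargs={'dim': 1}),
        SampleInput(make((5,)), args=(make((5,)),), kwargs={'dim': 0}),
    ]

cosine_similarity_opinfo = OpInfo(
    'nn.functional.cosine_similarity',   # resolved to torch.nn.functional.cosine_similarity
    dtypes=(torch.float32, torch.float64),
    sample_inputs_func=sample_inputs_cosine_similarity,
    supports_out=False,
)
```

Once an entry like this is appended to the op_db list, suites such as test_ops.py pick it up automatically via the @ops decorator, so autograd, JIT, and out= behavior get tested without any per-op test code.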
Priority Functions to OpInfo
- nn.functional.batch_norm (OpInfo for nn.functional.batch_norm #63218)
- nn.functional.layer_norm (OpInfo for nn.functional.layer_norm #63276)
- nn.functional.interpolate (OpInfo for nn.functional.interpolate #61956)
- nn.functional.cross_entropy (OpInfo for nn.functional.cross_entropy #63547)
- nn.functional.conv2d (OpInfo: nn.functional.conv2d #63517)
- nn.functional.dropout (add OpInfo for torch.nn.functional.dropout #62315)
- nn.functional.linear (nn.functional.linear OpInfo #61971)
- nn.functional.max_pool2d (Add Maxpool to shape analysis / Opinfo #63530)
- nn.functional.nll_loss (add OpInfo for torch.nn.functional.nll_loss #63854)
- nn.functional.cosine_similarity (Add OpInfo for nn.functional.cosine_similarity #62959)
- nn.functional.embedding (Opinfo: embedding, add shape analysis #63959)
- cat (see tests in test_autograd.py) (Support torch.concat alias, add cat OpInfo & remove OpInfo test_out skips {cat, stack, hstack, vstack, dstack} #62560)
- block_diag (see tests in test_autograd.py) (Remove run_functional_checks from test_autograd and create necessary OpInfos #64993)
- broadcast_tensors (see tests in test_autograd.py) (Remove run_functional_checks from test_autograd and create necessary OpInfos #64993)
- lobpcg (see tests in test_autograd.py)
- torch.linalg.tensor_solve (see tests in test_linalg.py)
- igamma (see tests in test_autograd.py)
- as_strided (see tests in test_autograd.py)
- unbind (see tests in test_autograd.py)
- pdist (see tests in test_autograd.py)
- argsort (add OpInfo for torch.argsort #65454)
- repeat_interleave (add OpInfo for torch.repeat_interleave #65455)
- nn.functional.sigmoid (note that an OpInfo for torch.sigmoid exists; see the alias sketch after this list)
- nn.functional.tanh (note that an OpInfo for torch.tanh exists; the same alias approach applies)
- torch.nn.functional.conv1d (OpInfo for nn.functional.conv1d #67747)
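For ops like nn.functional.sigmoid and nn.functional.tanh that are thin aliases of operators that already have OpInfos, a brand-new entry may not be needed. The sketch below is again illustrative; it assumes the aliases argument of OpInfo resolves each string against the torch namespace, as it does in recent versions, and the sample-inputs function is a hypothetical stand-in.

```python
# Illustrative only: registering nn.functional.sigmoid as an alias on the existing
# torch.sigmoid OpInfo so the generated tests exercise both callables.
import torch
from torch.testing._internal.common_methods_invocations import OpInfo, SampleInput

def sample_inputs_sigmoid(op_info, device, dtype, requires_grad, **kwargs):
    t = torch.randn(3, 4, device=device, dtype=dtype, requires_grad=requires_grad)
    return [SampleInput(t)]

sigmoid_opinfo = OpInfo(
    'sigmoid',
    aliases=('nn.functional.sigmoid',),   # each alias is resolved to torch.nn.functional.sigmoid
    dtypes=(torch.float32, torch.float64),
    sample_inputs_func=sample_inputs_sigmoid,
)
```

In practice this means editing the existing torch.sigmoid entry rather than adding a new one, which keeps the alias and the original op covered by the same sample inputs and skips.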
OpInfo Backlog
- binary_cross_entropy_with_logits
- l1_loss (fix torch.nn.functional.l1_loss for complex inputs #65681)
- binary_cross_entropy
- elu
- upsample
- fold
- affine_grid
- max_pool1d ([OpInfo Hackathon] Parcel (1/2): OpInfo for max_pool1d, max_pool3d and max_poolNd_with_indices #67005)
- torch
- threshold
- smooth_l1_loss
- pairwise_distance (add OpInfo for torch.nn.functional.pairwise_distance #65460)
- logsigmoid
- adaptive_max_pool2d
- pixel_shuffle (add OpInfo for torch.nn.pixel_shuffle #65467)
- avg_pool3d (Opinfos for avg_pooling #64214)
- bilinear
- gumbel_softmax
- max_unpool2d (Improved error messages for max_unpool{}d operators #67328)
- kl_div (add OpInfo for torch.nn.functional.kl_div #65469)
- ctc_loss
- layer_norm (OpInfo for nn.functional.layer_norm #63276)
- conv3d
- max_unpool3d (Improved error messages for max_unpool{}d operators #67328)
- selu
- glu
- hardsigmoid
- upsample_bilinear
- max_pool3d ([OpInfo Hackathon] Parcel (1/2): OpInfo for max_pool1d, max_pool3d and max_poolNd_with_indices #67005)
- adaptive_avg_pool3d (Opinfos for avg_pooling #64214)
- instance_norm
- embedding_bag
- upsample_nearest
- avg_pool1d (Opinfos for avg_pooling #64214)
- prelu
- celu
- dropout2d (OpInfo for nn.functional.dropout2d, revise sample inputs for dropout #67891)
- hinge_embedding_loss
- softsign
- max_unpool1d (Improved error messages for max_unpool{}d operators #67328)
- silu
- softshrink
- leaky_relu_
- softmin
- channel_shuffle
- multilabel_margin_loss
- dropout3d
- multi_margin_loss
- lp_pool2d
- conv_transpose1d
- triplet_margin_loss
- tanhshrink
- adaptive_max_pool1d
- cosine_embedding_loss
- multi_head_attention_forward
- max_pool1d_with_indices ([OpInfo Hackathon] Parcel (1/2): OpInfo for max_pool1d, max_pool3d and max_poolNd_with_indices #67005)
- poisson_nll_loss
- margin_ranking_loss
- soft_margin_loss
- adaptive_max_pool3d
- group_norm
- local_response_norm
- multilabel_soft_margin_loss
- relu_
- alpha_dropout
- nn.functional.alpha_dropout (OpInfo for nn.functional.alpha_dropout #67823)
- feature_alpha_dropout
- lp_pool1d
- adaptive_max_pool1d_with_indices
- adaptive_max_pool2d_with_indices
- adaptive_max_pool3d_with_indices
- fractional_max_pool2d
- fractional_max_pool2d_with_indices
- fractional_max_pool3d
- fractional_max_pool3d_with_indices
- max_pool2d_with_indices ([OpInfo Hackathon] Parcel (1/2): OpInfo for max_pool1d, max_pool3d and max_poolNd_with_indices #67005)
- max_pool3d_with_indices ([OpInfo Hackathon] Parcel (1/2): OpInfo for max_pool1d, max_pool3d and max_poolNd_with_indices #67005)
- handle_torch_function
- has_torch_function
- adaptive_avg_pool1d (Opinfos for avg_pooling #64214)
- pdist
- rrelu_
- elu_
- hardtanh_
- triplet_margin_with_distance_loss
- selu_
- pixel_unshuffle (add OpInfo for torch.nn.pixel_unshuffle #65468)
- conv_transpose3d
- gaussian_nll_loss
- celu_
- huber_loss
- mish
- threshold_
- logical_and (Adding OpInfo for logical_or, logical_and, logical_xor #67178)
- logical_or (Adding OpInfo for logical_or, logical_and, logical_xor #67178)
- logical_xor (Adding OpInfo for logical_or, logical_and, logical_xor #67178)
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @dylanbespalko @mruberry @lezcano @VitalyFedyunin @walterddr @mattip @kshitij12345