Labels: good first issue, module: inductor, oncall: pt2, triaged (this issue has been looked at by a team member and triaged into an appropriate module)
Description
🐛 Describe the bug
Description: when `slice_scatter` is given an internal float32 tensor as the destination (`y` in this case) and an int64 source, eager mode passes the check and returns `tensor([0.])`, while inductor throws an assertion error.
Device: reproduces on both the Triton and CPP backends.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config

config.fallback_random = True
torch.set_grad_enabled(False)

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, x):
        y = torch.Tensor([0])  # y dtype: torch.float32
        x = torch.slice_scatter(y, x, 0)
        return x

model = Model()

x = torch.Tensor([0]).to(torch.int64)
inputs = [x]

def run_test(model, inputs, backend):
    model.eval()
    torch.manual_seed(0)
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    try:
        c_output = model(*inputs)
        print(c_output)
    except Exception as e:
        print(e)

run_test(model, inputs, 'eager')
run_test(model, inputs, 'inductor')
```

Error logs
```
tensor([0.])
LoweringException: AssertionError:
  target: aten.slice_scatter.default
  args[0]: TensorBox(StorageBox(
    Pointwise(
      'cpu',
      torch.float32,
      def inner_fn(index):
          _ = index
          tmp0 = ops.constant(0.0, torch.float32)
          return tmp0
      ,
      ranges=[1],
      origin_node=full_default,
      origins=OrderedSet([full_default])
    )
  ))
  args[1]: TensorBox(StorageBox(
    InputBuffer(name='arg0_1', layout=FixedLayout('cpu', torch.int64, size=[1], stride=[1]))
  ))
```
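A possible workaround, assuming the root cause is the dtype mismatch between the float32 destination and the int64 source tensor (the issue itself does not confirm this), is to cast the source to the destination's dtype before calling `slice_scatter`, so both eager and inductor see matching dtypes:

```python
import torch

# Hypothetical workaround sketch: cast the int64 source to match the
# float32 destination before slice_scatter, sidestepping the mismatch
# that appears to trip inductor's lowering.
y = torch.tensor([0.0])                    # destination: float32
x = torch.tensor([0], dtype=torch.int64)   # source: int64

# Explicit cast so both tensors share a dtype.
out = torch.slice_scatter(y, x.to(y.dtype), 0)
print(out)  # tensor([0.])
```

With the cast in place, the same call compiles cleanly in eager mode; whether inductor should instead perform this promotion itself (matching eager's implicit behavior) is the question the bug raises.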
Versions
PyTorch nightly 20250225
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @aakhundov