[MPS] Add scatter_reduce.two #141948
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/141948
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure
As of commit 1ff4994 with merge base 78543e6. NEW FAILURE - the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Attention! native_functions.yaml was changed. If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs, one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.
Caused by:
Which is just a flavor of out-of-the-box scatter-reduce.
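To make the "flavor of scatter-reduce" claim concrete, here is a minimal Python sketch of what the op enables on MPS, assuming a PyTorch build that includes this PR (the tensor values, shapes, and reduction choice are illustrative, not from the PR):

```python
import torch

# scatter_reduce.two: reduce `src` values into `out` at positions given by `index`.
src = torch.tensor([1.0, 2.0, 3.0, 4.0], device="mps")
index = torch.tensor([0, 1, 0, 1], device="mps")
out = torch.zeros(2, device="mps")

# "amax" keeps the running maximum; include_self=True folds in the initial zeros.
result = out.scatter_reduce(0, index, src, reduce="amax", include_self=True)
print(result)  # tensor([3., 4.], device='mps:0')
```

Per the PR description, such calls are redispatched to the existing MPS scatter implementation rather than going through a new kernel.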
Force-pushed from b8c9b2d to c63b817 (Compare)
@pytorchbot merge -f "MPS tests + Lint are green"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki.
Questions? Feedback? Please reach out to the PyTorch DevX Team
Which has been requested 20+ times in pytorch#77764, and is just a flavor of the out-of-the-box scatter-reduce, so all this op does is redispatch to the existing implementation.

Unsupported dtype/reduction type combinations:
- min/max for int64
- min/max for int32 on macOS 14 or older

The following Swift code demonstrates the problem with the scatterAlongAxis MPS call:

```swift
import Metal
import MetalPerformanceShadersGraph

// Scatter `upd_buf` into `inp_buf` along axis 0 with min-reduction via MPSGraph.
func scatterMPS(device: MTLDevice, inp_buf: MTLBuffer, upd_buf: MTLBuffer, idx_buf: MTLBuffer,
                out_buf: MTLBuffer, inp_elem: Int, upd_elem: Int) {
    let graph = MPSGraph()
    let inputPlaceholder = graph.placeholder(shape: [inp_elem as NSNumber], dataType: .int64, name: nil)
    let updatesPlaceholder = graph.placeholder(shape: [upd_elem as NSNumber], dataType: .int64, name: nil)
    let indicesPlaceholder = graph.placeholder(shape: [upd_elem as NSNumber], dataType: .int64, name: nil)
    let outNode = graph.scatterAlongAxis(0, data: inputPlaceholder, updates: updatesPlaceholder,
                                         indices: indicesPlaceholder, mode: .min, name: nil)
    let mpsInputBuffer = MPSGraphTensorData(inp_buf, shape: [inp_elem as NSNumber], dataType: .int64)
    let mpsUpdatesBuffer = MPSGraphTensorData(upd_buf, shape: [upd_elem as NSNumber], dataType: .int64)
    let mpsIndicesBuffer = MPSGraphTensorData(idx_buf, shape: [upd_elem as NSNumber], dataType: .int64)
    let mpsOutputBuffer = MPSGraphTensorData(out_buf, shape: [inp_elem as NSNumber], dataType: .int64)
    guard let queue = device.makeCommandQueue() else { fatalError("Can't make queue") }
    graph.run(with: queue,
              feeds: [inputPlaceholder: mpsInputBuffer,
                      updatesPlaceholder: mpsUpdatesBuffer,
                      indicesPlaceholder: mpsIndicesBuffer],
              targetOperations: nil,
              resultsDictionary: [outNode: mpsOutputBuffer])
}

// Allocate a shared-memory buffer and fill it with the given int64 values.
func makeBufferWithValues(device: MTLDevice, values: [Int64]) -> MTLBuffer {
    guard let buf = device.makeBuffer(length: values.count * MemoryLayout<Int64>.size,
                                      options: [.storageModeShared]) else { fatalError("Can't alloc") }
    let buf_data = buf.contents().assumingMemoryBound(to: Int64.self)
    for i in 0..<values.count {
        buf_data[i] = values[i]
    }
    return buf
}

guard let device = MTLCopyAllDevices().first else { fatalError("No Metal device found") }
print("Using device \(device.name)")

let inp_elem = 4
let upd_elem = 4
let inp_buf = makeBufferWithValues(device: device, values: [1, 2, 3, 4])
// Every update is larger than the corresponding input, so a correct min-scatter
// must leave the inputs unchanged.
let upd_buf = makeBufferWithValues(device: device, values: [Int64.max - 1, Int64.max - 2, Int64.max >> 16, 11])
let idx_buf = makeBufferWithValues(device: device, values: [0, 1, 2, 3])
guard let out_buf = device.makeBuffer(length: inp_elem * MemoryLayout<Int64>.size,
                                      options: [.storageModeShared]) else { fatalError("Can't alloc") }
scatterMPS(device: device, inp_buf: inp_buf, upd_buf: upd_buf, idx_buf: idx_buf,
           out_buf: out_buf, inp_elem: inp_elem, upd_elem: upd_elem)
let obuf_data = out_buf.contents().assumingMemoryBound(to: Int64.self)
for i in 0..<inp_elem {
    print("out_buf[\(i)] = \(obuf_data[i])")
}
```

which prints `4294967294, 4294967293, 4294967295, 4` instead of the expected `1, 2, 3, 4`, whereas `torch.tensor([[1, 9223372036854775806], [2, 9223372036854775805], [3, 140737488355327], [4, 11]], dtype=torch.int64, device='mps').max(1)` yields the expected results.

Pull Request resolved: pytorch#141948
Approved by: https://github.com/manuelcandales
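A side note on those numbers (my own reading, not stated in the PR): the three wrong outputs are exactly the 64-bit update values reduced modulo 2^32, which is consistent with the MPSGraph int64 min/max scatter path losing the upper 32 bits. A minimal Python check of that observation:

```python
# The updates fed to the min-scatter repro above, as plain Python ints
# (Int64.max == 2**63 - 1).
updates = [2**63 - 2, 2**63 - 3, (2**63 - 1) >> 16, 11]

# Reducing each modulo 2**32 reproduces the three wrong outputs; the fourth
# update, 11, fits in 32 bits, and min(4, 11) = 4 is the correct result there.
print([u % 2**32 for u in updates])  # [4294967294, 4294967293, 4294967295, 11]
```

This would explain why the min/max reductions for int64 (and for int32 on older macOS) had to be listed as unsupported.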
Results running final rc for 2.6 on macOS 15.1.1:
Final rc running on macOS 14.4: