Extend _TestONNXRuntime to reuse all tests for new model format · Pull Request #112289 · pytorch/pytorch

Conversation

Collaborator

@thiagocrepaldi thiagocrepaldi commented Oct 27, 2023

Stack from ghstack (oldest at bottom):

_TestONNXRuntime has infra to test models that are either a Callable or a torch.nn.Module.

After #111497, we want to re-run all those tests for models of type torch.export.ExportedProgram.

This PR adds to self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime the capability to detect the model type under test and to export the incoming torch.nn.Module to torch.export.ExportedProgram before running the ONNX export tests.
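Conceptually, the runner now branches on the requested model type before exporting. A minimal sketch of the idea, assuming a hypothetical helper and `model_type` value (the actual flag and helper names in the test suite may differ):

```python
import torch

def _maybe_convert_to_exported_program(model, args, model_type):
    # Hypothetical helper: when the parameterized run asks for ExportedProgram,
    # wrap the incoming nn.Module with torch.export.export before the ONNX
    # export test runs; Callables and plain nn.Modules pass through unchanged.
    if model_type == "ExportedProgram" and isinstance(model, torch.nn.Module):
        return torch.export.export(model, args)
    return model
```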

_TestONNXRuntime has infra to test models that are either a Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before running
tests. This is not ideal, just a quick hack, though.

Ideally, we should be able to subclass _TestONNXRuntime, inheriting all the
parameterized classes, and overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class.

A quick PoC shows the idea of writing a BaseTestCase and reusing it
across different test suites:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    """Toy stand-in for the code under test: adds 1 to its input."""

    def __init__(self, input):
        self.input = input

    def get_result(self):
        return self.input + 1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    """Variant of the code under test with different behavior: multiplies by 10."""

    def get_result(self):
        return self.input * 10

class _FXtoONNXRuntimeBaseTest(unittest.TestCase):
    """Shared test infra; subclasses supply `values` and `class_under_test`."""

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTest):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTest):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```
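Appended to the PoC module above, a plain unittest entry point would discover both suites and run each one against its own `class_under_test` through the shared `check`/`check_all` infrastructure:

```python
if __name__ == "__main__":
    # Discovers TestFXtoONNXRuntimeBasic and TestFXtoONNXRuntimeExportedProgram
    # and runs both through the shared check/check_all infrastructure.
    unittest.main(verbosity=2)
```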


pytorch-bot bot commented Oct 27, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112289

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c758014 with merge base a9134fa:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

thiagocrepaldi pushed a commit that referenced this pull request Oct 27, 2023
@thiagocrepaldi thiagocrepaldi marked this pull request as draft October 27, 2023 21:04
@thiagocrepaldi thiagocrepaldi added the triaged, module: onnx, onnx-triaged, and release notes: onnx labels Oct 27, 2023
…rmat"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…rmat"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…rmat"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…rmat"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
thiagocrepaldi pushed a commit that referenced this pull request Oct 31, 2023
…rtedProgram and remove None from input"


This PR introduces the ability to produce GraphModules containing only Core ATen IR ops, through decompositions. It also removes `None` from user inputs, as ONNX does not support them.

Tests for these features will be executed once #112289 is merged, but for reference they are shown below:

```python
    # Excerpt from the _TestONNXRuntime suite; assumes `import torch` and
    # `from typing import Optional` in the surrounding test module.
    def test_log_sigmoid(self):
        # This produces the op `torch.ops.aten.log_sigmoid_forward`, instead of
        # the more conventional `torch.ops.aten.log_sigmoid`.
        class Model(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.m = torch.nn.LogSigmoid()

            def forward(self, x):
                return self.m(x)

        input = torch.randn(2)
        self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime(
            Model(), (input,), model_type=self.model_type
        )

    def test_none_input(self):
        class NoneInputModel(torch.nn.Module):
            def forward(
                self, x: torch.Tensor, y: Optional[torch.Tensor], z: torch.Tensor
            ):
                if y is None:
                    return x + z
                return x + y + z

        self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime(
            NoneInputModel(),
            (torch.randn(1, 2), None, torch.randn(1, 2)),
            model_type=self.model_type,
        )
```
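For the `None`-removal part, a minimal sketch of the idea (the helper name is illustrative, not the actual exporter internals):

```python
from typing import Any, Sequence, Tuple

def _drop_none_user_inputs(args: Sequence[Any]) -> Tuple[Any, ...]:
    # ONNX graphs have no notion of a None input, so optional arguments that
    # are actually None are dropped from the flattened user inputs,
    # e.g. (x, None, z) -> (x, z), matching NoneInputModel above.
    return tuple(arg for arg in args if arg is not None)
```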



…rmat"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
@thiagocrepaldi thiagocrepaldi changed the title from "Hack _TestONNXRuntime to reuse all tests for new model format" to "Extend _TestONNXRuntime to reuse all tests for new model format" Oct 31, 2023
Thiago Crepaldi added 4 commits November 14, 2023 22:02
…format"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…format"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…format"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…format"

_TestONNXRuntime has infra to test models which are either Callable or a
torch.nn.Module.

After #111497, we want to re-run all those tests for model input of type
torch.export.ExportedProgram.

This PR adds a terrible flag that must be specified by all
self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime calls to force
the model conversion to torch.export.ExportedProgram before runnning
tests. This is not ideal, just a quick hack, though

Ideally, we should be able to _TestONNXRuntime, inheriting all the
parameterized classes and be able to overload
run_test_with_fx_to_onnx_exporter_and_onnx_runtime or similar.
run_test_with_fx_to_onnx_exporter_and_onnx_runtime could also be moved
out of the class

A quick PoC shows the idea of writing on BaseTestCase and reusing
through different test suits:

```python
import unittest

class _FXtoONNXRuntimeBasic:
    def __init__(self, input):
        self.input = input
    def get_result(self):
        return self.input+1

class _FXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBasic):
    def get_result(self):
        return self.input*10

class _FXtoONNXRuntimeBaseTess(unittest.TestCase):

    def check(self, input, expected_output):
        obj = self.class_under_test(input)
        actual_output = obj.get_result()
        self.assertEqual(actual_output, expected_output)

    def check_all(self):
        for value in self.values:
            self.check(value[0], value[1])

class TestFXtoONNXRuntimeBasic(_FXtoONNXRuntimeBaseTess):
    values = ((1, 2), (3, 4))
    class_under_test = _FXtoONNXRuntimeBasic

    def test_it(self):
        print("TestFXtoONNXRuntimeBasic.test_it")
        self.check_all()

class TestFXtoONNXRuntimeExportedProgram(_FXtoONNXRuntimeBaseTess):
    values = ((1, 10), (2, 20))
    class_under_test = _FXtoONNXRuntimeExportedProgram

    def test_it(self):
        print("TestFXtoONNXRuntimeExportedProgram.test_it")
        self.check_all()
```

[ghstack-poisoned]
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
Thiago Crepaldi added 2 commits November 15, 2023 17:59
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
thiagocrepaldi pushed a commit that referenced this pull request Nov 15, 2023
Collaborator

@titaiwangms titaiwangms left a comment


I am thinking of creating a new test file for #113705. Why don't we move the ExportedProgram tests there? test_fx_to_onnx_onnxruntime.py seems pretty heavily loaded and complicated with all the parameters and fake_mode. We could separate them now and, once it's clear which API we are going with, integrate them at that time.

Collaborator

@BowenBao BowenBao left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Overall looks good, can land once the skip vs xfail conversation is resolved.

@thiagocrepaldi
Collaborator Author

> I am thinking of creating a new test file for #113705. Why don't we move the ExportedProgram tests there? test_fx_to_onnx_onnxruntime.py seems pretty heavily loaded and complicated with all the parameters and fake_mode. We could separate them now and, once it's clear which API we are going with, integrate them at that time.

The idea is not to have tests that are exclusive to torch.export.ExportedProgram or to Union[Callable, torch.nn.Module]. Otherwise, we would either duplicate the test base or end up with gaps in one suite that are covered by the other.

Union[torch.export.ExportedProgram, Callable, torch.nn.Module] models are meant to be used (and tested) interchangeably. Today torch.nn.Module is more or less the default model type, but as we adhere to torch.export within torch.onnx.dynamo_export, that will change. There is a chance that torch.onnx.dynamo_export will only take torch.export.ExportedProgram as the model type, simplifying our implementation and keeping a tight integration with the main export path in PyTorch (i.e., the torch.export public API).
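As a sketch of what "interchangeably" means from the user's point of view (assuming, per #111497, that torch.onnx.dynamo_export accepts an ExportedProgram as well as an nn.Module):

```python
import torch

class MulModel(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

x = torch.randn(2, 3)

# Path 1: hand an nn.Module to the ONNX exporter directly
onnx_program = torch.onnx.dynamo_export(MulModel(), x)

# Path 2: pre-export with torch.export and feed the ExportedProgram instead
exported_program = torch.export.export(MulModel(), (x,))
onnx_program_from_ep = torch.onnx.dynamo_export(exported_program, x)
```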

Although not in this PR, _TestONNXRuntime does deserve a proper refactoring to better handle the different 1) model types, 2) dynamic shapes, and 3) fake mode variants. These are not just bool flags passed to the export API; they require extra code to run. For example, 1) requires an extra model = torch.export.export(model, ...) call when the model type is ExportedProgram; 2) requires an additional dict with per-input dimension specs; and 3) requires a context manager wrapping input/model loading.
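A rough sketch of how much glue code each variant needs (the function name and structure here are illustrative, not the actual _TestONNXRuntime helpers):

```python
import contextlib

import torch
from torch._subclasses import fake_tensor

def _prepare_for_run(make_model, make_args, model_type, dynamic_shapes=None, fake_mode=False):
    # 3) fake mode requires a context manager wrapping model/input loading
    ctx = fake_tensor.FakeTensorMode() if fake_mode else contextlib.nullcontext()
    with ctx:
        model, args = make_model(), make_args()
    # 1) the ExportedProgram variant requires an extra torch.export.export(...) call
    if model_type == "ExportedProgram":
        model = torch.export.export(model, args)
    # 2) dynamic shapes require an explicit per-input dimension spec (a dict),
    #    not just a boolean flag on the export API
    options = {"dynamic_shapes": dynamic_shapes} if dynamic_shapes is not None else {}
    return model, args, options
```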

Thiago Crepaldi added 2 commits November 16, 2023 19:19
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
Thiago Crepaldi added 2 commits November 17, 2023 01:56
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
…format"


`_TestONNXRuntime` has infra to test models which are either Callable or a `torch.nn.Module`.

After #111497, we want to re-run all those tests for model of type `torch.export.ExportedProgram`.

This PR adds to `self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime` the capability of detect the model type to be tested and export the incoming `torch.nn.Module` model to `torch.export.ExportedProgram` before running ONNX export tests.

[ghstack-poisoned]
@thiagocrepaldi
Collaborator Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label Nov 17, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@facebook-github-bot facebook-github-bot deleted the gh/thiagocrepaldi/8/head branch November 21, 2023 15:30