TF2 Upgrader: add support of ipynb files by lc0 · Pull Request #25680 · tensorflow/tensorflow

Conversation

@lc0 (Contributor) commented Feb 11, 2019

As discussed with @martinwicke: since I only tested this locally in an isolated environment, I would be more than happy to run the required tests.

Potentially I'd follow up with updating the README and other docs-related things.
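A .ipynb notebook is just a JSON document, so notebook support essentially means walking the code cells and running the existing Python upgrader over each cell's source. Below is a minimal sketch of that idea; the process_ipynb helper and the upgrade_code callable are hypothetical stand-ins for illustration, not the code this PR actually adds:

    import json

    def process_ipynb(in_filename, out_filename, upgrade_code):
        """Run `upgrade_code` (a TF 1.x -> 2.0 source converter) over every
        code cell of a notebook and write the upgraded notebook back out."""
        with open(in_filename) as f:
            notebook = json.load(f)

        for cell in notebook.get("cells", []):
            if cell.get("cell_type") != "code":
                continue
            # Cell sources are stored as a list of lines in the notebook JSON.
            source = "".join(cell["source"])
            cell["source"] = upgrade_code(source).splitlines(keepends=True)

        with open(out_filename, "w") as f:
            json.dump(notebook, f, indent=1)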

@lc0 (Contributor Author) commented Feb 11, 2019

I think once #25653 is in, I can rebase this one.

@martinwicke (Member) left a comment

This is super-awesome!

      parser = argparse.ArgumentParser(
          formatter_class=argparse.RawDescriptionHelpFormatter,
  -       description="""Convert a TensorFlow Python file to 2.0
  +       description="""Convert a TensorFlow Python file from 1.* to 2.0
@martinwicke (Member) commented:

I prefer 1.x

@lc0 (Contributor Author) replied:

Funny: I started with 1.x, but in the header it was already written as 1.*.

"--outfile=<output file> argument is required when converting a "
"single file.")
files_processed, report_text, errors = upgrade.process_file(
files_processed, report_text, errors = ipynb.process_file(
@martinwicke (Member) commented:

I'd prefer the filename selection logic to live here.

@martinwicke (Member) commented:

Also, look at line 99: we should make sure that convert tree also understands ipynb files.

      files_processed, report_text, errors = \
          upgrader.process_file(in_filename, out_filename)

      elif in_filename.endswith('.ipynb'):
@martinwicke (Member) commented:

Can you move the dispatch logic to the main function (and possibly even process_tree)?
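As an illustration of that suggestion, extension-based dispatch in the entry point could look roughly like the sketch below; process_python_file and process_ipynb_file are hypothetical names standing in for the two converters, not identifiers from this PR:

    def process_file(in_filename, out_filename):
        """Choose a converter based on the input file's extension (sketch)."""
        if in_filename.endswith('.ipynb'):
            # Notebooks: upgrade each code cell individually.
            return process_ipynb_file(in_filename, out_filename)
        # Everything else is treated as a plain Python source file.
        return process_python_file(in_filename, out_filename)

With the selection in one place, the tree-conversion mode would then only need to match *.ipynb alongside *.py while walking a directory.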

@pragyaak self-assigned this Feb 12, 2019
@pragyaak added the size:M (CL Change Size: Medium) and stat:awaiting response (Status - Awaiting response from author) labels Feb 12, 2019
@lc0 (Contributor Author) commented Feb 12, 2019

@martinwicke Thanks for the review, I've updated accordingly.
As for the tree part, I would potentially add it as a second PR.

@tensorflowbutler removed the stat:awaiting response (Status - Awaiting response from author) label Feb 13, 2019
martinwicke previously approved these changes Feb 13, 2019
@martinwicke (Member) left a comment

LG, doing the tree stuff in a separate PR makes sense.

@martinwicke added the kokoro:force-run (Tests on submitted change) and ready to pull (PR ready for merge process) labels Feb 13, 2019
@kokoro-team removed the kokoro:force-run (Tests on submitted change) label Feb 13, 2019
@pragyaak added the ready to pull (PR ready for merge process) label and removed the ready to pull (PR ready for merge process) label Feb 15, 2019
@pragyaak assigned rthadur and unassigned pragyaak Feb 23, 2019
@lc0 force-pushed the tf2-ipynb-feature branch from 7a733d1 to c9ee938 on February 23, 2019 at 21:59
@martinwicke added the kokoro:force-run (Tests on submitted change) label Feb 23, 2019
@kokoro-team removed the kokoro:force-run (Tests on submitted change) label Feb 23, 2019
@martinwicke added the kokoro:force-run (Tests on submitted change) label Feb 23, 2019
@kokoro-team removed the kokoro:force-run (Tests on submitted change) label Feb 23, 2019
@dynamicwebpaige added the kokoro:force-run (Tests on submitted change) label Feb 24, 2019
@kokoro-team removed the kokoro:force-run (Tests on submitted change) label Feb 24, 2019
@lc0 (Contributor Author) commented Feb 24, 2019

@martinwicke I fixed the linter and other build issues. Please let me know if anything else is missing.

@tensorflow-copybara merged commit 424f6c9 into tensorflow:master Feb 25, 2019
tensorflow-copybara pushed a commit that referenced this pull request Feb 25, 2019
@lc0 mentioned this pull request Nov 5, 2019
copybara-service bot pushed a commit that referenced this pull request Apr 25, 2025
Imported from GitHub PR openxla/xla#25680

The ROCm platform currently doesn't support the **dot** --> **__triton_gemm** transformation for the OCP FP8 data type.
Copybara import of the project:

--
93b3c24e4727f3e80868a6c644d36ccc60fe0199 by scxfjiang <xuefei.jiang@amd.com>:

disable triton fp8 gemm test in gpu compiler for ROCm

--
90960280448975109e560a5bbe885aff84e36b25 by scxfjiang <xuefei.jiang@amd.com>:

remove unused variable

Merging this change closes #25680

FUTURE_COPYBARA_INTEGRATE_REVIEW=openxla/xla#25680 from ROCm:ci_disable_triton_fp8_gemm_test_for_rocm 90960280448975109e560a5bbe885aff84e36b25
PiperOrigin-RevId: 751348460
copybara-service bot pushed a commit that referenced this pull request Apr 25, 2025
Imported from GitHub PR openxla/xla#25680 (same change as above)
PiperOrigin-RevId: 751368623

Labels: cla: yes · ready to pull (PR ready for merge process) · size:M (CL Change Size: Medium)


9 participants