Add tf.stop_gradient in tf.summary.histogram #5311
Merged
Working around a user issue. It's intended as a temporary fix, but keeping it long term has no downsides that I know of.
Adds tf.stop_gradient in tf.summary.histogram. The function is not differentiable, and building gradient graphs for its internal conds has been causing issues. This should be a no-op with a tiny positive performance impact for most users, and it works around an error for those using XLA with multiple/persistent tf.GradientTapes (due to XlaDynamicUpdateSlice not having a gradient defined; there is a separate TensorFlow bug about the op's missing gradient, which is the more satisfying medium-term solution here).
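For context, here is a minimal sketch of the same idea applied at a call site rather than inside the library (the log path and variable are made up for illustration). tf.stop_gradient acts as an identity in the forward pass, so the logged histogram is unchanged, while the tape records no path through the summary's internal ops:

```python
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/histogram_demo")  # illustrative path
x = tf.Variable([1.0, 2.0, 3.0])

with tf.GradientTape(persistent=True) as tape:
    y = tf.reduce_sum(x * x)
    with writer.as_default(step=0):
        # Identity in the forward pass: the histogram is logged as usual,
        # but no gradient graph is built through the summary's bucketing
        # conds. The PR applies this wrapping inside tf.summary.histogram
        # itself, so users don't have to do it manually.
        tf.summary.histogram("x", tf.stop_gradient(x))

grads = tape.gradient(y, x)  # [2., 4., 6.], unaffected by the summary op
del tape  # release the persistent tape's resources
```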
N/A; this just changes the implementation of a summary API.
Two folks (at Alphabet) have confirmed that this works around the issue they reported. It's a bit tricky to reproduce; we might be able to add a unit test with XLA on CPU in the TensorBoard repo, although the original report relates to TPU usage. A cheaper property test is sketched below.
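In the meantime, a smoke test of the property the fix relies on (gradients do not flow through a gradient-stopped input) could look like this sketch; the test class and method names are hypothetical:

```python
import tensorflow as tf

class HistogramStopGradientTest(tf.test.TestCase):  # hypothetical name
    def test_no_gradient_path_through_stopped_input(self):
        x = tf.constant([1.0, 2.0, 3.0])
        with tf.GradientTape() as tape:
            tape.watch(x)
            # Mirrors what tf.summary.histogram now does internally:
            # downstream ops only ever see a gradient-stopped tensor.
            stopped = tf.stop_gradient(x)
            y = tf.reduce_sum(stopped * stopped)
        # There is no path from y back to x, so the gradient is None.
        self.assertIsNone(tape.gradient(y, x))

if __name__ == "__main__":
    tf.test.main()
```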
XlaDynamicUpdateSlice should have a gradient defined, and eventually it will.