TensorFlow 2.12.0
Release 2.12.0
TensorFlow
Breaking Changes
- Build, Compilation and Packaging:
  - Removed the redundant packages `tensorflow-gpu` and `tf-nightly-gpu`. These packages were removed and replaced with packages that direct users to switch to `tensorflow` or `tf-nightly` respectively. Since TensorFlow 2.1, the only difference between these two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
- `tf.function`:
  - `tf.function` now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on. This change may break code where the function signature is malformed but was previously ignored, such as:
    - Using `functools.wraps` on a function with a different signature
    - Using `functools.partial` with an invalid `tf.function` input
  - `tf.function` now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized, similarly to existing SavedModel signature behavior.
  - Parameterless `tf.function`s are assumed to have an empty `input_signature` instead of an undefined one, even if the `input_signature` is unspecified.
  - `tf.types.experimental.TraceType` now requires an additional `placeholder_value` method to be defined.
  - `tf.function` now traces with placeholder values generated by TraceType instead of the value itself.
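The `functools.wraps` case can be illustrated without TensorFlow. `functools.wraps` copies the wrapped function's metadata, so `inspect.signature` reports the original function's parameters rather than the wrapper's actual ones; this kind of mismatched metadata is what the new inspect-based parsing surfaces instead of silently ignoring (a pure-stdlib sketch, not the `tf.function` internals):

```python
import functools
import inspect

def original(x, y):
    return x + y

@functools.wraps(original)
def wrapper(*args, scale=1, **kwargs):
    # The wrapper's real parameter list differs from the metadata
    # that functools.wraps copied over from `original`.
    return original(*args, **kwargs) * scale

# inspect.signature follows __wrapped__, so it reports (x, y) and hides
# the wrapper's extra `scale` parameter.
print(inspect.signature(wrapper))                        # (x, y)
print(inspect.signature(wrapper, follow_wrapped=False))  # (*args, scale=1, **kwargs)
```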
- Experimental APIs:
  - `tf.config.experimental.enable_mlir_graph_optimization` and `tf.config.experimental.disable_mlir_graph_optimization` were removed.
Major Features and Improvements
- Support for Python 3.11 has been added.
- Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.
- `tf.lite`:
  - Add 16-bit float type support for the built-in op `fill`.
  - Transpose now supports 6D tensors.
  - Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
- `tf.experimental.dtensor`:
  - Coordination service now works with `dtensor.initialize_accelerator_system`, and is enabled by default.
  - Add `tf.experimental.dtensor.is_dtensor` to check if a tensor is a DTensor instance.
- `tf.data`:
  - Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the `experimental_symbolic_checkpoint` option of `tf.data.Options()`.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.random()` operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). If `seed` is set and `rerandomize_each_iteration=True`, the `random()` operation will produce a different (deterministic) sequence of numbers every epoch.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.sample_from_datasets()` operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If `seed` is set and `rerandomize_each_iteration=True`, the `sample_from_datasets()` operation will use a different (deterministic) sequence of numbers every epoch.
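The rerandomization contract can be sketched in plain Python. This is only an illustration of the documented semantics, not TensorFlow's implementation; `epoch_stream` and its seed-mixing scheme are invented for the example:

```python
import random

def epoch_stream(seed, epoch, rerandomize_each_iteration, n=5):
    # Sketch of the option's contract (not TF's implementation): with
    # rerandomize_each_iteration=False every epoch replays the same seeded
    # sequence; with True the seed is mixed with the epoch index, so each
    # epoch gets a different but still deterministic sequence.
    effective_seed = seed + epoch * 1_000_003 if rerandomize_each_iteration else seed
    rng = random.Random(effective_seed)
    return [rng.randrange(100) for _ in range(n)]

fixed = [epoch_stream(seed=42, epoch=e, rerandomize_each_iteration=False) for e in range(3)]
varied = [epoch_stream(seed=42, epoch=e, rerandomize_each_iteration=True) for e in range(3)]
print(fixed[0] == fixed[1] == fixed[2])        # True: same numbers every epoch
print(varied[0] != varied[1])                  # True: re-randomized per epoch
print(varied[1] == epoch_stream(42, 1, True))  # True: still reproducible
```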
-
tf.test:- Added
tf.test.experimental.sync_devices, which is useful for accurately measuring performance in benchmarks.
- Added
- `tf.experimental.dtensor`:
  - Added experimental support for ReduceScatter fusion on GPU (NCCL).
Bug Fixes and Other Changes
- `tf.SavedModel`:
  - Introduced a new class `tf.saved_model.experimental.Fingerprint` that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
  - Introduced the API `tf.saved_model.experimental.read_fingerprint(export_dir)` for reading the fingerprint of a SavedModel.
- `tf.random`:
  - Added non-experimental aliases for `tf.random.split` and `tf.random.fold_in`; the experimental endpoints are still available, so no code changes are necessary.
- `tf.experimental.ExtensionType`:
  - Added the function `experimental.extension_type.as_dict()`, which converts an instance of `tf.experimental.ExtensionType` to a `dict` representation.
- `stream_executor`:
  - The top-level `stream_executor` directory has been deleted; users should use the equivalent headers and targets under `compiler/xla/stream_executor`.
- `tf.nn`:
  - Added `tf.nn.experimental.general_dropout`, which is similar to `tf.random.experimental.stateless_dropout` but accepts a custom sampler function.
- `tf.types.experimental.GenericFunction`:
  - The `experimental_get_compiler_ir` method supports `tf.TensorSpec` compilation arguments.
- `tf.config.experimental.mlir_bridge_rollout`:
  - Removed the enums `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` and `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED`, which are no longer used by the tf2xla bridge.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
tf.keras:
- Moved all saving-related utilities to a new namespace, `keras.saving`, for example: `keras.saving.load_model`, `keras.saving.save_model`, `keras.saving.custom_object_scope`, `keras.saving.get_custom_objects`, `keras.saving.register_keras_serializable`, `keras.saving.get_registered_name` and `keras.saving.get_registered_object`. The previous API locations (in `keras.utils` and `keras.models`) will remain available indefinitely, but we recommend you update your code to point to the new API locations.
- Improvements and fixes in Keras loss masking:
  - Whether you represent a ragged tensor as a `tf.RaggedTensor` or use Keras masking, the returned loss values should be identical to each other. In previous versions Keras may have silently ignored the mask.
  - If you use masked losses with Keras, the loss values may be different in TensorFlow 2.12 compared to previous versions.
  - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
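The intended masking behavior can be sketched in plain Python. `masked_mean_loss` below is a hypothetical illustration of the contract (only unmasked positions contribute, incompatible shapes raise an error), not Keras's actual loss-reduction code:

```python
def masked_mean_loss(per_element_losses, mask):
    # Hypothetical helper illustrating the contract, not Keras's code:
    # only unmasked positions contribute to the reduced loss, and an
    # incompatible mask shape is an error rather than silently ignored.
    if len(mask) != len(per_element_losses):
        raise ValueError("mask shape incompatible with losses")
    kept = [loss for loss, keep in zip(per_element_losses, mask) if keep]
    return sum(kept) / len(kept)

print(masked_mean_loss([1.0, 3.0, 5.0, 7.0], [True, True, False, False]))  # 2.0
```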
Major Features and Improvements
tf.keras:
- The new Keras model saving format (`.keras`) is available. You can start using it via `model.save(f"{fname}.keras", save_format="keras_v3")`. In the future it will become the default for all files with the `.keras` extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python `lambdas` are disallowed at loading time. If you want to use `lambdas`, you can pass `safe_mode=False` to the loading method (only do this if you trust the source of the model).
- Added a `model.export(filepath)` API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
- Added the `keras.export.ExportArchive` class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on `tf.function` tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
- Added the utility `tf.keras.utils.FeatureSpace`, a one-stop shop for structured data preprocessing and encoding.
- Added `tf.SparseTensor` input support to the `tf.keras.layers.Embedding` layer. The layer now accepts a new boolean argument `sparse`. If `sparse` is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
- Added `jit_compile` as a settable property to `tf.keras.Model`.
- Added a `synchronized` optional parameter to `layers.BatchNormalization`.
- Added a deprecation warning to `layers.experimental.SyncBatchNormalization`, suggesting the use of `layers.BatchNormalization` with `synchronized=True` instead.
- Updated `tf.keras.layers.BatchNormalization` to support masking of the inputs (`mask` argument) when computing the mean and variance.
- Added `tf.keras.layers.Identity`, a placeholder pass-through layer.
- Added a `show_trainable` option to `tf.keras.utils.model_to_dot` to display layer trainable status in model plots.
- Added the ability to save a `tf.keras.utils.FeatureSpace` object via `feature_space.save("myfeaturespace.keras")`, and reload it via `feature_space = tf.keras.models.load_model("myfeaturespace.keras")`.
- Added the utility `tf.keras.utils.to_ordinal` to convert a class vector to an ordinal regression / classification matrix.
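As a rough illustration of what an ordinal regression matrix is, the sketch below reimplements the assumed behavior of `to_ordinal` in plain Python (cumulative indicator columns, one fewer column than `num_classes`); consult the API docs for the authoritative semantics:

```python
def to_ordinal_sketch(y, num_classes):
    # Assumed encoding (verify against the API docs): entry [i][j] is 1
    # iff y[i] > j, producing a (len(y), num_classes - 1) matrix of
    # cumulative class indicators used for ordinal regression.
    return [[1 if label > j else 0 for j in range(num_classes - 1)] for label in y]

print(to_ordinal_sketch([0, 1, 3], num_classes=4))
# [[0, 0, 0], [1, 0, 0], [1, 1, 1]]
```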
Bug Fixes and Other Changes
- N/A
Security
- Fixes an FPE in TFLite in conv kernel CVE-2023-27579
- Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
- Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
- Fixes a segfault in Bincount with XLA CVE-2023-25675
- Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
- Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
- Fixes segmentation fault in tfg-translate CVE-2023-25671
- Fixes an NPE in QuantizedMatMulWithBiasAndDequantize CVE-2023-25670
- Fixes an FPE in AvgPoolGrad with XLA CVE-2023-25669
- Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation CVE-2023-25668
- Fixes a segfault when opening multiframe gif CVE-2023-25667
- Fixes an NPE in SparseSparseMaximum CVE-2023-25665
- Fixes an FPE in AudioSpectrogram CVE-2023-25666
- Fixes a heap-buffer-overflow in AvgPoolGrad CVE-2023-25664
- Fixes an NPE in TensorArrayConcatV2 CVE-2023-25663
- Fixes an integer overflow in EditDistance CVE-2023-25662
- Fixes a segfault in `tf.raw_ops.Print` CVE-2023-25660
- Fixes an OOB read in DynamicStitch CVE-2023-25659
- Fixes an OOB read in GRUBlockCellGrad CVE-2023-25658
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, Vinila S, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09