Release 2.8.0
Major Features and Improvements
- `tf.lite`:
  - Added TFLite builtin op support for the following TF ops:
    - `tf.raw_ops.Bucketize` op on CPU.
    - `tf.where` op for data types `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
    - `tf.random.normal` op for output data type `tf.float32` on CPU.
    - `tf.random.uniform` op for output data type `tf.float32` on CPU.
    - `tf.random.categorical` op for output data type `tf.int64` on CPU.
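  A minimal sketch, not from the release notes, of exercising one of the newly supported ops (`tf.random.uniform`) through the TFLite converter:

  ```python
  import tensorflow as tf

  # Sketch: a function using tf.random.uniform, one of the ops newly
  # supported as a TFLite builtin in this release.
  @tf.function(input_signature=[tf.TensorSpec(shape=[2], dtype=tf.float32)])
  def add_noise(x):
      return x + tf.random.uniform(shape=[2], dtype=tf.float32)

  converter = tf.lite.TFLiteConverter.from_concrete_functions(
      [add_noise.get_concrete_function()])
  tflite_model = converter.convert()
  ```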
- `tensorflow.experimental.tensorrt`:
  - `conversion_params` is now deprecated inside `TrtGraphConverterV2` in favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`, `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and `allow_build_at_runtime`.
  - Added a new parameter called `save_gpu_specific_engines` to the `.save()` function inside `TrtGraphConverterV2`. When `False`, the `.save()` function won't save any TRT engines that have been built. When `True` (default), the original behavior is preserved.
  - `TrtGraphConverterV2` provides a new API called `.summary()` which outputs a summary of the inference converted by TF-TRT. It shows each `TRTEngineOp` with the shape and dtype of its input(s) and output(s). A detailed version of the summary additionally prints all the TensorFlow ops included in each of the `TRTEngineOp`s.
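  A hedged sketch of the new conversion flow; the SavedModel paths here are hypothetical:

  ```python
  from tensorflow.python.compiler.tensorrt import trt_convert as trt

  # conversion_params is deprecated, so options are passed as direct arguments.
  converter = trt.TrtGraphConverterV2(
      input_saved_model_dir="my_saved_model",  # hypothetical path
      precision_mode=trt.TrtPrecisionMode.FP16,
      minimum_segment_size=3)
  converter.convert()
  converter.summary()  # one entry per TRTEngineOp, with input/output shapes and dtypes
  converter.save("my_trt_model", save_gpu_specific_engines=False)
  ```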
- `tf.tpu.experimental.embedding`:
  - `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional argument `output_shape` which can specify the shape of the output activation for the feature.
  - `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior as `tf.tpu.experimental.embedding.serving_embedding_lookup`, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, though the input tensor remains rank 2, the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
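  A minimal sketch of the new `output_shape` argument, with a hypothetical table and feature:

  ```python
  import tensorflow as tf

  # Hypothetical embedding table and feature, for illustration only.
  table = tf.tpu.experimental.embedding.TableConfig(
      vocabulary_size=1024, dim=16, name="hypothetical_table")
  feature = tf.tpu.experimental.embedding.FeatureConfig(
      table=table,
      output_shape=[16, 4],  # new in 2.8: shape of the output activation
      name="hypothetical_feature")
  ```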
- Add `tf.config.experimental.enable_op_determinism`, which makes TensorFlow ops run deterministically at the cost of performance. This replaces the `TF_DETERMINISTIC_OPS` environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes.
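  A minimal sketch of enabling op determinism; pairing it with a fixed seed is assumed here for end-to-end reproducibility:

  ```python
  import tensorflow as tf

  # Set a global seed and enable deterministic ops (new in 2.8), so
  # repeated runs produce identical results, at some performance cost.
  tf.random.set_seed(42)
  tf.config.experimental.enable_op_determinism()
  ```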
- (Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.
Bug Fixes and Other Changes
- `tf.data`:
  - The optimization `parallel_batch` is now enabled by default unless disabled by users; it parallelizes copying of batch elements.
  - Added the ability for `TensorSliceDataset` to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
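  A hedged sketch of opting out of the now-default `parallel_batch` optimization via `tf.data` options:

  ```python
  import tensorflow as tf

  # Disable the parallel_batch optimization for a pipeline.
  options = tf.data.Options()
  options.experimental_optimization.parallel_batch = False

  dataset = tf.data.Dataset.range(1000).batch(32).with_options(options)
  ```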
- `tf.lite`:
  - Adds GPU delegation support for serialization to the Java API. This boosts initialization time by up to 90% when OpenCL is available.
  - Deprecated `Interpreter::SetNumThreads` in favor of `InterpreterBuilder::SetNumThreads`.
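  The deprecation concerns the C++ API; as a hedged Python-side analogue, thread count is likewise configured at construction time:

  ```python
  import tensorflow as tf

  # "model.tflite" is a hypothetical path. The C++ deprecation moves thread
  # configuration from Interpreter::SetNumThreads to
  # InterpreterBuilder::SetNumThreads, i.e. to build time, mirroring this.
  interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
  interpreter.allocate_tensors()
  ```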
- `tf.keras`:
  - Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator.
  - Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing` layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
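    A minimal sketch of crossing a string feature with an integer feature:

    ```python
    import tensorflow as tf

    # Cross two scalar features into hashed bin ids.
    layer = tf.keras.layers.experimental.preprocessing.HashedCrossing(num_bins=5)
    feat1 = tf.constant(["A", "B", "A", "B", "A"])
    feat2 = tf.constant([101, 101, 101, 102, 102])
    print(layer((feat1, feat2)))  # int64 bin ids in [0, 5)
    ```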
  - Removed `keras.layers.experimental.preprocessing.CategoryCrossing`. Users should migrate to the `HashedCrossing` layer or use `tf.sparse.cross`/`tf.ragged.cross` directly.
  - Added additional `standardize` and `split` modes to `TextVectorization`:
    - `standardize="lower"` will lowercase inputs.
    - `standardize="string_punctuation"` will remove all punctuation.
    - `split="character"` will split on every unicode character.
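    A hedged sketch combining the new `standardize="lower"` and `split="character"` modes:

    ```python
    import tensorflow as tf

    # Lowercase-only standardization with per-character splitting.
    vectorizer = tf.keras.layers.TextVectorization(
        standardize="lower", split="character", output_mode="int")
    vectorizer.adapt(["Hello TF"])
    print(vectorizer(["Hello"]))  # one integer token id per character
    ```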
  - Added an `output_mode` argument to the `Discretization` and `Hashing` layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support `output_mode`.
  - All preprocessing layer output will follow the compute dtype of a `tf.keras.mixed_precision.Policy`, unless constructed with `output_mode="int"`, in which case output will be `tf.int64`. The output type of any preprocessing layer can be controlled individually by passing a `dtype` argument to the layer.
  - Adds `tf.random.Generator` for keras initializers and all RNG code.
  - Added 3 new APIs for enabling/disabling/checking the usage of `tf.random.Generator` in the keras backend, which will be the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g. if a test is checking against some golden number). These 3 APIs will allow users to disable the new behavior and switch back to the legacy one if they prefer. In the future (e.g. TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well.
  - `tf.keras.callbacks.experimental.BackupAndRestore` is now available as `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is deprecated and will be removed in a future release.
  - `tf.keras.experimental.SidecarEvaluator` is now available as `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is deprecated and will be removed in a future release.
  - Metrics update and collection logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_metrics()`.
  - Losses computation logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_loss()`.
  - `jit_compile` added to `Model.compile()` on an opt-in basis to compile the model's training step with XLA. Note that `jit_compile=True` may not necessarily work for all models.
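  A hedged sketch combining two of the additions above, an overridden `Model.compute_metrics()` and opt-in XLA compilation:

  ```python
  import tensorflow as tf

  class MyModel(tf.keras.Sequential):
      def compute_metrics(self, x, y, y_pred, sample_weight):
          # Keep the default metric updates; custom metric logic could go here.
          return super().compute_metrics(x, y, y_pred, sample_weight)

  model = MyModel([tf.keras.layers.Dense(1)])
  model.compile(optimizer="sgd", loss="mse", jit_compile=True)
  ```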
- Deterministic Op Functionality:
  - Fix regression in deterministic selection of deterministic cuDNN convolution algorithms, a regression that was introduced in v2.5. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
  - Add deterministic GPU implementations of:
    - `tf.function(jit_compile=True)`'s that use `Scatter`.
    - (since v2.7) Stateful ops used in `tf.data.Dataset`
    - (since v2.7) `tf.convert_to_tensor` when fed with (sparse) `tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`)
    - (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor` reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its dense `params` input)
    - (since v2.7) `tf.math.segment_mean`
    - (since v2.7) `tf.math.segment_prod`
    - (since v2.7) `tf.math.segment_sum`
    - (since v2.7) `tf.math.unsorted_segment_mean`
    - (since v2.7) `tf.math.unsorted_segment_prod`
    - (since v2.7) `tf.math.unsorted_segment_sum`
    - (since v2.7) `tf.math.unsorted_segment_sqrt_n`
    - (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in a prior release, and confirmed with tests)
    - (since v2.7) `tf.nn.sparse_softmax_cross_entropy_with_logits`
    - (since v2.7) Run `tf.scatter_nd` and other related scatter functions, such as `tf.tensor_scatter_nd_update`, on CPU (with significant performance penalty).
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after `tf.config.experimental.enable_op_determinism` has been called), an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message), unless otherwise specified, to be thrown.
    - `FakeQuantWithMinMaxVarsGradient` and `FakeQuantWithMinMaxVarsPerChannelGradient`
    - (since v2.7) `tf.compat.v1.get_seed` if the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++
    - (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset` when `is_training=False`
    - (since v2.7) `tf.image.adjust_contrast` forward
    - (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST` backprop
    - (since v2.7) `tf.linalg.svd`
    - (since v2.7) `tf.math.bincount`
    - (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not using cuDNN convolution
    - (since v2.7) `tf.nn.dilation2d` gradient
    - (since v2.7) `tf.nn.max_pool_with_argmax` gradient
    - (since v2.7) `tf.raw_ops.DebugNumericSummary` and `tf.raw_ops.DebugNumericSummaryV2`
    - (since v2.7) `tf.timestamp`. Throws `FailedPrecondition`
    - (since v2.7) `tf.Variable.scatter_add` (and other scatter methods, both on ref and resource variables)
    - (since v2.7) The random-number-generating ops in the `tf.random` module when the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++
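  A hedged sketch of the exception-throwing behavior for unseeded `tf.random` ops described above:

  ```python
  import tensorflow as tf

  # With op-determinism enabled and no global seed set, seed-dependent
  # tf.random ops raise RuntimeError from Python (per the note above).
  tf.config.experimental.enable_op_determinism()
  try:
      tf.random.normal([2, 2])  # tf.random.set_seed() has not been called
  except RuntimeError as e:
      print("Raised as documented:", e)
  ```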
- TensorFlow-oneDNN no longer supports explicit use of oneDNN blocked tensor format; e.g., setting the environment variable `TF_ENABLE_MKL_NATIVE_FORMAT` will not have any effect.
- TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2) for both GPUs and CPUs.
- Due to security issues (see section below), all boosted trees code has been deprecated. Users should switch to TensorFlow Decision Forests. TF's boosted trees code will be eliminated before the branch cut for TF 2.9 and will no longer be present from that release onward.
Security
- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes an integer overflow in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in `SpecializeType` (CVE-2022-23574)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in the `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Fixes a heap OOB access in `RunForwardTypeInference` (CVE-2022-23592)
- Fixes a crash due to erroneous `StatusOr` (CVE-2022-23590)
- Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
- Fixes a segfault in `simplifyBroadcast` (MLIR) (CVE-2022-23593)
- Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) (CVE-2022-23595)
- Updates `icu` to `69.1` to handle CVE-2020-10531
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer, tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei Sun, Yong Tang, Yuduo Wu