TensorFlow 1.5.0
Breaking Changes
- Prebuilt binaries are now built against CUDA 9 and cuDNN 7.
- Starting from the 1.6 release, our prebuilt binaries will use AVX instructions. This may break TF on older CPUs.
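Before relying on the post-1.5 binaries, it may be worth checking whether the target CPU supports AVX. A minimal Linux-only sketch (the function name and the `/proc/cpuinfo` parsing are illustrative, not part of TensorFlow):

```python
def cpu_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any CPU 'flags' line in /proc/cpuinfo lists 'avx'.

    Linux-only heuristic; returns False if the file cannot be read.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("AVX supported:", cpu_supports_avx())
```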
Major Features And Improvements
- Eager execution preview version is now available.
- TensorFlow Lite dev preview is now available.
- CUDA 9 and cuDNN 7 support.
- Accelerated Linear Algebra (XLA):
  - Add `complex64` support to XLA compiler.
  - `bfloat` support is now added to XLA infrastructure.
  - Make `ClusterSpec` propagation work with XLA devices.
  - Use a deterministic executor to generate XLA graph.
- `tf.contrib`:
  - `tf.contrib.distributions`:
    - Add `tf.contrib.distributions.Autoregressive`.
    - Make `tf.contrib.distributions` QuadratureCompound classes support batch.
    - Infer `tf.contrib.distributions.RelaxedOneHotCategorical` `dtype` from arguments.
    - Make the `tf.contrib.distributions` quadrature family parameterized by `quadrature_grid_and_prob` vs `quadrature_degree`.
    - `auto_correlation` added to `tf.contrib.distributions`.
  - Add `tf.contrib.bayesflow.layers`, a collection of probabilistic (neural) layers.
  - Add `tf.contrib.bayesflow.halton_sequence`.
  - Add `tf.contrib.data.make_saveable_from_iterator`.
  - Add `tf.contrib.data.shuffle_and_repeat`.
  - Add new custom transformation: `tf.contrib.data.scan()`.
  - `tf.contrib.distributions.bijectors`:
    - Add `tf.contrib.distributions.bijectors.MaskedAutoregressiveFlow`.
    - Add `tf.contrib.distributions.bijectors.Permute`.
    - Add `tf.contrib.distributions.bijectors.Gumbel`.
    - Add `tf.contrib.distributions.bijectors.Reshape`.
    - Support shape inference (i.e., shapes containing -1) in the Reshape bijector.
- Add `streaming_precision_recall_at_equal_thresholds`, a method for computing streaming precision and recall with O(num_thresholds + size of predictions) time and space complexity.
- Change `RunConfig` default behavior to not set a random seed, making random behavior independently random on distributed workers. We expect this to generally improve training performance. Models that do rely on determinism should set a random seed explicitly.
- Replaced the implementation of `tf.flags` with `absl.flags`.
- Add support for `CUBLAS_TENSOR_OP_MATH` in fp16 GEMM.
- Add support for CUDA on NVIDIA Tegra devices.
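The complexity claim for `streaming_precision_recall_at_equal_thresholds` can be illustrated in plain Python: each prediction is bucketed once, then a single cumulative sweep recovers precision and recall at every threshold. This is a simplified sketch of the idea under equally spaced thresholds in [0, 1], not the TensorFlow implementation:

```python
def precision_recall_at_equal_thresholds(labels, predictions, num_thresholds=5):
    """Precision/recall at `num_thresholds` equally spaced thresholds in [0, 1].

    O(num_thresholds + len(predictions)) time: bucket each prediction once,
    then one reverse sweep accumulates suffix counts for all thresholds.
    """
    # Bucket i holds predictions p with floor(p * (num_thresholds - 1)) == i,
    # so "p >= threshold_i" is exactly "bucket(p) >= i".
    tp_buckets = [0] * num_thresholds
    fp_buckets = [0] * num_thresholds
    for y, p in zip(labels, predictions):
        i = min(int(p * (num_thresholds - 1)), num_thresholds - 1)
        if y:
            tp_buckets[i] += 1
        else:
            fp_buckets[i] += 1
    total_pos = sum(tp_buckets)
    precision, recall = [], []
    tp = fp = 0
    for i in reversed(range(num_thresholds)):  # suffix sums over buckets
        tp += tp_buckets[i]
        fp += fp_buckets[i]
        precision.append(tp / (tp + fp) if tp + fp else 1.0)
        recall.append(tp / total_pos if total_pos else 0.0)
    precision.reverse()
    recall.reverse()
    return precision, recall
```

A naive implementation would rescan all predictions for each threshold, costing O(num_thresholds * len(predictions)); the bucketing makes the two terms additive instead.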
Bug Fixes and Other Changes
- Documentation updates:
  - Clarified that you can only install TensorFlow on 64-bit machines.
  - Added a short doc explaining how `Estimator`s save checkpoints.
  - Add documentation for ops supported by the `tf2xla` bridge.
  - Fix minor typos in the doc of `SpaceToDepth` and `DepthToSpace`.
  - Updated documentation comments in `mfcc_mel_filterbank.h` and `mfcc.h` to clarify that the input domain is squared magnitude spectra and the weighting is done on linear magnitude spectra (sqrt of inputs).
  - Change `tf.contrib.distributions` docstring examples to use the `tfd` alias rather than `ds`, `bs`.
  - Fix docstring typos in `tf.distributions.bijectors.Bijector`.
  - `tf.assert_equal` no longer raises `ValueError`; it now raises `InvalidArgumentError`, as documented.
  - Update Getting Started docs and API intro.
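The `mfcc_mel_filterbank.h` clarification amounts to this convention: callers pass squared-magnitude (power) spectra, and the filterbank applies its weights to linear magnitudes, i.e. the square roots of the inputs. A hypothetical pure-Python sketch of that convention (not the actual C++ implementation):

```python
import math

def apply_filterbank(power_spectrum, filter_weights):
    """Weight a power spectrum with a filterbank, per the documented convention.

    The input bins are squared magnitudes; the weighting is applied to the
    linear magnitudes (sqrt of each input bin). Illustrative sketch only.
    """
    magnitudes = [math.sqrt(max(v, 0.0)) for v in power_spectrum]
    return [sum(w * m for w, m in zip(row, magnitudes)) for row in filter_weights]
```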
- Google Cloud Storage (GCS):
  - Add userspace DNS caching for the GCS client.
  - Customize request timeouts for the GCS filesystem.
  - Improve GCS filesystem caching.
- Bug Fixes:
  - Fix bug where partitioned integer variables got their wrong shapes. Before this change, all partitions of an integer variable were initialized with the shape of the unpartitioned variable; after this change they are initialized correctly.
  - Fix correctness bug in CPU and GPU implementations of Adadelta.
  - Fix a bug in `import_meta_graph`'s handling of partitioned variables when importing into a scope. WARNING: This may break loading checkpoints of graphs with partitioned variables saved after using `import_meta_graph` with a non-empty `import_scope` argument.
  - Fix bug in offline debugger which prevented viewing events.
  - Added the `WorkerService.DeleteWorkerSession` method to the gRPC interface to fix a memory leak. Ensure that your master and worker servers are running the same version of TensorFlow to avoid compatibility issues.
  - Fix bug in peephole implementation of BlockLSTM cell.
  - Fix bug by casting dtype of `log_det_jacobian` to match `log_prob` in `TransformedDistribution`.
  - Ensure `tf.distributions.Multinomial` doesn't underflow in `log_prob`.
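The `Multinomial` underflow fix is fundamentally about staying in log space: computing the probability directly and then taking its log underflows for large counts, while summing log terms does not. A minimal illustrative sketch of a log-space multinomial log-probability (not TensorFlow's kernel):

```python
from math import lgamma, log

def multinomial_log_prob(counts, probs):
    """Log-probability of `counts` under a multinomial with cell probs `probs`.

    Works entirely in log space: lgamma for the multinomial coefficient and
    x * log(p) for the probability terms, so large counts don't underflow.
    """
    n = sum(counts)
    log_coeff = lgamma(n + 1) - sum(lgamma(x + 1) for x in counts)
    return log_coeff + sum(x * log(p) for x, p in zip(counts, probs) if x)
```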
- Other:
  - Add necessary shape util support for bfloat16.
  - Add a way to run ops using a step function to MonitoredSession.
  - Add `DenseFlipout` probabilistic layer.
  - A new flag `ignore_live_threads` is available on train. If set to `True`, it will ignore threads that remain running when tearing down infrastructure after successfully completing training, instead of throwing a RuntimeError.
  - Restandardize `DenseVariational` as a simpler template for other probabilistic layers.
  - `tf.data` now supports `tf.SparseTensor` components in dataset elements.
  - It is now possible to iterate over `Tensor`s.
  - Allow `SparseSegmentReduction` ops to have missing segment IDs.
  - Modify custom export strategy to account for multidimensional sparse float splits.
  - `Conv2D`, `Conv2DBackpropInput`, and `Conv2DBackpropFilter` now support arbitrary dilations with GPU and cuDNNv6 support.
  - `Estimator` now supports `Dataset`: `input_fn` can return a `Dataset` instead of `Tensor`s.
  - Add `RevBlock`, a memory-efficient implementation of reversible residual layers.
  - Reduce BFCAllocator internal fragmentation.
  - Add `cross_entropy` and `kl_divergence` to `tf.distributions.Distribution`.
  - Add `tf.nn.softmax_cross_entropy_with_logits_v2` which enables backprop w.r.t. the labels.
  - GPU back-end now uses `ptxas` to compile generated PTX.
  - `BufferAssignment`'s protocol buffer dump is now deterministic.
  - Change embedding op to use parallel version of `DynamicStitch`.
  - Add support for sparse multidimensional feature columns.
  - Speed up the case for sparse float columns that have only 1 value.
  - Allow sparse float splits to support multivalent feature columns.
  - Add `quantile` to `tf.distributions.TransformedDistribution`.
  - Add `NCHW_VECT_C` support for `tf.depth_to_space` on GPU.
  - Add `NCHW_VECT_C` support for `tf.space_to_depth` on GPU.
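The `_v2` cross-entropy op differs from the original by letting gradients flow into the labels, which matters when the labels themselves are learned (e.g. soft targets). The underlying math is simple: the loss is -sum_i y_i * log softmax(z)_i, so its derivative with respect to label y_i is just -log softmax(z)_i. A pure-Python sketch of that math (function names here are illustrative, not the TF API):

```python
from math import exp, log

def log_softmax(logits):
    """Numerically stable log-softmax via the log-sum-exp trick."""
    m = max(logits)
    lse = m + log(sum(exp(z - m) for z in logits))
    return [z - lse for z in logits]

def softmax_xent(labels, logits):
    """Cross entropy -sum_i y_i * log softmax(z)_i."""
    return -sum(y * ls for y, ls in zip(labels, log_softmax(logits)))

def grad_wrt_labels(logits):
    """d(xent)/d(y_i) = -log softmax(z)_i: the gradient the _v2 op exposes."""
    return [-ls for ls in log_softmax(logits)]
```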
API Changes
- Rename `SqueezeDims` attribute to `Axis` in C++ API for Squeeze op.
- `Stream::BlockHostUntilDone` now returns Status rather than bool.
- Minor refactor: move stats files from `stochastic` to `common` and remove `stochastic`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Adam Zahran, Ag Ramesh, Alan Lee, Alan Yee, Alex Sergeev, Alexander, Amir H. Jadidinejad,
Amy, Anastasios Doumoulakis, Andrei Costinescu, Andrei Nigmatulin, Anthony Platanios,
Anush Elangovan, arixlin, Armen Donigian, ArtëM Sobolev, Atlas7, Ben Barsdell, Bill Prin,
Bo Wang, Brett Koonce, Cameron Thomas, Carl Thomé, Cem Eteke, cglewis, Changming Sun,
Charles Shenton, Chi-Hung, Chris Donahue, Chris Filo Gorgolewski, Chris Hoyean Song,
Chris Tava, Christian Grail, Christoph Boeddeker, cinqS, Clayne Robison, codrut3, concerttttt,
CQY, Dan Becker, Dan Jarvis, Daniel Zhang, David Norman, dmaclach, Dmitry Trifonov,
Donggeon Lim, dongpilYu, Dr. Kashif Rasul, Edd Wilder-James, Eric Lv, fcharras, Felix Abecassis,
FirefoxMetzger, formath, FredZhang, Gaojin Cao, Gary Deer, Guenther Schmuelling, Hanchen Li,
Hanmin Qin, hannesa2, hyunyoung2, Ilya Edrenkin, Jackson Kontny, Jan, Javier Luraschi,
Jay Young, Jayaram Bobba, Jeff, Jeff Carpenter, Jeremy Sharpe, Jeroen BéDorf, Jimmy Jia,
Jinze Bai, Jiongyan Zhang, Joe Castagneri, Johan Ju, Josh Varty, Julian Niedermeier,
JxKing, Karl Lessard, Kb Sriram, Keven Wang, Koan-Sin Tan, Kyle Mills, lanhin, LevineHuang,
Loki Der Quaeler, Loo Rong Jie, Luke Iwanski, LáSzló Csomor, Mahdi Abavisani, Mahmoud Abuzaina,
ManHyuk, Marek ŠUppa, MathSquared, Mats Linander, Matt Wytock, Matthew Daley, Maximilian Bachl,
mdymczyk, melvyniandrag, Michael Case, Mike Traynor, miqlas, Namrata-Ibm, Nathan Luehr,
Nathan Van Doorn, Noa Ezra, Nolan Liu, Oleg Zabluda, opensourcemattress, Ouwen Huang,
Paul Van Eck, peisong, Peng Yu, PinkySan, pks, powderluv, Qiao Hai-Jun, Qiao Longfei,
Rajendra Arora, Ralph Tang, resec, Robin Richtsfeld, Rohan Varma, Ryohei Kuroki, SaintNazaire,
Samuel He, Sandeep Dcunha, sandipmgiri, Sang Han, scott, Scott Mudge, Se-Won Kim, Simon Perkins,
Simone Cirillo, Steffen Schmitz, Suvojit Manna, Sylvus, Taehoon Lee, Ted Chang, Thomas Deegan,
Till Hoffmann, Tim, Toni Kunic, Toon Verstraelen, Tristan Rice, Urs KöSter, Utkarsh Upadhyay,
Vish (Ishaya) Abrams, Winnie Tsang, Yan Chen, Yan Facai (颜发才), Yi Yang, Yong Tang,
Youssef Hesham, Yuan (Terry) Tang, Zhengsheng Wei, zxcqwe4906, 张志豪, 田传武
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.