Highlights of this release
- Moved to CUDA 9, cuDNN 7, and Visual Studio 2017.
- Removed Python 3.4 support.
- Added Volta GPU and FP16 support.
- Better ONNX support.
- CPU performance improvements.
- More OPs.
OPs
- `top_k` operation: in the forward pass it computes the top (largest) k values and corresponding indices along the specified axis. In the backward pass the gradient is scattered to the top k elements (an element not in the top k gets a zero gradient).
- `gather` operation now supports an axis argument.
- `squeeze` and `expand_dims` operations for easily removing and adding singleton axes.
- `zeros_like` and `ones_like` operations. In many situations you can just rely on CNTK correctly broadcasting a simple 0 or 1, but sometimes you need the actual tensor.
- `depth_to_space`: Rearranges elements in the input tensor from the depth dimension into spatial blocks. Typical use of this operation is for implementing sub-pixel convolution for some image super-resolution models.
- `space_to_depth`: Rearranges elements in the input tensor from the spatial dimensions to the depth dimension. It is largely the inverse of DepthToSpace.
- `sum` operation: Create a new Function instance that computes the element-wise sum of input tensors.
- `softsign` operation: Create a new Function instance that computes the element-wise softsign of an input tensor.
- `asinh` operation: Create a new Function instance that computes the element-wise asinh of an input tensor.
- `log_softmax` operation: Create a new Function instance that computes the logsoftmax normalized values of an input tensor.
- `hard_sigmoid` operation: Create a new Function instance that computes the hard_sigmoid normalized values of an input tensor.
- `element_and`, `element_not`, `element_or`, `element_xor` element-wise logic operations.
- `reduce_l1` operation: Computes the L1 norm of the input tensor's elements along the provided axes.
- `reduce_l2` operation: Computes the L2 norm of the input tensor's elements along the provided axes.
- `reduce_sum_square` operation: Computes the sum square of the input tensor's elements along the provided axes.
- `image_scaler` operation: Alteration of an image by scaling its individual values.
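The short sketch below exercises a few of these ops through the Python API. It is illustrative only; the exact signatures of `top_k`, `squeeze`, and `depth_to_space` (including `top_k`'s default axis and the `block_size` argument) are assumed from the 2.5-era `cntk.ops` module.

```python
import numpy as np
import cntk as C

# Minimal sketch (assumes 2.5-era cntk.ops signatures; shapes in comments are illustrative).
x = C.input_variable((4,))

# top_k: the forward pass returns the largest k values and their indices.
topk = C.top_k(x, 2)
print(topk.eval({x: np.array([[1., 4., 2., 3.]], dtype=np.float32)}))
# -> one array per output: values [[4., 3.]] and indices [[1., 3.]]

# squeeze: drop singleton axes, (1, 3, 1) -> (3,).
y = C.constant(np.zeros((1, 3, 1), dtype=np.float32))
print(C.squeeze(y).shape)

# depth_to_space: (C*b*b, H, W) -> (C, H*b, W*b) for block size b.
z = C.constant(np.arange(16, dtype=np.float32).reshape(4, 2, 2))
print(C.depth_to_space(z, 2).shape)   # -> (1, 4, 4)
```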
ONNX
- There have been several improvements to ONNX support in CNTK.
- Updates
  - Updated ONNX `Reshape` op to handle `InferredDimension`.
  - Adding `producer_name` and `producer_version` fields to ONNX models.
  - Handling the case when neither `auto_pad` nor `pads` attribute is specified in ONNX `Conv` op.
- Bug fixes
  - Fixed bug in ONNX `Pooling` op serialization.
  - Bug fix to create ONNX `InputVariable` with only one batch axis.
  - Bug fixes and updates to implementation of ONNX `Transpose` op to match updated spec.
  - Bug fixes and updates to implementation of ONNX `Conv`, `ConvTranspose`, and `Pooling` ops to match updated spec.
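For context, the sketch below shows the ONNX workflow these fixes target: exporting a CNTK model to ONNX and loading it back. It assumes `Function.save`/`Function.load` accept `format=C.ModelFormat.ONNX` as in recent CNTK releases; the model definition and file name are illustrative.

```python
import cntk as C

# Hedged sketch: export a small model to ONNX and load it back.
x = C.input_variable((20,))
z = C.layers.Dense(10, activation=C.relu)(x)

z.save("model.onnx", format=C.ModelFormat.ONNX)                     # export to ONNX
z_onnx = C.Function.load("model.onnx", format=C.ModelFormat.ONNX)   # import back
print(z_onnx.output.shape)                                          # -> (10,)
```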
Operators
- Group convolution
  - Fixed bug in group convolution. Output of CNTK `Convolution` op will change for groups > 1. A more optimized implementation of group convolution is expected in the next release.
  - Better error reporting for group convolution in `Convolution` layer.
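A hedged sketch of grouped convolution through the low-level op follows. The `groups` parameter of `cntk.ops.convolution` and the `(out_ch, in_ch // groups, kh, kw)` kernel layout are assumptions based on the usual grouped-convolution convention, not a definitive API reference.

```python
import numpy as np
import cntk as C

# Hedged sketch of group convolution via the low-level op.
# Assumes cntk.ops.convolution takes a `groups` argument and a kernel
# laid out as (out_ch, in_ch // groups, kh, kw).
groups, in_ch, out_ch = 2, 4, 8
x = C.input_variable((in_ch, 8, 8))
W = C.parameter((out_ch, in_ch // groups, 3, 3), init=C.glorot_uniform())

# No padding along the channel axis, 'same' padding spatially.
y = C.convolution(W, x, auto_padding=[False, True, True], groups=groups)
print(y.shape)   # expected (8, 8, 8): each group maps 2 input channels to 4 outputs
```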
Halide Binary Convolution
- The CNTK build can now use optional Halide libraries to build the `Cntk.BinaryConvolution.so/dll` library, which can be used with the `netopt` module. The library contains optimized binary convolution operators that perform better than the Python-based binarized convolution operators. To enable Halide in the build, please download a Halide release and set the `HALIDE_PATH` environment variable before starting a build. On Linux, you can use `./configure --with-halide[=directory]` to enable it. For more information on how to use this feature, please refer to How_to_use_network_optimization.