Input tensor to the deconvolution operation.
Weights representing the kernel.
Bias added to the result after the deconvolution operation.
Deconvolution kernel size: (w,), (h, w) or (d, h, w). This is the same as the kernel size used for the corresponding convolution.
Number of output filters.
Adjustment for output shape: (w,), (h, w) or (d, h, w). If target_shape is set, adj will be ignored and computed accordingly.
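For illustration, the sketch below uses the standard transposed-convolution output arithmetic (an assumption for this example, not quoted from the operator source) to show why adj is needed: several input sizes convolve down to the same size, and adj selects which of them the deconvolution restores.

```python
def deconv_out_len(in_len, kernel, stride=1, pad=0, dilate=1, adj=0):
    # Assumed standard transposed-convolution arithmetic:
    # out = (in - 1) * stride - 2 * pad + dilate * (kernel - 1) + 1 + adj
    return (in_len - 1) * stride - 2 * pad + dilate * (kernel - 1) + 1 + adj

# With kernel=3, stride=2, pad=1, inputs of length 9 and 10 both convolve
# down to length 5, so adj picks which size the deconvolution produces.
print(deconv_out_len(5, kernel=3, stride=2, pad=1, adj=0))  # 9
print(deconv_out_len(5, kernel=3, stride=2, pad=1, adj=1))  # 10
```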
Turn off cudnn for this layer.
Whether to pick the convolution algorithm by running a performance test.
Dilation factor for each dimension of the input: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
Set the layout for input, output and weight. Empty for default layout: NCW for 1d, NCHW for 2d and NCDHW for 3d. NHWC and NDHWC are only supported on GPU.
Whether to disable the bias parameter.
Number of group partitions.
The amount of implicit zero padding added during convolution for each dimension of the input: (w,), (h, w) or (d, h, w). (kernel-1)/2 is usually a good choice. If target_shape is set, pad will be ignored and a padding that will generate the target shape will be used. Defaults to no padding.
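As a worked illustration of the (kernel-1)/2 rule, using the same assumed output-size arithmetic as in the sketch above:

```python
# out = (in - 1) * stride - 2 * pad + kernel   (dilate=1, adj=0, assumed arithmetic)
in_len = 8
print((in_len - 1) * 1 - 2 * 1 + 3)  # kernel=3, stride=1, pad=(3-1)//2=1 -> 8 (size preserved)
print((in_len - 1) * 2 - 2 * 1 + 4)  # kernel=4, stride=2, pad=1          -> 16 (exact 2x upsampling)
```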
The stride used for the corresponding convolution: (w,), (h, w) or (d, h, w). Defaults to 1 for each dimension.
Shape of the output tensor: (w,), (h, w) or (d, h, w).
Maximum temporary workspace allowed (MB) in deconvolution. This parameter has two uses. When CUDNN is not used, it determines the effective batch size of the deconvolution kernel. When CUDNN is used, it controls the maximum temporary storage used for tuning the best CUDNN kernel when the limited_workspace strategy is used.
This Param object is specifically used for Deconvolution.
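A minimal end-to-end sketch, assuming the MXNet Python NDArray API; the parameter names match the descriptions above, while the shapes and values are illustrative only:

```python
import mxnet as mx

# NCHW input: batch 1, 3 channels, 8x8 spatial size.
data = mx.nd.random.uniform(shape=(1, 3, 8, 8))
# Assumed Deconvolution weight layout: (input channels, num_filter / num_group,
# kernel h, kernel w); verify against your MXNet version.
weight = mx.nd.random.uniform(shape=(3, 16, 4, 4))

# kernel=4, stride=2, pad=1 doubles the spatial size: 8x8 -> 16x16.
out = mx.nd.Deconvolution(data=data, weight=weight,
                          kernel=(4, 4), stride=(2, 2), pad=(1, 1),
                          num_filter=16, no_bias=True, layout='NCHW')
print(out.shape)  # (1, 16, 16, 16)
```

Passing target_shape=(16, 16) instead of pad would let the operator derive the padding for the requested output size, as described above.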