dequantize(batchSize:input:output:axis:scale:bias:filterParameters:)
Dequantizes the input tensor and writes the result to the output tensor.
Declaration
static func dequantize(batchSize: Int, input: BNNSNDArrayDescriptor, output: BNNSNDArrayDescriptor, axis: Int? = nil, scale: BNNSNDArrayDescriptor?, bias: BNNSNDArrayDescriptor?, filterParameters: BNNSFilterParameters? = nil) throws

Parameters
- batchSize:
The number of input-output pairs to process.
- input:
The descriptor of the input.
- output:
The descriptor of the output.
- axis:
The index of the axis to which the function applies scale and bias. Set to nil to dequantize the entire tensor using scale and bias.
- scale:
The scale. Set to nil for a scale of 1.0.
- bias:
The bias. Set to nil for a bias of 0.0.
- filterParameters:
Runtime filter parameters.
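When both scale and bias are nil, dequantization reduces to a plain type conversion, because a scale of 1.0 and a bias of 0.0 leave each value unchanged. A minimal plain-Swift sketch of that default behavior (the array and constant names here are illustrative, not part of the BNNS API):

```swift
let quantized: [Int16] = [1000, 2000, 3000, 4000]
let scale: Float = 1.0   // the default when `scale` is nil
let bias: Float = 0.0    // the default when `bias` is nil

// With scale 1.0 and bias 0.0, each element is simply
// converted to its floating-point value.
let dequantized = quantized.map { (Float($0) - bias) / scale }
print(dequantized)
// [1000.0, 2000.0, 3000.0, 4000.0]
```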
Discussion
The following code dequantizes a 16-bit integer matrix to a single-precision matrix. Because the code applies the scale along the zeroth axis, the scale tensor contains four elements, one for each entry along that axis.
static func dequantize() {
    let inputValues = [1000, 2000, 3000, 4000,
                       5000, 6000, 7000, 8000] as [Int16]

    // Describe the 16-bit integer input matrix.
    let input = BNNSNDArrayDescriptor.allocate(
        initializingFrom: inputValues,
        shape: .matrixRowMajor(4, 2))

    // Describe the single-precision output matrix with the same shape.
    let output = BNNSNDArrayDescriptor.allocateUninitialized(
        scalarType: Float.self,
        shape: input.shape)

    // One scale element per entry along the zeroth axis.
    let scale = BNNSNDArrayDescriptor.allocate(
        initializingFrom: [1, 10, 100, 1000] as [Int16],
        shape: .vector(4))

    try? BNNS.dequantize(batchSize: 1,
                         input: input,
                         output: output,
                         axis: 0,
                         scale: scale,
                         bias: nil)

    // Prints:
    // [1000.0, 200.0, 30.0, 4.0,
    //  5000.0, 600.0, 70.0, 8.0]
    print(output.makeArray(of: Float.self)!)

    input.deallocate()
    output.deallocate()
    scale.deallocate()
}
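The printed values are consistent with the function computing (input − bias) / scale for each element, with the scale index determined by the element's position along the zeroth axis. A plain-Swift sketch of that per-element arithmetic (a reading of the example's output, not the documented formula; the names here are illustrative):

```swift
// Assumed convention inferred from the printed output:
// dequantized = (Float(quantized) - bias) / scale
let quantized: [Int16] = [1000, 2000, 3000, 4000,
                          5000, 6000, 7000, 8000]
let scales: [Float] = [1, 10, 100, 1000]
let bias: Float = 0   // `bias: nil` defaults to 0.0

// The four-element axis is contiguous in memory, so an element's
// scale index is its flat position modulo 4.
let dequantized = quantized.enumerated().map { index, value in
    (Float(value) - bias) / scales[index % 4]
}
print(dequantized)
// [1000.0, 200.0, 30.0, 4.0, 5000.0, 600.0, 70.0, 8.0]
```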