MPSCNNKernel
Base class for neural network layers.
Declaration

class MPSCNNKernel : MPSKernel

Overview
An MPSCNNKernel object consumes one MPSImage object and produces one MPSImage object.
The region overwritten in the destination image is described by the clipRect property. The top-left corner of the region consumed (ignoring adjustments for filter size, such as a convolution kernel's extent) is given by the offset property. The size of the region consumed is a function of the clipRect size and any subsampling caused by pixel strides (for example, strideInPixelsX and strideInPixelsY in the MPSCNNPooling class). Wherever the offset and clipRect properties would cause an {x,y} pixel address outside the image to be read, the edgeMode property determines what value is read there.
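The properties above can be set before encoding a kernel. A minimal sketch, assuming a Metal `device`, a `commandBuffer`, and two MPSImage objects `source` and `destination` already exist:

```swift
import MetalPerformanceShaders

// A 2x2 max-pooling kernel with a stride of 2 in each dimension.
let pool = MPSCNNPoolingMax(device: device,
                            kernelWidth: 2, kernelHeight: 2,
                            strideInPixelsX: 2, strideInPixelsY: 2)

// Start reading the source at pixel (4, 4) of image 0.
pool.offset = MPSOffset(x: 4, y: 4, z: 0)

// Overwrite only a 32 x 32 region of the destination, one image deep.
pool.clipRect = MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0),
                          size: MTLSize(width: 32, height: 32, depth: 1))

// Out-of-bounds reads return zero rather than clamping to the edge.
pool.edgeMode = .zero

pool.encode(commandBuffer: commandBuffer,
            sourceImage: source,
            destinationImage: destination)
```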
The z or depth component of the offset, clipRect.origin, and clipRect.size properties indexes which images to use. If the MPSImage object contains only a single image, then these values should be offset.z = 0, clipRect.origin.z = 0, and clipRect.size.depth = 1. If the MPSImage object contains multiple images, then the value of clipRect.size.depth determines the number of images to process. Both the source and destination MPSImage objects must have at least this many images. The value of offset.z refers to the starting image index of the source; thus, the value of offset.z + clipRect.size.depth must be <= source.numberOfImages. Similarly, the value of clipRect.origin.z determines the starting image index of the destination; thus, the value of clipRect.origin.z + clipRect.size.depth must be <= destination.numberOfImages.
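As a sketch of the multi-image case, assuming `kernel` is an MPSCNNKernel and `source`/`destination` are MPSImage objects with numberOfImages > 1, the following processes four images, reading source images 2 through 5 and writing destination images 0 through 3:

```swift
// Starting source image index.
kernel.offset.z = 2

// Start at destination image 0 and process 4 images.
var clip = kernel.clipRect
clip.origin.z = 0
clip.size.depth = 4
kernel.clipRect = clip

// The constraints described above must hold:
assert(2 + 4 <= source.numberOfImages)
assert(0 + 4 <= destination.numberOfImages)
```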
The destinationFeatureChannelOffset property controls where, in the feature channel dimension, the kernel starts writing. For example, if the destination has 64 channels and the kernel outputs 32 channels, channels 0-31 of the destination are populated by default. To have the kernel populate channels 32-63 instead, set the value of destinationFeatureChannelOffset to 32. Suppose you have a source of dimensions w x h x Ni, where Ni is the number of input channels, which feeds two convolution filters: C0, producing the output O0 = w x h x N0, and C1, producing the output O1 = w x h x N1, followed by concatenation, which produces O = w x h x (N0 + N1). You can achieve this by creating an MPSImage object with dimensions w x h x (N0 + N1) and using it as the destination of both convolutions, as follows:
C0: destinationFeatureChannelOffset = 0. This outputs N0 channels starting at channel 0 of the destination, populating channels [0, N0-1].
C1: destinationFeatureChannelOffset = N0. This outputs N1 channels starting at channel N0 of the destination, populating channels [N0, N0+N1-1].
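The concatenation pattern above can be sketched as follows, assuming `conv0` and `conv1` are MPSCNNConvolution kernels producing `n0` and `n1` output feature channels, and `concatDest` is an MPSImage with n0 + n1 feature channels:

```swift
// C0 writes its n0 output channels to destination channels [0, n0-1].
conv0.destinationFeatureChannelOffset = 0

// C1 writes its n1 output channels to destination channels [n0, n0+n1-1].
conv1.destinationFeatureChannelOffset = n0

// Both kernels encode into the same destination image, producing the
// concatenated result without a separate copy pass.
conv0.encode(commandBuffer: commandBuffer,
             sourceImage: source, destinationImage: concatDest)
conv1.encode(commandBuffer: commandBuffer,
             sourceImage: source, destinationImage: concatDest)
```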
Topics
Initializers
Instance Properties
offset: MPSOffset
clipRect: MTLRegion
destinationFeatureChannelOffset
edgeMode: MPSImageEdgeMode
kernelHeight
kernelWidth
strideInPixelsX
strideInPixelsY
isBackwards
padding: MPSNNPadding
destinationImageAllocator: MPSImageAllocator
dilationRateX
dilationRateY
isStateModified
sourceFeatureChannelMaxCount
sourceFeatureChannelOffset
Instance Methods
encode(commandBuffer:sourceImage:)
encode(commandBuffer:sourceImage:destinationImage:)
appendBatchBarrier()
batchEncodingStorageSize(sourceImage:sourceStates:destinationImage:)
destinationImageDescriptor(sourceImages:sourceStates:)
encode(commandBuffer:sourceImage:destinationState:destinationImage:)
encode(commandBuffer:sourceImage:destinationState:destinationStateIsTemporary:)
encodeBatch(commandBuffer:sourceImages:)
encodeBatch(commandBuffer:sourceImages:destinationImages:)
encodeBatch(commandBuffer:sourceImages:destinationStates:destinationImages:)
encodeBatch(commandBuffer:sourceImages:destinationStates:destinationStateIsTemporary:)
encodingStorageSize(sourceImage:sourceStates:destinationImage:)
isResultStateReusedAcrossBatch()
resultState(sourceImage:sourceStates:destinationImage:)
resultStateBatch(sourceImage:sourceStates:destinationImage:)
temporaryResultState(commandBuffer:sourceImage:sourceStates:destinationImage:)
temporaryResultStateBatch(commandBuffer:sourceImage:sourceStates:destinationImage:)