Convolutional Neural Network Kernels
Build neural networks with layers.
Overview
Think carefully about the edge mode requested for pooling layers. The default value is MPSImageEdgeMode.zero, but there are times when an MPSImageEdgeMode.clamp value may be better.
To avoid reading off the edge of an image for filters that have a filter area (convolution, pooling), set MPSCNNKernel.offset = (MPSOffset){ .x = kernelWidth/2, .y = kernelHeight/2, .z = 0 } and reduce the size of the output image by { kernelWidth-1, kernelHeight-1, 0 }. The filter area stretches up and to the left of the kernel offset by { kernelWidth/2, kernelHeight/2 }.
Always remember the following distinction:
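The offset and output-size arithmetic above can be sketched in plain Swift. The struct below merely stands in for MPSOffset, and the 5 × 5 kernel and 224 × 224 source are hypothetical values; in real code the computed offset would be assigned to MPSCNNKernel.offset:

```swift
// Stand-in for MPSOffset, used here so the sketch runs without Metal.
struct Offset { var x: Int; var y: Int; var z: Int }

func centeredOffset(kernelWidth: Int, kernelHeight: Int) -> Offset {
    // Integer division centers the filter area inside the source image.
    return Offset(x: kernelWidth / 2, y: kernelHeight / 2, z: 0)
}

func clippedOutputSize(sourceWidth: Int, sourceHeight: Int,
                       kernelWidth: Int, kernelHeight: Int) -> (width: Int, height: Int) {
    // Shrink the output so no filter tap ever reads past the source edge.
    return (sourceWidth - (kernelWidth - 1), sourceHeight - (kernelHeight - 1))
}

let offset = centeredOffset(kernelWidth: 5, kernelHeight: 5)         // (2, 2, 0)
let outSize = clippedOutputSize(sourceWidth: 224, sourceHeight: 224,
                                kernelWidth: 5, kernelHeight: 5)     // (220, 220)
```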
The MPSCNNConvolution class takes weights in the order weight[outputChannels][kernelHeight][kernelWidth][inputChannels/groups].
The MPSCNNFullyConnected class takes weights in the order weight[outputChannels][sourceWidth][sourceHeight][inputChannels].
Initialize MPSCNNKernel objects once and reuse them.
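As an illustration of the MPSCNNConvolution layout, the helper below computes the flat index of a single weight in that row-major order. The function name and parameters are illustrative only, not part of MPS:

```swift
// Flat index into weight[outputChannels][kernelHeight][kernelWidth][inputChannels/groups].
// o = output channel, ky/kx = tap position, c = input channel within the group.
func convolutionWeightIndex(o: Int, ky: Int, kx: Int, c: Int,
                            kernelHeight: Int, kernelWidth: Int,
                            inputChannelsPerGroup: Int) -> Int {
    return ((o * kernelHeight + ky) * kernelWidth + kx) * inputChannelsPerGroup + c
}

// Example: a hypothetical 3x3 convolution with 8 input channels (1 group).
// Output channel 1 starts one full filter (3 * 3 * 8 = 72 floats) into the buffer.
let idx = convolutionWeightIndex(o: 1, ky: 0, kx: 0, c: 0,
                                 kernelHeight: 3, kernelWidth: 3,
                                 inputChannelsPerGroup: 8)   // 72
```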
You can use MPSCNNNeuron objects and similar kernels to perform pre-processing of images, such as scaling and resizing.
Specify a neuron filter with an MPSCNNConvolutionDescriptor object to combine the convolution and neuron operations.
Use MPSTemporaryImage objects for intermediate images that live for a short period of time (one MTLCommandBuffer object).
MPSTemporaryImage objects can reduce the amount of memory used by the CNN severalfold. They similarly reduce the CPU time spent allocating storage, as well as the latency between when a command buffer is committed and when it is actually executed on the GPU.
You cannot read from or write to an MPSTemporaryImage object using the CPU. Generally, MPSTemporaryImage objects should be created as needed and discarded promptly; persistent objects should not retain them.
Be sure you understand the purpose of the readCount property.
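The readCount contract can be modeled in plain Swift. The toy class below is illustrative only and is not the real MPSTemporaryImage API: readCount is the number of reads the image may still serve, each read decrements it, and the backing storage is recycled when it reaches zero.

```swift
// Toy model of the readCount contract; not the real MPSTemporaryImage.
final class ToyTemporaryImage {
    var readCount: Int
    private(set) var storageAlive = true

    init(readCount: Int = 1) { self.readCount = readCount }

    func read() {
        // Reading after the storage has been recycled is a programming error.
        precondition(storageAlive, "storage already recycled")
        readCount -= 1
        if readCount == 0 { storageAlive = false }  // memory returns to the pool
    }
}

// An intermediate image consumed by two downstream kernels:
let intermediate = ToyTemporaryImage(readCount: 2)
intermediate.read()   // first consumer
intermediate.read()   // second consumer; storage is now recycled
```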
Because the Metal Performance Shaders framework encodes its work in place in your command buffer, you always have the option to insert your own code in between MPSCNNKernel encodings as a Metal function for tasks not covered by the framework. You do not need to use the Metal Performance Shaders framework for everything.
Topics
Arithmetic Layers
MPSCNNAdd
MPSCNNAddGradient
MPSCNNSubtract
MPSCNNSubtractGradient
MPSCNNMultiply
MPSCNNMultiplyGradient
MPSCNNDivide
MPSCNNArithmetic
MPSCNNArithmeticGradient
MPSCNNArithmeticGradientState
Convolution Layers
MPSCNNBinaryConvolution
MPSCNNConvolution
MPSCNNDepthWiseConvolutionDescriptor
MPSCNNSubPixelConvolutionDescriptor
MPSCNNConvolutionTranspose
MPSCNNConvolutionGradient
MPSCNNConvolutionGradientState
MPSImageSizeEncodingState
MPSCNNConvolutionWeightsAndBiasesState
Pooling Layers
MPSCNNPoolingAverage
MPSCNNPoolingAverageGradient
MPSCNNPoolingL2Norm
MPSCNNPoolingMax
MPSCNNDilatedPoolingMax
MPSCNNPooling
MPSCNNPoolingGradient
MPSCNNDilatedPoolingMaxGradient
MPSCNNPoolingL2NormGradient
MPSCNNPoolingMaxGradient
Fully Connected Layers
Neuron Layers
MPSCNNNeuronAbsolute
MPSCNNNeuronELU
MPSCNNNeuronHardSigmoid
MPSCNNNeuronLinear
MPSCNNNeuronPReLU
MPSCNNNeuronReLUN
MPSCNNNeuronReLU
MPSCNNNeuronSigmoid
MPSCNNNeuronSoftPlus
MPSCNNNeuronSoftSign
MPSCNNNeuronTanH
MPSCNNNeuron
MPSCNNNeuronExponential
MPSCNNNeuronGradient
MPSCNNNeuronLogarithm
MPSCNNNeuronPower
MPSNNNeuronDescriptor
Softmax Layers
Normalization Layers
MPSCNNCrossChannelNormalization
MPSCNNCrossChannelNormalizationGradient
MPSCNNLocalContrastNormalization
MPSCNNLocalContrastNormalizationGradient
MPSCNNSpatialNormalization
MPSCNNSpatialNormalizationGradient
MPSCNNBatchNormalization
MPSCNNBatchNormalizationGradient
MPSCNNBatchNormalizationState
MPSCNNNormalizationMeanAndVarianceState
MPSCNNBatchNormalizationStatistics
MPSCNNBatchNormalizationStatisticsGradient
MPSCNNInstanceNormalization
MPSCNNInstanceNormalizationGradient
MPSCNNInstanceNormalizationGradientState
MPSCNNNormalizationGammaAndBetaState
Upsampling Layers
MPSCNNUpsampling
MPSCNNUpsamplingBilinear
MPSCNNUpsamplingNearest
MPSCNNUpsamplingBilinearGradient
MPSCNNUpsamplingGradient
MPSCNNUpsamplingNearestGradient
Dropout Layers
Loss Layers
MPSCNNLoss
MPSCNNLossDataDescriptor
MPSCNNLossDescriptor
MPSCNNLossLabels
MPSCNNYOLOLoss
MPSCNNYOLOLossDescriptor
Reduction Layers
MPSNNReduceRowMax
MPSNNReduceRowMin
MPSNNReduceRowSum
MPSNNReduceRowMean
MPSNNReduceColumnMax
MPSNNReduceColumnMin
MPSNNReduceColumnSum
MPSNNReduceColumnMean
MPSNNReduceFeatureChannelsMax
MPSNNReduceFeatureChannelsMin
MPSNNReduceFeatureChannelsSum
MPSNNReduceFeatureChannelsMean
MPSNNReduceFeatureChannelsArgumentMax
MPSNNReduceFeatureChannelsArgumentMin
MPSNNReduceFeatureChannelsAndWeightsSum
MPSNNReduceFeatureChannelsAndWeightsMean
MPSNNReduceUnary
MPSNNReduceBinary
Reshape Layer
Slice Layer
Optimization Layers
MPSNNOptimizerAdam
MPSNNOptimizerRMSProp
MPSNNOptimizerStochasticGradientDescent
MPSNNOptimizer
MPSNNOptimizerDescriptor