AVDepthData

A container for per-pixel distance or disparity information captured by compatible camera devices.

Declaration

class AVDepthData

Overview

Depth data is a generic term for a map of per-pixel data containing depth-related information. A depth data object wraps a disparity or depth map and provides conversion methods, focus information, and camera calibration data to aid in using the map for rendering or computer vision tasks.

A depth map describes at each pixel the distance to an object, in meters.

A disparity map describes normalized shift values for use in comparing two images. The value for each pixel in the map is in units of 1/meters: disparity = pixelShift / (pixelFocalLength * baselineInMeters).
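Because disparity is expressed in 1/meters, depth and disparity are reciprocals of each other. The sketch below works through the formula above with illustrative numbers (the shift, focal length, and baseline values are made up, not taken from a real capture):

```swift
// Disparity, per the formula above: pixelShift / (pixelFocalLength * baselineInMeters).
// Units: pixels / (pixels * meters) = 1/meters.
func disparity(pixelShift: Float, pixelFocalLength: Float, baselineInMeters: Float) -> Float {
    return pixelShift / (pixelFocalLength * baselineInMeters)
}

// Depth in meters is the reciprocal of disparity.
func depthInMeters(fromDisparity disparity: Float) -> Float {
    return 1.0 / disparity
}

// Example: a 20-pixel shift, a 500-pixel focal length, and a 12 cm baseline.
let d = disparity(pixelShift: 20, pixelFocalLength: 500, baselineInMeters: 0.12)
// d = 20 / 60 ≈ 0.333 (1/m), so the point is about 3 meters away.
let meters = depthInMeters(fromDisparity: d)
```

This reciprocal relationship is why a single AVDepthData object can represent either a depth map or a disparity map and convert between them.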

The capture pipeline generates disparity or depth maps from camera images containing nonrectilinear data. Camera lenses have small imperfections that introduce small distortions in their resulting images relative to an ideal pinhole camera model, so AVDepthData maps contain nonrectilinear (nondistortion-corrected) data as well. The maps’ values are warped to match the lens distortion characteristics of the YUV image pixel buffers captured at the same time.

Because a depth data map is nonrectilinear, you can use an AVDepthData map as a proxy for depth when rendering effects to its accompanying image, but not to correlate points in 3D space. To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data.
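As a sketch of the rectification path described above, the calibration parameters can be read from the cameraCalibrationData property (which is optional, since availability depends on how the depth data was captured):

```swift
import AVFoundation

// A minimal sketch of inspecting calibration data before rectifying depth for
// computer vision use. Assumes `depthData` came from capture or a file.
func logCalibration(of depthData: AVDepthData) {
    guard let calibration = depthData.cameraCalibrationData else {
        return // No calibration data was delivered with this depth map.
    }
    let intrinsics = calibration.intrinsicMatrix              // 3x3 camera matrix, in pixels
    let referenceSize = calibration.intrinsicMatrixReferenceDimensions
    let distortionCenter = calibration.lensDistortionCenter
    print(intrinsics, referenceSize, distortionCenter)
    // calibration.lensDistortionLookupTable contains radial correction values
    // for undoing the lens warp before treating pixels as rays in 3D space.
}
```

Undistorting the map with these parameters is what turns the rendering-only proxy into geometry you can correlate with points in 3D space.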

There are two ways to capture depth data:

- Photo capture: enable depth data delivery on an AVCapturePhotoOutput, then read the depthData property of the resulting AVCapturePhoto.

- Streaming capture: add an AVCaptureDepthDataOutput to a capture session to receive depth maps as the camera produces them.

You can also create AVDepthData objects using information obtained from image files with the Image I/O framework.
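A sketch of the Image I/O path: read the auxiliary data dictionary from an image file and pass it to the throwing AVDepthData initializer. The file URL is a placeholder, and a production version would also try kCGImageAuxiliaryDataTypeDepth when no disparity data is present:

```swift
import AVFoundation
import ImageIO

// Loads depth data from an image file (for example, a Portrait-mode HEIC).
func loadDepthData(from url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    // Files may carry disparity or depth as auxiliary data; this sketch asks
    // for disparity only.
    guard let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any] else {
        return nil
    }
    // The initializer throws if the dictionary is malformed or incomplete.
    return try? AVDepthData(fromDictionaryRepresentation: auxInfo)
}
```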

When editing images containing depth information, use the methods listed in Transforming and Processing to generate derivative AVDepthData objects reflecting the edits that have been performed.
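For example, two common derivative transformations are converting between map formats and matching an orientation edit applied to the image. A minimal sketch, assuming `depthData` was obtained as described above (the `.right` orientation is an illustrative value):

```swift
import AVFoundation

// Derives an edited AVDepthData object from an existing one.
func normalized(_ depthData: AVDepthData) -> AVDepthData {
    // Convert to 32-bit depth so each pixel is a metric distance in meters.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    // Mirror a rotation that was applied to the accompanying image.
    return converted.applyingExifOrientation(.right)
}
```

Each call returns a new AVDepthData object; the original map is left unchanged.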

Topics

Creating depth data

Reading pixel depth information

Evaluating depth data

Transforming and processing

Using calibration data

See Also

Depth data capture