VideoProcessor
An object that performs offline analysis of video content.
Declaration

final class VideoProcessor

Overview
A video processor streamlines video-content analysis through frame-by-frame processing. Instead of requiring you to extract frames manually, the video processor manages the processing pipeline and delivers results through convenient async streams. You can attach multiple analysis requests to the same VideoProcessor instance, and they all operate on the same frames simultaneously. For example, you can provide a video and perform aesthetic scoring, face detection, and object recognition all at once, without processing the video multiple times.
let videoURL = // A local `URL` path to the video you want to process.
let videoProcessor = VideoProcessor(videoURL)

// Calculate the aesthetics score for each frame.
let aestheticsScoresRequest = CalculateImageAestheticsScoresRequest()

// Perform face detection on each frame.
let faceDetectionRequest = DetectFaceRectanglesRequest()

Processing every video frame provides the most accuracy, but can be computationally expensive and time-consuming. Before you begin video analysis, determine how many frames to process by choosing a VideoProcessor.Cadence. VideoProcessor.Cadence.timeInterval(_:) processes frames at regular intervals, which provides consistent sampling throughout the video’s duration.
do {
    let asset = AVURLAsset(url: videoURL)
    let totalDuration = try await asset.load(.duration).seconds
    let framesToEvaluate: Double = 100

    // Create a time interval that processes 100 frames.
    let interval = CMTime(
        seconds: totalDuration / framesToEvaluate,
        preferredTimescale: 600
    )
    let cadence = VideoProcessor.Cadence.timeInterval(interval)

    // Add the requests to get an `AsyncSequence` stream that provides access
    // to the observation results.
    let aestheticsScoreStream = try await videoProcessor.addRequest(aestheticsScoresRequest,
                                                                    cadence: cadence)
    let faceDetectionStream = try await videoProcessor.addRequest(faceDetectionRequest,
                                                                  cadence: cadence)

    // Start the analysis.
    videoProcessor.startAnalysis()
} catch {
    print("Error processing the video: \(error.localizedDescription)")
}

After you start processing a video, access the observations that the framework provides through an AsyncSequence stream. For example, the following code stores the timestamp and the aesthetics score of each frame:
var aestheticsResults: [CMTime: Float] = [:]

for try await observation in aestheticsScoreStream {
    if let timeRange = observation.timeRange {
        aestheticsResults[timeRange.start] = observation.overallScore
    }
}
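You can consume the face detection stream the same way. The following sketch assumes each face observation, like the aesthetics observation above, exposes a `timeRange`; it tallies how many faces the request reports at each timestamp:

var faceCounts: [CMTime: Int] = [:]

for try await observation in faceDetectionStream {
    // Record one detected face per observation at its starting timestamp.
    if let timeRange = observation.timeRange {
        faceCounts[timeRange.start, default: 0] += 1
    }
}

Because each stream is an independent AsyncSequence, you can iterate the aesthetics and face detection streams in separate concurrent tasks while the processor analyzes the video once.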