WWDC2016 Session 503

Transcript

[ Music ]
>> Good morning.
[ Applause ]
Welcome to our session
on Advances
in AVFoundation Playback.
My name is Sam Bushell.
Today we're going to talk
about some new enhancements
that we've added to try and
smooth over some rough edges
that some developers
have found challenging.
So AVFoundation provides APIs
for a very broad selection
of multimedia activities,
including playback, capture,
export, and many
kinds of editing.
I'll be focusing
mostly on playback.
AVFoundation supports playback
from a very wide selection
of media formats
from local storage.
And in most cases you
can take the same file,
and you can put it
on a web server
and then AVFoundation can
play that over the network.
The file format in
this case is the same,
but the IO is over the network.
We call this progressive
download playback.
Once we start downloading
that file,
even if the network
characteristics change,
we will continue
with the same file.
HTTP Live Streaming
is more dynamic.
Generally, the base URL
refers to a master playlist
which introduces multiple
playlists for the same content
but varying in bit rate and
format and maybe in language.
And each of these playlists
references segments containing
the actual compressed media.
So let's talk about what we're
going to talk about today.
We're going to discuss
the playback changes to do
with the pre-playback
buffering period.
We're going to introduce
a new API
to simplify looping
playback of a single file.
We're going to discuss some
playback refinements we've made
under the hood.
We're going to discuss
getting your application ready
for wide color video.
And then we'll spend the rest
of our time discussing a
popular topic: optimizing
startup time in playback apps.
Let's start by waiting
for the network.
Because when we play media
playback over the Internet,
we're at the mercy
of the network.
We don't want to start too
soon, or playback may stall.
We don't want to start too late
or the user may give up on us.
We want to start at
that Goldilocks moment
and start playback when we have
enough data that we'll be able
to play consistently
and not stall.
Here is the existing API.
AVPlayerItem provides
three Boolean properties.
playbackLikelyToKeepUp,
playbackBufferFull,
and playbackBufferEmpty.
playbackLikelyToKeepUp is true
if AVFoundation's algorithm
believes that if you were
to stop playing now, you could
keep on playing without stalling
until you got to the end.
playbackBufferFull is
true if the buffer holds
as much as it's going to.
So if you haven't
started playing back yet,
you might as well.
playbackBufferEmpty means
that you are stalling
or you're about to stall.
So for progressive download
playback in iOS 9 and earlier,
AVFoundation clients must
monitor these properties
themselves and wait until
playbackLikelyToKeepUp is true
or playbackBufferFull is true
before setting the AVPlayer's
rate property to 1.
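In code, that old contract looks something like this minimal sketch, written with today's block-based KVO for brevity, and assuming a `player` and `item` you've already created:

```swift
import AVFoundation

// A sketch of the pre-iOS 10 contract for progressive download:
// wait for likely-to-keep-up (or a full buffer) before playing.
let keepUpObserver = item.observe(\.isPlaybackLikelyToKeepUp) { item, _ in
    if item.isPlaybackLikelyToKeepUp {
        player.rate = 1.0  // safe to start playback now
    }
}
let fullObserver = item.observe(\.isPlaybackBufferFull) { item, _ in
    if item.isPlaybackBufferFull {
        player.rate = 1.0  // buffer holds as much as it ever will
    }
}
```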
For HTTP Live Streaming,
the rules are simpler.
You can set AVPlayer's
rate property to 1 as soon
as the user chooses to play,
and it will automatically wait
to buffer sufficient media
before playback begins.
We are streamlining the
default API contract
in the 2016 OS releases:
iOS, macOS, tvOS.
For apps linked on or after
iOS 10, macOS Sierra, tvOS 10,
the same rules for
HLS will also apply
to progressive download
playback.
When the user clicks play,
you can immediately set
AVPlayer's rate property to 1
or call the play method,
which is the same thing.
And AVFoundation will
automatically wait
to buffer enough
to avoid stalling.
If the network drops out during
playback and playback stalls,
the rate property
will stay set to 1.
And so it will again buffer
and automatically resume
when sufficiently buffered.
If you're using the AVKit
or MediaPlayer framework
to present your playback UI,
it already supports automatic
waiting for buffering,
and it will continue to.
If your application uses
AVFoundation directly
and you build your own
playback UI, you may need
to make some adjustments.
So what should we
call this new API?
Well, the word Autoplay has
been used in QTKit and also
in HTML 5, but we
came to the conclusion
that from the perspective
of this AVPlayer API,
the playback is not
the automatic part.
It's the waiting.
So the formal name for
this API is
automaticallyWaitsToMinimizeStalling.
But you can call it
Autoplay if you like.
Network playback now looks
like a state machine
with three states.
Paused, waiting, and playing.
We start in the paused state
until the user chooses to play.
And then the app calls play, and
we move to the waiting state.
When the playbackLikelyToKeepUp
property becomes true,
the player progresses
to the playing state.
Now, if the buffer
should become empty,
the player will switch
back to the waiting state
until we're likely
to keep up again.
Should the user pause, we'll
return to the paused state.
Now there's one further
transition available.
Recall that in iOS 9 and
earlier before this change,
you could call play before
playback was likely to keep up
and playback would
start immediately even
if it might stall.
So we preserved this semantic
by providing another method,
playImmediately(atRate:),
which jumps you straight
into the playing state
from either the paused
or the waiting states.
Be aware that this
may lead to a stall
that the patient waiting
state would avoid.
So be careful.
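In a sketch, assuming a player that already has an item, the two options look like this:

```swift
// Default: enters the waiting state until playback is likely to keep up.
player.play()

// Or start right now, accepting the risk of a stall:
player.playImmediately(atRate: 1.0)
```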
AVPlayer's rate property might
not mean what you thought
it meant.
Let's recap so everyone's clear.
The player's rate property
is the app's requested
playback rate.
Not to be confused with
the time-based rate
of the player item
which is the rate
at which playback is
actually occurring.
We've added two new
properties in this release
to give you more detail.
One is the timeControlStatus,
which tells you
which of these states you're
in, paused, waiting or playing.
And if you're in
the waiting state,
the reasonForWaitingToPlay
property tells you why.
For example, you could
be in the waiting state,
so the AVPlayer's rate
property could be 1.
The timebase's rate would be
0 because you're waiting.
The timeControlStatus
would say
waitingToPlayAtSpecifiedRate.
And the reasonForWaitingToPlay
could be
WaitingToMinimizeStallsReason.
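A sketch of watching these new properties to drive your playback UI, again using block-based KVO for brevity and assuming a `player`:

```swift
let statusObserver = player.observe(\.timeControlStatus) { player, _ in
    switch player.timeControlStatus {
    case .paused:
        break // show the play button
    case .waitingToPlayAtSpecifiedRate:
        // show a spinner; player.reasonForWaitingToPlay says why,
        // e.g. .toMinimizeStalls while buffering
        break
    case .playing:
        break // show the pause button
    @unknown default:
        break
    }
}
```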
So with that background,
I'd like to introduce my
friend Moritz Wittenhagen
who is much braver
than me, as he is going
to attempt a network
playback demo live on stage.
So everyone cross your
fingers and give him a hand.
[ Applause ]
>> Well, good morning everyone.
I want to start by
showing you a little bit
of the setup we have
on stage here.
And I have my iPad which
you can see mirrored
on the screen there.
And that iPad is
joining a network
that is hosted by my Mac.
And what that allows me to do
is I can use the network link
conditioner to actually
limit the network connection
that this iPad has available.
You can do that using the
Network Link Conditioner
preference pane.
Sam will tell you in a
minute where to find that.
And I've set up a profile called
Slow Server that limits this
to a mediocre network connection
that's a little slower
than the media bitrate that
we actually want to play.
It's currently turned off.
And we'll leave it off, and
let's look at what the iPad does
in a decent network situation.
So what I have here
is just a selection,
and I can just select one video.
Let me do that.
And what you see is that
the video immediately loads,
and we see that we're
currently not playing.
You see this wonderful
engineering UI underneath
that gives us all the properties
and functionality involved
in automatic waiting.
This is really just taken from
AVPlayer and AVPlayerItem.
So these are the properties that
you have available if you need
to know what automatic
waiting is doing.
So right now we are paused,
so the rates are all zero.
Current time is at zero.
But the interesting thing is
since we're in a fast network,
we've loaded 39 seconds
of the video,
which is actually
the whole video.
And we're currently
likely to keep up.
What that means is that
when I just hit play now,
the video just starts
playing without any problem.
Now we wanted to
see what happens
in a bad network situation.
So let's turn on the network
link conditioner on the Mac.
Here we go.
And now not much
changed for this video.
Because as I said, it
was already buffered.
It had already buffered
the whole video.
So when I go back
and load this again,
I want you to pay
attention to loadedTimeRanges
and isPlaybackLikelyToKeepUp
again.
So let's do it.
Reload the video.
And now what we see is
that the loadedTimeRanges
only slowly increase.
And isPlaybackLikelyToKeepUp
is false.
Eventually it will become true.
And at that moment we're at the
same state we were in before,
where we're now ready to play,
and playback will just start.
Now let's try this
one more time,
and this time I will hit play
right after I loaded the video.
So this time we don't
have enough data,
and we go into this
waiting state.
And you see the spinner
telling the user
that playback is waiting.
Eventually we will
become ready to play
and playback just starts.
There's one more
thing we can do.
And that is immediate playback.
So let's also try this.
I go into the video
and immediately click
play immediately.
And we see that playback
starts but then we quickly run
into a stall because we
didn't have enough buffer
to play to the end.
In that case, we'll go into
the waiting state and re-buffer
until we have enough
to play through.
And with that, it was a short
demo of automatic waiting.
Let's go back to Sam and the slides.
[ Applause ]
>> Thanks, Moritz.
Let's recap what was
happening in the middle there.
So when we set a slower network
speed, close to the data rate
of the movie, the movie
started out paused.
When he hit play, it went
into the waiting state.
Because playback was not
yet likely to keep up.
Notice that at this time,
the player's rate was 1,
but the timebase rate was 0.
After a few seconds,
AVFoundation determined
that playback was
likely to keep up
and so it set the
timeControlStatus to
playing, and now you see
that the player rate and the
timebase rate are both 1.
It may have occurred to you
that there's a little
bit more detail available
in the timeControlStatus than
in the player's rate property.
Remember the player's rate
property tells you the app's
desired playback rate.
The timeControlStatus also takes
into account what's
actually happening.
So that might be something
you want to take into account
when you build a playback UI.
In case you want to try this at
home, you might like to know how
to find the network
link conditioner.
It's not something we
invented in my time at least.
It is part of the
Hardware IO Tools download.
To get it, the easiest way
is to follow Xcode's menu
to More Developer Tools.
And after you log in, you'll
find it somewhere like here.
Okay, so if your app links on
or after the 2016 SDKs,
it will act as though
you had set this property,
automaticallyWaitsToMinimizeStalling,
to true.
You can set that property
to false if you want
to go back to the old behavior.
And there's a few reasons why
you might want to do this.
In particular, if you use the
setRate(_:time:atHostTime:) method
to synchronize playback
with an external timeline,
then you must opt out by setting the
automaticallyWaitsToMinimizeStalling
property to false.
Otherwise, you will meet
a friendly exception,
your helpful reminder.
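A minimal sketch of that opt-out, with an illustrative start time half a second out on the host clock (the `player` is assumed):

```swift
import CoreMedia

player.automaticallyWaitsToMinimizeStalling = false
// Start playback half a second from now on the host clock.
let startTime = CMTimeAdd(CMClockGetTime(CMClockGetHostTimeClock()),
                          CMTimeMake(value: 1, timescale: 2))
player.setRate(1.0, time: .invalid, atHostTime: startTime)
```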
Finally, a reminder: never
use the player's rate
to extrapolate current
time out into the future.
If you want to do that, use
the item's timebase rate
for that instead.
Or use the other APIs
in the timebase object.
That's what they're for.
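For example, a sketch of reading actual progress from the item's timebase instead:

```swift
import CoreMedia

if let timebase = player.currentItem?.timebase {
    let actualTime = CMTimebaseGetTime(timebase)  // where playback really is
    let actualRate = CMTimebaseGetRate(timebase)  // 0.0 while waiting, even if player.rate == 1
    print(actualTime.seconds, actualRate)
}
```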
All right, that's
it for buffering.
Let's move along to
the topic of looping.
I have a question for you.
What's the best way to loop
playback of a single item?
Well, one idea would
be to set up a listener
for the notification that fires
when playback has
reached the end.
And when you get called,
seek back to the
beginning and start again.
Well, this idea is a good start.
But unfortunately,
it will lead to a gap
between the playbacks
for two reasons.
The first reason is that
there will be latency due
to the time it takes
for the notification
to reach your program and for
your seek and play requests
to get back to the
playback system.
The second more significant
reason is the time needed
for prerolling.
It's not actually possible
to start media playback
instantaneously
without some preparation.
It's necessary to load
media data and decode some
of it before you can
actually start playing it out.
This process of filling
up the playback pipelines
before playback starts is
called preroll.
So what we'd like to
be able to do here is
to have AVFoundation
be in on the plan.
If AVFoundation knows
about player item B
early enough, then
it can begin prerolling
and decoding before item A
has finished playing out.
And so it can optimize the
transition from A to B.
If item B is super short, then
AVFoundation may even start work
on the transition to item C.
AVFoundation's tool for
achieving this is AVQueuePlayer.
AVQueuePlayer is a subclass of
AVPlayer, which has an array
of AVPlayer items
called the play queue.
The current item is the one in
the first position of the array.
Now you can use AVQueuePlayer
to optimize transitions
between items that are
different, but for the case
of looping, you can create
multiple AVPlayer items
from the same AVAsset.
This is just another
optimization,
since AVFoundation
does not have to load
and parse the media
file multiple times.
And just a reminder, the
play queue is not a playlist.
Please do not load
the next 10,000 items
that you think you might like
to play into the play queue.
That's not going
to be efficient.
The purpose of the play queue
is to provide information
about items to be
played in the near future
so that AVFoundation can
optimize transitions.
The design pattern when you want
to loop a single media
file indefinitely is
to make a small number of
AVPlayer items and put them
in the AVQueuePlayer's queue
with the actionAtItemEnd
property set to advance.
When playback reaches the end
of one item, it will be removed
from the play queue as playback
advances to the next one.
And when you get
the notification
that that has happened, you
can take that finished item,
set its current time back to
the start, and put it on the end
of the play queue to reuse it.
We call this pattern
the treadmill.
And you can implement the
treadmill pattern yourself
using AVQueuePlayer.
We have sample code to help.
The slightly tricky detail
is that you have to set
up key value observing to
watch when the item is removed
and then seek it
back to the start.
And then add it to the end
of the play queue again.
As you can see, in this code
we are deactivating our KVO
observer while we
change the play queue
to avoid any chance
of recursion.
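A minimal sketch of the treadmill along those lines; the notification-based plumbing and the three-item queue here are illustrative choices, not the only way to write it:

```swift
import AVFoundation

final class Treadmill: NSObject {
    let player = AVQueuePlayer()
    private var loopItems: [AVPlayerItem] = []

    init(asset: AVAsset) {
        super.init()
        // A few items from the same asset; three is an illustrative number.
        for _ in 0..<3 {
            let item = AVPlayerItem(asset: asset)
            loopItems.append(item)
            player.insert(item, after: nil)  // nil appends to the play queue
        }
        player.actionAtItemEnd = .advance
        NotificationCenter.default.addObserver(
            self, selector: #selector(itemDidPlayToEnd(_:)),
            name: .AVPlayerItemDidPlayToEndTime, object: nil)
    }

    @objc private func itemDidPlayToEnd(_ note: Notification) {
        guard let finished = note.object as? AVPlayerItem,
              loopItems.contains(finished) else { return }
        // Rewind the finished item and put it back on the end of the queue.
        finished.seek(to: .zero, completionHandler: nil)
        player.insert(finished, after: player.items().last)
    }
}
```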
So this is clearly doable.
It's just a little fiddly.
And the feedback
that we received was
that it would be awfully swell
if we could make this easier.
So we're introducing
AVPlayerLooper,
which implements the
treadmill pattern for you.
You give it an AVQueuePlayer.
[ Applause ]
You give it an AVQueuePlayer
and a template AVPlayerItem,
and it constructs a small number
of copies of that AVPlayerItem,
which it then cycles
through the play queue
until you tell it to stop.
Adopting AVPlayerLooper,
the code for the simple
case is really much simpler.
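A sketch of that adoption, assuming a local `url` to loop:

```swift
import AVFoundation

let queuePlayer = AVQueuePlayer()
let playerLooper = AVPlayerLooper(player: queuePlayer,
                                  templateItem: AVPlayerItem(url: url))
queuePlayer.play()
// Keep `playerLooper` strongly referenced for as long as looping
// should continue; call playerLooper.disableLooping() to stop.
```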
So I want to give
you a demo of this
on an iPad I have over here.
So here's a piece
of sample code.
Video Looper, I'm
going to launch that.
And I have added a media file
of my own here and we're going
to play it with AVPlayerLooper.
[ Music ]
Don't you feel mellow?
Okay, this is clearly looping,
and the code is pretty
much what I pointed out.
It's fairly simple.
This would be an appropriate
tool to use, for example,
if you have a tvOS app and you'd
like to loop background
video behind a title menu.
All right, let's
return to slides.
We've talked a bit
about how to loop.
I want to spend a
moment on what to loop.
Ideally, if you have both
audio and video tracks,
they should be precisely
the same length.
Why? Well, if the audio track
is longer, then that means
that near the end
there's a period of time
when audio should be
playing but video should not.
We have an empty
segment of video,
so what should the video do?
Should it go away?
Should you freeze on one frame?
Conversely, if the video track
is longer, then there's a period
of time when the audio
should be silent.
So when you build media assets
for looping, take the time
to make sure that the
track durations match up.
In QuickTime Movie files,
the track duration is
defined by the edit list.
Now if the media asset
to loop is not entirely
under your control,
another possibility is
that you could set the
AVPlayerItem's forwardPlaybackEndTime
to the length
of the shortest track.
This will have the effect
of trimming back the
other tracks to match.
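A sketch of that trim, assuming an `asset` whose tracks are already loaded:

```swift
let item = AVPlayerItem(asset: asset)
if let shortest = asset.tracks.map({ $0.timeRange.duration }).min() {
    item.forwardPlaybackEndTime = shortest  // trims longer tracks to match
}
```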
All right, next let's look at an
optimization that we've made
in the playback pipeline
that may have an impact
on your applications.
Suppose that we are currently
playing, and the list
of playing tracks changes.
For example, we could
change the audio language
from English to French.
Here I'll change the
subtitle language
from English to Spanish.
Or we could remove
the AVPlayerLayer
that was displaying the video.
Or we could add an AVPlayerLayer
and begin displaying video.
Well, in all of these
cases in iOS 9,
AVFoundation will
pause playback,
adjust the playback pipelines to
match the list of enabled tracks
and then resume playback.
In some cases, this
even causes video
to snap back to a key frame.
Well, I will say we have
received constructive feedback
from users and developers
about this.
And so I'm happy to
say that in iOS 10
and its other 2016 siblings,
these changes will no longer
cause playback to pause:
Adding or removing
the only AVPlayerLayer
on a playing AVPlayer,
changing the subtitle language
or the audio language
on a playing AVPlayer
or manually disabling
or enabling tracks.
We think that this
is an enhancement
for users and developers.
However, it's a significant
change in API behavior,
and so I would ask you please
take a look in the seeds and see
if it leads to any
complications in your apps.
If you find an issue with this
that looks like it's a bug
on our side, then
please provide feedback
by filing a bug using the
Apple Bug Reporter System.
And as always when filing a bug,
please try to give us
everything we need in order
to reproduce the
problem ourselves.
Our industry is undergoing a
transition to wider color gamuts
for digital photography
and digital video.
Many developers on iOS have
never had to deal with video
that wasn't using the standard
Recommendation 709 color space.
Since that's the standard
for high-definition video
and that's what we've been
shooting since the iPhone 4.
But wider gamut color
spaces are coming.
As you may have seen
with the newest iPad Pro
when running iOS 10, you can
capture and display photographs
in the P3 color space.
Some third party products are
capturing video in P3 also.
So I want to give you pointers
to the APIs you can adopt
to make your apps
wide color video aware.
But I need to give you a
little bit of background first.
In media files, color
space information is part
of the metadata of video tracks.
In QuickTime Movie files, it's
stored in sample descriptions.
Several codecs also store
it in codec-specific places.
There are three principal
parts to this information.
Color Primaries, which specify
what the 100 percent red,
100 percent green, and 100
percent blue colors are
and also the white point.
Transfer Characteristics,
which you may have heard
called gamma curves
or transfer function.
These define the mapping from
pixel values to light levels
and answer the question is that
a straight line or is it a curve
that gives you more
detail in the dark areas
where our eyes are
more sensitive.
And the YCbCr Matrix,
the coordinate transform
from the RGB space
into the space used
for efficient compression.
So up here I have some examples.
Now if you haven't heard of
it, Recommendation 709 is
like the video equivalent
of sRGB.
sRGB is actually based on Rec.
709. Wide color can be achieved
by using a different
set of color primaries.
The P3 color primaries specify
values for 100 percent red,
100 percent green,
and 100 percent blue
that are more vivid than
Recommendation 709's.
One more point I want to make.
In our APIs, we generally
represent these choices
through the use of enumerated
strings, since they're easier
to print and display and debug.
But in media files, these
are represented by numbers.
And these standard tag
numbers are defined
in an MPEG specification called
coding independent code points.
That sounds like a
paradox, doesn't it?
How can you be coding
independent code points?
Well, it's less than a
paradox if you read it
as Codec independent
code points.
The job of the spec is to
make sure that the assignment
of these tag numbers
is done in a manner
that is harmonious across all
codecs and file formats.
So the interpretation of
numbers will be the same
in QuickTime Movie,
MPEG-4, H.264, and so forth.
All right, with that background,
let's look at a few new APIs.
We have introduced a new media
characteristic that tells you
that a video track is tagged
with wider color primaries,
something wider than the Rec.
709 primaries.
If your app finds that
there is wide gamut video,
it might be appropriate
for your app to take steps
to preserve it, so it isn't
clamped back into the 709 space.
If not, it's actually generally
best to stay within Rec.
709 for processing.
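A sketch of that check, assuming the `asset`'s tracks are loaded:

```swift
let hasWideGamutVideo = asset.tracks(withMediaType: .video).contains {
    $0.hasMediaCharacteristic(.usesWideGamutColorSpace)
}
// If true, consider a pipeline that preserves the wider gamut.
```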
So you can specify a working
color space when you set
up an AVPlayerItemVideoOutput
or an AVAssetReaderOutput.
And you will then receive
buffers that have been converted
into that color space.
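For example, here's a sketch of requesting Rec. 709 as the working color space from an AVAssetReaderTrackOutput; the `videoTrack` and the pixel format chosen here are illustrative. AVPlayerItemVideoOutput gained a similar init(outputSettings:) in these releases.

```swift
import AVFoundation

let settings: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
    ]
]
let output = AVAssetReaderTrackOutput(track: videoTrack,
                                      outputSettings: settings)
```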
You can also specify a target
color space when setting
up an AVAssetWriterInput,
in which case the
source image buffers
that you provide
will be converted
into that color space
prior to compression.
With AVPlayerItemVideoOutput
or AVAssetReaderOutput
if you don't want image
buffers to be converted
into a common color space,
then you should set the
AVVideoAllowWideColorKey to true
and then you'll receive buffers
in their original color space.
This is effectively a promise
that whatever software receives
and processes those buffers,
whether it's ours or yours,
it will examine and honor
their color space tags.
There are analogous properties
for configuring video
compositions.
First, you can specify
a working color space
for entire video compositions.
Alternatively, if you have
a custom video compositor,
you may choose to make
it wide color aware.
You can declare that your custom
video compositor is wide color
aware and that it examines
and honors color space tags
on every single source
frame buffer
by implementing the optional
supportsWideColorSourceFrames
property and returning true.
Rounding it out with a reminder:
if you create picture
buffers manually, for example
using a pixel buffer
pool in Metal,
then you should explicitly
set the color space tags
on every buffer by
calling Core Video's APIs.
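A sketch of that tagging, assuming a `pixelBuffer` you created yourself; the P3 tags here are illustrative:

```swift
import CoreVideo

CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                      kCVImageBufferColorPrimaries_P3_D65, .shouldPropagate)
CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                      kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                      kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
```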
Most developers won't
need to do this.
In most cases when you're
using a color space aware API
for source buffers, that'll take
care of tagging them for you.
By popular request, I'm
going to spend the rest
of our time discussing
some best practices
for optimizing playback
startup time.
I'll talk about local
file playback first.
And then we'll move on
to HTTP Live Streaming.
Now some of these
optimization techniques may be
counterintuitive at first.
They require you
to consider things
from the perspective
of AVFoundation.
And to think about when it
gets the information it needs
to do what your app
is asking it to do.
For example, here is a
straightforward piece of code
for setting up playback
of a local file.
We start with the
URL to the file.
We create an AVURLAsset
representing the media
in that file.
We then create an AVPlayerItem
to hold the mutable state
for playback, and an AVPlayer
to host playback.
And then we create
an AVPlayerLayer
to connect video playback
into our display hierarchy.
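That sequence, as a sketch, assuming a local `url` and a `view` to display in:

```swift
import AVFoundation

let asset = AVURLAsset(url: url)
let item = AVPlayerItem(asset: asset)
let player = AVPlayer(playerItem: item)    // pipeline setup begins here...
let layer = AVPlayerLayer(player: player)  // ...before video is known about
view.layer.addSublayer(layer)
```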
Now this code is correct,
but it has a small flaw,
which you may
not initially see.
As soon as the player
item is set
as the player's current item,
the player starts setting
up the playback pipeline.
Now it doesn't know the future.
It doesn't know that
you're going
to set an AVPlayerLayer later.
So it sets things up
for audio only playback.
And then when the AVPlayerLayer
is added, now AVFoundation knows
that the video needs
to be decoded too.
And so now it can
reconfigure things
for audio and video playback.
Now, as I said earlier,
we have made enhancements
in this year's OS releases
to mean that minor changes
to the list of enabled tracks do
not necessarily cause
an interruption.
But it's still ideal to
start with the information
that AVFoundation needs in order
to get things right first time.
So I'm going to change
this code a little bit.
I'm going to move where the
AVPlayerItem is connected
to the AVPlayer.
So now the player is
created with no current item,
which means it has no reason to
build playback pipelines yet.
And that doesn't change when
you add the AVPlayerLayer.
Playback pipelines
don't get built
until the player item
becomes the current item.
And by that point, the
player knows what it needs
to get things right first time.
We can generalize this.
First, create the AVPlayer
and AVPlayerItem objects.
And set whatever
properties you need
to on them including connecting
the AVPlayer to an AVPlayerLayer
or an AVPlayerItem to an
AVPlayerItemVideoOutput.
Now this might seem crazy,
but if you just want playback
to start right away,
you can tell the player
to play before you give
it the item to play.
Why would you do this?
Well, if you do it
the other way around,
the player initially
thinks that you wanted
to display the still frame
at the start of the video.
And it might waste some time on
that before it gets the message
that actually you
just want playback.
Again, starting with the
actual goal may shave off a
few milliseconds.
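Putting all of that together, a sketch of the reordered setup, with the same assumed `url` and `view`:

```swift
import AVFoundation

let asset = AVURLAsset(url: url)
let item = AVPlayerItem(asset: asset)
let player = AVPlayer()                // no current item, no pipelines yet
let layer = AVPlayerLayer(player: player)
view.layer.addSublayer(layer)
player.play()                          // we want playback, not a poster frame
player.replaceCurrentItem(with: item)  // pipelines get built right, first time
```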
Let's move on to HLS.
The timeframes we're trying to
optimize with HLS are longer
because they're dominated by
network IO, which is much slower
than local file storage.
So the potential benefits
of optimizations are
much more noticeable.
The network IO breaks
down into four pieces:
retrieving the master playlist,
that's the URL you passed
to AVURLAsset;
if the content is protected
with FairPlay Streaming,
retrieving content keys;
retrieving the selected
variant playlists
for the appropriate bitrate
and format of video and audio;
and retrieving some
media segments
that are referenced
in that playlist.
Now the media segments
will be the highest amount
of actual data transfer but
with network IO we need to think
about round-trip latency.
Some of these stages
are serialized.
You can't download
things from playlist
until you've received
the playlist.
So a thing to think about
then is can we do any
of these things before
the user chooses to play?
For example, maybe in your
app you display a title card
when content is first selected,
and that gets the user to say,
is this actually the
one I wanted to play?
Or do I want to read some
information about it.
So the question is, could
we do some small amount
of network IO speculatively
when the user has identified
the content they probably want
to play before they
make it official?
Well, AVURLAsset is a lazy API.
It doesn't begin loading
or parsing any data
until someone asks it to.
To trigger it to load data
from the master playlist,
we need to ask it to load a
value that would derive from it
like duration or
availableMediaCharacteristicsWithMediaSelectionOptions.
Duration is easy to type.
You don't have to provide a
completion handler here unless
you're actually going to do
something with that value.
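A sketch of that speculative kickoff, assuming a `streamURL` you've identified from the title card:

```swift
import AVFoundation

// Kick off the master playlist fetch while the title card is up.
let asset = AVURLAsset(url: streamURL)
asset.loadValuesAsynchronously(forKeys: ["duration"], completionHandler: nil)
```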
Speaking of playlists, they
compress really easily,
and we've supported compressing
them with gzip for many years.
So make sure you're doing that.
Possibly it's just a matter
of configuring your server.
If your content is protected
using FairPlay Streaming,
then there's round-trip involved
in negotiating content
keys with your server.
And you can trigger
that to happen sooner
by setting the
preloadsEligibleContentKeys
property of the
asset.resourceLoader to true.
For this to work, the
master playlist must contain
SESSION-KEY declarations.
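The opt-in itself is one line, sketched here on the same assumed asset:

```swift
asset.resourceLoader.preloadsEligibleContentKeys = true
```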
So how are we doing so far?
With these techniques,
we can get the master playlist
and the content keys downloaded,
while we're still
on the title card.
Now that's pretty cool.
The variant playlists
and the segments
of data will still
load after we hit play.
So you might be asking
yourselves,
can we push this
technique even further?
Well, there is a new
API in 2016 called
preferredForwardBufferDuration.
Setting it to something low
like five seconds will
buffer the minimum amount
that AVFoundation thinks
you need to get started.
But once playback begins,
set the override back to zero
to allow normal buffering
algorithms to take over again.
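A sketch of that handoff, assuming the `item` and `player` from earlier:

```swift
item.preferredForwardBufferDuration = 5  // buffer just enough to get started
player.play()
// ...then, once timeControlStatus becomes .playing:
item.preferredForwardBufferDuration = 0  // back to the normal algorithms
```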
Here's a list of video variants
that might be in
a master playlist.
They vary in dimensions
and bitrate.
For an Apple TV on a fast
connection with a big TV,
the 1080p variant
might be ideal.
For an iPhone SE, even with
a superfast Wi-Fi connection,
the 720p variant
is the best choice.
It's already higher resolution
than the iPhone SE screen,
so going bigger probably
won't improve quality.
On a giant iPad Pro, there are
a lot of pixels, so we could go
up to a big variant
for full screen.
But if we play
picture-in-picture,
we don't need such a
high resolution anymore.
And a lower bitrate variant
could reduce the size
of our cache and help us
make more memory available
for other apps.
If the network connection
is slow on any device,
then that's going to
be the limiting factor.
So what this means is that
AVFoundation needs to take
into account both the
display dimensions
and the network bitrate
when choosing the variant.
AVFoundation uses the
AVPlayerLayer size on the screen
to evaluate the dimensions.
So set up your AVPlayerLayer at
the correct size and connect it
to the AVPlayer as
early as you can.
It can be hidden behind other UI
if you're not ready
to show video yet.
On a retina iOS device,
it's currently necessary
to set contentsScale manually.
As for bitrate, well
AVFoundation is in a bit
of a chicken and egg
situation when it comes
to playback first beginning.
It has to choose some variant,
but it does not know what
bitrate it's going to get.
Once it's begun downloading
segments,
it can use the statistics
from those downloads
to adjust the choice of variant.
But for that first variant,
it hasn't gathered
any statistics yet.
So AVFoundation's
base algorithm is
to pick the first applicable
variant in the master playlist.
If that's a low bitrate
option, the user will start
out seeing something blurry,
but AVFoundation will soon
decide what the actual network
bitrate is and switch up
to the appropriate variant.
Well, the question is, what
if you would like to try
to improve that initial choice?
Well, remember, there is a
tradeoff you have to make
between initial quality
and startup time.
A higher bitrate first segment
takes longer to download.
And that means it will
take longer to start.
You might decide that
it's best to start
with a lower bitrate variant
in order to start faster.
Well one way to make the
tradeoff is to figure
out a minimum acceptable
quality level you'd like to see
on a particular size of
screen and start there.
Then AVFoundation will
switch to a higher quality
after playback begins
as the network allows.
And maybe you know one thing
that AVFoundation doesn't.
Maybe your app just played a
different piece of content.
And maybe you can use
that playback's access log
to make a better guess
about the bitrate
that the next playback
session is going to get.
So let's suppose that you come
up with a heuristic based
on startup quality and
recent bitrate statistics.
And you decide on
a way to choose
which variant you
want to start with.
Well, how do we plug that
choice into AVFoundation?
There are two techniques
that have been used.
Here's the first technique.
On the server, you have
to sort your variants
from highest to lowest.
Like that.
And then in your app, you need
to set the player item's
preferredPeakBitRate
to your bitrate guess.
This will eliminate the
higher bitrate variants
from initial selection.
Shortly after playback
starts, you should reset
that control back to zero, which
will allow AVFoundation to move
up to a higher bitrate variant
if the network improves.
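A sketch of that first technique, with an illustrative cap in bits per second, assuming the `item` and `player` from earlier:

```swift
item.preferredPeakBitRate = 2_000_000  // exclude the higher bitrate variants
player.replaceCurrentItem(with: item)
player.play()
// ...shortly after playback starts:
item.preferredPeakBitRate = 0          // lift the cap; normal switching resumes
```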
The second technique is
to dynamically rewrite the
master playlist in your app
and move your preferred
choice to the top of the list.
To do this, use a custom URL
scheme for the AVURLAsset
and implement the
AVAssetResourceLoaderDelegate
protocol in which you can
supply that rewritten playlist
in response to the load request
for that custom URL scheme.
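A sketch of that second technique, assuming a made-up "rewritten-hls" scheme and a hypothetical rewriteMasterPlaylist helper that reorders the variant entries:

```swift
import AVFoundation

final class PlaylistRewriter: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource
                        loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url,
              url.scheme == "rewritten-hls",
              var real = URLComponents(url: url, resolvingAgainstBaseURL: false)
        else { return false }
        real.scheme = "https"  // fetch the actual master playlist
        URLSession.shared.dataTask(with: real.url!) { data, _, error in
            if let data = data {
                loadingRequest.dataRequest?.respond(with: rewriteMasterPlaylist(data))
                loadingRequest.finishLoading()
            } else {
                loadingRequest.finishLoading(with: error)
            }
        }.resume()
        return true
    }
}

// Hypothetical helper: move the preferred variant to the top of the playlist.
func rewriteMasterPlaylist(_ playlist: Data) -> Data {
    // ...reorder the #EXT-X-STREAM-INF entries here...
    return playlist
}
```

You'd install the delegate with asset.resourceLoader.setDelegate(_:queue:) on an AVURLAsset whose URL uses that custom scheme.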
I want to remind you to
profile your code too.
Look for any delays before
you call AVFoundation.
In particular, you do not need
to wait for likelyToKeepUp
to become true before
you set the player rate.
You don't need to now, and in
fact, you never have for HLS.
Make sure that you release
AVPlayers and AVPlayerItems
from old playback sessions
so that they do not waste
bandwidth in the background.
You can use the Allocations
Instrument in Instruments
to check the lifespans
of AVPlayer
and AVPlayerItem objects.
And if you have an application
that does other network
activity,
consider whether you should
suspend it during network
playback so that the user
can take full advantage
of available bandwidth
for playback.
All right, in conclusion,
automaticallyWaitsToMinimizeStalling:
Autoplay, Autowait.
It's set to true by default
if your app is linked on
or after this year's SDKs.
And it provides uniform
buffering rules
for progressive download
and HLS playback.
We've introduced a new
API called AVPlayerLooper
to simplify using
the treadmill pattern
to loop playback
of a single item.
Changing the set of enabled
tracks during playback no longer
always causes a brief pause.
And we've looked at
the AVFoundation APIs
that you can use to prepare
your app for wide color video.
Finally, we talked about
optimizing playback startup
for local files and for HLS.
In short, avoid accidentally
asking for work you don't need.
And for the work you do need,
see if you can do it earlier.
We'll have more information at
this URL about this session,
including sample code
that we've shown.
We have some related sessions
that you might like to see
in person or catch up online.
The bottom one is an
on-demand only one
that you can watch in the app.
Thank you for your attention.
It's been a pleasure.
I hope you have a great week.