WWDC2017 Session 509

Transcript

>> Hello, good afternoon and
welcome to session 509,
Introducing AirPlay 2: Unlocking
Multi-Room Audio.
My name is David Saracino, I'm a
member of the AirPlay
engineering team, and I'm very
excited to be here this
afternoon to talk to you about
the many new features we've
added to AirPlay audio over
the last year, and how you can
adopt them in your app.
But before we get into that
adoption and how you can build
this into your app, let's first
take a quick survey of AirPlay
today, and where we're taking it
with AirPlay 2.
So, with AirPlay today, you can
wirelessly send your
screen, audio, or video content
from just about any Apple device
to an Apple TV or AirPlay
speaker.
Over the past year, we've added
so many features to the audio
domain that that's where we're
going to spend the majority of
our talk.
And because we've added so
many, we're introducing them
together as a new feature
called AirPlay 2.
So, what is AirPlay 2?
Well with AirPlay 2, you can
still wirelessly send the audio
content from your app to an
AirPlay speaker.
But additionally, with AirPlay
2, you can send that content to
multiple AirPlay 2 speakers with
very tight sync.
Additionally, in AirPlay 2,
we've enhanced the audio
buffering on the AirPlay 2
speakers so that your content
can play with a high degree of
robustness, reliability, and
responsiveness.
And lastly, we're introducing
multi-device control.
And this will allow multiple
Apple devices in your home to
interact with the audio that's
being streamed throughout your
house.
So, where is AirPlay 2
supported?
Well I'm happy to say if you
have an iOS, tvOS, or macOS app,
you can take the steps that I'm
about to outline today and build
them into your app to take
advantage of AirPlay 2.
And when you do, your app will
be able to play on a very wide
ecosystem of AirPlay 2 speakers,
including the HomePod, latest
generation Apple TV, and
third-party AirPlay 2 speakers
that will soon be coming to
market.
So, that's the high-level
overview of AirPlay and AirPlay
2.
Let's look at our agenda for the
rest of the talk.
First, we're going to talk about
basic AirPlay 2 adoption.
The steps that you need to
take in order to get AirPlay 2
inside your app.
Next, we're going to focus on
some advanced playback scenarios
that you may encounter as you
adopt AirPlay 2.
And lastly, we're going to talk
about the availability of
AirPlay 2.
So, let's begin our talk about
AirPlay 2 adoption.
So, in order to adopt AirPlay 2
in your app, there are basically
four steps you need to take.
First, you should identify your
app as presenting long-form
audio.
I'll explain what long-form
audio is in a minute.
Next, you should add an AirPlay
picker inside your app.
And third, you should integrate
with certain parts of the
MediaPlayer framework.
And lastly, you should adopt one
of the playback APIs that are
specifically designed to take
advantage of the enhanced
buffering in AirPlay 2.
So, let's walk through each of
these individually.
First, identifying yourself as
presenting long-form audio
content.
So, what is long-form audio
content?
Well, long-form audio is content
like music, podcasts, or
audiobooks, as
distinct from things like system
sounds on a Mac.
Now, identifying yourself as
presenting long-form audio
content is very easy.
You simply set your
application's AVAudioSession's
route sharing policy to be
long-form.
Now here's a code snippet of how
you do that on iOS.
This is something that many of
you iOS developers are likely
familiar with, setting your
category and mode, and there's
just a new parameter here, which
is routeSharingPolicy:
.longForm.
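In code, that snippet looks something like this minimal sketch, using the API as it was spelled at the time of this session:

    import AVFoundation

    // A minimal sketch using the API as shown in this session; in later SDKs the
    // call became setCategory(_:mode:policy:options:) and the policy was renamed
    // .longFormAudio, so check AVAudioSession for your deployment target.
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayback,
                                mode: AVAudioSessionModeDefault,
                                routeSharingPolicy: .longForm,
                                options: [])
        try session.setActive(true)
    } catch {
        print("Failed to configure the audio session: \(error)")
    }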
As I said, this has been around
on iOS for a little while.
This class, AVAudioSession, is
actually new altogether
on macOS, and I'm not going to
show a code snippet, but it's
even simpler than what you have
to do on iOS.
You simply set the route sharing
policy to long-form.
And for more information on this
new route sharing policy and
what it can do for your app in
regards to the rest of the
system, for example allowing the
user to take a phone call while
your app is using AirPlay 2 to
stream to speakers,
I encourage you to take a look
at this session from earlier in
the week called What's New in
Audio.
All right, so that's the first
step you need to do to take
advantage of AirPlay 2.
The second is to add an AirPlay
picker inside your app.
This will allow users to route
their content to AirPlay
speakers without leaving the
confines of your app.
And doing this is very easy.
You simply adopt a new API
called AVKit's
AVRoutePickerView.
You simply add this view to your
view hierarchy.
And you may want to control when
this is shown, for example only
show it if there actually is an
AirPlay 2 speaker or an AirPlay
speaker available to be routed
to.
And you can do that by adopting
AVFoundation's AVRouteDetector
which will tell you whether
there are routes
available.
These APIs are new in the
latest OS releases, and they're
available on macOS, iOS, and
tvOS.
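Wiring those two classes up might look something like this minimal sketch on iOS; the view controller and layout details here are illustrative, not from the session:

    import UIKit
    import AVKit
    import AVFoundation

    // A minimal sketch of adding the AirPlay picker and showing it only when
    // a route is available.
    class PlayerViewController: UIViewController {
        let routePickerView = AVRoutePickerView()
        let routeDetector = AVRouteDetector()
        private var routesObserver: NSObjectProtocol?

        override func viewDidLoad() {
            super.viewDidLoad()

            // Add the system-provided AirPlay picker to the view hierarchy.
            routePickerView.frame = CGRect(x: 20, y: 40, width: 44, height: 44)
            view.addSubview(routePickerView)

            // Only show the picker when there is actually a route to pick.
            routeDetector.isRouteDetectionEnabled = true
            routesObserver = NotificationCenter.default.addObserver(
                forName: .AVRouteDetectorMultipleRoutesDetectedDidChange,
                object: routeDetector,
                queue: .main) { [weak self] _ in
                    guard let self = self else { return }
                    self.routePickerView.isHidden =
                        !self.routeDetector.multipleRoutesDetected
            }
        }
    }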
And a note to all of you iOS
developers who are currently
using MPVolumeView, it will
still work, but we are
encouraging people to move to
these APIs instead.
So, that's the second thing you
need to do to get your app up
and running with AirPlay 2.
The third thing you should do is
to integrate your app with
certain parts of the MediaPlayer
framework.
Now this framework is rather
large, so there are basically
two things that I'm specifically
focusing on today.
These APIs will allow you to do
things like display the
currently playing album art on
the lock screen or on an Apple TV
that's being AirPlayed to.
And additionally, they will
allow you to receive playback
commands and pause commands from
other devices on the network or
from other accessories like
headphones.
And there are two pieces of API
that we are asking you to
adopt.
The first is
MPRemoteCommandCenter and this
will allow you to receive those
remote commands, and the second
is MPNowPlayingInfoCenter which
will allow you to inform the
system of metadata about the
currently playing track.
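A minimal sketch of those two pieces might look like this; the play and pause closures and the placeholder metadata stand in for your app's own playback engine and track info:

    import MediaPlayer

    // Receive remote commands from the lock screen, Control Center, headphones,
    // and other devices on the network.
    func configureRemoteCommands(play: @escaping () -> Void,
                                 pause: @escaping () -> Void) {
        let commandCenter = MPRemoteCommandCenter.shared()
        commandCenter.playCommand.addTarget { _ in
            play()
            return .success
        }
        commandCenter.pauseCommand.addTarget { _ in
            pause()
            return .success
        }
    }

    // Inform the system of metadata about the currently playing track so album
    // art and titles show up on the lock screen or on an Apple TV.
    func updateNowPlayingInfo() {
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: "Track Title",
            MPMediaItemPropertyArtist: "Artist Name",
            MPMediaItemPropertyPlaybackDuration: 240.0,
            MPNowPlayingInfoPropertyElapsedPlaybackTime: 0.0,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0
        ]
    }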
All right, so that's the third
thing you need to do to get your
app up and running with AirPlay
2.
The fourth thing is you should
adopt one of the playback APIs
that are specifically designed
to take advantage of enhanced
buffering in AirPlay 2.
And to motivate this
conversation, let's take a
little bit deeper look at AirPlay
audio and its buffer levels,
beginning with how AirPlay works
today.
So, AirPlay today is effectively
a real-time stream of audio to
which the speaker adds just a
few seconds of buffering before
playing it out in real-time.
Now, this works very well with a
single speaker, and it works
pretty well with lots of
content.
But if we restrict ourselves to
focusing only on long-form
content, like music, podcasts,
or audiobooks, we can do
better.
So, let's talk about the
enhanced buffering in AirPlay 2.
So, what am I talking about with
enhanced buffering in AirPlay 2?
Well, as the name implies, we've
added a very large buffer capacity
on the AirPlay 2 speakers.
Here I want you to think
minutes, not seconds.
Additionally, we're able to
stream the audio content from
your app to the speaker
faster than real time.
And I think the benefits of this
should be clear.
The large buffering on the
AirPlay speakers will add a
large degree of robustness to
the AirPlay session.
You should be able to survive
more typical network glitches,
like taking the trash
out, walking to a dead spot
in the house, or even
microwaving some popcorn.
Additionally, this should
provide a more responsive playback
experience.
The real-time nature of existing
AirPlay means that there's a
fixed output latency that is
related to the buffer level on
the AirPlay speaker.
So, when you hit play, with
AirPlay 2 it should play a lot
faster.
When you hit skip, it should
skip a lot faster.
This is a great experience for
your users, which is why we're
really
excited to talk about the
enhanced buffering.
And so in order to take
advantage of the enhanced
buffering, you should adopt one
of a few APIs.
The first is AVPlayer and
AVQueuePlayer, the AVPlayer
set of playback APIs.
These have been around for quite
a while and they're your easiest
route to AirPlay 2 and the
enhanced buffering.
The second set of APIs are new
APIs that we're introducing
today, which are
AVSampleBufferAudioRenderer and
AVSampleBufferRenderSynchronizer.
These present more flexibility
to apps that need it.
So, let's talk about these
individually by first surveying
AVPlayer and AVQueuePlayer.
So, as I've said these have been
around for quite a while, since
iOS 4 on the iOS platforms.
And because of that there's a
ton of documentation on the
developer website and a lot of
sample code there.
So, if you want a lot of detail
on these, I encourage you to
look to the developer website,
because I'm just going to give a
very high-level overview of
these API today.
So, what I'm going to do, is I'm
going to walk through what it
takes to build an app with
these APIs.
So, here you have your Client
App, and if you want to build an
app with an AVPlayer or
AVQueuePlayer, the first thing
you're going to do is you're
going to instantiate one of these
objects.
Here I'm using an AVQueuePlayer
but the steps are exactly the
same if you use AVPlayer.
Next, you're going to take a URL
that points to the content that
you want to play, and that
content can be local or it can
be in the cloud, it can be
remote.
You're going to take that URL,
and you're going to wrap it in
an AVAsset and you're going to
wrap that AVAsset in an
AVPlayerItem.
Next, you're going to give that
AVPlayerItem to the
AVQueuePlayer.
Now after you've handed it over,
you're ready to initiate
playback.
And you do so by simply setting
a rate of 1 on the
AVQueuePlayer, which in response
will begin downloading the audio
data from wherever it resides,
and then playing it out of the
speakers.
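As a sketch, those steps might look like this in Swift, assuming a hypothetical content URL:

    import AVFoundation

    // A sketch of the steps just described.
    let player = AVQueuePlayer()

    let url = URL(string: "https://example.com/track.m4a")!
    let asset = AVURLAsset(url: url)        // wrap the URL in an AVAsset
    let item = AVPlayerItem(asset: asset)   // wrap the asset in an AVPlayerItem

    player.insert(item, after: nil)         // hand the item to the AVQueuePlayer
    player.rate = 1.0                       // setting a rate of 1 starts playback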
So, that's a quick survey of
AVPlayer and AVQueuePlayer.
I should also note that this works
really well for video content.
So, if you want to play some
video content, you can do it
pretty easily with AVPlayer and
AVQueuePlayer.
So, just to reiterate, your
easiest way to AirPlay 2 is
using these APIs.
But we recognize that this won't
work for everybody.
There's a certain class of
audio-playing apps that want to
do their own I/O, possibly their
own DRM, or even preprocessing
on the media data before it's
rendered out.
They need more flexibility than
what these APIs provide.
And for these developers we're
introducing these new classes,
AVSampleBufferAudioRenderer and
AVSampleBufferRenderSynchronizer.
Now with these APIs when you use
them to do playback, your app
has additional responsibilities.
So, first of all your app is
responsible for sourcing and
parsing the content.
You need to go get it wherever
it is, download it, read it off
disk, whatever.
And then your app has to parse
it and feed the raw audio data,
the raw audio buffers to the API
for rendering.
This is kind of similar to
AudioQueue but it works better
with the deeper buffers that we
need for buffered AirPlay, for
enhanced AirPlay.
Because these are new, we're
going to spend the remainder of
this session talking about
these.
All right, so let's walk through
a simple block diagram of how
you might build an app using
these API.
So, again, here's your Client
App, and the first thing you're
going to do when building an app
with these is you're going to
instantiate an
AVSampleBufferRenderSynchronizer
and an AVSampleBufferAudioRenderer.
Now of these classes, the
AudioRenderer class is the one
that is responsible for
rendering the audio, and the
Synchronizer class is the one
that's responsible for
establishing the media timeline.
Now, you may be asking yourself,
why are there two classes, why
didn't you roll them into one?
Well, I assure you there's a
good reason.
We'll talk about that later.
But nevertheless, the
AudioRenderer does the
rendering of the audio.
The Synchronizer establishes the
media timeline.
So, after you've instantiated
these, you add the AudioRenderer
to the Synchronizer.
This tells the AudioRenderer to
follow the media timeline
established by the Synchronizer.
Now, as you begin working with
an AVSampleBufferAudioRenderer,
one of the things it's going to
do is it's going to tell you
when it wants more media data.
And when it tells you that it
wants more media data, in
response, you should feed it.
So, give it some audio data.
And after you've done that, you
can begin playback by setting
the rate of 1 on the
Synchronizer.
And after you've set the rate of
1 on the Synchronizer, audio
data will begin flowing out of
the AudioRenderer.
So, that's the basic high-level
overview of how you build a
playback engine, with
AVSampleBufferAudioRenderer and
AVSampleBufferRenderSynchronizer.
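In Swift, that flow might be sketched like this; feeding the renderer with audio data is covered in the demo that follows:

    import AVFoundation

    // A minimal sketch of the flow in the diagram.
    let audioRenderer = AVSampleBufferAudioRenderer()
    let renderSynchronizer = AVSampleBufferRenderSynchronizer()

    // The renderer follows the media timeline established by the synchronizer.
    renderSynchronizer.addRenderer(audioRenderer)

    // ...install a requestMediaDataWhenReady closure and enqueue audio data...

    // Setting a rate of 1 on the synchronizer starts audio flowing out of the
    // renderer.
    renderSynchronizer.setRate(1.0, time: CMTime.zero)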
Now I'd like to invite my
colleague Adam Sonnanstine up on
stage to give a demo of this.
[ Applause ]
>> Thank you, David.
I'm very happy to be here today
to demonstrate AirPlay 2 and its
enhanced reliability and
robustness.
I'm going to do that using this
sample application that we've
developed.
And this application will be
available as sample code shortly
after the conference ends.
And we also have an Apple TV on
stage so I can demonstrate
AirPlaying from the phone to the
Apple TV both with current
AirPlay and with AirPlay 2.
So let's take a look at the app.
You can see it's just a simple
sort of stripped down music
player interface.
We have this nice Control
Center integration
because I've adopted the
MediaPlayer APIs that David
mentioned before.
So, I'm just going to go ahead
and start playing.
And so you can hear it, I'll
start AirPlaying to my Apple TV.
[music] Now this app has not yet
been optimized for AirPlay 2 so
I just want to give you a
baseline of what the performance is
with the current AirPlay.
To do that, I want
you to imagine that you are
enjoying this music sitting on
my couch in my living room, the
music being streamed from my
phone to my Apple TV, and I
decide that it's a good time to
walk outside with the phone in
my pocket to take out the trash,
maybe walking right outside
of my Wi-Fi range.
And I'm going to simulate that
by using this bag, which shields
everything I put inside it from
electromagnetic radiation going
in or coming out, and that
includes Wi-Fi.
So, I'll take my phone, and here
I am walking outside and you can
hear that almost immediately the
music cuts out.
You've probably experienced
something like this before and
if you're actually sitting in my
living room, listening to music
you're probably going to think
I'm not a very good host.
So, let's go back and I'll go
ahead and pause the music.
And we'll switch into Xcode here
and I'll show you how you can do
a couple of simple steps to opt
this app into AirPlay 2.
So, here we are in Xcode and I
have just a very small snippet
from my application here, the
rest of it's implemented in
other files.
And I just have a function that
the rest of the app is using to
grab an object that it can use
to actually do the audio
playback.
It's basically a factory
function.
And this can return an object
that conforms to a protocol that
I've defined elsewhere in the
app.
We don't need to look at the
details, but it defines methods
for things like play and pause
and the sort of basic playback
operations that a player should
do.
And we have an existing
implementation of this protocol
which is being used in the demo
that we just saw.
And what we're going to do right
now is I'm going to create a
brand-new implementation of this
audio player, and I'm going to
call it SampleBufferAudioPlayer
because it's going to be built
using AVSampleBufferAudioRenderer.
I'm just going to preemptively
swap that in so that the next
time we launch the app, it will
use the new implementation
instead of the old
implementation.
So, as I mentioned, this class
is going to be built on top of
AVSampleBufferAudioRenderer,
and we're also going to need to
use
AVSampleBufferRenderSynchronizer.
These are the classes that David
just introduced to you.
And I just want to reiterate
what David said that AVPlayer
would give you all of the
benefits that I'm about to show
you, but with a little bit less
work on your part.
So, if you're already using
AVPlayer, or you think you can
use AVPlayer, we recommend that
you do that.
For the rest of you, I want to
show you how to use these new
classes.
So, once I have my state here,
I'm going to hook up my
AudioRenderer with my
RenderSynchronizer using the
addRenderer method, and I'll do
that as soon as my
SampleBufferAudioPlayer is
created.
And then we need to interact
with the renderer, and we're
going to do that by calling the
requestMediaDataWhenReady
method.
And what this is going to do is
it's going to call a closure in
my app so that I can feed the
AudioRenderer some more audio
data whenever the AudioRenderer
decides that it's ready to
receive more data.
And it will call this closure as
often as it needs to to stay
full of audio data.
Now within my closure, I'm
actually going to loop so that
I'm going to keep appending more
data as long as the
AudioRenderer is still ready for
more media data.
Within my loop, the first thing
I need to do is grab my next
chunk of audio data and package
it as a CMSampleBuffer.
Now this method here is my app's
own logic for doing this, for
grabbing its next little bit of
audio data.
Your app will have its own
version of this logic, maybe
pulling the data off the network
or decrypting it from disk.
If you want to see the details
of my version, once again, check
out the sample code.
And I have this set up to return
an optional CMSampleBuffer and
that's just so I can use a nil
return value to signal that I've
reached the end of data.
Once I have my sample buffer, I
turn around and enqueue it into
the AudioRenderer using the
enqueue method.
And what this will do is it will
hand the audio data off to the
renderer so that it can schedule
it to be played at the
appropriate time.
As I mentioned, I'm using a nil
sample buffer to signal that I
don't have any more data.
And when that happens I need to
explicitly tell the
AudioRenderer to stop requesting
more media data.
If I don't do this, and I exit
my closure, that AudioRenderer
is just going to invoke the
closure right away, when it's
ready for data again, but since
I don't have anything more to
give it, I don't want that to
happen.
So, I need to tell it to stop.
And this function here is pretty
much your entire interaction
with the AudioRenderer for a
simple use case like this.
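A sketch of that interaction might look like the following method on the player class, where nextSampleBuffer() stands in for the app's own logic for sourcing and packaging audio data, and audioRenderer and serializationQueue are the renderer and dispatch queue the player owns:

    // A sketch of the closure just described.
    func startEnqueuing() {
        audioRenderer.requestMediaDataWhenReady(on: serializationQueue) { [weak self] in
            guard let self = self else { return }
            // Keep appending data as long as the renderer is ready for more.
            while self.audioRenderer.isReadyForMoreMediaData {
                if let sampleBuffer = self.nextSampleBuffer() {
                    // Hand the audio off so the renderer can schedule it to be
                    // played at the appropriate time.
                    self.audioRenderer.enqueue(sampleBuffer)
                } else {
                    // A nil sample buffer signals the end of the data, so stop
                    // the renderer from requesting more.
                    self.audioRenderer.stopRequestingMediaData()
                    break
                }
            }
        }
    }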
Now, we need to do a couple
things with the
RenderSynchronizer.
I have a play method, and all
that play really is, is setting
the RenderSynchronizer's rate to
1, which will start playback.
I have a little bit of prep work
I need to do there, which we
won't look at the details, but
it does end up calling this
start enqueueing method so you
can sort of get a feel for how
things flow here.
And I have a little bit of UI
updating to do every time I play
or pause, so this method will
dispatch back to the main queue
in order to keep the UI up to
date.
Similarly, I have a pause
method, and the only difference
here is that it sets the
renderSynchronizer's rate to 0.
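Sketched out, those two methods might look roughly like this; the UI updates dispatched back to the main queue are omitted:

    // Playback is driven entirely by the synchronizer's rate.
    func play() {
        renderSynchronizer.rate = 1.0
    }

    func pause() {
        renderSynchronizer.rate = 0.0
    }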
So, now you've seen all of the
interaction that we're going to
have with AVFoundation.
I have one more bit of code I
need to drop in, just to make
sure my app will build and work
correctly, which we won't look
at the details of, but I'll just
drop it there.
And then, we will rebuild the
application.
And I seem to have some
compilation errors.
So, what I'm going to do is
figure out what I did wrong.
Well, why don't we just take
this and, ah I think I know
what's wrong here.
Just drag that over here, and go
back here and we'll try again.
All right, so we got our build
ready.
And now we're relaunching the
app.
So, we'll switch back to our
side by side view.
And once the app finishes
launching, I'm going to go ahead
and restart the music playback.
Here we go.
And once the music starts
playing [music] now we're
playing using AirPlay 2 from the
phone to the Apple TV.
So, now I want to run the same
scenario that I did before using
our bag again.
So, let's grab the phone.
And we'll put it in the bag.
Once again, here's me taking the
phone outside, outside of Wi-Fi
range.
I'll fold up the bag nice and
tight.
And as you can see, whereas in
the first demo, the music cut
out almost immediately, now that
we're using AirPlay 2, the music
can keep playing even through
these sorts of minor Wi-Fi
interruptions.
So, that's the power of AirPlay
2 using AVSampleBufferAudioRenderer.
Thank you very much and back to
David.
[ Applause ]
>> Thank you, Adam.
So, now that we've walked you
through the steps to create a
simple app with
AVSampleBufferAudioRenderer and
AVSampleBufferRenderSynchronizer,
let's move on to some more
advanced playback scenarios that
you may encounter when using
these APIs.
So, what we're going to talk
about in this section is the audio
buffer levels of
AVSampleBufferAudioRenderer.
We're going to talk about how
to implement a seek, how to
implement play queues, some of
the supported audio formats with
AVSampleBufferAudioRenderer,
and lastly, we're going to take
a slight detour and talk about
video synchronization.
So, let's jump into the next
section and talk about the audio
buffer levels of
AVSampleBufferAudioRenderer.
So, what am I talking about?
I'm talking about the fact that
the amount of audio data requested
by an AVSampleBufferAudioRenderer
from your app will vary
depending on the current route.
So, let's take a look at this
pictorially.
Here I've drawn a media timeline
and I'm going to drop in the
playhead.
And when you're playing back
locally, the AudioRenderer is
only going to ask for a few
seconds ahead of the playhead.
So, that is where you should
enqueue to, as much as it asks
for.
And as you play, again you're
just going to enqueue a few
seconds ahead of the playhead.
But suppose your user suddenly
decides to route to an AirPlay 2
speaker.
Now when that happens, the
AVSampleBufferAudio Renderer is
going to ask for up to multiple
minutes ahead of the playhead.
And once again, when you hit
play, you'll work multiple
minutes ahead of the playhead.
Now the key here is that the
amount of data requested by
AVSampleBufferAudioRenderer
varies depending on where the
audio is currently routed.
If it's routed locally, this
will just be seconds; to
Bluetooth, seconds; to an AirPlay
1 speaker, seconds.
But as soon as the user routes
your audio content to an AirPlay
2 speaker, that AudioRenderer is
going to get really hungry and
it's going to ask for multiple
minutes of audio data.
And the key here is that your
app should be ready to handle
these changes.
All right, next let's talk about
seek.
So, what is a seek?
Well, a seek, very simply, is
manually changing the
location of the playhead.
So, again let's draw out our
media timeline that we just saw
a moment ago.
Put in the playhead, and talk
about a standard playback
scenario.
User hits play, and then after
it plays for a little bit the
user decides you know what, I
want to seek further into the
track.
So, they pick up the playhead
and they drag it.
So, they've asked for a seek.
So, how do we handle this with
AVSampleBufferAudioRenderer?
Well it's actually really easy.
The first thing that you're
going to do is you're going to
stop.
You're going to stop playback of
the AVSampleBufferRenderSynchronizer
and you're going to
stop enqueueing media data into
the AVSampleBufferAudioRenderer.
Next, you're going to issue a
flush on the
AVSampleBufferAudioRenderer, which
will clear out all media data
that's already been enqueued.
And at that point, you're free
to begin re-enqueueing
audio data at any media time, so
you do it beginning at the
seek-to time.
And after you've enqueued media
data at the seek-to time, kick
off playback once again and there
you go.
So, that's a seek.
So, now that I've shown you how
seek works, let's take a look at
how you would implement that in
code, by extending Adam's app
with a method called
seek(toMediaTime:).
So, how do we implement this?
Well it's pretty easy.
The first thing we're going to
do is we're going to tell the
renderSynchronizer to stop
playback by setting the rate to
0.
And then we're going to tell the
AudioRenderer to stop requesting
media data.
Next, we're going to flush the
AudioRenderer to clear out all
the old audio data we've
enqueued.
And then we're going to call
some app-specific code that
tells your sample-generating
code that the next sample it
should prepare is at that
seek-to time.
Remember your app is responsible
for producing the audio data
when using this API, so you need
to instruct it where the next
audio sample should be generated
from.
And after that happens, you can
reinstall your closure on the
AudioRenderer to tell it to call
you back for more audio data.
And you can set the rate to 1 on
the RenderSynchronizer.
Pretty simple.
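Put together, a sketch of that method might look like this, where resetSampleGeneration(to:) and startEnqueuing() stand in for the app-specific pieces:

    // A sketch of the seek just described.
    func seek(toMediaTime mediaTime: CMTime) {
        // Stop playback and stop the renderer's requests for more data.
        renderSynchronizer.rate = 0.0
        audioRenderer.stopRequestingMediaData()

        // Throw away everything that was already enqueued.
        audioRenderer.flush()

        // Point the app's sample-generating code at the seek-to time, start
        // feeding the renderer again, and restart playback from that time.
        resetSampleGeneration(to: mediaTime)
        startEnqueuing()
        renderSynchronizer.setRate(1.0, time: mediaTime)
    }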
So, that's seek.
Let's get into something more
interesting which is play
queues.
And what is a play queue?
Well here we have a screenshot
from Adam's app.
And a play queue, very simply,
is an ordered set of
items.
I hit play on one and they all
play out in order.
So, if we take this play
queue and we lay it out on that
media timeline, what we're going
to see is Item 1 followed by
Item 2, followed by Item 3.
That's just a very generic look
at those items laid out on this
timeline we've been showing.
But if we dig into the timelines
of each item, we're
going to see something
slightly different.
Let's suppose hypothetically
that each item is 100 seconds in
length.
That means that each item will
go from 0 to 100, 0 to 100, 0 to
100.
Of course the AudioRenderer
knows nothing about these
individual items.
All the AudioRenderer has is a
continuous media timeline.
So, as you enqueue audio data
into the
AVSampleBufferAudioRenderer,
you may need to offset
from the item's natural timeline
to the continuous timeline of
the AVSampleBufferAudioRenderer.
And again, let's take a look at
this enqueuing animation that
we've seen before.
Here I'm playing back
locally and I've just enqueued a
few seconds ahead of the
playhead, and as you can see I'm
going to enqueue each item as I
go through it.
And if the user is routed to an
AirPlay 2 speaker, and we're
working well ahead of the
playhead, same idea.
You're just going to enqueue
ever so slightly ahead of the
playhead.
All right.
So, that's the simple idea of
play queues.
Let's get into something a
little more interesting,
editing.
And in this hypothetical
example, let's suppose the user
decides they don't want to
listen to Item 2 anymore.
So, they remove it from the
queue, and so items 3 and 4
shift over in place.
Now, when that happens, it's not
a big deal.
The user will expect Item 1 to
play, followed by Item 3, and
followed by Item 4.
But that's a relatively
simple case of play queue
editing, where I made the edit
before playback was engaged.
But since we're often working
well ahead of the playhead
when using the
AVSampleBufferAudioRenderer,
you could run into a situation
like this, where the user has
initiated playback.
You're still playing Item 1, but
you've begun to enqueue media
data from Item 2.
And then the user decides they
don't want to listen to Item 2
anymore.
So, what the user expects is
Item 2 disappears, Items 3 and 4
shift over.
And once again, what the user
expects is that after Item 1,
the currently playing track,
finishes playing, Item 3 will
begin to play.
But if we do nothing at this
point, since we've already
enqueued audio data from Item 2
into the AudioRenderer, well
that's what we're going to hear,
a blip of that, because we have
the wrong audio data in the
AudioRenderer.
So, what do we do here?
Well it's actually relatively
simple to handle, because
there's a command called Flush
from Source Time.
And what this command, Flush from
Source Time, means is that given
the time that I specify, I want
you to throw away all of the
media data on the timeline after
it.
So, here I'm going to call Flush
from Source Time with the source
time pointing at the time in the
continuous timeline where the
transition from Item 1 to
the next item happens.
So, I call that and it throws
away all the media data.
The next thing it does is reset
the pointer to the last
enqueued sample buffer to that
source time, so then I'm free to
enqueue media data from Item 3.
And of course, playback
progresses and we're good to go.
The key point about this is, for
animation purposes in this
talk, it may have looked like
playback was paused, but you can
actually execute this command
while playback is engaged, so it
can be totally transparent to
your users.
All right.
So, let's talk about the steps
involved in executing the flush
from source time.
There are basically three of them.
First, stop enqueueing audio
data into the renderer.
Note that I'm not saying stop
playback, because again you
don't have to do that.
Then you issue the
flush(fromSourceTime:).
And since flush from source time
is an asynchronous operation
into which you pass a closure,
you should wait for the
callback to be called.
Now there are a few gotchas with
this, even though it's a
relatively simple operation, and
the first is that the flush may
fail.
Now, why may it fail?
Well, suppose the source time
that you specified is too close
to or behind the playhead.
Well in those situations, we may
not be able to get the audio
data back from the audio
hardware.
And in those cases, rather
than leaving you in an unknown
state where you may play out
stale data, we're just going to
fail the operation, and it's
going to be as if it was
never issued in the first place.
So, the second gotcha which is
really just the other side of
the same coin is that you need
to wait for the callback.
So, you should wait for the
callback which will tell you
whether or not that flush
failed, and if it did you need
to take the appropriate action.
So, let's take a look at this in
code, again by extending the
sample app, by implementing a
method called
performFlush(fromSourceTime:).
So, once again, in this method
the first thing you're going to
do is you're going to tell the
AudioRenderer to stop requesting
media data.
And then you're likely to call
some app-specific logic to
really ensure that no
additional media data is
enqueued.
This is really important,
because this is an asynchronous
operation, so it's naturally
racy.
So, we want to make sure that
you're not enqueueing any
additional audio data while that
flush(fromSourceTime:) is in flight.
So, after you've made sure that
you're no longer enqueueing
audio data in the AudioRenderer,
you can actually execute the
flush(fromSourceTime:).
Again, you'll pass in a closure
because this is an asynchronous
operation.
And the call to the closure will
tell you whether or not the
flush succeeded.
If it succeeded, as I'm showing
here, the first thing
you're going to do is again
tell your app's
sample-generating code to begin
preparing samples at that source
time.
And then you can reinstall your
callback closure and begin
enqueueing again.
Of course, the flush may fail,
and here it's really app-specific
logic to decide what you want to
do in these situations.
Maybe you want to just advance
to the next track.
Maybe you want to execute a full
flush and glitch playback; it's
really up to you.
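A sketch of that method might look like the following, where stopEnqueuing() and resetSampleGeneration(to:) stand in for the app-specific pieces:

    // A sketch of the flow just described.
    func performFlush(fromSourceTime sourceTime: CMTime) {
        // Make sure no additional audio data is enqueued while the flush,
        // which is asynchronous, is in flight.
        audioRenderer.stopRequestingMediaData()
        stopEnqueuing()

        // Playback can stay engaged while the flush executes.
        audioRenderer.flush(fromSourceTime: sourceTime) { [weak self] succeeded in
            guard let self = self else { return }
            if succeeded {
                // Point the sample-generating code at the source time and
                // start feeding the renderer again.
                self.resetSampleGeneration(to: sourceTime)
                self.startEnqueuing()
            } else {
                // The source time was too close to (or behind) the playhead;
                // handle this in an app-specific way, for example by doing a
                // full flush or advancing to the next track.
            }
        }
    }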
So, that's play queues and Flush
from Source Time.
Let's move on to the supported
audio formats in
AVSampleBufferAudioRenderer.
So, what audio formats does the
AVSampleBufferAudioRenderer
support?
Well, good news, basically all
platform-supported audio formats
are also supported by the
AVSampleBufferAudioRenderer.
So, this includes LPCM, AAC,
MP3, Apple Lossless, lots of
sample rates, lots of bit
depths.
And additionally, mixed formats
may be enqueued.
If Item 1 is AAC at 44.1
kilohertz, you can enqueue that,
then MP3 at 48 kilohertz, then
16-bit Apple Lossless at 48
kilohertz.
You can just enqueue the audio
data from each of these back to
back and we will take care of
the format transitions for you.
So, those are the supported
formats.
Let's talk about the preferred
formats.
So, because we can play anything,
just give us what you've got.
You have LPCM, give it to us;
encoded, give it to us.
But one caveat there is, all
things being equal, if you have
a decision between encoded or
PCM, we would prefer the
encoded, but don't do any
special logic to give us that.
Just give us what you have.
And we prefer interleaved
channel formats and 1 to 2
seconds of audio per
CMSampleBuffer.
All right, now I'd like to take
a slight detour and talk about
video synchronization.
Now, you may be wondering why
we're talking about video in an
AirPlay talk that's mainly about
audio.
And it's because the classes
that we're introducing today are
not only good for AirPlay 2,
but they're also just really
good playback APIs for general
use cases.
And you may want to use them to
play video.
Now, once again, if you can use
AVPlayer, please use that.
But if you can't use AVPlayer
and you want to play video, here
you go.
So, let's jump back to a
block diagram that we showed
earlier in the presentation.
We've got the Client App, the
AudioRenderer and the
Synchronizer.
Well, if I simply move those out
of the way, I can make room for
a new renderer class and here
I'm going to add an
AVSampleBufferDisplayLayer.
Now for those of you who are
unfamiliar with
AVSampleBufferDisplayLayer, it's
a class similar to the new
AVSampleBufferAudioRenderer,
except this one is geared
towards video.
It's been around for quite a
while and there's some
documentation and sample code on
the developer website.
But new in these releases is the
idea that you can have its
timeline controlled by a
RenderSynchronizer.
You can add it to a
RenderSynchronizer just as you
add the AudioRenderer to your
RenderSynchronizer.
And this is really cool.
Because then you can add the
audio data to the AudioRenderer
and the video data to the video
renderer, set a rate of 1 on the
Synchronizer, and because
they're tied to the same
timeline, they will come out in
sync.
Now, a couple words of caution:
the video will not be AirPlayed,
and additionally, when you're
using long-form, you can only add
one AudioRenderer to a
RenderSynchronizer.
But nevertheless, I think you
can see the power and
flexibility in this architecture,
and I'm sure you'll come up with
some really cool
applications, and we're excited
to find out what you can do.
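A sketch of that audio-plus-video arrangement might look like this; enqueueing sample buffers into the two renderers is the app's responsibility and is omitted here:

    import AVFoundation

    // Both renderers follow the same media timeline, so audio and video
    // enqueued for the same times come out in sync.
    let videoRenderer = AVSampleBufferDisplayLayer()
    let audioRenderer = AVSampleBufferAudioRenderer()
    let renderSynchronizer = AVSampleBufferRenderSynchronizer()

    renderSynchronizer.addRenderer(videoRenderer)
    renderSynchronizer.addRenderer(audioRenderer)

    // ...enqueue audio into audioRenderer and video into videoRenderer...

    renderSynchronizer.setRate(1.0, time: CMTime.zero)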
All right.
Lastly, let's talk about the
availability of AirPlay 2.
So, I'm happy to say that the
APIs that I've discussed today,
plus the enhanced buffering, are
all available in the beta that
you guys have access to today.
And you simply set the AirPlay 2
toggle in the developer pane,
flip that on.
And then you can use an updated
Apple TV as your AirPlay 2
receiver and send to it.
In an upcoming beta, we're going
to enable multi-room audio.
And lastly, this will be
available to customers in an
upcoming release.
Let's go through the summary.
We've covered a lot today.
So, AirPlay 2 introduces many
new features for audio.
Long-form audio applications can
enable them with the steps that
I've outlined today.
And AirPlay 2 adoption can begin
with the beta you guys all have
in your hands.
For additional information, this
website will have the
presentation that we've gone
over today, and there will be
sample code available soon.
There are a couple of sessions
that we mentioned earlier.
There's the What's New in Audio
session, that's the one that
discusses long-form in more
depth, and also Introducing
MusicKit.
That's another one that you
should check out.
Thank you very much and I hope
you have a great rest of the
week.
[ Applause ]