WWDC2017 Session 606

Transcript

>> Good Morning [applause].
Thank you, and welcome to What's
New in Screen Recording and Live
Broadcast.
My name is Johnny Trenh, I'm a
software engineer at Apple
working on the ReplayKit team.
Today, my teammate Alexander
Subbotin and I are really
excited to talk to you about all
the new upcoming features we
have planned for ReplayKit this
year.
We've got a lot to discuss, so
let's go ahead and get started.
ReplayKit is a framework that
allows developers to record
their application's audio, video,
and microphone content to a
movie file that their users can
then review, edit, save, or
share with the rest of the
world.
ReplayKit also supports Live
Broadcast.
For applications generating
content, ReplayKit provides you
with all the tools necessary to
stream your application's audio,
video, and microphone content to
a broadcast extension.
For applications implementing a
broadcast extension, ReplayKit
comes equipped with easy to use
Xcode templates that will help
your application stream
ReplayKit content.
ReplayKit records both audio and
video in HD quality with
low-performance impact and
minimal power usage.
Privacy is very important to
us, so both recording and
broadcasting come equipped with
privacy safeguards such as user
consent, and all recordings and
samples exclude system UI.
Since the release of ReplayKit,
we have seen some amazing
adoption from developers all
around the world.
Applications like Galaxy on
Fire, Vainglory, Call of
Champions, and Modern Combat 5,
are all using ReplayKit today to
record and broadcast their
users' experiences with the rest
of the world.
Last year we introduced Live
Broadcast with ReplayKit.
And we are proud to say that
ReplayKit now powers some of the
industry leaders in mobile
streaming.
The support, adoption, and
feedback we've received from
developers have been amazing.
That is why this year we're
really happy to introduce
ReplayKit 2.
With ReplayKit 2, our goal is to
bring ReplayKit to every user
and every application.
We've designed our new features
to take ReplayKit beyond just
gaming, enabling all developers
to record and broadcast their
users' experiences to the whole
world.
And to do that, the first thing
I want to talk about is In-App
Screen Capture.
Look, we've already seen the
amazing content your users have
been creating with ReplayKit.
With ReplayKit 2, we're
introducing In-App Screen
Capture, which is a powerful new
API that's going to give you
direct access to your
application's own audio, video,
and microphone content straight
from ReplayKit.
Our goal with ReplayKit 2 was to
bring ReplayKit to every
application.
And to do that we're making
ReplayKit instantly available to
every application on iOS.
Previously, for an application
to take advantage of ReplayKit,
it had to adopt our API.
But now, users can record and
broadcast their entire iOS
screen experience straight from
the new Screen Recording control
found in Control Center.
Also new with ReplayKit 2 is the
ability to pair your application
with a particular streaming
service.
Developers can now seamlessly
integrate their entire streaming
solution right from within their
own application.
Last year, we introduced front
camera support for ReplayKit.
This year with ReplayKit 2 we're
introducing Fast Camera
Switching, which is going to
enable you to use both the front
and rear cameras for additional
video commentary.
We've got a lot of new and
exciting things to talk about.
So, I'm just going to go ahead
and jump right in with In-App
Screen Capture.
In-App Screen Capture is a
powerful new API that's going to
give you direct access to your
application's own audio, video,
and microphone content straight
from ReplayKit.
This new API is going to open
doors to new user experiences
that just weren't possible
before.
But, before we get into In-App
Screen Capture, let's go ahead
and take a look at how
applications are currently using
ReplayKit to handle recording.
An application that's currently
using ReplayKit will call into
RPScreenRecorder to get the
shared recorder instance.
From there, you'll call
startRecording on the shared
recorder instance, at which time
the replay daemon will start to
capture your application's
audio, video, and microphone
content, and create and manage a
movie file just for your
application.
When you call stopRecording on
the shared recorder instance,
the replay daemon will
communicate with the share and
preview extension and
instantiate an
RPPreviewViewController.
We'll pass that
RPPreviewViewController back to
your application so that you can
present it to your users,
allowing them to review, edit,
save, or share the recording
they just made in your
application.
With the new In-App Screen
Capture, just like with
recording, you're going to call
into RPScreenRecorder to get the
shared recorder instance.
From there, you'll call
startCapture on the shared
recorder instance, at which
point the replay daemon will
start to capture your
application's audio, video, and
microphone content.
But instead of creating and
managing a movie file just for
your application, ReplayKit will
send those audio and video
samples right back up to
RPScreenRecorder.
A capture handler block is then
called, and we'll send the audio
and video samples, right back to
your application's process.
Your application now has direct
access to its own audio, video,
and microphone samples straight
from ReplayKit.
Giving you direct access to your
application's audio and video
samples from ReplayKit will
provide you with more
flexibility and control over the
content your users are already
creating.
Just like with recording, In-App
Screen Capture captures audio
and video in HD quality, with
low-performance impact and
minimal power usage.
Again, privacy is very important
to us, so In-App Screen
Capture comes equipped with
privacy safeguards such as user
consent, and all samples
exclude system UI.
The API is simple and
lightweight.
So, let's go ahead and take a
quick look at it.
startCapture takes in two
blocks: a captureHandler block
and a completionHandler block.
The captureHandler block is
called every time ReplayKit is
ready to hand your application
back a sample.
We provide you with a
CMSampleBuffer, an
RPSampleBufferType, and an
NSError.
The completionHandler block is
called when startCapture has
completed, and will give you an
NSError indicating whether or
not an error occurred during
startCapture.
stopCapture also takes in a
completionHandler, which will
also pass you back an NSError
indicating whether or not an
error occurred during
stopCapture.
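Concretely, the two calls have roughly this shape in Swift. This is an illustrative sketch against the iOS 11 API, with error handling trimmed to prints:

```swift
import ReplayKit

// A small helper showing the shape of startCapture and stopCapture.
func toggleCapture(_ recorder: RPScreenRecorder, starting: Bool) {
    if starting {
        recorder.startCapture(handler: { sampleBuffer, bufferType, error in
            // Called once per sample; bufferType is .video, .audioApp, or .audioMic.
        }, completionHandler: { error in
            // A non-nil error means capture failed to start.
            if let error = error { print("startCapture error: \(error)") }
        })
    } else {
        recorder.stopCapture { error in
            if let error = error { print("stopCapture error: \(error)") }
        }
    }
}
```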
Let's go ahead and take a look
at how we can use this in
practice.
Here, in our example, what I
want to do is take the samples
that ReplayKit has given me and
use them to write a simple movie
file to disk using
AVAssetWriter.
So, here I have a method called
didPressCaptureButton.
Inside of it, I'm simply going
to grab the sharedRecorder
instance from RPScreenRecorder.
From there, I'll call
startCapture providing it a
capture handler block and a
completion handler block.
Remember, I'm trying to write a
movie file using AVAssetWriter
with the samples we get back.
So it's really important for me
to know how I'm going to handle
these samples.
So, let's take a closer look at
the capture handler.
Remember, the capture handler is
called every time ReplayKit is
ready to hand your application
back a sample.
We'll provide you with a
CMSampleBuffer, an
RPSampleBufferType, and an
NSError.
Since I'm using these samples to
write a simple movie using
AVAssetWriter, it's really
important for me to know what
type of samples I'm getting back
from ReplayKit.
Here we have a switch on the
RPSampleBufferType that's going
to do just that.
If I get a sample buffer type
that's video, I'm simply going
to append that sample to my
video input for my
AVAssetWriter.
If I get a sample buffer type
that's audio, I'll append that
sample to my audio input for my
AVAssetWriter.
And finally, if I get a sample
buffer type that's for the
microphone, I'll append that
sample to my microphone input
for my AVAssetWriter.
And, just like that, I am now
handling all the expected types
of samples I'm going to get back
from ReplayKit.
And I'm also using them to write
a simple movie to disc using
AVAssetWriter.
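Put together, the whole flow might look like this sketch. Note that the AVAssetWriter setup (output URL, codec settings) is omitted, and `updateCaptureButton` is a hypothetical UI helper, not ReplayKit API:

```swift
import ReplayKit
import AVFoundation

class CaptureViewController: UIViewController {
    // Assumed to be configured elsewhere (output URL, settings, inputs added).
    var assetWriter: AVAssetWriter!
    var videoInput: AVAssetWriterInput!
    var audioInput: AVAssetWriterInput!
    var micInput: AVAssetWriterInput!

    @IBAction func didPressCaptureButton(_ sender: Any) {
        let recorder = RPScreenRecorder.shared()
        recorder.startCapture(handler: { sampleBuffer, bufferType, error in
            guard error == nil else { return }
            // Route each sample to the matching asset writer input.
            switch bufferType {
            case .video:
                if self.videoInput.isReadyForMoreMediaData {
                    self.videoInput.append(sampleBuffer)
                }
            case .audioApp:
                if self.audioInput.isReadyForMoreMediaData {
                    self.audioInput.append(sampleBuffer)
                }
            case .audioMic:
                if self.micInput.isReadyForMoreMediaData {
                    self.micInput.append(sampleBuffer)
                }
            default:
                break
            }
        }, completionHandler: { error in
            // Reflect in the UI whether capture actually started.
            self.updateCaptureButton(error: error)
        })
    }

    func updateCaptureButton(error: Error?) {
        // Hypothetical helper: update button state, show an alert on error.
    }
}
```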
Let's go ahead and take a closer
look at the completion handler.
Just like with recording, you
want to indicate to your users
that a capture session is
currently in progress.
So, in our completion handler,
we're simply going to update the
capture button with the error
that we get back.
This will allow me to update my
UI and indicate to my users that
a capture session has started.
And that's it.
That's all you have to do to
start using this powerful new
API.
I want to take a minute to
revisit how In-App Screen
Capture is actually working.
And I'd like to remind you that
your application now has direct
access to its own audio, video,
and microphone content straight
from ReplayKit.
In our example, we use these
samples to create a simple movie
using AVAssetWriter, but you can
do so much more with this.
Now that your application has
access to its own audio and
video samples, you can create
and manage videos right within
your own application.
You can even create a custom
video editor and have it
seamlessly integrated in the
user experience for your
application.
Again, a goal of ReplayKit 2 was
to bring ReplayKit to every
application, not just gaming.
Here, I have the same
productivity app running on two
different iOS devices.
Now that I have access to my
application's audio and video
samples, just like with
broadcasts, I can encode these
samples myself, and I can send
them to my own personal server.
From there, I can share my
application's Screen Capture
with any other instance of my
application running on any other
device.
This just wasn't possible
before, but it is now with
In-App Screen Capture.
These are just some of the
examples of the new user
experiences you can create using
In-App Screen Capture.
We're really excited to see what
developers are going to do with
it.
With ReplayKit 2, our goal was
to bring ReplayKit to every type
of application, not just gaming.
And to do that, I'm really happy
to introduce iOS Screen
Recording and Broadcast.
iOS Screen Recording and
Broadcast is an amazing new user
feature that's going to allow
users to record and broadcast
their entire iOS screen
experience.
Let's go ahead and take a look
at how we can use this new
feature.
To use iOS Screen Recording and
Broadcast, you're going to first
need to enable the Screen
Recording control for Control
Center.
To do that, we're going to jump
right into Settings, and then
Control Center.
From there, we'll be shown all
the supported controls for
Control Center.
We simply add the Screen
Recording control, and we're
done.
And just like that, you're now
up and running ready to share
your entire iOS screen
experience.
[applause] Thank you.
To initiate an iOS screen
recording, you simply tap the
Screen Recording control.
You are now recording your
entire iOS screen experience.
The status bar and our Screen
Recording control will indicate
to you that a recording is
ongoing by showing you a
recording indicator as well as
the elapsed time for your
current recording session.
To stop the recording you simply
tap the screen recording control
again.
When the recording has been
stopped, you'll be presented
with a notification indicating
to you that the screen recording
you just took has now been saved
in your Photos application.
Tapping on this banner will take
you straight to your Photos
application where you can
review, edit, or share the
recording you just made.
You can also 3D Touch into our
screen recording control which
will bring you straight to our
expanded view where you'll have
access to the microphone
settings as well as the ability
to start recording, or to stop
recording.
Now, let's say I've just
downloaded a broadcast
application that supports
ReplayKit Live Streaming.
And I'd like to use that
broadcast application to share
my entire iOS screen experience.
Well, that's actually pretty
simple to do, because all you
need to do is 3D Touch into our
screen recording control.
You'll be taken to our expanded
view, where all the applications
that are installed on your
device that currently support
ReplayKit Live Streaming will be
shown.
You simply need to select the
service you'd like to use and
tap Start Broadcast.
You are now broadcasting your
entire iOS screen experience to
the entire world.
Just like with recording, the
status bar and the Screen
Recording control will indicate
to you that a broadcast is
currently in session by showing
you the recording indicator, the
currently selected broadcast
service, and the elapsed time
for your current broadcast.
To stop the broadcast, you
simply tap the stop broadcast
button.
Now, you can record a video to
teach your grandparents how to
make that FaceTime call.
You can even stream a video to
teach your parents how to send
that important email.
Or better yet, you can now
record a video to teach your
kids how to find their favorite
TV shows and movies in iTunes.
It has never been easier to
share your entire iOS screen
experience.
iOS Screen Recording and
Broadcast creates new user
experiences with applications
that are already using
ReplayKit.
So, let's take a quick minute to
talk about some best practices.
iOS Screen Recording and
Broadcast has priority when it
comes to ReplayKit.
So, if your application is
currently using ReplayKit to
record or broadcast and the user
initiates an iOS Screen
Recording and Broadcast, your
application will be notified via
RPScreenRecorderDelegate that
your session has been
interrupted.
In this case, the recording will
be discarded and you should
update your UI and notify the
users accordingly.
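As a sketch, handling that interruption might look like this; the delegate callback is real iOS 11 API, while the UI helper is hypothetical:

```swift
import ReplayKit

class GameViewController: UIViewController, RPScreenRecorderDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        RPScreenRecorder.shared().delegate = self
    }

    // Called when the system stops your session, for example because the
    // user started an iOS screen recording from Control Center.
    func screenRecorder(_ screenRecorder: RPScreenRecorder,
                        didStopRecordingWith previewViewController: RPPreviewViewController?,
                        error: Error?) {
        // The in-app recording was discarded; update UI and tell the user.
        showRecordingInterruptedUI(error: error)   // hypothetical helper
    }

    func showRecordingInterruptedUI(error: Error?) {
        // Present an alert, reset the record button, etc.
    }
}
```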
iOS Screen Recording and
Broadcast is an amazing feature.
It's one that we hope users and
developers will use to create
new exciting iOS content.
And to help you do that, I'd
like to bring Alexander Subbotin
up to the stage to talk to you a
little bit more about how we do
broadcasting with ReplayKit.
[ Applause ]
>> Thank you, Johnny.
Good morning.
My name is Alexander Subbotin,
and I am so excited to be here
to talk to you and share more
details about our Live Broadcast
API.
ReplayKit enables applications
to stream their content to
third-party broadcast services
directly from your iOS or tvOS
device.
On iOS, you can also include
voice and video commentary using
the microphone and the camera.
And all this content is
absolutely secure, and
inaccessible to the broadcast
service without the user's
consent.
This is a high level picture of
how Live Broadcast really works.
All on the left, you see a
player who at some point wants
to broadcast his game Tower Dash
to Mobcrush which is a broadcast
service.
So Tower Dash communicates to
the ReplayKit API to initiate
the broadcast.
And once that happens, ReplayKit
will present some UI that allows
the user to pick the broadcast
service.
And that would be Mobcrush for
us.
And the broadcast starts.
And once the broadcast is
running, ReplayKit will be
providing audio and video
samples to the Mobcrush app
extension, which directly talks
to the broadcast service, and
the viewers around the world can
watch the livestream online.
This is just a picture of how a
broadcast runs.
And the way I want to describe
it to all of you is to break it
into parts, because if you are a
client application developer or
a game developer, the only thing
you need to do is present a
ViewController, initiate the
broadcast, and a couple of other
fairly simple steps.
But I also know that some of you
in this room are broadcast
service developers.
And we are going to walk through
the process for you and talk
about how you actually implement
those extensions.
So, starting with the client
side and what the client
application wants to do: for
that, we have a fairly simple
API that is really wrapped up in
these three classes.
There's
RPBroadcastActivityViewController,
and this class is used to
initiate a broadcast; it
presents the built-in UI where
you select the broadcast
service.
RPBroadcastController allows you
to manage the broadcast in your
code, and you would usually wire
it to some UI where the user can
start, stop, pause, or resume
the broadcast.
And the
RPBroadcastControllerDelegate is
the delegate that informs you
about different events during
the broadcast.
The broadcast can stop for some
reason, there could be an error,
or the broadcast extension may
want to pass some information to
the client app, and this is
where the delegate tells you.
And now, I'm going to stop here
talking about this part of the
picture because last year,
we covered this API in depth in
our session Go Live with
ReplayKit.
So, in case you have not adopted
the API yet, please go to the
WWDC app, where you can find the
link to last year's session and
learn more about that developer
API.
Now let's talk about the second
part of the equation: the
developers of broadcast
services.
How do you integrate your
service so that you could stream
the content created by all these
players, games, and other
applications?
And the answer is these three
extensions.
The first one, on the top, is
what we call the Broadcast Setup
extension, and its purpose is to
present some UI where you can
ask the user to enter some
information; you may want the
user to give the broadcast a
particular name, or maybe log-in
credentials, or any details that
you need to fire up your
broadcast.
And the second one is called the
Broadcast Upload extension.
And the function of this
extension is to receive the
media samples handed over by
ReplayKit, encode them, create a
video stream, and upload it to
the online service.
Each extension is a separate
binary that is installed on your
device along with the broadcast
app.
So when you install Mobcrush,
you also get the setup extension
and upload extension installed
on your device.
And each extension runs in its
own process, independently from
Mobcrush, which contains the
extensions, and Tower Dash,
which initiates the broadcast.
So usually these are three
processes, and the process of
the containing app, Mobcrush, is
never launched.
And to help you get started
developing new broadcast
extensions, we provide very easy
to use Xcode templates for both
types of extensions.
Just add the two extensions to
your Xcode project and you are
ready to begin.
And now let's talk about how you
actually code all of this.
Starting from the setup
extension.
As I said, the purpose of the
setup extension is to present
the UI where the user can enter
the name of the broadcast, for
example.
But it also has another
important function.
It can get some information
about the client application
such as a bundleID, the name of
the application, or the icon of
the application.
And it can upload this
information to the broadcast
service, so that the broadcast
service can build an experience
for viewers: when a viewer comes
to the website, he knows here is
the place where everyone is
playing Tower Dash, and there is
the icon of Tower Dash.
The extension can also request
from the broadcast service a URL
of that particular Live
Broadcast happening on the
website, and share it back to
the application, so that the
player can send it to his
friends and get more followers.
The ViewController that
implements this UI communicates
with ReplayKit using a property
called extensionContext.
The class of this property is
NSExtensionContext, extended by
a ReplayKit category that adds
two more functions.
The first one is
loadBroadcastingApplicationInfo,
and the second one is
completeRequest with
broadcastURL and setupInfo.
The first one, as I said, is
used to get the icon and the
name of the application, and
here is a small code example of
how you could do that and pass
this information to the
broadcast service.
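A minimal sketch of that call might look like this; `uploadToBroadcastService` is a hypothetical, service-specific helper:

```swift
import ReplayKit
import UIKit

class BroadcastSetupViewController: UIViewController {
    func fetchClientAppInfo() {
        // bundleID, displayName, and appIcon describe the app that
        // initiated the broadcast (e.g. Tower Dash).
        extensionContext?.loadBroadcastingApplicationInfo { bundleID, displayName, appIcon in
            // Hypothetical helper that uploads this info to your service
            // so it can create a channel for this game.
            self.uploadToBroadcastService(bundleID: bundleID,
                                          name: displayName,
                                          icon: appIcon)
        }
    }

    func uploadToBroadcastService(bundleID: String, name: String, icon: UIImage?) {
        // Service-specific networking goes here.
    }
}
```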
The broadcast service, when it
has this information, can
identify the broadcast sessions,
create channels, and build a
simple, intuitive viewer
experience like this one: the
viewer comes to the app and sees
the icon and name of the
application.
So, when you're done uploading
this icon and have collected all
the information that you need to
begin the broadcast, you should
call the second one,
completeRequest with
broadcastURL and setupInfo.
The broadcastURL will be
available to the client
application as a property of the
broadcast controller, and the
setupInfo is a dictionary that
you create inside this
extension; all the information
you collected from the user, you
put into that dictionary, and it
will be passed to the upload
extension when the broadcast
starts.
You should also always provide
an option for the user to cancel
the broadcast, and for that you
just use the regular
cancelRequest method of the
extensionContext.
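Sketched out, finishing or cancelling setup could look like this; the URL, dictionary key, and error domain are made-up placeholders:

```swift
import ReplayKit
import UIKit

class BroadcastSetupViewController: UIViewController {
    // Called once the user has entered everything you need.
    func userDidFinishSetup(broadcastName: String) {
        // Hypothetical URL for this broadcast on your service.
        let broadcastURL = URL(string: "https://example.com/live/12345")!
        // Everything in setupInfo is handed to the upload extension
        // when the broadcast starts.
        let setupInfo: [String: NSCoding & NSObjectProtocol] =
            ["broadcastName": broadcastName as NSString]
        extensionContext?.completeRequest(withBroadcast: broadcastURL,
                                          setupInfo: setupInfo)
    }

    // Always give the user a way out.
    func userDidCancelSetup() {
        let error = NSError(domain: "com.example.broadcast",
                            code: NSUserCancelledError, userInfo: nil)
        extensionContext?.cancelRequest(withError: error)
    }
}
```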
Now, let's talk about the upload
extension.
As I said, its function is to
receive the media frames handed
over by ReplayKit in real time,
encode them, create a video
stream, and upload it to the
broadcast service.
If you create a new upload
extension using Xcode templates,
you will find a sampleHandler
class like this in your Xcode
project, and this is where you
override functions that take
care of events like the
broadcast has started, stopped,
paused, or resumed.
And there is also the function
called processSampleBuffer; this
is the one that you want to
override to handle incoming
media samples.
This is where all the magic
happens.
You do the encoding and the
uploading here.
So, when the broadcast starts,
ReplayKit notifies the extension
that it will begin providing it
with media samples, and for that
ReplayKit uses the function
called broadcastStarted with
setupInfo.
And here's a code example that
shows that you do receive the
setupInfo as an argument of that
function; you could extract,
say, the name of the broadcast
from the dictionary and pass it
to the broadcast session, or, in
case the broadcast was started
from Control Center, you can
just let the session know about
that.
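That override might be sketched like this; the `"broadcastName"` key and `startSession` helper are assumptions, not ReplayKit API:

```swift
import ReplayKit

class SampleHandler: RPBroadcastSampleHandler {
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        if let name = setupInfo?["broadcastName"] as? String {
            // Started via the setup extension; use the name it collected.
            startSession(named: name)
        } else {
            // setupInfo carries nothing useful when the broadcast was
            // started from Control Center; fall back to a default.
            startSession(named: "Untitled broadcast")
        }
    }

    func startSession(named name: String) {
        // Hypothetical helper: connect to your streaming service here.
    }
}
```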
And now, let me zoom in on the
processSampleBuffer function.
ReplayKit provides the extension
with three types of media
samples.
Video samples captured from the
screen.
Audio samples captured from the
application, this is the audio
your application's playing back
right now.
And the audio samples captured
from the microphone.
And you can use any technology
to encode this media, but in
practice we provide a
lower-level API called
VideoToolbox.
This is a framework that
provides access to the hardware
accelerated encoding and
decoding.
All samples go to the upload
extension and are handled by the
function processSampleBuffer.
And that function should encode
and upload the media samples.
Here in this code example, we
show you how you could use
VideoToolbox to encode the video
sample.
In your real code, you would
also have to implement the
callback to receive the encoded
data, et cetera.
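A rough sketch of that encoding path follows, using the current Swift overlay names for VideoToolbox (the WWDC-era C-style calls differ slightly). The upload step is only a comment; wiring it to a real service is left out:

```swift
import ReplayKit
import VideoToolbox

class SampleHandler: RPBroadcastSampleHandler {
    private var session: VTCompressionSession?

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            // App audio and microphone samples would be encoded separately.
            return
        }

        if session == nil {
            // Lazily create a hardware-accelerated H.264 encoder
            // sized to the incoming frames.
            VTCompressionSessionCreate(
                allocator: kCFAllocatorDefault,
                width: Int32(CVPixelBufferGetWidth(pixelBuffer)),
                height: Int32(CVPixelBufferGetHeight(pixelBuffer)),
                codecType: kCMVideoCodecType_H264,
                encoderSpecification: nil,
                imageBufferAttributes: nil,
                compressedDataAllocator: nil,
                outputCallback: nil,   // using the block-based encode call below
                refcon: nil,
                compressionSessionOut: &session)
        }

        guard let session = session else { return }
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        // The output handler receives each encoded frame.
        VTCompressionSessionEncodeFrame(session,
                                        imageBuffer: pixelBuffer,
                                        presentationTimeStamp: pts,
                                        duration: .invalid,
                                        frameProperties: nil,
                                        infoFlagsOut: nil) { status, _, encodedBuffer in
            guard status == noErr, let encodedBuffer = encodedBuffer else { return }
            // Upload encodedBuffer to your service here (omitted).
            _ = encodedBuffer
        }
    }
}
```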
But here I would like to speak
about VideoToolbox, because it's
especially important when you do
your video processing in an app
extension.
App extensions have much lower
memory limits compared to
foreground applications, and
having access to this
hardware-accelerated video
encoding is vital for ReplayKit
upload extensions.
And this year, we have great
news, VideoToolbox is also
available on tvOS now.
So, you can build highly
optimized, very efficient
extensions for both platforms.
This is a high-level picture of
how Live Broadcast works, and
who implements each part.
Again, if you are a game
developer or a client
application developer, all you
need to worry about is
presenting the ViewController
and initiating and stopping the
broadcast, and you could
implement this API and be done
by the end of this session.
And if you are a broadcast
service developer, there's a
little more work on your side.
You need to receive the samples,
encode them, and upload them to
the broadcast service.
There's one more thing I need to
go over before we finish with
this part of the talk.
So far, we've been assuming that
all the data flows upwards, from
the application to the online
service, but it is also possible
for data to flow back from the
online service to the
application.
That could be some viewer
feedback, like comments, likes,
or the number of viewers.
And for that we provide an API
that helps you to deliver this
information from the extension
to the client application.
You just have to put this data
in a dictionary and use the
function called
updateServiceInfo from within
your extension.
And that dictionary would be
available to the application as
a property of the broadcast
controller.
The name of the property is
serviceInfo and it's KVO
observable, so you can monitor
this and update the UI
appropriately.
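Both halves of that round trip might be sketched like this; the `"viewerCount"` key is a made-up example, and the observer class is a hypothetical client-side helper:

```swift
import ReplayKit

// Inside the upload extension: push viewer stats up to the client app.
class SampleHandler: RPBroadcastSampleHandler {
    func reportViewerCount(_ count: Int) {
        updateServiceInfo(["viewerCount": NSNumber(value: count)])
    }
}

// Inside the client app: serviceInfo on the broadcast controller is
// KVO-observable, so watch it and refresh the UI.
class BroadcastStatusObserver: NSObject {
    private var observation: NSKeyValueObservation?

    func observe(_ controller: RPBroadcastController) {
        observation = controller.observe(\.serviceInfo, options: [.new]) { controller, _ in
            if let count = controller.serviceInfo?["viewerCount"] as? NSNumber {
                print("Viewers: \(count.intValue)")   // update the UI here
            }
        }
    }
}
```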
This is it for the Live
Broadcast API overview.
With all this information, you
can build absolutely new
products; given that you can
also start a broadcast from
Control Center, you can build
third-party screen mirroring or
video sharing, such as sharing
your screen during a video
conference.
Thank you, and please welcome
Johnny back on stage.
[ Applause ]
>> Thank you, Alexander.
So, as Alexander has just
stated, currently, to broadcast
your application, you would need
to present to the user an
RPBroadcastActivityViewController
that allows them to choose a
public service to broadcast your
application to.
Well, sometimes you don't want
to broadcast your application to
a public service.
Sometimes, you want your
application to use your own
broadcasting service.
Well, with ReplayKit 2 and
Broadcast Pairing, you can do
just that.
Broadcast Pairing enables you to
fully and seamlessly integrate
your entire streaming solution,
all from within your own
application.
Here we have our budget
application and we have our
conference application that
supports ReplayKit Live
Streaming and has implemented a
broadcast extension.
I want our budget application to
stream exclusively to our
conference application's
broadcast extension.
So to do that, I'm going to
initiate a broadcast pair.
Our budget application is going
to call
load(withPreferredExtension:) on
the class
RPBroadcastActivityViewController.
We'll get back an instance of
RPBroadcastActivityViewController,
much like we do with general
broadcast initiation.
But here is the main difference
between Broadcast Pairing and
general broadcast initiation.
Because when you go and present
that ViewController, instead of
the user being presented with a
picker that allows them to
choose a public service, they'll
be presented with an alert that
indicates to them your
application's intent to use a
particular broadcast service.
Here, our budget application
wants to stream to our
conference application's
broadcast service.
When a user taps accept,
ReplayKit will immediately
launch the paired broadcast
extension allowing the user to
input any information that might
be important to the broadcasting
session.
When the user has finished
inputting all their information,
ReplayKit will start
broadcasting to the paired
broadcast extension.
And just like that, our budget
application is now streaming
exclusively to its paired
broadcast extension.
Let's go ahead and take a look
at the API for Broadcast
Pairing.
The API for Broadcast Pairing is
very simple.
It's a new class method on
RPBroadcastActivityViewController
called
load(withPreferredExtension:).
It will hand you back an
instance of
RPBroadcastActivityViewController
that you can then present to
your users.
Let's jump in and take a look at
how our budget application is
going to initiate a broadcast
pair in a little bit more
detail.
Here, I have a method called
didPressBroadcastPairButton.
Inside of it, I'm simply going
to call
load(withPreferredExtension:) on
the class
RPBroadcastActivityViewController.
We'll get back an instance of
RPBroadcastActivityViewController.
When we present it, the user
will be presented with an alert
that indicates to them my
application's intent to use a
particular broadcasting service.
Once the user accepts, ReplayKit
will handle the rest and will
launch the broadcast extension
that's been paired.
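A sketch of that method follows; the extension bundle ID is a made-up placeholder for your paired broadcast upload extension:

```swift
import ReplayKit
import UIKit

class BudgetViewController: UIViewController {
    @IBAction func didPressBroadcastPairButton(_ sender: Any) {
        // Hypothetical bundle ID of the paired broadcast upload extension.
        let extensionID = "com.example.conference.broadcast-extension"
        RPBroadcastActivityViewController.load(withPreferredExtension: extensionID) {
            broadcastAVC, error in
            guard error == nil, let broadcastAVC = broadcastAVC else { return }
            // Presenting this shows the pairing consent alert instead of
            // the public-service picker.
            DispatchQueue.main.async {
                self.present(broadcastAVC, animated: true)
            }
        }
    }
}
```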
Broadcast Pairing has been
designed so that the
broadcasting app and the
broadcasting service are tightly
coupled.
So, when initiating a Broadcast
Pairing session, developers are
going to have to supply the
bundleID for the broadcast
extension.
Users will also need to accept
the broadcast pair through the
alert, which will be shown every
time you wish to initiate a
Broadcast Pairing session.
Broadcast Pairing allows you to
seamlessly integrate your entire
streaming solution all within
your own application.
Now, we know that creating
replays and Live Streams with
user commentary is a fantastic
way to get new user engagement
in your application as well as
generating a community around
your application.
That's why last year we
introduced front camera support
for ReplayKit.
And which is why this year we're
also introducing Fast Camera
Switching for ReplayKit 2.
Fast Camera Switching allows you
to change the camera feed in the
camera preview view found in
RPScreenRecorder to use either
the front camera or the rear
camera.
The camera preview view found in
RPScreenRecorder is a subclass
of UIView, so it can be added to
just about any application.
Developers are responsible for
UI elements that allow the user
to present and dismiss the
camera preview view, as well as
UI elements to allow the user to
switch the camera preview view.
Let's go ahead and take a quick
look at the API for Fast Camera
Switching.
The API for Fast Camera
Switching is really simple.
It consists of a new property on
RPScreenRecorder called
cameraPosition, which is used to
denote the current camera
position for the sharedRecorder
instance.
cameraPosition is of the
enumeration type
RPCameraPosition, which includes
RPCameraPosition.front and
RPCameraPosition.back, used for
the front and back cameras
respectively.
Let's go ahead and jump into an
example of how we can start
using Fast Camera Switching.
Wow, that photo looked a lot
better on a smaller screen.
Here, we have a function called
showPreviewView.
Inside, we're simply going to
grab the sharedRecorder instance
from RPScreenRecorder.
From there, I'll grab the
cameraPreviewView for the
sharedRecorder instance.
Since the cameraPreviewView is a
subclass of UIView, I'm simply
going to add it as a subview to
the view in my application.
And just like that, we're now
using the cameraPreviewView and
the front-facing camera for our
video commentary.
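That function might be sketched like this; the frame values are arbitrary, and enabling the camera prompts the user for consent:

```swift
import ReplayKit
import UIKit

class GameViewController: UIViewController {
    func showPreviewView() {
        let recorder = RPScreenRecorder.shared()
        // cameraPreviewView is nil until the camera is enabled.
        recorder.isCameraEnabled = true
        if let previewView = recorder.cameraPreviewView {
            // Position and size are up to you; a small corner overlay here.
            previewView.frame = CGRect(x: 16, y: 16, width: 120, height: 160)
            view.addSubview(previewView)
        }
    }
}
```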
Let's go ahead and take a quick
look at how we can use Fast
Camera Switching.
Here, I have a method called
didPressCameraSwitch.
Again, I'm simply going to grab
the sharedRecorder instance from
RPScreenRecorder.
I'm really interested in knowing
what the current cameraPosition
is.
So, in our method here, we're
going to check what the current
cameraPosition is.
And whatever it is, I'm going to
switch it to its opposite.
So here, we notice that the
cameraPosition for our
sharedRecorder is
RPCameraPosition.front.
I'm simply going to set the
sharedRecorder's cameraPosition
to RPCameraPosition.back.
And just like that, our preview
view is now using the rear
camera for its video commentary.
It really is just that easy and
it really is just that fast.
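The toggle itself can be as small as this sketch:

```swift
import ReplayKit
import UIKit

class GameViewController: UIViewController {
    @IBAction func didPressCameraSwitch(_ sender: Any) {
        let recorder = RPScreenRecorder.shared()
        // Flip between the front and rear cameras.
        recorder.cameraPosition = (recorder.cameraPosition == .front) ? .back : .front
    }
}
```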
Fast Camera Switching gives you
additional tools to help your
users create more engaging video
commentary in the content
they're creating in your
application.
We have covered a lot today.
So let's go ahead and take a
minute to recap.
In ReplayKit 2, we introduced
In-App Screen Capture, which is
a powerful new API that's going
to give you direct access to
your application's own audio,
video, and microphone content
straight from ReplayKit.
With iOS Screen Recording and
Broadcast, users can now record
and broadcast their entire iOS
screen experience straight from
Control Center.
We introduced Broadcast Pairing,
which enables you to seamlessly
integrate your entire streaming
solution all within your own
application.
And finally, with Fast Camera
Switching, you now have more
tools to help your users create
more engaging video commentary.
For more information about our
session today, visit us at
developer.apple.com.
We are session 606.
We hope you have a wonderful
WWDC.
Thank you.
[ Applause ]