Transcript
[ Applause ]
>> PETER TSOI: Hey, everyone.
Good afternoon.
And welcome to Advanced
Touch Input on iOS.
My name is Peter.
I work on the iOS Performance team at Apple.
Today with my friend
Jacob, from the UIKit team,
we would like to tell
you a little bit more
about how touch input works
on iOS and how you can use
that information to make
your applications even more
responsive to touch input.
We've got a lot to
talk about today.
As the previous slide alluded
to, reducing latency is the name
of the game when it comes
to making your applications
even more responsive.
We will talk about what latency
is and why you should care
about latency in
your application
and latency in iOS as a whole.
In order to discuss where
this latency comes from,
we will discuss and dissect
the major pieces of iOS,
which are responsible
for everything
between handling
your touch input
to drawing the pixels
underneath your finger
in response to that touch.
We have made a lot of
improvements to this system
over the last year in iOS 9,
and we would like to tell you
about the improvements
as well as some
of the APIs you guys can
use to take advantage
of all these improvements
we have made.
And finally, we would like
to leave you with some tips
and best practices about
how to find, diagnose,
and fix the performance
bottlenecks in your application.
So why should you care
about reducing latency
in your application?
Touch input on iOS is
built around the idea
of direct manipulation.
This is the idea that a user
is actually touching a physical
object with their
finger and moving it
around in a virtual space.
For example, if a user was
trying to move this circle
from point A to point B,
the expected result is
that the circle feels glued
to the end of the finger.
You'll notice that
in this example,
the circle tracks very
precisely and responsively
with the user's finger.
However, as soon as you introduce latency, or lag between when the finger moves and when the circle moves, this illusion of direct manipulation starts to break down.
In this case, you can see that the circle is lagging behind the finger, and it doesn't feel like you are moving it around with your finger anymore.
This effect is compounded
when the user is
moving really quickly.
In this case, the finger gets
a substantial distance away
from the circle, and
it no longer feels
like the finger is
glued to the circle,
rather the circle is now playing
catch-up with the finger.
So this type of latency affects
iOS as a whole, everything
from button presses to
moving objects around,
to scrolling anything, even a web page.
But we have identified a couple
of applications where increases
in latency are even
more noticeable.
One of these types of applications is drawing applications. Not only is the artist distracted by the distance between the end of the line and their finger; artists also often rely on quick, responsive updates to their application's user interface in order to quickly adjust their physical behaviors and get the result they are going for.
In addition, latency in applications like games makes the games harder to play, affecting the perceived quality of your application.
Where does this latency
come from?
It comes from a lot
of different places.
In order to discuss where
it's caused, we'll discuss all
of the different parts
of the system involved
in handling your touches and
drawing the touches in response.
Throughout the next
portion of the talk,
we will be relying heavily
on these pipeline diagrams.
So let's make sure we are all
on the same page as
to what they mean.
Up on the screen, you
see five different boxes.
Each of these boxes
represents the amount
of time a frame is shown
on the display for.
Our products refresh
the screen at 60 hertz
or 60 times per second.
So the amount of time
represented by each
of these boxes is approximately
one-sixtieth of a second.
You may hear me refer to each of
these boxes as a display frame,
a display interval,
or a display cycle.
They all mean the same thing.
Now, the vertical
lines separating each
of these boxes is
the display refresh.
This is the point in
time where one frame
on the display is swapped out
with the next one to be shown.
You will hear me refer to this
as either the display refresh
or the v-sync, again they
mean basically the same thing.
The display refresh
is important in iOS
because many important system
processes are kicked off
or triggered by this
display refresh.
So let's actually talk
about what's going
on inside this pipeline.
The first stage of the
pipeline is Multi-Touch.
This is the process where the
hardware will scan the surface
of the display looking
for touches.
On most of our products,
this can take less
than the entire display frame,
but on some of our products,
it can take up to that
entire display frame.
In order to symbolize that, we will fill in the entire box in green.
Once Multi-Touch is finished
scanning this display
and is finished filtering
out any of the noise
which was present
on this screen,
your application's UITouch callbacks will be called near the beginning of the next display frame, usually right
after a display refresh
has occurred.
This is the point in time where
your application should respond
to the touch inputs and,
in a drawing application,
maybe plot the points and
connect the dots in between.
You may also want to apply any smoothing to the line at this point.
In an application that
isn't a drawing application,
this is where you would
respond to button presses
or key presses, maybe create
views, view controllers
and present them to the user.
The amount of time spent here
is variable, but can take
up to one display frame.
So, again, I filled
in the entire box.
Once your application is done
responding to the touch events
and has updated the
state accordingly,
Core Animation will wake up
at the next display refresh
and begin translating
your views and your layers
into GPU commands that can
be rendered by the GPU.
You will notice that the
GPU does not have to wait
until the next display
refresh to begin.
It begins immediately as soon
as Core Animation has
given it the instructions
that it needs to
render the frame.
Again, the time in these
stages is variable,
based upon how complicated the
views in your application are.
Finally, once the GPU has
finished rendering your frame,
that frame is then
enqueued to be displayed
on the display once the
next display refresh occurs.
As you can tell, getting from sensing your touch on the display all the way through to drawing it can take several frames.
In this case, it
takes four frames.
So it's not instant.
In addition, this is a pipeline.
So other touches can occur while your application is still processing the previous touch; they simply move through the process at different points in the pipeline.
Let's talk about what you as
a developer have control over.
There are no APIs to alter the
behavior of the Multi-Touch
or the display hardware layers.
This is handled for
you by the system.
You exercise indirect control
over the Core Animation render
server and the GPU based
on how complicated the views
in your application are.
But you have almost complete
control over your application.
So that's where we will begin.
As I mentioned earlier,
this is the point
where you update the
state of your application
in response to the touch inputs.
For example, in a drawing
application, you plot the points
and connect them,
or you create views
in response to button presses.
This also might be where
you issue your OpenGL
or Metal commands.
The amount of time
spent here is variable.
You can optimize this to take
a smaller amount of time,
and we encourage you to do
this, but you will notice
that when we optimize
the application,
Core Animation does not slide in
to fill in the space left behind
by UIKit or your application.
This is because of how updates to your views have historically worked on iOS.
When you update the state
of your views on iOS,
you can either explicitly
commit a CATransaction
or UIKit will implicitly
generate one for you
if you update the view's properties with UIKit methods.
We will represent this
CATransaction commit
with this red dot.
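As a minimal Swift sketch of those two commit paths (the function and view here are hypothetical, and both paths are shown together only for illustration):

    import UIKit
    import QuartzCore

    func moveCircle(_ circleView: UIView, to point: CGPoint) {
        // Implicit commit: setting a view property from a touch callback makes UIKit
        // open a CATransaction for you; it is committed at the end of the run loop pass.
        circleView.center = point

        // Explicit commit: you can also batch layer changes in a transaction of your own.
        CATransaction.begin()
        CATransaction.setDisableActions(true)  // skip implicit animations for this change
        circleView.layer.position = point
        CATransaction.commit()
    }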
Now, the reason why Core
Animation doesn't slide
in to fill in the time is
because your application
is allowed
to update its state several times during one display frame,
in this case, symbolized
by the second dot.
Now, in order to reduce the
amount of redundant work or work
that will never be
shown on the display,
Core Animation will batch up all of your updates and render them once at the display refresh.
So we will only render
the combined state of both
of those Core Animation
transactions.
Once Core Animation decides
to snapshot your view
at the display refresh, it
will begin translating all
of the logical views and layers
that you have created for it
into GPU commands that can
be rendered by the GPU.
As I mentioned earlier, the
GPU starts immediately as soon
as it has the necessary
instructions
from Core Animation.
So if you optimize the amount
of time spent in Core Animation
or on the GPU, the GPU will fill in the space that was left behind.
The view debugger in Xcode
is a very helpful way
to understand how complicated
your view hierarchy is
and a great way to find
views that you can take out
and therefore optimize
your application.
However, we recognize
the need for views
that are so complicated that they can't possibly be rendered in one display frame.
So the iOS pipeline is
flexible enough to handle that.
If your application needs
to spend additional
time rendering views,
we can split the Core
Animation and GPU work
over two display frames.
Of course, this adds an
additional frame of latency,
but you can still get 60 frames
per second smooth animations
everywhere in your application,
with more complicated views.
There is no manual trigger to go
between the faster mode
and the slower mode.
This, again, happens for you
and is arbitrated by the system.
So it's important that we
understand what things can
trigger you into the faster mode
and what things can trigger
you into the slower mode.
The faster mode is called double
buffering, and we call it this
because there are two buffers,
one for the GPU to draw into,
and one for the LCD
to show to the user.
At the display refresh,
as you will recall,
Core Animation grabs a buffer for itself and the GPU to use,
and it will begin outputting
GPU commands for that frame.
Once the GPU has those commands,
the GPU will begin rendering,
and if the render completes
before the next display refresh
has to occur, we enqueue
that frame for display
at the next display refresh.
Once we reach that
display refresh,
the frame is then
swapped onto the screen
and we begin the process
with the next frame.
Again, the GPU will render
and then enqueue the frame
for display at the
next display refresh.
Once we hit that
display refresh,
these two frames
will swap places,
we will reclaim the green buffer
and continue this process,
as long as your application has
views that it wants to render.
Now, this is all fine and good.
We have finished our Core
Animation and our GPU work
within one display frame.
We have good performance.
But what happens if you
can't do all of this?
If you can't do it in one frame?
Then we fall into a mode
called triple buffering.
Again, Core Animation
will output GPU commands,
which the GPU will then
render but in this example,
the GPU hasn't finished
rendering the green frame
by the time we hit
the display refresh.
In this example, since we can't
show it yet, the blue frame has
to be extended on the
screen for an extra frame.
Core Animation now needs a third buffer to begin work on the next frame, so it allocates one.
Then Core Animation will
start outputting GPU commands
for it while the GPU finishes
rendering the previous frame.
And then the previous frame
will be enqueued for display
at the next display refresh.
This process then repeats
with the Core Animation render
server outputting GPU commands,
and the GPU will
then render them.
You get the idea.
We swap the buffers,
reclaim the buffer,
and then the process repeats.
So you might be thinking, all of
these slides say Core Animation.
What if I don't use
Core Animation?
What if I have optimized
my application
to use Metal or OpenGL?
You might think that instead of your pipeline looking like this, with your frame getting out to the display after four frames of latency, you could do it in three frames of latency.
Unfortunately that's
not the case.
Under iOS 8, if you
use Metal or OpenGL,
Core Animation still acts as an arbiter, to make sure that any updates you make to any Core Animation content on the screen are synchronized with any OpenGL or Metal updates that you make to those layers.
So you still have
four frames of latency
when you use OpenGL
or Metal in iOS 8.
So we talked about how the iOS
pipeline is flexible enough
to handle very complicated
views,
getting you 60 frames per second
animation but at five frames
of latency and how you can
optimize your application
to bring that latency
down to four frames
by optimizing what
you are drawing.
However, in this example,
there is no real way
to make it any faster, because
Core Animation needs to wait
until the display refresh
to start generating
the GPU commands.
In iOS 9, we are
removing that dependency.
You can now start Core Animation
work immediately as soon
as your application is done updating its state.
In order to take
advantage of these,
we have introduced some new APIs
and new tricks in
the iOS system.
To tell you a little
bit more about them,
I would like to introduce
Jacob [applause].
>> JACOB XIAO: Thanks, Peter.
So I would like to tell you
about some additions
we have made in iOS 9,
and how you can use them to get
even lower latency in your apps.
I will be talking about
three things today.
First, low latency
support in Core Animation.
Then, a new system for touch
coalescing, and finally,
a really cool system
for touch prediction
that we built into UIKit.
So let's get started with low
latency in Core Animation.
As Peter just showed you, in iOS 8, even with a well-optimized
app there was a limit
to how far you could
get your latency down.
By using low latency
Core Animation in iOS 9,
we can combine the app's frame with the Core Animation frame,
and this gives you
much lower latency.
The best thing about
this feature is
that it happens automatically.
There are no changes you have
to make to your app other
than optimizing your
performance.
However, there's one
thing to keep in mind,
which is that this low latency
mode is automatically disabled
when you have animations
active in your app.
This includes CAAnimations
and UIKit animations,
so if you want the absolute
lowest latency in your app,
you want to make sure to disable
those animations while touches
are active on the display.
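As a minimal Swift sketch of that idea, assuming a hypothetical CanvasView that redraws as touches move:

    import UIKit

    class CanvasView: UIView {
        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            // Applying touch-driven updates without starting UIKit animations helps
            // keep the low latency mode enabled while a finger is on the display.
            UIView.performWithoutAnimation {
                self.applyTouches(touches)
            }
        }

        private func applyTouches(_ touches: Set<UITouch>) {
            // Hypothetical: update the drawing model, then mark the view dirty.
            setNeedsDisplay()
        }
    }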
Now this system also works
with Metal and OpenGL content.
So as you saw earlier, in iOS 8,
we had to wait an
additional frame
to get your GPU content
to the display.
But with the new low
latency mode, we can now get
that content to the display
as quickly as possible
in the very next frame.
This happens automatically just
by using CAEAGLLayer or CAMetalLayer.
However, there is one thing to keep in mind in your app: you may have Core Animation content that you want to draw along with this OpenGL or Metal content.
In this case, the GPU content
will be drawn to the display
as quickly as possible but your
Core Animation content may take
a little longer to go through.
And if that happens, then by
default, it's not guaranteed
that your GPU content will
arrive in the same frame
as your Core Animation content.
Now this may be a problem if you
want those two to be in sync,
and that could happen,
for example,
if you had an OpenGL map
view, that you wanted
to draw UIKit content on top of.
In a case like that, you may want to synchronize those updates, and there's a property that allows you to do that, called presentsWithTransaction. It's on CAEAGLLayer and CAMetalLayer.
When this is set to false,
which is the default value,
then it will get your GPU
content to the display
as quickly as possible.
But when you set it to true,
then we synchronize
your GPU content
with the Core Animation
content so they both appear
on the display at the same time.
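As a minimal Swift sketch of that property in use (the bare CAMetalLayer here is created just for illustration):

    import QuartzCore

    let metalLayer = CAMetalLayer()

    // false (the default): the GPU frame is presented as soon as it is ready.
    // true: presentation is synchronized with the Core Animation transaction, so the
    // Metal or OpenGL content and the Core Animation content land in the same frame.
    metalLayer.presentsWithTransaction = true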
All right.
Next, let's talk about
touch coalescing.
But before we do, I would
like to tell you a little
bit about the iPad Air 2.
We introduced the
iPad Air 2 last year,
and it has a 60-hertz
display update rate,
which means the display updates 60 times per second, like our other iOS devices.
It also has a really cool
feature that affects touch
and touch latency that
I'm really excited
to announce to you today.
And that's the fact that it has
a 120-hertz touch scan update
rate [applause].
It's pretty cool.
This means that it scans for
touches at twice the rate
of all other iOS devices.
And this is great, because you
can get a lot more information
about where the user's
finger is going
as they interact
with the display.
Let's take a look at how this
affects your app in practice.
With the 60-hertz touch scan
rate, as the user's finger moves
across the display, we
will periodically sample
where that finger is and give
that information to the app.
The same thing happens with
the 120-hertz scan rate,
but since it's twice as fast,
you get twice as many samples,
and this gives you a
lot more information
about what the user is doing.
Now, once we get those samples,
we will pass them to your app,
and you can use those to
know what the user is trying
to do with their touches.
For example, in a drawing app,
you might connect them together
to represent the drawing that
the user is trying to perform,
and this 120-hertz information
gives you a lot more information
that you can use to get a better
representation of their drawing.
So now that we have
seen the benefits
of 120-hertz touch scan
rate, let's take a look
at how it affects the
touch to display pipeline.
This is the 60-hertz
touch scan rate pipeline
that we saw earlier.
And let's focus in on
just the Multi-Touch stage
of the pipeline.
At 60 hertz, we get a new
touch sample every frame.
And with 120 hertz, we'll now get two samples every display frame.
However, note that the size
of the display frame
is still the same,
since we have the same update
rate for the display itself.
Now, we could take those new
touch samples and pass them
to your app, and your app could
use that to update its drawing,
which would update what it
showed in Core Animation
and on to the display.
But you will notice that if we did, your app would actually
be updating twice as often
as the display updates,
which would lead
to wasted work in your app.
So we introduced the
touch coalescing system
to get the best of both worlds.
This allows you to get
the increased information
from the 120-hertz
touch scan rate, but without any wasted work in your app from drawing too often.
Let's take a look at how
this pipeline changes
with coalescing.
Now we will only deliver a touch
to your app once
per display frame,
so when the first touch comes
in, we will deliver that to you.
And then in the next frame,
we will deliver you the touch
for that frame, and also
any intermediate touches
that happened since the
last time we sent touches
to your app.
This repeats each time the
user makes more touches
to the display.
We'll give you the current
touch and any coalesced touches,
and that continues as long
as touches are active.
Now, the API to use these
coalesced touches is
really simple.
It's a new method on UIEvent called coalescedTouchesForTouch:.
You pass into that method the
touch that you're looking at,
and we'll give you back an array
of all the coalesced touches
since the last time we delivered
that touch to your app.
To get a better idea
of how to use this API,
let's look at how touch handling
works in iOS in general.
When the user first
touches the display,
we will call touchesBegan on your app. As their finger moves, we will call touchesMoved, and finally, as their finger is removed from the display, we will call touchesEnded.
Now, as we are talking
about these touch callbacks,
another very important
callback is touchesCancelled.
This gets called when
the stream of touches
to your app is interrupted.
For example, if the user
swipes from the bottom
to activate Control Center.
In that case, your app will get some of the initial touch callbacks, and it will get touchesCancelled when the system gesture takes over.
It's important to implement
this method to do any cleanup
for things that you started in
the previous touch callbacks
and roll back any
changes you made.
For example, in a drawing
app, you might want
to remove the line that
the user was drawing.
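As a minimal Swift sketch of that kind of rollback (the canvas class and the Line model keyed by touch are hypothetical):

    import UIKit

    class SketchCanvas: UIView {
        // Hypothetical drawing model: one in-progress line per active touch.
        struct Line { var points: [CGPoint] = [] }
        private var linesInProgress: [UITouch: Line] = [:]

        override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
            // The touch stream was interrupted (for example by Control Center),
            // so roll back what touchesBegan/touchesMoved started.
            for touch in touches {
                linesInProgress.removeValue(forKey: touch)
            }
            setNeedsDisplay()
        }
    }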
So now that we have seen how
these touch callbacks work,
let's see how they interact
with coalesce touches.
The touches we deliver to all
of those callbacks are
what we call main touches,
and these work the same with the
120-hertz scan rate as they do
with 60-hertz devices.
However, with the coalescedTouchesForTouch: method,
you can get access
to more information
with these coalesced touches.
The coalesced touches
not only have information
about the intermediate touches
but they also give you a copy
of the main touch itself.
And the great thing about this
is that it allows you a choice.
You can look at the main touches
if you don't need the
increased information
of the higher touch scan
rate in your app, or,
if you want that
information, you can look
at just the coalesced touches
and you don't have to worry
about the main touches.
So now let's revisit the touch
sequence that we saw and see how
that works with both main
touches and coalesced touches.
As the user's finger comes down,
we will give your app a
main touch, and also a copy
of that as a coalesced touch.
Then as their finger moves,
we will deliver you new
main touches and a set
of coalesced touches
for each one.
And finally, as their
finger leaves,
we will give you
the last main touch
and any remaining
coalesced touches.
Now here, I have shown only one
or two coalesced touches
for each main touch.
But it's important
to keep in mind
that your app can receive a different number of them.
If your app takes a long
time to process a touch,
then we will give you some
time to catch up and wait
to send you new touches
until you have caught up.
And if this happens, then the touches that weren't delivered to you will be sent to you later as coalesced touches.
So make sure that you don't
hard code any dependencies
on the number of coalesced
touches that you receive.
Now, there are a few
differences between the way
that coalesced touches
behave and the way
that main touches behave.
One of those is related
to previous location.
And previous location is
something that you can get with the previousLocationInView: method on UITouch.
For main touches, this gives
your app the last location
that that touch had when
it was delivered to you.
And for coalesced touches,
it behaves very similarly.
It gives you the location of the last coalesced touch delivered to your app.
And this is one of the reasons
that it's really important
to focus on just
the main touches
or just the coalesced touches.
That way, you won't
get any confusion
with the previous locations.
So it's really important
not to cross the streams.
[ Applause ]
Now, another difference
between main touches
and coalesced touches is
in how the UITouch
objects themselves behave.
With main touches, the same
UITouch instance is reused every
time the touch is
delivered to your app.
This is helpful because it
allows you to differentiate
between different touches if
the user has multiple fingers
on the display at once.
And for coalesced touches, this
works a little bit differently.
There, each time we deliver a
coalesced touch to your app,
we deliver a new
UITouch instance
that has the new properties.
And so you can think of
these as snapshots instead
of the shared identity
that the main touches have.
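As a minimal Swift sketch of leaning on that shared identity (the view class and per-finger state here are hypothetical):

    import UIKit

    class FingerTracker: UIView {
        // Hypothetical per-finger state, keyed by the main UITouch instance,
        // which the system reuses for every callback of that touch.
        private var strokeColors: [UITouch: UIColor] = [:]

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            for touch in touches {
                strokeColors[touch] = .black
            }
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            // The same instance comes back here, so the lookup still matches.
            for touch in touches {
                strokeColors.removeValue(forKey: touch)
            }
        }
    }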
So now that you understand
how touch coalescing works,
let's dig into some code for
how to use coalesced touches.
This is some code that
you might have in an app
that does something like drawing
and you could have something
like this in touchesMoved.
Here we are iterating through
the touches that we have,
and we are grabbing the line
that corresponds to each touch.
Then we are adding the
latest touch as a new sample
onto the end of that line.
And to add touch coalescing
support, we just need
to add this small bit of code.
Now, we are iterating through
all the coalesced touches
for the given main
touch, and for each
of those coalesced
touches, we are adding it as a sample to the line.
Notice that we are only adding the coalesced touches as the samples, not the main touches.
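As a Swift sketch of what that touchesMoved code might look like (Line, lines, and addSample(at:) are hypothetical stand-ins for the slide's drawing model; coalescedTouches(for:) is the Swift spelling of coalescedTouchesForTouch:):

    import UIKit

    class DrawingCanvas: UIView {
        // Hypothetical drawing model from the walkthrough above.
        final class Line {
            private(set) var samples: [CGPoint] = []
            func addSample(at point: CGPoint) { samples.append(point) }
        }
        private var lines: [UITouch: Line] = [:]

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            for touch in touches {
                // Grab the line that corresponds to this main touch.
                guard let line = lines[touch] else { continue }

                // The coalesced array already contains a copy of the main touch,
                // so only coalesced touches are added as samples.
                for coalescedTouch in event?.coalescedTouches(for: touch) ?? [touch] {
                    line.addSample(at: coalescedTouch.location(in: self))
                }
            }
            setNeedsDisplay()
        }
    }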
And that's touch coalescing.
Now I would like to tell
you about touch prediction.
This is a really cool system
that we have added right
into UIKit that you can use
to get even lower
latency in your apps.
Now, as we deliver new touches to your app, we will also give you a look into the future at what we predict the user's touches will be doing.
And the API for this works
very similarly to the API
for coalesced touches.
It's another method on UIEvent
called predictedTouchesForTouch:.
Once again, you pass in a
main touch to this method
and you will get back an
array of predicted touches.
And you can use those predicted
touches to update your drawing
or whatever else you are
doing with the user's touches,
to get even lower latency.
So earlier, we saw
how main touches
and coalesced touches
are related,
and predicted touches work
in a very similar way.
They are another set of touches
associated with a main touch,
and they behave as snapshots
just like coalesced touches do.
Now, one thing that's different
about predicted touches compared
to coalesced touches
is what happens
when new touches come in.
As you get a new main touch,
you will get a new set
of predicted touches, and the
new predicted touches are the
only thing you want to use then.
Any previous predicted
touches are no longer useful
since we now have information
about where the user actually
touched during that time.
So you generally want to throw
those old predicted touches out.
Now, previousLocationInView: works similarly
for predicted touches as it
does for other touch types.
It points to the location
that the previous
predicted touch had, or,
for the first predicted touch,
it points to the last location
that was delivered to your app.
So you may be wondering how we
actually get these predicted
touches, and it's pretty simple.
We built a time machine into
every iOS device [laughter].
That's not quite how it works.
What we are actually doing
is looking at the touches
that are delivered to
your app and using a set
of highly tuned algorithms
to determine
where the user's finger looks
like it's going at this time.
And as we get new touch samples,
we will update our prediction
and deliver new predicted
touches to your app.
Now, each of those predicted touches is a complete UITouch object with all of its properties filled out, like its location and timestamp.
So now we can look at how those
predicted touches affect the
pipeline that we
have been looking at.
This is what we saw
earlier from main touches
and coalesced touches,
and we can easily add
in predicted touches as well.
Each frame, as your
app gets a main touch,
you also get a set
of predicted touches.
And if you get main touches
and coalesced touches as well,
then the predicted touches
are just more information
that you have available to you.
And this process just repeats
as new touches are delivered.
One thing to note is
that coalesced touches
and predicted touches
are independent.
You can use one without
the other,
and predicted touches are
supported on both 60-hertz
and 120-hertz touch
scan rate devices.
So now let's take a look at
how we can add touch prediction
to the code that we saw earlier.
All you need to do is add
this small bit of code,
and what we are doing is first
removing any previous predicted
touches that we added
to our line.
And this is important since we
now have the actual locations
where those touches were.
Then we are iterating through
the predicted touches we have.
And for each of those predicted
touches, we are adding it
as a sample to our line.
But notice that we are
calling a different method here
to add the predicted sample than the one we call to add the normal samples.
This is so that we can mark
that sample as something
that will need to be
removed the next time we go
through this code.
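As a Swift sketch of that predicted-touch handling under the same assumptions (removePredictedSamples() and addPredictedSample(at:) are hypothetical helpers; predictedTouches(for:) is the Swift spelling of predictedTouchesForTouch:):

    import UIKit

    class PredictiveCanvas: UIView {
        // Hypothetical drawing model extended so predicted samples can be
        // removed again once the real touches for that span arrive.
        final class Line {
            private(set) var samples: [CGPoint] = []
            private var predictedCount = 0
            func addSample(at point: CGPoint) { samples.append(point) }
            func addPredictedSample(at point: CGPoint) {
                samples.append(point)
                predictedCount += 1
            }
            func removePredictedSamples() {
                samples.removeLast(predictedCount)
                predictedCount = 0
            }
        }
        private var lines: [UITouch: Line] = [:]

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            for touch in touches {
                guard let line = lines[touch] else { continue }

                // Throw out the previous predictions; we now know where the
                // finger actually went during that time.
                line.removePredictedSamples()

                // Real samples, as before.
                for coalescedTouch in event?.coalescedTouches(for: touch) ?? [touch] {
                    line.addSample(at: coalescedTouch.location(in: self))
                }

                // Predicted samples go through a separate method so the next
                // pass through this code can strip them again.
                for predictedTouch in event?.predictedTouches(for: touch) ?? [] {
                    line.addPredictedSample(at: predictedTouch.location(in: self))
                }
            }
            setNeedsDisplay()
        }
    }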
So that's touch coalescing
and touch prediction.
Now that you have seen
all of these techniques,
let's see what happens when
we combine them all together.
In iOS 8, with a
well-optimized app,
this was the touch latency that you could get.
We measured the latency
as the time
between when the touch
first comes down and
when the display has updated
with that touch's information.
And so you can see
that in iOS 8,
we would have four
frames of latency.
Now by using low latency
Core Animation in iOS 9,
we can remove one frame
of latency from that.
And by using touch
coalescing and running
on a high-touch-scan-rate
device,
you can not only get
increased information
about the user's touch, you
can also remove a half frame
of latency from the
beginning [applause].
But there's more!
By also using touch prediction
you can get information
about approximately a
frame into the future
of where the user's
touches are going.
And this lets you give the user
an effective latency that's
reduced by about a
frame more as well.
And so altogether, in iOS 9,
you can get down to
approximately one
and a half frames of
latency for your users
which is a huge improvement from
iOS 8's four frames of latency.
[ Applause ]
So we think this
is really great,
and I highly encourage you
to adopt these techniques
in your app to give your users
a great low latency experience.
Now I would like to turn
things back over to Peter
to tell you how to
fine-tune your app.
[ Applause ]
>> PETER TSOI: Thanks, Jacob.
So now that you know about all
of the new low latency modes
in iOS 9, we would like to tell
you how you can take advantage
of those by fine-tuning your applications to fit within one display frame of time, so you can get your frames
out to the display that quickly.
The first way to make sure your application fits in that window is to minimize the amount of work that your application needs to do.
By using the coalesced touches
API that Jacob just introduced
to you, you can get the benefits
of the high-fidelity touch input on the iPad Air 2
while making sure
that your application
only renders images
that will make it
onto the screen.
In addition, keep in mind
that your user only cares
about the content that they can
see on the device's display.
Your application may do the
work to keep track of the state
of the world outside of
the screen, but ultimately,
you should make sure that the
rendering work is restricted
to only the work that is
necessary to generate the image
that ultimately shows
up on the screen.
If you are trying to
profile your application,
to figure out how much time
your application is spending
on the CPU, Time Profiler
is a great way to do this.
Time Profiler will show you how
much time your application is
using on the CPU by sampling
it at a fixed interval.
In this case, in Time Profiler,
I selected a 16-millisecond
interval,
which corresponds roughly
to one display frame.
You can also tell
that my application
in this case is using
only a small fraction
of that amount of time.
In this case, 3 milliseconds.
Now this is all fine and good
if you are trying to measure
and profile how you are
doing in terms of CPU work.
What about GPU work?
The frames per second gauge
in the GPU report available
in the Xcode debugging session gives you a high-level view
of your application's
GPU performance.
In this case, you can see
that this application is
hitting 60 frames per second,
and it has a relatively low
amount of GPU frame time.
In this case, just
3.8 milliseconds.
Keep in mind, though, that this
is just a high-level overview
of what your application
is doing.
It doesn't give you
fine-grained information
about individual frames that may
be causing you to drop frames.
If you require that type of
precision, then you will want
to turn to the new
GPU driver instrument,
which we have introduced
in Xcode this year.
The GPU driver instrument can
show you exactly how long the
GPU is active for while you
are using your application.
In this case, you can see
that the amount of time spent
in the vertex and
fragment shaders
of my application
is relatively small.
In fact, it's just a small
fraction of the amount of time
that a frame is shown
on the display for.
Notice you only see
two colors here.
These two colors represent
the two buffers which are used
in the double-buffering scheme.
If your application is spending more time in Core Animation and on the GPU, you will see three colors here
to represent the
triple buffering
which is happening
in the system.
We have talked a lot about
reducing latency and how
to make your application more
responsive, but ultimately,
building a great iOS experience
is about building a natural
and intuitive experience
for your user,
and making your application feel
more alive is another great way
of doing that.
Over the last year, we thought
long and hard about each part
of our system, and we found ways
to make it better
and faster than ever.
Throughout this process,
we have improved our APIs
to give you more control over, and more information about, how the system works.
With the new low latency
modes on OpenGL, Metal,
and Core Animation,
you have more control
over when your frame is shown to
the user and how it synchronizes
with any other content
you have on the screen.
With touch coalescing, you
can take advantage of all
of our hardware and all of
its awesome capabilities
to provide that to your user.
And with touch prediction, we
are offering you a small glimpse
into the future as to where
that touch is going to go.
And finally, we have built
and created some great tools
in order for you to
understand the performance
of your application so that
you can improve upon it
to provide an even better
experience for your users.
We at Apple are committed
to making the experience
of using our products
feel more alive than ever,
and we think reducing latency
is a great way of doing that,
and we would like to invite
you along on this journey.
You can find more information
about the technology, tools,
and APIs we've discussed
today at developer.apple.com.
We would also like to
invite you to take part
in the developer
technical conversation
on our developer forums.
We talked about a lot
of different new
technologies today,
and there have been really
great sessions both this year
and in previous years
that go over topics
which are related to this talk.
For example, if you
are very interested
in profiling the GPU
performance of your application,
and if you are really, really,
really excited to get your hands
on that new GPU instrument,
I would like to point you
at the Metal Performance
Optimization Techniques talk,
which was given earlier
today, where they talk
about a whole bunch of different
techniques that you can use
to optimize your GPU work, not
just if you are using Metal.
In addition, if Time Profiler is
your jam, then you want to check
out the new Profiling in Depth
talk, which was given yesterday,
which goes into a deep dive
on how to use Time Profiler
to get a great idea of what
your application is doing.
And finally, if you guys
are really interested
in what is happening in the
Core Animation and GPU stages
of the pipeline we
have discussed today,
I would like to point you
to the Advanced Graphics
and Animation talk
from last year's WWDC.
All of these talks and
many, many more can be found
on our developer portal
at developer.apple.com.
I hope you learned a lot today
and over the entire course
of this week, and I hope
you had an enjoyable WWDC.
Thank you.
[ Applause ]