WWDC2016 Session 220

Transcript

[ Music ]
>> Good morning.
Welcome to Leveraging
Touch Input on iOS.
My name is Dominik
Wagner, and I'm an engineer
on the UIKit Team,
and I will tell you
about how you can make the
most out of our advances
in Multi-Touch and
of Apple Pencil.
First, let's go over our new
and recent hardware releases.
Since last WWDC, we
released a lot of stuff.
For example, 3D Touch with
the iPhone 6s and 6s plus,
which gives you access to
the force for each touch.
I won't talk much about this,
but you can learn all about the
great experiences with Peek and Pop
in the session A Peek
at 3D Touch later today.
We introduced faster
touch scanning beginning
with iPad Air 2 and iPad Pro,
giving you a temporal
resolution
of twice the refresh
rate of the display.
We also introduced Apple
Pencil for Apple iPad Pro,
and this amazing device, thank
you, gives you a precise location
at subpixel accuracy
for your drawings.
It has an even higher temporal
resolution of 240 hertz.
It gives you access to its
tilt and orientation, and,
of course, all the force.
And our algorithms team did an
amazing job of palm rejection.
So you can rest your palm
while you are drawing
and don't have to
think about it.
We also released Apple
TV with the Siri remote,
and its trackpad mainly
drives the UIFocusEngine
and your interactions
with Apple TV.
But you can also access the
track pad using the Game
Controller framework for both
acting as a Game Controller
or also getting the absolute
position of the track pad.
And, finally, you can
also handle the track pad
as indirect touches in our
UIKit touch handling methods.
I won't talk much
about this either,
but there are the great
Apple TV tech talks,
and there's also Game Controller
Input for Apple TV yesterday.
What am I going to talk about?
I'm going to talk about how
you build a drawing app.
We will build a drawing app from
the ground up, and I will talk
about all the new
API for Apple Pencil
so you can access all
this great new data.
We will talk you
through it step by step,
and show you all the different
steps you can take depending
on what you see,
and the sample code
of this app is completely
available.
So you can just relax
on the code slides
and play with it later.
So say hello to SpeedSketch.
This is our sample.
SpeedSketch is one sheet
of paper you can draw on.
It has full support for
Apple Pencil on iPad Pro
and 3D Touch on iPhone 6S.
It also works on all
previous iOS devices.
So let's first talk
about the model.
So this is a stroke, and this
is how UIKit sees the samples.
Each of those points is one
instance, the same instance,
of a mutable UITouch
that you get handed
in the touch handling methods.
And we will model our data
as a series of strokes.
And because the UITouch
is a mutable representation
of one touch sequence,
we need to copy the data
out of this UITouch in
the touch handling methods
into something more static.
So we will build a stroke sample
as being the innermost element
of our data structure, which
for now will just contain the
location of the
UITouch, but later on,
we will fill in all the
additional data.
And we will put those
samples into a stroke.
It's just an array
of stroke samples,
and we have a little
method to add them.
And because we want
to use the stroke
as our main data capture
structure, it also has a state.
So it can be active while
the user is drawing,
done when the user is done,
and cancelled
if a different user interaction
causes it to be cancelled.
So we can throw it away
instead of keeping it around.
And, lastly, we put this
into a stroke collection,
which will just be an array of
strokes, and we will add the
done strokes there, and to make it a
complete data model for our app,
we will have the active
stroke as an optional in it.
So we can use the
stroke collection
as our data model
for a complete app.
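A minimal Swift sketch of what such a model could look like; the names loosely follow the talk, and the acceptActiveStroke() helper is an assumption, not necessarily what the shipping sample calls it:

    import UIKit

    struct StrokeSample {
        let location: CGPoint        // copied out of the mutable UITouch
        var force: CGFloat? = nil    // the additional Pencil data is added later in the talk
    }

    enum StrokeState { case active, done, cancelled }

    class Stroke {
        var samples = [StrokeSample]()
        var state: StrokeState = .active
        func add(sample: StrokeSample) { samples.append(sample) }
    }

    class StrokeCollection {
        var strokes = [Stroke]()
        var activeStroke: Stroke?        // the stroke currently being drawn
        func acceptActiveStroke() {      // move a finished stroke into the array
            if let stroke = activeStroke {
                strokes.append(stroke)
                activeStroke = nil
            }
        }
    }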
So next up is where to
capture those strokes.
So, essentially, you have
three places to do so.
The first one is a
UIGestureRecognizer,
a custom UIGestureRecognizer
subclass.
The next place to look
is a UIView subclass
where the touch goes down, and,
finally, you can handle it all the way
up the responder chain,
and I want you to think
about this in this order.
If you can do a
UIGestureRecognizer
or a custom subclass, do that.
Second, do a UIView as close
to the events as you can get,
and only if you have to,
go up the responder chain.
So this is what we're going to do.
We are building a stroke
gesture recognizer,
a custom UIGestureRecognizer subclass.
We will target our
main view controller,
and then in the action method,
we will trigger a redraw
of our view with the stroke.
So let's start building this
gesture recognizer subclass.
The first thing you
have to do is
to import
UIGestureRecognizerSubclass.
This exposes the internals of
a UIGestureRecognizer to you
so you can do subclassing.
Be aware, though, that these
internals should not be used
outside of a
UIGestureRecognizer.
For example, the state setter
is exposed if you import this,
and you shouldn't use
it outside the subclass.
Otherwise, the gesture system
will act in not so nice ways,
and you will have
several issues to debug.
So let's add our stroke
as our main data structure
in our gesture recognizer
to capture the stroke
and implement the
touch handling methods.
And since we do very
similar things
in all four touch
handling methods,
we will have a helper
method actually looking
into the set of UITouches,
determining if we are interested
in one of those touches,
adding it to our data model
just by copying the location
as a sample for the
moment, and returning to us
whether we were interested in it.
We will use this helper
method in touchesBegan
and set our state to began.
For a UIGestureRecognizer, that
is unusual, because the time
between the state possible and
the state began is the time
that different gesture
recognizers can compete
over handling
this touch sequence.
However, for our stroke, we
really want to begin immediately
when the touch goes down.
So this is exactly
what we want to do.
In touchesMoved, we do the same
and switch the state to changed.
Note that in a gesture
recognizer,
each state change
triggers an action method,
even if it's set to the same state.
And we do the same for
touchesEnded and Cancelled.
And, finally, we have to
reset our gesture recognizer
and replace the previous
stroke with a new one
so we can capture the next one.
And as a good citizen, we
always call super.reset.
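Put together, a minimal version of such a recognizer might look roughly like this; the append(touches:event:) helper is the one described above, and the coalesced and predicted touches are added to it later:

    import UIKit
    import UIKit.UIGestureRecognizerSubclass  // exposes the state setter and reset()

    class StrokeGestureRecognizer: UIGestureRecognizer {
        var stroke = Stroke()

        // Helper: copy the data we care about out of the mutable UITouch.
        private func append(touches: Set<UITouch>, event: UIEvent?) -> Bool {
            guard let touch = touches.first else { return false }
            stroke.add(sample: StrokeSample(location: touch.location(in: view)))
            return true
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
            if append(touches: touches, event: event) {
                state = .began        // begin immediately, unlike most recognizers
            }
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
            if append(touches: touches, event: event) {
                state = .changed      // every state change fires the action method
            }
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
            if append(touches: touches, event: event) {
                stroke.state = .done
                state = .ended
            }
        }

        override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
            if append(touches: touches, event: event) {
                stroke.state = .cancelled
                state = .cancelled
            }
        }

        override func reset() {
            stroke = Stroke()         // make room for the next stroke
            super.reset()
        }
    }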
Let's use this in
our ViewController.
In our viewDidLoad, we set up
our stroke gesture recognizer
with ourselves as the target
and the action strokeUpdated,
and then we add it
to our main view.
And in the update callback,
we just take the stroke
from the gesture, and
set it on our view.
That's all for now.
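In the view controller, the wiring is only a few lines; in this sketch, StrokeView is an assumed name for whatever view renders the stroke:

    import UIKit

    class CanvasViewController: UIViewController {
        let strokeView = StrokeView()    // assumed custom view that renders a Stroke

        override func viewDidLoad() {
            super.viewDidLoad()
            view.addSubview(strokeView)
            let strokeRecognizer = StrokeGestureRecognizer(
                target: self, action: #selector(strokeUpdated(_:)))
            view.addGestureRecognizer(strokeRecognizer)
        }

        @objc func strokeUpdated(_ recognizer: StrokeGestureRecognizer) {
            strokeView.stroke = recognizer.stroke   // hand the stroke to the view to redraw
        }
    }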
I told you about the
stroke collection before,
but for this first bring-up,
I like to keep it simple.
So let's see what this gives us.
I always like the moment
when something runs
for the first time.
So what you see here is
the position
of the Pencil on the display.
Let's have a look.
Oh. That doesn't look good.
What happened?
Let's have a closer
look in slow motion.
That's really not
what we want to do.
So, what did we see?
We have the position
of the pencil.
We have a really,
really big distance
to the last line we drew,
and the lines we drew were
really long and choppy.
That is not what
I was expecting.
We have a really good
temporal resolution.
Why does it look this way?
So, obviously, we
missed some events,
and we missed some events
because of multiple reasons.
One of them is our drawing
engine is implemented
very naively.
It is drawing a complete bitmap
every time an event comes in,
and that is not quick enough to
keep up with the display rate.
And we will talk about
this a little bit later.
And the more important
thing is we did not use the new
iOS 9 API that we have
to actually get to
the missed events.
And for that, I want to look a
little bit closer at a stroke.
So let's see that again.
Here are all the rich samples
we want to have and see.
And in our touch handling
methods, we always get the began
and the ended, but in
between, we get touchesMoved.
And we don't get all of
the touchesMoved that came
in as samples, and
that's for a reason.
If we delivered to you
all the touchesMoved, even
if you blocked
the main thread,
you would see something
akin to a replay
of the touch interaction, and
for live interaction,
that's not at all what you want.
You want the most recent
position delivered to you,
and that's what we do generally,
and that's what we
did before iOS 9.
We actually dropped
all the other events.
If you weren't quick
enough in the main run loop,
then you just didn't see any of
them, but beginning with iOS 9,
we give you access
to the previous ones.
And the other reason is, as
our digitizer is now faster
than the display, we don't
want to give you an event
for each data point we have,
so you would do too much work.
So we try to coalesce
them together
into one delivery per
display refresh.
So what you can do
now is,
using the live touch
you get in our API,
ask for the touches you missed,
and that includes the touch
you are currently asking for.
So you have a complete,
consistent picture.
And those touches are
called coalesced touches.
And you do that for
all your touch events.
And you do that also
for began and ended,
because you have to
be consistent:
either you handle
your live touches
or you handle the
coalesced touches.
Because there are methods
like previousLocation(in:)
that reference the previous
touch, and if you mix and match,
you run into problems there.
So now that we know
how we can get them,
let's do this in code.
The method is
coalescedTouches(for:) on UIEvent,
and you get the event in
the touch handling callbacks.
Note that the optional
on this result is not
because we will give you nil
any time you only
have one touch
and we don't have
coalesced touches.
It's because you could ask for
a UITouch that is not part
of the event, and then
you would get nil.
You're guaranteed to
always get UITouches,
at least the one you put in,
even if we didn't coalesce more.
So you don't have to do an
if statement around this.
So let's use this in our code.
In our code where we look at
the touch we're interested in,
we did appendTouch, and all
we have to do is to loop
over the coalesced touches
instead and append those.
That gives us all the data.
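As a sketch, the helper in our recognizer changes from appending the single live touch to appending everything the event coalesced for it:

    private func append(touches: Set<UITouch>, event: UIEvent?) -> Bool {
        guard let touch = touches.first else { return false }
        // coalescedTouches(for:) includes the live touch itself,
        // so this loop replaces the single append from before.
        for coalescedTouch in event?.coalescedTouches(for: touch) ?? [touch] {
            stroke.add(sample: StrokeSample(location: coalescedTouch.location(in: view)))
        }
        return true
    }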
Let's have a look
at how that looks.
Nice. Now we really
have all the data.
Let's have a comparison.
And, again, in slow
motion with a stop.
So what are you seeing here?
This is the Pencil's
current location
on the glass.
These are the coalesced touches,
colored in gray
in our debugging drawing engine.
The black one is also
a coalesced touch,
but the one corresponding
to the live touch.
And what you also see is
that there are too many
coalesced touches.
If our digitizer is running
at four times the speed
of the display,
you should see an average of
three gray ones between black ones.
These are way too many.
And we still have this
excessive gap at the end,
which leads to lag,
visible lag for users.
So what did we see?
Our drawing engine
is still lagging.
We didn't address this up
to now, but UIKit helped
by coalescing the touches,
and the final drawing really has
all the data it needs already.
So if you take only
one thing away from this,
use coalesced
touches if you really want
to have the rich
data of your Pencil.
So what is the problem
with the drawing?
You should not draw
on every touch event
because the display
refresh rate is 60 hertz.
Although we try to actually
deliver only one event per
frame you can draw,
that sometimes is impossible
because if you mix
fingers and the Pencil
or have other events coming in,
we need to deliver
them in order.
And so you need to be
prepared to receive more events
than you can draw at
the display refresh rate.
And do not try to draw faster
because this is just costing
performance and adding lag
and doing work that doesn't even
get displayed on the screen.
So when should you render?
In our example, we are using
a regular UIView with Core
Graphics, and in that case,
you should just use
setNeedsDisplay on that view,
marking that view as needing
update and giving the work
to Core Animation
to actually call the draw method
you need to implement instead
of your custom bitmap drawing.
If you're using a GLKView or
an MTKView, you can also opt
into this behavior
instead of a steady update
by setting the enableSetNeedsDisplay
property to true.
That way those views behave in
the same way if you want to.
If you need to draw at a
steady pace, then please draw
at a steady pace and not
based on the events coming in,
and you can do so
using the Metal
and GLKView internal
mechanisms, or you can do
so using a CADisplayLink
and calling display
in the callback of your display link.
What we did was we had,
in our stroke view, just
a didSet on the stroke that drew
immediately with drawImageAndUpdate.
That caused the problem; this was bad.
So let's just do
setNeedsDisplay
and move the drawing code
into the draw method.
If you're using a regular
UIView, you can do even better.
You can mark only the changed
areas with setNeedsDisplay(_:)
for a rect, but that also needs
some kind of bookkeeping
in the touch handling,
because your drawing might
be a little bit bigger
than the area covered by the
changed touches and samples.
You can look at the sample
code to have one example
of how you could do this
kind of bookkeeping.
And even further, you can
activate drawsAsynchronously
on the layer.
This hands all the drawing you
do in the draw method off to
Core Animation, and CA draws it
outside of the main thread,
which frees your main thread
for quicker event handling,
and you do so by simply setting
drawsAsynchronously to true
on your view's layer.
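A sketch of that pattern for our stroke view: mark only the dirty area and let Core Animation do the drawing off the main thread. The dirty-rect bookkeeping is only hinted at here:

    import UIKit

    class StrokeView: UIView {
        var stroke: Stroke? {
            didSet {
                // Don't draw here; just mark the changed area as needing display.
                setNeedsDisplay(dirtyRect(for: stroke))
            }
        }

        override init(frame: CGRect) {
            super.init(frame: frame)
            layer.drawsAsynchronously = true   // let Core Animation draw off the main thread
        }

        required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

        override func draw(_ rect: CGRect) {
            // Core Graphics drawing of the stroke samples goes here.
        }

        private func dirtyRect(for stroke: Stroke?) -> CGRect {
            // Stand-in for the bookkeeping described above: a real implementation
            // would track the rect around the newly added samples, padded by the
            // stroke width, instead of invalidating everything.
            return bounds
        }
    }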
Let's see how far this got us.
Again in slow motion.
So now we have the steady amount
of coalesced touches
I was talking about,
about three coalesced
touches per black one.
But we still have some lag.
However, it's way smaller
because now we really draw
at the display speed, and we
just have the remaining lag.
And how can we improve that?
So we have a facility called
Predicted Touches since iOS 9,
and predicted touches give
you a glimpse into the future.
You use them the same way as
coalesced touches: you ask your event
for the predicted touches for a
touch, and you get an array
of touches that are
in the future.
And what do you do
with those touches?
You add them to your data
structure, but only temporarily.
They change on every
event callback,
so you really have to treat
them as temporary.
And you choose their appearance
depending on your app.
I highly recommend that you make
them appear like actual touches,
look at the result, and
only if our prediction is off
by too much, tone it down
and make them appear tentative,
to still get the drawing
closer to the Pencil.
Let's look over that in code.
So now in our touch
handling methods,
after you added the
coalesced touches,
you add the predicted touches
temporarily, and you need
to make sure that you remove
the previous temporary touches first.
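A sketch of the recognizer's helper with that extra step; predictedSampleCount is an assumed bookkeeping variable for removing the previous callback's temporary samples:

    private var predictedSampleCount = 0

    private func append(touches: Set<UITouch>, event: UIEvent?) -> Bool {
        guard let touch = touches.first else { return false }

        // Remove the temporary predicted samples from the previous callback first.
        stroke.samples.removeLast(predictedSampleCount)
        predictedSampleCount = 0

        for coalescedTouch in event?.coalescedTouches(for: touch) ?? [touch] {
            stroke.add(sample: StrokeSample(location: coalescedTouch.location(in: view)))
        }

        // Append this callback's predictions, but only temporarily.
        for predictedTouch in event?.predictedTouches(for: touch) ?? [] {
            stroke.add(sample: StrokeSample(location: predictedTouch.location(in: view)))
            predictedSampleCount += 1
        }
        return true
    }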
So I will show you a video of
how that looks and, as the opposite
of what you should do,
I will highlight the predicted
touches in red so we can see
if they are good
enough for our example.
And, again, in slow motion
because this was so quick.
So these are the predicted
touches, and they get you closer
to the actual Pencil position,
which is really, really helpful
for perceived lag
on the display.
And as you can see
in this example,
this really worked out fine.
So we will just use them
and draw them the same way
we draw regular touches.
So what have we seen so far?
We have seen how to collect
the input using a custom
UIGestureRecognizer, how to
access the coalesced touches,
how to make the rendering faster
and more efficient, and, finally,
how to use the predicted
touches.
All of those techniques work
across all iOS devices.
We used them for the Pencil
right now in our examples,
but they work everywhere.
Now let's go on to the
actual new Apple Pencil API.
So let's begin with touch types.
With Apple Pencil, UITouch
gained a new property called type,
and UITouchType can
be one of three values.
It can be direct, which covers
all the previous touches you
know about.
There's indirect only for
the Siri remote touches,
and there's stylus
for Apple Pencil.
And the first thing
you can access
with Apple Pencil is
the higher precision,
and you do so by using
preciseLocation(in:),
and you also have
precisePreviousLocation(in:).
You should use those whenever
you want the precise location
for something like a drawing.
If you want to do hit testing,
you should still use the
existing ones, location(in:)
and previousLocation(in:).
But for drawing, this
really makes a difference.
Without the precise locations,
you will add some staircase
patterns to your drawing,
which you don't want to see.
And you can ask all kinds of
touches for the precise location;
for other touches, you will just
get the regular one.
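In the sample-building helper that might simply mean swapping the location call, along these lines:

    func sample(for touch: UITouch, in view: UIView?) -> StrokeSample {
        // For drawing, take the precise location; for hit testing,
        // location(in:) and previousLocation(in:) remain the right calls.
        return StrokeSample(location: touch.preciseLocation(in: view))
    }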
Next up, there's force.
Force is exposed as a
property called force,
together with maximumPossibleForce.
Those are CGFloats, and force is
in the range from zero
to maximumPossibleForce,
where 1.0 represents
an average touch.
So these are not
really physical values.
So you shouldn't do
anything that relates
to the actual force,
but you use this value
to affect your drawing.
And on all previous devices
and for regular finger touches,
it will always return zero.
A quick note on force.
Since we added force to
UITouch, there's one difference
in touch handling, and that
is touchesMoved gets called
more often.
Because you want to
be able to discern
if the force value changed,
we will send you
touchesMoved all the time now.
Previously, we were
trying very hard
to only send you touchesMoved
when the location
actually changed,
and that means the
regular location,
not even the precise location.
And that has implications
for you.
So, for example, we've seen
a lot of this in the wild.
If, in touchesMoved,
you just cancel a tap
willy-nilly, that is bad.
That doesn't work anymore.
And what you see is that
on an iPhone 6s
or with the Pencil,
if users really,
really just slightly
touch the display,
then this is what you're running
into, and you should have a look
at your touch handling code.
What should you do?
You should actually use a
UITapGestureRecognizer
if you can because
it encapsulates all
of our knowledge there, or the
least thing you have to do is
to remember the location
where the touches began
and only cancel it if you
moved enough distance away
from the original location.
So let's add our force to the
model, to our stroke sample,
and we just do that by an
optional force variable.
We will add all the other
things in there later, too,
and I won't show
this slide again.
So let's see how the force
looks in our drawing.
Nice. What did we do?
We just changed the
width based on the force.
So next up is tilt.
Apple Pencil gives you access
to its tilt towards the device,
and this is measured as an
angle between the Pencil
and the device, which
we call altitude.
The altitude angle is
exposed as altitudeAngle.
It's a CGFloat.
It reports the angle in
radians, in a range
from about 10 degrees
to 90 degrees.
And the second part
is the orientation.
The orientation is measured
relative to your device,
between
the positive x direction
and the direction the
Pencil is coming from.
And this is called azimuth.
Together, azimuth and tilt give you
the full orientation of the Pencil,
and you can drive your
UI or your datapoints
for your drawing with it.
So azimuth depends on
the orientation of the device.
So you have to call a method
called azimuthAngle(in:),
and most of the time
you probably want
to use a vector anyway.
So we exposed
azimuthUnitVector(in:) to you,
which will be a vector pointing
in the direction
the Pencil is coming
from with a magnitude of one.
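Reading both values from a touch is a one-liner each; how you map them onto your ink is up to your app. A sketch:

    func orientation(of touch: UITouch, in view: UIView?) -> (altitude: CGFloat, azimuth: CGVector) {
        let altitude = touch.altitudeAngle                 // radians, roughly 10° to 90°
        let azimuth  = touch.azimuthUnitVector(in: view)   // unit vector in view coordinates
        // touch.azimuthAngle(in: view) gives the same information as a raw angle.
        return (altitude, azimuth)
    }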
Next up, force.
So force, for the Pencil
behaves a little bit differently
than 3D Touch force, and that is
the force is measured along the
axis of the Pencil.
With 3D Touch, the force
is measured on the display
and perpendicular to
the device surface.
That makes for some difference,
and I really urge you to try
out whether you want
the actual axial force
or the perpendicular
force in your drawing tools,
because it really,
really feels different.
Luckily, it's easy to
calculate this component,
and here's the code for that.
So you can get the perpendicular
force by dividing the force
by the sine of the altitude
angle, and to make sure you stay
in the same range, you
should also clamp it
to the maximumPossibleForce.
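A sketch of that calculation:

    import UIKit

    func perpendicularForce(for touch: UITouch) -> CGFloat {
        // The Pencil reports force along its own axis; dividing by the sine of the
        // altitude angle gives the component perpendicular to the display.
        let force = touch.force / CGFloat(sin(Double(touch.altitudeAngle)))
        // Dividing can push the value past the range, so clamp it.
        return min(force, touch.maximumPossibleForce)
    }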
And there's finally
another tidbit
for Apple Pencil with the force.
It's measured inside the
Pencil, and then transmitted
over the air to the iPad, and
this has all the properties
of over-the-air transmissions.
That means they take
a little bit
of time, and data can be lost.
So instead of making you wait
for the over-the-air
transmission of the force,
we decided to give you
estimated properties at first
and update them later so you
can have the best experience.
So for that, we exposed
estimatedProperties on UITouch,
which is of the type
UITouchProperties.
So estimatedProperties
can contain force,
which for Apple Pencil
will always be the case
for the first event you get.
But also the azimuth
and altitude can be
marked as estimated.
That happens when you
come in from the sides,
and we're not a hundred percent
sure what the values will be,
or if you draw very
close to your fingers,
because our sensors can't really
detect them super-accurately.
And we tell you that
they are estimated
so you can do something with it.
For example, when
coming in from the sides,
you could backfill them
with the first solid value,
and in the sample I gave to
you, that is what I did, just
to illustrate that point.
And there's also the location,
which is an estimated property
only for the predicted touches,
which gives you an easy way
to distinguish predicted
touches from regular ones.
And for the updates, we also
have estimatedProperties
ExpectingUpdates.
This is also of type
UITouchProperties,
and currently it's only force.
In the future it might also
include azimuth and altitude, but,
currently, it's only the force
value, and if that is set,
we will send you updates in
the future, and we will do
so in our new responder method
called touchesEstimated
PropertiesUpdated, and we
will do so after the fact.
So we will send you
touches when touches begin,
and then send you updates later.
Let's walk through this.
So what do you do?
In touchesBegan
or touchesMoved,
you check the
estimatedProperties
ExpectingUpdates on your touch,
and if that is not empty,
you use the
estimationUpdateIndex property
on the UITouch, which is
an NSNumber and is only set
for touches that either expect
updates or represent an update,
and use that to store the
thing you want to update,
your current touch
sample, in a dictionary
so you can look it up later.
In touchesEstimated
PropertiesUpdated, you then look
up your sample, and using the
estimated properties of the touch,
you update just the values
that were expecting updates.
Note, though, that some
of the updates will arrive
after the touch has ended.
That's a life-cycle thing
you have to be aware of.
If you don't keep
your data structure
around after touch has ended,
you will see estimated force
at the end of your stroke,
and it will look weird.
So let's see that in code.
So we have touchesEstimated
PropertiesUpdated,
and we go through our touches.
We look up the estimationUpdateIndex;
because we are in this method,
we can just implicitly
unwrap it.
We find our sample
via the sample index.
We update the sample,
and as I told you,
we update only the
values in that method
that were expecting
updates before.
And then we update our
stroke, and to be future-proof,
we check if this touch
still expects updates,
and only if it doesn't do we
remove it from our set.
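A sketch of both halves inside the stroke gesture recognizer: remembering a sample by its estimation index when it arrives, and patching it when the update comes in. outstandingSamples is an assumed name for that dictionary:

    private var outstandingSamples = [NSNumber: Int]()   // estimationUpdateIndex -> sample index

    // Called from the touch handling helper right after a sample is appended.
    private func remember(touch: UITouch, sampleIndex: Int) {
        if !touch.estimatedPropertiesExpectingUpdates.isEmpty,
           let updateIndex = touch.estimationUpdateIndex {
            outstandingSamples[updateIndex] = sampleIndex
        }
    }

    override func touchesEstimatedPropertiesUpdated(_ touches: Set<UITouch>) {
        for touch in touches {
            // estimationUpdateIndex is always set for touches in this callback.
            guard let updateIndex = touch.estimationUpdateIndex,
                  let sampleIndex = outstandingSamples[updateIndex] else { continue }

            // Currently only force receives updates, so patch that value.
            stroke.samples[sampleIndex].force = touch.force

            // Be future-proof: forget the sample only once no more updates are expected.
            if touch.estimatedPropertiesExpectingUpdates.isEmpty {
                outstandingSamples[updateIndex] = nil
            }
        }
    }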
So let's see this in action.
So I tried to use
the azimuth angle
to do a calligraphy
pen simulation.
And now that we have all the
data we want, let's look at this
without the debugging mode.
Doesn't that look nice?
And that's literally just me
connecting all the dots we got
from the hardware.
That is not interpolating
anything
or doing anything fancy
like your drawing
engines would.
So with that, let's add
some finishing touches
to the final app.
Up to now, we just drew
a whole screen full.
We don't want to be
restricted to that.
So let's support
arbitrary canvas sizes.
For that, we put our stroke
view into a container view,
add a little bit of
shadow, and put
that into a scroll
view, and we're done, right?
Not quite.
So now we have to think about
how to handle our gestures
because the scroll view's pan
gesture recognizer now conflicts
with our stroke gesture
recognizer.
If we change nothing, then
we will always draw a stroke,
and we can never scroll.
This is not what we want.
So one way around this is
to disable scrolling
with Apple Pencil.
This would be useful for
something like an annotation app
where you would always like to
just annotate with the Pencil
and specifically disable that.
You can do that because
we added allowedTouchTypes
to UIGestureRecognizer.
allowedTouchTypes is an array of
NSNumbers of the touch types,
and it defaults to all
of the touch types.
So what we are going to do here
in this example is we will
get the panGestureRecognizer
from our scroll view, we will
set its allowedTouchTypes
to only allow direct touches
so it only reacts to fingers,
and we will change our stroke
recognizer to only allow stylus touches.
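A sketch of those two settings; the parameter names here are just stand-ins for whatever your own code calls them:

    func restrictTouchTypes(scrollView: UIScrollView,
                            strokeRecognizer: StrokeGestureRecognizer) {
        // Scroll only with fingers...
        scrollView.panGestureRecognizer.allowedTouchTypes =
            [NSNumber(value: UITouchType.direct.rawValue)]
        // ...and draw only with the Pencil.
        strokeRecognizer.allowedTouchTypes =
            [NSNumber(value: UITouchType.stylus.rawValue)]
    }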
So this obviously is
not the full picture.
In the sample code, you
can see one implementation
that switches dynamically
depending on the usage
and is a little bit
more complicated,
but it illustrates the point
that you can restrict
your touch handling
to either the Pencil
or regular touches.
And one final note on this.
There's also a new property
on UIGestureRecognizer
that is called
requiresExclusiveTouchType,
and although our gesture
recognizers are defaulting
to all the touch types, if
they see one touch and begin
to recognize, they are
stuck in that touch type.
That is so you don't
accidentally pinch
with a Pencil and a finger.
This is regularly what you
want from a UIGestureRecognizer.
If you don't want this, you set
requiresExclusiveTouchType
to false, and then
you can do recognition
between fingers and the Pencil.
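As a sketch, that is a single line on whichever recognizer should mix touch types:

    // Allow one recognizer (for example a pinch) to mix Pencil and finger touches.
    func allowMixedTouchTypes(for recognizer: UIGestureRecognizer) {
        recognizer.requiresExclusiveTouchType = false
    }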
So to summarize, I showed
you all the new properties
of UITouch so you can
make the most out of Apple Pencil.
I showed you how to use the
coalesced and predicted touches
to both have the richest data
available to your drawing
and to have the least latency.
I told you about property
estimation, why we do it,
and how you actually
update your data to get
to the full rich data the Pencil
provides, and I showed you how
to adjust gestures to either
react to Pencil or finger only.
And this is a screenshot of the
sample app that is available
that does the nice
calligraphy pen as a default,
and, more interesting to you,
it also has the debug mode
that the videos you have
seen were made with.
So you can see how your
coalesced touches behave,
how they are estimated at
the edges, and see the tilt
and azimuth, and just play
around with it to see how all
of our touch handling works.
So the full information of
this session is available
at this URL.
We have Controlling Game
Input for Apple TV from yesterday
for Siri Remote handling.
A Peek at 3D Touch will show you
all the higher-level interactions
with the force in 3D Touch for
Peek and Pop-like experiences,
and to learn more about
touch-to-display latency,
you should have a look
at Advanced Touch Input
on iOS from last year.
And with that, that's it.
Thank you very much.