WWDC2010 Session 120

Transcript

>> Good morning everyone and welcome.
My name is Brad Moore.
I work on keyboards, text editing and of course this
and today Josh Shaffer and I are going to walk you
through Gesture Recognition, which is a new way
of dealing with Multi-Touch input on iPhone OS.
Now before we get into the nuts and bolts
of Gesture Recognition I think it's
worth asking why does this matter to us?
Why do we care about Multi-Touch processing?
And at one level, simply there's no alternative.
iOS, iPad, iPhone, they are all
built around touch-screen interfaces.
So if you want to respond to user events
they are going to be Touch events.
But more importantly touch-screen interfaces
make for a really incredible user experience.
And you as app developers want to tap
into that magical interactive quality.
So how do you get there?
Well if you want to live up to the
potential one thing you want to make note
of is why Touch interfaces are easy to use.
On our system, it's direct manipulation.
You put the fingers on the screen and
you physically manipulate content.
It feels great and nothing could be simpler.
When the iPad first launched you saw videos up on
YouTube of first-time computer users including toddlers,
a 99-year-old woman, even a cat and they spend one
minute with the iPad and they know how to use it.
And it's because of this direct manipulation.
It's easy and it feels great so
you want to be shooting for that.
Another reason it's easy to use is that we
have a common set of gestures and behaviors.
It's a vocabulary that works system wide.
So, for instance, when you encounter new content
one of the first things you do is you tap on it.
You tap on an icon in the Home screen an app launches.
You tap on a text field, a keyboard launches, comes up.
You tap on a button and it responds.
Tap is so easy, so familiar, and if you come from the mouse-and-pointer
world we don't begrudge you that;
it's similar to a click.
Something that has no counterpart in
the mouse and pointer world is pinch.
You put two fingers down on the screen move them together
or apart and the content just magically tracks and scales.
It feels great.
There's no counterpart in the mouse and pointer world
because it's a really difficult behavior to implement.
You either have a fiddly slider or a
text box you enter a percentage into.
On iPhone we have pervasive scalable
content because pinch is so natural
and it works all over the place and users expect it.
Other gestures we have include swipe.
You swipe between photos, items in a set.
If you swipe an item in a list it will let you delete that.
We have a pan gesture.
When you encounter scrollable content you just put
your finger down on the screen and move it along
and the content glides beautifully with your
finger and it has a little inertial scroll.
It works sometimes free form, sometimes
it's locked to a particular direction,
but it works everywhere and it works great.
One other gesture we support is a bit more
advanced, it's the press-and-hold gesture.
And it's kind of hard to discover
because you put your finger there
and don't move it for a while and then something happens.
But it gives you advanced things on the Home screen.
An application icon starts jiggling.
You can move it around.
In a text field you can move the
cursor, and bring up a magnifier.
And in all sorts of places it brings up the copy/paste UI.
Now I said it's a little less discoverable but because it's
consistent across the system, once you've encountered it
in just one place you know to try it
everywhere and that goes for all these gestures.
It's a toolbox that users carry around with
them and expect to work in your apps.
So it's easy for users.
I think you know where this is going.
It's hard to write.
There are unique challenges to touch-screen
interfaces that you as developers face.
One of those is the limited precision
inherent in touch input.
We give you a coordinate that at first blush looks kind
of like a mouse coordinate but it's nothing of the sort
because it's backed by a finger, which
has all the precision of a wobbly sausage.
You try to put it where you want and it doesn't go there.
You try to move it by a certain amount and it
doesn't move in the direction or the amount you want.
You can't even hold it steady, which is the
easiest thing in the world with a mouse.
So you can't really think of that coordinate as a
precise coordinate so much as a disc of possibilities.
Another problem you face is the multi in Multi-Touch.
Again there is an analogy when you put your
finger down you move it and you lift it.
It's almost like mouse down, mouse move, mouse up right?
Well sort of, but only if your computer
has 20 mice attached to it and each one
of them is controlling an independent pointer.
So the added overhead of those different
points of contact onscreen that you have to track,
let alone trying to make sense of
them, makes your lives a lot harder.
But I should mention that these two problems I've
listed here you can solve systematically if you ensure
that your User Interface elements are
of a certain size and if you make sure
that your User Interface elements respond
independently to Touch events you can get really far.
There's another problem though that's
really hard to solve systematically
and that's the inherent ambiguity of Multi-Touch interfaces.
Now what do I mean by that?
Well I'll take Safari as an example. You come to
a page, a finger comes down
on the screen, and you don't yet know what it means.
The finger is over a link, maybe you follow a link.
It could be a tap.
But it could very well also be a double-tap, so when
the finger lifts you don't want to immediately respond
to that tap. And maybe it's not
just one finger that's coming down.
If you wait a bit another finger is going to come
down and it's a pinch. And then again maybe one
or more fingers are going to come down and move a
bit in the same direction, in which case it's a pan.
And maybe it's going to wait there for a while and
not move dramatically, in which case they want to bring
up an action sheet. And as if those weren't enough gestures
to support, you also have the possibility that they're going
to tap and then hold, in which case we
bring up an accelerated selection UI.
So this is a bewildering array of possibilities.
And you see this sort of thing not
just in Safari but in all sorts of apps
and different contexts, and it leads to a paralysis.
When a finger comes in contact with the
screen, I know I want to do something
but I don't know what gesture it is so I have to wait.
I have to guess.
So there are these suboptimal solutions.
And you can understand the first, waiting.
It comes out of a very good intention of
responding to the full set of gestures, right.
I wait until the finger is off the screen or I
wait until a certain amount of time has gone by
and then I can make my determination very accurately.
But what it leads to in practice is I move
my fingers and then the content follows
and that just kills the direct experience,
the direct manipulation experience.
So as a user and a framework developer
nothing makes me sadder or more frustrated.
So we can't do that, it would make me sad.
If you recognize the problem of latency,
well maybe we'll say I'll take my best guess.
I'll take my shot at it.
And when a finger comes down well if it's in an
area of the screen that isn't otherwise interesting,
I'll wait for another finger to
come down, see if it's a pinch.
But if it comes down on a link I can reasonably infer
that I should follow the link and it does minimize latency
but it's pretty frustrating because you swipe over
a link to move the page and you follow the link instead.
So that's not a solution either.
What usually ends up happening at this point is you
say, "Boy, I've got this wonderful Multi-Touch interface
but I'm going to have a mono-gesture interface.
I'm going to tap everywhere.
I tap to zoom.
I tap to follow a link," and you
litter your application with buttons.
And, yeah, tap is direct manipulation
but you can do so much better.
Now I should say there's a fourth solution.
Some of you out there recognize all of these problems
and you write a very tight state machine
so there's a minimal amount of latency.
You correctly infer user intent and things just work great,
except they don't, because you haven't used our definition,
the system-wide definition of pinch,
swipe and scroll; you've got bizarro swipe.
So the user comes into your app and it's self-consistent
but it's not consistent with any other app on the system.
So even when you do things perfectly you fail.
So touch-screen interfaces are
easy to use but hard to write.
Obviously our goal today and with UIGestureRecognizer
is to make it very easy for you guys to use.
So the topics we're going to cover include a brief recap
of Touch handling, we'll contrast that to gesture handling
and we're going to step into the mechanics of gesture handling
so you know what UIGestureRecognizer is doing
on your behalf when you're employing it.
Then we're going to go into the
nitty gritty details of the API.
Finally we'll talk about how gesture recognizers
interact with each other
and how they interact with normal Touch event delivery.
So first off, Touch handling, this should all be old hat.
A finger comes down on the screen
and it's bound to a UITouch, right?
For the lifetime of that finger's contact with
the screen the same UITouch is associated with it.
Also when a finger comes down on the screen it's
hit-tested to a view and that UITouch is bound to that view
for the duration of the finger's contact with the screen.
And finally changes to that touch are delivered
to the UIView via a set of methods on UIResponder.
touchesBegan, touchesMoved,
touchesEnded and touchesCancelled.
All this sounds familiar right?
So how would you go about detecting a
gesture using this raw event handling?
Well first of all you need to subclass UIView so you
can store state and respond to these responder methods.
Here I'm looking for a swipe so
I store away the first touch.
And I store away its initial location.
So in touchesBegan I see if it's the first
touch and if so I pull it out and get its location.
In touchesMoved I look at the current location and
then I say, "Well if it's beyond some horizontal threshold,
and beneath some vertical threshold, looks like a
swipe," and in touchesEnded I just clear the state.
Not so bad, right?
Well there are a lot of problems with this.
One, I've had to subclass to store
state; two, it's not reusable.
If I go to any other view on the system
I have to copy/paste this code at best
and three, this is a horrible definition of swipe.
Please recognize this: don't paste this code into your app.
There is a point here and a point here, and if my finger moves
between them along any path it's going to count as a swipe,
and it doesn't pay attention to velocity.
There are so many nuances it doesn't get right.
So you might go in and try to tweak it and
you'll have a more restrictive definition.
But then it's not going to be
the system definition at all.
So this is a mess, and it was the state
of the art until UIGestureRecognizer.
So let's see what gesture handling is.
Well to use gesture recognizers you instantiate a
pre-defined UIGestureRecognizer, optionally configure it.
You set up one or more handlers.
These are your target action pairs and then
you just add it to the view and you're done.
So let's look at this example of
swiping with UIGestureRecognizer.
Here I've subclassed UIView, though I don't really need to,
and in its initWithFrame method I'm
allocating the SwipeGestureRecognizer.
I'm setting its target to self and its
action to the swipeRecognized method.
So when the gesture is recognized
swipeRecognized will be called on me.
Then I add it to the view and because the view retains
the GestureRecognizer I just release it, and that's it.
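Sketched out, that setup is just a few lines of MRR-era Objective-C, as in the session; swipeRecognized: is the handler named above:

    - (id)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc]
                initWithTarget:self action:@selector(swipeRecognized:)];
            [self addGestureRecognizer:swipe]; // the view retains it...
            [swipe release];                   // ...so we release our reference
        }
        return self;
    }

    - (void)swipeRecognized:(UISwipeGestureRecognizer *)recognizer {
        // Called only when the system definition of swipe is matched.
    }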
This is reusable.
I don't have to do any of the work of detecting
the gesture, that's left to the GestureRecognizer
and best of all I get the system definition of swipe.
So when a user comes to my app it feels
just like every other app on the system.
Okay so hopefully that whets your appetite.
We'll get into more details later.
But now the mechanics, how does Gesture Recognition work?
Well we're familiar with the idea that when a finger
makes contact with the screen it's hit-tested to a view.
Then a sequence of touches are
delivered to that view over time.
But when you add a Gesture Recognizer to
the mix the Gesture Recognizer receives the touches
in addition to the view receiving them.
So it looks kind of like this when the
swipe comes through and it compares it
to its own internal definition of the gesture.
So let's look at that again more slowly.
A swipe comes in, and when it finds something that
it deems to be a swipe it notices that it's a match
and it messages your handlers, your
target-action pairs, and things proceed.
It's actually very uncommon for a
gesture to be recognized though.
Most of the time Gesture Recognizers are not matching,
so what happens there is something comes down,
something happens to make it reject the
definition and then no messages are sent
to your target-action pairs and things proceed.
So if we add another Gesture Recognizer to the mix.
We're still getting the multicast touch
delivery but something I want to point out is
that Gesture Recognizers don't know about each other.
They independently process touch input sequences.
For example, I have a Tap Recognizer
and a Swipe Recognizer here.
They're going to be looking at sequences of
touch input and not know a thing about the other
and they'll independently transition to a recognized
or failed state messaging target-action
pairs along the way as needed.
So let's look at a more complicated example
where there are multiple views to consider,
we have a container view and two subviews.
Well let's add Gesture Recognizers to
the first subview and the second subview.
An important property of Gesture Recognition
is that if a touch isn't delivered to a view,
then that view's Gesture Recognizers don't do any processing.
But when a touch is delivered to a view
its Gesture Recognizers receive it.
So if subview one receives the touch
the Tap Gesture Recognizer sees it.
If subview two receives the touch the
Swipe Gesture Recognizer receives it
and the Tap Gesture Recognizer does not
receive any touches delivered to subview 2.
And just to drive home the example, if a touch is delivered
to the container view absolutely no Gesture
Recognizer sees that input sequence.
And this is really important because this contextual
analysis means we're not considering all possible gestures
at all possible times.
We're only doing this as needed and that's
really important on an embedded device
where you don't have infinite processing resources.
So let's extend this example and add a
Gesture Recognizer to the container view.
We've added a Pinch Gesture Recognizer here.
And normally views only receive
touches that are hit-tested to that view
and you might expect the same thing
from Gesture Recognizers.
But what actually happens is the Gesture
Recognizer receives touches delivered
to that view and to descendants of that view.
So here we're going to have a finger come down in the
first subview, a finger come down in the second subview
and only the Pinch Gesture Recognizer
is going to know about both of them.
Okay and it matches.
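Sketched in code, with hypothetical containerView, subview1 and subview2 variables, that whole setup is just three addGestureRecognizer: calls:

    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(tapped:)];
    [subview1 addGestureRecognizer:tap];   // sees touches in subview1 only
    [tap release];

    UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(swiped:)];
    [subview2 addGestureRecognizer:swipe]; // sees touches in subview2 only
    [swipe release];

    UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(pinched:)];
    [containerView addGestureRecognizer:pinch]; // sees touches in the
    [pinch release];                            // container and both subviews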
So really that's how it works in a nutshell.
We send Touch events to more than one thing,
the Gesture Recognizer in addition to the view
and Gesture Recognizers independently compare that
input sequence to their own internal definition.
They only do this analysis if they've
been attached to a view.
If no Gesture Recognizer is attached no processing
is going on and it works across multiple views.
Specifically, a Gesture Recognizer receives touches
delivered both to its view and to descendants of that view.
So finally into the nuts and bolts of the API.
Well the first thing worth mentioning is that UIGestureRecognizer
is available both on iOS 3.2 and iOS 4.0.
So you have it available to you on iPad and iPhone
and UIGestureRecognizer is an abstract base class.
So it has a lot of useful properties and methods
but you don't want to instantiate it directly.
That said we provide of course, a huge number of
built-in subclasses that you do want to instantiate.
Among these we have TapGestureRecognizer and of course
that looks for taps but also double-taps and 2-finger taps.
We have PinchGestureRecognizer.
We have SwipeGestureRecognizer.
We have PanGestureRecognizer, and that's
a bit more forgiving in its definition.
Notice the path it took was slow and all over the screen.
LongPressGestureRecognizer both recognizes long presses
and long presses that have some number of taps before them
and continue to move after they've been recognized.
And one other one that I didn't mention
earlier is the Rotation Gesture.
It's available on iPad in some contexts like Photos and
you get it as one of the built-in Gesture Recognizers.
Now this set is pretty comprehensive.
I don't think you'll often have to go outside of it
but it's worth noting the UIGestureRecognizer
is designed from the get-go to be subclassed.
So if you have some reason to define a custom gesture
or otherwise encapsulate your touch handling code,
feel free to subclass UIGestureRecognizer.
We're not going to talk about that now
and you can come to the following session
if you're curious but it's worth pointing that out.
Okay, So you have UIGestureRecognizer.
The most important thing you can do with it is
establish a handler and that's reflected in the fact
that its initializer takes a target-action pair.
And if you want to manipulate the target-action pairs
after you've created it you can add and remove targets.
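For example, a sketch, where logger is a hypothetical second handler object:

    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleTap:)]; // initial pair
    [tap addTarget:logger action:@selector(logTap:)];      // a second pair
    [tap removeTarget:logger action:@selector(logTap:)];   // ...and gone again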
And the action method takes an optional
UIGestureRecognizer as an argument.
You almost always want to take the GestureRecognizer
as an argument and then within the handler,
within the action method you want to do something.
Well what do you do?
In certain cases if you have a really tiny view and it has
a Tap Recognizer on it, the fact that the gesture occurred
at all is sufficient information to process it.
But in a much larger number of cases you need more
information and Gesture Recognizers have properties
and methods associated with them that
make it really easy to handle them.
So the first of these that seems interesting
is location, the location on screen.
We have a locationInView: method on UIGestureRecognizer
and it means different things for
different concrete subclasses.
For instance, in a Swipe Gesture it's going
to be the point at which the swipe started.
And in a Pinch Gesture it's going to be
the center of the pinch; natural, right?
Occasionally you want a little more
information so you might expect us
to just expose an NSArray of touches
associated with the gesture.
But what we actually do is similar to that: we
implement locationOfTouch:inView: on UIGestureRecognizer.
Why don't we give you the array of touches?
Well take for example a DoubleTapRecognizer.
By the time the gesture is recognized the finger
and the touch associated with it are long gone.
So we store away that information in locationOfTouch:inView:
and make it available to you at a later time.
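So in a double-tap handler, for example, you can still recover per-touch locations after the fingers have lifted. A sketch:

    - (void)doubleTapRecognized:(UITapGestureRecognizer *)recognizer {
        // Centroid of the taps, stored away by the recognizer.
        CGPoint center = [recognizer locationInView:self];
        NSLog(@"double-tap at %@", NSStringFromCGPoint(center));
        // Individual touch locations, by index, also stored away.
        for (NSUInteger i = 0; i < [recognizer numberOfTouches]; i++) {
            CGPoint p = [recognizer locationOfTouch:i inView:self];
            NSLog(@"touch %lu at %@", (unsigned long)i,
                  NSStringFromCGPoint(p));
        }
    }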
Another property that you might want to
pull out of GestureRecognizer is the state.
There is a state property on UIGestureRecognizer
of type UIGestureRecognizerState.
So what values does the UIGestureRecognizerState take on?
Well I want to get out of the way that there are a couple
that for your purposes are uninteresting,
they're just for bookkeeping.
The Possible state and the Failed state, although they are
very important in the implementation of Gesture Recognition,
you'll never receive your action methods while
things are in those states, so disregard them.
Then there are Discrete Recognizers.
A Discrete Gesture Recognizer is
something that recognizes a gesture
at a particular moment in time and then it's done with it.
There are no further updates.
So your action method will fire just once and when
that happens Discrete Gesture Recognizers are always
in the UIGestureRecognizerStateRecognized state.
That was a mouthful.
So since it's always in that state maybe it's not very
interesting to you and you don't even need to check it.
But one case you care about immensely is Continuous
Recognizers, recognizers like Pinch and Rotate that happen
over time and will cause your action
method to be fired multiple times.
Here there is the Began, Changed, Ended and Canceled state.
And let's look at that visually.
We have the Began state.
Every Gesture Recognizer that's continuous starts
out in the Began state when you receive it.
It might move to the Changed state.
It might stay in the Changed state for a while and it will
definitely eventually move to a terminal state, like Ended.
Now one thing I want to point out there is an
important difference between Gesture Recognizer states
and touch phases, which are similar right?
You have began, changed, ended, similar to began,
moved, ended but the problem with touch phases is
that in the began phase you don't really
know that you can perform any action.
In Gesture Recognizers you know for sure
because we've already waited however much time
is necessary to correctly identify the gesture.
So in your Began state, you can go and start
updating the UI and save any initial state.
In the Changed state you can continue updating the UI
and in the Ended state you can tear
down any transient UI and confirm changes.
Now I mentioned there's a Canceled state as well.
So the state diagram actually looks
like this from either the Began
or Changed states it can move straight to the Canceled.
That will happen say when a phone call comes in or we notice
that the phone is up to your face with the proximity sensor.
You want to handle this similarly to the Ended
state but ideally you won't confirm any user action
because the user didn't actually finish that gesture.
So let's look at an example of dealing
with a continuous Gesture Recognizer.
Here we're going to handle a LongPressGestureRecognizer
and we're going
to try to do something similar to
what happens on the Home screen.
Your finger is going to press something, you want it to start
jiggling, and then you move it around as your finger moves.
So the first thing we'll do is pull
off the view and the currentLocation.
We'll use these in every example and note that
UIGestureRecognizer has a handy backpointer to its UIView.
It's been added to a view, and you
can access that view from the handler.
You'll want to do that all the time.
And the location we're getting is in the view's
superview, so we have a steady reference frame.
And then we're going to switch on state.
We'll go through four states.
In the first we'll store away the startLocation
and the centerOffset so we can make geometry deltas
in the further states and we'll start
the animation, we'll beginJiggling.
In StateChanged we're just going
to update the center of the icon.
In StateEnded we're going to update the center and
stop the animations because it's been confirmed
and in StateCanceled we'll do something similar but this
time we're going to restore the center to the startLocation
because something interrupted the gesture.
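Assembled, the handler looks roughly like this. It's a sketch: beginJiggling:, stopJiggling: and the startLocation and centerOffset ivars are hypothetical stand-ins for whatever your app actually uses.

    - (void)longPressRecognized:(UILongPressGestureRecognizer *)recognizer {
        UIView *icon = recognizer.view;  // the handy backpointer
        CGPoint location = [recognizer locationInView:icon.superview];

        switch (recognizer.state) {
            case UIGestureRecognizerStateBegan:
                // Safe to act: the gesture is definitely a long press.
                startLocation = icon.center;
                centerOffset = CGPointMake(location.x - icon.center.x,
                                           location.y - icon.center.y);
                [self beginJiggling:icon];              // hypothetical
                break;
            case UIGestureRecognizerStateChanged:
                icon.center = CGPointMake(location.x - centerOffset.x,
                                          location.y - centerOffset.y);
                break;
            case UIGestureRecognizerStateEnded:
                icon.center = CGPointMake(location.x - centerOffset.x,
                                          location.y - centerOffset.y);
                [self stopJiggling:icon];               // confirm the move
                break;
            case UIGestureRecognizerStateCancelled:
                icon.center = startLocation;            // don't confirm:
                [self stopJiggling:icon];               // gesture interrupted
                break;
            default:
                break;
        }
    }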
So all those states were common
to every UIGestureRecognizer.
There's a lot of specialized state on
the concrete subclasses we provide you
that is useful for those particular subclasses.
For instance UIPinchGestureRecognizer has an associated scale.
And UIPanGestureRecognizer has an associated translation.
These method names, these property
names should be a very big clue
that these Concrete Gesture Recognizers are
intended to be used in a very particular way.
I can't emphasize this enough.
We've gone to all the trouble of decoupling
Gesture Recognition from Gesture Handling.
But that doesn't mean that you can mix and match
in any old fashion and have a good user experience.
It's entirely possible to have your pinch gesture start
rotating the view, but it's not going to make any sense.
So please try to use the properties appropriately.
OK So that's how we handle gestures.
Sometimes you want to customize a
Gesture Recognizer just slightly.
There's a common way to override behavior without
subclassing in Cocoa Touch and that's delegation.
Well UIGestureRecognizer does indeed have a delegate.
It conforms to the UIGestureRecognizerDelegate
protocol, which has a number of optional methods.
The first of these, gestureRecognizerShouldBegin:,
allows you to decide at the last moment
that a Gesture Recognizer is not going to be recognized.
It is a way of restricting the definition
of a gesture without subclassing
and without reconfiguring it in any other way.
Another delegate method,
gestureRecognizer:shouldReceiveTouch:, allows you
to prevent a gesture from even
seeing a touch in the first place.
This is useful perhaps if you have a Gesture
Recognizer on your outer container view
and there's one tiny descendent view that
you don't want to interact with gestures.
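As a sketch, with a hypothetical untouchableSubview ivar, those two methods might look like:

    // Restrict the definition: only begin if the gesture is in the left half.
    - (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer {
        CGPoint location = [gestureRecognizer
            locationInView:gestureRecognizer.view];
        return location.x < gestureRecognizer.view.bounds.size.width / 2.0;
    }

    // Keep the recognizer from ever seeing touches in one descendant view.
    - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
           shouldReceiveTouch:(UITouch *)touch {
        return touch.view != untouchableSubview;   // hypothetical ivar
    }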
So let's look at this visually.
What happens when a swipe comes in and the Gesture
Recognizer says, "Yes it matches my definition?"
Well it first sends a message to its
delegate asking if it should begin.
And if the delegate says no, the
Gesture Recognizer never transitions
to the Recognized state and no action methods are sent.
In the other delegate method when a finger comes in
contact with the view and it's hit-tested to the view,
we at that point send a message to the delegate
asking if it should even receive the touch.
And if you say no to that, the SwipeRecognizer
never even receives the touch sequence.
So you can use the delegate to override the definition
of any UIGestureRecognizer
but the subclasses offer many more ways to
configure those particular Gesture Recognizers.
So to give an example TapGestureRecognizer
allows you to override both the number of taps,
say single versus double and the number of touches required.
Say you want to implement a 2-finger tap like you
use in Maps to scroll out, or zoom out rather.
There is PanGestureRecognizer.
You can specify both a minimum number of
touches and a maximum number of touches.
And UILongPressGestureRecognizer, which has
a huge number of configurable properties.
You can specify a number of taps
required beforehand and it defaults to 0
but if you set it to 1, say, you'll
get a tap-and-a-half behavior.
If you change the number of touches required
you can get a 2-finger Long Press Gesture.
You can also tweak the minimum press
duration and even the allowable movement.
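Concretely, those configurations are just property assignments. A sketch (add each to a view and release as usual; the action selectors are made up):

    UITapGestureRecognizer *twoFingerTap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(zoomOut:)];
    twoFingerTap.numberOfTouchesRequired = 2;   // 2-finger tap, like Maps

    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(panned:)];
    pan.minimumNumberOfTouches = 1;
    pan.maximumNumberOfTouches = 2;

    UILongPressGestureRecognizer *tapAndAHalf = [[UILongPressGestureRecognizer alloc]
        initWithTarget:self action:@selector(pressed:)];
    tapAndAHalf.numberOfTapsRequired = 1;       // defaults to 0
    tapAndAHalf.minimumPressDuration = 0.5;     // seconds
    tapAndAHalf.allowableMovement = 10.0;       // points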
So the built-in subclasses are highly, highly configurable
and the jury is out on whether that's a good thing or not;
so I want to take this opportunity to caution you all
against overly configuring these in very exotic ways.
If you create a 4-finger Triple-Tap Gesture it's entirely
possible, we've given you the rope to hang yourself by
but it's going to make for a very
poor user experience for two reasons.
It's going to be inconsistent with the rest of the system.
So if for some reason users come to love that gesture
in your app it's not going to work anywhere else.
And, as sort of the flipside to this, it's going
to be very hard to discover it because it's not going
to exist anywhere else, so users will come to
your app and not know to try that awesome gesture.
So you don't want to hide complex or interesting
behaviors behind undiscoverable gestures.
That said, if you're determined to shoot
yourself in the foot, let us do it for you
and use the UIGestureRecognizer
and configure it however you want.
Okay so that's how to use the API.
And now Josh Shaffer is going to come on stage
and actually show you in a demo how it works.
[Applause]
>> Alright thanks Brad.
So I'm really excited to share this with you guys
because we've been doing some touch handling things
over the last couple of years and it just got a lot easier.
So we're going to do two demos today actually.
And I've gone ahead and downloaded a couple of the
samples off developer.apple.com and we'll take a look
at how we did them before and see how we can
modify them to make them significantly easier now.
So you may have seen this before.
It's part of the Scroll View Suite
set of sample applications that went
out with last year's Mastering iPhone Scroll View session.
And you can pinch in using a UIScrollView and
scroll around and that's all part of UIScrollView.
But additionally we have the ability to
double-tap to zoom in and 2-finger tap to zoom out.
Now the way that was implemented last year and
in this sample code that you can download was
through a custom subclass of UIImageView
called a TapDetectingImageView.
And this TapDetectingImageView has a whole bunch of ivars
to track state and defines its own delegate protocol
to inform the class that allocated it that there were taps,
double-taps and 2-finger taps. So that's all down here
and then within the implementation file there's all
this code just to figure out whether we're tapping,
double-tapping, 2-finger tapping and then
in the end we have to notify our delegate
that one of these things has actually happened.
So the first thing we're going to do this year is
select those two files for the TapDetectingImageView
and just delete them [Applause] and now we're done.
No, So we had first allocated our
TapDetectingImageView here in our root view controller.
So we now can just go back.
We don't need a subclass at all anymore and
we're just going to create a UIImageView instead.
And additionally now we'll delete this setDelegate because
UIImageView doesn't actually have a delegate protocol
so that would not have actually compiled.
And now we can just allocate a couple of
UITapGestureRecognizers, actually three because we want
to configure tap, double-tap, and 2-finger tap.
So we'll go ahead and create three of them.
We've got three direct instantiations
of UITapGestureRecognizer initing
with target self and three different selectors.
We've got handleSingleTap, handleDoubleTap and handleTwoFingerTap.
Now all of these by default are
configured to be single-finger,
single-tap because that's the default configuration
you get when you allocate a TapGestureRecognizer.
So for our double-tap we'll have to set the number of
taps required to 2 and for our 2-finger tap the number
of touches required to 2, but that's all the
configuration that we have to do in order
to get all three of these different behaviors.
So with that done we can just attach them to
that UIImageView so we'll call imageView
addGestureRecognizer: Single-tap,
Double-tap and 2-finger tap.
And now that they're all retained by
the ImageView we can just release them.
We don't even have to hang on to
them in an ivar or anything.
So a single-tap release, double-tap
and 2-finger tap all released.
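Put together, the replacement code comes out roughly like this sketch of the steps just walked through:

    UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleSingleTap:)];
    UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleDoubleTap:)];
    UITapGestureRecognizer *twoFingerTap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleTwoFingerTap:)];

    // Defaults are single-finger, single-tap; configure the other two.
    doubleTap.numberOfTapsRequired = 2;
    twoFingerTap.numberOfTouchesRequired = 2;

    [imageView addGestureRecognizer:singleTap];
    [imageView addGestureRecognizer:doubleTap];
    [imageView addGestureRecognizer:twoFingerTap];

    // The image view retains them, so release our references.
    [singleTap release];
    [doubleTap release];
    [twoFingerTap release];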
So the last thing that's left to
do before this actually works is
to replace those delegate methods
that we had implemented before.
Now one thing to take note of in these
delegate methods is they actually provided us
with a point where the taps had occurred.
So we'll need to solve that as well.
But first off we'll just select this and replace
our tapDetectingImageView:gotSingleTapAtPoint:
with the new action method we defined,
handleSingleTap, and the same for the double-tap,
we'll replace that with handleDoubleTap, and for the
2-finger tap we'll replace that with handleTwoFingerTap.
Now the last part is we no longer have this tapPoint
defined, because that was passed to us before by our subclass.
But as Brad showed us UIGestureRecognizer knows
where the gesture occurred and for any number
of taps it's always the centroid of
however many fingers were involved.
So if it's a 2-finger tap it's the
point that was exactly between them.
So we can actually just replace the thing that was
passed in with the GestureRecognizer locationInView
and since GestureRecognizer knows what view it's
attached to and that's the view we want it in,
we can just get that view back from the Gesture Recognizer
and assign the result to our local tapPoint variable.
So we just replace exactly what
we had been calculating before
with something the Gesture Recognizer is already calculating
for us, and the same thing down here in handleTwoFingerTap.
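So each handler ends up looking something like this sketch:

    - (void)handleSingleTap:(UIGestureRecognizer *)gestureRecognizer {
        // locationInView: with the recognizer's own view replaces the
        // tapPoint the old delegate method used to hand us.
        CGPoint tapPoint = [gestureRecognizer
            locationInView:gestureRecognizer.view];
        // ...the original zooming logic continues from here, using tapPoint...
    }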
So with that done there is not really anything left to do.
We can build and run.
We still have the same ScrollView functionality we had.
We can still pinch.
We can scroll around and we can still double-tap to
zoom in and 2-finger tap to zoom out with way less code
than we had before and with a definition of tap
that's the same as everywhere else on the system.
So, that's-- [Applause]
>> Thank you Josh.
As someone who's had to write and maintain
code like that before this warms my heart too.
And hopefully you're looking at this and salivating and
saying, "Boy I can replace a lot of hairy code I've read."
So let's now go into what happens when
there are multiple Gesture Recognizers
and they're interacting with each other in some way.
Well you want to think of the UIGestureRecognizer
state machine as a superposition of possibilities.
It's like earlier a finger comes down on the
screen but you don't know what it is yet.
But of course, we only support at most one gesture from a
single sequence of touch input and we achieve this by saying
that the first Gesture Recognizer
to match is the one that wins.
It's the one that gets the target-action
paired messages sent.
And this works really well in a lot of cases.
It reduces latency.
I mean as soon as something recognizes, an action happens.
And let's look at it visually.
Here the swipe has been recognized and there hasn't been
anything in the touch input sequence up to this point
that disqualifies the touch input sequence from being a pan.
The pan is a very promiscuous Gesture Recognizer.
But by virtue of the fact that swipe has been
recognized the pan will actually be excluded
and only the Swipe Recognizer will
have its handlers notified.
So that's how we deal with conflicts in general.
But if two Gesture Recognizers recognize at exactly the
same moment in time in response to precisely the same step
in the touch input sequence, well we have to
break ties and we do this by favoring views
that are closest to leaf nodes in the view hierarchy.
For instance if you have an outer view and an inner view
that both have TapGestureRecognizers installed and you tap
on the inner view we're going to deliver the
target-action pair for the inner view not the outer view.
And if two Gesture Recognizers on exactly the
same view recognize at the same moment in time,
we're going to prefer the Gesture
Recognizer that was most recently added.
So visually this looks like the following.
We add a SwipeGestureRecognizer to both the outer
and the inner view, the container and the subview
and then when a touch input sequence
comes in that matches the swipe we have
to choose among the different possibilities, and here
we choose the more deeply nested view, the subview.
And that in turn excludes the outer view Swipe Recognizer.
So let's go ahead and add another Swipe Recognizer to
the subview and run the same touch input sequence again
and here we have to choose between Gesture Recognizers
attached to different views so we choose the subview again,
but this time we get the more recently added Swipe
Recognizer, so it's recognized, its target-action pair fires
and it excludes the other SwipeGestureRecognizer.
So this is great.
It minimizes latency.
It works in a huge number of cases and hopefully
makes everyone happy but there are a few cases,
well a couple of cases where it falls short.
One of these is with Dependent Gestures.
And what do I mean by Dependent Gestures?
Well where one gesture always has to
occur before a second gesture occurs.
An obvious example is tap and double-tap and
tap always happens before double-tap right?
And you might think from what I've just said that if
you attach both a Single-tap and a Double-tapRecognizer
to a view then the Double-tap Recognizer
is never going to fire.
Well if you were paying attention to the demo
you know that's not in fact what happens.
As a matter of fact, we have very
reasonable default behaviors in the UIKit.
When a single-tap comes in the single-tap
recognizes, it messages its target-action pair,
but it does not exclude a Double-tapGestureRecognizer.
When a second tap comes in the Double-tap recognizes it as a
double-tap but it does exclude the single-tap in this case.
So in a double-tap sequence we get our single-tap
firing and then we get a double-tap firing.
So two action methods and it works pretty well and it works
really well if you have stackable actions on your gestures.
For instance, take Notes: if you want to move the cursor,
you tap; if you want to select a word, you double-tap.
Well there's no reason you can't move the
cursor immediately before selecting the word
and so you can just stack these actions and
it works beautifully and there's no latency.
An example where it doesn't work so well
when you have nonstackable actions is Safari.
On tap we follow a link.
On double-tap we zoom into the content.
Well you don't want to zoom into the
content when you've just followed a link.
That doesn't work at all.
So there needs to be some other way to
deal with that and of course, there is.
On UIGestureRecognizer we have another
method requireGestureRecognizerToFail.
And requireGestureRecognizerToFail: is
called on the Gesture Recognizer that would otherwise
recognize first, and you pass it the Gesture
Recognizer whose failure it has to wait for.
An example is a single-tap requires a double-tap to fail.
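In code it's one line. A sketch, assuming singleTap and doubleTap recognizers like the ones from the demo:

    // singleTap won't fire until doubleTap has definitively failed.
    [singleTap requireGestureRecognizerToFail:doubleTap];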
So what does that look visually?
Well our first tap comes in and the
Single-Tap Recognizer in some sense recognizes
because it knows it's matching its internal
definition but we've imposed this failure requirement
so it's not quite finished and it's
waiting for the double-tap to fail.
Then when another tap comes in the
Double-Tap Gesture Recognizer says, "Ah ha this is a match."
It messages its handlers and then by virtue of
being recognized it excludes the single-tap.
So when a double-tap comes in we immediately get our
double-tap action method and there's no action sent
for the single-tap, exactly what we want,
except consider just the single-tap case.
A finger comes in and after some duration a double-tap
fails and only then does a single-tap transition fully
to a recognized state and message the handlers.
And notice there that this introduces latency,
which I said earlier is really, really bad.
It makes me sad and you don't want to do this.
What you want to do is design your User Interface
so that you can work with stackable actions.
But if you absolutely cannot have stackable actions
for Dependent Gestures you can use this method.
So Dependent Gestures are one example where mutually
exclusive Gesture Recognition isn't perfect.
Another example is Compatible Gestures.
Gestures like pinch and rotate, why
not have them happen at the same time.
Well by default we're using the same
model we used for all gesture recognizers
and that's no Gesture Recognizer is compatible with another.
So if I put two fingers down on the screen at some point I'm
going to decide either it's a pinch or rotate but not both
and my action methods won't fire for both.
But this is something you can override.
Using a method on the delegate
protocol that I didn't mention earlier,
gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:
And if you return yes to this two Gesture
Recognizers can be recognized at the same time
and the target-action pairs will fire for both.
So visually if I put two fingers down
and start panning and pinching together,
it's entirely possible to get action methods for both.
So that's how Gesture Recognizers interact with each other.
I want to emphasize that the default behavior is
usually what you want but you can tweak it a little.
So let's move on to Hybrid event handling.
What happens when you have Gesture Recognizers
interacting with raw Touch event delivery?
You have an app say that's been written
before the advent of Gesture Recognizers.
Good for you.
And are you worrying perhaps you need to rewrite
everything from the ground up to use this better tool?
No absolutely not, the API was designed to mix
and match with responder delivery of Touch events.
And as a matter of fact, you can easily add
Gesture Recognizers to your existing app.
Take a particular view add a Tap Recognizer,
add a Swipe Recognizer it works great.
There is one thing I should caution you about
though, if you haven't systematically been dealing
with canceled touches you're going to start having a
lot of problems because when a Gesture Recognizer moves
to the recognized state and its target-action pairs
are messaged, then it actually stops delivering events
to the UIView when we do this, because we don't want two
distinct things handling the same input sequence that's
generally not a good behavior.
And so in order to stop delivery we
have to complete the state machine
so we cancel those touches, so be ready for that.
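A minimal sketch of that cleanup, reusing the naive swipe-tracking ivars from the earlier example:

    - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
        // A recognizer claimed this sequence (or the system interrupted it):
        // throw away our state and undo any provisional UI.
        if ([touches containsObject:firstTouch]) {
            firstTouch = nil;
        }
    }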
I also want to emphasize that Gesture Recognition is not
a replacement for raw event delivery, raw event handling.
For one it builds upon and exposes the
UIResponder touch delivery methods and UITouch.
But for another reason not everything fits into a gesture.
Not everything is a gesture.
You have free-form touch input in many cases.
If I'm typing on my keyboard, or playing a
piano app, I want free-form touch input.
So don't try to fit things that aren't
gestures into UIGestureRecognizers.
So if you're curious about this, if you're curious about
how Gesture Recognizers interact with raw event delivery,
if you're curious about how to subclass
Gesture Recognizers stay for the next session
because Josh is going to talk all about that.
Alright that's really it for the topics and we have one
last demo and Josh is going to come up here and offer it.
[Applause]
>> Alright thanks again Brad.
So one more time we've got another app that I've
downloaded off of developer.apple.com sample code.
This one you may have also seen.
It's a bit older but it's the Touches sample app.
It basically shows you how to track multiple touches in a
single view while manipulating multiple subviews during
that tracking, so we can grab one and drag it around.
Or if we put two fingers down we can
actually grab two and drag them both around.
But this was of course written using just raw Touch
events because that's all we had at that time.
So let's, Oops, expand this out here
and take a look at what we were doing.
So we had, here we are, a view subclass,
MyView, and in that view
of course we implemented the touchesBegan, moved,
ended and cancelled methods, and we had a whole lot of code
that was stashing off different touch locations
and origins and all this different stuff.
We had helper methods to dispatch these Touch events to the
different subviews in case we wanted to track different ones
and we also had to disambiguate which views
we were trying to touch all on our own.
So there's a lot of code in there
to handle that kind of stuff.
And, oops, let's add line numbers
back in here in our text editing.
So we can see... actually it's pretty small,
you probably can't see, but there are actually 213 lines of
code in this file just to implement what we just looked
at, dragging those single views around.
So the first thing we'll do is of
course grab all that and delete it all.
[Laughter] And we can even get rid of all these
defines we had up here and our helper methods
because we don't need any of that anymore.
Now we're starting out with 38 lines of code so we're kind
of back to almost an empty file and just a bit of stuff
for cleaning up our memory in dealloc.
We begin by adding a PanGestureRecognizer.
But we actually want to be able to manipulate
all three of these views and we want to be able
to manipulate all three at the same time.
So we'll use three different UIPanGestureRecognizers
and attach one to each of those subviews.
But in order to do that I'm going to add
this one helper method so I don't have
to duplicate the code all over the place.
So I'm just going to add an addGestureRecognizersToView:
method, and in that we're going to take
in whatever view we want to attach the gestures to.
So we'll allocate a UIPanGestureRecognizer the
same way we did with the taps, initing with target self,
and the action will be panPiece:, a new
selector we'll define in a minute.
We're going to attach it to that view that we just passed
in, so view addGestureRecognizer: panGestureRecognizer,
and we don't have to hang on to it
again because the view is retaining it.
So panGesture release is all we have to do there.
Now because this view I know is being loaded
from a Nib I'm going to do the configuration
of the Gesture Recognizers in the awakeFromNib method.
So in awakeFromNib I will call addGestureRecognizersToView:
for all three of those piece views.
These were connections that were set up in IB.
We didn't just do it now but it's
the same as the sample code.
I wanted to change it as little as possible.
So you can check it out for yourself.
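Roughly, the helper plus awakeFromNib look like this sketch; the outlet names firstPieceView, secondPieceView and thirdPieceView are assumed for illustration:

    - (void)addGestureRecognizersToView:(UIView *)aView {
        UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc]
            initWithTarget:self action:@selector(panPiece:)];
        [aView addGestureRecognizer:panGesture]; // the view retains it
        [panGesture release];
    }

    - (void)awakeFromNib {
        // Assumed IB outlets for the three piece views.
        [self addGestureRecognizersToView:firstPieceView];
        [self addGestureRecognizersToView:secondPieceView];
        [self addGestureRecognizersToView:thirdPieceView];
    }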
So now we've attached all these Gesture Recognizers so
next we actually want to implement that panPiece: method.
So the strategy we're going to use
for panning the piece is very similar
to what was being done in the original Touch handling code.
Every view has a center point which is defined
in the coordinate space of that view's superview
and that determines where in that view that subview appears.
So we're just going to adjust the center point by
the amount that the user has panned their finger.
So to start out with we'll figure
out which view we're trying to pan
and because we've attached the GestureRecognizer
to each of the individual subviews we can just ask
for the GestureRecognizer's view and
that's the thing we're trying to pan.
Then we want to figure out when we want
to adjust the location and Brad talked
about the different states that
a Gesture Recognizer may be in.
So since we're a PanGestureRecognizer which
is continuous and we report changes over time,
we want to adjust our location any time
we're in stateBegan or stateChanged.
By the time we get to stateBegan, as Brad said, we
already know that we've ruled out any other possibilities.
We're really trying to pan a piece.
So in those two states we know we can update our location.
In order to figure out what to update we want to
start out by knowing where we are now and we can get
that by just pulling the center off the piece.
So we'll get the piece's center and stash that in a
local CGPoint, which we're going to adjust in a second,
and we want to know
how far we want to move it.
Now normally you'd have to calculate that yourself by
figuring out the delta between the last touch location
and the current touch location but
UIPanGestureRecognizer tracks it for you.
So we can just ask the PanGestureRecognizer
for its translation in any particular view.
And since the center is always in the coordinate space
of our View-SuperView we also want to get our translation
in the coordinate space of our View-SuperView.
So we'll get the GestureRecognizer's
translationInView: with the piece's superview,
and that's the amount that we want to move our piece by.
So knowing the center and how much we want to
move it we can adjust the center by that amount.
So our new piece's center is CGPointMake the original
center X plus translation X and the same for the Y.
Now this next part is a little bit odd.
So all of the continuous Gesture Recognizers
that you'll find as part of UIKit, pan,
pinch and rotate, they all report a cumulative offset.
So TranslationInView normally is the total accumulated
translation for the entire duration of that gesture.
So it's the translation from the original
begin point to the current location even
if your action method has been fired 500 times.
But as you just saw here, we're actually
adjusting our current center point by that value every time.
So the piece would actually shoot out from
under our finger if we kept doing this.
But one nice thing about the property
TranslationInView and scale and rotation
on the other two is that they are writable as well.
So if we don't actually want a cumulative value
here, we can get deltas by just setting it back to 0
after we've pulled out the current value.
So in this case we'll call the GestureRecognizer's
setTranslation:inView: method with CGPointZero,
in that same piece, in that same superview rather,
so that the next time our action method is called
we know it's going to be a delta from 0 instead
of a total cumulative translation causing
it to shoot out from under our finger.
An alternative is you could just stash away the original
point, and there are cases where you may want to do that.
But for purposes of our demo, resetting the translation means
we don't have to keep any additional state around.
So that's actually all we have to do there.
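Assembled, panPiece: comes out roughly like this sketch:

    - (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer {
        UIView *piece = gestureRecognizer.view; // the subview it's attached to

        if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
            gestureRecognizer.state == UIGestureRecognizerStateChanged) {
            // Translation in the superview's coordinates, the same space
            // the center is defined in.
            CGPoint translation =
                [gestureRecognizer translationInView:piece.superview];
            piece.center = CGPointMake(piece.center.x + translation.x,
                                       piece.center.y + translation.y);
            // Reset so the next callback reports a delta, not a running total.
            [gestureRecognizer setTranslation:CGPointZero
                                       inView:piece.superview];
        }
    }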
So if we now build and run you'll find that
we still have the exact same stuff we had.
Uh oh, well we almost had it.
So, this is something that's bitten a lot of people,
so I wanted it to actually bite me so you would see it.
You have probably hit this in your own code at some point.
UIImageView is one of the few subclasses in
UIKit that has user interaction disabled by default.
Now as Brad was telling us UIGestureRecognizer only
receives touches if its view is receiving touches.
And since our user interaction is disabled
on these ImageViews their Gesture Recognizers
aren't going to receive touches either.
So, they are actually defined in our main window
Nib, just open that up and select, well maybe,
this is going to be a much shorter demo
if I can't open Interface Builder,
we'll open that up in there maybe this time.
Oh man, what's going on?
Hey there we go.
Good call thanks.
For some reason opening the XIB wasn't
working and opening the localization was.
So anyway now we can select all three of these guys and go
over to our inspector and turn on user interaction, and if we build
and run now this time for sure
we can actually drag it around.
Now one thing I didn't point out when I originally did this
is that the piece used to jump to be underneath our cursor
when we started dragging, because we were directly
setting the center to the current touch location,
but now that we're using deltas it's much more what we want.
So actually if we're grabbing the corner it sticks
to the corner rather than jumping to the center.
We can still do two at a time, because Gesture
Recognizers that are not related to each other
by the view hierarchy, and these are actually in
sibling views, can recognize simultaneously.
So these pans will continue to track
independent Views at the same time.
And it would work for three fingers if I could do
3-fingers in the simulator but sadly no can do.
[Applause] Alright so that's pretty cool right.
There's almost no code anymore.
We've got just these 10 lines of code
that we added and deleted 213 lines.
But now our design team has come to us and said, "Well,
that's really cool, I love that I can still do that
and you made it better, but why do I care about that?
What I really want is also to be able to pinch
in there to make those views scale bigger."
Now if you had gone and already implemented
that touch tracking before and wanted to switch
to adding this new pinch ability as well, it'd probably
start to get pretty complicated and you'd have to figure
out how to look at multiple touches and figure out which
views they were in and then from touches figure out how much
to scale your view so you'd have to
do a bunch of math to calculate that.
And it becomes pretty complex pretty quickly, and you end
up with a big state machine that you're trying to track
and a lot of additional state that you're keeping track of.
But with UIGestureRecognizer it's actually quite easy.
So we tell our design team, "Hey no problem.
We'll get back to you in five minutes."
And we allocate a UIPinchGestureRecognizer, initing
with target:self and action scalePiece:, add it to the view
and release it, the same as everything else. But now,
because we know that our design team is a little bit fickle
and as soon as we implement this they're going to say "I
want rotation too," we go ahead and decide we're going
to do the same thing already to save ourselves
some time, so we add a UIRotationGestureRecognizer.
So the action method for that will be rotatePiece:.
So we can come down here and add these
two new action methods below our panPiece:.
So we're going to add a scalePiece: method, and this
is going to be implemented almost exactly the same way
as our panPiece: method, except instead of
adjusting the center point we're going
to adjust the transform on the view.
So we'll get out the current piece the same way
and check to see if we're in stateBegan or Changed
so that we know we actually want to apply this.
Now instead of getting the center we're going to get the
transform off of the view, so the piece's transform, and instead
of getting a translation from a PanGestureRecognizer
we'll get the scale from a PinchGestureRecognizer.
And we want to scale the current transform by
the scale that the GestureRecognizer has calculated for us.
So we don't even have to look at the
touches and figure out how far apart they were
or how much they moved or how to get a scale from that.
It's already been done.
So we'll adjust the piece's transform
by using CGAffineTransformScale,
and we'll scale the transform both horizontally and
vertically by the scale we got from the Gesture Recognizer.
And as I said, PanGestureRecognizer is the same as, sorry,
PinchGestureRecognizer is the same as
Pan in that it reports a cumulative scale.
So we really want deltas, so we're going to go ahead and set
the scale back to 1; 1 would be the identity for a scale.
And we'll do the exact same thing for rotate,
except instead of getting the scale out we're going
to get the piece's transform and get
the RotationGestureRecognizer's rotation,
and the identity value for rotation is 0.
With no rotation it would be 0.
And we'll rotate with CGAffineTransformRotate.
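Those two handlers come out roughly like this sketch:

    - (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
        UIView *piece = gestureRecognizer.view;
        if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
            gestureRecognizer.state == UIGestureRecognizerStateChanged) {
            piece.transform = CGAffineTransformScale(piece.transform,
                gestureRecognizer.scale, gestureRecognizer.scale);
            gestureRecognizer.scale = 1.0;   // 1 is the identity for a scale
        }
    }

    - (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
        UIView *piece = gestureRecognizer.view;
        if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
            gestureRecognizer.state == UIGestureRecognizerStateChanged) {
            piece.transform = CGAffineTransformRotate(piece.transform,
                gestureRecognizer.rotation);
            gestureRecognizer.rotation = 0.0; // 0 is the identity for rotation
        }
    }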
So if we were to build and run now you'd find that
we actually could scale and rotate these pieces
but we can't do them both at the same time because, as Brad
said, they're mutually exclusive, if I could talk today.
So we can do both, just not at once, and that's not really ideal,
but there is a very quick solution to this.
It's that, long to say but quick to implement,
gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: method.
[Laughter and applause] Oh well thank you.
I've been practicing.
So we're going to go and set the
PanGestureRecognizer's Delegate to ourself,
set the PinchGestureRecognizer's Delegate to ourself and
set the RotationGestureRecognizer's delegate the same way
and then we'll implement
gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer:
[Laughter] right down here [Laughter] I had to say it fast
because it could take the rest of the session otherwise.
So what we're going to do with this one is by
default we want to keep the normal behavior.
No two gestures recognized at the same time so
by default we're going to continue to return no,
they should not recognize simultaneously.
How you decide whether two gestures should
recognize at the same time is kind of up to you.
In most cases you've probably stashed
pointers to the gesture so you can be explicit
about saying these two specific
ones should recognize simultaneously
but for simple implementation here we're just
going to say that any two gestures attached
to the same view can recognize at the same time.
So if the view of the GestureRecognizer that we're being
asked about is the same as the view of the other
GestureRecognizer that we're being asked about, we'll
say yes, they can recognize together;
otherwise, default behavior, and nope, they can't recognize.
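In code, that policy is this one delegate method. A sketch:

    - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
        shouldRecognizeSimultaneouslyWithGestureRecognizer:
            (UIGestureRecognizer *)otherGestureRecognizer {
        // Any two gestures attached to the same view may recognize
        // together; everything else keeps the default exclusivity.
        return gestureRecognizer.view == otherGestureRecognizer.view;
    }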
So with that, just five more lines of code,
build and run and now we can pinch, rotate,
translate all at the same time, and if we do it in
two different views we can still move them around.
[Applause] There is one last piece that I didn't go over
here and it's, I'm not going to get into too much detail
about it, but we generally want to pinch
and rotate around the center of our fingers.
And the rotations and scales, when you apply
transforms to views, are applied around the center
of the view, not around the center of your fingers.
So CALayer has a property called anchorPoint
that allows you to adjust the location
around which a scale and a rotation are applied.
I've written a helper method; we won't look at it right now
because it's not directly related to Gesture Recognizers.
But I'm going to use it to adjust the
anchor point to be between my fingers
so that the pieces actually remain attached
to my fingers as we manipulate them.
Oops, went a little too far in my setup things.
So I'll show you that in just a second too, but now that
we've got all that working our design team has come back
to us one more time and they've said, "Wow, that's
awesome you did it in five minutes, that's incredible.
What can you do in ten?"
[Laughter] Really we should have waited
a couple of days before we showed them.
Now they want us to be able to pinch and rotate
and pan all of these views all together at one time
if we're not in one view with both our fingers.
So the whole collection of views all at once. Now,
if you wrote all this not using Gesture Recognizers,
you're like, "Aw geez, I have to
start over, I don't know what to do."
No problem with GestureRecognizer though,
we can do this with one line of code.
So we're going to go back to the top, and in our
awakeFromNib method... I added some warnings for you guys,
I hadn't declared this as a UIGestureRecognizerDelegate,
so don't leave those in your code when you check it in,
it looks bad and it's embarrassing when you're on stage.
So in our awakeFromNib method we had our
addGestureRecognizersToView helper method
and we were adding them to the three pieces,
well, how about we add them to ourself as well.
We are a UIView subclass and these
ImageViews are subviews of us.
So add them to ourself; we're
the superview of these ImageViews.
One line of code, let's build and run.
Now we can pinch this guy, still rotate, and if we get a
finger outside, I can pinch and rotate all of them.
[Laughter and applause] So I happened to have already
built this for an iPad, I thought I'd show you here.
It's a little bit easier to see.
We can now pinch this guy and we can pinch this guy and
we can pinch that guy and we can get all this fingers down
and we can pinch all of them [Laughter]
and we can grab the whole set.
So that's all I've got.
Thanks. [Applause]
>> Thanks Josh that's awesome.
So just to recap Gesture Recognizers,
why you want to use them--
less code to write, you can spend your
time handling gestures, not detecting them
and most importantly you're going to get consistency with
the system-wide definition of gestures and how you use them.
It's simple.
You instantiate a Gesture Recognizer, you set your
target-action pairs, you configure it using delegate
or subclass properties and you
attach it to a view and that's it.
So if you want more information you can contact
our Evangelist, go online to docs or devforums.
Stay for the next session if you want to subclass
UIGestureRecognizer or learn more about how
to configure the Gesture Recognizers
interaction with normal touch delivery.
Cool, Thank you.