Transcript
X-TIMESTAMP-MAP=MPEGTS:181083,LOCAL:00:00:00.000
[ Silence ]
>> Aroon Pahwa: Good
afternoon everybody.
[ Applause ]
Thanks for coming here today to
Putting Map Kit in Perspective.
My name is Aroon Pahwa and I am
here to talk to you about some
of the new ways you can
look at Maps in iOS 7.
So last year with iOS 6 we
introduced an all new maps
application with
vector map rendering.
And along with that we
introduced a few new ways you
can interact with Maps.
We added the ability
to rotate the map,
as well as pitch
the map into 3D.
You get to see these
beautiful 3D buildings.
We also added the
ability to pan westward
across the Pacific Ocean.
You can just keep going forever
all the way around the world.
And to implement
printing support
in Maps we added the ability
to take static map snapshots.
Now we had hundreds of millions
of users using the Maps.app
and when they left and
came to your app they found
that the Map Kit view
did not work the same.
They tried to rotate
it, it didn't work.
They tried to pitch
it and it didn't work.
And it's frustrating for them
and I'm sure it's frustrating
for you too because your
customers can't use the map the
way they do in the Maps.app.
So this year we brought all of
that to Map Kit and not only
that we brought the
whole thing to OS X.
So I'm really excited
to talk to you today
about how we make
that work in Map Kit.
So the first thing
I'm going to go
over is what do we
actually mean by 3D map?
After that I'm going to show
you how you can add perspective
views to your map.
Then I'm going to go over some
new API that we've introduced
to help you take
full advantage of 3D.
And lastly I'm going
to talk to you
about how you can
use the Snapshot API
that we've introduced to create
static snapshots of a map
in a performant manner.
So what is a 3D map?
What does it mean
to call a map 3D?
When someone says map
usually they're talking
about something flat.
Maybe like a piece of paper you
roll up and put in your car.
It's got a bunch of roads on it.
So how do we take that
flat thing and make it 3D?
Well we start with the
same 2D map you're used to.
Here's a picture of one.
North is at the top.
Then we add pitched
views of that map.
And on top of that now we can
add things like 3D buildings.
And the result is something
that is sometimes
referred to as 2.5D.
That's because what's really
going on here is we're looking
at the same 2D map you're used
to seeing in a 3D environment.
And that's exactly how you should
conceptualize what's happening.
What we're doing is
we're looking at a 2D map
in a 3D environment and
that allows us to move
around in a 3D environment
getting pitched views
and rotation, as well as,
putting 3D objects on top
of the map like these buildings.
So how do you get
around this new 3D map?
Well you're probably all
familiar with the Maps.app
and it works exactly
the same in Map Kit.
You can also use two
fingers to rotate the map
by rotating your fingers.
You can also use two
fingers and pan up
and down to pitch the map.
And in OS X we use all the
gestures you're used to using
with a track pad and if
you don't have a track pad
and you've got a mouse we've
added handy compass controls
and zoom controls so it's easy
to get around that way too.
And I know most of you use the
simulator to test your apps
so we've added the
ability to pitch and rotate
in the simulator, as well.
You can use these key
combinations up here,
now you can pitch and rotate.
Test your app, complete
functionality in the sim.
So how do you get these 3D
views in your Map Kit app?
It's basically three
things you need to do.
The first one is just
recompile with iOS 7 SDK.
You're going to find that
pitching and rotation just work.
They're on by default.
And for the most part your
app will probably just work.
But there are some changes
to the existing API behavior
that you should be aware
of and we're going to go
over the four major
areas of the Map Kit API
that change a little bit,
which you should be aware
of in case something
in your app breaks.
Lastly I'm going to go over a
new camera API we're introducing
that you should be
adopting in your apps
to take full advantage of 3D.
So I'm pretty sure you can
handle recompiling your app
so I'm just going to go ahead
and get started with adapting
to existing API changes.
So after you create
your MKMapView one
of the first things you're
probably doing is telling the
map what part of the world
that you want to see.
Here are the two APIs that
you're probably using.
The first one is
setVisibleMapRect
and the second one is setRegion.
Both of these take a rectangle
and tell the map
to just show it.
And this API behaves
exactly like it always has
in previous iOS releases.
It results in a 2D map
that's looking straight
down at the map with
no rotation.
So North is at the
top of your map view.
The difference in iOS 7
is now that we can pan
across the Pacific Ocean
it means we can pan
across the 180th Meridian;
this API will now return rects
that span that as well, and
that's a little bit different
than in previous iOS releases.
So if you're doing math with
these rectangles that math needs
to handle a rectangle that's
spanning that meridian.
So here's an example of a
rectangle that you can now set
with these APIs that
spans the 180th meridian.
Here we have a
MKCoordinateRegion that's
looking directly at the
equator and the 180th Meridian;
that's the line that goes up
and down the Pacific Ocean.
And we have some
radius around it.
So in iOS 6 and before you would
have got an image like this
if you had tried to
set that in MKMapView.
We kind of got bumped
over and now we're looking
at North and South America.
But in iOS 7 we'll obey
the rectangle that you set
and now we're looking in the
middle of the Pacific Ocean.
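The wraparound math he's describing can be sketched in Swift. This is a minimal stand-in for an MKCoordinateRegion's longitude span, not MapKit's actual types; the point is just that containment tests have to wrap around the 180th meridian.

```swift
// Hypothetical stand-in for a region's longitude extent, which may
// straddle the 180th meridian.
struct LongitudeSpan {
    var centerLongitude: Double   // degrees, -180...180
    var longitudeDelta: Double    // total width in degrees

    // Does this span cross the 180th meridian?
    var crossesAntimeridian: Bool {
        centerLongitude + longitudeDelta / 2 > 180 ||
        centerLongitude - longitudeDelta / 2 < -180
    }

    // Containment test that wraps longitudes, so math done with these
    // "rectangles" stays correct on either side of the meridian.
    func contains(longitude: Double) -> Bool {
        // Express the candidate as an offset from the center in (-180, 180].
        var offset = (longitude - centerLongitude)
            .truncatingRemainder(dividingBy: 360)
        if offset > 180 { offset -= 360 }
        if offset <= -180 { offset += 360 }
        return abs(offset) <= longitudeDelta / 2
    }
}
```

A naive `minLon <= lon && lon <= maxLon` check would reject every point on the far side of the meridian, which is exactly the kind of rectangle math he's warning about.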
Now the same two APIs
you can read from
and they'll return a rectangle
and they behave a little
bit differently now
that we have pitching.
So here's a pitched and rotated
view of downtown San Francisco
and intuition tells me that you
can't really describe what we're
seeing here using a rectangle.
So let's see if that's true.
We can just wash
this view with blue.
Now let's zoom out
and take a look at it
and indeed it's not
a rectangle it's
like some chopped off
rectangle or triangle.
And so these APIs do the most
sensible thing they can do.
They return a rectangle that
contains the visible region.
That means it's now
an approximation
to what your map view is
showing when it's pitched.
The other difference is
they may return a rectangle
that spans the 180th Meridian and
like I said you need to be able
to handle that in your app.
You may wonder, "Well what
am I going to use this thing
for if it doesn't actually
tell me what the map view
is showing?"
Well it turns out it's really
easy to trim or filter the data
that you're going to show
on a map using a rectangle.
The math is very simple
and you're probably doing
that today already
if you're trying
to show a lot of
data on the map.
So you should continue
using this API to do that.
So there's one more
API to change what part
of the map is being
shown in your map view
and that's setCenterCoordinate.
In previous iOS releases
and in iOS 7 this returns the
coordinate that's at the center
of the screen or allows
you to set the coordinate
which should be visible at
the center of the screen.
And what that does is basically
simulate the pan gesture.
That means when you change this
property the zoom level won't
change and now on iOS 7
the rotation or the pitch
of the map won't change either.
Annotations.
The annotations are a great way
to describe a single
point on the map.
In iOS 7 they're really
easy to use.
They continue to be easy to use.
That's because they stay
upright as the map rotates.
They also always face the
screen even as the map pitches.
And as you push them
into the background
when you're pitched they
always stay the same size.
So the artwork that you're
using will never be distorted
and you don't have to
account for it changing size.
What's also handy is
that annotation views
always track the map.
That means as your user is
panning the map the annotation
view will be updated to
stay in the same position.
That means if you have
some UI that you want
to follow the map you should
try to add that as a sub-view
of the annotation view, or
just make it an annotation view
if that seems appropriate.
If you try to approximate
the way the map is moving
by capturing a gesture
recognizer
and overlaying your UI on top
of the map it's just not
going to work anymore.
When you pitch the
transformation is non-linear
so you can't kind
of interpolate that.
So in iOS annotation
views are still UIViews.
There's no changes there
so all the customization
that you're used to
doing continues to work.
And in OS X we've
introduced them as NSViews
so if you're familiar
with NSViews you can use
annotation views right
out of the box.
There's a few small
changes to overlays.
Overlays now take the
shortest path across the maps.
So if you have a polyline
and you give it two points;
one here in Los Angeles
and the other in Tokyo.
In previous iOS releases
that would have drawn a line
all the way across the U.S. then
across the Atlantic,
Africa, India, then China
and now finally on to Tokyo.
That doesn't really
make that much sense.
And now that we can look across
the Pacific Ocean we'll now draw
that polyline over
the Pacific Ocean.
In red what you're seeing here
is the new MKGeodesicPolyline
and those behave just
like normal polylines.
They're going to take
the short path, as well.
The other thing to note is
now that we have 3D buildings
when you pitch a view if there
are 3D buildings available those
are going to occlude
your overlays.
That's just something
to be aware of.
If you're presenting data that
doesn't really make sense
when it's partially occluded,
that's something you should
be aware of.
So in the Maps.app we have
this feature; if you long press
on the map a purple
pin's going to drop.
And that's really
handy for figuring
out what a random
address is on the map.
You can place it over a house
and you'll see the address
that that house is at.
Well to implement that the
Maps.app uses this first API I
listed here.
convertPoint:toCoordinateFromView:.
We take the point where the
user rested their finger
and we convert that into
a coordinate in the map.
From there you can
make an annotation
and drop an annotation
view on the map.
These other APIs are useful
for similar kinds of features.
Now in this image here
we can see the sky
and if the user places
their finger there
and long presses well,
what coordinate represents
a point of the sky?
There isn't one.
And so these APIs now have
to somehow indicate that.
Which means they can
return invalid values now
and that's something that
you have to be aware of.
So if you're using these
APIs be sure to check
for these invalid values.
Anywhere there's
a coordinate check
for kCLLocationCoordinate2DInvalid;
that's a mouthful.
And if there's a rectangle
involved check for CGRectNull.
It's an indication that we
just couldn't do the conversion
and so you can probably just
handle that by returning.
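A sketch of that guard, using a stand-in coordinate type since the real code would call CLLocationCoordinate2DIsValid from CoreLocation; the validity rule mirrored here is the documented one (latitude within plus or minus 90, longitude within plus or minus 180).

```swift
// Stand-in for CLLocationCoordinate2D so the check can be shown without
// linking CoreLocation.
struct Coordinate {
    var latitude: Double
    var longitude: Double
}

// Mirrors CoreLocation's validity rule: latitude within +/-90, longitude
// within +/-180, and neither component NaN.
func isValidCoordinate(_ c: Coordinate) -> Bool {
    !c.latitude.isNaN && !c.longitude.isNaN &&
    abs(c.latitude) <= 90 && abs(c.longitude) <= 180
}

// Typical guard around a convert-point-to-coordinate result: if the user's
// finger was on the sky, there is no coordinate, so just bail out.
func handleLongPress(result: Coordinate) -> Coordinate? {
    guard isValidCoordinate(result) else { return nil }
    return result
}
```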
So that's all the
API behavior changes.
There wasn't much.
If you recompile your app and
you test it you're going to find
that it's mostly compatible.
Most of the ways people are
using Map Kit just continue
to work.
You don't really have
to think about it.
One thing that is
important to note is
that annotations track the map.
Annotation view moves
with the map.
And that's the only reliable
way to get that behavior.
So if you have some
UI that needs
to track the map use
annotation views.
The other thing to note is
that overlays like polylines
and polygons will now
cross the Pacific Ocean
when it makes sense.
And lastly if you're using any
of those geometry conversion
APIs just search in your code
for those and make sure you're
checking for invalid values
that come out of there.
So that's the existing API.
Now I want to talk about a new
API we've introduced called
MKMapCamera that'll help you
take full advantage of 3D.
To understand what a
map camera is it'll help
to understand the coordinate
system that we're operating in.
Here we see a map that
you're used to seeing.
It's North at the top, South
at the bottom, West on the left
and East on the right.
And this is a 2D
coordinate system composed
of an X-axis and a Y-axis.
This is your high
school algebra.
And when we introduce 3D we're
really introducing a third axis,
a Z-axis that's coming
out of the maps.
So like I said before
that 2D map still exists.
There it is sitting at Z=0.
And now we can talk about how
we position a camera inside
of this 3D map.
So our camera is made up
of four basic properties.
The first property is
a center coordinate.
That's a latitude and longitude
that is positioned on the ground
and it's just like the center
coordinate API on MKMapView.
So it's the point
that you want to see
at the center of the screen.
The second property
is an altitude.
That describes how
high above the map
in meters you want
your camera to float.
Third property is heading.
That's the cardinal direction
in which the camera faces.
So if you set that to zero North
will be at the top of your map,
and if you set it to 180 South
will be at the top of your map.
And that's basically a rotation
around that Z-axis
we introduced.
And the last property is pitch.
That describes where the
camera is looking relative
to the ground.
So if you set that to zero the
camera's going to look straight
down and you're going to
get a normal map you're used
to seeing, and as you increase
that value you're going
to look more and more
towards the horizon.
Now there are some pitches
that just don't make sense.
Like it doesn't make sense to
look straight up at the sky.
User's not going to know
how to get out of that view
and look back at the map.
So Map Kit is smart and
it just clamps that value
to something reasonable for you.
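That clamping can be sketched in one line. The maximum here is a made-up number; as the demo later shows, the real limit MapKit applies varies with altitude.

```swift
// MapKit clamps unreasonable pitches for you. A sketch of that idea,
// with an assumed fixed maximum (the real ceiling depends on altitude).
func clampedPitch(_ requested: Double, maxPitch: Double = 75) -> Double {
    min(max(requested, 0), maxPitch)   // never below straight-down, never past the cap
}
```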
So here's the interface
code for the four properties
that I just described;
center coordinate,
altitude, heading, and pitch.
And we also have a
convenience constructor,
because sometimes these
four properties aren't the
easiest things to work with.
It takes a lot of
math to convert
from the way you're thinking
about placing the camera
into these four properties.
So I want to introduce one
more way of thinking about how
to position the camera.
You might think about the point
that you want to look at that's
on the ground, and
you might think
about positioning the camera
at another location on the map
with a latitude and longitude.
And then you might think
about raising that camera
up off the ground from
an altitude and looking
at that other point
in the ground.
Well given those three
properties Map Kit can compute a
camera for you and so we
introduced this convenience
constructor which
does the work for you.
You choose the ground point, you
choose the point where you want
to look from and
you pass that all
through the convenience
constructor.
cameraLookingAtCenterCoordinate:,
you give it the
ground point.
fromEyeCoordinate:, that's
the point on the ground
that you want to look from and
then you choose an altitude.
Here I'm choosing 100 meters.
Now that I have a camera
I can simply set it
on my map using the
set camera method.
And with that you can create
an image like this pitched view
of the Statue of Liberty.
[ Applause ]
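One way to picture the work that convenience constructor does for you is the underlying geometry: from a ground point, an eye point, and an eye altitude you can derive a distance, a heading, and a pitch. This is a sketch of how that math might go, not MapKit's actual implementation; the Earth radius and spherical formulas are standard assumptions.

```swift
import Foundation

struct Coord { var lat: Double; var lon: Double }

func degrees(_ r: Double) -> Double { r * 180 / .pi }
func radians(_ d: Double) -> Double { d * .pi / 180 }

// Great-circle ground distance in meters (haversine, mean Earth radius).
func groundDistance(_ a: Coord, _ b: Coord) -> Double {
    let dLat = radians(b.lat - a.lat), dLon = radians(b.lon - a.lon)
    let h = sin(dLat / 2) * sin(dLat / 2) +
            cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dLon / 2) * sin(dLon / 2)
    return 6_371_000 * 2 * asin(min(1, sqrt(h)))
}

// Initial bearing from the eye toward the ground point, 0 = north, 90 = east.
func heading(from eye: Coord, to ground: Coord) -> Double {
    let dLon = radians(ground.lon - eye.lon)
    let y = sin(dLon) * cos(radians(ground.lat))
    let x = cos(radians(eye.lat)) * sin(radians(ground.lat)) -
            sin(radians(eye.lat)) * cos(radians(ground.lat)) * cos(dLon)
    let b = degrees(atan2(y, x))
    return b < 0 ? b + 360 : b
}

// Pitch: 0 when the eye sits directly above the ground point, growing toward
// the horizon as the eye moves away horizontally.
func pitch(eye: Coord, ground: Coord, eyeAltitude: Double) -> Double {
    degrees(atan2(groundDistance(eye, ground), eyeAltitude))
}
```

So an eye directly above the ground point gives pitch zero, which matches the straight-down map you're used to.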
So there's one thing
that you definitely want
to be using MKMapCamera
for and that's saving
and restoring the
state of your map view.
When users leave your app
and then they come
back they really want
to be seeing exactly
what they left.
And so MKMapCamera
makes that really easy
by implementing NSSecureCoding.
Here's a couple examples
of how to use it.
First is saving state, the
second is restoring state.
This is saving.
You can use any archiver
you have
but here I'm using
NSKeyedArchiver
and saving the camera
off to a file.
And then to restore it, it's
one line of code again.
Use unarchiveObjectWithFile:
with that same file
and now I get my
MKMapCamera back.
So you can use this again
with any archiver you have
including the state restoration
APIs in UIKit.
That's exactly the
same API on OS X.
So you don't have to
learn anything else.
That's it.
[ Applause ]
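The real code archives the MKMapCamera itself through NSKeyedArchiver, since the camera adopts NSSecureCoding. As a platform-neutral sketch of the same save/restore round trip, here is a stand-in camera struct persisted with Codable; the type and file name are illustrative, not MapKit API.

```swift
import Foundation

// Stand-in for MKMapCamera's persistable state (the real class is archived
// directly via NSKeyedArchiver thanks to NSSecureCoding).
struct CameraState: Codable, Equatable {
    var latitude: Double, longitude: Double
    var altitude: Double, heading: Double, pitch: Double
}

// Saving: one line to encode, one to write.
func save(_ camera: CameraState, to url: URL) throws {
    try JSONEncoder().encode(camera).write(to: url)
}

// Restoring: read the same file back and decode.
func restore(from url: URL) throws -> CameraState {
    try JSONDecoder().decode(CameraState.self, from: Data(contentsOf: url))
}
```

When the user comes back, you hand the restored camera to the map view and they see exactly what they left.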
So with that I want to show
you an example of MKMapCamera
in action just so we
can understand how those
properties work.
And after that I want to show
you how you can use MKMapCamera
to add a little bit
of flair to your app.
So I have a very
simple app here.
It's just a map view
in a window.
And what I'm doing
is I'm setting it up
and then anytime
the map moves --
I'm just going to update another
window that's showing the
properties of the map.
And the way I'm doing
this is by looking
at the description
of the camera.
So if you're trying to debug
your app this is a great way
to do that.
If you think you're
setting the camera correctly
and it's not showing what you
want just get the description
from the camera and it's going
to show you all the
properties of the camera.
So, let's run this
-- here we go,
Xcode has moved the
window to the back.
Okay. So here we are
looking at Moscone
and as I pan the map you can see
the center coordinates changing.
It's updating.
Okay. As I pitch the
map you can see now
where the pitch is at zero.
That means that we're looking
straight down at the map
and as I increase my
pitch it's reflected here
in the description.
Now we're at 57 degrees.
And there's a maximum amount
of pitch; you can see
as I pitched it kind of bumped.
Now I'm at the maximum pitch.
As I zoom in that changes.
So now let's pitch some more.
Now I'm at 69 degrees.
So this is that kind of
limiting I was talking about.
Now, I'm using a gesture here to
rotate the map and you can see
that the heading is changing.
So we're going toward South,
it's getting closer
to 180 degrees.
Now I'm going to use
these zoom controls here.
Let me pitch out.
Use these zoom controls
to zoom in and zoom out,
and you can see the
altitude's changing.
And what I want you to notice
here is that as I zoom in
and zoom out by one zoom level
the altitude is roughly doubling
or halving.
And so you can use that in your
app too: to reproduce zooming in
or out by one zoom level,
just halve or double your
altitude as appropriate.
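That halve-or-double relationship fits in one function; the exponential form below is just a generalization of what he demonstrated, applied N levels at a time.

```swift
import Foundation

// Zooming in by one level roughly halves the camera's altitude, zooming out
// doubles it; so zooming by N levels scales altitude by 2^-N.
// Pass a negative level count to zoom out.
func altitude(afterZoomingIn levels: Double, from current: Double) -> Double {
    current * pow(2, -levels)
}
```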
Okay, so those are
the properties
of MKMapCamera in action.
Pretty simple.
Now let's take a look at
how we can use MKMapCamera
to make your app a little more
interesting when you're trying
to move the map around.
So here I have an app
that just does a search
and shows the result for it.
So let's just run this and
see what it looks like.
Okay here we are in Moscone
and I've got a search box.
Let's go to a park nearby
called Golden Gate Park,
search for that and great.
So I saw it, the search worked.
The map view took me
there but you know
that transition wasn't
doing a lot for me.
You know I can't really
tell what's happening.
Where am I going and
how do I get there
and where is Golden Gate Park
relative to Moscone West?
I have no idea.
So let's look at our code
and see what's going on.
So after I finish the search
I'm taking the top result
and I'm adding an
annotation to the map
that represents that
search result.
And then I'm using this
new API that we introduced
in iOS 7 called showAnnotations:
which is just going to show
that annotation on the map.
And so it's clearly
a no frills animation
when I say animated yes.
It just kind of took me there.
So what I'd like to do is see
how I can make this a little bit
better for my users.
What I want to do is implement
a bit of a two stage hop.
So I want the camera
to kind of lift out.
Maybe I'll have it look
at where we're going to go
and then zoom into there.
And you know what, I want to
add a little bit of pitch too
because I want the user to
see 3D buildings and want them
to just think it's really cool.
So how am I going to do this?
Well the first thing is I
need some infrastructure
to help me animate
through a series of cameras
so let's write that code.
All right.
Let me get my notes up.
Okay. It's really
easy to write code
when you can just
drag it in [laughter].
Okay, so I have a method
here called go to next camera
and as the name implies it's
going to go to the next camera
in the stack of cameras
I've created that are
at key intervals between
the start position
and an end position.
So this method proceeds
as you may expect.
If there's nothing left
on the stack we know
that we're done animating
and so we simply return.
Otherwise we pop the next camera
off of our stack and now we know
that we want to animate
to that camera.
And so here what we're seeing
is we're using the built
in animation APIs
on our platform --
on OS X that's
NSAnimationContext --
to animate to the next camera.
And the reason I can do that is
because the set camera method
on MKMapView is animatable.
And so I create my
NSAnimationContext
and I say run animation group.
That thing has a
block that comes back
with an NSAnimationContext
object.
And on that context
object I can tweak it.
So I can tell it
a custom duration
which I can compute based
on the transition I'm making
and I can also choose
a timing function
and here I'm using
ease in/ease out.
Now I have to be sure
to change allowsImplicitAnimation
to yes to let the map
view know that it
can pick up this animation
and animate that camera change.
So if you leave that out the
camera change won't animate.
So just set that to yes.
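The shape of that infrastructure can be sketched without AppKit. CameraSequencer here is a hypothetical stand-in: the cameras are just strings, and the animate closure runs synchronously, where the real code would wrap mapView.setCamera(_:) in NSAnimationContext.runAnimationGroup with allowsImplicitAnimation set to yes and chain from the delegate callback.

```swift
// Sketch of the "stack of cameras" infrastructure from the demo.
final class CameraSequencer {
    private var pending: [String] = []        // cameras yet to visit
    private(set) var visited: [String] = []   // cameras we "animated" to

    // Stand-in for the real animation; its completion stands in for the
    // regionDidChangeAnimated delegate callback.
    var animate: (String, () -> Void) -> Void = { _, done in done() }

    func run(through cameras: [String]) {
        pending = Array(cameras.reversed())   // so popLast() yields them in order
        goToNextCamera()
    }

    func goToNextCamera() {
        // Nothing left on the stack: we're done animating, so simply return.
        guard let next = pending.popLast() else { return }
        animate(next) { [weak self] in
            self?.visited.append(next)
            self?.goToNextCamera()            // chain to the next key camera
        }
    }
}
```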
Okay now that I fired
off this animation to go
to the next camera
I need to know
when that animation completes
so that I can then animate
to the next camera in our stack.
Well you might think of using
the completion handler here
but it's going to trip you up.
I know it will.
Don't use that completion
handler.
Instead you need to use
MKMapView's delegate method
which tells you when a
region change is completed.
So let's drag in that API here.
Drag it. Okay.
That's mapView:regionDidChangeAnimated:,
and this gets called anytime
a region change completes.
So even when the user interacts
with the map this
method gets called.
But when the user
interacts with the map,
the animated flag's
going to be no.
So we can use that flag
to tell if we're animating
or if the user's been
interacting with our map.
So I just check if we're
animated then go ahead
and go to our next camera.
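The filter in that delegate method is tiny but important: user gestures also trigger the callback, with animated set to no, and only your own animated camera changes should advance the stack. A sketch of just that check, with the next-camera step passed in as a closure:

```swift
// Stand-in for the body of mapView:regionDidChangeAnimated:.
// Only chain to the next camera when the change came from our own
// animation; user interaction arrives here with animated == false.
func regionDidChange(animated: Bool, goToNextCamera: () -> Void) {
    guard animated else { return }   // user gesture: don't keep animating
    goToNextCamera()
}
```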
Okay. Now that we have the
infrastructure in place
to animate between a series
of cameras we're ready
to implement a particular
animation.
So let's implement the
hop we were talking about.
I'm going to implement a
method called go to coordinate.
The reason is we're
doing a search,
we're picking the top
result off that search
and that search result lives at
a coordinate and we just want
to go look at that coordinate.
Okay. So the first
step here is to figure
out where we want to go.
I'm choosing a camera
that's looking
at our search result coordinate
from directly above it
at a height of 500 meters.
And then what I'm doing
is I'm modifying the pitch
to 55 degrees.
So now we'll be tilted
a little bit,
we'll be able to
see 3D buildings.
Okay, what's next?
Well I said we wanted
to do a two-stage hop
so zoom out, stop, zoom in.
So I need to find a
mid-point between the start
and end coordinate
to animate to.
So here I do a
little bit of math.
Okay. I take our starting
point and our ending point
and I find the average
point in the middle.
I convert that to a coordinate.
And then I pick some
arbitrary altitude.
I know I want to zoom out
and here I've just
picked a magic number four
and I'm just going to take the
altitude we want to travel to,
multiply it by four and that's
where I want our
intermediate altitude to be.
Okay now that I have those
properties I can create a
midpoint camera.
So I want it to look at
where we're going to go
from the middle point between
the start and end points
and I want it to sit
at our mid altitude.
That's something high above.
Now I just create our stack of
cameras so I add the mid camera
and our endpoint camera and
I kick off the animation
by calling go to next camera.
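The midpoint math he just walked through can be condensed into a small helper. HopCamera is a hypothetical stand-in type; averaging raw latitude and longitude is fine for nearby points like these, while the demo averages projected map points, which also handles the meridian. The factor of four is the demo's own magic number.

```swift
// Build the two-stage hop: a midpoint camera halfway between start and end
// at four times the destination altitude, then the end camera itself.
struct HopCamera: Equatable {
    var lat: Double, lon: Double, altitude: Double
}

func hopCameras(from start: HopCamera, to end: HopCamera) -> [HopCamera] {
    let mid = HopCamera(lat: (start.lat + end.lat) / 2,
                        lon: (start.lon + end.lon) / 2,
                        altitude: end.altitude * 4)   // zoom out, then back in
    return [mid, end]
}
```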
Okay, so that's our
animation and now we just want
to replace this animation
we don't
like with the animation
we do like.
All right, let's
see how it works.
All right, here we are
at Moscone West again.
Now let's type Golden Gate Park.
That's pretty sweet [laughter].
[applause] I like that a lot.
The whole map swings around.
Let's go back to Moscone.
That's awesome and look at
those 3D buildings pop up.
[ Laughter & Applause ]
Okay, so now let's do a
little bit more testing.
I mean I wish we were done and
we could just ship this thing,
but I'm pretty sure that hop
doesn't work all the time.
So let's try going
to the Metreon.
That's right across the street.
Yeah, that seems a
little bit heavy.
Now Target which
is right next door.
You know I'm pretty
sure if a user saw
that, they would wonder like,
"Why are we zooming out
and zooming in and
tilting and..."
It's just too much.
And let's try going
really far away.
Let's go to New York City.
I don't even know
what happened there.
I think New York is
south of San Francisco.
I'm not really sure.
Okay. Clearly one kind
of transition doesn't
necessarily work
for all distances.
So let's modify our program so
that we can account for that.
Okay let's stop this.
All right, the first
thing to figure
out is how do we build a filter?
So I think what I
want to do is figure
out what those distances are and
then build a filter around that.
So let's go to our go
to coordinate method.
This is kind of a funnel point
where we can start looking
at what's happening.
And let's drop in some
code here to figure
out how far we're
trying to travel.
Okay. So I pull the camera off
the map view before we begin the
animation and now I have a start
camera where we're starting from
and I've already computed
an end point camera
and using properties on those
cameras I can populate two
CLLocation objects.
One for the start location
and one for the end location.
And once I have two CLLocation
objects I can use the API
on CLLocation to
get the distance
between those two points.
This is the
distanceFromLocation: API.
And now I'm just
going to log that out
so we can inspect how
far we're traveling
and then decide what we
want to do from there.
So here we are back at Moscone
West after I ran the project.
And let's go bring up our
console, switch back to our app.
Let's go to Golden Gate Park.
We know we like that.
Okay, so that's about
7,000 meters
and that feels pretty good.
Let's go back to Moscone west
and we can try a
smaller animation.
Okay, let's go to the Metreon.
All right, that's
just 161 meters.
That's way shorter
than 7,000 meters.
That's good because it
means it's easy to filter.
And Target is just 60 meters.
Okay how about New York City?
Some huge number.
I think that's like
4 million meters.
Okay that's a long way away.
So this is great.
This means that we can
filter between these.
But that's kind of
an extreme example.
Let's try something in between
Dallas and San Francisco
or New York and San Francisco.
Let's try Dallas.
2 million meters.
That also didn't feel very good.
Okay so I think we have a basis
on which we can start filtering.
So let's go back and figure
out how we can filter this.
Well right now our animation is
kind of stuck inside of this go
to coordinate method
so I think what I want
to do is refactor
this to make room
for having a few
different animations.
So let's drag in
a new method here.
This is called perform
short camera animation
and all this does is
produce the same animation
that we've already built.
So this is exactly the same code
we just wrote and what I'm going
to do is delete the code
that is inside the go
to coordinate method.
Okay. Now that we have
that I think I want
to do a different
kind of animation
when we're going all the
way to New York City.
I think what would be better is
if we zoomed out and then panned
over to New York City
and then zoomed in.
I think that'll produce
something that's a little bit
more logical to the user.
So let's drag in a method here;
it's called perform
long camera animation
and that's the animation we want
to use for long transitions.
So how does it work?
Well we get the starting
camera off of the map view.
Now we have a start and end
point just like we were using
to inspect the distance
we were traveling.
And again I can go compute the
distance between the two points
and now I kind of want to just
use that distance to figure
out how far above the
map I want to zoom out.
Because if you're
going to let's say L.A.
from San Francisco
it's pretty different
than going to New York City.
So maybe I don't want to zoom
out as far if I'm just going
to L.A. Okay, so what I'm going
to do here is the naive thing.
I'm just going to use the
distance between the two points
to choose -- to lift
myself up off the map.
So I'm just going to
turn that on its side
and lift off the map
by the same distance.
So now I need to make
two midpoint cameras.
So that means I have now two
key intervals between the start
and end where I want
to animate through.
So I go ahead and
make those cameras.
The first one is looking
at our start point and from
above the start point.
So I'm just zooming out.
I'm going to zoom out to our
extreme altitude and then I want
to pan over to our destination.
So now I have a second
camera and it's going to look
at our second coordinate, our
end coordinate from directly
above it at the same
altitude so now
that means I'm just
going to pan over.
Then just like our short
animation I create a stack
of cameras.
So our first camera, second
camera and then the end camera
and then I can just call our
infrastructure that will animate
through those cameras
go to next camera.
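The long transition's three key cameras (up, over, down) are easy to construct once you have the travel distance. This is a sketch with a stand-in camera type; using the distance itself as the extreme altitude is the naive choice the demo makes, turned "on its side."

```swift
// Build the long transition: zoom straight up over the start point to an
// extreme altitude, pan across to the destination at that altitude, then
// descend to the final camera.
struct LegCamera: Equatable { var lat: Double, lon: Double, altitude: Double }

func longAnimationCameras(from start: LegCamera, to end: LegCamera,
                          distance: Double) -> [LegCamera] {
    let up   = LegCamera(lat: start.lat, lon: start.lon, altitude: distance)
    let over = LegCamera(lat: end.lat,   lon: end.lon,   altitude: distance)
    return [up, over, end]
}
```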
Okay, so what's left
is back in my go
to coordinate method
I want to filter
out which animation I'm going
to use based on the distance.
So we've already
computed the distance.
I'm going to drag in some code
that filters based
on that distance.
Here it is, just a
bunch of if statements.
If the distance is less
than 2,500 meters I think I'm
just going to use the built
in MapKit animation we know
as a no frills animation
and I think that's appropriate
for really short distances.
For medium-sized distances
I think I want
to use our hop animation so
I chose a filtering range
of 50,000 meters and then
for anything else I consider
that pretty long
and I think I want
to use our multi-stage
transition
to give the user a
little bit more context.
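That bunch of if statements reduces to a small classifier using the thresholds from the demo: under 2,500 meters use MapKit's built-in animation, under 50,000 meters use the two-stage hop, and anything beyond that gets the multi-stage transition.

```swift
// Pick a transition style based on how far the camera has to travel,
// using the demo's filtering thresholds.
enum Transition { case builtIn, hop, multiStage }

func transition(forDistance meters: Double) -> Transition {
    if meters < 2_500 { return .builtIn }     // right across the street
    if meters < 50_000 { return .hop }        // across town
    return .multiStage                        // across the country or ocean
}
```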
Okay. Let's run this
and see how it works.
Cool, so here's Moscone West.
Let's make sure our Golden Gate
Park transition still works.
Yeah it does.
Nice. All right, let's
go back to Moscone West
and let's test the
Metreon one more time.
So here we are at Moscone West.
Let's go to the Metreon.
Yeah that's great.
For a short transition I mean I
think that's really appropriate.
Now let's try New York City.
I think that's much better.
[ Applause ]
Now the user knows
what's going on.
Right? We saw that New York City
was to the east of San Francisco
and if I go back to San
Francisco, yeah, great.
I'm still in the U.S. and I
went west to get to San Francisco
from New York City and
that all makes sense.
Let's try Tokyo.
That's really far away.
Boom, all over the
Pacific Ocean.
And you notice how instead
of going the long way
around the world MapKit
was smart enough to go
over the Pacific Ocean which is
how a plane might go [laughter].
Great, so that's all
I'm going to do today
but I think you can
see that there's a lot
of opportunity here for you to
play with MKMapCamera and play
with the transitions
in your app,
and make your app feel
a little bit different
from all the other
apps out there.
So I hope you'll take
advantage of that.
So that was cinematic
camera motion.
What did we learn?
We saw that the crux of this is
that MKMapView's method
setCamera: is animatable.
So you can call that within
the normal animation context
on your platform.
So if you're on iOS
using UIView animations
and in OS X using
NSAnimationContext.
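A minimal sketch of both platforms' versions of that, assuming a mapView property; the destination and eye coordinates are placeholders.

```objc
// setCamera: is animatable inside the platform's normal animation context.
MKMapCamera *camera =
    [MKMapCamera cameraLookingAtCenterCoordinate:destination
                               fromEyeCoordinate:eye
                                     eyeAltitude:1000];

#if TARGET_OS_IPHONE
// iOS: change the camera inside a UIView animation block.
[UIView animateWithDuration:2.0 animations:^{
    [self.mapView setCamera:camera];
}];
#else
// OS X: change it inside an NSAnimationContext group.
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    context.duration = 2.0;
    context.allowsImplicitAnimation = YES;
    [self.mapView setCamera:camera];
} completionHandler:nil];
#endif
```

Either way, treat the mapView:regionDidChangeAnimated: delegate callback, not the animation framework's completion handler, as the signal that the move finished.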
We also saw that we should
use mapView:regionDid-
ChangeAnimated:.
We want to use that
callback to figure
out when an animation
has completed.
Don't use the completion
handler that's part of the
built-in animation framework.
And lastly we saw
that it's pretty clear
that there isn't
necessarily one answer
for every type of transition.
Right? If we're just going a
few meters you really don't want
to be zooming all the way out
and then zooming
all the way back in.
It's a little bit long-winded.
So play with it.
So that's MKMapCamera.
It's the one API you need to
know to take advantage of 3D.
There isn't anything else.
At minimum you should
be using it to save
and restore the viewport
state for your map view.
And clearly you can use it to
add a special touch to your app.
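Saving and restoring that viewport state can be as simple as archiving the camera, since MKMapCamera supports coding. A sketch, with an arbitrary defaults key:

```objc
// Persist the current viewport across launches by archiving the camera.
- (void)saveViewport
{
    NSData *data =
        [NSKeyedArchiver archivedDataWithRootObject:self.mapView.camera];
    [[NSUserDefaults standardUserDefaults] setObject:data
                                              forKey:@"SavedCamera"];
}

- (void)restoreViewport
{
    NSData *data = [[NSUserDefaults standardUserDefaults]
                       objectForKey:@"SavedCamera"];
    if (data) {
        MKMapCamera *camera = [NSKeyedUnarchiver unarchiveObjectWithData:data];
        if (camera) {
            // Restore without animation so the map opens where the user left it.
            self.mapView.camera = camera;
        }
    }
}
```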
Now sometimes rotation
and pitching may not be the
right answer for your app.
If you're showing something
like weather data on top
of a map maybe that looks just
really weird if you're pitched
and buildings are showing
through clouds or something.
Well in previous iOS releases
we've given you the control
to disable and enable
gestures selectively
and in iOS 7 we're
doing the same thing.
We've introduced rotateEnabled
and pitchEnabled,
so you can disable
these properties
if that's what you
need for your app.
Now as I mentioned these are
all on by default in iOS 7.
That's because users who come
out of the map app, Maps.app
and see a map view in your
app really expect to be able
to rotate it and pitch it just
like they could in the Maps.app.
So think carefully
before you disable them.
Now some devices don't
support pitching.
In that case the pitchEnabled
property will always
return NO.
So that means if you have a
feature of your app that depends
on pitching, just be
sure to enable it only
if the pitchEnabled
property returns YES.
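That check might look like this. flyoverButton is a hypothetical control that only makes sense in 3D, not anything from the talk.

```objc
// Gate a pitch-dependent feature on actual hardware support.
self.mapView.rotateEnabled = YES;  // the iOS 7 default
self.mapView.pitchEnabled = YES;   // stays NO on devices that can't pitch

// Only expose the 3D-only feature when pitching actually works:
// reading the property back reflects what the device supports.
self.flyoverButton.hidden = !self.mapView.isPitchEnabled;
```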
So that's all the new API.
That's 3D.
Now I want to talk to you about
producing snapshots of a map.
Now why would you
want to do this?
Why do you want to create
a static snapshot of a map?
Well, there's some cases where
a fully interactive map just
doesn't make sense.
You may have seen an app like
this; it's a table view app.
You're scrolling it, it's full
of MKMapViews and it's jittery;
the maps are loading in pieces
and it doesn't quite feel right.
I mean it's just not what
you want your users to see.
Well, with snapshots you can
create map images off the
main thread.
You get smooth table
view scrolling
and the whole map
comes in at once.
And because it's just an image
you can do all the effects you
want to do with images, like
fading it in or applying filters
or anything like that.
The other reason you may want
to use them is the same reason
we use them in the Maps.app.
To implement a printing
feature in your app.
So how do you create a Snapshot?
It's just three steps.
The first step is to create an
options object and configure it.
The second step is to create
a snapshotter object
and give it the options
object you configured.
And the third step is to kick off
the async snapshot task and wait
for it to complete.
The options object has
a few properties on it
that you need to configure.
The first one is the
image size, the size
of the image you want.
Then you can tell it which part
of the map you want to see.
You can use the same
properties you're used to using
on MKMapView, that's
setVisibleMapRect: and setRegion:.
You can also use a camera which
means you can produce pitched
and rotated snapshots.
And then you can
specify a map type
and we support all
the same map types
that we support in MKMapView.
So that's standard,
satellite and hybrid.
And lastly on iOS only you
need to specify a scale.
The scale property lets you
support retina displays as well
as non-retina displays.
So for non-retina displays
you want to set this to 1,
for retina displays you
want to set it to 2.
You can always grab
this off of the view
that you're about to display.
So [[UIScreen mainScreen]
scale] will give you an
appropriate scale.
So here's what the
code looks like.
There's step one.
We created our options object.
Here I'm setting it to
a size of 512 points
by 512 points, a square image.
Then on iOS I need
to set a scale here.
I'm just using the
main screen scale.
That's the scale of the
display that's on your device.
So if you intend on showing this
image on a different screen,
maybe it's an AirPlay screen,
you want to make sure to set
that scale to what's
appropriate for that screen.
You can use the UIScreen
API to figure that out.
Then I'm setting a
camera to choose what part
of the map I want to see.
Then I'm setting
MKMapTypeStandard.
Step two, I created my snapshotter.
That's alloc/init; I think
you can all do that.
And the last step is
kick off async task
and wait for it to complete.
MapKit's going to
call your block back
when the async task is complete
and your Snapshot is ready
or if we get an error.
Generating an image of a map
just like using MKMapView
or the Maps.app requires
a network connection
so that can obviously
fail if you're
out in the boonies somewhere.
So be sure to check for an error
and if there is an error
have a backup plan.
Have a placeholder image or
something you can show your user
in case the map wasn't
able to load.
But if you don't have an error
object then we know the Snapshot
was generated successfully
and the image
of the map itself is sitting
inside the MKMapSnapshot object
under the image property.
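Assembled, the three steps look something like this. A sketch: coordinate, placeholderImage, and self.imageView are placeholders, not anything from the talk.

```objc
// Step one: create and configure the options object.
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.size = CGSizeMake(512, 512);                 // a square image
options.scale = [[UIScreen mainScreen] scale];       // iOS only: 1 or 2
options.camera =
    [MKMapCamera cameraLookingAtCenterCoordinate:coordinate
                               fromEyeCoordinate:coordinate
                                     eyeAltitude:2000];
options.mapType = MKMapTypeStandard;

// Step two: create the snapshotter with those options.
MKMapSnapshotter *snapshotter =
    [[MKMapSnapshotter alloc] initWithOptions:options];

// Step three: kick off the async task and wait for the callback.
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot,
                                          NSError *error) {
    if (error) {
        // No network, etc. — always have a backup plan.
        self.imageView.image = placeholderImage;
        return;
    }
    self.imageView.image = snapshot.image;
}];
```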
So I talked about printing.
If you're implementing
printing you might find yourself
implementing a method like this:
drawContentForPageAtIndex:inRect:.
So okay, you need a Snapshot.
Aroon told you that on stage.
So step one, create
your options.
Step two, create your snapshot.
Step three, kick
off that async task.
So the thing about
printing is that you have
to provide all the content for
your printout synchronously.
You need to draw all of that
right there in that method,
but if you have to kick off an
async task, how are you going
to do that?
Well, the steps for
producing a map snapshot
for printing are a
little bit different.
A little bit longer.
But it's not too hard.
The first step is
configure options just
like you normally would.
Create a snapshot just
like you normally would.
The third step is to
create a semaphore.
This is something that's
going to allow us to wait
for a resource to
become available.
The fourth step is you need
to pick a dispatch queue
where you want to
receive a callback
when the snapshot is
done being generated.
That's because you're going to
block the thread that you're on
and if you block that thread
while waiting for something
to come back on that thread I
think we all know that's called
a deadlock.
You don't want that.
Then you create a couple
of result variables
where you can stash the
results of the async tasks
to use after it's generated.
Then you start the snapshotter
as usual and then wait
for that snapshotter
to complete.
So let's look at some code.
Here it is.
Let's just assume we
have the options object
and the snapshotter
object that we created
in the last few slides.
The only difference
is that for printing
on iOS you probably want
to set the scale to 2.
That's because most printers
are fairly high resolution
and so you want kind of
retina quality graphics
for your printout.
All right, into the differences.
The first step we're doing here
is using dispatch semaphore APIs
to create and initialize
the semaphore to zero.
The reason we initialize
it to zero is
because we don't have
any resources yet.
We need to wait for a
resource to become available.
Then we need to pick a queue.
Generally any global
queue will work,
so you can use the
dispatch_get_global_queue API
to pick a queue to run on:
a high-, low-, or
default-priority queue.
Then we can create a
couple of variables
where we want to
stash the results.
We need to use the
__block modifier here
because we're going to
modify these variables inside
of a block.
So we need a place to stash
the map Snapshot and a place
to stash the error just
in case there is one.
Then we kick off our
async task using a variant
of the start API.
Here we're using the
startWithQueue:completionHandler:
API that lets us
tell the snapshotter
that we want the callback
not on the main thread
where we're starting this
request but on a different queue
where we're not blocking.
Then on that same
thread we're going
to use the
dispatch_semaphore_wait API
to just wait for that
resource to become available.
Okay. After some time
that Snapshot will finish
or it'll error out and we
can stash those two variables
where we made space and then
call the dispatch_semaphore_signal
API in order to indicate
that the snapshot is finished.
So that will unblock
our main thread
and now we can handle
the snapshot just
like we normally would.
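Put together, the printing-safe pattern looks roughly like this; snapshotter is a configured MKMapSnapshotter and rect stands in for the page rect handed to your print method.

```objc
// Results we'll fill in from inside the block.
__block MKMapSnapshot *snapshotResult = nil;
__block NSError *errorResult = nil;

// Semaphore starts at 0: no resource available until the callback fires.
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

// Receive the callback off the main thread so waiting here can't deadlock.
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

[snapshotter startWithQueue:queue
          completionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    // Stash the results, then signal that the resource is available.
    snapshotResult = snapshot;
    errorResult = error;
    dispatch_semaphore_signal(semaphore);
}];

// Block this thread until the snapshot finishes (or errors out).
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

if (errorResult == nil) {
    // Draw synchronously into the print context, as printing requires.
    [snapshotResult.image drawInRect:rect];
}
```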
Check for an error, otherwise
you know we have an image.
So here's an example
of a snapshot.
It's great.
It looks just like MKMapView and
that's probably what you wanted.
Now you probably don't want
just a plain map image though.
I mean, you probably
want to put something
on it, maybe like pins.
So I want to show
you how to do that.
Here's a sample project that's
pretty bare bones right now.
What I've got is two
points that I want
to show in a static snapshot.
And for the purpose of this demo
I just want to save the snapshot
out and I'm going to
show you the result.
So I have London, I have
Paris, two coordinates.
I also have a little
helper method I wrote
which given two map points is
just going to return a rectangle
around those two map points.
Okay, so let's get started.
The first thing I want to
do is figure out what part
of the world I actually
want to show in my snapshot.
So let's make a region.
What I do is I make a
list of my two points,
pass it to my helper method.
It's going to return a rectangle
and now I have an
MKCoordinateRegion
so I can specify what part
of the world I want to see.
Step two is create
my options object.
So I created my options object.
I give it the MKCoordinateRegion
I just decided to show.
Then I want to add
some rotation to that
and so what I do is I copy the
camera off my options object.
I add a heading which will
rotate the cardinal direction
of a map and I set it back
on the options object.
Then I chose the
size of the image.
Here is one that is
relatively large.
1680 points by 825 points.
Okay now I'm ready to
create a snapshotter.
So I created my snapshotter
object
and then I kick off
my async task
and I've got a block here
ready to receive the results.
So the first step is of
course handle an error
and I don't have an error
image to show you guys
so I'm just going to log
if there's an error
and I'll be very sad.
And otherwise we know we've got
the images so go ahead and pull
that image off that
snapshot object.
And now I'm going to draw a
couple pins on this and I want
to show you something neat here.
We can reuse the
MKPinAnnotationView API
to draw the pin
on our static snapshot.
MKAnnotationViews all have
an image property on them
that you can use to specify
an image to show inside
of the MKAnnotationView.
MKPinAnnotationView uses that
property to set a pin image
and so you can just pull that
image right on out of there.
So here I am using the pin.image
property to get the pin image.
Okay. Now I have a pin image
and I have a map image.
I need to figure out where
on the image to draw the pin
so that they sit right on
top of London and Paris.
So MKMapSnapshot has an
API called pointForCoordinate:
which will convert a latitude
and longitude into a point
in the coordinate
system of your image.
So I call those methods
for both London and Paris
to get the point of the image
for both London and Paris.
And those points sit right
on top of London and Paris.
But pins are actually
a rectangular image
and I don't want the, you
know, top left part of the pin
to be sitting on London and
Paris because then it's going
to look like the bottom of
the pin is somewhere off
in those respective countries.
So what I need to do is offset
that projected point to take
into account the size of the pin
image, not just a
single point.
MKAnnotationView also
has a centerOffset property
and MapKit normally uses this
to offset your annotation views
such that a particular
anchor point
in your annotation view
sits at the point of the map
that you're trying to annotate.
Now because you're doing
the drawing you need to take
into account the center offset.
So here we are.
We get the center
offset for a pin
and on OS X NSViews support
a flipped orientation.
That means the coordinate
system of the view can originate
in the top left instead
of the bottom left
which is what's normal in OS X.
So you need to account
for that as well on OS X.
If the view is in a flipped
coordinate system simply invert
that center offset.
So multiply y by negative
one and you're done.
Okay so now we adjust
our projected point
so that the center
of the annotation view
sits on top of that point.
Then we adjust for the
center offset property
on MKAnnotationView to make sure
that the bottom of that pin sits
on the point for
London and for Paris.
Okay now we're ready to draw.
So you can use your normal
system drawing APIs on OS X.
It looks a little bit
different than on iOS.
We lock focus on the map image;
we draw a pin on top of London
and draw the same pin
again on top of Paris.
Then we unlock focus
and we're done.
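The OS X drawing portion of that demo might be sketched like this; londonCoordinate and parisCoordinate stand in for the demo's two coordinates, and snapshot is the MKMapSnapshot from the completion handler.

```objc
// Reuse MKPinAnnotationView's artwork instead of digging up a pin image.
MKPinAnnotationView *pin =
    [[MKPinAnnotationView alloc] initWithAnnotation:nil
                                    reuseIdentifier:nil];
NSImage *pinImage = pin.image;
NSImage *mapImage = snapshot.image;

CLLocationCoordinate2D coordinates[2] =
    { londonCoordinate, parisCoordinate };

[mapImage lockFocus];
for (int i = 0; i < 2; i++) {
    // Convert latitude/longitude into the image's coordinate system.
    NSPoint point = [snapshot pointForCoordinate:coordinates[i]];

    // Center the pin image on the projected point...
    point.x -= pinImage.size.width / 2.0;
    point.y -= pinImage.size.height / 2.0;

    // ...then shift by centerOffset so the pin's tip hits the point.
    // (Negate the y offset first if you're in a flipped coordinate system.)
    point.x += pin.centerOffset.x;
    point.y += pin.centerOffset.y;

    [pinImage drawAtPoint:point
                 fromRect:NSZeroRect
                operation:NSCompositeSourceOver
                 fraction:1.0];
}
[mapImage unlockFocus];
```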
Now I'm ready to write
this image out to a file.
Drop in some code here.
And let's run it and see
what the image looks like.
Hit play and it's
running, and it's done.
It's pretty quick.
And we can just open up
this file in Preview.
Boom there you got it.
[ Applause ]
All right.
Now while we're here --
I mentioned that the API is
exactly the same as on iOS,
so I have this already
implemented on iOS
and I just want you
to take a look at it.
It's almost exactly the same.
Here's that same helper method,
application did finish
launching.
I'm doing a little work
to show our window.
I started out with the same
London and Paris coordinates,
pick a region, set
up some options --
they're exactly the same --
create my snapshotter, same API,
start the async task,
same API, handle errors.
Now I'm using UIImage
instead of NSImage.
That's a platform difference.
But I can still use
MKPinAnnotationView
to drop pins on my image.
I still have to account
for offsets.
I don't have to worry about
flipped coordinate systems
because UIView has no notion of them.
I do the same math to
account for the offset.
I'm using UIKit's drawing APIs
to draw pins on top of my image
and then I'm using the same
APIs to write that out to a file
and I'm going to
get the same image.
So same API on both platforms.
So that's creating a Snapshot.
[ Applause ]
So what did we see there?
As you saw, iOS and OS X
share really similar APIs.
Where there are differences,
it's mostly due
to platform differences:
CGPoint on iOS
versus NSPoint on OS X,
UIImage on iOS
versus NSImage on OS X.
We saw that you can use
annotation view classes
to draw annotation
views on your maps.
So you don't have to rebuild
anything, you don't have
to go digging through
the iOS SDK trying
to find a pin image
to redraw it.
You can just use
MKPinAnnotationView
to draw pins on your snapshot.
So, to recap.
Why do you want to
use snapshots?
Well you want to use
them for performance.
We saw those videos at the
beginning where if you try
to stick an MKMapView inside
of a table view and then scroll
that table view it really
didn't perform like we liked.
Instead use snapshots; it's
going to be a lot better.
We saw that we want to use
snapshots for printing.
It's really the only reliable
way to generate a snapshot
of a map while blocking
the main thread.
And if you're trying
to generate snapshots today
by using the renderInContext:
API on CALayer, just throw
that code away and
use snapshots instead.
So basically anytime
you want an image
of a map try using
snapshots first.
It's probably going
to do what you want.
So that's putting Map
Kit in Perspective.
We introduced Map
Kit on OS X this year
and the API is pretty
much the same.
We saw that if you just
recompile your app you're going
to get pitching and rotation for
free and it mostly just works.
But there are some
changes to our API
that may affect you if you're
really using some corner
of Map Kit, and so you just need
to change your app a little
bit to account for that.
Then make sure you
adopt MKMapCamera
to take full advantage
of 3D in your app.
Lastly, use MKMapSnapshotter
anytime you want an image
of a map but you don't
need the interactivity
of a full blown MKMapView.
We have a wonderful evangelist
to help answer your questions
if you need any help after WWDC.
His name is Paul Marcos
and there's his email
address up there.
We also have a whole bunch
of documentation you can
find inside of Xcode.
It's a long URL but if you just
go to Xcode help and search
for MKMapView you're
going to find that.
We also have a Location
Awareness Programming Guide
that's chock-full of
goodies on how to use Map Kit
and Core Location together.
And there's also the
developer forums.
I highly encourage
you to use those.
A lot of the Map Kit
engineers hang out in there
and we're there to answer
your questions and help you.
So we had a session
this morning.
It was a great session.
There's a lot of new
things in Map Kit
that I couldn't go
over here today.
We introduced a ton of
new APIs on overlays.
We also introduced search
APIs and directions APIs.
So if you missed that
session catch the video later.
It's really worth your time.
That's it.
Thank you for your time.
[ Applause ]
[ Silence ]