Transcript
[ Music ]
[ Applause ]
>> So, good morning.
Welcome to our session on What's
New in ARKit 2.
My name is Arsalan, and I am an engineer on the ARKit team.
Last year, we were really excited to put ARKit in your hands as part of the iOS 11 update.
ARKit has been deployed to
hundreds of millions of devices,
making iOS the biggest and most
advanced AR platform.
ARKit gives you a simple-to-use interface to a powerful set of features.
We have been truly amazed by
what you have created with ARKit
so far.
So let's see some examples from the App Store.
Civilisations is an AR app that
brings historical artifacts in
front of you.
You can view them from every
angle.
You can also enable x-ray modes to have more interaction.
You can bring them into your backyard, and you can even bring them to life, exactly as they looked hundreds of years ago.
So this is a great tool to
browse historical artifacts.
Boulevard AR is an AR app that lets you browse works of art in a way that's never been possible before.
You can put them on the ground or on a wall, and you can get really close to them and see all the details.
It's just a great way to tell the story of the art.
ARKit is a fun way to educate
everyone.
Free Rivers is an app that places an immersive landscape in front of you.
You can follow the flow of a river going through the landscape and see communities and wildlife.
You can see how human activity, such as construction, impacts those communities and wildlife.
So it's a great way to educate everyone about keeping the environment green through sustainable development.
So those were some of the
examples.
Do check out a lot more examples on the App Store.
So some of you are new to ARKit,
so let me give you a quick
overview of what ARKit is.
Tracking is the core component
of ARKit.
It gives you position and
orientation of your device in
physical world.
It can also track objects such
as human faces.
Scene understanding enhances
tracking by learning more
attributes about the
environment.
So we can detect horizontal
planes such as ground planes or
tabletops.
We can also detect vertical
planes.
So this lets you place your
virtual objects in the scene.
Scene understanding also learns
about lighting conditions in the
environment.
So you can use lighting to
accurately reflect the real
environment in your virtual
scene.
So your objects don't look too
bright or too dark.
Rendering is how the user actually sees the augmented reality scene on the device and interacts with it.
So ARKit makes it very easy for
you to integrate any rendering
engine of your choice.
ARKit offers built-in views for
SceneKit and SpriteKit.
In Xcode, we also have a
Metal template for you to
quickly get started with your
own augmented reality
experience.
Note also that Unity and Unreal have integrated the full feature set of ARKit into their popular game engines.
So you have all these rendering
technologies available to you to
get started with ARKit.
So let's see, what is new this
year in ARKit 2.
Okay. So we have saving and loading maps, which enables the powerful new features of persistence and multiuser experiences.
We are also giving you
environment texturing so you can
realistically render your
augmented reality scene.
ARKit can now track 2D images in
real-time.
We are not limited to 2D.
We can also detect 3D objects in
a scene.
And last, we have some fun
enhancements for face tracking.
So let's start with saving and
loading maps.
Saving and loading maps is part
of world tracking.
World tracking gives you the position and orientation of your device as a six-degrees-of-freedom pose in the real world.
This lets you place objects in
the scene such as this table and
chair you can see in this video.
World tracking also gives you accurate physical scale, so you can place your objects at the correct scale.
So your objects don't look too
big or too small.
This can also be used to
implement accurate measurements,
such as the Measure app we saw
yesterday.
World tracking also gives you 3D feature points, so you know some of the physical structure of the environment, and this can be used to perform hit testing to place objects in the scene.
In iOS 11.3, we introduced
relocalization.
This feature lets you restore your tracking state after your AR session was interrupted.
This could happen, for example, when your app is backgrounded or when you're using picture-in-picture mode on iPad.
So relocalization works with a
map that is continuously built
by world tracking.
So the more you move around the environment, the more it is able to extend the map and learn about different features of the environment.
So far this map was only available as long as your AR session was alive, but now we are making this map available to you.
In the ARKit API, this map is given to you as an ARWorldMap object.
ARWorldMap represents a mapping of physical 3D space, similar to the visual on the right.
We also know that anchors are
important points in physical
space.
So these are the places where
you want to place your virtual
objects.
So we have also included plane anchors by default in ARWorldMap.
Moreover, you can also add your custom anchors to this list, since it is a mutable list.
So you can create your custom anchors in the scene and add them to the world map.
For your visualization and debugging, the world map also gives you the raw feature points and the extent, so you know the real physical space you just scanned.
More importantly, ARWorldMap is a serializable object, so it can be serialized to any data stream of your choice, such as a file on the local system or a shared network location.
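In code, that serialization might look roughly like this (a minimal sketch; the file URL is just an example, and NSKeyedArchiver/NSKeyedUnarchiver are one possible choice of data stream):

```swift
import ARKit

// Sketch: archive an ARWorldMap to a file and read it back later.
func save(worldMap: ARWorldMap, to url: URL) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try data.write(to: url, options: .atomic)
}

func loadWorldMap(from url: URL) throws -> ARWorldMap? {
    let data = try Data(contentsOf: url)
    return try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
}
```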
So this ARWorldMap object enables two powerful new kinds of experiences in ARKit.
The first is persistence.
So just to show you an example of how it works: we have a user starting world tracking, and he places an object in the scene through ARKit hit testing.
And before he leaves the scene, he saves the world map on the device.
Then, sometime later, the user comes back, and he is able to load the same world map, and he will find the same augmented reality experience.
He can repeat this experience as many times as he wants, and he will find these objects on the table every time he starts his experience.
So this is persistence in world
tracking
[applause].
Thank you.
[applause]
ARWorldMap also enables
multiuser experiences.
Now your augmented reality
experience is not limited to a
single device or single user.
It can be shared with many
users.
One user can create a world map and share it with one or more users.
Note that the world map represents a single coordinate system in the real world.
What that means is that every user will share the same working space, and they are able to experience the same augmented reality scene from different points of view.
So this is a great new feature.
You can use World Map to enable
multiuser games, such as the one
we saw yesterday.
We can also use ARWorldMap to
create multiuser shared
educational experiences.
Note that we are putting the ARWorldMap object in your hands, so you are free to choose any technology to share it with every user.
For example, for sharing you can use AirDrop or Multipeer Connectivity, which relies on a local Bluetooth or Wi-Fi connection.
So it means that you don't
really need an internet
connection for this feature to
work.
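As one hedged example of that sharing path, here is a minimal sketch that sends an archived world map over Multipeer Connectivity; it assumes you have already created and connected an MCSession elsewhere in your app:

```swift
import ARKit
import MultipeerConnectivity

// Sketch: send an archived world map to all connected peers over an existing MCSession.
func send(worldMap: ARWorldMap, over session: MCSession) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                requiringSecureCoding: true)
    try session.send(data, toPeers: session.connectedPeers, with: .reliable)
}
```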
So let's see how the ARKit API makes it very easy for you to retrieve and load a world map.
On your ARSession object, you can call getCurrentWorldMap at any point in time.
This method comes with a completion handler in which it returns an ARWorldMap object.
Note that it can also return an error in case the world map is not available, so it's important to handle this error in your application code.
So once you have an ARWorldMap, you can simply set the initialWorldMap property on your world tracking configuration and run your session.
Note that this can be changed dynamically as well, so you can also reconfigure the AR session by running a new configuration.
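Put together, both sides of that flow might look roughly like this (a minimal sketch; the error handling and run options are illustrative):

```swift
import ARKit

// Sketch: retrieve the current world map from a running session.
func retrieveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            // The map may not be available yet; handle the error in your app.
            print("World map unavailable: \(error?.localizedDescription ?? "unknown")")
            return
        }
        // Serialize or share worldMap here.
    }
}

// Sketch: start (or reconfigure) a session from a previously obtained map.
func runSession(_ session: ARSession, with worldMap: ARWorldMap) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```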
So once AR session is started
with ARWorldMap, it will follow
the exact same behavior of
relocalization that we
introduced in iOS 11.3.
It is important for your experience that relocalization works reliably, so it is important to acquire good world maps.
Note that you can call getCurrentWorldMap at any point in time.
It's important to scan your physical space from multiple points of view, so the tracking system can really learn the physical structure of the environment.
The environment should be static and well textured, so we can extract more features from it and learn more about the environment.
And also, it's important to have dense feature points on the map, so it can reliably relocalize.
But you don't have to worry about all those points.
The ARKit API really makes things easier for you by giving you a WorldMappingStatus on ARFrame.
It is updated with every ARFrame and can be retrieved from the worldMappingStatus property.
So let's see how this works.
When we start world tracking, the WorldMappingStatus will be notAvailable.
As soon as we start scanning the physical space, it will be limited.
As we keep moving through the physical world, world tracking will continue to extend the map.
And once we have scanned enough of the physical world from the current point of view, the WorldMappingStatus will be mapped.
Note that if you point away from a mapped physical space, the WorldMappingStatus may go back to limited, and it will start to learn about the new environment that you are now seeing.
So how can you use WorldMappingStatus in your application code?
Let's say you have an app that lets you share your world map with another user, and you have a Share Map button in your user interface.
It's good practice to disable this button when the WorldMappingStatus is notAvailable or limited.
And when the WorldMappingStatus is extending, you may want to show an activity indicator in the UI.
This encourages your end user to keep moving in the physical world, scanning it and extending the map, because you need that for relocalization.
Once the WorldMappingStatus is mapped, you can enable your Share Map button and hide your activity indicator, letting your user share the map.
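That UI logic might be sketched like this (assuming a view controller that is the session delegate; shareMapButton and activityIndicator are hypothetical outlets):

```swift
import ARKit

// Sketch: drive the share UI from the world mapping status on every frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    switch frame.worldMappingStatus {
    case .notAvailable, .limited:
        shareMapButton.isEnabled = false
        activityIndicator.isHidden = true
    case .extending:
        shareMapButton.isEnabled = false
        activityIndicator.isHidden = false   // encourage the user to keep scanning
    case .mapped:
        shareMapButton.isEnabled = true
        activityIndicator.isHidden = true
    default:
        break
    }
}
```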
So let's see a demo of saving
and loading World Map.
[ Applause ]
Okay. So can we switch to AR 1.
Okay. So for this demo, I have
two apps.
In one app I will retrieve and
save World Map to a local file,
and in my second app, I will
load the same World Map to
restore the same augmented
reality experience.
So let's start.
So as you can see, the WorldMappingStatus is in the top-right corner.
It was notAvailable.
As soon as I start to move in the environment, it is now extending my world map.
If I continue to move around and map this environment, the WorldMappingStatus will go to mapped.
So it means that it has seen
enough features from this point
of view for relocalization to
work.
So it is a good time to retrieve
and serialize World Map object.
But let's make this augmented
reality scene more interesting
by placing a custom anchor.
So through hit testing, I have created a custom anchor, and I am overlaying this object, which is basically an old TV that I think most of you may have seen in the past.
So of course I can still
continue mapping the world, and
let's save the World Map.
When I saved my world map, I could also show the raw feature points that belong to this world map.
Those blue dots that you see are all part of my world map.
And also, as a good practice, I saved a screenshot of the point of view from which I saved the world map.
So now we have serialized World
Map to our file.
We can now restore the same
augmented reality experience in
another app.
So let's try that.
I will start this app from a
different position, and you can
see this is my world origin.
It is defined on this side of the table, and my world tracking is now in the relocalizing state.
So this is the same relocalization behavior that we introduced in iOS 11.3.
So let me point my device to the
physical place where I created
World Map.
So as soon as I point to that
same space, it restored my world
origin back to where it was, and
at the same time it also
restored my custom anchor.
So I have the exact same AR
experience.
[ Applause ]
Thank you.
So now note that I can start this app as many times as I want, and it will show me the same experience every time I start.
So this is persistence.
And of course, this can be
shared with another device.
So back to slides.
So this was saving and loading maps.
It's a powerful new feature in
ARKit 2 that enables persistence
and multiuser shared
experiences.
In ARKit 2, we have faster
initialization and plane
detection.
World tracking is now more
robust, and we can detect planes
in more difficult environments.
Both horizontal and vertical
planes have more accurate extent
and boundaries, so it means that
you can accurately place your
objects in the scene.
In iOS 11.3, we introduced
continuous autofocus for your
augmented reality experiences.
iOS 12 comes with even more optimizations specifically for augmented reality experiences.
We are also introducing a 4 by 3 video format in ARKit.
Four by three is a wide-angle video format that greatly enhances your visualization on iPad, because iPads also have a 4 by 3 display aspect ratio.
Note that the 4 by 3 video format will be the default video format in ARKit 2.
So all of these enhancements will be applied to all existing apps in the App Store, except the 4 by 3 video format.
For that, you will have to build your app with the new SDK.
So coming back to improving
end-user experience, we are
introducing environment
texturing.
So this greatly enhances your
rendering for end-user
experiences.
So let's say your designer has worked really hard to create this virtual object for your augmented reality scene.
It looks really great, but you need to do more for your augmented reality scene.
You need to get the position and orientation correct in your AR scene, so that the object really looks like it is placed in the real world.
It is also important to get the
scale right so your object is
not too big or not too small.
So ARKit helps you by giving you
the correct transform in world
tracking.
For realistic rendering, it is
important to also consider
lighting in the environment.
ARKit gives you an ambient light estimate that you can use in your rendering to correct the brightness of your objects.
So your objects don't look too
bright or too dark.
They just blend into the
environment.
If you are placing your objects
on physical surfaces such as
horizontal planes, it is also
important to add a shadow for
the object.
This greatly improves human visual perception; people can really perceive that the object is on the surface.
And last, in the case of reflective objects, people expect to see a reflection of the environment on the surface of the virtual objects.
So this is what environment
texturing enables.
So let's see what this object looks like in an augmented reality scene.
So I created this scene
yesterday evening when I was
preparing for this presentation.
So while eating those fruits, I
also wanted to place this
virtual object.
And you can see it is correct to scale, and, more importantly, you can see a reflection of the environment in the object.
On the right side of this object, you can see the yellow and orange reflection of those fruits on the right; on the left, you can notice the green texture from the leaves; and in the middle, you can also see the reflection of the surface of the bench.
So this is enabled by
environment texturing in ARKit
2.
Thank you.
[ Applause ]
So environment texturing gathers
scene texture information.
Usually it is represented as a
cube map, but there are other
representations as well.
The environment texture, or this cube map, can be used as a reflection probe in your rendering engine.
The reflection probe can apply this texture information onto virtual objects, such as the one we saw on the last slide.
So it greatly improves
visualization of reflective
objects.
So let's see how this works in
this short video clip.
So ARKit, while running world
tracking and scene
understanding, continues to
learn more about the
environment.
Using computer vision, it can extract texture information and start to fill this cube map.
And this cube map is accurately
placed in the scene.
Note that this cube map is
partially filled, and to set up
reflection probes, we need to
have a fully completed cube map.
To have a fully completed cube
map, you will need to scan your
full physical space, something
like a 360-degree scan you do
with your panoramas.
But this is not practical for
end-users.
So ARKit makes it very easy for
you by automatically completing
this cube map using advanced
machine learning algorithms.
[ Applause ]
Note also that all of this
processing happens locally on
your device in real-time.
So once we have a cube map, we can set up the reflection probe, and as soon as we place virtual objects in the scene, they start to reflect the real environment.
So this was a quick overview of
how this environment texturing
process works.
Let's see how ARKit API makes it
very easy for you to enable this
feature.
All you have to do is set the environmentTexturing property on your world tracking configuration to automatic and run the session.
It's as simple as that.
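In code, that is roughly the following (a minimal sketch; sceneView is assumed to be your ARSCNView):

```swift
import ARKit

// Enable automatic environment texturing on a world tracking session.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)
```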
[ Applause ]
The AR session will automatically run this environment texturing process in the background and will give you the environment texture as an environment probe anchor.
AREnvironmentProbeAnchor is an extension of ARAnchor, so it has a six-degrees-of-freedom position and orientation transform.
Moreover, it has a cube map in the form of a Metal texture.
ARKit also gives you the physical extent of the cube map.
This is the area of influence of the reflection probe, and it can be used by rendering engines to correct for parallax.
So, for example, if your object is moving in the scene, it will automatically adapt to its new position, and the new texture will be reflected on the object.
Note that this follows the same lifecycle as every other anchor, such as ARPlaneAnchor or ARImageAnchor.
Furthermore, it is fully
integrated into ARSCNView.
So in case you are using
SceneKit as your rendering
technology, you just need to
enable this feature in world
tracking configuration.
The rest is done automatically
by ARSCNView.
Note that for advanced use
cases, you may want to place
environment probe anchors
manually in the scene.
For this, you will need to set the environment texturing mode to manual, and then you can create environment probe anchors at your desired position and orientation and add them to the AR session object.
Note that this only lets you place the probe anchors in the scene; the AR session will automatically update their textures as soon as it gets more information about the environment.
You may use this mode, for example, when your augmented reality scene has a single object and you don't want to overload the system with too many environment probe anchors.
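A hedged sketch of the manual mode might look like this; the probe's transform and extent are placeholder values for wherever your single object sits:

```swift
import ARKit
import simd

// Sketch: manual environment texturing with a single, hand-placed probe.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .manual
sceneView.session.run(configuration)

let probeTransform = matrix_identity_float4x4               // example pose
let probeAnchor = AREnvironmentProbeAnchor(transform: probeTransform,
                                            extent: simd_float3(1, 1, 1))
sceneView.session.add(anchor: probeAnchor)                  // ARKit fills in its texture
```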
So let's see a quick demo of
environment texturing and see
how we can realistically render
augmented reality scene.
[ Applause ]
So we can switch to AR 1.
Okay. So for this demo, I am running a world tracking configuration without the environment texturing feature enabled.
As you can see on the switch control at the bottom, it's just using the ambient light estimate.
And let's place the same object
that we have seen before.
And you can see it looks okay.
I mean, you can see it is on the table.
You can see the shadow of it,
and it looks like a really good
AR scene.
But what we are missing is that it does not reflect the wooden surface of the table.
Moreover, if I place something in the scene, such as real fruit, we don't see a reflection of it in the virtual object.
So let's enable environment
texturing and see how it can
realistically represent this
texture.
So as you can see, as soon as I
enable environment texturing,
the object started to reflect
the wooden surface of the table
as well as the texture from this
banana.
[ Applause ]
Thank you.
So this greatly enhances your augmented reality scene.
So it looks as real as possible,
as if it is really on the table.
Okay. Back to slides.
So this was environment
texturing.
It's a powerful new feature in ARKit 2 that lets you make your augmented reality scene as realistic as possible.
Now, to continue with the rest
of the great new features, I
will invite Reinhard on stage.
[ Applause ]
>> It's working?
Oh, okay, great.
Good morning.
My name is Reinhard, and I'm an
engineer on the ARKit team.
So next let's talk about image
tracking.
In iOS 11.3, we introduced image
detection as part of world
tracking.
Image detection searches for
known 2D images in the scene.
The term detection here implies
that these images are static and
are therefore not supposed to
move.
Great examples for such images
could be movie posters or
paintings in a museum.
ARKit will estimate the position
and orientation of such an image
in six degrees of freedom once
an image has been detected.
This pose can be used to trigger
content in your rendered scene.
As I mentioned earlier, all this is fully integrated into world tracking, so all you need to do is set one property to set it up.
In order to load images to be used for image detection, you may load them from file or use Xcode's asset catalog, which also gives you the detection quality for an image.
So image detection is already
great, but now in iOS 12, we can
do better.
So let's talk about image
tracking.
Image tracking is an extension of image detection, with the big advantage that images no longer need to be static and may move.
ARKit will now estimate the
position and orientation for
every frame at 60 frames per
second.
This allows you to accurately
augment 2D images, say
magazines, board games, or
pretty much anything that
features a real image.
And ARKit can also track
multiple images simultaneously.
By default it only tracks one.
In some cases, for example the cover of a magazine, you may want to keep this set to 1; in the case of a double-page spread inside a magazine, you want to set this to 2.
And in ARKit 2 on iOS 12, we have a brand-new configuration called ARImageTrackingConfiguration that lets you do stand-alone image tracking.
So let's see how to set it up.
We start by loading a set of
reference images, either from
file or from the asset catalog.
Once I'm done loading such a set of reference images, I use it to set up my configuration, which can be of type ARWorldTrackingConfiguration, by specifying its detectionImages property, or of type ARImageTrackingConfiguration, by specifying its trackingImages property.
Once I'm done setting up my configuration, I use it to run my session.
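As a rough sketch, loading those reference images could look like this (the group name "AR Resources", the file name, and the physical width are example values):

```swift
import ARKit
import UIKit

// Sketch: reference images from the asset catalog...
let catalogImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                     bundle: nil) ?? []

// ...or from a file, where you supply the physical width in meters yourself.
var fileImages = Set<ARReferenceImage>()
if let url = Bundle.main.url(forResource: "poster", withExtension: "png"),
   let cgImage = UIImage(contentsOfFile: url.path)?.cgImage {
    fileImages.insert(ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.3))
}
```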
And just as usual, once the
session is running, I'll get an
ARFrame at every update.
And such an ARFrame will contain an object of type ARImageAnchor once an image has been detected.
Such an ARImageAnchor is now a trackable object; you can see this from its conformance to the ARTrackable protocol.
This means it comes with a Boolean, isTracked, which informs you about the tracking state of the image: it's true if the image is tracked and false otherwise.
It also tells me which image has been detected and where it is, by giving me its position and orientation as a 4 by 4 matrix.
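A small sketch of reacting to that tracking state in the session delegate (the print statements are just placeholders for your own content updates):

```swift
import ARKit

// Sketch: inspect image anchors as they are updated by the session.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        let name = imageAnchor.referenceImage.name ?? "unnamed image"
        if imageAnchor.isTracked {
            // Currently visible: the transform is refreshed every frame.
            print("\(name) tracked at \(imageAnchor.transform)")
        } else {
            // Out of view: consider hiding the content attached to this anchor.
            print("\(name) is no longer tracked")
        }
    }
}
```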
So in order to get such image
anchors, it all starts by
loading images.
So that's good.
Let's have a look at what good
images could be.
This image here could be found
in a book for children, and in
fact, it works great for image
tracking.
It has a lot of distinct visual
features.
It's well textured and shows really good contrast.
On the other hand, an image like
this, which could also be found
in a textbook for kids, is not
recommended.
It has a lot of repetitive
structures, uniform color
regions, and a pretty narrow
histogram once converted to gray
scale.
But you don't have to identify
these statistics yourself as
Xcode is here to help.
So if I import these two images into Xcode, I see the sea life one without any warning, which means it's recommended, and the one with the three kids reading shows a warning icon, meaning it's not recommended.
If I click this icon, I get a precise description of why this image is not recommended for image tracking: I get information about the uniform color regions as well as the histogram.
So once I'm done loading images,
I'm left with two choices of
configurations.
First one is
ARWorldTrackingConfiguration.
So let's talk about that.
When we use image tracking with world tracking, image anchors are represented in the world coordinate system.
This means that image anchors, optionally plane anchors, the camera, and the world origin itself all appear in the same coordinate system.
This makes their interaction very easy and intuitive, and what's new in iOS 12 is that images that could previously only be detected can now be tracked.
And we have a new configuration,
the
ARImageTrackingConfiguration,
which performs stand-alone image
tracking.
This means it's independent from
world tracking and does not rely
on the motion sensor to perform
the tracking.
This means this configuration does not need to initialize before it starts to identify images, and it can also succeed in scenarios in which world tracking fails, such as on a moving platform like an elevator or a train.
Again, in this case ARKit will estimate the position and orientation for every frame at 60 frames per second.
And implementing this can be
done in four simple lines of
code.
All you need to do is this: I create a configuration of type ARImageTrackingConfiguration and specify a set of images I'd like to track.
In this case, I specified a photo of a cat, a dog, and a bird.
I tell the configuration how many images I'd like to track; in this case, I set this to 2, because in my use case I imagine only 2 images will interact at the same time, not 3.
Note that if I'm tracking 2 images and a third comes into view, it won't be tracked, but it will still get a detection update.
And then I use this
configuration to run my session.
And as I mentioned earlier, you
can also do this using world
tracking by simply switching out
these two lines.
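Those four lines might look roughly like this; the resource group name "Animals" is an example, and sceneView is assumed to be your ARSCNView:

```swift
import ARKit

// Stand-alone image tracking in four lines.
let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "Animals",
                                                                bundle: nil) ?? []
configuration.maximumNumberOfTrackedImages = 2
sceneView.session.run(configuration)

// With world tracking instead, you would swap the first two lines for
// ARWorldTrackingConfiguration and its detectionImages property.
```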
The only difference between image detection and image tracking is the maximum number of tracked images.
So if you have an app that uses image detection, you can simply add this line, recompile, and your app will use tracking.
So in order to show you how easy
this really is, let's do a demo
in Xcode.
[ Applause ]
Can we go to AR 2?
Yes. So for this demo, I'd like to build an AR photo frame, and for this, I brought a photo of my cat from home.
So let's build this using Xcode.
So I started by creating an iOS app from a template in Xcode.
As you can see, for now it's pretty empty.
Next, I need to specify which image I'd like to track.
For this, I imported the photo of my cat, Daisy.
Let's open her up here.
That's my cat.
I need to specify a name.
I give it the name Daisy, which
is the name of my cat, and I
specify here the physical width
of the image found in the real
world, which is my photo frame.
I also loaded a movie of my cat.
So let's bring this all
together.
First, I will create a
configuration, which will be a
configuration of type
ARImageTrackingConfiguration.
I load a set of tracking images
from the asset catalog by using
the group name photos.
This will contain only one
image, which is the photo of my
cat, Daisy.
Next, I set up the configuration for image tracking by specifying the trackingImages property here, and I specify the maximum number of tracked images that we want.
At this point, the app will
already start an AR session and
provide you with image anchors
once an image has been detected.
But let's add some content.
I will load the video by creating an AVPlayer with the video from the app's resources.
Now let's add it on top of the real image.
For this, I check whether the anchor is of type ARImageAnchor, and I create an SCNPlane with the same physical dimensions as the image found in the scene.
I assign the video player as the texture of my plane, and I start playing the video.
I create an SCNNode from the geometry and counter-rotate it to match the anchor's coordinate system.
So that's it.
This will run.
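The delegate code described above might be sketched like this (assuming an ARSCNViewDelegate; the video file name "daisy.mp4" is a placeholder):

```swift
import ARKit
import SceneKit
import AVFoundation

// Sketch: overlay the cat video on top of the detected photo frame.
let player = AVPlayer(url: Bundle.main.url(forResource: "daisy", withExtension: "mp4")!)

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let imageAnchor = anchor as? ARImageAnchor else { return nil }

    // A plane with the same physical size as the image found in the scene.
    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    plane.firstMaterial?.diffuse.contents = player
    player.play()

    // Counter-rotate the plane so it lies flat in the image anchor's coordinate system.
    let node = SCNNode(geometry: plane)
    node.eulerAngles.x = -.pi / 2
    return node
}
```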
Let's see it live.
So once I bring the frame of my
cat into the camera's view, the
video starts playing, and I can
see my cat interact.
[ Applause ]
Since ARKit estimates position
in real-time, I can move my
device freely, or I can move the
object.
So I can really see that there's
an update at every frame.
Oh, oh, she just left.
I guess it's the end of the
demo, let's go back to the
slides.
[ Applause ]
So as you can see, it's really
easy to use image tracking in
ARKit.
In fact, it's much harder to
make a video of your cat.
So image tracking is great for interacting with 2D objects, but we're not limited to planar 2D objects, so let's talk next about object detection.
Object detection can be used to
detect known 3D objects in the
scene.
Just like with image detection, the term detection here means that the object needs to be static and should therefore not move.
Great examples of such objects
could be exhibits in a museum,
certain toys, or household
items.
Unlike image detection, objects need to be scanned first, using an iOS app running ARKit.
For this, we offer the full source code of a full-featured iOS app that allows you to scan your own 3D objects.
Such objects need to have a few properties: they need to be well textured, rigid, and nonreflective, and they need to be roughly tabletop sized.
ARKit can be used to estimate
the position and orientation of
such objects in six degrees of
freedom.
And all of this is fully
integrated into world tracking.
So all you need to do is set one
single property to get started
with object detection.
So let's have a look how it can
be set up.
I load a set of AR reference objects, from file or from the Xcode asset catalog.
I will talk about reference objects in a second.
Once I'm done loading these reference objects, I use them to set up my configuration of type ARWorldTrackingConfiguration by specifying the detectionObjects property.
When I'm done setting up my
configuration, again I run my
session with it.
And just as with image detection, once the AR session is running, I get an ARFrame with every update, and in this case, once an object has been detected in the scene, I will find an ARObjectAnchor as part of my ARFrame.
Such an ARObjectAnchor is a simple subclass of ARAnchor.
It comes with a transform, which represents its position and orientation in six degrees of freedom, and it tells me which object has been detected by giving me a reference to the ARReferenceObject.
And implementing this can be
done with three simple lines of
code.
I create a configuration of type
ARWorldTrackingConfiguration and
specify a set of objects I'd
like to detect.
In this case, I envision building a simple AR museum app by detecting an ancient bust and a clay pot.
And I use this to run my
session.
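In code, a hedged version of those three lines (the resource group name "Museum" is an example, and sceneView is assumed to be your ARSCNView):

```swift
import ARKit

// Object detection in three lines.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionObjects = ARReferenceObject.referenceObjects(inGroupNamed: "Museum",
                                                                    bundle: nil) ?? []
sceneView.session.run(configuration)
```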
So in fact, at the office, we built a very simple AR museum app, so let's have a look.
Once this bust gets into the view of my iOS app, I get a six-degrees-of-freedom pose and can use it to show, in the scene, very simple infographics floating above the statue.
In this case, we have simply added the date of birth and the name of this Egyptian queen, which was Nefertiti, but you could add any content that your rendering engine allows you to use.
In order to build this app, I
had to scan the object first.
So let's talk about object
scanning.
Object scanning extracts
accumulated scene information
from the world.
This is very much related to
plane estimation in which we use
accumulated scene information to
estimate the position of a
horizontal or vertical plane.
In this case, we use this
information to gather
information about the 3D object.
In order to specify in which area to look for the object, I just specify a transform, an extent, and a center.
This is essentially a bounding box around the object, just to define where it is in the scene.
Extracted objects are fully
supported by Xcode's asset
catalog, so it makes it really
easy to port them to a new app
and reuse them as many times as
you want.
And for scanning, we added a new
configuration, the
ARObjectScanningConfiguration.
But you do not need to go ahead and implement your own scanning app, as the full sample code is available for a full-featured scanning app called Scanning and Detecting 3D Objects.
So let's have a look how this
app works.
I start by creating a bounding
box around the object of
interest, in this case, the
statue of Nefertiti.
Note that the bounding box does not need to be really tight around the object.
All we care about is that the most important feature points are within its bounds.
When I'm satisfied with the bounding box, I can press Scan, and we start scanning the object.
I can see the progress going up, and this tile representation indicates, in a spatial manner, how much of the object has been scanned.
Note that you do not have to
scan the object from all sides.
For example, if you know that a
statue will be facing a wall in
a museum, and there is no way
that you could detect it from
one specific viewpoint, you do
not need to scan it from that
side.
Once you're satisfied with the
scan, you can adjust the center
of the extent which corresponds
to the origin of the object.
The only requirement here is
that the center stays within the
object's extent.
And lastly, the scanning app
lets you perform detection
tests.
So in this case, detection was
successful from various
viewpoints, which means it's a
good scan.
And our recommendation here is also to move the object to a different location, to test whether detection works with a different background texture and under different lighting conditions.
Once you're done scanning, you
will obtain an object of type
ARReferenceObject, which we have
seen earlier in the diagram.
This object can be serialized, usually to a file with the .arobject extension.
It has a name, which will also be visible in your asset catalog, as well as the center and the extent used for scanning it.
And you will also get all the raw feature points found within the area when you performed your scan.
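Exporting such a scanned object to an .arobject file might be sketched like this (the destination directory and file name are examples):

```swift
import ARKit

// Sketch: write a scanned ARReferenceObject to an .arobject file so it can be
// imported into an Xcode asset catalog or loaded by another app.
func export(referenceObject: ARReferenceObject, named name: String) throws -> URL {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(name)
        .appendingPathExtension("arobject")
    try referenceObject.export(to: url, previewImage: nil)
    return url
}
```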
So this was object detection.
Keep in mind that before detecting objects, you need to scan them, but the full source code is available for you to download right now.
So let's talk next about face
tracking.
When we released the iPhone X
last year, we added robust face
detection and tracking to ARKit.
Here, ARKit estimates the
position and orientation of a
face for every frame at 60
frames per second.
This pose can be used to augment a user's face by adding masks or hats, or by replacing the full texture of the face.
ARKit also provides you with a fitted triangle mesh in the form of ARFaceGeometry.
This type, ARFaceGeometry, contains all the information needed to render the facial mesh, and it comes in the form of vertices, triangles, as well as texture coordinates.
The main anchor type of face
tracking is ARFaceAnchor, which
contains all information needed
to perform face tracking.
And in order to render such geometry realistically, we added a directional light estimate.
Here, ARKit uses your face as a light probe and estimates an ARDirectionalLightEstimate, which consists of the light intensity, direction, as well as the color temperature.
This estimate will be sufficient to make most apps look great already, but if your app has more sophisticated needs, we also provide second-degree spherical harmonics coefficients that capture the lighting conditions throughout the entire scene, so you can make your content look even better.
And ARKit can also track expressions in real time.
These expressions are so-called blend shapes, and there are over 50 of them.
Each blend shape assumes a value between 0 and 1: one means full activation, zero means none.
For example, the jawOpen coefficient will assume a value close to 1 if I open my mouth and a value close to 0 if I close it.
And this is great for animating your own virtual character.
In this example here, I've used jawOpen, eyeBlinkLeft, and eyeBlinkRight to animate this really simple box-face character.
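Reading those coefficients from a face anchor might look roughly like this (a minimal sketch in the session delegate; mapping the values onto a character rig is left out):

```swift
import ARKit

// Sketch: read a few blend shape coefficients to drive a simple character.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let faceAnchor as ARFaceAnchor in anchors {
        let blendShapes = faceAnchor.blendShapes
        let jawOpen = blendShapes[.jawOpen]?.floatValue ?? 0
        let blinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
        let blinkRight = blendShapes[.eyeBlinkRight]?.floatValue ?? 0
        // Each value is between 0 (no activation) and 1 (full activation).
        print("jawOpen: \(jawOpen), blinkLeft: \(blinkLeft), blinkRight: \(blinkRight)")
    }
}
```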
But you can do better than that.
In fact, when we built Animoji, we used a handful more of these blend shapes.
All the blue bars you see moving here, together with the head pose, were used to map my facial expressions onto the panda bear.
Note that ARKit offers
everything needed for you to
animate your own character just
like we did with Animoji.
Thank you.
[ Applause ]
So let's see what's new for face
tracking in ARKit 2.
We added gaze tracking that will
track the left and the right eye
both in six degrees of freedom.
[ Applause ]
You will find these properties as members of ARFaceAnchor, as well as a look-at point, which corresponds to the intersection of the two gaze directions.
You may use this information to, again, animate your own character, or as any other form of input to your app.
And there's more.
We added support for the tongue, which comes in the form of a new blend shape.
This blend shape will assume a value of 1 if my tongue is out and 0 if not.
Again, you could use this to
animate your own character or
use this as a form of input to
your app.
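A small sketch of reading the new gaze and tongue values from ARFaceAnchor (again in the session delegate; how you use them is up to your app):

```swift
import ARKit

// Sketch: the new eye transforms, look-at point, and tongue blend shape in ARKit 2.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let faceAnchor as ARFaceAnchor in anchors {
        let leftEye = faceAnchor.leftEyeTransform      // 6DOF pose of the left eye
        let rightEye = faceAnchor.rightEyeTransform    // 6DOF pose of the right eye
        let lookAt = faceAnchor.lookAtPoint            // intersection of the gaze directions
        let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
        print("lookAt: \(lookAt), tongueOut: \(tongueOut)")
        _ = (leftEye, rightEye)                        // use these to animate your character
    }
}
```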
Thank you.
[ Applause ]
So, seeing myself sticking my tongue out over and over means it's a good time for a summary.
So, what's new in ARKit 2.
Let's have a look.
We've seen saving and loading
maps, which are powerful new
features for persistence and
multiuser collaboration.
World tracking enhancements bring better and faster plane detection as well as a new video format.
And with environment texturing, we can make the content look as if it was really in the scene, by gathering the texture of the scene and applying it to the virtual object.
And with image tracking, we are now able to track 2D objects in the form of images.
But ARKit can also detect 3D
objects.
And for face tracking, we have
gaze and tongue.
All of this is made available
for you in the form of the
building blocks of ARKit.
In iOS 12, ARKit features five
different configurations with
two new additions, the
ARImageTrackingConfiguration for
stand-alone image tracking and
the
ARObjectScanningConfiguration.
And there's a series of supplementary types used to interact with the AR session, the ARFrame and the ARCamera, for example.
This group got two new additions: the ARReferenceObject for object detection and the ARWorldMap for persistence and multiuser experiences.
And the AR anchors, which represent positions in the real world, also got two new additions: the ARObjectAnchor and the AREnvironmentProbeAnchor.
I'm really excited to see what you will build with all these building blocks of ARKit, available in iOS 12 as of today.
[ Applause ]
There's another really cool session about integrating AR Quick Look into your own application to make your content look simply great.
With this, thanks a lot, and
enjoy the rest of the
conference.
[ Applause ]