Transcript
>> NICK PORCINO:
Hello, everybody.
Welcome to managing 3D
Assets with Model I/O.
I'm Nick Porcino in the image,
media, and graphics group.
Today, I'm really excited
to introduce the new Model
I/O framework to you.
We are raising the bar on --
or actually we are enabling you
to raise the bar on interactive
and realistic graphics.
Up until now, the graphics
frameworks and hardware
that you've been dealing
with come from an age
when hardware was a lot
more limited than it is now.
The thing you have in your
pocket is absolutely amazing.
Now we've got new frameworks that
enable you to get the power
out to your users, but you
have to feed that framework
with really good looking stuff.
So in the new world of
really high performance
and low overhead graphics APIs,
you need a way to get things
that are realistic, and realistic
means physically derived
and physically based things.
And so there's a lot of research
and knowledge that you need
in order to pull that off.
You need to understand
the physics of cameras,
the physics of materials,
the physics of light
and you can certainly go out
and read all of that stuff
and I encourage you to do so.
And implementing each one
of those things requires a
fair amount of heavy work,
a fair amount of reading and
then a whole lot of integration.
So what we have done with
Model I/O is we've gone out,
we have done a whole
bunch of research,
we have done a whole
bunch of integration,
and we've prepared a unified
set of data structures
to describe these things
in a consistent way
and an easy to use API.
So as it says, Model
I/O is a framework
for handling 3D assets and data.
So at its most basic level,
you can use Model I/O to bring
in common file formats
and export them.
You can describe in a physically
realistic way lighting,
materials, environments.
You can get assets and
art work from your artists
into Model I/O, do some
interesting processes
which we'll get into
as the talk progresses.
And there's a lot of
tools now that you can get
that are focused
on physically-based
rendering and materials.
And Model I/O gives you
access to those things
in your own pipelines.
Model I/O is integrated into
Xcode 7 and the GameKit APIs;
it's on iOS 9 and OS X 10.11.
So in a nutshell, the
big green box there is
where Model I/O fits
in your framework
or in your application.
We start in the content
creation tool.
Your artist does some great work
and you import it
into Model I/O.
Model I/O then does the annoying
and tedious and error-prone step
of creating buffers for various
frameworks to render quickly.
So those frameworks
that we're supporting
out of the box are
SceneKit, Metal, and OpenGL.
Now Model I/O doesn't just
load files and save them.
It also lets you
perform useful operations
that are time consuming
or whatever,
that serve to improve
the look of your assets,
and you can take an asset.
You can do one of these baking
operations which we'll get into,
bring it back into the unified
representation and Model I/O
and get it off to your
hardware ready buffers.
Also, you can just complete the
loop here, export your asset,
out of Model I/O after
you did something exotic
that Model I/O provides for
you, send it all the way back
to the content creation
tool, let the artist noodle
around on it and perhaps
add some, you know,
special sweetness or whatever,
and then you can just
keep iterating the cycle
until you achieve the
look and level of quality
that you need for your app.
So what we're going to talk
about today is what
are the features?
What are the data types?
Where does the physical
motivation
for those data types come from?
And we're going to
talk about support
for various geometrical
features and voxels
and some advanced lighting
and our baking steps.
All right.
So here we go, bread and
butter, reading things in.
So import formats,
we start with some
of the most commonly
used formats.
Alembic is a very high
performance format and it comes
from the film industry
and is heavily used
in games now as well.
And it is the most modern of
the formats listed up here.
It includes information on
animation and material bindings
and all kinds of
interesting things.
Polygon is the standard polygon
format that you are going to get
out of a lot of academic
research.
Triangles are commonly
originated and ingested by CAD
and CAM applications
and Wavefront .obj
is universally
available everywhere.
And for exporting at the
moment, you can export
out to your CAD/CAM
stuff and (inaudible).
So importing is that easy.
You get an MDLAsset,
by initWithURL.
And you can send it back out
somewhere with exportAssetToURL.
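As a rough Swift sketch of those two calls (using current SDK spellings, which differ slightly from the selector names on the slide; the file paths are hypothetical):

```swift
import ModelIO

// Load everything in the file into one container object.
let asset = MDLAsset(url: URL(fileURLWithPath: "excavator.abc"))

// ... work with the asset ...

// Write it back out; the exporter is chosen by file extension.
let exported = asset.export(to: URL(fileURLWithPath: "excavator.obj"))
```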
So just a few words
about physical realism
without getting super detailed.
Lights, historically have had
like a position, and, you know,
a cone angle and falloff
and some other physically
unrealistic parameters
that are just mathematically
suited to the way hardware
and graphics APIs used to be.
What we are providing here
is access to IES profiles.
Now, if you go to the
hardware store and find, like,
a light fixture that
you really like.
Like, say I really like
that light up there
and I found it in the store.
I can go to the manufacturers
website after reading the label
on the side and I can
find an IES profile file
which is a data file, where they
have done measurements all the
way around the light
to get the irradiance
from every different angle.
We read that into a so-called
light web, which is a set
of data that's pretty easy
to load up into a shader.
So if you want, you can have
physically motivated lights
that match real world
lights in your shaders.
Now, when you buy a light
at the hardware store,
the light isn't specified
as an RGB value or whatever.
It's specified as
a temperature like,
you know 4,000K or whatever.
So you can specify these lights
in degrees Kelvin as well.
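Sketched in Swift, those two ideas look roughly like this (current API names; the profile file is hypothetical):

```swift
import ModelIO

// A light built from a manufacturer's IES measurement file.
let profileURL = URL(fileURLWithPath: "fixture.ies")   // hypothetical file
let fixture = MDLPhotometricLight(iesProfile: profileURL)

// Or specify a physically plausible light the way the box in the
// hardware store does: by color temperature and brightness.
let bulb = MDLPhysicallyPlausibleLight()
bulb.setColorByTemperature(4000)   // 4,000 K
bulb.lumens = 800
```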
We also provide image based
lighting which is in play
on the excavator there.
The materials, historically,
everything has been Lambertian,
which means fall off
according to angle,
and with the Blinn-Phong
specular
which means a plastic-looking
shiny highlight.
We also provide you
with a baseline physical
bidirectional reflectance
distribution function, or BRDF, which
is what you really need
if you want real
world materials.
And once again, that excavator
has got a physical BRDF on it.
Cameras, historically
have been pinhole cameras.
We are describing them all the
way from the lens to the sensor
and we are providing
you some utilities
for processing environments
from photographs and
procedural skies.
Now, you are going to see
tools in Xcode to do baking,
and what I'm telling you here
on this slide is that the tools
that you have in Xcode
to perform these operations are
available to you in Model I/O
through the framework.
So if you want to make
your own offline pipeline
to bake your own assets
from beginning to end, in
giant batches or on a farm,
all of those things
are available
through the frameworks API.
We have introduced voxels.
So you can take a big old mesh
and turn it into giant cache
of indexes that you can
associate your own data with.
More on that later.
Once again, it's
very straightforward,
create your voxels from
an asset and find voxels
in a particular region.
You can do constructive
solid geometry
and you can turn the voxels
back into a mesh using some kind
of smoothing algorithm.
So system integration, Model
I/O is integrated directly
into SceneKit and it's
utilized in MetalKit and GLKit.
Model I/O is used to do preview
in the Finder and in Quick Look.
So you can go find an Alembic
.abc cache hit the space bar
and it'll pop up in the Finder
and you can tumble your asset
and preview it without
even opening up any type
of auditioning environment.
You can edit assets
in Xcode.
The excavator here is loaded
in Xcode in this picture.
And Model I/O works in
Playgrounds and with Swift.
All right.
Down to the nitty-gritty
of data types.
So MDLAsset is the thing
that you get from a URL
and it's the big
overall container
that you will often
be working with.
It's an indexed container, you
know, for fast enumeration
and stuff, it has transform
hierarchies in it, meshes,
cameras and lights and you
can make them using the API
or loading from a URL.
So a typical asset that
you might pull out of
like a big old Alembic file,
is going to have a whole bunch
of interesting things in it.
This particular example here
has got a couple of cameras
and a light, a root
transform, the blue box
over on the right there.
And underneath it, are
the bits of the excavator
and the orange boxes
just indicate that, yes,
you can indicate your
material assignments
and things like that as well.
So that's what's in an asset.
It's all the stuff that
logically belongs together
and you will get
a bunch of assets
and compose them into a scene.
So an asset has allocators
in case you need
to manage memory yourself,
you can add an allocator.
It has descriptors
which tell you about the things
that are inside of the asset.
There's the import
and export facilities
and a bunch of children.
And the children are MDLObjects.
MDLObjects themselves
can comprise a hierarchy.
Now, a typical scene
graph, of course,
has transformation hierarchies.
And so an MDLObject has
got a transform component.
We are not implementing
the transform in, you know,
the standard way of
putting a matrix everywhere.
We actually have a
transformation component
and components are
very interesting,
because it allows us
to make scene graphs
that aren't limited just
to the parent of or child
of type relationships.
Instead you can define
your own components.
Now, I guess I should have
mentioned that the nice thing is
that the API is designed
that you can write your own
importers and exporters.
You can write -- if you have
your own custom file format,
you can implement that
and so back to this,
I've got a custom component
which is a TriggerComponent
like a volume your character
enters a region and some sort
of action should occur.
You can just make that yourself
and define what the behavior is
and what it connects to,
the API lets you do that.
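To make that concrete, here is a minimal Swift sketch; the TriggerComponent protocol and its contents are invented for illustration, while setComponent(_:for:) and componentConforming(to:) are the actual MDLObject hooks:

```swift
import ModelIO

// MDLComponent is an empty Objective-C protocol, so any
// NSObject-based type can act as a component.
@objc protocol TriggerComponent: MDLComponent {
    var radius: Float { get }   // enter this region, fire an action
}

class ProximityTrigger: NSObject, TriggerComponent {
    let radius: Float = 2.0
}

let object = MDLObject()
object.setComponent(ProximityTrigger(), for: TriggerComponent.self)

// Anything walking the scene graph can later ask for it back.
let trigger = object.componentConforming(to: TriggerComponent.self)
```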
Now, a mesh contains one
or more vertex buffers.
That's positions and normals
the thing that has to go off
to the GPU for rasterization.
And submeshes.
To get an idea
of what exactly a submesh is,
you might have, like, a
character who is going
to drive the excavator
and he might have some optional
components like a hard hat,
it shouldn't be optional,
but it is.
So in one index buffer, I
might have the whole character
without the hat, and in
another index buffer,
I might just have
all indexes referring
to the original mesh
vertex buffers
that have got his hat in it.
So by rendering or not
rendering that submesh, he does
or does not have a hat.
Submeshes can share the
data in the vertex buffers
so this just allows you to
have a single submission
for the hardware.
So the mesh, besides holding
vertex and index buffers,
also has utility functions, and
there's generators to make all
of your usual things, like
boxes and spheres and what not.
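For instance, two of the generators, sketched in Swift:

```swift
import ModelIO
import simd

// Parametric primitives, no artist required.
let box = MDLMesh.newBox(withDimensions: vector_float3(1, 1, 1),
                         segments: vector_uint3(1, 1, 1),
                         geometryType: .triangles,
                         inwardNormals: false,
                         allocator: nil)

let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.5, 0.5, 0.5),
                                  radialSegments: 24,
                                  verticalSegments: 24,
                                  geometryType: .triangles,
                                  inwardNormals: false,
                                  hemisphere: false,
                                  allocator: nil)
```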
There's modifiers.
So if a mesh didn't have
normals or tangent bases,
or any of those things, you
can generate those on demand,
thinking back to that
bake and export cycle
that I showed earlier.
And there's, of course
the bakers.
Now, a mesh buffer is the thing
that has to go to the hardware.
It's got the data in it.
The actual data.
How big the buffer is,
how you allocated it.
And you have to describe
those vertex buffers.
You have to say what the
intention of the buffer is,
like is this a position?
How big is it?
How many bytes does it take
and so on and so forth.
So finally the stride from
one vertex to the next,
that's what the hardware
needs to know.
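That description is an MDLVertexDescriptor; a sketch for an interleaved position-plus-normal buffer might look like this:

```swift
import ModelIO

// Two float3 attributes interleaved in buffer 0: position at offset 0,
// normal at offset 12, so 24 bytes from one vertex to the next.
let descriptor = MDLVertexDescriptor()
descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                              format: .float3,
                                              offset: 0,
                                              bufferIndex: 0)
descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeNormal,
                                              format: .float3,
                                              offset: 12,
                                              bufferIndex: 0)

// The stride is what the hardware needs to know.
let layout = MDLVertexBufferLayout()
layout.stride = 24
descriptor.layouts[0] = layout
```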
The same sort of
thing for the submesh.
You say what kind of
indexes do you have.
Are they 16s or 32s or whatever.
The geometry type.
Is it a triangle strip, or
is it a point or a line,
and finally a material.
Materials, as I said are going
to be physically motivated
if you use these APIs.
And to tell you what that means,
we have got bidirectional
reflectance functions
with ten simple parameters
that are designed
by artists to be intuitive.
So one of the more
important features is just
to specify whether the
material is dielectric
like clay or if it's a metal.
If you set this value all the
way to one end, it's dielectric
like clay; all the
way to the other end,
it will behave like a metal.
Here I have combined the two to
put an acrylic clear coat on top
of the metallic base and here I
tweaked up one of the parameters
to give a satin finish.
And here's an actual
artist-prepared piece
of spaceship with all kinds
of different materials on it,
just to give you an
idea that a small number
of parameters can give you
a wide variety of looks.
So, materials have a
name, just like everything
in Model I/O, and they have properties.
You specify the scattering
function, whether you want it
to be Lambert or Blinn-Phong
because you need compatibility
with an older system,
or physically plausible
if you're going into
this brave new world
of physical motivation.
The materials are singly
inherited and so what
that means is you might
have some kind of a uniform
that only varies by a number
or something on each character.
So you can specify a base
material and override properties
and subclass materials.
The material properties have
names, and they have a semantic
which means how they
are to be used; a type,
like is it a floating point
value or a color; and a value.
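A small Swift sketch of that structure (the property names and values are illustrative):

```swift
import ModelIO
import simd

// A named material using the physically plausible scattering function.
let scattering = MDLPhysicallyPlausibleScatteringFunction()
let material = MDLMaterial(name: "hullPaint", scatteringFunction: scattering)

// Each property pairs a name with a semantic (how it is used)
// and a typed value.
material.setProperty(MDLMaterialProperty(name: "baseColor",
                                         semantic: .baseColor,
                                         float3: vector_float3(0.8, 0.1, 0.1)))
material.setProperty(MDLMaterialProperty(name: "roughness",
                                         semantic: .roughness,
                                         float: 0.35))
```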
And lights.
And lights have physical
parameters
and physical properties
and they have geometry.
They have, you know,
an extent and a width.
And the light itself emits
light in a certain way.
You can specify it with
lumens and color temperature.
One thing I'm really
excited about overall
in Model I/O is we've
got support for color
that can be specified using
modern color pipelines.
So if you want to use sRGB,
we have a well-specified sRGB.
If you want to use Rec.709
or the new ACEScg color
profiles, you can ensure
that your color started
in a content production
app a certain way
and it got all the way
to the end, without going
through odd transformations
that might give you some,
you know, color surprises.
So there we have our
physically plausible light
and various subclasses,
a procedural area light,
I should say a procedural
description of an area light.
And the photometric light from
an IES profile, and light probes
that take reflective
maps or irradiance maps
and you can use them to compute
spherical harmonics and stuff
like that to compactly
represent what's going on.
And our camera is no longer
merely a pinhole camera
with an infinite in
focus projection.
We describe a camera from
one end to the other.
My picture here is supposed
to have a lens, shutter
and sensor plane there.
We describe what the lens can
see, the properties of the lens,
what kind of distortion,
barrel distortion,
or chromatic aberration that
sort of thing, the geometry
of the lens and how big is the
glass, how long is the barrel.
The exit aperture.
How tightly have you closed the
aperture, how big is the sensor
and what are the exposure
characteristics of the sensor.
So the end result of all of that
is, if you specify your cameras
in this way, if you mount a
35 or say a 50-millimeter lens
with an F1.8 aperture
and go check Wikipedia,
what are the characteristics
of a lens like that,
what is the field of view, and
what's the out of focus light,
highlight size and things like
that, the utility functions
on the MDLCamera will agree with
what you find in a textbook.
So that's a handy and fun thing.
And I encourage you to
incorporate those kinds
of calculations into your
shaders and pipelines.
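For example, mounting that lens in Swift:

```swift
import ModelIO

// Mount a 50-millimeter f/1.8 lens on the camera.
let camera = MDLCamera()
camera.focalLength = 50   // millimeters; updates the field of view
camera.fStop = 1.8        // how tightly the aperture is closed

// The derived values now agree with the textbook numbers for such a lens.
let fov = camera.fieldOfView   // degrees
```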
Just a quick example here.
When I first load this
thing up in my viewer,
these are the default
exposure settings.
Some of the detail is lost,
it's a bit washed out.
We can't see a lot of detail on
the dark areas of the excavator.
Sorry for the brightness here,
but I underexposed this image
and then flashed the
sensor in order to bring
out shadow detail, and raise
the overall levels while
reducing glare.
Just like a photographer
might do, if you are playing
around with settings on
your own real camera.
Now, skies.
We have two ways
to create skies.
The first way is to use a
procedural sky generator.
It uses physics.
You give it time of day,
essentially how high is the sun.
What are the atmospheric
conditions.
Is there a lot of back
scatter from the ground,
is there a lot of
junk in the air.
We calculate exactly
what the sky would look
like through some
fairly heavy math.
It creates a cube texture that you
can then just, you know, use.
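A Swift sketch of that generator (the parameter values are illustrative, roughly a clear mid-morning sky):

```swift
import ModelIO
import simd

// Sun height plus atmospheric conditions in, a cube texture out.
let sky = MDLSkyCubeTexture(name: "proceduralSky",
                            channelEncoding: .float16,
                            textureDimensions: vector_int2(256, 256),
                            turbidity: 0.3,                  // junk in the air
                            sunElevation: 0.7,               // essentially time of day
                            upperAtmosphereScattering: 0.2,
                            groundAlbedo: 0.5)               // back scatter from the ground
```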
Now, the other way you can
create sky for illumination
in this physically based
realm is through photography.
You can take a spherical
panorama
with your phone or a DSLR.
I made this picture
with my iPhone.
Then you can prepare it for
rendering using the MDLTexture
and initWithURL API,
create a cube map
for reflectance and irradiance.
Irradiance is the incoming
light that we can deduce
from the image, so there it's
been converted into a cube.
And then, from that, you
can compute the irradiance
and what these three strips
show is the original image,
the middle line is a texture
map showing the irradiance
at a certain convolution
or level of blur
and the third one is really fun.
The middle one is a texture and
it uses a fair amount of memory.
And the third one is actually
spherical harmonic coefficients
and so it's 27 floats
that reproduce the look
of the irradiant environment
that previously took
several dozen K to represent
in the middle slide
or the middle strip.
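Roughly, in Swift (assuming a panorama image named courtyard.jpg is available, which is hypothetical):

```swift
import ModelIO
import simd

// The spherical panorama photo.
let panorama = MDLTexture(named: "courtyard.jpg")!

// Compute a small irradiance cube map from it; irradiance is
// blurry by nature, so the texture can be tiny.
let irradiance = MDLTexture.irradianceTextureCube(with: panorama,
                                                  name: nil,
                                                  dimensions: vector_int2(64, 64))

// Or go all the way down to spherical harmonics with a light probe:
// level 2 means 9 coefficients per channel, the 27 floats from the talk.
let probe = MDLLightProbe(reflectiveTexture: panorama,
                          irradianceTexture: irradiance)
probe.generateSphericalHarmonics(fromIrradiance: 2)
let coefficients = probe.sphericalHarmonicsCoefficients
```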
So putting that all together,
this excavator is fairly well
situated in the environment
that I photographed,
and that's, I think,
a pretty exciting result,
and I hope you guys can think
of cool things to do with that.
Now, I just want to
talk a little bit
about how Model I/O
integrates with SceneKit.
There's essentially a
one-to-one correspondence
between top level Model I/O
elements and SceneKit elements.
MDLAsset corresponds
to a SceneKit root node,
and MDLMesh corresponds to an
SCNNode with SCNGeometry,
MDLLight with SCNLight, camera with
camera, material with material.
Where exact analogs don't exist
between a SceneKit parameter
and a Model I/O
parameter, we translate
to get a close approximation.
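The bridging initializers make that correspondence literal; a quick Swift sketch, with a hypothetical file path:

```swift
import ModelIO
import SceneKit

let asset = MDLAsset(url: URL(fileURLWithPath: "excavator.abc"))

// Whole asset to whole scene in one call...
let scene = SCNScene(mdlAsset: asset)

// ...or piece by piece, assuming the first top-level object is a mesh.
if let mdlMesh = asset.object(at: 0) as? MDLMesh {
    let node = SCNNode(mdlObject: mdlMesh)
    let geometry = SCNGeometry(mdlMesh: mdlMesh)
}
```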
Now, Model I/O doesn't
actually do rendering for you.
You probably gathered
that since I mentioned all
of these other APIs
that do do rendering.
So in MetalKit, you are
much closer to the metal.
The correspondence that
you get between Model I/O
and MetalKit is that an
MDLMesh can be converted
into a MetalKit
array of meshes.
Once you have that array
of meshes, it's up to you
to write shaders, traverse
the mesh, find the materials,
the lighting condition,
and do the rendering.
But getting those
metal buffers prepared
for you is absolutely
transparent.
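In Swift, that handoff looks roughly like this (newMeshes is the current spelling; the 2015 SDK called it newMeshesFromAsset, and the file path is hypothetical):

```swift
import ModelIO
import MetalKit

let device = MTLCreateSystemDefaultDevice()!

// Allocate the asset's vertex data directly in Metal-backed buffers.
let allocator = MTKMeshBufferAllocator(device: device)
let asset = MDLAsset(url: URL(fileURLWithPath: "excavator.abc"),
                     vertexDescriptor: nil,
                     bufferAllocator: allocator)

// Convert every MDLMesh into an MTKMesh whose vertex and index
// buffers are MTLBuffers; shading and draw calls are then up to you.
let meshes = try! MTKMesh.newMeshes(asset: asset, device: device).metalKitMeshes
```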
So with that, I would like to
pass the mic to Claudia Roberts
to talk about geometry
and voxels.
(Applause).
>> CLAUDIA ROBERTS:
Hello, everyone,
my name is Claudia Roberts
and as Nick mentioned,
I will give you an
overview of some
of the different
ways you can describe
and characterize
geometry in Model I/O.
The motivation being to help you
create games and applications
that have a more physically
plausible look and feel.
To give you all some context
of where we are headed,
first I will discuss how Model
I/O supports normal smoothing
then go into subdivision
surfaces followed
by a discussion on
voxels and a quick demo.
Okay. Let's get started.
Normal smoothing.
Normal smoothing is a cool
technique that tricks people
into believing that your
models have way more geometry
than they actually do.
By default, the vertices
of a polygon all share
the same face normal.
And thus all points on the face
of the polygon have the
same normal as well.
This creates a perceived crease
between adjacent polygons
which is the result
of the abrupt change
in vertex normals during
the rasterization process.
This sharp contrast in
colors can be mitigated
by introducing a shared normal
whose value is the average
of the vertex normals that share
the same coordinate position.
Now, during the GPU's lighting
calculations the normal
at each point on the face
of the polygon will be the
interpolation of vertex normals
that are no longer all
the same, giving a nice,
smooth shading effect.
Using the MDLMesh API, you
can add smoothed out normals
to your object, by calling the
addNormalsWithAttributeNamed
method and you can control when
normal smoothing is applied
by setting the crease
threshold value.
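In Swift, with a generated box standing in for your own mesh:

```swift
import ModelIO
import simd

let mesh = MDLMesh.newBox(withDimensions: vector_float3(1, 1, 1),
                          segments: vector_uint3(4, 4, 4),
                          geometryType: .triangles,
                          inwardNormals: false,
                          allocator: nil)

// Replace flat face normals with averaged, smoothed vertex normals.
// The crease threshold (0 to 1) controls which edges stay sharp.
mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal,
                creaseThreshold: 0.8)
```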
With our spaceship, we
see our default mesh
with the flat shading
on the left
and the smooth shading
on the right.
Next, subdivision surfaces.
Surface subdivision
is a common technique
for using low detailed
geometry to generate
and render a smooth
surface.
This technique allows you to use
a simple polygon control mesh
to create varying levels
of detail as needed.
For instance, it would
make a lot of sense
to render a character at a low
polygon count when further away
and increase the level of detail
as the character gets closer
and closer to the camera.
By varying the subdivision
level of a model,
you can generate these
different meshes without needing
to manually create them all.
In Model I/O, you can
create subdivision surfaces
by calling the newSubdividedMesh
routine, also found in MDLMesh.
Here at the bottom, we achieve
the smooth mesh on the right,
by setting the subdivision
level to two,
significantly increasing
the number of polygons.
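The call itself, sketched in Swift with a box as the control mesh:

```swift
import ModelIO
import simd

let controlMesh = MDLMesh.newBox(withDimensions: vector_float3(1, 1, 1),
                                 segments: vector_uint3(1, 1, 1),
                                 geometryType: .triangles,
                                 inwardNormals: false,
                                 allocator: nil)

// Each subdivision level roughly quadruples the polygon count,
// so pick the level to match the distance from the camera.
if let smooth = MDLMesh.newSubdividedMesh(controlMesh,
                                          submeshIndex: 0,
                                          subdivisionLevels: 2) {
    // render `smooth` up close, `controlMesh` far away
}
```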
Finally, voxels.
In addition to providing support
for various advanced techniques
for polygonal representations
of 3D models,
Model I/O also supports
volumetric representations.
By representing a model
as a close approximation
of how it is actually found in
the real world, that is a set
of particles or discrete
points in space
with inherent properties such as
volume, mass, velocity, color,
the door becomes
wide open to a range
of physically realistic
techniques,
analysis, and manipulations.
Whereas with polygon meshes
it's difficult to model
and represent surfaceless
phenomena
such as clouds, water, fire.
It becomes much easier with
the volume representation.
Now, instead of trying to
mangle and twist a rigid shell
of polygons, the model
becomes a deformable mass
that can change its
properties at any time.
Along those same lines
this representation allows
for procedural generation and
modeling, meaning it can make
for exciting and novel
opportunities and game play.
Think modification and
destruction of objects
and terrain on the fly.
Because a voxel model is a
more accurate representation
of the real world, it lends
itself to being analyzed,
explored, and operated on in
a more natural and real way
like slicing and cutting.
This has proven to
be particularly useful
in the medical imaging
field where, lucky for us,
scientists have proven that our
skulls are not really comprised
of an empty shell of triangles.
And finally, given that you have
a few solid voxelized models,
you can perform Constructive
Solid Geometry Boolean
operations on them in
order to create a slew
of more interesting
and complex models.
In Model I/O, we
expose the support
of voxels via the
MDLVoxelArray API.
Our implementation
represents volume models
as a sparse volume grid,
where voxels can be accessed
by using a simple spatial index.
This representation allows
for quick neighbor finding
and neighborhood traversal.
In addition to the grid
coordinates each voxel contains
a shell level value
which indicates how close
or far a voxel is to the
surface of the model,
both in the positive
exterior direction
and the negative
interior direction.
And Model I/O also
supports the creation
of closed meshes, model cleanup,
and conversion back
to a polygon mesh.
I will now show you the handful
of API calls you will need
in order to get started
with voxels in Model I/O.
So given an initialized
MDLVoxelArray you can generate
its voxel data from
an MDLMesh object
by calling the setVoxelsForMesh
method.
The divisions parameter is
used to set the resolution
of your voxel model by
specifying the number
of layers your model
will be divided
into on the vertical extent.
You can also specify how
thick you want the interior
and the exterior walls
of your model to be
with the last two parameters.
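Sketched in Swift (the shell-count spelling below follows the iOS 9-era API described in the talk; later SDKs fold these parameters together, and the file path is hypothetical):

```swift
import ModelIO

let asset = MDLAsset(url: URL(fileURLWithPath: "panda.obj"))
let mesh = asset.object(at: 0) as! MDLMesh   // assume a mesh at the root

// One call voxelizes a whole asset at a given resolution...
let voxels = MDLVoxelArray(asset: asset, divisions: 32, patchRadius: 0)

// ...or target one mesh and set interior/exterior wall thickness.
voxels.setVoxelsForMesh(mesh, divisions: 32,
                        interiorShells: 2, exteriorShells: 1,
                        patchRadius: 0)
```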
Once you have your voxel
array objects set up,
you can perform various
operations on them,
such as intersect, union,
and differenceWithVoxels,
which perform expected
basic Boolean operations.
To actually retrieve your
voxel data for processing
and inspection, simply call
the getVoxelIndices method
and once you're done with your
processing convert your voxel
model back to a polygonal
representation,
using the meshUsingAllocator
routine.
Simple as that.
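Continuing the sketch from above (current Swift spellings shown, where the talk uses the Objective-C style names like differenceWithVoxels):

```swift
import ModelIO

// A second voxel array to combine with (here just the same asset again).
let otherVoxels = MDLVoxelArray(asset: asset, divisions: 32, patchRadius: 0)

// Constructive solid geometry between the two voxelized models.
voxels.union(with: otherVoxels)
voxels.difference(with: otherVoxels)

// Pull out the raw indices for inspection; each entry encodes a grid
// position plus the shell level relative to the surface.
if let indexData = voxels.voxelIndices() {
    // process indexData here
}

// When you're done, convert back to a polygon mesh.
let rebuilt = voxels.mesh(using: nil)
```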
And now, I will show you voxels
in action with a quick demo.
>> CLAUDIA ROBERTS: So
here we have this demo.
It actually took about one
hour to create from start
to finish using SceneKit editor.
We simply dragged and dropped in
the ground and we did the same
for our red panda which
you actually saw yesterday
at the state of the union.
It's just a regular polygon
mesh and you can use any mesh
that you want for this.
And then using the MDLVoxelArray
API, it only took two lines
of code to turn this
mesh into voxels.
And then for each voxel,
we create an SCNBox,
and this is what it looks like.
And now that we have our voxels
in SceneKit, the exciting thing
about that is we
can take advantage
of all the really cool things
that SceneKit has to offer.
For instance, with one line
of code, we can turn all
of these SCNBoxes
into SCNSpheres.
And just for fun, we
will apply a SceneKit physics body
to all the nodes
and explode him.
Wee! I will hand it over
to Remi now, who will talk
about advanced topics in
lighting and Xcode support.
(Applause).
>> REMI PALANDRI: Hey,
everyone, and thanks, Claudia.
So hi. As Claudia said today, I
will be talking about advances
in baking and how Model I/O
does all of that for you.
So what exactly is
advanced lighting and baking?
What's the goal here?
The goal is to introduce
into your frameworks
and your rendering pipelines
in your games something
called global illumination.
So what is that?
It's not the way of saying,
all right, I have a point light
and I have a triangle
and let's light it
up using dot products
and be done with it.
We are going to try to pretend
that that scene is real.
We're going to try to simulate
how light would actually move
in that scene how light
will reflect off the walls
and occlude because it
can't go through triangles.
The issue is that
it's very expensive.
It's been used a long
time in the movies
because you can take half an
hour to render a frame if you want,
but that doesn't work for us.
If you look at the picture
here, you can see, for example,
that the wall here, the red wall
on the left irradiates a bit
of red light on that sphere.
The wall isn't really a light
per se, but light, as it does
in real life, reflects off the
wall and irradiates the sphere.
In the back of the
sphere, it's a bit dark,
because this sphere occludes
some lighting from going there.
It's not a real direct shadow
but there is still
something going on.
The issue is that this is
really hard to do in realtime.
So we are going to show you
ways to actually get some
of that precomputed before
you even launch your game
during precompilation.
So that you can get a really
realistic rounded look
without having any of
the performance drawbacks
and we will balance
performance and quality
so you can get the look
that you want with very,
very little performance overhead.
So global illumination today
will have two different parts.
So we are going to first
show you ambient occlusion
and then light maps.
And to introduce
ambient occlusion,
I would like to show
you an example.
If you look at the spaceship
it's the default SceneKit
spaceship, it looks good.
It's a great spaceship.
I love to play the game,
but it's a bit flat.
If you look at the wing or the
engine, it's not extremely clear
where the wing ends
or the engine starts.
If you look at the two
fins on the right image,
it's weird because you have
the same light as you have
on the front of the ship
but the fins should block
out the lights.
If you were to add ambient
occlusion it would look
like this.
Same shader, but the look
is a bit more realistic,
because there is now a
shadow between an occlusion
between the wing and the engine.
If I were to add ambient
occlusion to the second one,
it would look like
this, same thing,
but you can see light
that's occluded
and it's a more compelling
experience.
It's a better looking picture.
What exactly is ambient
occlusion?
What ambient occlusion is,
is very simply a measure
of geometry occlusion.
What that means is, for
my point or my mesh,
how much of the light
that arrives
from my world can actually
go to my point and how much
of my light is actually
blocked by my mesh
and its surrounding meshes.
So it's basically a
signal: one, for white,
saying I have absolutely
no blocking,
the whole light arrives;
and zero, for occluded,
most of my light can't go there.
If we were to look
at the signal,
that's what it looks like.
Mostly white, because
most light can go there.
But you see some (inaudible)
physical data there.
How do we compute that?
We compute that using
offline raytracing.
So your mesh gets
into a (inaudible)
and we send rays everywhere.
If I send rays all
around my points, we calculate
how many rays hit the mesh and
how many rays go to the sky box.
The difference between the two is
my ambient occlusion signal.
So what do we require
from you guys?
As input: a
mesh, my spaceship,
and a set of occlusion meshes.
So here it's only a spaceship.
If I were, for example,
to represent
that scene right there, and
I wanted to bake the ground.
So it would compute ambient
occlusion for the ground.
I would also need all the chairs
and the people and the floors
and all of that to stop the rays
so that I have a very
nicely looking mesh.
So that would be a nice set.
And what do I get as output,
a set of occlusion values
just for every point.
What is the occlusion
of that point.
So how exactly do we store that?
We have two ways, either
vertices or textures.
So if my mesh has
lots of vertices
for example a big
spaceship with plenty
of triangles we can just
store that in vertices.
It works very well because it's
very cheap, it's nearly one float
per vertex and using
rasterization in your Metal
or GL pipeline, it's
extremely cheap to render,
but for example, that spaceship
is actually fairly low in triangles
and so we need a texture.
So we actually built for
you guys inside Model I/O,
a UV mapper that creates
a 2D texture and wraps it
around the 3D mesh
so that it corresponds.
And then for every pixel
of that texture we have an
ambient occlusion.
If we were to look at what
that texture looks like for
that spaceship, it
looks like this.
So you can see it's
basically the texture wrapped
around the spaceship.
You can see the wings and the
engine base and all of that.
How do we compute that?
It's very easy.
It's literally a one liner.
You can look at the
top one for example,
shipMesh
generateAmbientOcclusion,
here it's vertex.
And so we have two parameters
quality and attenuation factor.
If we increase quality, what it
will do, it will send more rays
to get a better looking
signal but it's going
to take a bit more
time to compute.
Because it's baking, it's
before you even launch the game,
but still.
And if we have a textural bake,
then the texture will be bigger
so it will increase a
bit your memory cost.
And then that attenuation will
simply attenuate the signal
so only the darker
parts stay dark.
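Both bakes in Swift (shipMesh stands in for your own loaded mesh, and the quality and attenuation values are illustrative):

```swift
import ModelIO

let asset = MDLAsset(url: URL(fileURLWithPath: "ship.obj"))   // hypothetical file
let shipMesh = asset.object(at: 0) as! MDLMesh

// Vertex bake: one occlusion float per vertex.
_ = shipMesh.generateAmbientOcclusionVertexColors(
        withQuality: 0.8,                  // more rays, better signal, longer bake
        attenuationFactor: 0.98,           // so only the darker parts stay dark
        objectsToConsider: [shipMesh],     // everything that can block rays
        vertexAttributeNamed: MDLVertexAttributeOcclusionValue)

// Texture bake for low-triangle meshes: Model I/O UV-maps the mesh
// and fills a 2D texture with per-pixel occlusion.
_ = shipMesh.generateAmbientOcclusionTexture(
        withQuality: 0.8,
        attenuationFactor: 0.98,
        objectsToConsider: [shipMesh],
        vertexAttributeNamed: MDLVertexAttributeTextureCoordinate,
        materialPropertyNamed: "occlusion")
```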
The really cool thing here
is we actually partnered
with the SceneKit team to
integrate those features
in both SceneKit but even
in the SceneKit editor
that you guys saw at
the state of the union.
And I would like to
show you that right now.
So it works.
So I just loaded
here a little scene
that literally has a
spaceship and a sky box around.
The ship is very
nicely flying in space.
I added no lights here,
which is why it is very flat.
The only thing that we are
visualizing right now is
ambient lighting.
And as you can see the ambient
lighting does not take what the
ship looks like into
account whatsoever.
It doesn't give the user
nice feedback in terms
of where the geometry
is, so it's very flat.
We're going to change that.
So I'm going to click
on my mesh and first,
I will see that we actually have
a fair amount of vertexes here
so we will do a vertex bake.
I will bring up the
geometry tab.
I'm going to go here
and under occlusion bake
and choose the vertex,
those values are
perfect, and press bake.
So what's happening here?
For every little vertex of that
ship, we will send between 100
to 200 rays around and
then it looks like this.
See? Way better!
We had this and now
we have got this.
And it makes perfect sense.
If you look at, for
example, here, the top deck,
created occlusion
on the bottom one,
because the light can't
arrive there easily.
If we look at the windows
inside here, here the inner port
of the windows have more
occlusion than the outer parts.
If we look here.
Let me zoom in.
If we look here at the
cannons underneath,
the top of the cannons
are really dark
because the whole ship stops
the light from arriving there.
If we were to look
at the bottom parts,
all white, makes sense, right?
So by just adding one float
per vertex, we were able
to give our ambient light
a way to light up our scenes
and give a better look.
And that's available in the
(inaudible) of SceneKit.
So let's go back to the slides.
So that was ambient occlusion.
I will finish today
by introducing you
to advanced lighting
with light maps.
So what are light maps?
Light maps are a way
to get your diffuse lighting
into the game by precomputing
how the diffuse lighting
affects your textures, to not
have to do that in realtime.
As you guys know, if you have
done some game programming,
lighting is extremely expensive.
If you have ten lights then for
every frame you need to compute
those lights' aspect and how
that interferes with your game.
That's very expensive.
So what we have here, I just
dragged and dropped a plane
and two boxes inside a simulator
and put eight lights there
and I computed, before I
even launched the game,
using the light map baker how those
lights light up my scene
and light up the texture.
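The bake call in Swift (the mesh and light names are hypothetical stand-ins for the plane, boxes, and eight lights described):

```swift
import ModelIO

let asset = MDLAsset(url: URL(fileURLWithPath: "room.obj"))   // hypothetical file
let groundMesh = asset.object(at: 0) as! MDLMesh
let sceneLights: [MDLLight] = []   // the eight MDLLights would go here

// Precompute how the lights land on the texture; at runtime the
// diffuse lighting then costs a single texture fetch.
_ = groundMesh.generateLightMapTexture(
        withQuality: 0.9,
        lightsToConsider: sceneLights,
        objectsToConsider: [groundMesh],   // plus the boxes and other occluders
        vertexAttributeNamed: MDLVertexAttributeTextureCoordinate,
        materialPropertyNamed: "lightMap")
```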
And if we were to
look at the scene,
that's what it looks like.
It's really realistic.
We've got the shadows
and the lights
but this costs me
literally one texture fetch.
That's it.
Usually rendering eight lights
especially if you've got shadows
and shadow maps they
are very expensive.
That's what the texture
looks like.
Very straightforward,
you see my boxes
in the middle and my shadows.
The cool thing is this is
just a texture fetch, right?
So it supports lots of lights.
I could have 100,000
lights if I wanted
and it would have the
exact same runtime costs.
Even the shadows
look really cool.
When you do spotlights like
this, that are really close
to the ground, then with
shadow maps you reach kind
of precision issues, but
with this, for every pixel
on the thing, we send rays and
see which ones are in the light
and which ones are not.
So your shadows look
really realistic.
Calculate it offline.
And the cool thing is we
support super complex lights
that you couldn't even
dream of doing at run time.
For example, area lights have
for a long time been really hard
to do at runtime because
they are really hard to do
with normal point light to
triangle illumination processes
but here we are using
ray tracing.
So we just send the ray and see
which ones arrive (inaudible)
and which ones don't.
We also support the
cool IES lights
that Nick talked to you about before.
And that was light maps.
So to close this talk, I
would like to summarize a bit.
Today we introduced
a great new framework
and we are super happy about it.
First, it does the basics: imports
and exports 3D asset files.
But it actually does
so much more for you.
It introduces concepts for
physically based rendering,
with models, lights,
cameras, materials,
and skies that aren't
just defined with floats
but are actually based
on stuff in real life.
We have system integration
in the frameworks and tools
in Xcode that you guys can
play with and have fun.
For more information, we invite
you to look at our documentation
and videos and forums
and our technical support
and for any general inquiries,
you can contact Allan Schaffer.
Related sessions
are "Enhancements
to SceneKit" tomorrow
and "What's New
In Metal" on Thursday.
And we would like to see you
at Model I/O Lab right now
and tomorrow morning at 9AM.
Thank you all for your
time and have a great WWDC.
(Applause)