WWDC2014 Session 509

Transcript

X-TIMESTAMP-MAP=MPEGTS:181083,LOCAL:00:00:00.000
>> Hi, everyone.
I'm Dean. In a few minutes,
my colleague Brady will
come up on stage.
We're both engineers
on Apple's WebKit team.
How often have you heard a
presentation start with "Today,
I'm really excited to talk to
you about blah, blah, blah?"
And I told myself I
didn't really want
to introduce this
session that way
and then I realized I
actually am really passionate
about this topic because
over the past few years,
I've worked on a lot of graphics
technologies on the web like SVG
and Canvas, CSS transforms,
animations, filters.
And I really think WebGL
is the next significant leap
in the type of graphics you
can do in a web browser.
WebGL takes the power of
the OpenGL ES standard,
which is popular on mobile chips
and combines it with the speed
and convenience of JavaScript,
the web's programming language.
Well, because you've got
this proliferation
of really powerful
GPU hardware combined
with this incredible performance
improvement in JavaScript,
we've hit this sort
of sweet spot
where you can do these
amazing graphics.
This is going to give you the
full power of a configurable
and programmable pipeline,
as well as performance,
both because you're
talking directly to the GPU
and because JavaScript
is super-fast nowadays.
This is going to allow
you to write web content
such as showing an interactive
3D model while still maintaining
the flexibility and ease
of use of having your text
and interactive controls
in HTML.
Or maybe you want to take that
3D model a little bit further
and do something like an
architectural walk-through
of a building site.
You can see here we've got more
advanced lighting, shadows.
There's also data
visualization and mapping.
We all know 3D mapping
is becoming more popular.
But it's not just 3D.
Let's say you want
to provide something
like an image editor
that's doing 2D operations
on your content.
And here, you get
to do something
that previously wasn't
available or was difficult to do
in regular JavaScript.
Well, something that's
very popular
on the web is just
doing image transitions.
So here, we've got
something a bit more exciting
than a normal image slide where
we can do a 3D ripple effect.
And of course, there's games.
This is a demo, AngryBots,
by Unity where you've got
this console-level game engine
which has things like realistic
lighting, particles and shadows
and also the ability
to destroy evil robots.
Or maybe you want to do
something like casual gaming
and this is Swooop
by PlayCanvas.
And it's a really
great innovative take
on the infinite 2D runner where
instead of like sliding along
in 2D, you're actually flying
this sort of nice stylized plane
around this 3D island;
it's quite fun.
So, what are you
going to learn today?
We'll start with how to
get access to WebGL
and set it up in your web page.
And then, we're going
to show how
to do basic drawing with WebGL.
And this is going to be sort
of a crash course in how
to draw something with
WebGL and you get an idea
of how powerful the
rendering system is.
Once we get that, we're
actually going to move
on to advanced drawing and
how to do simple animation.
And lastly, because
it's a web technology,
we want to tell you
how WebGL fits
into other parts of
the web platform.
But the important topic is
where is WebGL available?
And we're happy to say that
WebGL is available in Safari,
on OS X Yosemite, and that
was announced on Monday.
[ Applause ]
And what wasn't announced,
but I'm happy to say,
is it's also available
in Safari on iOS.
[ Applause ]
The even better news is
this: WebGL is available
on every device that can
install these operating systems.
If you're a programmer and
you want to use WebGL content
in your app, you're
going to want to know
about the modern WebKit API
and its WKWebView class.
Now, one of the many benefits
of using this new modern API is
that you get full benefit of
the JavaScript Nitro engine,
which means your content has
got to be running super-fast
which we all know is Craig's
mom's favorite feature
and she's updated all her
apps to use the modern API.
Something else is that the
API surface area between iOS
and OS X is identical
and this means
that your content should run
the same on both devices with,
of course, the understanding
that some devices don't run
or don't have as powerful GPUs,
but otherwise, it's identical.
And similarly, because
it's a web standard,
that same content should run
on other browsers
that support WebGL.
Now, creating great 3D
content is made a lot easier
if you have a good tool system.
And even though WebGL is a
relatively young technology,
it does have a sort of
thriving ecosystem of tools.
And there's a couple I want
to call out in particular
and these are big
vendors, Epic Games,
the makers of Unreal Engine,
and Unity Technologies,
the makers of Unity, both
have announced WebGL export
from their systems.
This means not only do you get
the state-of-the-art 3D engines
and editing environments,
you also get access
to their marketplace where
you can purchase 3D models
or materials or other assets
to help you make your content.
Another example is the
company called PlayCanvas,
who also have a 3D engine and
editing tool, but they do it all
within the web browser,
and this means
that you can have distributed
teams working inside
of a browser editing the same
content; it's really cool.
If you're a developer, there's
a bunch of open source libraries
and I've just listed a few here.
Most of these do wrap
the low-level WebGL API
in something higher-level, which
allows you to program in terms
of spheres and cubes
and materials rather
than buffers and triangles.
But today, we are going to talk
about buffers and triangles
because we think it's important
that you understand that level
of programming especially
if you are using some
of these high-level tools,
so you have some hints as to
what might be going wrong
and what you can do to
improve your content.
Before I get into that, I just
want to talk about one thing
which is motivation,
why we're doing this.
So, Apple has always considered
rich, powerful graphics
to be super important
to web developers
and to the Safari engine.
And in fact, this is
why about a decade ago,
we invented the canvas element,
which is the
basis of WebGL.
As soon as WebGL was announced,
we joined the working group
and to this day, we
volunteer ourselves
as the editor of
the specification.
So, the next question is:
why did we choose OpenGL?
OpenGL is the most important
standard graphics API
that's around.
It's already been demonstrated
that it can run on a billion
mobile devices.
And the content you can create
there is still just amazing
on other devices or more
powerful devices as well.
So, again, it was sort
of a no-brainer
that we chose the best
standard, and that way,
all browsers can implement
it and we end up with WebGL.
OK. So, let's get coding.
Like all programming examples,
we want to start with "Hello,
world" and we've got the
"Hello, world" of WebGL.
So, imagine you're
opening your text editor
and we're starting
with a blank slate.
We're just going to
type a few commands
and create something that's
this 3D interactive environment.
We're actually going to start
with something very simple
which is just a triangle.
But while that might sound
disappointing, if we go back
to the 3D environment,
we look at it
and think actually-let's take
a look-it's actually made
up of millions of
little triangles, and each
of those triangles has color
or texture applied
or some lighting.
And then we're rendering
it again with another pass
where we might be doing blurs
or glows or shadows or whatever.
And when you combine
them all together,
you actually do get
the advanced rendering.
So, you learn a lot of detail
from how to draw one triangle
and what you learn
goes on to create better things.
So, let's start creating,
configuring and drawing.
And for that, we need 4 things.
First, we're going to
need somewhere to draw to,
something to draw on to.
Then, we're going to need
something to draw with.
Then, we're going to
configure that thing,
choose what paint we
want to paint with.
And lastly, we got
to do the drawing.
We'll go through each of
these steps one by one.
So, let's start with
something to draw onto.
And artists like myself
call this a canvas
which is super convenient
because HTML already has
an element called canvas.
So, we'll use that.
And this is like a regular
image element except instead
of getting the graphics to draw
from say, a file on the web,
you provide the commands
in JavaScript that draw
into the image and then
the browser renders that.
You might already have one in
your page as a canvas element
or you can create one through
JavaScript via createElement.
In my example, I'm
going to pretend
that I've already got one
in the page and I'm going
to select it using the DOM
API and I'm going to store
it in the local variable
"canvas" because I want
a reference to it later.
Now, before I can draw into it,
I need to tell the system
how big an image it is
or how many pixels it
needs to allocate so that
when I draw the rendering
happens into that image.
And I do that by
setting the width
and height variables
on the canvas.
Here, I want to set it to 600
by 400 but also I want to take
into account if I'm
on a Retina display,
I want a higher resolution
image,
so I'm querying
window.devicePixelRatio.
That's all I need for
something to draw.
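That sizing step can be sketched as a small helper; the function name is illustrative, and the 600-by-400 size is just the example from above:

```javascript
// Sketch: size a canvas backing store for a 600x400 CSS-pixel area,
// scaling by devicePixelRatio so Retina displays get a sharper image.
function sizeCanvas(canvas, cssWidth, cssHeight, dpr) {
  // The width/height attributes set how many pixels the browser allocates.
  canvas.width = cssWidth * dpr;
  canvas.height = cssHeight * dpr;
  return { width: canvas.width, height: canvas.height };
}

// In a page you would write something like:
// const canvas = document.querySelector("canvas");
// sizeCanvas(canvas, 600, 400, window.devicePixelRatio || 1);
```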
The next thing is I need
something to draw with.
And in WebGL, that is the
WebGLRenderingContext.
This is the object that
exposes the entire WebGL API.
In code form, you
get one quite easily.
You just call getContext passing
the string parameter WebGL.
If you're familiar with
2D Canvas rendering,
you would have seen someone
call this with a 2D string
and you get the 2D API.
So here, we've called
it with "webgl"
and we have a variable
called gl,
which is the thing
we're going to draw with.
If you're familiar with
Native OpenGL programming,
you might be wondering where
did I set my pixel format
and create my render buffers
and frame buffers, et cetera.
You don't have to
do that in WebGL.
The previous step allocated the
image that you're going to draw into
and this step gives you
the context that you're going
to draw with; that's
all you have to do.
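A minimal sketch of that step. The fallback to the older "experimental-webgl" name is my addition, not something from the session; some engines of that era only recognized the prefixed name:

```javascript
// Sketch: obtain a WebGLRenderingContext from a canvas. Returns null
// if WebGL is unavailable. Older engines shipped WebGL under the
// "experimental-webgl" name, so it is common to try both.
function getWebGLContext(canvas) {
  return canvas.getContext("webgl") ||
         canvas.getContext("experimental-webgl");
}
```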
Next, we need
to configure the
system, and this is
where it gets a little
bit tricky.
With only a few
lines, we've got something
to draw with; now we're
getting into the native system.
This is where we're going
to start the crash
course in WebGL rendering.
Before we are able
to render something,
we need to do a few things.
We need to create buffers.
And buffers are just
a set of data
that we got to upload
to the GPU.
And that data can be
any types of things
but they're almost certainly
going to contain the geometry
that we want to render.
Next thing we need is a
program which is going
to be the actual way
that WebGL renders it.
Now, you're going to
be a bit confused here
because we're already making a
program what's-is this another
program and the answer
is it is, we got to get
into the details of
what it is later.
But just imagine that you got to
be writing some specialized code
that gets uploaded also to
the GPU and executed there.
Let's start with the buffers.
I want to draw this triangle,
and the triangle is just
made up of 3 points.
In WebGL, the coordinate
system goes from minus 1,
minus 1 on the bottom left to 1,
1, on the top right and I want
to create a buffer
out of these 3 points.
So, what I'm going to
do is allocate 6 values,
an array of 6 values
and I'm going
to map those points
to those 6 values.
So here, I've got (X1,
Y1), (X2, Y2), (X3, Y3).
This is all I need
to upload to the GPU.
So, I'm going to show you how to
do that in WebGL. In JavaScript,
we just start with
the array of 6 values.
This is a JavaScript array.
And I'm going to assign
it into a Float32Array.
And this is a typed
array in JavaScript,
and that's telling the system
that I want it to allocate
this data as a fixed-length array
where each value is a four-byte
floating point number.
This comes in handy because
when we upload it to the GPU,
we've already told the system
what the type of data is,
so it doesn't have
to do another conversion
from JavaScript.
So, to actually
create the buffer,
I'm going to call the
createBuffer command.
And now I'm going to provide
the data that is going
to be uploaded to the GPU.
So, I just tell it, "That
buffer you just created?
Send that vertices
variable up there."
And that's all we have
to do to create a buffer.
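A sketch of that buffer setup. One detail the spoken walkthrough elides is gl.bindBuffer, which makes the new buffer the target that bufferData uploads into; the helper name and the exact coordinates are illustrative:

```javascript
// Sketch: upload triangle vertices to the GPU.
// Assumes `gl` is a WebGLRenderingContext obtained earlier.
function createTriangleBuffer(gl, vertices) {
  const data = new Float32Array(vertices);  // fixed-length, four-byte floats
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);   // make it the current buffer
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW); // send to the GPU
  return buffer;
}

// Three (x, y) points in WebGL's -1..1 coordinate system:
const triangle = [ 0.0,  0.5,
                  -0.5, -0.5,
                   0.5, -0.5];
```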
Now, we're going to
talk about the program.
Now, conceptually, what
we're doing is we've got some
JavaScript commands we're
executing and then we're going
to end up with pixels
on the screen.
But what really happens is that
we process some JavaScript,
it gets sent to the
WebGL rendering pipeline
and it's the thing that draws.
So, we've really got to
understand what's happening
in the WebGL rendering pipeline.
Now, you can look
up OpenGL textbooks
and they all explain the
same thing, but it's made
up of basically 8 steps,
and for each of these 8 steps you
have different configuration
options you can pass to them.
But there are 2 that you have
almost complete control over
and they're the 2
most important ones
and the ones we're going
to talk about today.
And that's the Vertex
Shader step
and the Fragment Shader step.
If we take them in isolation,
we can really consider
that for the sake of
this presentation,
I'm executing JavaScript.
I'm going to send the commands
into the Vertex Shader.
The Vertex Shader is going
to do something with it,
send the command-send the
output onto the Fragment Shader
which is going to do
something to it and eventually,
we get the pixels on the screen.
And this combination
of the Vertex Shader
and the Fragment Shader
is what we were referring
to as the program before.
Now, Shaders are these little
programs that you got to write
in another language
which we'll get to later
and they're the things
that execute on the GPU.
And the reason there's two
of them is they have two
different operations.
The Vertex Shader is
mostly about geometry.
So, you're passing in
points to it and it's got
to output converted points.
The Fragment Shader is
really about what color
the pixels are going to be,
based on the input points.
If we rotate this diagram
clockwise 90 degrees,
we'll look at it another way.
Here, I've got the buffer
that I've allocated before
and I've uploaded to the GPU.
I'm going to send it
into the Vertex Shader,
except it doesn't
quite work this way.
And this is where the power
of GPUs comes in to play.
I'm actually going to separate
that buffer into
a set of three
vertices, and each gets sent
to a different instance
of the Vertex Shader.
And these are all executed
in parallel on the GPU.
And this is where you get
this great performance.
So, given a Vertex,
which is just the (x,
y) point in this case,
the Vertex Shader is going
to do something and
create another point
and send it back to the system.
And when the system has
collected all the points,
it's going to do what's
called rasterization.
So, it now knows where the
geometry on the screen is going
to be displayed and which
pixels are going to be touched.
But it still doesn't know
what color to draw the pixels
and this is the next
step, which is very similar
to the Vertex Shader step.
It's going to take all those
pixels and then send them
out to a bunch of
parallel instances
of the Fragment Shaders.
And the Fragment Shaders
just have the one task:
given a pixel, what
color should it be?
Let's look at the code for this.
I'm going to start by creating
a vertexShader object which I do
by createShader,
passing in the parameter,
telling it that it's going
to be of type, vertex shader.
And next, I'm going to
provide it with some source
code for the shader.
I'm not showing you the
source code at the moment
but you can just imagine I'm
getting it from somewhere.
It might be I create
it by JavaScript
or I might have preloaded it
or got it from the internet
and we'll get to that later.
I'm going to compile it, which
is turning it into commands
that we can use later
on the GPU.
I do the same thing with
the Fragment Shader.
It's pretty much identical.
Of course, I'm going
to use different source
code, which we'll see.
Once we have those two
objects, the vertexShader
and the fragmentShader, I
want to create a program
and we tell the program that
there are two objects it needs
to talk to: the two
shaders we created.
I'm going to link it and then
lastly, I'm going to tell WebGL
that this is the
program I want you to use
when you do your drawing.
So, that's all we have
to do for configuration.
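The compile-and-link sequence just described can be sketched as one helper, with the error checking the slides skipped. The function name is illustrative, and the two GLSL sources are assumed to come from elsewhere, as the session says:

```javascript
// Sketch: compile both shaders, link them into a program, and make it
// the current program. Assumes `gl` is a WebGLRenderingContext.
function buildProgram(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader)); // compile error details
    }
    return shader;
  }
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  gl.useProgram(program); // tell WebGL to draw with this program
  return program;
}
```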
So, we now have a setup where
we have something to draw,
we have something to draw with
and we've configured it to draw
and the last thing we need to
do is render our masterpiece.
Now, the next tricky step.
I haven't shown you
any source code,
but the general
idea here is: we have a
bunch of JavaScript,
and we have some buffers on
the GPU that are what I want
to render, and I've got these
programs that are going to render it,
and I need to, through
JavaScript,
tell the system how I'm binding
the data in those buffers
to variables in my program.
And you'll see the variables
in the program later.
But the first thing I'm going
to do here is say, when you come
to execute the program, there's
going to be a variable called
"aPosition" and I want you
to associate every vertex
that you've uploaded
as a buffer with that variable.
Next, when you actually
go to use the buffer,
I have to tell the system that,
well, I've uploaded X, Y, X, Y,
X, Y so I want you to assume
that when you're processing this
buffer, take it two at a time
and that they're
floating point values.
Then, I just have to draw.
I've sent a buffer.
I'm going to draw the vertices
in the buffer starting position
zero and I've got three of them
which makes the three points
in the triangle and eventually,
we end up with a
triangle on the screen.
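Those binding and drawing steps can be sketched as follows, assuming the buffer is still bound and `program` is the linked program from before; the helper name is illustrative:

```javascript
// Sketch: bind the buffer's data to the shader's aPosition attribute
// and draw. The "2" and gl.FLOAT say: take the buffer two floats at a
// time, one (x, y) pair per vertex.
function drawTriangle(gl, program) {
  const aPosition = gl.getAttribLocation(program, "aPosition");
  gl.enableVertexAttribArray(aPosition);
  // 2 components per vertex, float type, not normalized,
  // tightly packed (stride 0), starting at offset 0.
  gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, 3); // 3 vertices starting at index 0
}
```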
Now, if we look at
the source code all at once,
you might be a little
bit worried
that it was actually a
fair bit of source code.
I've skipped some in the
slides, such as the error
checking.
But the important thing is
actually while you only drew a
red triangle, there's
an insane amount
of power behind that
red triangle.
That power comes
from the shaders
and that's what we're
going to look at next.
So, I didn't show the
source code to the shaders,
but we'll get into that.
Shaders are written
in a language called GL
Shading Language or GLSL.
It's a C-like language
designed for parallel graphics.
What this means is that
it looks like C
but it's got some extra
primitives for vectors
and matrices and also some
operations on those primitives
so that you can multiply
matrices and whatever.
You don't have to do
the math yourself.
It also has a bunch
of built-in functions,
such as trigonometry
functions or other operations
on the matrices, like
dot products and normals
and some other helper
functions for operations
that are common in
graphics.
Let's go back to the view
of the rendering pipeline.
So, I have the buffer
that I was sending off
to multiple Vertex
Shaders, which were sending
on to Fragment Shaders.
But we'll simplify it
again and come back.
Now, the data I was sending
in, the buffer at the top
that you're familiar with-
at the moment, I only have X,
Y positions but really you
can send any data into it.
So here, I've just
added some other data.
And again, these are
your inputs to the Vertex Shader.
Each chunk of the buffer
is going to be associated
with the vertex and sent in
to a Vertex Shader instance.
But you might want to send data
into the Vertex Shader that's
shared across all the instances
that are running and you
do that by using uniforms.
And these are global constants.
So, good examples of this
might be the current frame
of the animation that you want
to run, or the mouse position,
or the rendering time,
or maybe the camera position
or matrix that you want to
use as a viewing position.
So, the Vertex Shader is going
to operate on those two sets
of inputs, one's
coming per vertex
and the other one that's
coming as global variables
and that only has one task
and that's to produce a point.
And it produces that
point by writing
to the global variable
called gl_Position.
The Fragment Shader
is quite similar.
It's got to use the
position that was passed
by the Vertex Shader
and any other data
and the global constants and
it's going to write to one thing
which is the color of the
pixel which it does by writing
to the global variable,
gl_FragColor.
Finally, let's look
at the source code.
For my Vertex Shader,
I've picked basically the
simplest Vertex Shader I can.
Now remember, we saw
that we were binding
in JavaScript the
value aPosition-sorry,
the variable aPosition to the
buffers that we passed in.
Here's where I actually
get to do it.
So here I am in the Vertex
Shader saying the data
that comes in from the vertex,
I want you to associate it
with the variable aPosition and
I'm doing the one thing I have
to do which is writing
to gl_Position
and I'm just writing the
same value that I got in.
It's just sending the
inputs straight through.
At this step, normally, you
would do something like map
from your world coordinate
system
into the camera coordinate
system,
which then maps it
into the screen coordinate
system.
But because the data I send
is already
in the screen coordinate
system, I can just pass it
through for convenience.
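A sketch of what that pass-through Vertex Shader could look like, as a GLSL string in JavaScript. One detail not called out above: gl_Position is a vec4, so the 2D point is padded with z = 0.0 and w = 1.0:

```javascript
// Sketch: the pass-through vertex shader, as GLSL source. The incoming
// 2D point is forwarded unchanged, padded into the vec4 that
// gl_Position expects.
const vertexShaderSource = `
  attribute vec2 aPosition;
  void main() {
    gl_Position = vec4(aPosition, 0.0, 1.0);
  }
`;
```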
The Fragment Shader
example is equally simple.
I'll start with some
boilerplate.
And the boilerplate is
telling the system what level
of precision I want it to use
for floating point operations.
And then I'm going to write
the color of the pixel and,
in this case, I'm
writing to gl_FragColor.
Every instance
of the Fragment Shader is
writing the same value which,
in this case, is a vector
of four values: the red,
green, blue and alpha values.
So here, I am writing
100 percent red, 0 green,
0 blue and 100 percent alpha
and this is why every
pixel came out as red.
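And the matching all-red Fragment Shader, again sketched as a GLSL string:

```javascript
// Sketch: the all-red fragment shader. The precision line is the
// boilerplate mentioned above; the vec4 is (red, green, blue, alpha),
// each component from 0.0 to 1.0.
const fragmentShaderSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;
```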
Now that was pretty simple.
And to take it and show
you a little bit more power
of the shaders, I'm going
to show a live demo.
OK. So, here's our triangle.
Now, this is running in
Safari and it's a web page
and what you've got is
this: the top half
of the screen is a WebGL canvas
that's drawing the triangle we
did before.
And the bottom half
of the screen is showing the
source code to the shaders.
So, in this case, it's
showing the Vertex Shader,
and here's the Fragment Shader.
And this whole environment
is live.
So, if I make an edit
in the page here,
it's going to grab the
source code out of the page,
recompile the program, upload
it to the GPU and render again.
It's actually rendering
constantly here.
You just don't see it
because nothing's changing.
So here's an example.
Let's say here's me
writing the color of the pixel
and I've set it to 1, 0, 0, 1.
If I change the green to
1, I get full red, full green,
zero blue, and I get yellow.
Let's reset that and go
back to the Vertex Shader.
So, you can see here is the
attribute that I'm passing in
and I'm also passing
in some uniform values
which is the time, as
in, every time I render,
I update that value so that
I can read it in the shader.
So, I could do something tricky
here like, well, maybe I want
to do some kind of
coordinate transform.
I want to make the
triangle twice as high
so I just multiply the Y
position or I can do something
like if I take the attribute
in and I say I want the X value
to be the Y value and the
Y value to be the X value,
we've got this flipped triangle.
I've got a preloaded one
which is doing it here.
So, in this case, what I'm doing
is, I've got the input variable
that I've passed in, called
time, and I'm just mapping
that between 0 and 1 and
assigning it
to the variable called progress.
And then when I come
to write the position,
I'm just telling the position
that I want to interpolate
between the X and
the Y positions using
that progress value and that's
why you get this nice reflection
across the diagonal axis.
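A hedged reconstruction of that flipping shader. The uniform name uTime and the sine-based mapping into 0..1 are my assumptions, since the demo's exact code wasn't shown:

```javascript
// Sketch of the demo's flipping vertex shader: a time uniform is mapped
// to a 0..1 progress value, and the output interpolates between the
// point (x, y) and its swap (y, x), reflecting across the diagonal.
const flipVertexSource = `
  attribute vec2 aPosition;
  uniform float uTime;
  void main() {
    float progress = 0.5 + 0.5 * sin(uTime);      // map time into 0..1
    vec2 flipped = vec2(aPosition.y, aPosition.x); // swap x and y
    gl_Position = vec4(mix(aPosition, flipped, progress), 0.0, 1.0);
  }
`;
```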
Let's reset again and go
back to the Fragment Shader.
Now, we can do some cool
things in the Fragment Shader.
For example, I've got
this communication
between the Vertex Shader
and the Fragment Shader
where I'm telling the shader
where the position
in X and Y is of the fragment.
So, if I say instead of the
green value, I set it to be,
say, fragPosition.x,
then we get a gradient
because the value moves from
0 to 1 across the triangle.
Again, I've got a preset one
so I don't have to type it out,
but I'm doing something
similar here
where the red value
is the X position,
the green value is
the Y position
and then the blue value
is oscillating over time
so you get this nice
triangle that's moving.
I'm kind of getting sick of the
triangle so let's have a look
at it in wireframe mode.
Now, we said that really GL is
about drawing lots of triangles.
I want to draw a rectangle,
which is really just two
triangles joined together.
And if we go back
to the solid mode,
you see that the same
animation is still running.
Now, what's really impressive
is this program is running
and calculating the value of
every pixel every time we draw.
And this really blew my
mind when I first saw it
but even this is a
pretty simple example
and we can do way
more cool things.
So, here's a little
bit more code.
But what it's really doing is
just taking some sine waves
with slightly different
offsets and adding them
up to get this interactive
thing.
So, there's no images here,
it's all being calculated live.
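A rough sketch in the spirit of that plasma demo, not the demo's actual code; the varying and uniform names and every constant here are made up for illustration:

```javascript
// Sketch: plasma-style fragment shader. Sine waves at different offsets
// are summed per pixel, then remapped into RGB, so the whole pattern is
// computed live every frame with no images involved.
const plasmaFragmentSource = `
  precision mediump float;
  varying vec2 vPosition;
  uniform float uTime;
  void main() {
    float v = sin(vPosition.x * 10.0 + uTime)
            + sin((vPosition.y * 10.0 + uTime) / 2.0)
            + sin((vPosition.x + vPosition.y) * 10.0 + uTime);
    gl_FragColor = vec4(0.5 + 0.5 * sin(v * 3.14159),
                        0.5 + 0.5 * cos(v * 3.14159),
                        0.5, 1.0);
  }
`;
```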
And the cool thing is you
can play around with stuff.
So, here is where I
basically choose the frequency
of the plasma so I can make it
a little bit higher by dividing
by less and, let's say I
don't really like the colors-
here's the point
where I'm assigning
the color value.
Let's say, instead of
minus 4, let's do plus 4,
I like those colors
a little bit better.
And I can go up here and
say, "Well, here's the number
of iterations that
I'm adding up.
So let's say, go down to 7.
Let's do something like
3, kind of like that."
This looks pretty cool.
Now, as programmers you'll know
that you can do cool
things like here.
Let's say I want to
change the value of Pi,
something that's quite hard
to do in the real world
as far as I'm concerned.
We will say something like 5, 6,
or we can go-that's
kind of a nice effect.
You can even do something
like crazy-what happens
if I, hold on a second.
Now, I came across
something earlier in the week
which I really liked,
which was-I saw on the web
and it was by a guy
named Israel
and he saw the WWDC
branding and said, "Hey,
I could write a Shader that
does this," and I asked him
if I could use it
and here's the code.
This is really cool, so this is
again a program that's running
for every pixel every
time we draw and it's sort
of this interactive WWDC logo.
And you see, scroll down,
there's a fair bit of code.
Amazing that it's all
running every step.
Let's say I want to comment out
the final one and get black.
You could sort of see what
he was doing in each step,
so there's the gradient.
There's the balls
that he was animating.
And he sort of masks
them out,
and then eventually it
gets to the square grid.
I think this is really cool.
So, I wrote this whole
system in a couple of hours.
What's important to you is
that there's actually a couple
of communities out there that
have something very similar,
shadertoy.com and
the GLSL sandbox.
And if you look this up,
you'll see whole examples
of amazing shaders that
will really blow your mind.
So, to wrap up, shaders are
C-like programs that you write
in GLSL and upload to the GPU.
You get complete control
over the vertex positions
that you pass in and the
color of the pixels you render
to the screen and they're
extremely powerful.
So with that, I'm going to pass
it on to my colleague Brady,
who's going to talk to you about
how to do advanced rendering.
>> Thank you, Dean.
So, so far, we've seen
the Hello World program
of WebGL, the basic triangle.
And yeah, there was a
little bit of effort to get
that basic triangle
on the screen.
But once we'd gone through that
effort, with just a few lines
of shader code, we start
to achieve some pretty
fancy things pretty quickly.
And there's a lot more to be
said about shaders and we'll get
into that more very soon.
But I want to start out focusing
back on that red triangle.
So, what is that triangle?
The triangle is three
points in space.
I can rearrange those three
points and move this triangle
and reshape it however
I'd like to.
Make it really skinny and tall.
Now, I have two triangles,
two slightly different colors.
This is starting to look very
familiar to me for some reason.
Oh, that's why.
OK. So, that's part
of the needle
of the compass in
the Safari logo.
So, let's build up
on this a little bit.
We're going to take the Safari
logo and we're going to bring it
into the third dimension
using WebGL.
So, this is the most
basic example
of a 3D compass you could say.
Except for that
picture on top,
it's basically just a gray disc.
So, this gray disc is
actually very similar
to that red triangle
that we started out with.
And by that I mean, it's nothing
but a whole bunch
of triangles itself.
As Dean has already mentioned,
even the most complex scenes
in WebGL are just
hundreds, thousands,
maybe even millions
of triangles.
So, for each of those triangles,
we have three points of course.
Let's go back to the code that
Dean has already showed us
where we take three points
to make a basic flat triangle
and upload it to the GPU.
And for our disc, our
basic little gray disc,
we're just going to
do more of the same,
a lot more of the same.
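Vertex data this regular doesn't have to come from a tool, though; a disc like this can also be generated in code. Here's a minimal sketch, with the function name, segment count, and triangle-fan layout all illustrative rather than taken from the session's demo:

```javascript
// Sketch: procedurally generate a flat disc as a fan of triangles.
// makeDiscVertices and the segment count are illustrative names/values.
function makeDiscVertices(radius, segments) {
  const verts = [];
  for (let i = 0; i < segments; i++) {
    const a0 = (i / segments) * 2 * Math.PI;
    const a1 = ((i + 1) / segments) * 2 * Math.PI;
    // Each wedge is one triangle: center, edge point i, edge point i+1.
    verts.push(0, 0, 0);
    verts.push(radius * Math.cos(a0), radius * Math.sin(a0), 0);
    verts.push(radius * Math.cos(a1), radius * Math.sin(a1), 0);
  }
  return new Float32Array(verts);
}

const disc = makeDiscVertices(1.0, 64); // 64 triangles, 576 floats
```

The resulting `Float32Array` is exactly the kind of data you'd hand to `gl.bufferData`, just as with the hand-exported coordinates.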
So, how did I get all
these coordinates here?
I'll tell you what I didn't do.
I didn't calculate them by hand.
I didn't type them out by hand.
I used a tool.
As Dean has already touched on,
your toolbox is very important
when programming with WebGL.
Unless you're doing the
most basic of examples,
a handful of triangles, you're
probably going to want to rely
on 3D modeling tools
and preexisting 3D models
to shape your geometry
and its appearance and get them
into your WebGL program.
There's great native tools.
There's great web
tools out there.
Dean touched on a few.
But what they all
have in common is
that they'll export vertex data.
And that is any data you want.
That's what a vertex is.
It's any data you want
for any point you want.
We've already touched on the
most obvious bit of this data,
which is the coordinate:
the X, Y and Z coordinate
of that point in space.
We can also directly include
the color of the point.
But then as we get into more
advanced graphics programming,
we'll want to include the
normal vectors of the point.
This tells WebGL which
direction the point is facing,
which is important for things
such as lighting later on.
And then we can also
include texture coordinates.
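Since a vertex is "any data you want for any point you want," exporters often interleave these attributes in a single buffer. Here is a sketch of computing the byte stride and per-attribute offsets for a hypothetical position/normal/texture-coordinate layout; the component counts are typical, not the demo's actual format:

```javascript
// Sketch: byte layout of one interleaved vertex.
// Float32 values are 4 bytes each; the attribute list is illustrative.
const FLOAT_BYTES = 4;
const layout = [
  { name: "position", components: 3 }, // x, y, z
  { name: "normal",   components: 3 }, // facing direction
  { name: "texCoord", components: 2 }, // u, v
];

let offset = 0;
const offsets = {};
for (const attr of layout) {
  offsets[attr.name] = offset;           // byte offset within one vertex
  offset += attr.components * FLOAT_BYTES;
}
const stride = offset;                    // bytes from one vertex to the next
```

These are the stride and offset values you would later pass to `gl.vertexAttribPointer` for each attribute.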
So, what are textures?
Textures are just flat bitmap
images, an array of pixels
and each pixel has
a color to it.
You know this as
an image, right?
So, here's the Safari
icon; it's just an image.
But what texture coordinates
do is they map those pixels
from the image onto our
three-dimensional shape.
So, we can have the
basic, uncolored 3D shape
and use a flat image to define
what colors it will show.
So, how does this look in code?
Back to our example of uploading
the geometry of our shape
onto the GPU for use
in our shader programs,
here are the first 10
vertices from the disc.
So, for each of these
10 vertices,
we have an X, Y and
a Z coordinate.
And then our tool can also
output the texture coordinates.
These are just X and Y
coordinates into a texture image
to map the pixels
onto our geometry.
Instead of working from
the native pixel count
of the image, it
works from 0 to 1.
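That 0-to-1 convention means a pixel position in the image converts to a texture coordinate by dividing by the image's dimensions. A tiny sketch, where `toUV` is an illustrative helper rather than a WebGL API:

```javascript
// Sketch: texture coordinates run 0..1 regardless of the image's pixel size,
// so a pixel position maps to a UV pair by dividing by width and height.
function toUV(px, py, imageWidth, imageHeight) {
  return [px / imageWidth, py / imageHeight];
}

const uv = toUV(512, 128, 1024, 256); // center of a 1024x256 image
```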
So, once we have that
data from our tool,
we need to get it onto the GPU.
So, you've already seen the code
that Dean showed us about how
to get the position
vertices up onto the GPU.
We're going to do a
little bit more of the same
to get the texture
coordinates to the GPU.
We're going to specify
a new attribute.
Remember, an attribute is a way
to specify to the GPU the inputs
into the shader programs.
And we're going to
say that the input
to the texture coordinate
attribute is our texture
coordinate buffer.
And then, we'll go ahead
and upload the data
from that JavaScript
array onto the GPU.
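The call sequence for that second attribute looks roughly like the following. Since there's no GL context outside a browser, a recording stub stands in for `gl` here just to show the order of operations; the attribute name `aTextureCoord` and the sample data are illustrative:

```javascript
// Sketch of the typical call order for uploading a texture-coordinate
// attribute. The stub records call names so this runs headless; in a real
// page, `gl` comes from canvas.getContext("webgl").
function makeStubGL() {
  const calls = [];
  const record = name => (...args) => { calls.push(name); return 0; };
  return {
    calls,
    ARRAY_BUFFER: 0x8892, STATIC_DRAW: 0x88E4, FLOAT: 0x1406,
    createBuffer: record("createBuffer"),
    bindBuffer: record("bindBuffer"),
    bufferData: record("bufferData"),
    getAttribLocation: record("getAttribLocation"),
    enableVertexAttribArray: record("enableVertexAttribArray"),
    vertexAttribPointer: record("vertexAttribPointer"),
  };
}

const gl = makeStubGL();
const program = {}; // stands in for the linked shader program
const texCoords = new Float32Array([0, 0, 1, 0, 1, 1]);

const texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);       // work on this buffer
gl.bufferData(gl.ARRAY_BUFFER, texCoords, gl.STATIC_DRAW); // upload to GPU
const loc = gl.getAttribLocation(program, "aTextureCoord");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0); // 2 floats per vertex
```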
Now, back to our Vertex
Shader source code.
This is the most basic
Vertex Shader example
that Dean showed
us, where we have
that position attribute
as an input.
We'll just go ahead and add the
texture coordinate attribute
as an input as well.
And now, it's available
to the Vertex Shader.
One of the examples Dean showed
in the demo had what's
called a varying variable
in the shader program.
He didn't touch on what that
is, so I'll tell you now.
A varying variable is
a quick and easy way
for the two shader
programs to share data.
So, by declaring the
vTextureCoord variable,
we can pass data from the Vertex
Shader to the Fragment Shader.
And then, since we've already
pre-calculated what the texture
coordinates are, we don't need
to transform them in any way.
We're just going to pass them on
directly to our Fragment Shader.
So, over in the Fragment
Shader source code,
you'll make a similar change.
We'll declare that texture
coordinate varying
and now it's available
in the Fragment Shader.
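Putting the two stages together, a hypothetical minimal shader pair might look like this. The `vTextureCoord` varying matches the session's naming; the other identifiers are illustrative, and the fragment stage here just visualizes the interpolated coordinates rather than sampling a texture:

```javascript
// Sketch: a varying declared with the same name and type in both shaders
// carries data from the vertex stage to the fragment stage.
const vertexShaderSource = `
  attribute vec4 aPosition;
  attribute vec2 aTextureCoord;
  varying vec2 vTextureCoord;
  void main() {
    gl_Position = aPosition;
    vTextureCoord = aTextureCoord; // pass through, no transformation needed
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  varying vec2 vTextureCoord;
  void main() {
    // Show the interpolated coordinates as red/green, handy for debugging.
    gl_FragColor = vec4(vTextureCoord, 0.0, 1.0);
  }
`;
```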
So, this is the texture
coordinates.
We've gotten them from our tool.
We've gotten the
JavaScript array for them.
We've uploaded them to the GPU.
Now, the coordinates
are available
in that Fragment Shader program.
Now, we need to worry
about the texture itself.
So, the way WebGL gets the
pixel data from an image
that is your texture and uses it
in the shader programs
is by using a sampler.
So, back on our JavaScript where
we're configuring our shaders,
we'll just declare
a sampler variable.
This is a uniform variable,
as Dean already mentioned:
it's a global variable that
JavaScript can assign to,
to pass some data into
the shader programs.
So, once it's been declared
in JavaScript, we can go back
to our Fragment Shader and
declare it there in GLSL.
The type here is sampler2D,
that's one of the few types
in the GL language that
operate on textures.
And once we have that
sampler, we'll change
that straight red color where
we're saying every pixel is red.
And now, we'll use the sampler
with this quick function call.
What this function
call does is it says,
for the texture source
represented in the sampler,
I want the color of the pixel
at this texture coordinate.
And then we assign
it to gl_FragColor
and that's what's going
to show up in the scene.
So, Texture Source.
What is a texture source?
In OpenGL, it means one thing.
Here in WebGL, we're working
with web technologies.
There's a few different
options for your texture source.
The most obvious is
the &lt;img&gt; element.
If you have an image in your
HTML page, in the markup,
and your page is
finished loading,
you can use that image
element as a texture source.
You can also create an
image element dynamically.
And as long as you've
waited for it to load,
those pixels are ready to
be uploaded to the GPU.
You can also grab data
from a server directly
using XMLHttpRequest.
XMLHttpRequest has the
ability to grab the raw bytes
of the response, and those
bytes can be uploaded
directly as the texture's
pixel data.
Then there's the &lt;video&gt; element.
The video element is a great way
to display video in your webpage
without using any plug-ins in
a native web technology manner
that interacts with all the
other web technologies.
But what a video really is,
is just a sequence of images.
So, if you use a video
element as your texture source
when you're drawing a
frame of your scene,
it'll grab the freeze frame
of whatever is being shown
in the video element
at that point in time
and that freeze frame
will be used
as the image for the texture.
Last but definitely not least,
there are some pretty cool possibilities
with the &lt;canvas&gt; element
being used as a texture source.
You can draw whatever you
like into a canvas
element: an image, text.
You can use the canvas 2D
drawing APIs to draw a 2D scene.
You can also use the WebGL API
to draw a three-dimensional
scene into a canvas,
and then use that canvas
as a texture source
for a different WebGL scene.
This way you can render
one 3D scene to be used
in another 3D scene for a
movie screen or a billboard
or television or much
more creative ideas.
But how this looks in code,
we're just going to stick
to the basic image element.
Here's an image element
I have in my HTML markup
and it's pointing to an image
that represents the Safari logo.
Now, in JavaScript, first,
we ask the GL context
to create a texture and then,
similar to what we've
done a few times,
we do some binding
voodoo to specify
which texture we're working on.
TEXTURE0: this constant
might seem a little weird.
The story behind TEXTURE0
is that each program,
each set of shader programs
can access up to 32 textures
and there's a constant
for texture 0, 1,
2, all the way up to 31.
We're just using
1 in this example,
so we'll stick with the first.
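Because those 32 constants are consecutive GLenum values, unit i can also be computed as `gl.TEXTURE0 + i` rather than naming each constant. A small sketch, where 0x84C0 is the standard GLenum value of TEXTURE0 and `textureUnit` is an illustrative helper:

```javascript
// Sketch: the texture-unit constants TEXTURE0..TEXTURE31 are consecutive,
// so unit i is just TEXTURE0 + i. 0x84C0 is TEXTURE0's standard value.
const TEXTURE0 = 0x84C0;
function textureUnit(i) {
  if (i < 0 || i > 31) throw new Error("constants exist only for units 0..31");
  return TEXTURE0 + i;
}
```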
Then we get our texture source.
Using this basic DOM API,
we grabbed a reference
to the image element.
Now, this line of code is
where the magic happens.
In this line of code, we're
taking the raw pixel data,
the RGBA bytes, 8
bits per component,
and we're uploading it
to the GPU to be used
in our shader programs.
And the key in this line
is the texture source,
and that's the image
element you've grabbed,
and this is where you might put
the XMLHttpRequest data, the &lt;video&gt;
or the &lt;canvas&gt; element
as a texture source,
if that's what you're doing.
And then, we're going
to go ahead and interact
with that uniform variable that
we created earlier, the sampler,
and now we actually
need to set its value.
And the value we're
setting here is zero
because we're working
on TEXTURE0.
Behind the scenes, WebGL
translates that into an object
that says, "I'm going to
be sampling pixel data
from texture image zero."
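The texture-setup calls from these slides, in order, look roughly like this. A recording stub again stands in for `gl` so the sketch runs outside a browser; the uniform name `uSampler` and the image placeholder are illustrative:

```javascript
// Sketch of the texture-setup call order: create, select unit 0, bind,
// upload the image's pixels, then point the sampler uniform at unit 0.
function makeStubGL() {
  const calls = [];
  const record = name => (...args) => { calls.push(name); return 0; };
  return {
    calls,
    TEXTURE_2D: 0x0DE1, TEXTURE0: 0x84C0, RGBA: 0x1908, UNSIGNED_BYTE: 0x1401,
    createTexture: record("createTexture"),
    activeTexture: record("activeTexture"),
    bindTexture: record("bindTexture"),
    texImage2D: record("texImage2D"),
    getUniformLocation: record("getUniformLocation"),
    uniform1i: record("uniform1i"),
  };
}

const gl = makeStubGL();
const program = {};  // stands in for the linked shader program
const image = {};    // stands in for the loaded <img> element

const texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);           // work on texture unit 0
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
const samplerLoc = gl.getUniformLocation(program, "uSampler");
gl.uniform1i(samplerLoc, 0);             // the sampler reads from unit 0
```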
Now, we're ready to go, and the
Fragment Shader can put those
pixels onto the screen
from our texture.
So, using textures, we
can map a flat 2D image
onto our 3D geometry.
In this example I've been
talking about so far,
it's a very basic disc
and a very flat image that's
just mapping one to one.
But using our tools, we can have
a much more complicated texture
where different regions
of the texture represent
different parts of the geometry
and then we can have much
more geometry as well.
So, I'd like to show
you a live demo
of what we've talked
about so far.
So, I'm not going to show
you any code in this demo.
I just think it helps to
visualize what I've been talking
about with these
texture coordinates
by building up an example.
So, here's our very basic
three-dimensional disc.
You can see it's got a
wireframe which is nothing
but a whole bunch of triangles
that build up this round shape.
But as I alluded to in the slide
right before I started the demo,
we can have a much more
complex version of this.
We can build up the geometry
to represent the features
of the compass in
three dimensions.
And then, we can go ahead and
apply that complicated texture
onto that geometry, and now,
we have a live 3D
representation of a compass.
Now, to really convince you it's
live, let's start animating.
So, this is a really quick
little routine that's just
animating a camera
around the compass,
following some sine waves
over time, just to kind
of give an ooh-aah view of it.
To further convince you that
this is a live 3D model,
I can show you that parts of it
are independent from another.
So, let's go ahead and
start that needle spinning.
So, all that data was generated
using a tool. Its output,
the position
coordinate information,
the texture coordinate
information,
also included
a whole bunch
of other vertex information
that we've uploaded
to our shader programs and can
use to show this compass.
Now, the same code was being
executed both in JavaScript
and on the shaders no
matter which geometry
and vertices I'm
passing into it.
But vertex information does
not need to come from a tool.
We can also procedurally
generate vertex information.
So here, we have a terrain
underneath the compass
that we're generating
in JavaScript.
It's just a few dozen lines
of code to generate this strip
of terrain underneath.
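The demo's own generation code isn't shown, but a procedural height grid can be sketched in a few lines; the sine-based height function and the names here are illustrative stand-ins, not the session's actual terrain code:

```javascript
// Sketch: procedurally generate a grid of terrain heights in JavaScript.
// The sine/cosine height function is illustrative; any function of (x, z)
// works, including noise functions for more natural-looking terrain.
function makeHeightGrid(cols, rows, amplitude) {
  const heights = [];
  for (let z = 0; z < rows; z++) {
    for (let x = 0; x < cols; x++) {
      heights.push(amplitude * Math.sin(x * 0.5) * Math.cos(z * 0.5));
    }
  }
  return heights;
}

const grid = makeHeightGrid(16, 8, 2.0); // 16x8 grid of y-heights
```

Each grid cell then becomes two triangles, giving `(cols - 1) * (rows - 1) * 2` triangles of terrain, uploaded exactly like any tool-exported geometry.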
So, we can move the
compass over the terrain,
the needle is still spinning,
and there are some more advanced
things we can do, too,
that I haven't talked
about yet but will get
into in a little bit
more detail later.
So, we can add some lighting.
So now, we have some lights
animating over the terrain.
You can see how they
affect the entire scene
and the compass itself.
So, in that demo, we showed a
live representation of a few
of the concepts we've
been talking about so far:
outputting complex
geometry from a tool,
outputting texture
information from a tool.
But the code
in the demo really was only a
few dozen lines of JavaScript
and Vertex Shader programming
that we've already gone over.
There's some additional
JavaScript to animate things.
But it barely scraped the
surface of what WebGL can do.
Even with that procedural
terrain generation
and the lighting effects which
I haven't told you any details
about, that's still
barely scraped the surface
of what WebGL can do.
It is immensely powerful.
It is a toolbox unto
itself and trying
to describe everything
would take a lot more
of these sessions.
So, I'm not going to go into
much more detail on any of that,
but I am going to talk about
a different toolbox now.
I'd like to shift gears and touch
on the web platform
a little bit.
The web platform is pretty
mature at this point.
It's been around for a couple
of decades, and WebGL is just one
of the newest star children
among the web platform's tools.
But there's also some very basic
tools that are still there.
HTML is what started it all.
HTML specifies the
content and structure
of the document in your webpage.
And then a little bit
later, we introduced CSS,
which specifies how that
content is presented.
CSS can do simple things like
change the font of some text,
but it can also animate
the transitions of an image
or any element from
different points on the page,
including 3D transforms.
That's already been available
in CSS, a native web technology
that preexisted WebGL.
And then of course,
there is JavaScript.
We've talked a lot
about JavaScript today
because you use JavaScript
to drive WebGL
but JavaScript also has native
DOM bindings to the HTML content
and can transition
styles on the page.
So, using just this bottom
tier of technologies, the HTML,
the CSS, and JavaScript,
we've already been able
to do some pretty cool things.
For example, if I
wanted to go
and create a 3D image gallery,
I wouldn't need to jump
into WebGL just to do that.
The HTML can specify a series
of images and their relation
to each other, the order
in which they appear.
The CSS can define a 3D
presentation of those images
and JavaScript can drive some
CSS animations between them.
Something else that's already
been possible is really advanced
text operations.
Using HTML and CSS and things
like the @font-face rule,
you can add your own
fonts to content,
you can really finely tweak
how the font is rendered,
and it's pretty simple
to do, whereas,
if you try to do font
rendering in WebGL,
you might find it
much more difficult.
Also, in this little
video I just showed,
you can see that HTML
and JavaScript have
built-in event handling
for the mouse pointer and a
whole bunch of built-in controls
and built-in hit testing for
different elements on the page.
These are all built in
and easy to use already.
And then at an even
more basic level,
what HTML does is
lay out content.
It lays out texts and
other elements on a page.
You can see in this
iBook example,
the text flows beautifully
around elements on the page
and that was basically all free
for whoever wrote that content
and put that image
into that document.
And then very importantly,
sticking with the native web
technologies whenever you can
gets you accessibility for free.
So today, we're talking
a lot about tools,
and the point I'd just
like to drive home here is
to use the appropriate
tool whenever you can,
it's an old programming
adage and it's really true
when we take a platform as
mature as the web platform
and introduce something
as powerful as WebGL.
And finally, I have one more
thing we need to talk about
and that's when to draw.
So far, Dean and I
have described a lot
about how you render an
individual frame in your scene,
you set up the geometry
of objects and the colors
and you set up your shaders
and then you make a call
to draw triangles and, boom,
you've rendered a still frame.
Now, each of the demos we showed
you had animation involved.
How did that animation happen?
Well, as you know by now,
JavaScript drives drawing
in WebGL, but JavaScript
is not always running.
Take this beautiful web page
here, it's clean and looks nice,
but it's also very static
and as long as I'm the user
and I'm not touching
the mouse or keyboard
and I'm not interacting
with the page at all
and the page is very static,
it doesn't have timers
or any other things going on,
no JavaScript is executing.
So, how can we render WebGL
if no JavaScript is executing?
But then if I start
moving the mouse over the page,
and selecting things and
dragging and bringing up menus,
now JavaScript is executing
a whole bunch,
responding to all these
events that are happening.
It's executing asynchronously
hundreds of times a second.
So, in one of those little
JavaScript executions,
you could do some drawing, but
you probably shouldn't draw
in every single EventHandler
that's called
because that'd just be crazy.
You'd be trying to draw
hundreds of times a second.
That can't possibly work because
drawing takes a long time
compared to how quickly
these events would normally
be handled.
You can only get bits to the
screen 60 times a second.
So basically, you just
slow down responsiveness.
You'd start chewing
through CPU and battery life
and you wouldn't even
gain anything out of it.
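A common pattern here is to have event handlers merely mark the scene as dirty and let a once-per-frame callback do a single draw. A sketch with illustrative names and a counter standing in for the actual WebGL drawing:

```javascript
// Sketch: coalesce many events into at most one draw per frame.
// Handlers set a flag; the per-frame callback draws once and clears it.
let needsRedraw = false;
let drawCount = 0;

function onMouseMove() {   // may fire hundreds of times a second
  needsRedraw = true;      // record that the scene changed; don't draw here
}

function onFrame() {       // runs once per displayed frame
  if (needsRedraw) {
    drawCount++;           // stand-in for the actual WebGL drawing
    needsRedraw = false;
  }
}

// Simulate 300 mouse events arriving between two frames:
for (let i = 0; i < 300; i++) onMouseMove();
onFrame();
onFrame(); // nothing changed since the last frame, so no second draw
```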
But there are times when
you might want to draw
in direct response to
one of these events.
Imagine if you're rendering
a 3D button using WebGL
and the user clicks on it.
You might immediately
want to redraw your scene
to update the state there.
That's great; that makes sense.
If you have a complex scene
that is animating a lot
of geometry though, you probably
want a smooth animation.
You probably are going for that
60 frames per second animation.
Now to get that, I can tell
you one steadfast rule:
please don't use timers!
JavaScript timers are a way
to execute a chunk of code
at some point in the future
that's based on a time delay.
This, it turns out,
is not appropriate
for rendering animations.
One example where
it's inappropriate is
that the system might be under
load and you might not be able
to keep up with 60
frames per second.
So, if you set a timer to
run 60 frames per second,
not knowing the system is under
load, you're going to be trying
to draw more often than your
drawing can be presented
onto the screen.
This is just wasteful.
It's going to waste CPU, heat
up the user's mobile device,
burn through the battery.
So, what can we use instead?
There's an API specifically
for drawing
that you should use
instead of timers
and it's called
requestAnimationFrame().
Much like a timer,
you pass a callback
to request animation frame.
So that's the first
thing you do to use it.
Now, when is your
callback called?
Your callback is invoked when
WebKit, whether in Safari or another
application that embeds WebKit, knows
that it's time to draw.
So, if the system load is light
and your drawing is
simple enough, you can keep
up that 60 frames a second,
so your callback will
be called 60 times a second.
If the system is under a
little bit of a heavier load
and you can't keep up with that,
it'll call
your callback less often.
If your web content is in
a background tab in Safari,
for example, or the canvas
your WebGL is painting
into is offscreen, your
callback might be called
much less often or not at
all because WebKit knows
that drawing a scene that
can't be seen is not important.
So, here, we have a
drawingCallback function
and we set it up to be called
by calling requestAnimationFrame
with the drawingCallback.
Inside our callback,
we do some drawing.
This can be updating
physics based on the amount
of time that's passed,
responding to queued
up user events that we've logged
as the user was moving the mouse
around and pressing
keys and such,
and then we can draw
the individual elements
for our scene: the
compass, other entities,
the terrain in the background.
And then when we're
done drawing,
we request the next callback.
We're telling WebKit, "Hey,
we finished drawing one frame.
Now it's time for me to be told
when to draw the next frame."
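The loop just described can be sketched like this. A stub scheduler replaces the browser's requestAnimationFrame so the sketch runs anywhere; in a real page you'd use the browser's function, which passes real vsync-aligned timestamps, and the delta time drives physics updates:

```javascript
// Stub scheduler so the sketch runs outside a browser; in a real page,
// window.requestAnimationFrame provides this and supplies the timestamps.
const pending = [];
function requestAnimationFrame(cb) { pending.push(cb); }

let lastTime = null;
let frames = 0;
let elapsed = 0;

function drawingCallback(now) {
  // Seconds since the previous frame, for physics and animation updates.
  const dt = lastTime === null ? 0 : (now - lastTime) / 1000;
  lastTime = now;
  elapsed += dt;
  frames++; // stand-in for drawing the compass, entities, and terrain
  // Done drawing this frame; ask to be told when to draw the next one.
  requestAnimationFrame(drawingCallback);
}

requestAnimationFrame(drawingCallback);

// Drive the stub with three fake vsync timestamps, about 16.7 ms apart:
for (const t of [0, 16.7, 33.4]) pending.shift()(t);
```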
And that's it.
So, that's all we
have to say about
these nitty-gritty
topics, the code
and how things fit together
with the web platform,
but I want to show
you one final demo.
I'll call it the
requestAnimationFrame() demo
because this demo certainly does
use requestAnimationFrame().
But it also uses
a whole lot more.
So, our friends at
Epic Games were happy
to let us use this demo
from the Unreal Engine,
and this is just a really cool
little temple thing we have.
Let me go ahead and
take it full screen.
So, this is rendering in Safari.
This is executing JavaScript.
This is executing Fragment
Shaders and Vertex Shaders,
and what we're seeing
is just amazing.
So, there's a lot going on here.
We have light, reflection.
A lot of these surfaces
are really interesting:
marble and glass.
We have fire casting
reflections and light.
As we move around, you can
see the background scene being
reflected off the shiny walls.
Let's climb these
stairs over here.
So, as I enter this
hallway-let me go back
and forth just a few times,
I love this-you can see this
orange reflection on the wall.
So, I'm wondering where
that orange is coming from.
Something also interesting
is you see this room
over here is a lot brighter,
the engine is doing HDR
for dynamic lighting
effects to great effect.
So, we can see that these
fires on these podiums here,
they are casting
shadows into the hallway.
Here's more examples
of the HDR contrast.
I mean, there's a bright room,
so it's dark off
in the distance.
This is millions of
triangles, millions of vertices.
It was generated with some
pretty advanced tools,
but then the actual code
that drives it isn't nearly
as advanced as the
data that's coming in.
It's just relying on the
power of GL and the power
of the web platform to do
previously impossible things,
using the tools of
the web platform.
And to wrap us up, I'd like to
invite my colleague Dean back
on stage.
[ Applause ]
>> Thanks, Brady.
That's pretty awesome and,
like I said at the start,
WebGL is insanely fun
technology to play with.
So, while you might
not get quite
to the Unreal Engine straight
away, you can certainly play
with stuff right away and
get some amazing results.
Let's wrap up.
So, WebGL provides rich, fast,
powerful graphics
inside the web browser.
It's available in Safari, on
both OS X Yosemite and iOS 8.
And it's also available in the
modern WebKit API, WKWebView,
if you're a developer.
With that, I want to
direct you
to more information.
There's an email address you
can use to contact Apple.
There's a few websites
and, of course,
WebKit is an open source
project so you can follow along
with that development
on webkit.org.
There's some related
sessions: the one yesterday
on the modern WebKit API,
which is definitely
worth checking out,
and we've got one tomorrow
on the Web Inspector
and Modern JavaScript which, of
course, is important to WebGL.
And we're looking forward
to seeing whatever you do.
Have a great rest
of the conference.
[ Applause ]