WWDC2003 Session 212
Transcript
Kind: captions
Language: en
Good afternoon, everyone. I'm the graphics and imaging evangelist — you've probably seen enough of me by now — but welcome to session 212, Cutting-Edge OpenGL Techniques. We saved this as the last content session in the graphics and imaging track this year because it's really an interesting culmination of the GPU theme running through our content at this year's conference. In this session we're inviting a hardware partner, ATI, to show us the latest techniques they use in terms of programmability and the various special effects they can accomplish with their Radeon products — to really let you know what the outer boundaries of visual effects, the current state of the art, look like. We've had sessions on programmability, from vertex programming to fragment programming, and those were an entrée into this whole topic. The interesting thing is that we're fortunate to gain all this programmability and fantastic graphical power through the efforts of our hardware partners like ATI, who are truly out there creating the silicon that makes these incredible digital effects possible. So it's my pleasure to invite Alex Vlachos to the stage to give the presentation.
Thank you. All right, good afternoon — how's everyone doing? Excellent. All right, so where are we... I'm already at that slide. My name is Alex Vlachos, I'm from ATI, part of the 3D Application Research Group. My primary role at ATI is lead programmer and, in general, team lead of the demo team, so we're responsible for making all the demos for the latest Radeon cards — the 9700 and beyond — and in the past. On my left is Rhys; he's part of our Mac development team, and he's responsible for the Mac-specific code in our demo engine. We're going to be showing a few different demos today running on OS X. Basically, our demo engine, like some game engines, is cross-platform and cross-API, so our demos run on three different combinations: on the Mac through OS X with OpenGL, on the PC through OpenGL, and on the PC through DirectX. All our demos run on all three platforms consistently, which is interesting.
So the things we're talking about today are these few topics. The first thing I'll talk about is the NormalMapper. In the Radeon 9700 car paint demo we used what we call a normal map, and we have this normal map tool, which will be available on the Mac soon — sometime in the next week or two. Then I'm going to talk about a few video effects and post-processing effects. As graphics hardware gets more and more interesting and the APIs are there, you can post-process your scene and do some really interesting things, so we're going to show some post-processing effects on live video — we have a camera set up, the nice new camera that was handed out. The last thing we'll talk about is the shadow techniques we used in the Animusic pipe dream demo. How many people have seen the pipe dream demo? OK. How many people have seen the car paint demo? A few, OK. So we're going to talk about how we did the shadow techniques in there — except we didn't brute-force stencil shadow volumes; we did some more interesting things. So we'll talk about those three things.
All right — the NormalMapper tool. The NormalMapper tool is basically a tool to generate high-res normal maps; it's a way to make your low-res models look really high-res. So let's jump into the demo now — can we go to the demo machine? OK, thanks. So this is the car paint demo — it's a nice car. What you're looking at is a model that looks very high-res, right? When in essence it's really not that high-res. Jump to number two. This split screen shows the before and after: the model on the left is what you get if we just use the vertex normals of our model to environment-map and light this car; on the right, instead of using the vertex normals, we use our normal map — a bump map. It makes your model look a lot more high-res than it actually is. In essence, this car is a couple thousand polygons — maybe three to five thousand polygons in the model — but it looks like it was generated from a model that's over a million polygons. As you watch the hood of the car sweep by here, you can really see the detail jump in; you'll see all this interesting shape that's actually not there in the geometry. Great. OK, so let's jump back to the slides — actually, one second: turn on wireframe for a second, Rhys. OK, everyone, back to the slides. This is our beta build of the demo engine; we just got this port up and running last week, I believe.
OK, so normal maps. Normally, when you generate a normal map, your artist will author a height map — the one on the upper left there. It's just a grayscale height map: darker pixels mean low height, lighter pixels mean high height. Based on that, you can pass a filter over it and generate a normal map — an RGB normal map, which basically encodes your normal vectors in the RGB channels. That's how we've traditionally done normal maps.
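That traditional height-map-to-normal-map conversion can be sketched roughly like this — a minimal version assuming central differences and the usual [0,1] RGB encoding; the exact filter ATI's pipeline uses isn't specified in the talk:

```python
def height_to_normal(height, x, y, bump_scale=1.0):
    """Derive a tangent-space normal from a grayscale height map at
    texel (x, y) using central differences (one common approach)."""
    dhdx = (height[y][x + 1] - height[y][x - 1]) * 0.5 * bump_scale
    dhdy = (height[y + 1][x] - height[y - 1][x]) * 0.5 * bump_scale
    # The surface normal opposes the height gradient.
    nx, ny, nz = -dhdx, -dhdy, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    nx, ny, nz = nx / length, ny / length, nz / length
    # Encode the [-1, 1] vector into [0, 1] RGB, as normal maps do.
    return (0.5 * nx + 0.5, 0.5 * ny + 0.5, 0.5 * nz + 0.5)

# A flat height map encodes to the "straight up" normal, RGB (0.5, 0.5, 1.0),
# which is why raw normal maps look uniformly bluish.
```
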
Now there's a new method of doing normal maps, which a lot of games are starting to use — games that will be released soon, and there are a few games out there already that use this technique. Essentially, your artist generates two models. There's a low-res model — that's on the left here; this is our car model at a couple thousand polygons — which is what we're going to render at runtime; that's the thing you just saw running. What's on the right here, in the middle, is the high-res car model that has all the detail. That model is about a million polygons — probably over a million, I forget the exact number. What we do is, based on these two models, we generate a normal map — the one on the right there — that encapsulates all the detail from the high-res, million-polygon model, and we can then use that normal map, with all that detail in it, on our low-res model. So by combining the first and last things there at runtime, we get what the car paint demo looks like: a seemingly high-res model that's not really that high-res.
So the NormalMapper tool is basically a command-line utility that will be available for OS X sometime in the next week or so on the developer site. It takes as input the low-res model and the high-res model, and it outputs the texture. Now, a few points about the inputs. The low-res model is the one we're going to render at runtime, so it obviously needs positions, it needs normals, and it needs one set of texture coordinates, which define how the normal map is mapped onto it. The high-res model just needs positions and normals, because we're never going to actually texture it — it's not for real time; it's just for this preprocessing stage. So with those as input, we can output these normal maps.
Here's some background on how it works. We have the low-res model, represented by these two blue lines here — imagine looking edge-on at two polygons meeting up — and this red line is what a high-res model might look like. The idea is that we interpolate the vertex normals of the low-res model — those vectors coming off the blue lines. We interpolate those vectors, treat them as rays, cast the rays out, and intersect the high-res model. Wherever a ray intersects the high-res model, we take the surface normal of the high-res model at that point and encode it into the texture. That's what happens internal to the tool. One thing you'll notice here — again, blue is the low-res model, red is the high-res model — is that the tool supports what we call negative rays: your low-res model doesn't have to be inscribed in your high-res model; they can intersect in different ways. So as you cast a ray out from right here, the point where it intersects is actually behind where the ray originated — and that's fine; that definitely works.
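The inner loop — casting a ray along an interpolated normal against a high-res triangle — might look something like this sketch. I'm using the standard Möller–Trumbore test; the talk doesn't say which intersection routine ATI's tool actually uses. Note the returned distance t is allowed to be negative, matching the "negative rays" behavior described above:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2):
    """Moller-Trumbore ray/triangle test. Returns the distance t along
    the ray, or None on a miss. t may be negative ("negative rays")."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < 1e-12:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, qvec) * inv_det  # note: no "t >= 0" rejection

def encode_normal(n):
    """Pack a unit normal into [0, 1] RGB for storage in the normal map."""
    return tuple(0.5 * c + 0.5 for c in n)
```

At the hit point, the tool would look up the high-res surface normal and store `encode_normal(n)` into the output texel.
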
The other thing we do is supersample these rays so your normal maps look nice and antialiased. We've done hundreds of rays per pixel, but in practice around 36 rays per pixel is roughly good enough to get good results, and the tool runs pretty fast.
This slide is for anyone doing their own normal map development, making their own tool — this is, briefly, how we sped our tool up. If you think about the number of rays and calculations you'd have to do brute force, it's a lot. Think about it for a minute: it doesn't matter how many polygons your low-res model has, but say the texture you're going to output is 2K on a side — two thousand by two thousand texels, roughly. That's about four million individual texels, and for each one of those you have to cast a ray and intersect it against every polygon in the high-res model, which is, say, a million polygons. That's a lot of rays — four million rays, each doing intersection tests against over a million polygons — and it takes a long time. So we've done some optimizations to determine, for each polygon in the low-res model, which polygons in the high-res model it could possibly intersect with. We basically came up with a technique that uses a sort of skewed beam: instead of a beam bounded by three clip planes, we have six clip planes that define a volume. For each edge in a low-res triangle, you pair the edge with each vertex normal and come up with two planes; do that for all three edges and you end up with six planes. Then only the polygons of the high-res model that live inside that volume actually get tested. That gives us a huge performance win — so anyone writing their own tool for this, for whatever reason, that's a good technique.
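A sketch of that six-plane "skewed beam" test — this is my reconstruction of one plausible construction, pairing each edge with each endpoint's vertex normal to form a plane, since the talk doesn't give the exact formulas:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def edge_plane(p0, p1, normal):
    """Plane containing edge p0->p1, swept along a vertex normal.
    Returns (plane_normal, d); inside means dot(n, x) + d <= 0
    (assuming counter-clockwise winding seen from the normal side)."""
    n = cross(sub(p1, p0), normal)
    return n, -dot(n, p0)

def triangle_volume_planes(verts, normals):
    """Six clip planes for one low-res triangle: two per edge, one for
    each endpoint's vertex normal (hence the 'skewed' beam)."""
    planes = []
    for i in range(3):
        p0, p1 = verts[i], verts[(i + 1) % 3]
        planes.append(edge_plane(p0, p1, normals[i]))
        planes.append(edge_plane(p0, p1, normals[(i + 1) % 3]))
    return planes

def point_inside(planes, p):
    return all(dot(n, p) + d <= 1e-9 for n, d in planes)
```

The beam is an unbounded prism along the normals — in both directions, which is consistent with the negative-ray support described earlier. High-res triangles with a vertex inside the volume get ray-tested; everything else is skipped.
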
Now, some developers are using this to generate normal maps for their characters' faces — it's not just for cars; they want to use it for their character models, for faces. The thing is, they don't want their artists to have to model every little bump on the face, every little scar — all the micro detail. They want to use the normal mapper to get the overall nice smooth shape — the nice shape of the face, without having all those polygons at runtime — so the generated normal map captures the smooth-looking geometry, the nice smooth bumps. Then they want to combine that with a height map, the kind artists are used to painting, into which they can put the scars, the little holes, whatever they like — the micro detail, basically. Our tool combines these two things, and the result is a final normal map where the micro detail gets mixed in with the rough detail.
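One simple way to mix a detail normal (derived from the artist's height map) into the tool's base normal — hedged: the talk doesn't specify ATI's exact blend, so this is just the common "perturb and renormalize" approach:

```python
def combine_normals(base, detail):
    """Blend a tangent-space detail normal into a base normal by adding
    the detail's sideways (x/y) perturbation and renormalizing."""
    nx = base[0] + detail[0]
    ny = base[1] + detail[1]
    nz = base[2]  # keep the base's z; the detail only perturbs sideways
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

A flat detail normal leaves the base untouched; any bump in the height map tilts the final normal away from the smooth ray-cast result.
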
So here's a note for artists — any artist using any tool that does this kind of normal mapping has to be aware of this; it's one of the most important points. If you look on the left at our low-res model, the black vectors coming off it are the vertex normals. Since we cast our rays based on the interpolated normals of the low-res model, if no normal gets interpolated across a part of the high-res model, you'll never capture that piece of the high-res geometry. The blue circle up there gets completely missed: that part of the high-res model never gets raycast into, because there's no normal to cast into it with. What you want instead, on the right, is a shared normal up top, as opposed to unshared normals. Shared normals allow the tool to raycast into the entire high-res model and capture all that geometry and all that curvature.
Another note for artists: texture map spacing. Normally, when artists paint textures, it's only for the color of your character — you want one texture for the entire character, and they place the head here and the arms there and so on — and having the pieces be a little too close together isn't that big a deal, because it's just color; a little bleeding isn't that terrible. When it comes to normal maps, though, it's really bad. You have to make sure there's enough space between the pieces in these textures, because if your texels bleed — if the colors of those regions bleed into each other — you're going to change your lighting equation: those are your surface normals you're pulling out of there. It's not just a color, it's a vector that's going to have math applied to it. So essentially the artists have to leave these gaps in there — if you look up here in the upper-left corner, at the back of the car, there's a lot of space.
On the right is the result after we pass a dilation filter over the map: we dilate out — grow out — all the normals in the image into the unused area. This is important because when you fetch a texel from your texture, you're going to get a bilinear fetch, so whether you like it or not, you'll be pulling from outside the area that was actually texture-mapped by the artist. So we have to grow the map out. The filter on the right is our dilation filter: we pass it over the image the tool spat out and grow all those areas in. If you look at the back part of the car here, you can see on the upper left that this big hole gets mostly filled in. We pass the filter over a couple of times, and that helps everything look right in the end.
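The dilation pass could be sketched like this — a single-channel toy version; in the real tool it runs on the three-channel normal map, and "defined" would come from which texels the tool actually wrote:

```python
def dilate(tex, defined):
    """One dilation pass: every undefined texel that borders defined
    texels takes the average of its defined neighbors and becomes
    defined itself, growing the painted region outward by one texel."""
    h, w = len(tex), len(tex[0])
    out = [row[:] for row in tex]
    out_def = [row[:] for row in defined]
    for y in range(h):
        for x in range(w):
            if defined[y][x]:
                continue
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and defined[yy][xx]:
                        total += tex[yy][xx]
                        count += 1
            if count:
                out[y][x] = total / count
                out_def[y][x] = True
    return out, out_def
```

Passing the filter over the image a couple of times, as described above, grows the border by one texel per pass — enough to keep bilinear fetches from pulling in garbage.
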
One last note about the NormalMapper tool: tangent space. The tangent-space calculations you need for bump mapping have to match — however the tool calculates tangent space has to match exactly how your engine calculates it. If they're not exactly the same, it's just going to look wrong; it's going to look like there are errors everywhere. I think I burned a week of my life trying to figure this one out — we literally had one case where a 64-bit float was type-cast to a 32-bit float somewhere, which gave us a little inconsistency between the two implementations, the NormalMapper tool and our actual engine, and we would get all these little artifacts. So the tool ships with source code: you can either use our tangent-space code or drop your own in, because they have to match exactly or it's not going to work — you're going to get errors.
So for the normal mapper, we talked about what the low- and high-res models are used for, how we do the ray casting, some optimizations, and so on. The next section is post-processing effects. We're going to talk about four different effects. One is sepia tone — a way to make antique-looking pictures, basically. The next is Sobel edge detection; edge-detection filters are really useful for a lot of things, especially NPR rendering — that's non-photorealistic rendering. Then we're going to show a posterize effect, which is an NPR effect that also uses the edge detection, and then I'll touch for a few minutes on fast real-time FFTs — that's Fourier transforms.
So, sepia tone — can we jump over to the demo machine? OK, so here's some live video of me. I have to apologize for the resolution — we had to use this camera because it's so cool, but it's only 640x480. So there's sepia tone; I can just switch back and forth. Sepia tone is meant to look like a sort of antique picture, and it can be used in games: a lot of times you want to do flashback scenes or whatnot, and just going grayscale is kind of boring — this is a sepia way of doing it instead of gray scale. So let's jump back to the slide deck.
OK, so here's the thing about sepia tone. In any literature you read about sepia tone, they always talk about this magical lookup table: the idea is to take your image, make it grayscale, and then, based on that, map it into a different color space. Everyone talks about this lookup table that exists somewhere — and it's really hard to find. We've searched high and low and we can't find it; no one can. It's always just referred to; I'm sure it's in the literature somewhere. But the point is, a lookup table is expensive: you have to store it, you have to fetch into it, and there are memory bandwidth issues. So we found a way to do sepia tone without the need for a lookup table, and what we do is use color-space changes: we do a color-space conversion from RGB into YIQ space and then back into RGB. YIQ is a different color space used in broadcast video. The interesting thing about a color-space conversion is that it's just a matrix multiply: there's a matrix that transforms your RGB value into a YIQ value, and a different matrix that takes you from YIQ back to RGB. So here's the algorithm. The interesting thing about YIQ space is that if you hard-code certain values into the I and Q components, you can get some pretty interesting-looking output. So instead of transforming the whole input RGB color into YIQ space, we calculate just the Y component — the Y component takes the RGB values into account — and we hard-code the I and Q components to 0.2 and 0. Doing that basically gives you sepia: you take that YIQ value, transform it back into RGB space, and you get sepia tone. It's that simple. This is something fun to play with, because if you hard-code different values in there, you'll get different effects — some interesting different looks that you might want to use in your game. And when we look at the math for all this, it turns out to be just a multiply and an add; it's really simple to do.
So here's the code — the fragment shader code. If you look down after the texture-sample instruction, right in the center, right below it are the dot product and the add: those are the two operations we need to generate sepia tone from an RGB value. It's really simple — we do a dot product of the color value with a constant vector that's up top somewhere, and then we add a different constant to that. These slides will be online, so you can review that later.
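The multiply-and-add he describes works out like this in plain code — a CPU sketch using the standard NTSC YIQ matrix coefficients, with the hard-coded I = 0.2, Q = 0 values from the slide:

```python
def sepia(r, g, b):
    """Sepia tone via YIQ: compute only Y from the input, hard-code
    the I and Q chroma components, convert back to RGB, and clamp."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (the Y of YIQ)
    i, q = 0.2, 0.0                          # hard-coded chroma -> sepia
    out = (y + 0.956 * i + 0.621 * q,        # standard YIQ -> RGB rows
           y - 0.272 * i - 0.647 * q,
           y - 1.106 * i + 1.703 * q)
    return tuple(min(max(c, 0.0), 1.0) for c in out)
```

Because I and Q are constants, the whole thing collapses to one dot product (the Y row) plus one constant add per channel — exactly the dot-and-add visible in the fragment shader.
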
The next effect is Sobel edge detection. Can we jump over to the demo machine? OK — do you want to explain the difference between the vertical and the horizontal? Yeah. So, Sobel edge detection: when we flip the whole thing on, the final result is a two-pass way to look at your image and find all the edges. The camera is a little noisy right now, so you get a lot of extra little dots up there, but essentially it passes two different filters over the image: one to find the horizontal edges and one to find the vertical edges. Can we jump to just the horizontal edges? OK — for the horizontal edges, if you look at that thing right behind me, you see just these horizontal lines here at the top and bottom. If we go to the vertical edges, you get the vertical ones — the two left sides — and when you have both of them up there, you get all the edges in your scene. Let's go back to the slides, please.
OK, so the Sobel edge detection filter is really simple: it's just two kernels — your horizontal kernel down here and your vertical one. For the pixel in the middle, when you're calculating its value, you take into account the three pixels above and below it, weight them with 1, 2, -1, -2 appropriately, and combine them, and you end up with your horizontal edge-detection value. Do the same thing with the vertical kernel and you get the vertical value. Here's the output again: in the picture down here on the lower left you can see just the vertical edges — sorry, the horizontal ones are up top — and then they get combined. This is used a lot in NPR-looking effects. And here's the Sobel code — it's pretty straightforward: you do eight texel fetches up front, down here at the bottom, and then, based on those eight fetches — the eight pixels surrounding your main pixel — you take the top and bottom three, and the left and right three, appropriately, and combine them to come up with the value.
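The two kernels and the eight surrounding fetches translate to something like this — a CPU sketch of the same math the shader does, on grayscale values:

```python
SOBEL_X = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))   # responds to vertical edges
SOBEL_Y = ((-1, -2, -1), (0, 0, 0), (1, 2, 1))   # responds to horizontal edges

def sobel_magnitude(img, x, y):
    """Edge strength at (x, y): weight the 3x3 neighborhood by both
    kernels. The center weight is zero in both, hence only eight
    fetches are actually needed."""
    gx = gy = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            p = img[y + dy][x + dx]
            gx += SOBEL_X[dy + 1][dx + 1] * p
            gy += SOBEL_Y[dy + 1][dx + 1] * p
    return (gx * gx + gy * gy) ** 0.5
```

Showing only `gx` or only `gy` gives the separate vertical-edge and horizontal-edge views from the demo; combining them gives all the edges.
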
OK, so the next effect is an NPR posterized style. We're going to use the edge detection on this, and it's sort of inspired by the movie Waking Life. There's an edge-preserving filter called the Kuwahara filter that basically gives you this nice look — this NPR posterized look, which I'll talk about in a second — and what we're going to do is composite our edges on top of that. Can we jump over to the demo? Here we go. All right, so here's the non-posterized output; now we'll turn it on. I have to apologize — it may be hard to see; the lighting conditions aren't ideal. There you go: unposterized, posterized. We can do a couple of different passes — jump around a bit so you can see it... I'll pass on the jumping. There you go: that's posterization in real time. OK, back to the slides. This is the posterized output when we have really good lighting conditions — this is part of our Mac driver team coming out of a meeting; I just made them stand there because we needed an image. So that's what the output looks like.
OK, so the Kuwahara filter. Essentially, for the pixel in the middle of this kernel here — the one we're trying to calculate — here's what we do. We take four quadrants of pixels; each quadrant is a 3-by-3 pixel area. The upper-left quadrant is the pink area, and we have the green area, the orangish one, and the blue one, and they all overlap by one row of pixels. For each of those 3-by-3 quadrants we calculate the mean and the variance — the mean color and the total variance of color in that 3-by-3 quadrant. Once we have those values for each of the four quadrants, we look at them and figure out which of the four has the smallest variance, and whichever one does, we just use the mean color from that quadrant. It's that simple: look at the four quadrants, figure out which has the smallest variance, and pick that one's color. Done — it's that simple.
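The per-pixel selection just described, as a direct brute-force sketch on a grayscale image — the actual demo splits the work into the two passes described next, and runs on color:

```python
def kuwahara(img, x, y):
    """Kuwahara filter at (x, y): of the four overlapping 3x3
    quadrants around the pixel, return the mean of whichever
    has the smallest variance."""
    quadrant_origins = ((-2, -2), (0, -2), (-2, 0), (0, 0))  # upper-left corners
    best_var, best_mean = None, None
    for ox, oy in quadrant_origins:
        vals = [img[y + oy + j][x + ox + i] for j in range(3) for i in range(3)]
        mean = sum(vals) / 9.0
        var = sum((v - mean) ** 2 for v in vals) / 9.0
        if best_var is None or var < best_var:
            best_var, best_mean = var, mean
    return best_mean
```

Picking the lowest-variance quadrant is what preserves edges: at a boundary, the quadrant that stays entirely on one side of the edge wins, so colors never blur across it.
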
So that's the idea. The implementation is a two-pass effect, and the way we speed it up, instead of doing the brute-force calculations, is that we do two passes over the final image. The first pass calculates the mean and variance for each unique 3-by-3 area in the whole image — that's one pass — and we render that into an off-screen texture. Then we do another pass in which we fetch from that texture to get the four quadrants — those four mean-and-variance values — and in our shader we do the comparison and write out the right color. So we get it down to two passes; really simple, and it's all image-space, so it's constant overhead. That's the interesting thing about post-processing effects: there really is just constant overhead. It doesn't matter what you rendered in the scene — whether you rendered one polygon or two million polygons, it's the same cost, because you're doing the same math on a fixed number of pixels. That's great for games: you don't have to worry about the complexity of your game; as long as you have a few frames to spare, you can do this kind of stuff.
So here's the fragment shader code, which shows how we compute the mean and variance — you can review it later; the slides are online — but basically you just go through your full 3-by-3 area of pixels and do the computations. And then we have the variance selection — how we actually fetch those values: dead center here you have the four texture operations, where we fetch four pixels from that off-screen buffer; then we do the comparisons down in the bottom half, determine which one has the lowest variance, and use that one's color for the output. Pretty straightforward.
OK, the last part of this middle section is FFTs — fast Fourier transforms. These are really useful; anyone vaguely familiar with image processing knows they're used a lot, and this is something we can actually do in hardware now. The Radeon 9700 family of products — and other similar products — have full floating-point precision and can render out to 32-bit float textures, so you can do some really interesting things now. Here's a test pattern on the left — a very common test pattern for FFTs — and this is output from one of our in-house demos; we haven't released this one yet, but it's coming soon. It shows the frequency domain and the spatial domain of the FFT.
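For reference, the transform itself — here as a tiny radix-2 Cooley–Tukey FFT in plain Python. The GPU version runs equivalent butterfly stages as render-to-texture passes over float render targets; this is just the math, not ATI's implementation:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of the even-indexed samples
    odd = fft(x[1::2])    # transform of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

An impulse transforms to a flat spectrum, and a constant signal to a single DC spike — the classic sanity checks for any FFT implementation, hardware or not.
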
This is something we're working on right now at ATI — starting to apply it to do some image-processing effects; it's still in the works. OK, so we talked about those four things: sepia tone, edge detection, posterization, and FFTs. That brings us to the final section, which is the shadow techniques we used in the pipe dream Animusic demo. How many people have seen this demo already? A few of you — OK. Why don't we jump over to the demo machine, the one with audio, and get the audio hooked up for this. OK, great.
So this pipe dream demo — this is something we first saw at the SIGGRAPH Electronic Theater in 2001, I think. Yeah, 2001. We were sitting there in the audience watching it and said: we have to do this in real time as soon as we can. It turns out that on the next chip we were doing — we went back to the office, thought about it, and realized, you know what, I think we can do this demo in real time. So here it is running on OS X. This is one of the launch demos — the main launch demo, in fact — for the 9700 product family. Let's run it for a minute in case you haven't seen it. This is the beta build of our demo engine, but it's pretty much feature-complete at this point.
[Music]
So if you notice, all the shadows are dynamic shadows — there are shadows globally in the scene. We wanted to implement this as one of our main launch demos, and the first thing we asked ourselves was how we could do the shadows in this scene, because the geometric complexity is really high. On average we render about 400,000 polygons per frame in this demo, and it peaks around 550,000 polygons per frame, so there's quite a bit of geometry in here — right now we're at about 400,000. So we were trying to figure out how to do the shadows. As people writing graphics demos, we want to keep up with what games will look like a year down the road, as developers start to take advantage of these latest features. We looked at a few games in development out there and said: let's go with full-scene stencil shadow volumes. How many people are familiar with stencil shadow volumes? OK. Basically, it's a technique that lets you render hard shadows globally. In the past, developers have used shadow textures — light maps — which sort of bake in your lighting globally, but that's soft; it's not very exact, and it's not hard shadows. Stencil volumes are a way to do dynamic shadows — dynamic objects that cast shadows that are nice and hard and crisp — and that's how games are going to look over the next couple of years, so we wanted to go with that. So we got the scene up and running, turned on stencil shadow volumes globally, and our performance was somewhere around two frames a second. We realized we can't do that — there's just way too much geometry. The overdraw on any given pixel, on average, was near a hundred, because of all the volumes from all these different pieces of geometry — these things up here are just insane in the amount of overdraw, especially if you look at them from the side. I'll talk a bit more about that in a minute. Rhys, can you pause it, or come out of the flashing mode?
Yeah — and slide over here to the left. OK, there we go; zoom in on this area here. So here's what we do. There are two different kinds of geometry in our engine: what we call static geometry, and dynamic geometry. This here is dynamic geometry — the geometry in our world that actually moves, that's animated in some way — and these are the only pieces of geometry that cast shadows dynamically. This is our static geometry: all the geometry that never changes. It's baked in, it's not moving, there's nothing animated about it — so you can do something different with it for shadows. We have two different shadow techniques that we combine to get our overall shadows. If we zoom in here, Rhys, on the left — look at that shadow area right up here. We use a technique I'm calling shadow cutting: we actually cut the shadows directly into the scene. If you turn on wireframe here — these are static shadows; this is geometry: we actually cut the shadows directly into the scene, right here. The mesh traces the topology of the lighting. And no, the artists don't do this manually — they would quit if I made them do that — we generate this stuff automatically; we do the shadow cutting, which I'll explain now. Can we jump back to the slides, please?
Thanks. Okay, so static shadows — like I said, it's a great opportunity to not do stencil shadow volumes globally, because their shadow volumes don't change, they don't ever change, and a lot of those shadows will never be cast onto dynamic objects, so you don't need their shadow volumes. For a lot of those things you can optimize this out. So here's where we just show the before and after. This is what you see, which looks like shadow volumes — which I'll get to in a minute — but instead we're actually cutting the shadows straight into our scene geometry. And the advantage is, like I said, it looks just like stencil shadow volumes — this is what some very well-known soon-to-be-released games are going to be doing, global stencil shadow volumes — so it's going to look like hard shadows everywhere, which is what we wanted. But we obviously have to cheat a lot; we're not going to do full shadow volumes, we need to run at more than two frames a second. So the interesting thing is that since we separate out our geometry, you're probably thinking, isn't that more expensive? You have more geometry now that you've diced your geometry up — isn't that slower to render? The answer turns out to be no, because compared to shadow volumes, where you're rendering these huge volumes to your back buffer with a lot of overdraw, we just have a few more vertices to process, which is a lot less than the shadow volume verts. And those polygons that we know are in shadow we can actually draw with a simpler shader, because we know that none of our dominant lights ever hit those polygons, so we don't have to do all the math that's required for the polygons that are permanently in shadow. So here's how we do it — we do it with shadow
cutting, and it requires beams. So here are some basics on what a beam is, for those of you who aren't familiar. A beam is basically this: a beam is sort of a pyramid-shaped volume, in a sense, with three sides plus a cap. You have a light position — that yellow thing up here, that's the light source — and this polygon is the sort of horizontal edge at the bottom; think of the polygon as actually popping out of the screen, right out of the screen here. The volume is built from the position of the light, and you calculate four total planes: three of the four planes are calculated from the light position and the edges of the polygon — one plane through the light point and each edge — and the fourth plane is the actual plane of the polygon itself. Depending on which way you look at that polygon's plane, you can have a near beam, which is basically anything in that triangular volume in between the light and the polygon, or a far beam, which is anything that falls beyond that polygon. So the
shadow cutting algorithm is this — we do this brute force. I can't even tell you how many different methods and algorithms I tried to do shadow cutting, and all of them — I mean, everything in the literature has a lot of different issues; they all try to optimize certain things. When it comes down to it, doing a brute-force approach is the right thing to do, and you can optimize it so it runs pretty fast. For this scene, globally, this algorithm takes about 20 minutes to cut all those shadows into the entire scene, and that's a pretty complex scene, so that time is not too bad. So basically, here's what we do: for every polygon in our scene, one at a time, we're going to dice it up — we're going to take all the other polygons in the scene and cut the shadows into it based on that geometry, one at a time. So we'll call that polygon, polygon A. For each polygon A in our scene, we're going to find all the polygons that live between that polygon and the light source — everything in polygon A's near beam has to cut into that polygon and dice it up for shadows. So for each polygon A we have a set of polygons B, and for each one of these polygons B we take its far beam. So in this case — if this is our light source here, and we're trying to cast shadows onto this table surface, this microphone here has to cast shadows onto it. For a given polygon B here, we're going to beam it onto polygon A, which is the table — I take its far beam and cut into this table with those three planes, which is what's happening here: polygon B's far beam — the blue one, the microphone polygon — cuts down into the table, and those planes end up cutting it into three different polygons. What happens is, on the right here, you can then tag each resulting polygon as in or out of light: if it falls in some polygon B's far beam, we tag it as in shadow; otherwise we just leave it alone. And it's that simple — you just keep doing that over and over and over, with an optimization step that optimizes your mesh every step of the way, and you end up with, you know, what we just saw — the final shadows. So
here's the interesting thing: a lot of the literature, a lot of the different ways to do shadow cutting, in a sense can't solve these cases. Typically, overlapping polygons are the root of all evil for anyone doing research in this topic — it's polygons that overlap each other, three of them: there's no order, you can't possibly sort these three polygons and figure out which one's on top, because they're each on top of each other, and so this breaks a lot of different methods. This brute-force method of just cutting everything up works fine; you end up with what's on the right there — exactly what you want to see, everything cut up perfectly. And there's the result. So those are our static shadows — we cut them into the scene, nice and simple.
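The brute-force pass he describes can be sketched roughly like this — a toy Python version, where `far_beam_planes`, `in_far_beam`, and `tag_shadowed` are made-up names, and where the real preprocessing step would clip each scene polygon against the beam planes rather than just testing points:

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def far_beam_planes(light, tri):
    """Four planes of a triangle's far beam: three through the light and
    each edge, plus the triangle's own plane facing away from the light.
    A point is in the beam when it is on the non-negative side of all
    four planes.  (Flipping the cap toward the light instead would give
    the near beam.)"""
    centroid = tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))
    # A point guaranteed to lie inside the far beam: the centroid pushed
    # away from the light, past the occluder; used to orient the planes.
    probe = tuple(c + (c - l) * 10.0 for c, l in zip(centroid, light))
    planes = []
    for i in range(3):                       # side planes through each edge
        v0, v1 = tri[i], tri[(i + 1) % 3]
        n = cross(sub(v0, light), sub(v1, light))
        if dot(n, sub(probe, light)) < 0:
            n = tuple(-c for c in n)
        planes.append((n, light))
    n = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))   # cap plane
    if dot(n, sub(probe, centroid)) < 0:
        n = tuple(-c for c in n)
    planes.append((n, centroid))
    return planes

def in_far_beam(planes, p):
    return all(dot(n, sub(p, q)) >= 0 for n, q in planes)

def tag_shadowed(light, occluders, points):
    """Brute force: a point is in shadow if any occluder's far beam
    contains it (real shadow cutting clips polygons instead)."""
    return [any(in_far_beam(far_beam_planes(light, t), p)
                for t in occluders) for p in points]
```

With a light overhead and a small triangle between it and the floor, points directly under the triangle get tagged as shadowed while points off to the side don't; the real step additionally splits each floor polygon along those planes, which is where the 20-minute brute-force cost goes.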
Now we have to deal with our dynamic shadows, and in a second we're going to talk about how we combine both of them so they look right, so they look like they belong together. So let's talk about the dynamic shadows. Dynamic shadows are basically done with shadow volumes. Can we jump into the demo, please? Okay — can you pull the camera back a little bit? Okay, that's good, and now turn on just the objects. There — so here are our dynamic objects, and if you turn on wireframe, these blue wireframes are the shadow volumes. A shadow volume is basically this: given a piece of geometry, we generate a volume that represents what's in shadow. So anything that falls into the cymbal's volume, for example — this volume generated here — is in shadow; we tag it as in shadow. We can do tricks with how we render these volumes into the stencil buffer to tag the pixels properly, for saying what's in shadow and what's not in shadow. So that's basically what a shadow volume is. If you turn wireframe off and go back to the full scene and object geometry — right there — if we zoom in up here over the drum machine — here we go — and just page down briefly, just to get those other instruments up there — that's fine, there. You can see those volumes intersecting with the wall and generating our shadows, so you have a lot of shadows here. Okay, let's look back at the
slides, please. Okay — so here's a wireframe shot of those instruments going over the drum machine there, and here are their shadow volumes, and you can see where their shadow volumes intersect the wall — we tag those pixels as in shadow. So here are the shadow volume basics, for those of you who aren't that familiar with them. Given a light position and, say, a sphere, what you do is you figure out the silhouette edges from the light's point of view. What you want to do is basically break that sphere in half: keep anything that gets hit by the light in place, and take the bottom half of the sphere and shoot it down, far away from the light and at the angle of the light as well. So what you end up with is this sphere turns into this sort of weird-looking pill — this full pill shape — and that is your shadow volume, a closed volume, and you can use that to render into the stencil buffer, to mark each pixel in or out of shadow. Here is just a brief look at one way to do this in
hardware. So there's this method — how many people here are familiar with this method of doing shadow volumes with degenerate quads along the edges of primitives? Not many, okay. So the idea is basically this: you have this model, this sphere. For each edge in your model, what you want to do is stick a degenerate quad — an infinitely thin quad — there: just two triangles that connect that edge. So you take your sphere, basically separate out all the vertices, and add two polygons along each edge that together are basically a quad. The idea is, when you take this sphere and you split it at the seam and stretch it apart, those quads along those silhouette edges get stretched. So if we go back one slide — these over here represent the quads; each one of those vertical areas represents a quad that was originally stuck on an edge, and when we split it apart, it's like putting Elmer's glue there: you just pull it apart, and it just fills in the gap, it just stretches open. So the idea is this: we stick these infinitely thin quads along these edges. So here are two polygons in our given model, and their shared edge there — we stuck these infinitely thin polygons right in the middle there. They're actually right on top of each other; I'm just spreading them apart so you
can see and visualize it. And what we do is we take the face normals of each polygon: for polygon A, we take its face normal and embed it into the three vertices of polygon A, and we take polygon B's face normal and embed that face normal into those three verts. And the idea is this: in your vertex shader, as it's running, you want to basically be able to shoot each vertex back, one at a time — you either keep it in place if it's front-facing to the light, or you take the vertex and shoot it back to generate this volume. Since you have your face normals embedded in your vertices, if you shoot back the lower-left vertex of polygon A, you're going to end up shooting back the other two as well, because it's the same normal, the same face you're doing the math with — you basically do a dot product with the light vector and that face normal. So if one shoots back, they all shoot back, and in the end each primitive either stays in place or gets moved back, one vertex at a time. It's sort of a brute-force approach to doing shadow volumes fully in the hardware — there's no CPU work happening here; it all happens in your vertex shader. So we render this into the stencil buffer, and once the shadow volumes are in there and you have this in your stencil buffer, you can basically test it to say which pixel is in shadow and which pixel is out of shadow.
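As a rough CPU-side sketch of that per-vertex decision (hypothetical names — the real thing runs in the vertex shader, with the face normal stored as a vertex attribute, and the extrusion distance here is just an assumed value):

```python
EXTRUDE_DIST = 100.0   # how far to shoot back-facing verts (assumed value)

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def process_vertex(pos, face_normal, light):
    """One vertex of the shadow-volume mesh: keep it if its embedded face
    normal faces the light, otherwise shoot it away from the light.
    All three verts of a triangle carry the same face normal, so they
    make the same decision and whole triangles stay or move together."""
    to_light = sub(light, pos)
    if dot(face_normal, to_light) > 0:       # front-facing: stays put
        return pos
    away = sub(pos, light)                   # back-facing: extrude
    length = dot(away, away) ** 0.5
    return tuple(p + a / length * EXTRUDE_DIST for p, a in zip(pos, away))
```

The degenerate quads along each edge are what keep the mesh closed when a front-facing triangle's verts stay put and its back-facing neighbor's verts fly away.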
So that brings us to the question: how do you get those shadows into the color buffer, make them look good, and make them blend in with the shadows that we have baked in? You have these baked-in shadows, and you want to make sure that you don't double-darken pixels — if something's already in shadow from the scene geometry, you don't want it to go into shadow again; it's going to look bad, it's going to look obvious that you're doing something different for dynamic geometry. So we're at the stage right now where we essentially have our full scene drawn into the back buffer with no dynamic shadows — everything is drawn without those shadows — and our stencil buffer has each pixel tagged as in or out of shadow. Now we have to combine the back buffer with our stencil buffer in some way to get them to look right. One thing you could do is just draw a full-screen quad over your scene and, for each pixel that's in shadow, just dim the color by, you know, 0.5 — just dim it in half. You're going to get some really bad artifacts that way; it's not going to look right. So what we do is use destination alpha. How many of you use destination alpha for anything — actually store real values in destination alpha? Perfect. So destination alpha — you can think of this as a whole other channel you can store data in. As you render your scene, as you render each polygon, you're writing to RGB; you can also write something useful to alpha and then use it later in a separate pass. This is a way you can do a lot of interesting things, and we're using destination alpha to do our shadows.
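The per-pixel math he's about to describe boils down to something like this — a toy software sketch with made-up function names; in GL the final pass is a full-screen quad with the blend factors set to zero for source and destination alpha for destination, plus the stencil test:

```python
def shade_pixel(lit_color, dim_factor):
    """Scene pass: write RGB as usual, and store a dim factor in
    destination alpha (1.0 = 'no dominant light here to lose',
    smaller = 'dim this a lot if a dynamic shadow lands here')."""
    return (lit_color, dim_factor)

def shadow_pass(color_buffer, stencil_in_shadow):
    """Full-screen pass: new color = dst_color * dst_alpha, applied only
    to pixels the stencil tagged as in dynamic shadow."""
    out = []
    for (rgb, a), shadowed in zip(color_buffer, stencil_in_shadow):
        out.append(tuple(c * a for c in rgb) if shadowed else rgb)
    return out
```

So a brightly lit wall pixel stored with a dim factor of 0.2 drops to a fifth of its color when a ball's shadow volume covers it, while a ceiling pixel stored with 1.0 is left alone even if a volume passes over it.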
And the idea is this: instead of drawing a quad over your screen to just dim each pixel by some constant value, we want to, as we draw our scene, write out a value to destination alpha that is a dim factor. It basically represents this: if this pixel falls into shadow from something dynamic, how much should I dim this pixel? For pixels that don't have any real lights hitting them, you don't want to dim at all; for pixels that have a really dominant light hitting them, you want a fairly strong value in there to say, I want to dim this pixel a lot. So the way we do it is we write out a value that we can simply multiply into our color — this dim factor — and get the right value out, and the idea is to bring it as close back to ambient lighting as you can. So you basically have this scalar value. Here's what destination alpha looks like — here's a screenshot — can we jump back
into the demo, please? Okay — so if you zoom back and go to — which view is it — go two more times, there we go. So here is what the scene looks like now. You're going to notice it looks counterintuitive: up at the ceiling, where there are no lights — there's no real dominant light hitting it — you have white. What white means is it's just one: if you multiply that pixel by 1, it's not going to change, which is what we want. If that falls into shadow up there, it doesn't matter, because there's no light hitting it already, so there's nothing to mask out — you just multiply by 1 and leave it alone. The pixels here in these dark areas — there you go, these dark areas here — those have a lot of light hitting them, so we want to subtract out — we want to dim those pixels a lot. So when we scale those pixels, we take this mid-gray color — say 0.5, or 0.2 — times the color value, and it dims those pixels. So where any one of those balls has its shadow hitting the wall, you get that nice dark circle — it's because we multiply this value by the current color value, and you get those shadows. And one thing I want to point out — if we go back to the color mode and zoom in dead center, there, to the left — here's one thing about how our engine works. So we want to go here —
zoom in just right on there. So here's how our engine basically works: we have these dominant lights — back up, back up — okay, so here's the starting point. We have one light in this area: if you notice, right here, dead center, we have just one dominant light on, and we have geometry here — this is where the shadow cutting comes in. Now, in a second, we're going to have this other light turn on — there it goes, turned on — and what you're going to see is, these are baked-in shadows here on the wall: you now have two different lights casting shadows in this area. Okay, if you turn on the wireframe quickly — this is going to show you that in our shadow cutting — to back up for a minute — we cut into our geometry the shadows from multiple lights, and we can preserve which polygons fall into one light source or two light sources, or are shadowed by one light or two lights. So if we turn the wireframe off for a minute and go into the dest alpha view — dest alpha, there we go. So if you notice, this is what we write into dest alpha for this area. The interesting thing is, for the pixels right here, in this sort of darker gray, there are no baked-in shadows — this is a pixel being hit by the light full-on. The pixels down here that are a sort of lighter gray are being hit by just one light, because the other light is casting a shadow there. The brighter white stuff — those are pixels that have shadows from both light sources, so there's no light to really subtract there; that's why it's white. If a shadow volume intersects those pixels — go back to the full color view — it's already fully in shadow; there's nothing to shadow there. So when we preprocess in our shadow cutting, we tag each polygon, and we know how many lights it's already being shadowed by, or how many lights it's being hit by, and we write out alpha with the right value. So, for what we end up with — can we
go to the right, over the drum machine on the left — yeah, right there. Let's page down a little bit and get these guys out of here. Okay, good. So what I want you to notice is — and this is the whole reason for doing this dim-factor thing with dest alpha — as these shadows go away from their light source, they dim, and they blend in perfectly. This bar here, this horizontal bar — that shadow is baked in, that's from our shadow cutting — and the idea is that those shadows blend in with that bar fine, and they disappear nice and smoothly; you'd never know. And when it gets out of that light's volume, the shadow disappears — we don't actually draw that shadow volume, because we do culling based on our light sources. So we end up with these nice shadow volumes, these nice shadows, that work perfectly, that just integrate with the scene well. So, can we go back to
the scene well so he can we go back to
the slide ok so the shadow quad so now
that we basically have our desk I'll for
drawn right everything all the contents
of our destination alpha and our RGB
channels are filled with all the right
data we are essential buffer filled with
the right data now we have to basically
draw a plot over our full screen to get
the shadows for the right pixels and so
what we do is this this is our blending
and OpenGL you do your you set your
source color 20 and your desk color to
death alpha so you search your source
factor 20 and your desk that and your
destination factor to death alpha so the
equation you end up with when you plug
this into your your alpha blending
algorithm is you end up with this turns
out to be 0 and you end up with this
desk color x desc alpha my desk alpha is
your dim factor so you just end up with
that with this the equation that you
want and you turn on stencil your
stencil test as well and you only write
to the pixels that are in shadow and
it's that simple and you get those
shadows in the end so to summarize on
the shadows: we talked about the differences between the static and dynamic shadows, went through the shadow cutting algorithm, and covered some of the shadow volume basics — and that's how we do shadows in Animusic. So, I'm looking at
the clock, and I have a few minutes to burn here, so I want to show one other effect. If we could go back to the demo machine and into Animusic — if we press one — there we go. If you watch this, the balls in Animusic are actually motion blurred. When we first got the DVD in our hands from SIGGRAPH, to watch this video, I stepped through every frame and watched everything they were doing in the full video, and the only thing they did motion blur on was the balls, because those are the only things moving fast — everything else is moving slowly. We said, well, how do we do motion blur on balls and make it look realistic? And so we came up with this vertex shader to do what looks like motion blur. So if we press A to pause
the animation — okay, so here's the actual geometry. Okay, that's good — actually, back up just a little bit — okay. So the ball right in the middle there, down at the bottom, is almost a full ball shape — that one's moving pretty slowly. The ones that are moving faster look like hot dogs. So what we do is — hit wireframe one more time — okay, it's going to be hard to see, turn that off, hit this one more time — okay. So the ball shape here looks like a pill. What we basically do is we take our ball, split it at one of those seams, and stretch it, and it looks like it's motion blurred — we have the right shape. Then, when we fetch from our environment maps — because these things are all environment mapped — we lower the mip-level bias, the LOD bias, so when we fetch the texels from our environment map it's blurrier. We also change the opacity value on those pixels: when it's not moving, it's a solid ball; as it becomes more and more stretched, we make it more transparent, so it looks like you can still see through it. Because if you keep it solid, it's going to look like you're launching hot dogs, which isn't good — you want to make it look like you're going to be able to see through it, because with motion blur you can sort of see through anything that's motion blurred. So now you're probably thinking,
this is the coolest technique, but not a lot of games involve launching balls out of machines and instruments. But you do have other projectiles — you have rockets, someone's going to throw a grenade over something, whatever things are going to be moving fast in your scenes, in your games — and you can apply this technique to them. Even with things like a grenade, you can do the same thing as with shadow volumes, where you have these degenerate quads stuck in there and you split it at the appropriate seam based on how it's rotating — you can split it based on its direction vector, its velocity, and based on the velocity you stretch it more or less. So this technique can be used in a lot of different ways. And that's motion blur in Animusic. All right, can we go back to
the slide, please? Okay — so, for more information, here are some references. These will be on the web; these are some books I highly recommend as far as current graphics go, and some links. The normal map tool will be on ATI's developer website sometime in the next few weeks. And here's us — that's me and Raz; Marwan's another guy in our office who worked on some of the video shaders — and if you have any questions, feel free to drop us an email. And that's it.