WWDC2001 Session 409
Transcript
Kind: captions
Language: en
I would like to introduce to the stage Troy Dawson, 3D API engineer for Apple. Hi, I'm with the OpenGL group at Apple, and today I'll introduce four rendering techniques to produce interesting graphics effects on Mac OS X using OpenGL. These four techniques will be cube environment maps, improved texture filtering using anisotropic filtering, DOT3 texture combining to do bump mapping, and finally stencil shadow volumes. First up is cube environment maps. Here we have a single texture. This texture is used to map environment lighting effects onto a rendered object. As you can see, this is a spherical environment map; both the front half and the back half of the environment are encoded in a single texture. We use this texture, combined with OpenGL's texture combining operations and automatic texture coordinate generation, to simply render the environment map onto the rendered object, as you can see here.
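The speaker's slide code isn't in the transcript, so here is a minimal sketch of the texture coordinate generation half of that sphere-map setup; sphereTex is a hypothetical handle for the already-loaded spherical environment texture:

    #include <OpenGL/gl.h>

    /* Sphere mapping: have OpenGL generate s and t from the reflection
       of the view vector about the vertex normal. */
    static void useSphereMap(GLuint sphereTex)   /* sphereTex: hypothetical handle */
    {
        glBindTexture(GL_TEXTURE_2D, sphereTex);
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_2D);
    }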
So that was spherical environment maps.
Next we'll talk about cube texture maps. This is a cube texture map; as you can see, there are multiple faces to the image, and the images are arranged in a cube structure. For use as an environment map, the cube texture map must have the faces arranged to form a cubic panorama that shows the scene as seen from the object. Using cube maps is very similar to using regular two-dimensional textures: instead of GL_TEXTURE_2D, we use the new enum constant GL_TEXTURE_CUBE_MAP. As you can see, setting the texture parameters is very similar to using regular two-dimensional textures. After we have bound our texture object, we need to download the texture images. Downloading the texture images to the texture object is very similar; we have six new constants, and each constant corresponds to a face of the cube map. As you can see here, I just loop through the six constants and download them. These are cube maps, so the width and height must be identical.
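A minimal sketch of that setup, using the core OpenGL 1.3 names (2001-era drivers exposed these with ARB/EXT suffixes); the face data and its ordering are assumptions:

    #include <OpenGL/gl.h>

    /* faces[i]: size x size RGBA pixels for each face, assumed here to be
       in +X, -X, +Y, -Y, +Z, -Z order to match the consecutive constants. */
    static GLuint makeCubeMap(const GLubyte *faces[6], GLsizei size)
    {
        GLuint tex;
        int i;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, tex);   /* new target, not GL_TEXTURE_2D */

        /* Parameters are set just as for a 2D texture. */
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* The six face constants are consecutive, so we can loop over them.
           Width and height must be identical. */
        for (i = 0; i < 6; i++)
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                         size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, faces[i]);
        return tex;
    }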
Rendering with cube maps is much simplified if we use OpenGL's texture coordinate generation facility. For cube maps we'll bind our texture object, set the texgen mode to reflection mapping, and enable cube mapping and the texture coordinate generation. Note that there are three dimensions to the texture coordinate generation here, while spherical mapping only uses two dimensions.
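A minimal sketch of that texgen setup, again using the core 1.3 names; cubeTex is a hypothetical handle:

    #include <OpenGL/gl.h>

    /* Reflection-map texgen needs all three coordinates (s, t, r),
       where sphere mapping only used two. */
    static void useCubeMap(GLuint cubeTex)   /* cubeTex: hypothetical handle */
    {
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
        glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_GEN_R);
        glEnable(GL_TEXTURE_CUBE_MAP);
    }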
Now, a demo.
This is from the NVIDIA site: the NVIDIA bubble demo, modified with local scenery. As you can see, I have a bubble, and this bubble is cube map environment mapped. I can rotate around the scene, and you can see the environment being painted on the surface of the sphere. It works in all directions.
Let me show you the actual texture images. Here, the six texture images are those of the cube map. As you can see, they form a contiguous panorama of the scene as seen from the rendered object. We can compare this panorama to its spherical map. Here you see the corresponding spherical map that was generated from these six images; as you can see, it has less information than the cube map. Roughly 40% of the texture map area represents the front half of the environment, while the next 40% represents the back half of the environment. The black areas are wasted space, which is about 20% of the spherical map. This inferior texture mapping results in poorer rendering quality. Here you can compare the spherical map and the cube map: this is the cube map environment, and this is the spherical map. Cube and spherical: the quality is different, and noticeably better for the cube map.
Additionally, because cube maps are view independent, you can see I have a very nice back view here. But if I were to go back to spherical maps, you can see that there's a singularity at the back of the texture, which results in this rather poor rendering. Thank you. So in summary, we saw that using cube maps for texture environments works better than spherical maps, mainly because they're easier to generate; basically, spherical maps are generated from a cube map, so if we skip that step we can probably do it faster. Additionally, cube map textures are view independent, so there's no singularity at the back.

Next I'll look at anisotropic filtering. You may have noticed in your applications, when you have a texture that's using mipmaps, that when the texture becomes oblique to the viewer it has blurring effects. This is due to OpenGL sampling the mipmap levels a little less sharply than optimal. Happily, there's a feature that will improve this.
OpenGL has a new texture parameter which will allow you to improve this sampling. Here it is. If the driver supports anisotropic texture filtering, you can use this function call to get the maximum supported value. This number will range from 1 up to some number like 8 or 16; the greater the available value, the better the filtering effect available. All you do to use anisotropic filtering is call this texture parameter function and set it to the desired value. The desired value may range from 1 up to the maximum available value.
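A minimal sketch of that query-and-set sequence, assuming the GL_EXT_texture_filter_anisotropic extension; it applies to whatever 2D texture is currently bound:

    #include <OpenGL/gl.h>
    #include <OpenGL/glext.h>
    #include <string.h>

    /* Clamp the desired anisotropy to the driver maximum and apply it to
       the currently bound 2D texture. */
    static void setAnisotropy(GLfloat desired)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        GLfloat maxAniso = 1.0f;

        if (ext && strstr(ext, "GL_EXT_texture_filter_anisotropic")) {
            /* Typically 8 or 16; the larger, the better the filtering. */
            glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
            if (desired > maxAniso)
                desired = maxAniso;
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, desired);
        }
    }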
Now, a demo.
Here I have a test image. Currently I'm using bilinear filtering, so when I tilt the test image down you can see some sparkling effects. OpenGL provides trilinear filtering to treat these sparkling effects, but as you can see, when you use trilinear filtering the texture becomes blurred, which isn't so good. However, if I raise the anisotropic filtering level, you can see that the image becomes sharp even when it's viewed at a big angle. I have another image; here we see the trilinear-filtered version blurs out, while using the anisotropic filter parameter you can see it's clear again.
We saw how using the anisotropic texture filter parameter will improve your texture filtering when textures are viewed obliquely. This does require mipmapped texture loading, and you can set the desired level, but be aware that using anisotropic filtering will give you a performance hit in fill rate, so you may need to reduce the level to keep your performance up.

Next is using the OpenGL texture combiner machinery to produce a per-pixel dot product effect. Here's an overview of the process. The OpenGL texture combiner API allows you to use a set of stages to do certain texture combining operations. As you can see here, we have three inputs into the texture combiners, and two stages. The first stage will operate on the primary color operand and a texture operand; the output of the first stage will be given to the second stage to produce our final rendered image.
Let's look at the actual dot product. The DOT3 texture function operand for the GL texture combiner units will allow the texture units to do a per-pixel dot product operation. This is kind of a hack, in that we have to encode the two vectors of the dot product into both a color and a texture map. As you know, the dot product takes two vectors, and we use the dot product operation here to produce a diffuse lighting effect. The output of the dot product will be a scalar value, but we can duplicate the scalar to make the output be a light map. To calculate the local light vector from the polygon to the light source, we can use this math; it's pretty simple. But the dot product operation requires that the light vector be in local coordinate space, so we must rotate this light vector by the inverse rotation of the polygon. In my code I use quaternions, so it looks something like this, but I don't have time to go over my code.
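Since that slide code isn't in the transcript, here is a hypothetical sketch of the idea: for a unit quaternion, the inverse rotation is just the conjugate, so we rotate the world-space light vector by the conjugate of the polygon's orientation. The Quat type and both helpers are assumptions, not the speaker's code:

    /* Rotate the world-space light vector into the polygon's local frame.
       Quat is a hypothetical unit quaternion (w, x, y, z). */
    typedef struct { float w, x, y, z; } Quat;

    static void quatRotate(const Quat *q, const float v[3], float out[3])
    {
        /* v' = v + 2w(u x v) + 2u x (u x v), with u = (x, y, z) */
        float ux = q->y * v[2] - q->z * v[1];
        float uy = q->z * v[0] - q->x * v[2];
        float uz = q->x * v[1] - q->y * v[0];
        out[0] = v[0] + 2.0f * (q->w * ux + q->y * uz - q->z * uy);
        out[1] = v[1] + 2.0f * (q->w * uy + q->z * ux - q->x * uz);
        out[2] = v[2] + 2.0f * (q->w * uz + q->x * uy - q->y * ux);
    }

    static void lightToLocal(const Quat *polyRot, const float worldLight[3],
                             float localLight[3])
    {
        /* For a unit quaternion the inverse rotation is the conjugate. */
        Quat inv = { polyRot->w, -polyRot->x, -polyRot->y, -polyRot->z };
        quatRotate(&inv, worldLight, localLight);
    }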
After we rotate the local light vector into the local coordinate space of the polygon, we have to encode that light vector into RGB color coordinate space. This is done like this.
Here I'm using unsigned bytes as the color components. As you can probably figure out, the vector components are mapped to the range 0 to 255: a -1 vector component will map to 0, and a +1 vector component will map to 255. Additionally, you do this encoding for all the vertices of your polygon.
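A minimal sketch of that scale-and-bias encoding from [-1, 1] to [0, 255]:

    #include <OpenGL/gl.h>

    /* Scale and bias each [-1, 1] component of a unit vector into a
       [0, 255] unsigned byte color component. */
    static void encodeVector(const float v[3], GLubyte rgb[3])
    {
        int i;
        for (i = 0; i < 3; i++)
            rgb[i] = (GLubyte)((v[i] * 0.5f + 0.5f) * 255.0f);  /* -1 -> 0, +1 -> 255 */
    }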
Here we have the second operand of the first stage: this is the normal map operand. Each pixel of this texture map image encodes a single surface normal. The texture map image is blue because the blue component of the color corresponds to the direction out of the screen. Usually these surface normal map textures are produced during your content creation phase by an artist who has a height map, and you apply a tool to convert the height map to a normal map. So in the first stage, as described previously, we have the first operand, which will be the local light vector; our second operand will be the encoded normal map. Here we set the operation to the dot operation. Additionally, the first stage of the DOT3 texture combiner will allow you to scale the output, but I'll show you the code next. Here's the code that corresponds to the first stage.
We're using the first texture unit for the normal map, so we bind the normal map to the first texture unit. Additionally, we set the texture combiner operation to DOT3. The output of the DOT3 operation can either be a scalar alpha value or it can be RGB; here I want the output to be an RGB value. And finally, I set the two operands of the operation. The result would be the grayscale output you see here for the dot product result.
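A minimal sketch of what that first stage looks like, using the core 1.3 combiner names (ARB/EXT suffixes on 2001-era drivers); normalMap is a hypothetical texture handle:

    #include <OpenGL/gl.h>

    /* Stage one: per-pixel dot product of the primary color (the encoded
       light vector) with the normal map on texture unit 0. */
    static void setupStageOne(GLuint normalMap)   /* normalMap: hypothetical handle */
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, normalMap);
        glEnable(GL_TEXTURE_2D);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);  /* RGB output, not alpha */

        /* Operand 0: the interpolated primary color. */
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

        /* Operand 1: the normal map bound to this unit. */
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    }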
The second stage of the operation simply modulates that output against the base texture. Here's the code that corresponds to the second stage. I'm using the second texture unit for the base image, and again I set the combiner operation, this time to modulate. Then I set the first operand to be the previous output; the second operand will be the base texture.
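A matching sketch of that second stage; baseTexture is a hypothetical handle:

    #include <OpenGL/gl.h>

    /* Stage two: modulate the previous stage's result with the base image
       on texture unit 1. */
    static void setupStageTwo(GLuint baseTexture)   /* baseTexture: hypothetical handle */
    {
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, baseTexture);
        glEnable(GL_TEXTURE_2D);

        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);

        /* Operand 0: the output of the first stage. */
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

        /* Operand 1: the base texture. */
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    }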
Next we'll see the code that actually renders the image. It's a lot of code, so let's look at one vertex. First I issue the local light vector, which is an unsigned byte vector, as the color; then I simply issue the two sets of texture coordinates for the source textures. As you noticed, for each vertex of the polygon I issue a separate light vector in the color. You can calculate these light vectors for each vertex of the polygon and issue them like that, and if the smooth shading mode is enabled, you can use the vertex interpolation mechanism to interpolate these vectors across the polygon. However, there are issues with keeping the vectors normalized, but there are ways to get around that.
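A minimal sketch of issuing one such vertex (the helper and its parameters are hypothetical; it must be called between glBegin and glEnd):

    #include <OpenGL/gl.h>

    /* One vertex: the color carries the encoded local light vector,
       followed by texture coordinates for both units. */
    static void issueVertex(const GLubyte light[3], GLfloat s, GLfloat t,
                            const GLfloat pos[3])
    {
        glColor3ubv(light);                     /* encoded local light vector */
        glMultiTexCoord2f(GL_TEXTURE0, s, t);   /* normal map coordinates */
        glMultiTexCoord2f(GL_TEXTURE1, s, t);   /* base texture coordinates */
        glVertex3fv(pos);
    }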
Now, a demo.
Here I have a positional light source circling the source texture, and you can see the per-pixel dot product effect. Perhaps let's move in so we can see it better. This polygon is a simple four-vertex polygon; there is no geometry. But because of the DOT3 texture combining, we can see the lighting change depending on the local light vector. I can show you the colors that correspond to the local light vector here. Here you can see the color change; these colors are the encoded local light vectors. And as you can see from here, the geometry of the polygon is flat.
So in summary, you saw how you can use the OpenGL texture combiner API to do a diffuse lighting effect using the texture units instead of geometry. It does require you to produce your normal map as a texture, and it requires calculating the local light vector for each vertex, or you can calculate the local light vector for the whole polygon.

Finally, stencil shadow volumes. And the time... well, this is going to be a quick session. I like shadows; shadows are cool. Using stencil shadow volumes will allow you to do shadows dynamically. Using the stencil buffer to do shadows requires employing the stencil buffer and the stencil test. As you can see, the stencil test is part of the fixed pipeline; it's a new feature in Mac OS X. We use the depth test and the stencil test for this operation.
The operation can be simple or complex depending on your application, but the basic steps are to first render your scene geometry normally, both color and depth. Then we construct our shadow volume, then we stencil this shadow volume into the stencil buffer, and we use the stencil mask to render either a lighting effect or a shadowing effect, depending on your technique. The first step is to actually construct our shadow volumes. This can be very application specific, so I'm going to hand-wave it a bit, but the general idea is: if this is our positional light source, and this is an occluding object, we will construct a separate shadow volume that is extruded from the occluding object in the direction away from the light source. It's important that this shadow volume encompass the full scene geometry you want shadowed. After we have made our shadow volume, we will stencil it with a pretty simple two-step operation. At the end of the stencil operation, we'll have a stencil buffer with nonzero values where the scene geometry intersects the shadow volume. Here's the code for that.
The stencil operation only requires writing to the stencil buffer, so we disable the color and depth writes. Next I set up the stencil operation. The GL_INVERT operand is important; this sets the stencil action on depth pass to invert the stencil buffer whenever the depth test passes for your rendered volume. Next I simply render our shadow volume twice, first the back faces and then the front faces. Then I reset the various state.
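A minimal sketch of that stenciling pass; drawShadowVolume is an assumed helper that renders the volume geometry:

    #include <OpenGL/gl.h>

    extern void drawShadowVolume(void);   /* assumed helper */

    static void stencilTheVolume(void)
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* no color writes */
        glDepthMask(GL_FALSE);                                /* no depth writes */

        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        /* GL_INVERT flips the stencil bits wherever the volume passes the
           depth test. */
        glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);

        glEnable(GL_CULL_FACE);
        glCullFace(GL_FRONT);   /* pass 1: back faces only */
        drawShadowVolume();
        glCullFace(GL_BACK);    /* pass 2: front faces only */
        drawShadowVolume();

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      /* restore state */
        glDepthMask(GL_TRUE);
    }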
After we have rendered our stencil buffer, we have a stencil mask. We have several options to get the lighting effect. Perhaps the simplest is rendering a two-dimensional overlay polygon with the shadow color, using alpha blending. This works well, but it does leave one artifact: the scene geometry within the shadow volume will still exhibit diffuse and specular lighting effects, which may not be what you want. To avoid that, you can do a more complicated multipass scene rendering, where you first render an ambient scene, then use the stencil mask you've created from your shadow volumes to mask out the diffuse and specular lighting effects. But this will slow your rendering down. Or you can do what I do: just render your shadow volume again into the color buffer using alpha blending. I'll show you.
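A minimal sketch of that final blended pass over the stencil mask, reusing the assumed drawShadowVolume helper; the shadow color and opacity are illustrative:

    #include <OpenGL/gl.h>

    extern void drawShadowVolume(void);   /* assumed helper, as before */

    static void drawShadowPass(void)
    {
        /* Draw only where the stencil mask is nonzero; leave it unchanged. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_NOTEQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glColor4f(0.0f, 0.0f, 0.0f, 0.85f);   /* dark, mostly opaque shadow */
        drawShadowVolume();

        glDisable(GL_BLEND);
        glDisable(GL_STENCIL_TEST);
    }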
Here we have a sphere inside a shadow volume. I'll show you the shadow volume; as you might be able to see, the shadow volume fully encompasses both the sphere and the ground plane. This produces the shadow effect. I can move the ball in and out of the shadow volume, and you can see the shadow effect on the sphere.
If I disable the depth test and the stencil test, like this, you can see how I render the shadow volume. The shadow I'm using is an 85% opaque alpha blend, so you can't see the specular and diffuse lighting. If I use 50% opacity here, you can see the effect; it's much more noticeable. But at 85% it's not too bad.
Finally, let's look at the two-pass operation in more detail. First I'll show you the back planes; the green corresponds to the back planes of the shadow volume. Here are the front planes. By rendering both the back and front planes, you can see the cyan is where the back and front planes are both rendered. The blue region is where the front plane is rendered but not the back plane, and you can see this blue region corresponds to the shadow area.
Am I at the half hour, or should I continue? I guess so. In summary, you saw how you can use shadow volumes and the stencil buffer in a two-pass effect to create your shadow effect. It just requires using both a stencil buffer and a depth buffer, and it's simple, for my simple case. But using it in a real scene may be more complex than what I presented, so you may have to optimize both the creation of your shadow volume and the fill rate in filling it.