WWDC2001 Session 405
Transcript
Kind: captions
Language: en
To present Session 405, OpenGL Geometry Modeling, I would like to introduce Todd Previte, Apple DTS engineer.

All right, welcome to Session 405, Geometry Modeling. My name is Todd Previte, and I'm the 3D graphics DTS engineer for Apple. What we're going to go through today is a study of the implementation of complex animated geometries. Any of you who have downloaded our sample code from the web are probably familiar with the multicolored square; for most of our OpenGL demonstrations, there's a spinning square that we use to illustrate a particular concept. Today I'm going to move from that to a complex animated geometry in the form of a model. You see our little marine here on the screen, and I'm going to explain basically what goes into rendering one of these things to the screen.
I'll get into coordinate systems first. One of the things that was a challenge for me when I was first getting into OpenGL was that there are so many different coordinate systems that go into any type of 3D rendering. You have your texture coordinates, which are normally described in 2D space, since they apply to a 2D texture. These days we also have what are called 3D textures, which are layered textures, so you can actually have three-dimensional texture coordinates, but the ones we'll be discussing today are two-dimensional. We also have model coordinates, or object coordinates, which describe an object within the scope of its own space, that is, relative to the centroid of the model. World coordinates are a global reference: they define how objects relate to one another in the scope of the world that you are rendering them in. And finally, eye coordinates are the result of a modelview matrix transformation on model coordinates, which puts them into the viewer's coordinate space. This slide gives you an example of coordinates.
You can see the triangular slice out of the texture in the left-hand image as it applies to the model, which illustrates what texture coordinates do. These vertices here at the edges all have a texture coordinate that maps directly into that image; I just happened to select that portion of it as it maps onto this model. So, what's in a model format?
Well, you have vertices, texture coordinates, triangles, and textures. Those are the basic things you need to store a model, load it back from disk, and render it on the screen.

Your vertices are just an array of three floats or doubles, however you want to represent your coordinates. In a moment I'll discuss a particularly interesting technique that id Software used to render their models for Quake 2. That's really all it is: a simple array of three floats that describe a vertex in 3D space. Texture coordinates, as I went through just a moment ago, are an array of two floats or doubles, whichever you prefer to use, and these apply directly to the texture in its own space. Now, with texture coordinates, their number is not always equal to the number of vertices. That doesn't sound right: how can you have more than one texture coordinate per vertex? I'll show you.
As you can see, I've highlighted the vertices on these two triangles, with the red line down the middle illustrating the shared edge of the two triangles. You can either have texture coordinates that are assigned per vertex, or you can have them assigned per triangle vertex. If you have them per vertex, and you have two different textures on the two triangles, you can end up with some pretty interesting results: unless you go out of your way to make it happen, you're going to find that the texture coordinates from one don't map directly into the image of the other. So you come up with some pretty interesting-looking textures if you share texture coordinates among vertices. The alternative, to make things look right with two different textures, is to have texture coordinates assigned on a per-triangle-vertex basis. At that point each triangle vertex has its own set of texture coordinates, so you can have two different textures on the two different triangles without distorting the texture mapping.

Again, all your triangles are an array of vertex coordinates and the texture coordinates that apply to them; they're usually indices into the aforementioned arrays of vertices and texture coordinates. That keeps things fairly easy, and for loading purposes you're better off using indices instead of hard-coded vertices.
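As a sketch of what per-triangle-vertex texture coordinates look like as an indexed triangle structure in C; the names and 16-bit index width are my own assumptions (they happen to match how MD2 stores triangles):

```c
#include <assert.h>
#include <stdint.h>

/* A triangle stores indices into the shared vertex array and,
 * separately, indices into the texture-coordinate array. Two
 * triangles can share a vertex index while pointing at different
 * texture coordinates, which is what lets each triangle use its
 * own texture without distorting the other's mapping. */
typedef struct {
    int16_t vertexIndex[3];   /* indices into the vertex array */
    int16_t texCoordIndex[3]; /* indices into the texture-coordinate array */
} triangle_t;
```

Two triangles along a shared edge can then reuse the same vertex indices while carrying independent texture-coordinate indices.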
Textures themselves are just the image data, stored in whatever format you happen to prefer. I do know that id Software happened to use the PCX format for Quake 2; these days a lot of people are going back to using BMPs and GIF files. JPEGs are not used much because of the lossy compression schemes involved: you tend to lose a lot of image quality, so they tend to be avoided.

So, as I've been mentioning here and there, the case study we're going to go through today is the Quake 2 model format, .MD2. I chose this format for a number of reasons. It's publicly available; there's a ton of information on the web that you can get; and it's fairly well documented, although, as I learned just recently, not quite as well documented as I thought it was. There are also a lot of models readily available: you can download them almost anywhere, and there are a ton of them out there.
So what does the header for an MD2 file look like? Well, it doesn't look like there's a whole lot there. You'll notice that it's actually a structure of 17 ints, which makes it really easy to load in. It comes out to 68 bytes, so all you have to do is point your file pointer at the beginning of the file, load in the first 68 bytes, and you have a complete description of what the rest of the file looks like. I've highlighted the elements that are actually relevant to our discussion today; magic and version are, I believe, specific to id. The offsets are kind of interesting, though: they specify the offsets of the various data from the beginning of the file. Well, if you read the file in its entirety into memory, they also become the offsets to that data in memory.
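As a sketch, the header as a struct of 17 ints; the field names here follow common MD2 documentation rather than the slide, so treat them as assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* The MD2 header: 17 little-endian 32-bit ints, 68 bytes total. */
typedef struct {
    int32_t magic;            /* "IDP2" */
    int32_t version;          /* 8 for Quake 2 */
    int32_t skinWidth;
    int32_t skinHeight;
    int32_t frameSize;        /* size of one frame, in bytes */
    int32_t numSkins;
    int32_t numVertices;      /* vertices per frame */
    int32_t numTexCoords;
    int32_t numTriangles;
    int32_t numGlCommands;
    int32_t numFrames;
    int32_t offsetSkins;      /* offsets from the start of the file */
    int32_t offsetTexCoords;
    int32_t offsetTriangles;
    int32_t offsetFrames;
    int32_t offsetGlCommands;
    int32_t offsetEnd;
} md2Header_t;
```

Because every field is the same size, one read of 68 bytes fills the whole structure.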
So how is this data actually represented? Well, there are a lot of supporting data structures that go into the MD2 format, and I'll be going through some of those right now. The triangleVertex_t, as I mentioned in the overview slide, is the structure that describes any given triangle vertex. What you have is three vertex coordinates, each in a byte, and you've also got a byte that describes the light normal index, which is not relevant to our discussion today. The triangle data is then two arrays of indices into the vertex and texture coordinate data.

Now, I know I said that vertices are floats or doubles, some kind of floating-point number, right? What these are is an encoded byte coordinate. In order to conserve space, id decided to pack all of its vertex data into bytes. This makes it really easy to go cross-platform when you're switching between big-endian and little-endian, because you don't have to do anything with these: it's just a byte, no swapping required, and it can be loaded by any machine. Also, having the vertices in such a small space means that they can be very tightly packed; you'll notice that each of these structures is actually only four bytes in length, and four bytes is 32 bits, the size of a long.

The other aspect of it is: why did they encode them in a byte? Because it allows for very easy scaling and translation of any given vertex. Jeff mentioned earlier in his presentation how OpenGL is very good at scaling and rotating things with just a simple transform matrix; the same concept applies here.
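As a sketch, the four-byte structure just described; the name follows the slide's triangleVertex_t, and the field names are my own:

```c
#include <assert.h>
#include <stdint.h>

/* One compressed vertex: three byte coordinates plus a byte
 * light-normal index. Four bytes total, and endian-neutral, so
 * no byte swapping is needed when loading on any machine. */
typedef struct {
    uint8_t v[3];              /* compressed x, y, z */
    uint8_t lightNormalIndex;  /* not used in this discussion */
} triangleVertex_t;
```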
Our texture data is stored as file names. It's pretty basic: you just load in the texture as you would any other file. The preferable method on Mac OS X is to use QTNewGWorldFromPtr, which sets up a non-padded GWorld; you can then direct OpenGL to pull in the texture and apply it to your model. A little caveat about this: for any given MD2 file, the skin width and skin height will be the same. You can have as many skins as you like, but they all have to be the same size.

Your texture coordinates are a small struct of two shorts, the s and t coordinates; there's really not much more to it than that. Those describe the particular slice of the texture that you're going to be using for whichever vertex the texture coordinates apply to.
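A sketch of that two-short struct, with a helper converting from texel-space shorts to the 0-to-1 coordinates OpenGL expects; the names are mine, and the divide-by-skin-extent conversion is an assumption based on how the coordinates are described above:

```c
#include <assert.h>
#include <stdint.h>

/* Texture coordinates stored as two shorts in texel units. */
typedef struct {
    int16_t s;
    int16_t t;
} texCoord_t;

/* Dividing by the skin width (for s) or skin height (for t)
 * yields the normalized 0..1 value to hand to OpenGL. */
static float texToGL(int16_t texel, int32_t skinExtent)
{
    return (float)texel / (float)skinExtent;
}
```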
This is relevant to animation: your frame data. On a per-frame basis, this is what you get when you're dealing with the MD2 format. You've got a scale and a translate; those are the two that I want to focus on for a moment. As I mentioned, OpenGL is very good at scaling and translating using very simple matrix transforms, and these are the values that we'll apply to the byte vertices to get them to the actual positions where we want them. The name segment of this is just whatever you happen to want to name the frame, for tracking your animations. And then the triangleVertex_t vertices: this is kind of interesting, because it looks like only one vertex, but what it actually is is a pointer to a whole array of vertices that are allocated on a per-frame basis. So what you end up with is that each frame will have the exact same number of vertices for any given MD2.
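A sketch of the frame structure as described; the field names are mine, and the vertices pointer refers to the per-frame array of compressed vertices:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint8_t v[3];
    uint8_t lightNormalIndex;
} triangleVertex_t;

/* One animation frame: a scale and translate used to decompress
 * the byte vertices, a name for tracking animations, and the
 * per-frame vertex array (numVertices entries in every frame). */
typedef struct {
    float             scale[3];
    float             translate[3];
    char              name[16];
    triangleVertex_t *vertices;
} frame_t;
```

Decompressing a coordinate is then just `v[i] * scale[i] + translate[i]`.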
This is how I chose to set up storage of an MD2 file: four elements, the header, frames, triangles, and texture coordinates. You don't have to set it up this way, and I'd like to point out that this leaves out a significant chunk of the MD2 file, namely the GL commands, which are not exactly relevant, at least to this discussion.
Here's a little pseudocode for loading an MD2. The first things you do, obviously, are find and open the file; then you'll read in the header as described on the slide. You read it in and copy it to memory. Once you have your copy of it in memory, you can directly access it to get to any of the data that you need, such as the offsets of the frames. Then you can allocate the memory for your frames and read the data in from the file into your data structures in memory.
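That sequence can be sketched in C. Since the header's offsets double as memory offsets once the file has been read in whole, this parses a buffer already loaded from disk; md2Model_t and md2Parse are my own names, the header field names follow common MD2 documentation, and the sketch assumes a little-endian host:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* The 17-int, 68-byte MD2 header. */
typedef struct {
    int32_t magic, version;
    int32_t skinWidth, skinHeight;
    int32_t frameSize;
    int32_t numSkins, numVertices, numTexCoords,
            numTriangles, numGlCommands, numFrames;
    int32_t offsetSkins, offsetTexCoords, offsetTriangles,
            offsetFrames, offsetGlCommands, offsetEnd;
} md2Header_t;

/* Once the whole file is in memory, the header's offsets locate
 * the frame, triangle, and texture-coordinate data directly in
 * the buffer. On a big-endian machine each int of the header
 * would first need to be byte-swapped. */
typedef struct {
    md2Header_t    header;
    const uint8_t *frames;
    const uint8_t *triangles;
    const uint8_t *texCoords;
} md2Model_t;

static int md2Parse(const uint8_t *fileData, md2Model_t *out)
{
    memcpy(&out->header, fileData, sizeof out->header); /* first 68 bytes */
    if (out->header.magic != 0x32504449)                /* "IDP2" */
        return 0;
    out->frames    = fileData + out->header.offsetFrames;
    out->triangles = fileData + out->header.offsetTriangles;
    out->texCoords = fileData + out->header.offsetTexCoords;
    return 1;
}
```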
So, drawing models: there's a really basic loop that you want to use whenever you want to draw your models on the screen. After setting everything up (you have your GL context, you have your viewport), you perform your camera transformations so you're looking at the proper place in space. Then you push that matrix so you can save it, you transform your model to place it where you want it, you draw it, and you pop that matrix back off, which returns you to the basic camera transformation, so you're right back where you started. Then you continue with the next iteration of the loop, and you can go through it for however many models you happen to have. This is a graphical representation of what it looks like: after the camera transform, you push the matrix, transform your model, draw it, pop the matrix back off, and go right around again for however many models you've got. After you perform all of this, you continue with any other rendering that you want to do.
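A minimal sketch of that loop, with stand-in functions where glPushMatrix and glPopMatrix would go so it runs without a GL context; the stand-ins just track stack depth to show that every iteration returns to the camera matrix:

```c
#include <assert.h>

/* Stand-ins for glPushMatrix/glPopMatrix: they only track depth. */
static int gStackDepth = 0;
static void pushMatrix(void) { gStackDepth++; }
static void popMatrix(void)  { gStackDepth--; }

static void transformModel(int i) { (void)i; /* glTranslatef/glRotatef here */ }
static void drawModel(int i)      { (void)i; /* emit the model's triangles  */ }

/* After the camera transform, each model is drawn inside its own
 * push/pop pair, so each iteration starts from the camera matrix. */
static void renderScene(int numModels)
{
    /* camera transformations would go here */
    for (int i = 0; i < numModels; i++) {
        pushMatrix();
        transformModel(i);
        drawModel(i);
        popMatrix();
    }
    /* any other rendering continues here */
}
```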
So this is a pseudocode representation of what a scene rendering routine would look like. The highlighted code is the loop I just described, where the push matrix and pop matrix surround the model drawing routine. One thing I didn't talk about was animation: this would actually only render one frame of any animation, whichever frame that happened to be. If you wanted to render each individual frame, the model itself would probably have to have some kind of rendering routine that would animate through whatever frame sequence you wanted for the given animation.
So how are you going to draw that model? This is, again, a pseudocode representation of how you might do it. First we start with the vertex indices, VI, and the actual vertex we're going to use, VTX, and we start with triangles. Now, these loops are kind of strange, and while we were writing the code for this, the language that we used to try to describe it became very convoluted very quickly, as you'll find out momentarily. So what we have here is: for each triangle in the header. Skip down one: now, for each vertex of each triangle in the header. But wait, there's more: for each vertex index, excuse me, in each triangle in the model. And now we're on to: for each vertex, in each vertex index, in each triangle, perform the following operations. Those would be: you get the vertex itself, you multiply by the scale, you add the translation, and you have your vertex. Calling glVertex down at the bottom, you specify your vertex; OpenGL knows where you want it, and you're good to go.
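Those nested loops can be sketched like this; triangle_t, triangleVertex_t, and emitVertex are my own names, with emitVertex standing in for glVertex3f so the sketch runs without a GL context:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint8_t v[3]; uint8_t lightNormalIndex; } triangleVertex_t;
typedef struct { int16_t vertexIndex[3]; int16_t texCoordIndex[3]; } triangle_t;

/* For each triangle, for each of its three vertex indices: fetch
 * the compressed vertex, multiply by the frame's scale, add the
 * frame's translate, and hand the result to OpenGL. emitVertex
 * stands in for glVertex3f here. */
static void drawFrame(const triangle_t *tris, int numTriangles,
                      const triangleVertex_t *verts,
                      const float scale[3], const float translate[3],
                      void (*emitVertex)(float, float, float))
{
    for (int t = 0; t < numTriangles; t++) {       /* each triangle */
        for (int i = 0; i < 3; i++) {              /* each vertex index */
            const triangleVertex_t *tv = &verts[tris[t].vertexIndex[i]];
            float x = tv->v[0] * scale[0] + translate[0];
            float y = tv->v[1] * scale[1] + translate[1];
            float z = tv->v[2] * scale[2] + translate[2];
            emitVertex(x, y, z);                   /* glVertex3f(x, y, z) */
        }
    }
}

/* Capture the last emitted vertex, for checking the arithmetic. */
static float gLast[3];
static void storeVertex(float x, float y, float z)
{
    gLast[0] = x; gLast[1] = y; gLast[2] = z;
}
```

In real code the loop body would sit between glBegin(GL_TRIANGLES) and glEnd(), with glVertex3f in place of emitVertex.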
So, as I mentioned, this rendering loop doesn't do any sort of animation; you'd have to add quite a bit of code to get the animations actually working. What you'd want to do in that case, as you drop down and start rendering the vertices, is execute that loop for each frame of each animation that you want displayed at any given time. Alternatively, you can increment the frame number every time a frame renders through your rendering engine, or whatever application you've got, and continue with your rendering loop that way. As I mentioned, on a per-coordinate basis each vertex is an encoded byte: you multiply by a scale, you add the translation, and you end up with a floating-point vertex that you can pass to OpenGL.
A little bit about animation: here I'm going to discuss keyframed animation with vertex interpolation between the keyframes. What exactly is keyframe animation, if you're not familiar with it? Keyframe animation is a series of snapshots. For instance, if you're at a racetrack watching a car travel around in a circle, and you took various pictures of it as it moved around the track, those would be your keyframes. If you were to view them in order, you would actually see the, quote-unquote, animation of the car moving around the track. Well, what's wrong with that? It's going to look really choppy. So what do you do? That's where vertex interpolation comes in. Between any two given keyframes, you can divide up the amount of time that it takes to render each frame and then interpolate a number of vertices in between the keyframe vertices, so you end up with a much smoother animation.
Each frame in a keyframe animation sequence contains a complete set of vertex data: it has all the texture coordinates and all the vertex coordinates for the model it applies to. As I mentioned, it is a series of snapshots over a given period of time. Keyframe animation is very good for things like simple rigid-body motion, or for rendering any sort of fixed animation in a game context. For other things, such as models that are going to be interacting with your environment and/or other dynamic sources of energy, you would probably want to move to something more complex, such as skeletal-based animation. Keyframe animation will do the job, but it's probably going to look pretty choppy, and it's going to be hard to modify your keyframes on the fly.
Here's a graphical representation of what a keyframe animation looks like. Each of these would be considered a snapshot, and as you progress through, our marine goes from a standing position all the way down to lying down. If you were to play these through one at a time, at any speed, it's probably going to look pretty choppy. Again, that's where the vertex interpolation comes in, and you can smooth out the animation to make it look much better. What we're going to do is interpolate the vertices linearly, meaning there will be a fixed amount of time between any two interpolations. The number of actual interpolated vertices is arbitrary: it's purely based on your frame rate at the time, or you can hard-code it to lock it in at a particular rate.
So the basic equation is: the number of interpolations is equal to the animation time from frame to frame, divided by the time that it takes to render one frame. As an example, if you have a half second of animation, 500 milliseconds, divided by 30 frames per second, approximately 33 milliseconds per frame, you come up with 15 interpolated frames. So for any given vertex, you would have 15 more in between that would smooth out your animation sequence.
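A sketch of that arithmetic, assuming integer millisecond timings; the function names are mine:

```c
#include <assert.h>

/* Number of in-between frames = time between keyframes divided by
 * the time to render one frame: 500 ms at ~33 ms per frame gives
 * 15 interpolated frames. */
static int interpolationCount(int animMillis, int frameMillis)
{
    return animMillis / frameMillis;
}

/* Linear interpolation of one coordinate between two keyframes,
 * with t running from 0 to 1 across the interval. */
static float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}
```

Applying lerp per coordinate, per vertex, at each in-between step produces the smoothed sequence.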
So I've gone through how to define a model format and the criteria that you'd want it to have, went through loading, storage, and memory representation, and showed some basic algorithms for how to display and animate a model. If you want a little more information on this kind of thing, you can go to Apple's website and Apple's OpenGL website, as well as opengl.org. Two really good books on modeling are Real-Time Rendering and 3D Graphics File Formats, which is where I ended up learning most of my information over the course of time. The Principles of Three-Dimensional Computer Animation is also another very good book, and Advanced Animation and Rendering Techniques is very good for just that, animation. As far as the roadmap goes: Session 408 tomorrow is OpenGL Optimization; Jeff discussed that earlier. Advanced Rendering will be given by Troy Dawson tomorrow, and our feedback forum is tomorrow at 5:00 p.m. Here's my contact information, for anybody who wants to get hold of me.