Transcript
>> Graeme: Welcome to the second session on
Game Design and Development for the iPhone OS.
Well, I really hope you attended Part One.
Did you all attend Part One?
[ Applause ]
Good game designers.
This session is all about the technical
aspects of how we made Quest and we're going
to go a lot into optimizing for the platforms.
As I said in the first session there is
more than one platform now and optimizing
for each of them requires a little thought.
We're going to talk about that.
We're going to talk about our lighting
algorithm and how we lit the dungeon
and how we actually got to 30 fps and higher.
Our game actually runs at a high frame rate.
We're going to check in with our art department.
Pete will talk about the artwork of Quest and how the
art pipeline made a tremendous difference in getting
to the higher frame rates that we see in the game.
Finally we're going to talk about the one dozen things
that we think will make a tremendous difference to you
in your game development and hopefully you'll
get to apply some of that tonight to your games
so you can talk to us about it in our labs tomorrow.
So to quickly reiterate Quest was written in two months by
three programmers and one artist who really were passionate
about making a highly performing application to
show off on the iPad, the iPod Touch and the iPhone.
Let's have another quick demo and take
a technical look at some things.
I'd like to invite Jeff to the stage
for a little talk through that.
[ Applause ]
>> Jeff: So this demo will be more of a technical,
programmer's look at the game, sort of a look behind the curtain.
So, what you see here is our dungeon level
exported via OpenCOLLADA from our 3D art package.
What you see in this frame varies as he runs
around, but it's around 45,000 to 50,000 polygons a frame,
and we should be doing around 40 frames a second.
So go ahead and bring up the console and change
to the perf display so we can see what's going on:
50 and 45, so you know, you can do the
math, and we're pushing lots of polygons.
These devices are super awesome.
This is an iPad running 1024x768 and
pushing tons and tons of polygons.
It was so much fun making this game
on the iPad and on the iPhone 4.
These devices are just awesome to play with.
So we can see he's going through the light and we'll talk
about how we do the per pixel lighting here.
We'll show you the shader that did that but as
you've seen us kill the demon over and over again,
there's been some stuff that's been bothering
me so let's just go kill the demon real quick
and kind of describe what the problem is.
So as he's fighting you can kind of see that the hero is
way up in the demon's kitchen a little bit like we need to
[ Laughter ]
We need to back him out.
I mean the demon is just engulfing him right?
So one way we can do that is bring up the
console and we can change the runtime,
the character collision radius of the hero.
By default it is 100 world units so if we
back that out a little bit, let's say 150
and we fight again, we can kind of see the instant change.
So it's better.
Nice crit.
But still, still let's back it out.
Let's try 200.
So as you can see we're changing variables
in real time without doing another build.
We're on device and we're making
this loop as tight as we can.
We can change as fast as we can.
So let's see how that looks.
Alright that looks a little bit better.
That looks like something I would send out the door.
Alright. OK, a couple more things I want
to show you before we leave this demo.
Let's show you the facade of Quest.
So, let's turn on rotation.
Let's kind of rotate around the dungeon a little bit.
Toggle rotation.
This is the part where you don't want to run over
and grab it from him and start typing it yourself.
So as we rotate around you can kind of see we don't have any
polygons where you can't see from our default camera view.
We don't have anything down the stairs.
We don't have anything behind the
hall of lights for example.
We cut out everything that the camera won't see
as a final post process to help our frame rate.
We don't need to render triangles that you're never seeing.
You know, it's a game.
It's a Hollywood set.
We're not making the most realistic dungeon.
I've never seen a real dungeon so it's
fake and now we're showing how we cheat
and as a final step let's turn off the diffuse map
because we're going to show off lighting right away.
So this is our lightmap for the dungeon.
You can see as he walks this lightmap is shading our
guy so as he walks through the red, you see it gets red
and as he walks out of the red he doesn't and
so we'll go into how we made this lightmap next.
[ Applause ]
So, OK. First quiz of the day.
How many people here see this and
see 10 lights in the scene?
10? Alright, how about choice B, how about 20?
Couple? Alright.
How about multiple choice C, which is always correct, 0?
How many people see 0 lights in the scene?
Alright. Yes, and then the final
answer, D, to throw you all off: 50.
Anybody see 50?
Yes. Alright.
Everyone's correct.
So what we do in Quest is we light up our dungeon.
We start off with our diffuse map which is just
our detail and our base color and then we go
into our art package and we put in lots and lots of lights.
Static lights.
In this scene we have over 50 static lights so
if you raised your hand on D, you were correct.
But we also have no dynamic lights so
if you chose 0 you are also correct.
Our first pass at this was your standard: take
your diffuse, multiply it by the lightmap,
and you get this result, and it was muddy.
Our artist Pete looked at it and said, yeah,
that's great, but it doesn't look very good.
What I want, he said, is the
Hard Light blend from Photoshop.
I want the shadows to be darker and I want the highlights of
the frame to be brighter and so, you know, I'm a programmer.
I know math.
I'm like yeah and I have shaders.
Let's do it in a shader, you know.
Let me go look up the hardlight math and put it all
in the shader because shaders solve all problems.
So we did that.
Looks great, 12 frames a second.
It was terrible.
It was very, very slow.
So we have two problems now.
One, we have an unhappy artist because it's slow, and
two, we've taken all the power out of his hands.
He doesn't have any control over the final frame any more.
We do all the fancy math in the shader.
So, we needed to go back to the drawing board
and kind of think of a different way to do this.
So, one way to do this was to pre-compute
what the hardlight does ahead of time.
What we did is we pre-computed the brightening part of
the hardlight shader in the diffuse map ahead of time
and then we do our final multiply with the
lightmap and our dungeon is now fast again,
running very, very fast using a very simple shader.
So we're now fast and we also have
the power back in the artist's hands.
He now controls both sides of the equation.
He can brighten up the diffuse map in
Photoshop, he can level it, and he can saturate it.
It's up to him and then he also generates
the lightmap for us and we just render it.
So here's the hardlight textbook version of the fragment
shader that was super slow and I'll kind of walk you
through it and it's going to take a long
time because it's a very, very slow shader.
First we sample the diffuse map and scale it.
Then we sample the lightmap, and then we
start messing with the diffuse color, scaling and biasing it
so we get it from the range of 0 to 1 into -1 to 1.
Next we have to start prepping the left and right sides of
the equation for our mix and then we figure out our step
and then we do our mix and we finally step on the FragColor
and you know the vertex shader is probably still rendering
them even though I've taken this long to explain it
and as a final don't do this ever, you can see that
in all these vec4's there are no precision qualifiers.
There's no lowp or mediump or highp anywhere in the shader
so we've given the shader compiler nothing
to go on to help optimize it for us.
So, what you see is what you get, very, very slow here.
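For reference, here is a minimal CPU sketch, in C rather than the actual shader, of the per-channel math a textbook Hard Light blend computes; the function name and the branch form are illustrative, not Quest's code. Below 0.5 the light darkens (a multiply); above 0.5 it brightens (a screen):

```c
#include <math.h>

/* Hypothetical CPU reference for a textbook Hard Light blend.
 * 'base' is the diffuse sample, 'blend' is the lightmap sample,
 * both in 0..1. Below 0.5 the light darkens; above 0.5 it brightens. */
static float hardlight(float base, float blend)
{
    if (blend <= 0.5f)
        return 2.0f * base * blend;          /* multiply (darken) half */
    return 1.0f - 2.0f * (1.0f - base) * (1.0f - blend); /* screen (brighten) half */
}
```

A mid-grey light (0.5) leaves the diffuse untouched, which is why the precomputed approach the talk describes can fold the brightening half into the diffuse map ahead of time.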
Here's the final fragment shader.
It's your standard lightmapping shader, sample
and diffuse multiply it times the lightmap,
stuff it in the fragment color and you're off and running.
So now that we have the dungeon lit,
how do we load it in a fast way?
The first version we tried was the very
naive way: load it all in at once.
So what you see here is our top down or 3/4 view of the
dungeon and this was maybe 130,000 polygons at once.
This final version I think ended
up being about 85,000 polygons.
So we did that.
We did the first thing which was a very quick thing to do.
We wanted to iterate as fast as we could so we loaded
it all up in memory and that caused a big problem.
It took 5 or 6 seconds to get to that Play button so we've
already broken that rule of getting to game play as fast
as we can so we needed to come up with a different way.
The way we finally ended up doing it
is we split it into logical sections.
The first thing we load is just
what you see at the main menu.
Then as you walk down the hallway we
start pulling those objects back in.
Now this also helped us be a good citizen
in OpenGL ES because there is a limit
on how many triangles you can index per draw call.
So breaking this up into logical sections got us
around that and then finally, look at your game design
and think about ways you can optimize your level loading.
For example, in Quest, we didn't end up doing
this but we could have if we had designed one
of those gates to shut after you walk through.
You reach a checkpoint and the gate shuts behind you.
You know, you can consider dropping off the first two sections.
You don't need that any more.
You might need the memory for particles
and all sorts of fun stuff.
So, drop those old sections if you can but the
main takeaway is, think about your game design
and what about your game design can allow
you to take these shortcuts you know.
You're making a game.
This is the best and most fun thing you can do as a
programmer is find those shortcuts and exploit them.
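The checkpoint idea above can be sketched roughly; this is a hypothetical illustration in C, with invented names and structure, not Quest's engine code:

```c
#include <stdbool.h>

/* Hypothetical sketch of checkpoint-driven section streaming: once the
 * gate shuts behind the player at a checkpoint, earlier sections can be
 * dropped to reclaim memory for particles and other effects. */
typedef struct {
    bool loaded;
} Section;

/* Called when the player passes checkpoint 'index': keep the sections at
 * and beyond the checkpoint resident, drop everything before it. */
static void on_checkpoint(Section *sections, int count, int index)
{
    for (int i = 0; i < count; i++)
        sections[i].loaded = (i >= index);
}
```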
Finally, this is the part of the geometry that you don't
see in the game. Given our camera point of view,
and because you don't rotate in our game play, you'd
never see these triangles, so we never export them.
They're never in our binary.
It makes the binary smaller.
It's faster to load.
Another important thing is we keep this around
on the art side because we use these polygons
to cast shadows and generate the lightmap.
So don't delete them entirely.
Keep them on the art side or the design side because
you might need them for some use later.
So, now that we've got the dungeon lit and
we're loading it let's throw the character in.
Our first pass at this was not good.
We threw down the guy.
He's got a diffuse texture, no shadow.
He sticks out like a sore thumb, doesn't look like he
fits at all but we also have another bigger problem
which is well we chose to statically
light our dungeon with no dynamic lights.
How are we going to light this guy so that
he fits in our world in a fast way given
that we've already chosen to take
static lighting into account?
One way you can do it is to take the minimap and drape
it over him as an environment map. As he's walking
around you know where he is in world space on the
minimap, because you've already done that for the game,
so you sample the texture, and then you think, well,
we're just going to extend our lightmapping
algorithm to him as he walks around.
This worked great and when we first thought of it we were
like you know it's not accurate
but it's going to look pretty good.
It will keep our frame rate pretty
high and it's going to be fun.
Right. It'll be fun enough.
So this worked great for about a second.
Right. You can see as we walked over that beautiful
grate that has light underneath it shining up,
he's now being shadowed by something
he's standing on top of.
Yeah. I laughed.
It was a good laugh but I also felt pretty stupid but
I knew I was on the right path because it looked cool
but something was off; it just needed an extra little step.
So as I said being a game programmer
we can take some shortcuts here.
We're going to fake it.
We're going to not use the actual 1024x1024 minimap.
We're going to make a new one and this time
when we generate this lightmap we're not going
to take any information that's below his knees.
We're just going to take the light that's
above him, because that's what we really want.
We really just want the lights from above him to shine,
and from our camera view we'll never see underneath him.
So this is the final lightmap that we
came up with and this is the final render.
Now when he was walking down the hall of lights he was
all red, and when he walked into the darkness it looked cool,
but he got kind of chameleoned into the background.
We lost him in the frame.
Now when he's moving it's easier to see, but if you take
a screenshot and send out a mail, people look at it
and go, well, yeah, he's shadowed, but I can't see him.
One way to make him pop off the screen
is to add one dynamic rim light.
Now a rim light is one that we put on
the camera, and we wanted the effect
that if the vertex normal is exactly perpendicular
to our camera, we want that to be fully bright,
and as the vertex normal gets closer and closer
to facing the camera we want that to fall off to 0.
You can see that on the demon's shoulder.
The back of his shoulder is our fully bright value and then
as it gets closer and closer to the camera, it falls off.
So here's how we did that in the vertex shader.
First we set up a varying.
We need a place to store our result value and then
we get the normal from the camera to the vertex,
dot it with the vertex normal and then we smoothstep it.
Smoothstep is basically a nice little smooth Hermite blend
that we wanted between the values 0.5 and 1,
and that final rim value you stuff into the
varying that we created at the very top.
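As a rough CPU illustration of the same math, not the actual shader, here's a sketch of smoothstep and the rim term. It assumes the input is the dot product of the vertex normal with the normalized camera-to-vertex direction; the exact remapping Quest used may differ:

```c
#include <math.h>

/* GLSL-style smoothstep: cubic Hermite blend of x between edge0 and edge1. */
static float smoothstep_f(float edge0, float edge1, float x)
{
    float t = (x - edge0) / (edge1 - edge0);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}

/* Hypothetical rim term: 'ndotv' is the dot of the vertex normal with the
 * normalized camera-to-vertex direction. A normal perpendicular to the
 * view (ndotv near 0) is fully bright; a normal facing the camera
 * (|ndotv| near 1) falls off to 0, remapped through smoothstep between
 * 0.5 and 1 as described in the talk. */
static float rim_term(float ndotv)
{
    return smoothstep_f(0.5f, 1.0f, 1.0f - fabsf(ndotv));
}
```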
Now as an exercise for you guys, this is a great place
to stop and think can I give my artist a knob here
to turn because artists love knobs to turn.
They love it.
Give them as many knobs and things to play with as possible.
So one way you can give an artist some love here
is to add an attribute called rim light color.
Let them set the color in your 3D world, so that when the hero
is walking down the red hall of lights, maybe the artist wants
the light to be tinted a little orange, blue, or whatever.
It's up to him, right? He's the artist.
Or another way you might want to
use this is in a very dramatic way.
If we roll a crit
and the hero hits the demon, then for that frame
and for the next few frames we can pump
that light to be fully red,
and he'll flash red for a
second and it'll be, you know, dramatic.
So there's ways that you can play with this.
This is just the base case.
So we send that rim light over to
the fragment shader and you can see
that this fragment shader looks a
lot like our dungeon fragment shader.
Sample the diffuse, multiply it by the
lightmap, simply add in the rim light,
and then stuff it in the fragment color and you're off.
Let's talk about how we animated our guy.
There are two types of animations we considered.
We could have done skeletal animation or mesh animation.
Mesh animation is one of the older
techniques where at every key frame
in the animation you record the position of every vertex.
Given that our guy when we started was 3200 polygons
that information gets big fast, on disk and in RAM.
So we chose skeletal animation.
Skeletal animation is simply what you see here.
This stack of bones which is our matrix transformations and
then you also have a mapping of every vertex to each bone
or maybe a vertex matched to multiple bones.
For example if you have an elbow, this vertex on your elbow
might map to the lower arm and the upper arm equally 50%.
So you have this mapping, and because for every key frame
of your animation you're only storing the bone transformations,
when you animate the bones you can then draw your guy
and it looks correct.
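The bone-and-weights idea can be sketched on the CPU; this is a hypothetical, two-weight illustration in C with invented names (column-major 4x4 matrices), not Quest's skinning code:

```c
/* Sketch of CPU vertex skinning with two weights per vertex: transform
 * the bind-pose position by each influencing bone's matrix and blend the
 * results by the weights (which should sum to 1). */
typedef struct { float m[16]; } Mat4;  /* column-major */
typedef struct { float x, y, z; } Vec3;

static Vec3 mat4_transform_point(const Mat4 *a, Vec3 p)
{
    Vec3 r;
    r.x = a->m[0]*p.x + a->m[4]*p.y + a->m[8]*p.z  + a->m[12];
    r.y = a->m[1]*p.x + a->m[5]*p.y + a->m[9]*p.z  + a->m[13];
    r.z = a->m[2]*p.x + a->m[6]*p.y + a->m[10]*p.z + a->m[14];
    return r;
}

static Vec3 skin_vertex(Vec3 p, const Mat4 *bone0, float w0,
                                const Mat4 *bone1, float w1)
{
    Vec3 a = mat4_transform_point(bone0, p);
    Vec3 b = mat4_transform_point(bone1, p);
    Vec3 r = { a.x*w0 + b.x*w1, a.y*w0 + b.y*w1, a.z*w0 + b.z*w1 };
    return r;
}
```

An elbow vertex weighted 50/50 between the lower- and upper-arm bones ends up halfway between the two transformed positions, which is exactly the mapping described above.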
So we chose skeletal.
Now there are two ways that you
can do this performantly.
The first way you should choose is you should look
at the GPU first because the GPU is very, very fast.
There are some tradeoffs to using the GPU though.
Let's say that your game is pegging the GPU fully;
you have no headroom left, but you still want to do some skinning.
Then you look at your CPU and you're
like, oh look, I have some headroom there.
Consider moving it to the CPU.
You have two processors there that you get to play with.
It's a balancing act.
Another reason you might want to do it on the CPU is
if you need at any time to know the vertex position
in the middle of the animation, like in the swing.
If you need to know that vertex position exactly, you need
to do that on the CPU, because once you send everything
down to the GPU to render, you can't
get it back in any sort of performant way.
So it's a mix and match.
It's a game.
It's a game within a game.
So as I said before, our first Sergeant Shock was one
that we made just to get in the game as fast as possible.
He had 3,200-ish polygons, 60 bones, a tongue bone, an eyelid bone.
I mean, it was a lot, right? 4 weights per vertex.
It was all just done to get him in the game quickly.
Now from our camera view there's really
no difference between a 60-bone guy
with 4 weights per vertex and a 20-bone
guy with 2 weights per vertex.
There was no visual quality difference so
making that optimization was a great win for us.
Do less, your game will run faster.
It works for Quest but it might not work for your
game because you're doing a first-person shooter.
One solution does not fit all here.
It's up to you to test, play-test, redo the art assets again
and again and again until you find one that works for you.
Alright. Let's get to the console.
A typical game loop that I've been involved in, and guilty of,
is a designer will come to me, or Graeme will come
to me, and say, let's make the hero 100 HitPoints.
I'll be typing away, making my best
feature ever, and I'm like yeah, yeah, I'll do it.
Then I go to lunch and then I'll go to the bathroom and
I'll go to the meeting, I'll forget about it you know.
I'll finally make him a build.
A couple hours later I finally give it to him.
He plays it.
He's like ah you know what 100 is not fun.
Let's make it 200.
By that time I've gone to bed and he's still late at the
office and he'll send the email and the loop just continues.
The loop is very, very slow.
The best way that we've found to manufacture
fun is to make that loop as tight as possible.
Stay on device in game like we showed in the demo.
So here's the fastest way to do it.
Designer sets it to be 100 in game, he play-tests
right there on the device, changes it to 200
and does it again and again and again and again.
One way you can do this is to data drive your engine.
Expose as many knobs and variables as you can to your
designer, to your artist, to your other programmers.
Now we've chosen to use the game console as the way to do that.
There's many ways that you can
expose variables to your designers.
This is just one example.
Here's how we did it in code.
So this is one way you can do it.
At the start of time you
can say addConfigVariable heroHitPoints
with a default value of 100 and a callback.
So when they bring up the console, type in
heroHitPoints = 100, you get this callback.
You get the value.
You can do whatever you want with it.
You can log it.
You can set the heroHitPoints to 100.
It's up to you.
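Here's a rough C sketch of that config-variable pattern; the function names mirror the talk, but the registry implementation is invented for illustration:

```c
#include <string.h>

/* Hypothetical data-driven config variable registry: register a name, a
 * default, and a callback; when the console sets the variable, the
 * callback fires with the new value. */
typedef void (*ConfigCallback)(float value);

enum { MAX_VARS = 32 };

typedef struct {
    const char *name;
    float value;
    ConfigCallback callback;
} ConfigVar;

static ConfigVar g_vars[MAX_VARS];
static int g_var_count;

static void addConfigVariable(const char *name, float def, ConfigCallback cb)
{
    g_vars[g_var_count].name = name;
    g_vars[g_var_count].value = def;
    g_vars[g_var_count].callback = cb;
    g_var_count++;
}

/* Called by the console when the user types "name = value". */
static int setConfigVariable(const char *name, float value)
{
    for (int i = 0; i < g_var_count; i++) {
        if (strcmp(g_vars[i].name, name) == 0) {
            g_vars[i].value = value;
            if (g_vars[i].callback)
                g_vars[i].callback(value);
            return 1;
        }
    }
    return 0; /* unknown variable */
}

static float getConfigVariable(const char *name)
{
    for (int i = 0; i < g_var_count; i++)
        if (strcmp(g_vars[i].name, name) == 0)
            return g_vars[i].value;
    return 0.0f;
}
```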
Another pattern is to use a console command.
A console command is just like a variable,
but it doesn't really have a value.
So for example, like the toggle diffuse that you saw.
ToggleBBox here.
AddConsoleCommand toggleBBox withCallback and then
when they toggle the BBox you get the
callback and you can do whatever you want.
You can toggle your BBox rendering
or you can reload your assets.
You can do whatever you want.
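A companion sketch for the console-command pattern, again hypothetical C; the toggleBBox example mirrors the talk but the implementation is invented:

```c
#include <string.h>

/* Hypothetical console command registry: a command has a name and a
 * callback but no value, e.g. toggleBBox. */
typedef void (*CommandCallback)(void);

enum { MAX_COMMANDS = 32 };

typedef struct {
    const char *name;
    CommandCallback callback;
} ConsoleCommand;

static ConsoleCommand g_commands[MAX_COMMANDS];
static int g_command_count;

static void addConsoleCommand(const char *name, CommandCallback cb)
{
    g_commands[g_command_count].name = name;
    g_commands[g_command_count].callback = cb;
    g_command_count++;
}

/* Called by the console when the user types the command's name. */
static int runConsoleCommand(const char *name)
{
    for (int i = 0; i < g_command_count; i++) {
        if (strcmp(g_commands[i].name, name) == 0) {
            g_commands[i].callback();
            return 1;
        }
    }
    return 0; /* unknown command */
}

/* Example consumer: flip bounding-box rendering on and off. */
static int g_bbox_visible;
static void toggleBBox(void) { g_bbox_visible = !g_bbox_visible; }
```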
But the point is that you are in
game and not reloading the game.
Here's one I like to use a lot.
Let's say that you get inspired one night and you start
working all night long and you've got this awesome feature
that you're going to do, 6:00 a.m. you're done.
Check in all your stuff, send out
the build email, all proud.
Check out my new feature.
This is awesome.
I'm going to bed.
Come in at 2:00.
Find out that you broke it.
You didn't check for a null.
Something's wrong, and now you're the goat.
Anybody? Yeah.
I've done it.
How about if you send that email off at 6:00 and
then you get on a transcontinental flight to Maui.
Yeah. Here's how you save yourself from doing this.
You add a ConfigVariable.
Let's call it JeffsCrazyFeature
and I'll turn it off by default.
I'll send out the email and I'll say if you want
to see my feature, turn on this ConfigVariable
and at runtime they can turn on features, turn off features,
turn on debug features and you can get on that plane to Maui
and feel confident that you haven't broken the build
and if you did break the build they can just
turn it off and their work is not blocked.
So, you can get a sample of this
console code at the attendee website.
It's locked.
It's a sample of the console hooked into the
default OpenGL sample that comes with Xcode.
I encourage you to use it.
Figure out new ways to use it.
Extend it.
Use it in your games to data drive your experience, to
give your designers every knob possible to let them design
and every artist to let them just make great art, you know.
Let them do what they do best.
Thank you for your time.
I'd like to bring up Pete to talk about game art.
[ Applause ]
>> Pete: Hi everybody.
My name is Pete.
I'm an artist on Quest.
So today I'm going to talk about art, the art style that
we chose for Quest, how we built the art assets for Quest
and the steps that we took to optimize the artwork to
make it run really fast, way over 30 frames a second.
So to get started let's talk about style.
We really wanted to pick a style in
Quest that showed off really well
on all the Apple platforms whether it
was the smaller screen on the iPhone,
the larger screen on the iPad we
wanted to see the characters.
We wanted to know what they were doing.
We wanted to relate to the world and we
really wanted that style to be cohesive.
We wanted it to feel like you're seeing the
exact same game whether you were looking at the UI,
the characters, or the dungeon, and because
we only had two months to create this,
we picked a style that was achievable
in that amount of time.
We didn't have a lot of time to do high-poly normal mapping.
We wanted it to look good but we really needed
to make sure that we were doing the right thing.
So when you're designing your game,
style is also very important.
You want to pick a style that's really
going to speak to your audience.
If you're designing a history-based game starring Abraham
Lincoln you probably don't want to render him looking
like an animated character with a bazooka.
You're going to alienate a large portion,
well, maybe that's the greatest idea ever
[ Laughter ]
But the people who are coming to your game
are going to have an expectation of the style
and they want to relate to it and love it.
So let's start getting into it a
little bit about how we created Quest.
How many people out there know what a texture atlas is?
How many people are currently using
texture atlases in their applications?
So this is a great way to really optimize your
game and get a lot of really good performance.
You might be surprised to know that this one texture,
this 1024x1024 texture, accounts for 80 to 90% of our dungeon.
We're loading one texture.
So if you don't know what a texture
atlas is: basically, instead of taking all of
these small little textures,
sending them to the GPU to get rendered,
and having all these draw calls saying
render me, render me, render me, render me,
we're combining all of these little
textures into one larger texture
and sending it one time to get rendered, one draw call.
It's a lot more efficient.
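As a small illustration of the bookkeeping involved: each small texture becomes a sub-rectangle of the atlas, and its UVs are just the pixel rectangle normalized by the atlas size. This C sketch is illustrative, not the actual tool code:

```c
/* Map a pixel sub-rectangle of a square atlas (e.g. 1024x1024) to
 * normalized UV coordinates for the GPU. */
typedef struct { float u0, v0, u1, v1; } UVRect;

static UVRect atlas_uvs(int x, int y, int w, int h, int atlas_size)
{
    UVRect r;
    r.u0 = (float)x / atlas_size;
    r.v0 = (float)y / atlas_size;
    r.u1 = (float)(x + w) / atlas_size;
    r.v1 = (float)(y + h) / atlas_size;
    return r;
}
```

Every piece in the level then references the same texture object with different UVs, which is what lets most of the dungeon go out in one draw call.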
Now as a programmer you're looking at
this and you're saying that's great.
You know we get a lot of really good
efficiency and it's a great way to work.
As an artist if there are any artists out
there, you might think this is insane.
You might have never worked like this before.
You're asking yourself, why do I want to work this way?
I kind of had that opinion at first, but I actually learned
to love it, because one thing it does
is it keeps your artwork unified.
The style of the dungeon is very unified.
When I'm working on the dungeon I bring up this one texture.
I know where all the pieces are going and I can
start looking at the colors and looking at the values
of everything and unifying it so it
all looks like one cohesive piece.
You can see in the upper-left hand corner
that's our tileable texture for our floor.
To the right of that is our tileable texture for our wall.
You can see we have pieces for the banner, more organic
pieces on the right-hand side for our sculptural pieces
like the Gargoyle head or the sculpture
of the knight holding a sword.
Bottom left-hand corner we have pieces that
we're constantly reusing on doorways, on bridges,
so when we start to make a new piece
we can take all of these things
and know that we have a little bit of the stairs.
We have a little bit of wall.
We can combine them all into one object and load
that game and it's very fast and very efficient.
I'm a little bit obsessive about building on the grid.
It's a very efficient way to do
both 2D artwork and 3D artwork.
We use the grid for our interface, for a lot of our
models, and for how we put our texture atlases together,
so you can see this is all on a power-of-2 grid,
or the majority of it is,
and one of the reasons why we do this is
to optimize our texture as much as we can.
We don't want a lot of wasted space in our texture.
Every time you waste space in a texture you're
losing an opportunity to make your game cooler.
So we've tried to use almost every single
pixel we possibly can in this texture
because it's going to be loaded into our game.
We also build on the grid to get a
consistent texel density in our dungeon.
So when you see a doorway next to a wall
that has the exact same texel density
so the details are going to match up exactly.
So you can see on the upper left, that is a 2D texture.
It's on a 256x256 grid.
It's from our texture atlas.
We're applying that to a polygon that's in our
3D art package that's also on a 256x256 grid
and that's being rendered in the world at
approximately 256x256 pixels on screen.
This is very efficient.
This is a great way to work because you're
not putting a super large texture in an area
where you're going to waste that resolution.
So if we were to map our floor with a 2048x2048 texture
and you're only going to see that on screen at 256x256,
then that's a lot of memory that's wasted.
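The waste is easy to quantify: an uncompressed RGBA8 texture costs width times height times 4 bytes, so a 2048x2048 map costs 64 times the 256x256 it would actually be displayed at. A quick sketch of the arithmetic:

```c
/* Bytes for an uncompressed RGBA8 texture: 4 bytes per texel. */
static unsigned long texture_bytes_rgba8(unsigned long w, unsigned long h)
{
    return w * h * 4ul;
}
```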
Now given that this is a 3D environment,
a 3D world you're not going
to always be rendering stuff on
screen at that exact texel density.
When things go farther away from the camera
they're going to get tighter,
and when they get closer to the camera they're
going to get a little bit blurrier.
So this is just a good rule of thumb to use, so you're
not putting an exit sign on a doorway at 2048x2048
and basically wasting a lot of your memory.
So let's start building a dungeon.
This was one of the first textures and the first
pieces that we built when we were building the dungeon.
It's on a 256x256 grid.
It's 256x256 and it's one of our floor textures.
We took it and we started tiling it.
Because it's geometry we can flip it, we can
rotate it and we can orient it in such a way
that we're getting rid of a lot of that tiling pattern.
We're going to add some baseboard trim.
This is going to be used to kind of soften
the transition between the floor and the wall.
It's also on the grid, a little bit crazy about
the grid and we're going to add some walls.
Because these pieces are tileable, we can make as
big a room as we want, as small a room as we want,
and later on I'll show you even more of the
pieces we use to add a lot of variation.
This is just an outline of how the pieces fit together.
Again, because we have all of our vertices on
the grid, these pieces fit together
like puzzle pieces, so we don't see any gaps.
We don't see any holes in our geometry.
There are no weird rendering artifacts.
It just all comes together.
Now the grid is also just a suggestion.
Because we're building a very you know square very man-made
structure like a dungeon it makes sense to build on a grid.
Other things like sculptures, stairs, the broken
pieces on the stairs; they're not on the grid.
They don't need to be on the grid.
We wanted to kind of give more of an organic feel
to some of the areas so those are completely off.
Although the main structure of the stairs,
the height and width of the stairs is
on the grid and that's for animation purposes.
If we had an animator who wanted to animate a
character walking up the stairs, instead of doing a lot
of kinematics we could basically tell
him or her the width
of the stairs and they could animate to that scale.
If we were to make an entire game with that
constant scale of stairs, we could use it on everything,
even though the look will change,
the animation will always sync up.
You won't see any clipping.
It'll look pretty good.
So here is just a shot of a few of the
pieces that we used to build the dungeon.
You can see some grates on the floor, some
doors, doorways, parts of our sculpture.
Initially when we were building these pieces, we
were thinking, especially with Jeff and Graeme,
that we would take these and do a random-map dungeon, where
we'd apply a set of rules through code, hit a button,
and it would give us our own unique dungeon
each time we started the game.
We didn't have enough time to implement that but
it might be an idea that we use in the future.
Art efficiency.
Like Jeff said earlier the initial character that we
did, we did pretty fast and it was way above 3,000 polys.
It had way too many bones, actually had
individual fingers that could animate.
He had a tongue.
He had some teeth, eyeballs and it was way too much.
It was overkill.
On our devices it looked beautiful but given the camera
view of our game you're not going to see any of it.
So we took another pass on him
and we just got rid of it all.
We took polygons out of the face.
We got rid of his fingers.
We tried to give the player the maximum amount of
quality in art with the minimum amount of resources.
Efficiency is always important even if you had an
unlimited amount of RAM and video memory and whatever,
you still have to take the time to create this.
So working with resolutions that are a little bit
lower will actually save you a lot of time as well.
Here's a shot of a hand, and it's
kind of hard to see, but you can see
where we just completely cut out all the fingers.
Our hero grabs a sword.
He doesn't do any pointing, barely any waving.
He needed a mitt, and that mitt is all
the player is going to notice on that character.
So, got rid of all those polys
and made it a lot more efficient.
Next gen graphics are awesome.
I love them.
They make games look amazing but
they also come with a huge tradeoff.
If you want to cut your texture memory in
half, then you might be able to add more maps
and that might make your game look awesome but you have
to understand these tradeoffs before you start jumping
into using things like normal maps or ambient occlusion maps
or realtime mesh displacement,
dynamic lighting, dynamic shadows.
Some of the most successful games out there
right now are using very little technology.
Now, it's not to say that there's not a lot of
really awesome games that use a lot of technology too
but you really need to pick what's best for your game and
what your audience is going to notice.
If it's very subtle, like a subtle specular map,
and it's causing you a lot of frame rate issues,
then maybe that feature is just
not that important for your game.
So in closing I'd just like to say that a lot
of the stuff that we've talked about is not new.
The gaming industry has been using it for many, many years.
It's tried and true methods and at the same time we're
not saying that this is how you have to make games.
These are just some ideas that might help you
optimize, might help you make better games.
If you were making a Gumby surfing game, building on
the grid might not make any sense at all, but hopefully
we've given you a little bit of
information so that you can continue
to make awesome games and we can continue playing them.
So, thanks for your time.
[ Applause ]
>> Graeme: Thanks Pete.
I think you did awesome, huh?
So, the one dozen things that we learnt whilst we were
making Quest that I think can really apply to your game,
that you can take home tonight, and hopefully
come and talk to us about in the labs tomorrow.
Number one is optimizing assets.
As Pete just showed, optimizing your assets not only
increases your frame rate, it decreases your load time,
and it makes a tremendous difference to your final
frame rate and your production schedule.
If you're making, you know, 3,200-polygon models,
it's going to take a little while longer
to keep going with all the characters if
that's the resolution that you choose.
Choose a careful resolution.
Stick to it.
Optimization also extends out to things like texture
atlases and being able to use a single texture for most
of your level, and as Pete pointed out, a texture
atlas that is full is a happy texture atlas.
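As a sketch of how a texture atlas gets used at runtime, here is what the lookup could look like, assuming a hypothetical row-major 4x4 tile layout (not Quest's actual atlas): each tile index maps to a UV sub-rectangle of the one shared texture, so every object can be drawn without rebinding.

```c
#include <assert.h>

/* Map a tile index in a 4x4 texture atlas to its UV sub-rectangle.
   Hypothetical layout: tiles packed row-major, each 0.25 UV units wide. */
typedef struct { float u0, v0, u1, v1; } UVRect;

#define ATLAS_TILES_PER_ROW 4

UVRect atlas_uv(int tile)
{
    float step = 1.0f / ATLAS_TILES_PER_ROW;
    int col = tile % ATLAS_TILES_PER_ROW;   /* column within the atlas */
    int row = tile / ATLAS_TILES_PER_ROW;   /* row within the atlas    */
    UVRect r = { col * step, row * step,
                 (col + 1) * step, (row + 1) * step };
    return r;
}
```

Meshes then carry these remapped UVs, so one texture bind serves the whole level.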
Optimization also applies to shaders.
Our first shader was awesome.
We could have stuck with it.
We really could have cut down the number of polygons and
finally gotten that thing up to 30 Hertz, and we'd have ended
up with BoxQuest, but we didn't stop there.
We optimized.
We came up with a way to produce lightmaps that
could actually work with a simple multiply.
The challenge is never in really getting the right results.
The right result is always there.
The challenge to game developers is always to think
about how to get to the right result quickly.
So by ramping up our diffuse map and making it much, much
brighter in the very beginning, we were able to do a lot
of the pre-computing ahead of time, and that made a
difference in our shader, bringing it down from the 8 lines
of code to the 3 lines of code that it finally
was, and increased our frame rate dramatically.
It also gave control back to the artist, who can
then go crazy with the higher-value diffuse texture.
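The trick can be sketched as plain C doing the per-texel math the shader would do: multiply a pre-brightened diffuse sample by a 0-to-1 lightmap value and let the hardware clamp. The values and names here are illustrative, not the actual Quest shader:

```c
#include <assert.h>

/* The per-texel work of a lightmap-multiply shader: one multiply of a
   pre-brightened diffuse sample by a 0..1 lightmap value, then a clamp
   the hardware gives you for free. Illustrative values, one channel. */
static float shade(float diffuse, float boost, float lightmap)
{
    float c = diffuse * boost * lightmap;   /* the single runtime multiply */
    return c > 1.0f ? 1.0f : c;             /* clamp to displayable range  */
}
```

Because the boost is baked into the texture offline, the runtime cost is just the one multiply per texel.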
Something else: it's a game.
Now if you have been to Hollywood or seen a Hollywood
set you know that Hollywood sets are just these facades.
They're just there to be put in front
of the camera and what a lot of us do
when we make our world is we make these fantastic
3D worlds that you could really walk around
and look in any direction and there's stuff there.
There's trees behind me.
There's all sorts of things going on, and you have
to remember that a facade is just a facade.
Build only what the camera sees and in
Quest we built only what the camera sees.
If there's a door, there are no
polygons on the other side of the door.
If there's a wall and you can't see the ground
behind it, that ground is not even there.
It doesn't matter that it's not there,
because your game players will never even see it.
It does, however, make a tremendous difference
to your bottom line and your frame rate.
So optimizing, always think about
your level as a Hollywood set.
It's completely and utterly fake.
Memory is finite on your device.
I want to have the most wonderful
looking game in front of me.
I want my menu to be great.
I want a large Quest logo on the screen.
I want to use an interface that looks beautiful.
I also want to stream my level in so that
my level has the most polygons I can get
onto it in a particular part of the level.
It doesn't matter that it's going
to be gone in the next thing.
I'm going to get rid of it and eject it out of memory.
As soon as this first user interface
is gone I'm getting rid of it.
I'm freeing it.
I want that memory back so I can have particle systems.
I want that memory back so I can have two more bad guys.
Purge assets which are not in use,
reload them when you need them again.
It lets you be a much better memory
citizen and get a lot more into your game.
If you're trying to load your entire game at once,
including the UIs and all, you know,
that wonderful settings screen
the user only ever sees once,
then you're not doing your game or your players any service.
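A minimal purge-and-reload sketch, assuming a hypothetical asset API (not the Quest engine's): free an asset the moment it is off screen and lazily reload it on next use, so the memory is available for particles and bad guys.

```c
#include <stdlib.h>
#include <assert.h>

/* Purge-and-reload cache sketch. Hypothetical API: in a real engine,
   asset_load would read from disk instead of just allocating. */
typedef struct {
    void  *data;   /* NULL while purged */
    size_t size;
} Asset;

static size_t bytes_in_use = 0;   /* total resident asset memory */

void asset_load(Asset *a, size_t size)
{
    if (a->data) return;          /* already resident: no-op */
    a->data = malloc(size);
    a->size = size;
    bytes_in_use += size;
}

void asset_purge(Asset *a)
{
    if (!a->data) return;         /* already purged */
    free(a->data);
    a->data = NULL;
    bytes_in_use -= a->size;
}
```

The menu textures get purged the moment gameplay starts, and reloaded only if the player returns to the menu.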
This also applies to texture size.
As Pete said, we see textures which are 1024x1024
hidden at the end of the level just to show a little exit sign.
I've actually seen that.
It's taking up all this memory, it's taking up all this
bandwidth as it's rendered because the renderer has
to go through it, and mipmaps get made for it,
using even more memory.
A 1024x1024 texture there is ridiculous.
Look at how your game is being
rendered and look at that texel density.
Texel density is tremendously important.
You want your texture map to be represented in
your 3D world as your artist has authored it.
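Texel density is just texture pixels per world unit on a surface; the numbers below are illustrative, not from the Quest assets:

```c
#include <assert.h>

/* Texel density: texture pixels per world unit. If a 256-texel-wide
   texture spans a 4-meter wall, that's 64 texels per meter; keeping
   this roughly constant across the level keeps detail consistent. */
static float texel_density(float texture_pixels, float world_units)
{
    return texture_pixels / world_units;
}
```

A surface whose density is far above its neighbors is a candidate for a smaller texture.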
One way that you can really optimize both memory
and loading time is to use compressed textures.
Compressed textures are 1/16th the size of the uncompressed
texture and as you can see here on our wall texture,
the uncompressed versus the 4-bits per pixel versus
the 2-bits per pixel there's almost no difference
and something else that's key here with our texture is
that we are going to take this and multiply it
by some lightmap value that's between 0 and 1.
We're going to darken it anyway, just in the name
of making something that looks good in the game,
so this texture will never be seen as-is in the
game world once we're finally there.
Texture compression is a great way to
load assets faster and get into your game
quicker, while saving a whole bunch of memory.
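The 1/16th figure follows directly from the bit depths: uncompressed RGBA is 32 bits per pixel, and the smallest PVRTC mode is 2 bits per pixel. A quick sketch of the math:

```c
#include <assert.h>

/* Memory for a w x h texture at a given bit depth: uncompressed
   RGBA8888 is 32 bpp; PVRTC comes in 4 bpp and 2 bpp variants. */
static unsigned texture_bytes(unsigned w, unsigned h, unsigned bpp)
{
    return w * h * bpp / 8;
}
```

So a 1024x1024 texture drops from 4 MB uncompressed to 256 KB at 2 bits per pixel, exactly the 16x saving mentioned above, before mipmaps are even counted.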
We talked about game loops and how important
it is to do the game loop in the right order.
Load up that GPU.
Then handle user input.
Then let the aliens update and invade the world because you
prepositioned your little base to fire back just in time.
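The ordering above can be sketched in C; the function names are illustrative stubs, not the actual Quest game loop, and here they just record the order they ran in:

```c
#include <string.h>
#include <assert.h>

static char log_order[64];   /* records the order the phases ran in */

/* Stubs standing in for real engine work; names are illustrative. */
static void submit_draw_calls(void) { strcat(log_order, "gpu "); }
static void handle_user_input(void) { strcat(log_order, "input "); }
static void update_world(void)      { strcat(log_order, "update "); }

/* One frame: kick the GPU first so input handling and world
   simulation on the CPU overlap with the GPU rendering. */
void game_frame(void)
{
    submit_draw_calls();
    handle_user_input();
    update_world();
}
```

Submitting the draw calls first means the CPU phases run for free while the GPU is busy, instead of serializing the two.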
Multiple devices.
You might think that screen size and resolution mean exactly
the same thing, but in fact they mean two different things.
Screen size on our devices is different.
I'm going to hold the iPad in my hand very differently
than I'm going to hold the phone or the iPod Touch.
I might hold one in one hand and one in two hands.
You have to be thinking about what that means
for your game design if you're going
to implement the same game on different devices.
It's going to mean different user interfaces.
It's going to mean different controls.
The resolution difference is also
something you're going to have to deal with.
A low-resolution game running on the iPod Touch needs to
be up-res'd and look fantastic on the iPhone 4, and I have
to tell you the full resolution game on the
iPhone 4 is like nothing you've ever seen.
When you hold that in your hands,
oh my God, that thing's a kicker.
Absolutely the most beautiful thing
I have ever worked on in my life.
Then the iPad: an amazing device, but I hold it
differently and I play games on it for longer,
because I'm sitting on my couch
and my wife's watching House.
I don't like House because there's blood and
I'm going to play a game for a whole hour.
I'm not the only one that does that.
I know that.
So games play very different on devices too.
Think about the capabilities.
Now, I already gave you my little rant on
playtesting, so I'm not going to reiterate my rant.
Actually, I am. I'm going to reiterate my rant.
Watch other people play your game.
Exceptionally important.
Take their input and apply it to your game.
Always remember that you are not there
when someone else buys your application.
They're going to buy your application.
They're going to hit the button and they're going to wonder
what to do so you might as well get that feedback ahead
of time and watch someone play your game.
Do that constantly.
One of the things that game developers strive for
is reality and oh my God we're so big on that.
We must have real physics in our games with 3D rigid bodies
and inverse kinematics and all these fantastic things
that we can put into games now, you know,
because it goes on the back of the box, right?
It goes in the iTunes Store description.
Real 3D physics.
Real lighting.
I want the perfect full photon tracer that does accurate
lighting on my environment and makes a tremendous difference
to how my character looks, and does the whole diffuse
shadow behind me that spreads out just right,
with, you know, all kinds of cool shadows.
Games are not reality.
That's all fabulous.
I'm here to tell you that's all fabulous
but if your game runs at 4 frames a second
because you have 3D physics running with absolutely
fantastic lighting with super realistic shadows, guess what.
Your game's not fun.
Games are like Hollywood and like Hollywood
movies they need bright lights shone
onto them to look dramatic and look cool.
You need to be thinking about physics
that operate within your game environment.
If we were to apply that to Quest, one of
the things you may have noticed in Quest is
that I'm walking on a flat ground plane the whole time.
So therefore a 2-dimensional physics
system would actually probably do me fine.
I really don't need a 3D physics system.
2D physics would actually do fine, and even then,
because I like frame rate, what I'd try is to get my artist
to animate something to fake the physics
completely, so there is no physics in my game
but it sure as heck looks like I have it.
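On a flat ground plane, collision can be sketched entirely in 2D by dropping the vertical axis and testing circles in XZ; this is an illustrative sketch, not Quest's actual collision code:

```c
#include <assert.h>

/* 2D collision on a flat ground plane: ignore the vertical axis and
   test circles in the XZ plane. Squared distances avoid the sqrt. */
typedef struct { float x, z, radius; } Body2D;

int bodies_collide(Body2D a, Body2D b)
{
    float dx = a.x - b.x, dz = a.z - b.z;
    float r  = a.radius + b.radius;
    return dx * dx + dz * dz < r * r;   /* overlap if closer than radii sum */
}
```

One comparison per pair, no rigid-body solver, and on a flat floor the player can't tell the difference.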
Lighting, I think, as we showed: until you saw how the
shader worked, I think a lot of you thought, oh boy,
they have real per-pixel lighting in their game.
That's absolutely fantastic.
You couldn't tell the difference at our angle and
our gameplay what our lighting algorithm actually was
but it looked real and that's all that matters.
It's an illusion.
Real lighting in some games makes a tremendous difference.
If it was a first-person, you know, version
of Quest, I would be thinking, wow,
maybe I'll get a real lighting algorithm here with
real shadows to make a real dramatic difference.
You know what?
My polygon count is very different there.
The way I use memory in the world, and the aim
I'm going for emotionally: exceptionally different.
So always remember games are not reality.
What you're faking needs to be fake.
Rendering the world.
We see an awful lot of people make 10,000 draw calls per
frame and wonder why their app goes at 4 frames a second.
They have, you know, tens of thousands
of draw calls that they make into GL
and think that their app should run at 60.
Well I'm here to tell you you're orders of magnitude off.
You should be thinking in terms of 10 draw calls per frame.
How many of you have 10 draw calls per frame in your game?
How many of you have more than 100?
The more than 100 crowd got some work to do.
Ten draw calls a frame is how you're going to bring
your game frame rate up to where it needs to be.
You need to be looking at the tricks we
have told you today with texture atlases.
You need to be looking at how you can load your level and
use glDrawArrays as a huge boost to actually get
as many polygons on screen as possible,
using all the features that everyone's going
to be telling you about this week in the GL sessions.
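As a rough sketch of why atlases cut draw calls: if a draw call is needed for each texture bind, then objects grouped by atlas collapse into one call per atlas. This C sketch only counts calls (in a real renderer each batch would be one glDrawArrays over a concatenated vertex array); the names and the 8-texture limit are assumptions for illustration:

```c
#include <assert.h>

/* Count draw calls needed when each distinct texture costs one bind
   plus one glDrawArrays. Objects sharing an atlas share a call. */
static int count_draw_calls(const int *texture_of_object, int n)
{
    int calls = 0;
    int seen[8] = {0};                    /* up to 8 atlas texture ids */
    for (int i = 0; i < n; ++i)
        if (!seen[texture_of_object[i]]) {
            seen[texture_of_object[i]] = 1;
            ++calls;                      /* one bind + one draw per atlas */
        }
    return calls;
}
```

A hundred props spread over two atlases is two draw calls, not a hundred, which is how you get from the thousands down toward ten.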
We did a lot right in Quest I think
given that we had two months.
One of the things that really proved
out was our data driven pipeline.
We had to change our variables on the fly.
Getting that power to the artist, giving that power to
the game designer to be able to load things on the fly,
drop assets in, data driving our game pipeline made a
tremendous difference towards us getting this game finished
on time.
Collada was a fantastic format to export our
3D assets in and load them on the devices.
Collada is easy to parse, easy to manipulate,
and it made writing the scene graph based 3D
system that we used for Quest really easy.
We cheated in the right places.
We cheated with the lights.
We cheated with the lightmaps.
We cheated in the right places to achieve the right effect.
Finally, we used Apple technologies.
We trusted that UIKit and AVFoundation,
OpenGL ES 2.0, GameKit,
all those fantastic frameworks
would work as advertised, and guess what?
They do. Of course it was not all roses.
There were many late nights on this project.
One of the things that we initially did was we promised
Pete, our artist, we're going to give you fantastic tools.
Well, the number of tools we came up with was exactly zero
and we probably should have spent
a little bit more time on tools.
We wrote a fantastic particle system, a parametric
particle system that you just had to pass time into,
and it would render out these fire things, splines and
everything, and I thought my programmer art looked good,
but Pete tells me it was really bad. And we wrote
this world space particle system where fireballs could pass
in the sky and just reflect in the ground, but we didn't
provide Pete with a tool to edit particles.
We should have spent more time upfront writing
tools and thinking about our asset pipeline.
Our asset pipeline is still not perfect.
If we had written those tools, maybe we would have started
to write something that loaded the particles directly
into the game and dropped them in via
the HTTP connection that we did build.
We started to build the HTTP connection but
we didn't actually go ahead and complete that.
We probably scoped a little bit on the impossible side, but
this is one of the things I actually liked about doing it.
The challenge was exciting.
Make something that looks super fantastic,
sexy in two months and we did it.
So I'm going to say that we did that right,
but it was frightening, let me tell you,
when you drive home at night saying, well,
we're going to make Quest in two months.
Finally, a lot of people ask us are we
going to make this into a real game?
And the answer to that is no.
We're not.
Quest was only made to be shown here this morning to all of
you and to be used for the sessions here throughout the week
and we'll be repeating on Friday so I guess that counts.
We're already onto the next thing.
We're onto our next two months and I've
got to tell you that's even better.
We read a few books and magazines.
We want to share a few of those with you.
Game Developer Magazine.
How many of you read that?
It has these fantastic post-mortems every single
month of games coming out on all the platforms,
and we modeled our talk today a little
bit on those post-mortem series.
I love that it tells the story of what went right, what went
wrong, the approach they took to managing their project,
the approach they took to getting
the thing out in the store,
game design elements that work,
game design elements that didn't.
That is a fantastic series and worth getting
that magazine monthly just for that alone.
If you're a programmer you should have the
Graphics Gems series right there on your shelf.
If you want fast ray intersection,
you know that's in Graphics Gems 1.
If you want, you know, fast axis-aligned bounding box
to bounding box tests, Graphics Gems 2.
You kind of learn where those tricks are.
Those fast algorithms, you want to get
those books, have them on your shelves.
Same thing applies to the Game Programming Gems
series, which kind of has much larger systems in it.
If you want to think about terrain systems.
If you want to think about you
know BSP rendering environments.
A lot of that is in there and a lot of those stories in
there are also very good about the games they play too.
We recommend a whole bunch of sessions for you to go to,
especially the Game Center sessions,
to learn about that framework.
It is going to become extremely applicable later this year
when millions of people have access to sign up for accounts
and make Game Center the home for their games.
A lot of the OpenGL sessions this
week are absolutely fantastic;
absolutely incredible stuff is happening there.
If you go to those you'll see something incredible.
Core Animation was used for everything in our game: to drive
the UI, to drive our characters, to drive anything animated.
It's a fantastic framework.
AVFoundation, being able to use that for
augmented reality applications as well as sound;
I mean, just the access you get from
AVFoundation in iPhone OS 4 is absolutely incredible.
But the most important thing I can tell you is to have fun.
If you are a game developer and you are not
having fun, you are in the wrong industry.
Let me tell you.
If you're not coming to work and thinking this is the
best thing ever, what the heck are you doing making a game
because your game isn't going to end up as
fun and you're not going to make any money.
Make something that you think is fun
and that you are passionate about.
You have to be thinking every single day oh
my God I'm working on the coolest thing ever!
Look! I want you to show me that
stuff in the labs tomorrow, and I want
to see how excited you all are
about making games on our devices.
The only way you're going to get that
extra little bit of magic into your games is by having
that passion and by really applying
that passion to the games.
That extra little 5% you had because you had fun
making the game and you had fun playtesting it
and you had fun showing it off makes a difference.
It's the difference between an application that
falls flat and an application that sparkles.
So if you're in the game industry make sure you
have fun making games and absolutely finally,
if you have any questions please send an email to our
Graphics and Game Technologies Evangelist Allan Schaffer.
Thank you.
It's been great talking to you today.
[ Applause ]