WWDC2015 Session 608

Transcript

>> BRUNO SOMMER: Hello
everyone and welcome.
My name is Bruno Sommer,
I'm a game technologies
engineer here at Apple.
And today I'm very
excited to be able
to introduce you to GameplayKit,
Apple's first dedicated
gameplay framework.
We have a lot of solutions for
the visual part of making games
on our platforms, things like
SpriteKit, SceneKit, and Metal.
The gameplay is another
really important part
of that game development puzzle.
As it turns out, there are hard
problems in the gameplay space,
things like AI, pathfinding,
autonomous movement.
We firmly believe
that experience shouldn't
be a barrier that prevents
our developers from making
great and compelling games.
So going forward we want
you guys to be able to focus
on bringing your
cool ideas to life.
And we'll do the heavy lifting
on the back end to
make that happen.
So our mission when we set
out to make GameplayKit
was very clear.
We wanted to make a simple
yet powerful API
of gameplay solutions.
Now these are things like
common design patterns
and architectures so we can
all start speaking the same
gameplay language.
And there's also a number of
standard gameplay algorithms
that are applicable to a
wide variety of game genres.
And it is also very important
to us that this remains graphics-
and engine-agnostic, so
while GameplayKit is separate
from a lot of those visual
frameworks I talked about,
it plays really nicely
with them.
It plays nice with SpriteKit,
SceneKit, Metal, and more.
So here we have GameplayKit
and the seven major
features that make it up.
And these are components
which are a really great way
to structure your game
objects and game logic.
State machines, which describe
the statefulness in our games
and the various state
changes of our game objects.
Agents, which are
autonomously moving entities
that are controlled by
realistic behaviors and goals.
Pathfinding, which deals with
navigation graph generation
and how we move our entities
between the passable
areas in our game world.
We also have a great
MinMax AI solution
which is a really
great way to give life
to our computer-controlled
opponents.
There is a number of game
quality random sources
and random distributions
at your disposal.
And last we have rule systems,
which are a really great way
to model discrete
and fuzzy logic.
There's a lot to cover today.
Let's go ahead and jump right
in with entities and components.
I want to pose sort of this
classic problem with inheriting
from common game objects.
Here we have a tower
defense game
with a simple projectile
tower and archer classes.
We have some shared
functionality here.
We have shooting,
and we have moving,
and we have being targeted.
Let's take shooting for example.
We want towers and archers
to both be able to shoot.
Where then do we put
our shoot function?
One option might be to
simply copy and paste it
between the tower
and archer classes,
but now I have two spots in my
code that share functionality,
and if I ever want to
update that functionality,
there is now two spots
where I need to update it.
And if I only update it in
one, I'm undoubtedly going
to get some really
weird behavior.
So our only real option in
this inheritance model I've
described, is to move
shared functionality higher
in the tree.
So here we have a shoot
function we might put it
in the game object class
or some common-based class.
Now the problem with this
approach is that as we get more
and more shared functionality
in our games we're forced
to move it higher and
higher in the hierarchies.
And our basic game objects
become anything but basic.
They become large and hard to
understand, hard to maintain,
and hard to collaborate on.
Let's take a look at how
we solve this problem using
entities and components.
You see here we still
have our three objects:
projectile, tower, archer.
But now instead of them
having functionality
in an inheritance sense, being
a mover, being a shooter,
or being targetable,
they instead have these
objects we call components
which encapsulate singular
elements of our game logic,
so here we have a MoveComponent
that deals with moving,
a ShootComponent that deals with
shooting, and a TargetComponent,
what it means to be targetable.
So we gain these really
nice little black boxes
of singular functionality,
that are loosely rather
than tightly coupled
with the hierarchy.
So we see now that entities
and components are a great way
to organize our game logic.
For one, they're
easy to maintain
because they're these
nice black boxes
of encapsulated functionality;
they tend to be simpler.
We also have a really
nice collaboration
with the entities
and components.
Now I can have one developer
work on one component
and another developer working
on yet another component,
and they don't necessarily need
to know the intimate details
between these components.
We also get nice
scaling with complexity.
What I mean by that is,
in that class
and inheritance model, my
hierarchy grows wide and tall
as my game gets more complex.
With entities and
components it just grows wider,
and that width is no
longer a detriment.
It's really a toolbox.
Any time I want to make a new
entity in the game I simply look
at the components
I have available,
choose the appropriate ones or
perhaps implement a new one.
And with entities and components
we get really easy access
to dynamic behavior.
Let's think back to the
tower defense example.
Perhaps I want to implement a
magic spell that roots archers
to the ground so they
can no longer move.
One way to represent
this might be
to simply temporarily
remove its MoveComponent.
This implicitly tells
the rest of my game
that it can no longer move.
And I get the added benefit of
the rest of my game not needing
to know the intimate
details of magic spells.
So let's go ahead and take
a look at the classes.
Here we have GKEntity.
This is our entity base class,
and it's really just a simple
collection of components.
It has the functions to add and
remove components dynamically,
as my entity's functionality
undoubtedly changes.
It also lets me access existing
components by unique class type.
When I call update on
my GKEntity it's going
to automatically update all
of the components that it has.
So thinking back to that
example, projectile, tower,
and archer would
all be GKEntities.
Here we have our
GKComponent class.
Now you subclass this
any time you want
to add those singular bits of
functionality to your game.
And you do that in
a number of ways.
Properties on your components
become state information
about those components.
So you can imagine the
ShootComponent here would likely
have a damage property
that describes how much
damage its projectiles do.
You also implement
custom selectors
that extend functionality and
tell the rest of your game how
to communicate with
your component.
So the MoveComponent here for
example would likely have a move
to position function that
you would call from the input
or game controller code.
As I mentioned before components
are automatically updated
by their entity's update and
you can optionally implement any
time based logic in
updateWithDeltaTime.
So undoubtedly a need will arise
where you need finer control
over the order or how
your components update,
and for that we're
providing GKComponentSystem.
This is a collection
of components
from different entities, but
they're all the same class type.
And you use this when update
order is somehow intrinsically
important to your game.
Perhaps I want to update
AI after my movement code
because I want my AI
to deal with the most
up to date position
information available.
And it's important to note that
the components that are placed
in these component
systems no longer update
with their entities update.
It is up to you to call the
component systems update
at the correct time to
update all these entities.
So thinking back to
our example again,
we probably have a
move system which would
in turn have all the move
components in my game,
and I can use that to
synchronize the movement
between my various entities.
So lastly we have a code
example of what using entities
and components in
GameplayKit looks like.
You see at the top here I'm
going to make my archer entity,
and then I'm going to make
the three components that make
up being an archer:
the MoveComponent,
the ShootComponent, and
the TargetComponent.
And add those to my archer.
Then I'm going to make that
moveSystem we talked about,
passing in the MoveComponent's
class,
indicating this component system
only deals with MoveComponents.
Then I'm going to add my
archer's MoveComponent
to that moveSystem
and I'm good to go.
This archer and moveSystem
are ready for use in my game.
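To make that concrete, here is a minimal, framework-free sketch of the entity/component pattern in plain Swift. These classes are stand-ins for GKEntity and GKComponent, not the real GameplayKit API, and the component details are invented for illustration:

```swift
// Plain-Swift stand-ins for the GKEntity/GKComponent pattern
// (illustrative, not the actual GameplayKit API).

class Component {
    weak var entity: Entity?
    // Override to implement time-based logic, like updateWithDeltaTime.
    func update(deltaTime: Double) {}
}

class Entity {
    private(set) var components: [Component] = []

    func addComponent(_ component: Component) {
        component.entity = self
        components.append(component)
    }

    // Dynamic removal enables tricks like the "root" spell: removing the
    // MoveComponent implicitly tells the rest of the game it cannot move.
    func removeComponent<T: Component>(ofType type: T.Type) {
        components.removeAll { $0 is T }
    }

    // Access an existing component by its unique class type.
    func component<T: Component>(ofType type: T.Type) -> T? {
        components.first { $0 is T } as? T
    }

    // Updating an entity updates every component it owns.
    func update(deltaTime: Double) {
        for component in components { component.update(deltaTime: deltaTime) }
    }
}

// Invented example components for the archer.
final class MoveComponent: Component {
    var position = 0.0
    var speed = 2.0
    override func update(deltaTime: Double) { position += speed * deltaTime }
}

final class ShootComponent: Component {
    var damage = 10
}

// Assemble an archer from components, as in the talk's example.
let archer = Entity()
archer.addComponent(MoveComponent())
archer.addComponent(ShootComponent())
archer.update(deltaTime: 1.0)
```

The real GKComponentSystem would additionally collect all the MoveComponents from different entities so their update order can be controlled centrally, as described above.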
So that's entities
and components.
So let's move on
to state machines.
I'm going to start with
another example here.
Let's imagine some game where
the player is being chased
by ghosts, and sometimes he gets
to take a power-up and chase,
and maybe defeat
the ghosts instead.
Here's an example of what
a state machine to control
that ghost might look like.
You see here we have
the four states
that a ghost can ever be in,
chase for when the ghost
is chasing the player,
flee for when the player
is chasing the ghost,
defeated for when the ghost
gets caught and gets defeated,
and respawn sometime after the
ghost is defeated before it
comes back to life.
Now it's important to
note here that only some
of these state transitions
are valid.
You see I move between chase
and flee interchangeably here.
This makes sense based on
the game I just described,
sometimes the ghost
does the chasing
and sometimes the
player does the chasing.
And of course we only go
to defeated from flee,
this is the only time that the
player can actually defeat the
ghost, when he has that power-up
and is chasing the ghost.
Then we go from defeated to
respawn, this again makes sense,
and after we respawn
we go right into chase.
This is our initial state.
After ghosts respawn,
they go right back
to chasing the player.
So why are state machines so
important in game development?
Well for a lot of games
they're the backbone
of many gameplay elements.
A ton of our common gameplay
elements are full of state,
things like animation,
AI, UI, levels.
Anyone who's tried to bring
life to a humanoid character
in a game is undoubtedly
familiar
with the state machine
on the right.
We usually have an
IdleAnimation,
and a MoveAnimation,
and an AttackAnimation,
and move between them
in meaningful ways.
So because this pattern is
so pervasive in our code,
we reimplemented it a lot, to
what amounts to little more
than boilerplate, and
this takes the form
of really big switch
statements or if-else trees.
What if we could come up with
some common implementation
to remove that boilerplate, add
a little bit of maintainability,
and give us the benefit of
being able to reuse our states
and state machines
throughout our game.
That's what we've
done in GameplayKit.
So let's take a look
at the classes.
Here we have GKStateMachine.
This is your general
purpose finite state machine.
And what I mean by
that is it's in one,
and only one, state at any given time.
And it possesses all of the
states that it can ever be in.
You call enterState
on our state machines
to cause those state
transitions I was talking about.
And what happens under
the hood, is it checks
if that transition is valid,
and if so, makes that change.
And it calls a number of call
backs on the state objects.
We exit the state we were in, we
enter the state we're going to,
and we update the current state
that the state machine is in.
So in that ghost example
we'd probably have
a GhostStateMachine.
It would in turn have those four
states we were talking about.
Here we have our
GKState abstract class.
And you implement
your state based logic
in a number of callbacks.
We give you an enter callback
when the state is being entered,
an exit callback when
we're leaving the state,
and an update callback when
this is the current state
in the state machine.
As I mentioned before,
they're automatically called
by the state machine at
the appropriate time.
You can optionally override
the isValidNextState function
to control the edges
of your state graph,
those valid transitions
I was talking about.
Now by default, all of
these edges are valid
but undoubtedly you'll want
to use the dynamic internals
of your states to decide which
of those transitions are valid.
So these four ghost states we
talked about: chase, defeated,
flee, respawn would all be
implemented as GKStates.
So I want to end
on an example here.
Let's implement
that GhostStateMachine
we just talked about.
At the top here I'm going to go
ahead and make my four states:
chase, flee, defeated,
and respawn.
Then I'm going to make
my state machine passing
in those four states,
those are the four states
that the state machine
can ever be in.
Then I'm going to go ahead
and enter the initial state
which is chase in this example.
We're good to go.
This state machine is
ready for use in our game,
and that ghost is going to
do exactly what we expect.
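That ghost machine can be sketched in plain Swift with stand-ins for GKStateMachine and GKState. The transition table mirrors the edges described above; everything else is invented for illustration:

```swift
// Plain-Swift stand-ins for GKStateMachine/GKState (illustrative only).

class State {
    // The edges of the state graph: which states may follow this one.
    func isValidNextState(_ stateClass: State.Type) -> Bool { true }
    func didEnter(from previousState: State?) {}
    func willExit(to nextState: State) {}
}

final class Chase: State {
    override func isValidNextState(_ s: State.Type) -> Bool { s == Flee.self }
}
final class Flee: State {
    override func isValidNextState(_ s: State.Type) -> Bool {
        s == Chase.self || s == Defeated.self
    }
}
final class Defeated: State {
    override func isValidNextState(_ s: State.Type) -> Bool { s == Respawn.self }
}
final class Respawn: State {
    override func isValidNextState(_ s: State.Type) -> Bool { s == Chase.self }
}

final class StateMachine {
    private let states: [State]
    private(set) var currentState: State?

    init(states: [State]) { self.states = states }

    // Enter a state only if the machine owns it and the transition is
    // valid; the exit/enter callbacks fire around the change.
    @discardableResult
    func enter(_ stateClass: State.Type) -> Bool {
        guard let next = states.first(where: { type(of: $0) == stateClass })
        else { return false }
        if let current = currentState, !current.isValidNextState(stateClass) {
            return false
        }
        currentState?.willExit(to: next)
        let previous = currentState
        currentState = next
        next.didEnter(from: previous)
        return true
    }
}

let ghost = StateMachine(states: [Chase(), Flee(), Defeated(), Respawn()])
ghost.enter(Chase.self)   // the initial state
```

GKStateMachine's real enterState works along these same lines: an invalid transition is simply refused.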
So that's state machines,
let's move on to agents,
goals, and behaviors.
So some concepts
before we get started.
What we call agents,
goals, and behaviors,
are really autonomously moving
entities that are controlled
by realistic behaviors
and goals.
They have a number of physical
constraints, things like mass,
acceleration, and inertia.
The behaviors that control
these agents are in turn made
up of a number of
goals, that you combine
with the appropriate weights,
to achieve some meaningful
autonomous movement functionality
in your game.
So why are agents so
important in game development?
I think a lot of games benefit
from really believable
realistic movement.
When our game entities
move in straight lines
and take turns instantly
and bump into obstacles
in our environment, it
doesn't look very real.
Movement in the real world
has things like inertia,
and mass, and acceleration.
And it correctly
avoids nearby obstacles
as well as other entities.
And when entities know
how to get from point A
to B they usually follow a
path, and they usually do
so smoothly rather than rigidly.
So here's an overview
of what we're giving you
in our agent system.
We have our Agent class, it
is controlled by a behavior
and it also has a delegate
that lets you respond
to changes in the agent.
These behaviors are in turn
made up of a number of goals
that you combine with
weights to achieve
that meaningful functionality.
You have a lot of goals at your
disposal: things like seeking,
and intercepting, avoiding
obstacles, and following paths.
Let's go ahead and take
a look at the classes.
GKAgent is a simple
autonomous point mass
and it's also a GKComponent
so it plays really nice
with our entity and
component systems.
And when you call update
on GKAgent it's going
to apply its current behavior.
What that does under the hood
is look at the goals that make
up its behavior
and calculate, along
with the weights, some total
change in acceleration necessary
to simultaneously meet
those goals as best it can.
It then uses that
change in acceleration
to change the agent's velocity,
position, and rotation.
Now GKAgent has a lot
of those physical constraints I
talked about, things like mass,
a bounding radius, max
speed, and max acceleration.
It is important to note that
these units are dimensionless
and very likely to be
game world specific.
So you can imagine a game on
the scale of kilometers is going
to have vastly different
numbers here
than a game that's
on the scale of feet.
So make sure you choose
the appropriate numbers
for your game world here.
Here we have our
GKBehavior class.
And it's a simple
dictionary-like container
of those goals.
It lets you dynamically
modify the behavior
as your game undoubtedly
changes, and you do this
by adding new goals,
removing existing goals,
and modifying the
weights on existing goals.
As I mentioned before, you
set a behavior on an agent
and that agent is good to go.
The next time you
update that agent,
it's going to correctly
attempt to follow that behavior.
So some examples of
what behaviors might be,
perhaps you want to
implement a flocking behavior,
to simulate the flocking
of birds in real life.
We may combine a cohere, a
separate, and an align goal
with the appropriate
weights to achieve that.
Or maybe I'm making a
racing game and want
to make a racing behavior
to control my race cars.
This might be as simple as
combining a follow path,
I want my race car to
follow the race track,
and an avoid other agents
goal, I want my race car
to avoid colliding with
the other race cars.
Here's a code example of
what making these behaviors
looks like.
You see the top, I'm going
to make a seek behavior,
I want to seek some enemy
agent in my environment.
I'm going to make an avoid goal,
I want to avoid nearby
obstacles.
Then I'm going to make
a targetSpeed goal.
I want my agent to accelerate
to and reach some target speed.
Then I'm going to make
my behavior passing
in those three goals with an
appropriate set of weights.
You see here I'm weighting
the avoid goal at a 5
because I definitely don't
want my agent to collide
with nearby obstacles.
Then I'm going to make my agent,
initialize it, set the behavior
on it, and this agent
is good to go.
The next time I call update
on this agent it's going
to correctly do what I expect.
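The weighting scheme can be sketched without GameplayKit: each goal proposes a steering change, and the behavior sums those proposals scaled by their weights. The vector math and goal logic below are hypothetical, in the spirit of GKBehavior and GKGoal rather than their actual implementation:

```swift
// A toy weighted-goal combiner (hypothetical, not GameplayKit's math).

struct Vector {
    var dx: Double
    var dy: Double
    static func + (lhs: Vector, rhs: Vector) -> Vector {
        Vector(dx: lhs.dx + rhs.dx, dy: lhs.dy + rhs.dy)
    }
    static func * (lhs: Vector, rhs: Double) -> Vector {
        Vector(dx: lhs.dx * rhs, dy: lhs.dy * rhs)
    }
}

struct Agent {
    var position: Vector
    var velocity: Vector
}

// Each goal proposes a change in acceleration for the agent.
protocol Goal {
    func steering(for agent: Agent) -> Vector
}

struct SeekGoal: Goal {
    let target: Vector
    func steering(for agent: Agent) -> Vector {
        // Steer toward the target: the offset to it, minus current velocity.
        Vector(dx: target.dx - agent.position.dx - agent.velocity.dx,
               dy: target.dy - agent.position.dy - agent.velocity.dy)
    }
}

struct Behavior {
    private var goals: [(goal: Goal, weight: Double)] = []

    mutating func setWeight(_ weight: Double, for goal: Goal) {
        goals.append((goal, weight))
    }

    // The combined steering is the weight-scaled sum of every
    // goal's proposal, which the agent then applies as best it can.
    func steering(for agent: Agent) -> Vector {
        goals.reduce(Vector(dx: 0, dy: 0)) { sum, entry in
            sum + entry.goal.steering(for: agent) * entry.weight
        }
    }
}

var behavior = Behavior()
behavior.setWeight(1.0, for: SeekGoal(target: Vector(dx: 10, dy: 0)))
behavior.setWeight(0.5, for: SeekGoal(target: Vector(dx: 0, dy: 4)))
let agent = Agent(position: Vector(dx: 0, dy: 0), velocity: Vector(dx: 0, dy: 0))
let steering = behavior.steering(for: agent)
```

This is why weighting the avoid goal at 5 in the talk's example matters: its proposal dominates the sum whenever an obstacle is near.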
So let's talk a little
about that agent delegate.
GKAgentDelegate is useful when
you need to sync your visuals,
things like graphics,
animation, physics,
with this underlying
agent simulation.
We give you two callbacks
to do that.
agentWillUpdate, which is called
before any updates are applied
to the agent.
And agentDidUpdate,
which is called
after the updates are applied.
In your game this might be
things like a SpriteKit node,
or a SceneKit node,
or a render component.
Let's take a look at what
this delegation looks
like in a SpriteKit game.
You see here I have a custom
sprite node MyAgentSpriteNode
and I'm going to go
ahead and implement both
of those callbacks
that I talked about.
In agentWillUpdate, I'll set the
agent's position and rotation
equal to my node's
position and rotation;
I want that underlying agent
simulation to match my visuals.
Then we are going
to do some updating.
And then in agentDidUpdate,
I'm going to do the inverse:
I'm going to set my node's
position and rotation
equal to my agent's position and
rotation, so the visuals will match
that underlying agent
simulation again.
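Here is the shape of that sync in miniature, with invented stand-ins for the agent, the delegate protocol, and the sprite node (the real types are GKAgent, GKAgentDelegate, and an SKNode subclass):

```swift
// A condensed sketch of the will/did-update sync pattern
// (stand-in types, not the real GameplayKit/SpriteKit API).

struct Transform {
    var x: Double
    var rotation: Double
}

protocol AgentDelegate: AnyObject {
    func agentWillUpdate(_ agent: Agent)
    func agentDidUpdate(_ agent: Agent)
}

final class Agent {
    var transform = Transform(x: 0, rotation: 0)
    weak var delegate: AgentDelegate?

    func update(deltaTime: Double) {
        delegate?.agentWillUpdate(self)   // visuals drive the simulation in
        transform.x += 1.0 * deltaTime    // a stand-in simulation step
        delegate?.agentDidUpdate(self)    // results pushed back to visuals
    }
}

final class SpriteNode: AgentDelegate {
    var transform = Transform(x: 0, rotation: 0)

    // Before the update: the simulation matches the visuals.
    func agentWillUpdate(_ agent: Agent) { agent.transform = transform }
    // After the update: the visuals match the simulation.
    func agentDidUpdate(_ agent: Agent) { transform = agent.transform }
}

let node = SpriteNode()
let agent = Agent()
agent.delegate = node
node.transform.x = 5        // the visuals moved elsewhere (say, by physics)
agent.update(deltaTime: 1.0)
```

After one update, node and agent agree again: the agent picked up the node's position before simulating, and the node picked up the result afterward.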
Now I would like to
give you a quick demo
of what agent movement
looks like and some
of the goals you have
at your disposal.
So here I have a simple
SpriteKit scene and we're going
to represent the agents
with a triangle in a circle.
They're oriented where
the triangle is pointing.
Here I have a seek goal.
The agent in the center
is simply going to try
to seek my mouse position.
Notice how fluid and realistic
the movement looks, because he's
under those realistic physical
constraints, things like mass,
and acceleration, and inertia.
Here we have an example of
the inverse, a flee goal.
The agent is going to
instead attempt to run away
from the mouse position.
Here is an example
of a wander behavior.
My agent is just going
to randomly wander
about the environment making
random left and right turns.
Here we have an example of
an obstacle avoidance goal.
Once again my agent
is going to attempt
to seek the mouse position
but I have added a couple
circle obstacles to my scene,
and he has an obstacle
avoidance goal on him.
So while he's still trying
to seek the mouse position,
he's also going to avoid
colliding with the obstacles.
Here I have an example
of a separation goal.
I have three agents that
are once again going to try
to seek the mouse position.
But they also have a
separation goal on them.
They're going to try to
maintain some minimum separation
between them.
And this is really useful for
things like formation flying
or keeping groups of units
together in your game.
Here I have an example
of an alignment goal.
The agent on the right
is simply going to try
to match the heading of
the agent on the left.
This is really useful for things
like synchronizing
units in your game.
Here I have an example
of a flocking goal.
Here we have our leader agent
in the red which is just going
to wander about the scene.
But I also have a group
of these blue agents
under a flocking behavior.
They are combining a cohere,
a separate, and an align goal
to stay in a blob,
while also trying
to chase that leader entity.
So the separate goal is
maintaining some minimum
separation between them,
the cohere goal makes them stay
together in a cohesive mass,
and the alignment
goal wants them
to reach an average heading.
Last thing I have an example
of a follow path behavior here.
I have a simple polyline
path and my agent is going
to attempt to follow it.
Now I want you to notice
that he doesn't take the
corners sharply.
He's under those realistic
physical constraints we talked
about, things like
mass and acceleration.
So he's forced to follow it
in a smooth manner even though
the underlying path itself
is rigid.
So that's agents,
goals, and behaviors.
[ Applause ]
Let's go ahead and
move on to pathfinding.
Now I'm sure we're familiar
with this problem
in game development.
I have some entity in my
game world that wants to get
from point A to B, but there
is an obstacle in my way.
I don't want the entity to
move through the obstacle.
I don't want her to
bump into the obstacle.
I want her to correctly
find a path
around the obstacle
like a human would.
What I'm looking for here
is something like this:
I want her to correctly
find the shortest way
around the obstacle,
clear the obstacle,
and continue on to my goal.
This is the realm of problems
we call in gameplay pathfinding.
Now some concepts before we get
started, pathfinding operates
on a navigation graph.
This navigation graph
is a collection of nodes
that describe the passable
areas in your game world.
The places where my entities
are allowed to be and move.
These nodes are in turn
joined by connections
to describe how my entities move
between these passable areas.
And these connections
can be single directional
or bidirectional, and there
always exists an optimal path
between any two nodes
in a connected graph.
And this is usually
the path we're looking
for in pathfinding.
So let's go ahead and take
a look at the classes.
Here we have GKGraph, which is
our abstract graph base class,
it's quite simply a container of
graph nodes, those descriptions
of the passable areas
in my game world.
It has the functions necessary
to add and remove nodes
as the game world
undoubtedly changes,
and it also lets me connect
new nodes to the graph,
making the appropriate
connections
to existing nodes
I would expect.
And of course we also
let you find paths
between nodes in a graph.
And we're offering you
guys two specializations,
a GKGraph that works
with grids, and a GKGraph
that works with obstacles.
Let's talk a little bit
more about those now.
All right.
GKGridGraph.
This is our GKGraph that's
specialized for a 2D grid.
And what this does,
is it's going
to automatically create all
the nodes to represent a grid
of some given start position
and width and height.
It's going to automatically
make the cardinal connections
between the grid nodes
and optionally the
diagonal ones as well.
And it also has easy
functions available to add
and remove grid spaces as they
undoubtedly become impassable
and passable again in your game.
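A framework-free sketch of that idea: a width-by-height grid whose walkable cells connect to their cardinal neighbors, with a breadth-first search standing in for GKGridGraph's pathfinding (the real class has its own search; this only shows the shape of it):

```swift
// A toy grid graph with removable cells and a BFS shortest-path search
// (illustrative stand-in for GKGridGraph, not its actual implementation).

struct GridGraph {
    let width: Int
    let height: Int
    var blocked: Set<[Int]> = []   // removed (impassable) grid spaces

    mutating func removeNode(x: Int, y: Int) { blocked.insert([x, y]) }
    mutating func addNode(x: Int, y: Int) { blocked.remove([x, y]) }

    func findPath(from start: [Int], to goal: [Int]) -> [[Int]] {
        var frontier = [start]
        var cameFrom: [[Int]: [Int]] = [start: start]
        while !frontier.isEmpty {
            let node = frontier.removeFirst()
            if node == goal { break }
            for d in [[1, 0], [-1, 0], [0, 1], [0, -1]] {   // cardinal moves
                let next = [node[0] + d[0], node[1] + d[1]]
                guard (0..<width).contains(next[0]),
                      (0..<height).contains(next[1]),
                      !blocked.contains(next),
                      cameFrom[next] == nil else { continue }
                cameFrom[next] = node
                frontier.append(next)
            }
        }
        guard cameFrom[goal] != nil else { return [] }   // unreachable
        var path = [goal]                                // walk parents back
        while path.last! != start { path.append(cameFrom[path.last!]!) }
        return Array(path.reversed())
    }
}

var grid = GridGraph(width: 4, height: 3)
grid.removeNode(x: 1, y: 0)      // a tower makes two cells impassable
grid.removeNode(x: 1, y: 1)
let path = grid.findPath(from: [0, 0], to: [3, 0])
```

The path correctly detours through the only open row, and putting the cells back with addNode would restore the direct route.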
Next we have our
GKObstacleGraph.
This is a GKGraph that's
specialized for pathfinding
around obstacles
in your game world.
And these obstacles can
be any arbitrary polygon.
Now we give you the functions
necessary to dynamically add
and remove obstacles
as your game world,
again, undoubtedly changes.
It also lets you dynamically
connect new nodes to the graph
and this is really useful for
stuff like inserting a start
and an end node in my graph
to find a path for a unit.
Now we do this by what we're
calling a buffer radius,
this is a safety zone
around obstacles,
where my entities are
not allowed to go,
and it's often a
game-dependent size relating
to the bounding radius
of the entities
that I want to do
the navigating.
So let's talk a little more
about how these obstacle
graphs are generated.
So here I have a simple scene
with two square obstacles,
an entity on the lower
left that wants to get
to that bridge on
the lower right.
My entity is bounded by some
bounding radius, and we're going
to use that as our buffer radius
to artificially make
our obstacles larger.
Then under the hood the
obstacle graph is going
to make the appropriate
connections between all
of our graph nodes,
and it's going
to correctly not make the ones
that would violate the
spatiality of our obstacles.
So you see here that we found
that shortest path
we were looking for.
It doesn't collide with
any of my obstacles.
Here is a code example
of that last scenario,
but with just a single obstacle.
Here at the top I'm going
to make a simple square polygon
obstacle; it's just four points.
Then I'm going to make
our obstacle graph,
passing in our obstacle
and some buffer radius.
Then, I'm going to make
a start and end node.
One for where my hero
currently is and one
for where she wants to go.
Then I'm going to
dynamically connect those nodes
to my obstacle graph using the
obstacles that it possesses.
And what it's going to do is
it's going to insert those nodes
into the graph and again
automatically make the
connections that make
sense, and not make the ones
that would violate the
spatiality of my obstacles.
Then at the end here I'm going
to find a path for my start
and end node and I get back a
simple NSArray of graph nodes,
which I can then use to
animate my character.
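The connection rule can be sketched with simple geometry: a connection between two nodes is kept only when the straight line between them clears every obstacle by the buffer radius. The toy version below uses circular obstacles for brevity (GKObstacleGraph works with arbitrary polygons, so this is only the idea, not the real implementation):

```swift
// A toy "may these two nodes connect?" test with circular obstacles
// and a buffer radius (illustrative, not GKObstacleGraph's geometry).

struct Point {
    var x: Double
    var y: Double
}

struct CircleObstacle {
    var center: Point
    var radius: Double
}

// Distance from point p to the segment a-b.
func distance(from p: Point, toSegment a: Point, _ b: Point) -> Double {
    let abx = b.x - a.x, aby = b.y - a.y
    let lengthSq = abx * abx + aby * aby
    // Project p onto the segment, clamping to its endpoints.
    var t = lengthSq == 0 ? 0 : ((p.x - a.x) * abx + (p.y - a.y) * aby) / lengthSq
    t = min(1, max(0, t))
    let cx = a.x + t * abx - p.x
    let cy = a.y + t * aby - p.y
    return (cx * cx + cy * cy).squareRoot()
}

// A connection is valid only when it clears every obstacle inflated
// by the buffer radius (the safety zone described above).
func canConnect(_ a: Point, _ b: Point,
                obstacles: [CircleObstacle], bufferRadius: Double) -> Bool {
    obstacles.allSatisfy {
        distance(from: $0.center, toSegment: a, b) > $0.radius + bufferRadius
    }
}

let obstacle = CircleObstacle(center: Point(x: 5, y: 0), radius: 1)
let blockedPath = canConnect(Point(x: 0, y: 0), Point(x: 10, y: 0),
                             obstacles: [obstacle], bufferRadius: 0.5)  // cuts through
let clear = canConnect(Point(x: 0, y: 3), Point(x: 10, y: 3),
                       obstacles: [obstacle], bufferRadius: 0.5)        // clears it
```

This is the test the graph applies when you connect a start or end node: edges that would violate an inflated obstacle are simply never created.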
Some advanced notes on our graph
node class, which is GKGraphNode:
undoubtedly some need will arise
where you want to subclass this.
And this is really useful for
implementing stuff like advanced
or non-spatial costs or for
when you need finer
control over the pathfinding.
You can imagine a strategy game
that has a variety
of terrain types.
Perhaps you want a
forest terrain type
to take double the move cost
of my other terrain types.
I correctly want pathfinding
to take this into account.
I don't want it to return
the visually shortest path.
I correctly want it to
navigate around the forest.
Because that is actually
the shortest path
in my game world's terms.
GKGraphNode is also
useful when you want
to manually make your own
graphs, and you do this
by manually managing the
connections between nodes.
This is really good for
things like abstract
or non-spatial graphs.
Perhaps you want your game to
have portals and your units
to correctly take those
portals into account
for pathfinding purposes,
even though those portals aren't
spatially connected in any way.
And our GKGridGraphNode and
GKGraphNode2D, which is used
with our obstacle graph,
are also available
for subclassing as you see fit.
This is a feature I'm
really excited about.
We have done some work with
the SpriteKit team to allow you
to easily generate
these obstacle graphs
from your existing
SpriteKit Scenes.
And you can do this for
things like node bounds,
node physics bodies,
and node textures.
So what this means is
with very few lines
of code you can take an
existing SpriteKit scene
and generate an obstacle graph
and automatically
pathfind around it.
Now I would like to give
you a small demo of this.
Let's explore pathfinding
with SpriteKit.
Here I have the tower
defense game we have talked
about implemented as
a SpriteKit scene.
I'm generating entities on
the left and they want to move
to the bridge on the right.
But because this is a tower
defense game I'm undoubtedly
going to place some
towers, right,
that violate their current path.
So let's go ahead and place one.
And you'll notice they
correctly pathfind around it.
That's because we're using
the SpriteKit integration,
that we just talked about,
to automatically generate
an obstacle from that node,
update the underlying
GKObstacleGraph,
and update our path.
So let me turn on the debugger; let
me remove this tower real quick.
You see we just start with
our simple path, right,
between start and end node.
But as I insert an
obstacle in here,
we recalculate the
underlying GKObstacleGraph.
And this allows our entities
to find a new path
around that obstacle.
So let's go ahead
and add a few more.
And because of that SpriteKit
integration, every time we add
or remove an obstacle,
we can keep that underlying
GKObstacleGraph updated.
So that's pathfinding
with SpriteKit.
[ Applause ]
Now I would like to call
my colleague Ross Dexter
up to tell you a little
about our MinMax AI.
Ross.
[ Applause ]
>> ROSS DEXTER: Thanks, Bruno.
So many of the features
that Bruno spoke
about earlier can be used to
create AI, but they're more
about giving life to
entities within your game.
Many games also need
capable AI opponents
that can play the entire game by
the same rules as human players.
And this is critical for
games like Chess, Checkers,
Tic-Tac-Toe, and so on.
So we wanted to provide
you a solution for this.
And so we've chosen to implement
a classic AI solution, MinMax,
as a key part of GameplayKit.
MinMax works by looking at all
the moves available to a player,
and then it builds
out a decision tree,
from each of those moves and all
the permutations that can arise
from each of those moves.
When you request a move
from it, it searches this
decision tree,
looking for a move
that maximizes potential gain
while minimizing potential loss.
In this Tic-Tac-Toe example
here, the AI selects the move
on the right for the X player
because in the best case
it results in a win,
and in the worst case, it
only results in a draw;
the other two moves
both lead to losses.
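The search the slides describe can be sketched in a few lines. Below is an illustrative minimax (in its compact negamax form) in Python for a hypothetical take-away game — players alternately take 1 to 3 stones, and taking the last stone wins. This is a sketch of the idea of exhaustively scoring the decision tree, not GameplayKit code:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def position_value(stones):
    """Value of the pile for the player about to move: +1 = win, -1 = loss."""
    best = -1
    for take in range(1, min(3, stones) + 1):
        if take == stones:
            outcome = 1                               # taking the last stone wins
        else:
            outcome = -position_value(stones - take)  # opponent's gain is our loss
        best = max(best, outcome)                     # maximize our worst case
    return best

def best_move(stones):
    """The move that maximizes potential gain while minimizing potential loss."""
    return max(range(1, min(3, stones) + 1),
               key=lambda take: 1 if take == stones else -position_value(stones - take))
```

From a pile of 5, for example, the best move is to take 1, leaving the opponent the losing pile of 4.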
So MinMax AI gives
you the ability
to add AI controlled
opponents to your games,
but it can also be used to
suggest a move for human players
when they get stuck, so it's
going to be great even for games
that don't have any
other AI requirements.
It's best suited for turn
based games, but it can be made
to work with any game
where you have a set
of discrete moves available
for players to make.
You can easily adjust
the difficulty of the AI
by varying how far out
into the future it looks.
Looking 10 moves
in advance results
in much more effective play than
looking ahead only 2 or 3 moves.
Additionally, you can
optionally direct it
to randomly select
suboptimal moves
to give it an element
of human error.
So let's look at how this
integrates with your game.
The great thing about MinMax is
that it doesn't need to know any
of the details of your game.
You don't need to teach it
your rules and it doesn't need
to know how it's implemented.
This is all abstracted away.
All you have to do is provide
a list of players in the game,
the possible moves
they can make,
and a score for each player that
indicates the relative strength
of their current position.
When you request a move from
the AI, it takes all this data
into account and it
builds a decision tree,
and returns the optimal
move for you to use.
Let's look at the classes.
There are three key protocols
that you're going to need
to implement to work
with the MinMax AI.
And the first of
these is GKGameModel,
which is an abstraction of
the current game state.
If you're creating a Chess game,
for example, a good candidate
to implement this protocol would
be, say, the board class,
because it tracks all of the
positions on the board as well
as all the pieces that
are currently in play.
As I mentioned on the
previous slide, all this needs
to do is provide a list of
the players that are active
in the game, the current
player, scores for each
of those players, and then
the possible moves that each
of those players can make.
It also needs to have a method
for applying those moves
and this is used by the AI to
build out its decision tree,
and can be used by
you to apply a move
after it's been selected
by the AI.
And when this move is applied,
it will change the
current game state,
possibly changing the
currently active player,
scores for each of those
players, and the moves
that are available to them.
The next protocol is
GKGameModelUpdate,
which is an abstraction of
a move within your game.
It should have all
of the data you need
to apply a move to
your game model.
As we have said, it is
used by MinMax to build
out the decision tree, and can
be used by you to apply a move
after it's been selected.
Finally, we have
GKGameModelPlayer,
which is an abstraction
of a player of the game,
and it's used by the AI to
differentiate one player's
moves from another's.
Now we get to the AI itself,
it's within the class
GKMinMaxStrategist,
and it operates on
a GKGameModel.
So after you create an
instance of the MinMaxStrategist,
you hook it up by setting
the gameModel property.
maxLookAheadDepth is how
far into the future it looks
when you request a
move from the AI.
And as we mentioned earlier
higher numbers result
in more effective play
than lower numbers.
And that's all you need
to do to start using it.
When you call bestMoveForPlayer,
the AI will build
out its decision tree, rank all
the available moves in order
from best to worst, and then
return the optimal move.
There may arise cases where
you'll have more than one move
that is equally advantageous
for the AI to make,
and in those cases
you can direct the AI
to randomly break ties.
And that sort of thing
comes into play when you
call randomMoveForPlayer.
Say you have 10 moves available
for a player, but you only want
to select a random one from
the 3 best moves; it will take
that ranking and randomly choose
one of those 3 best moves.
One of those moves may be
suboptimal unfortunately,
but that may be desirable
if you are trying
to make your AI appear
more human
and have a chance
of making an error.
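Outside of GameplayKit, that tie-breaking idea is easy to sketch: rank the moves, then pick randomly among the k best. A minimal Python sketch; the move names and scores are invented for illustration:

```python
import random

def random_move_from_best(moves, score, k, rng=random):
    """Rank moves from best to worst, then choose randomly among the top k."""
    ranked = sorted(moves, key=score, reverse=True)
    return rng.choice(ranked[:k])
```

With k greater than 1, the chosen move may be suboptimal, which is exactly the element of human error described above.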
And both bestMoveForPlayer
and randomMoveForPlayer
return a GKGameModelUpdate
which you can then use to apply
to your GKGameModel
to make a move.
So here is a quick code sample.
Here we're creating
a Chess game model.
And unfortunately, going over the
details of how you might want
to implement your game
model is beyond the scope
of this session, but we do have
excellent sample code available
that you can look at to
show how you might want
to go about doing this.
So we create our Chess model,
and then we create
our MinMax AI,
and hook it up by
setting the game model
on the gameModel property.
We then set our
maxLookAheadDepth to 6,
so we're going to look
ahead 6 turns in advance
when we build our decision tree.
That's all we need to do.
Now we call bestMoveForPlayer
with the currently active player
and it will find the
optimal move for that player
with the given information.
You can then apply that move to
the game model to make the move.
So let's look at a quick demo.
So here we have a simple game
where there are two players,
black and white, and they're
trying to get as many pieces
of their color on the
board as they can.
When they place a piece on the
board, they flip any
of the opponent's pieces
that lie between their own
pieces to their color.
So here we have both
players controlled by the AI,
the black player is looking
ahead five moves in advance,
while the white player is only
looking ahead three moves.
This allows the black player to
easily defeat the white player
as it goes through the game.
You can see here we have
a score for each player.
This is simply the number
of pieces the player has
on the board minus the number of
pieces that their opponent has
on the board, adjusted
with some weights,
and that gives us our score.
So you see here the black player
easily defeats the white player.
So let's look closer
at the score here.
We see here that all of
the positions in the center
are weighted at 1.
The positions on the edge of
the board are weighted higher,
and the corners are weighted
even higher.
That's because these
positions are more advantageous
for the players, and
so we direct the AI
to favor these places
by changing how those
places affect the scores.
So let's change up the
look-ahead on these guys.
We'll make white look
ahead 4 instead of just 3.
And even just this small
change will allow the AI
to play more effectively
and in fact in the middle
of the game it looks like the
white AI has the upper hand,
but the black AI is able
to trade a short-term gain
for a long-term victory,
and is able to overcome
white in the end.
That's MinMax AI.
[ Applause ]
>> ROSS DEXTER: So now let's
talk about random sources.
And at first this topic
may seem unnecessary,
because we already have rand.
Why shouldn't we just use that?
Well rand gives us
random numbers
but games have unique
random number needs,
and rand may not give
us everything we want.
First of all the numbers
that rand generates may not be
the same from system to system.
You're not guaranteed
to have the same results
on different platforms.
And that can be a big
problem for networked games,
because if we can't rely on
the numbers on either side
of the connection
being generated
in the same sequence, we have
to waste critical bandwidth
syncing those two sides up.
So we want platform-independent
determinism.
Also whenever we make a call
to rand we're drawing
from a single source.
So if I have a bunch of
calls to rand in my AI code,
and then I add a new
call in my physics code,
that call in the physics
code will affect the numbers
that are being generated in
my AI code, which could result
in unexpected behavior.
What we really want
to do is be able
to separate those
two systems apart,
so that the numbers generated
in one system have no effect
on the numbers generated
in a different system.
And also we may not want
control over just the range
of numbers we're generating
but also how those numbers are
distributed across that range.
And this is where
random sources come in.
So we're offering you a set
of game quality random sources
that are deterministic, so
when you have the same seed,
you will always get
the same sequence
of numbers no matter
what platform you're on.
They are also serializable
so they can be saved
out with your game data.
And this can be really useful
in helping to prevent cheating.
And they're also implemented
using industry-standard
algorithms that are
known to be reliable,
and have excellent
random characteristics.
In addition we offer you a
set of random distributions
to leverage, and these allow you
to control how your
numbers are distributed
across the given range.
We have a true random
distribution, where every value
is equally likely to occur;
a Gaussian distribution,
where values are weighted
on a bell curve, with values
toward the mean more likely
than those on the fringes;
and also an anti-clustering,
or fair random, distribution,
which helps eliminate
runs of numbers.
And finally we have NSArray
shuffling, which is super useful
for doing things like
shuffling a deck of cards.
So let's look at the classes.
GKRandomSource is the base
class for random sources.
And it adopts NSSecureCoding
and NSCopying
so it can be securely
serialized.
Determinism is guaranteed
with the same seed,
no matter what platform
you're on,
so if you want the same sequence
of numbers, you can always rely
on it being generated.
If no seed is given,
one is drawn
from a secure system source.
Then there's sharedRandom, which
is the system's underlying
shared random source, and this
is not deterministic,
but there are cases in
which this may be desirable,
such as when you're
shuffling a deck of cards
and you want every
result to be unique.
Let's go over the random
source algorithms we have
available for you.
We have ARC4, which
has very low overhead
and excellent random
characteristics and is going
to be your Goldilocks
random source,
we have Linear Congruential
which has even lower overhead
than ARC4, but its random
characteristics are not quite
as good, and you may see
some more frequently repeated
sequences of numbers, finally
we have the Mersenne Twister,
which is high quality
but memory intensive.
Note that none of these are
suitable for cryptography
but Apple offers other separate
APIs to meet these needs.
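To make the determinism point concrete, here's a toy linear congruential generator in Python, using well-known constants from Numerical Recipes. It isn't GameplayKit's implementation, but it shows why a seeded generator yields the identical sequence on every platform:

```python
class LinearCongruential:
    """Toy seeded generator: next = (a * state + c) mod 2^32."""

    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next_int(self):
        # Pure integer arithmetic: no platform-dependent behavior at all.
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state
```

Two instances seeded alike stay in lockstep forever, which is why networked peers can share a seed instead of syncing every roll over the wire.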
Now we get to our random
distributions. The base class
for these is
GKRandomDistribution,
which implements a
pure random distribution,
meaning every value between
the lowest value and highest
value is equally likely to occur.
You can get numbers
by calling nextInt,
nextUniform, and nextBool.
We also offer a set of dice
convenience constructors
to create 6 sided, 20 sided,
and custom sided dice.
Then we have
GKGaussianDistribution
which implements a bell
curve Gaussian distribution.
The values are biased
towards the mean value
and the values farther away
from the mean are less likely
to occur, and that's
what happens
in our sample distribution here.
We have generated a sequence
of 15 numbers between 1 and 5,
and we see that the mean value
of 3 occurs far more frequently
than any of the other numbers.
In fact it occurs
more than twice
as frequently as
any other number.
The values on the fringes,
1 and 5, only occur
a single time each.
Note that a standard Gaussian
distribution is unbounded,
but that's undesirable
for a random source,
so we cull every value outside
of three standard
deviations of the mean.
Next we have our anti-clustering
distribution implemented
in the class
GKShuffledDistribution.
This is our fair random
distribution, which helps reduce
or eliminate runs of numbers
while remaining random over time.
And you control this by using
the uniformDistance property.
At 0.0, all numbers are
equally likely to occur,
and this is indistinguishable
from a true random source,
like our pure random
distribution.
At 1.0, all values are
different and it will run
through every value in
the range before you start
to see any repeated values.
That's what we have
in our distribution here.
Once again we're generating
15 numbers between 1 and 5
and you can see that
we're hitting every number
in the range before we start
to see any repeated values.
And in fact every value is
generated exactly three times.
So let's go over some
simple code examples.
It's very easy to create a
6 sided die random source:
you just use the
convenience constructor
on GKRandomDistribution,
and rolling the die is
as easy as calling nextInt.
It's similarly easy to
create a 20 sided die.
And creating custom
dice is also quite easy.
Here we're creating
a 256 sided die
which would be pretty
awkward if you tried
to roll it in the real world.
The previous three examples were
all implemented using a true
random distribution, but you
can use any of the distributions
that we have available to you.
Here we're creating
a 20 sided die
with a Gaussian distribution, so
it's weighted to the mean value,
around 11, so when you roll
it, you're most likely to come
up with a number around there.
And here we're creating
a die, a 20 sided die
with our shuffle distribution,
and by default the
uniform distance
on our shuffle distribution
is 1.0.
So when we roll this one,
we're going to hit every value
in the range before we start
to see any repeated values.
The first time we roll
it, we might get 5;
then we know the next time we
roll it, we're definitely not
going to get that number
again until we run
through every other
value in the range.
And finally, here we
have array shuffling,
we're using the shared random
source we mentioned earlier
on GKRandomSource,
which gives us access
to the system's underlying
random source,
which is not deterministic,
but in this case
that's advantageous.
We want every instance of the
card shuffling to be unique.
And you can see how easy it is
to make random sources
a part of your game.
It's only a couple lines of
code and you can get going.
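An array shuffle like that boils down to a single Fisher-Yates pass. Here's a Python sketch: pass a seed for a reproducible deal, or None to mimic drawing from the non-deterministic shared source:

```python
import random

def shuffled(items, seed=None):
    """Return a new, uniformly shuffled copy of items (Fisher-Yates)."""
    rng = random.Random(seed)      # seed=None draws entropy from the system
    deck = list(items)
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randrange(i + 1)   # pick from the not-yet-fixed prefix
        deck[i], deck[j] = deck[j], deck[i]
    return deck
```

With a fixed seed, two calls produce the identical deal, which is the determinism trade-off discussed above; with no seed, every deal is unique.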
And that's random sources.
So now I would like to invite
Joshua Boggs up here to talk
about our rule systems.
[ Applause ]
>> JOSHUA BOGGS: Thanks, Ross.
Hi. I'm Josh.
I have been working
alongside Bruno
and Ross while they've
been putting
the finishing touches
on GameplayKit.
I'm here to talk about one of
those systems, the rule systems.
So before I go into the rule
systems, I just want to go
over some common ingredients
that games tend to have.
Games tend to consist of
three elements. First, you've
got your nouns: things
like position, speed,
player health, and any
equipment a player may be
holding.
Secondly, you've got things
like verbs: these are actions
that the player can perform,
things like run, jump,
using an item, or if you're
in a car, accelerating.
Lastly, the rules.
Rules are incredibly important
because they define how your
nouns and verbs interact.
Rules give flavor and
texture to your game,
and great games have
great rules.
So let's have a look
at an example rule.
Here we have a rule that
a driver may use to decide
when to brake and
when to accelerate.
Using an input property
of distance the player will
either slow down or speed up.
We can see in this example that
if the distance is less than 5,
they're going to brake,
and when it's greater than
or equal to 5, they'll accelerate.
This is fine logic, but
there is a subtle problem.
At distances around 5,
the car is going
to continue to oscillate
between braking and
accelerating, and this is going
to give us very jerky movement.
So for more natural movement
we need something a little
more approximate.
Using a fuzzier solution,
we output facts about what
to do rather than performing
the actions immediately.
We've output two facts
here, closeness and farness,
both based on distance.
The important thing is you
can now be both close and far.
So rather than perform
one or the other,
this lets us blend
the two together
to get a more natural movement.
This is especially important
around the threshold
in the previous example.
Now when the distance is
around 5 we'll get much
more natural acceleration.
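The blend can be sketched with two simple membership functions. The ramp endpoints (0 to 10) here are invented for illustration; the point is that the two grades cross over smoothly instead of flipping at a hard threshold:

```python
def closeness(distance, far_limit=10.0):
    """Grade in [0, 1]: fully close at 0, not close at all by far_limit."""
    return max(0.0, min(1.0, 1.0 - distance / far_limit))

def farness(distance, far_limit=10.0):
    return 1.0 - closeness(distance, far_limit)

def throttle(distance):
    """Blend the two facts: -1.0 = full brake, +1.0 = full acceleration."""
    return farness(distance) - closeness(distance)
```

Instead of snapping between brake and accelerate at a boundary, the car eases smoothly through zero throttle as the gap changes.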
This is the motivation
behind rule systems.
Facts can have grades of truth.
This allows us to perform
more complex reasoning
with fuzzy logic.
Fuzzy logic deals
with approximations.
It also allows us to
separate what we do
from how we should do it,
rather than performing actions
immediately, we just state facts
about the world, and then take
deferred actions later based off
of those facts.
So let's take a look at
one of those classes.
Here we have GKRule.
GKRule consists of a Boolean
predicate and an action.
The predicate matches
against facts and the state
in the system and only fires its
action if the predicate is true.
Actions could be as
simple as asserting a fact,
or as complicated as you'd
like with a complex block.
Importantly, they can be
serialized using NSPredicate's
serialization methods.
The important thing
to remember is
that rule systems provide
approximations to answers.
Things like how close am
I to the car in front?
In the first example,
we can say with a fairly
high grade of confidence
that we're quite far.
With the other two, things
are a little more fuzzy,
and the answers we're after
are things like "somewhere
in between" or "closer."
Let's have a look at the system
that manages these rules.
Here we have the other
class, GKRuleSystem.
GKRuleSystem is an ordered
collection of rules and facts.
To assert facts about the world,
simply call evaluate on it.
This will run through
the rules in the array,
and those rules will use the
state dictionary as input
and assert facts
based off of that.
The facts will be held in the
facts array, and it's important
to know that whenever a fact
is asserted, evaluate will
actually go back
to the beginning
and continue evaluating.
This is because when
you assert a fact,
this may affect the
way other rules work.
This ensures that when evaluate
is finished you know you have
the most concise and
accurate view of the game.
To start over again, maybe
at the end of an update loop
or on a timer, simply call reset
and it will clear out old facts
so that you can repeat
the evaluation.
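That evaluate/restart behavior is easy to sketch. This toy Python rule system uses boolean facts (GameplayKit's facts carry grades; this simplifies them away) and restarts from the top of the rule list whenever a new fact is asserted:

```python
class Rule:
    def __init__(self, predicate, action):
        self.predicate = predicate   # takes the system, returns True/False
        self.action = action         # takes the system, may assert facts

class RuleSystem:
    def __init__(self, rules, state=None):
        self.rules = list(rules)
        self.state = dict(state or {})   # snapshot of the game world
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def reset(self):
        self.facts.clear()               # start over, e.g. each update loop

    def evaluate(self):
        fired = set()                    # each rule fires at most once per pass
        i = 0
        while i < len(self.rules):
            rule = self.rules[i]
            if id(rule) not in fired and rule.predicate(self):
                before = set(self.facts)
                rule.action(self)
                fired.add(id(rule))
                if self.facts != before:  # a new fact may enable earlier rules,
                    i = 0                 # so go back to the beginning
                    continue
            i += 1
```

Note how a rule listed first can still depend on a fact asserted by a later rule: the restart guarantees the final fact set is consistent when evaluate returns.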
Let's have a look
at the code example.
Here in the beginning, we
initialize our rule system,
and then later we
access the state
and assert two facts
based off of it.
Then, later in the game code,
we grab these two
grades and sum them together
to get a sort of
fuzzy approximation
of how much we
should accelerate,
and feed this into our game code.
So let's take a look at a
little example we have going.
Here we've got cars
driving along the freeway.
The cars in the intersections
are using one set of rules,
and the cars on the freeway
are using a different set.
The ones on the freeway are
deciding how much they should
slow down or speed up
based off the distance
to the car in front.
They're asserting two
facts about the world.
These are things like
distance, relative speed.
The cars in the intersection are
using a different set of rules
and asserting facts on
who has the right of way.
Putting them all together, we can
get very complex simulations
of the world.
This is the power of rule systems.
So before I go, just
some best practices
on using rule systems.
It is important to remember
that GKRuleSystem is isolated.
You should be using
the state dictionary
as a snapshot of the game world.
You should also use many simple
rules that assert many facts
about the game world, as opposed
to a few large, complex
rules asserting fewer facts.
It is also important to note
that facts are approximations
and it is up to you to decide
how you should use them.
The grade of a fact is the
system's confidence in it,
and this allows us
to use fuzzy logic
to achieve more complex
reasoning.
With that, I would
like to hand it back
to my colleague Bruno
to finish up.
[ Applause ]
>> BRUNO SOMMER: Thanks, Josh.
So that's GameplayKit.
Today we talked about the seven
major systems in GameplayKit,
entities and components
which are a really great way
to structure your game logic.
State machines which deal with
the statefulness in our games
and the various state changes
that our objects undergo.
Agents, which are our
autonomously moving entities
controlled by realistic
behaviors and goals.
Pathfinding, which deals with
navigation graph generation
and finding paths
within our game world.
We also talked about our
great MinMax AI solution,
which helps you give life
to your computer
controlled opponents.
Also the slew of great random
sources and distributions
that are available to you.
Lastly, we talked about rule
systems, which are a great way
to describe discrete
and fuzzy logic.
We really are excited to finally
get GameplayKit in your hands
and can't wait to see
what you make with it.
Some great code samples
dropped this week;
you should definitely
check them out if you want
to learn a little more.
DemoBots is a SpriteKit
game that covers a wide variety
of the GameplayKit API,
FourInARow is a good example
of MinMax AI in action,
and AgentsCatalog is
a really good example
of agent behaviors and
goals, so definitely check
that out if you want
to learn a little more.
There are also some sessions
coming up if you want to find
out a little more about our
related technologies, SpriteKit,
ReplayKit, Game Center,
SceneKit.
After lunch today we have
a deeper dive into DemoBots
which is that sample I talked
about, so definitely check
that out if you want to
learn a little bit more
about GameplayKit or SpriteKit.
There are also some
great labs coming up,
check out the Game
Controllers lab.
There is also a GameplayKit
lab today after lunch,
meet the team, ask questions,
talk about any problems you
might have with the code.
If you need any more information,
we direct you to check
out our great developer site
and for any general inquiries
contact Allan Schaffer,
our Game Technologies
Evangelist.
Thank you.
Have a really great
rest of your conference.
[ Applause ]