Transcript
[ Crowd Sounds ]
[ Applause ]
>> Good morning, everyone.
Welcome to VR with Metal 2.
My name is Rav Dhiraj, and I'm a
member of the GPU Software Team
at Apple.
So, as you saw in our
Introducing Metal 2 Session,
we've enabled some great new
features this year.
And in this session, I'll be
focusing specifically on the
support for VR that we're adding
with Metal 2.
So, I'll start with a brief
summary of what we've enabled in
macOS High Sierra.
And then, take a deep dive into
what's required to build a VR
app.
And then, end by providing some
details on how you can take
advantage of the new external
GPU support that we've added to
the OS.
All right.
I hope everyone here knows what
virtual reality is.
But just in case, it's an
immersive 360-degree 3D
experience, with direct object
manipulation using controllers.
And an interactive room sized
environment that you can
explore, thanks to highly
accurate motion tracking.
Now, we at Apple think that this
is a great new medium for
developers like you to create
new experiences for our users.
And Metal 2 enables this support
in three ways.
First, by providing a fast path
to present the frame directly to
the VR headset with our new
Direct to Display capability.
Second, by enabling new features
like Viewport Array that are
specifically targeting VR.
And finally, by providing that
foundational support for
external GPUs, so that
developers have a broader range
of VR-capable Mac hardware to
work on.
So, let's get right into what
we've enabled with macOS High
Sierra.
So, we've added built-in plug
and play support for the HTC
Vive VR headset.
And this headset works in
conjunction with Valve's SteamVR
runtime, which provides a number
of services, including the VR
compositor.
Valve is also bringing their
OpenVR APIs to macOS, so that
you guys can create VR apps that
work with SteamVR.
We've been working really
closely with Valve over the last
year to align our releases, and
both SteamVR and OpenVR are
available to download in beta
form, this week.
So, before I go on, I'd like to
describe what the VR compositor
actually does.
So, in a nutshell, it distorts
the image that your app renders
to account for the lens optics
in the VR headset.
In this example, you can see the
barrel distortion that the
compositor is applying to
account for the pincushion
effect of the lenses.
Now, in practice, there's
actually a lot more complexity
under the covers.
And the compositor has to handle
things like chromatic
aberration, and also present
a Chaperone UI, so that
users know the bounds of
their VR space.
All right.
Now that you have built-in
support for a VR headset, a VR
compositor with SteamVR, and an
API to write to, let's dive
right into how you build a VR
app.
Well, you have two options.
Your first is to use an existing
game engine that has VR support.
This is a great option that many
developers prefer, because it
hides some of the complexities
of the VR compositor.
It also provides a familiar
content creation tool chain.
Now, your second option is to
write a native VR app that calls
OpenVR directly.
This gives your app full control
of rendering and synchronization
with the VR compositor.
But at the cost of some
additional complexity.
Which path you take depends on
the goals of your app.
Let's start by talking a bit
about the game engines.
So, you saw Epic's Unreal Engine
4 in action in the keynote, and
it's a powerful platform on
which to build your VR
experiences.
VR support will be coming later,
this year, and you can find
tutorials and other information
on Epic's website.
We're also really excited that
Unity is bringing VR support to
macOS in a future release of
their engine.
We've been working closely with
them to ensure that the engine
is optimized for both VR
playback and development using
Metal.
So, I'd like to take a moment,
at this point, to talk about a
specific Unity title called
Space Pirate Trainer.
So, we've been collaborating
with Unity and I-Illusions to
bring an early preview of Space
Pirate Trainer to macOS.
And the speed at which
I-Illusions was able to bring
their app to our platform was
truly astounding.
They had a working build in just
a few hours, then a stable fully
playable game in just a handful
of days.
We've been having tremendous fun
playing with this game, and
think that it's a great
representation of the type of VR
experience that you can build
with Unity.
We hope you get a chance to
check it out at WWDC.
We think you'll love it, as
well.
So, Unity and Unreal Engine 4
are two great options for VR
development.
But you can, of course, choose
to write a native SteamVR app
that calls the OpenVR
framework directly.
Now, we'll have details on how
you can add the framework to
your app, later in the session.
But you can download binaries
and API documentation at the
OpenVR GitHub.
There'll also be a Metal
specific sample app available to
download in the near future.
But to give you a taste of
what's involved, I'd now like to
provide a primer on VR app
development, in a segment that I
like to call VR App Building
101.
So, we're going to cover a few
things, here.
We'll start with an overview of
some of the challenges involved
in VR development.
And then, talk a bit about some
unique considerations on our
platform.
And then, take a deep dive into
the anatomy of a VR frame.
And then, end with some best
practices specific to VR apps.
So, we've got a lot to cover,
here.
So, let's get started with the
overview.
So, a traditional non-VR
workload targeting a 60Hz display
has around 16.7 milliseconds to
do work for the frame.
And in many cases, the app has
the entire frame budget to do
work on the GPU.
A VR workload, on the other
hand, has to target a 90
frames-per-second display.
And it needs to hit this target
to achieve a smooth, stutter-free
experience on a headset like the
Vive.
So, this reduces the available
frame budget to around 11
milliseconds.
Additionally, the VR compositor
has to do work on the GPU to
prepare the frame for the VR
headset.
This can take up to 1
millisecond, leaving your app an
effective frame-time budget of
around 10 milliseconds, which is
about 60% of the non-VR
case.
And if that wasn't enough, your
app also has to do more work
every frame.
This includes stereo rendering
for the left and right eye.
And also, rendering to a higher
resolution, in many cases.
The Vive headset has a
resolution of 2160 x 1200.
That's 25% more pixels than a
1080 display.
Additionally, many VR apps
render at a 1.2 to 1.4 X scale
factor to further improve
quality.
So, in summary, your app has a
lot more work to do in less
time.
Welcome to VR development.
All right.
Let's talk about some platform
specifics.
So, Metal 2 introduces a new
Direct to Display capability for
supported headsets like the
Vive.
This is a low latency path that
bypasses the OS window
compositor, and gives the VR
compositor, like SteamVR, the
direct ability to present a
surface onto the VR headset.
This avoids any pixel processing
or additional copies by the OS.
Now, it's worth noting that in
order to guarantee this fast
path, macOS does not treat
VR headsets as displays.
They're hidden from the system
and offer no extended desktop
capabilities.
So, to summarize, we move from a
model where an app presents to a
display through the OS window
compositor, to one where a VR
app can present to a headset
directly through the VR
compositor.
And that's our Direct to Display
capability for VR with Metal 2.
All right.
Keeping with the theme of macOS
platform specifics, let's talk a
bit about how your app selects a
Metal device to use.
So, in macOS, the VR compositor
can query the OS to find the
Metal device for the GPU that
the headset is attached to.
And for performance reasons,
your app will want to select the
same device that the compositor
is using.
So, we've worked with Valve to
make sure that there's an API to
do this, simply called
GetOutputDevice, to get the
Metal device that you should
render to.
It's that simple.
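To make that concrete, here's a minimal sketch of that device selection. OpenVR is a C++ API, so this assumes a small bridging shim (the name OpenVRSystemBridge is hypothetical), and it assumes the returned identifier can be matched against each device's registryID.

```swift
import Metal

// Sketch: pick the same Metal device the VR compositor is using.
// `OpenVRSystemBridge` is a hypothetical wrapper around OpenVR's C++
// IVRSystem::GetOutputDevice; how you bridge it is up to your app.
func metalDeviceForHeadset(vrSystem: OpenVRSystemBridge) -> MTLDevice? {
    var deviceID: UInt64 = 0
    vrSystem.getOutputDevice(&deviceID, textureType: .metal)
    // Assumption: match the identifier against the system's Metal
    // devices by registry ID.
    return MTLCopyAllDevices().first { $0.registryID == deviceID }
}
```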
Now, let's talk about managing
drawable surfaces on macOS.
So, the VR compositor and your
app each maintain separate pools
of drawable surfaces.
And in a typical frame, your app
will render to textures that it
owns and submit these to the VR
compositor.
These will get composited onto a
surface that the compositor
owns, and that's the surface
that'll get presented to the
headset.
And on macOS, IOSurfaces are
ideal for transferring this
rendered data from your app to
the compositor.
So, make sure that you create
Metal textures that are backed
by IOSurfaces.
As a refresher, let's take a
quick look at how you create
those textures.
So, you'll want to build a
texture descriptor that
specifies the Render Target
usage flag, since your app will
be rendering into it, but also
the Shader Read usage flag,
since the compositor will be
sourcing it as an input.
And then, to create your left
and right eye textures, you
simply pass IOSurfaces that you
previously allocated, along with
this texture descriptor, to
newTextureWithDescriptor.
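As a rough sketch, assuming a Metal device in `device` and per-eye dimensions in `eyeWidth` and `eyeHeight` (placeholder names), that texture creation might look like this:

```swift
import Metal
import IOSurface
import CoreVideo

// Build the descriptor: the app renders into the texture (renderTarget)
// and the compositor samples from it (shaderRead).
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm, width: eyeWidth, height: eyeHeight,
    mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]

// Allocate an IOSurface with matching dimensions (BGRA, 4 bytes/pixel).
let surfaceProperties: [CFString: Any] = [
    kIOSurfaceWidth: eyeWidth,
    kIOSurfaceHeight: eyeHeight,
    kIOSurfaceBytesPerElement: 4,
    kIOSurfacePixelFormat: kCVPixelFormatType_32BGRA
]
let leftEyeSurface = IOSurfaceCreate(surfaceProperties as CFDictionary)!

// Wrap the IOSurface in a Metal texture the compositor can share.
let leftEyeTexture = device.makeTexture(descriptor: descriptor,
                                        iosurface: leftEyeSurface,
                                        plane: 0)
```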
All right.
I'd now like to take a few
minutes to describe a
typical frame in a VR app.
This is important, because your
app and the VR compositor need
to work in lock step.
As I noted before, the rendered
output of your app will be
passed to the VR compositor for
additional processing on the
GPU.
And since the GPU is a shared
resource, synchronization and
when work is scheduled matters.
So, let's start at the beginning
of the frame.
Your app will need to query the
VR system to get the headset
poses that it needs to render
the frame.
For SteamVR, this is done by the
WaitGetPoses call.
And then, your app can encode
the render commands for that
frame immediately after getting
those inputs.
And then, once you've encoded
your command buffer, you can
commit it to Metal to queue onto
the GPU.
And then, submit your left and
right eye textures to SteamVR.
This'll wake the compositor so
it can start encoding its GPU
work for the frame.
And then, since ordered
execution matters, your app will
also need to signal the VR
compositor when work that is
sent to the GPU has been
scheduled.
So, for a Metal SteamVR app, you
simply wait until your command
buffer has been scheduled, and
then, you can call the SteamVR
PostPresentHandoff function.
This'll signal the VR compositor
that it can commit its work to
the GPU and it'll get queued up
in the right order.
So, let's see what that would
look like in your draw loop.
So, at the top of your loop
you'd have your WaitGetPoses
call, to gather the inputs from
the headset.
You then build your command
buffer to render your scene and
commit it to the GPU.
And then, at this point, you'll
submit your left and right eye
textures to SteamVR.
And then, once you've waited for
that command buffer to be
scheduled, you can call
PostPresentHandoff to tell the
VR compositor that it's now free
to commit work to the GPU, as
well.
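Pulling that together, here's a rough sketch of such a draw loop in Swift. The SteamVR calls go through OpenVR's C++ IVRCompositor interface, so `compositor` here stands for a hypothetical bridge, and `commandQueue`, `encodeSceneRendering`, and the eye textures are placeholders for your own code.

```swift
func drawFrame() {
    // 1. Block until SteamVR hands back the predicted poses. This call,
    //    not the vertical blank, defines the start of your frame.
    let poses = compositor.waitGetPoses()

    // 2. Encode this frame's render commands using the fresh poses.
    let commandBuffer = commandQueue.makeCommandBuffer()!
    encodeSceneRendering(commandBuffer, poses) // app-specific

    // 3. Commit to Metal so the work queues onto the GPU, then submit
    //    the eye textures to SteamVR to wake the compositor.
    commandBuffer.commit()
    compositor.submit(leftEyeTexture, rightEyeTexture)

    // 4. Once our work has been scheduled, signal the compositor that
    //    it can commit its own GPU work in the right order.
    commandBuffer.waitUntilScheduled()
    compositor.postPresentHandoff()
}
```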
So, one more note before I move
on.
It's worth noting that as we
extend this diagram to include
the next frame, it's
WaitGetPoses and not the
vertical blank that defines the
start of the frame for your app.
This is an important thing to
keep in mind, and we'll be
coming back to this, very
shortly.
All right.
Let's move on to talk about some
best practices.
So, the first is to avoid
introducing a GPU bubble at the
start of your frame, while
encoding commands on the CPU.
So, helpfully, SteamVR offers a
useful mechanism to let you
start work for that frame early,
by giving your app a two to
three millisecond running start.
So, this should look very
familiar.
By aligning the start of your
frame with WaitGetPoses, you'll
ensure that you're taking
advantage of this optimization
and getting that running start.
All right.
Next, make sure that your app is
not building large monolithic
command buffers before
committing them to the GPU, as
this can also introduce GPU
bubbles.
Instead, you want to split your
command buffers where possible
and commit them as you go, to
maximize GPU utilization in the
frame.
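As a small illustration, assuming `commandQueue` and some app-specific encoding functions (placeholder names), committing as you go might look like this:

```swift
// Commit early passes as soon as they're encoded, so the GPU can start
// while the CPU keeps encoding the rest of the frame.
let shadowCommands = commandQueue.makeCommandBuffer()!
encodeShadowPass(shadowCommands)   // app-specific
shadowCommands.commit()            // GPU starts here

let sceneCommands = commandQueue.makeCommandBuffer()!
encodeScenePass(sceneCommands)     // app-specific
sceneCommands.commit()
```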
So, the next optimization that
we recommend is to try and
coalesce your left and right eye
draws together.
The Metal 2 Viewport Array
feature provides a great
mechanism for you to do this, by
letting your app select a per
primitive destination viewport
in the vertex shader.
This can substantially reduce
your draw call overhead, as you
can now render to both the left
and the right eye with a single
draw call.
So, let's take a look at an
example of how you can adopt
Viewport Array for your Metal
app using instancing.
So, the first thing I want to
point out is that you'll need to
create a texture that's twice as
wide, because you'll now be
rendering both the left and
right eye scene to the same
texture.
And then, you simply create your
viewport array that defines the
bounds for your left and your
right eye viewports.
And then, you can pass this
viewport array to your render
command encoder using the new
setViewports API.
And then, at this point, you'll
want to make an instanced
drawPrimitives call with an
instance count of 2, to issue the
draw to your left and right
viewports.
We'll be using the instance ID
as our eye index in the vertex
shader.
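Sketched out in Swift, with `renderEncoder`, `vertexCount`, and the per-eye dimensions assumed to exist, the host side of this might look like:

```swift
// Two viewports side by side in the double-wide render target.
let viewports = [
    MTLViewport(originX: 0, originY: 0,
                width: Double(eyeWidth), height: Double(eyeHeight),
                znear: 0, zfar: 1),                        // left eye
    MTLViewport(originX: Double(eyeWidth), originY: 0,
                width: Double(eyeWidth), height: Double(eyeHeight),
                znear: 0, zfar: 1)                         // right eye
]
renderEncoder.setViewports(viewports)

// A single instanced draw covers both eyes; the vertex shader uses
// the instance ID to pick the viewport.
renderEncoder.drawPrimitives(type: .triangle,
                             vertexStart: 0,
                             vertexCount: vertexCount,
                             instanceCount: 2)
```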
So, let's take a look at that
vertex shader.
So, the first thing I want to
point out is that the viewport
is selected by the new
viewport_array_index attribute.
And then, as I previously noted,
we're using the instance_id as
our eye index.
And we'll be able to use it to
access our viewport dependent
data like this model view
projection matrix in this
example.
And then, finally, you do want
to associate your viewport index
with the instance_id, so that
the correct viewport is selected
when rasterizing your image.
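Here's a minimal sketch of such a vertex shader in the Metal shading language; the buffer layout and the names are illustrative, not from the session.

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    // Routes the primitive to a viewport when rasterizing.
    uint   viewportIndex [[viewport_array_index]];
};

vertex VertexOut eyeVertex(uint vid      [[vertex_id]],
                           uint eyeIndex [[instance_id]],
                           constant float4x4 *mvpPerEye        [[buffer(0)]],
                           const device packed_float3 *positions [[buffer(1)]])
{
    VertexOut out;
    // The instance ID is our eye index: use it to fetch viewport-
    // dependent data, like the per-eye model-view-projection matrix.
    out.position = mvpPerEye[eyeIndex] * float4(float3(positions[vid]), 1.0);
    // Associate the viewport with the instance, so each eye's geometry
    // lands in its half of the double-wide target.
    out.viewportIndex = eyeIndex;
    return out;
}
```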
And that's how you use the new
Viewport Array feature to reduce
the draw call cost in your VR
apps.
All right.
The final optimization that I
want to talk about, today, is
really a general best practice.
You want to try and reduce the
number of pixels that you're
shading every frame.
So, due to the nature of the
lens warp, about 15% of your
rendered scene is not actually
visible on the VR headset.
This is represented by the blue
regions in this image.
Luckily, SteamVR offers a mesh
based stencil mask that is
specific to the Vive headset
that you can use to cull these
pixels.
It's really easy to use and
provides a substantial benefit.
All right.
That brings us to the end of our
brief journey into building a VR
app.
So, with that background
covered, I'd like to now
introduce Nat Brown from Valve
Software, to come onstage and
talk a bit more about SteamVR on
macOS.
[ Applause ]
Thanks, Nat.
>> Thanks, Rav.
Hi, everybody.
I'm Nat. I work on VR at Valve.
So, if you don't know Valve,
we're a game company.
We distribute games and we have
a community of gamers that play
on Steam.
It turns out, games are actually
a super-interesting crucible for
user interface and
human/computer interaction.
And at Valve, we do a lot of
experiments around games and
input.
So, years of VR prototypes
really didn't click for us, for
making games or any other kind
of content, until we found this
sweet combination of 90Hz
low-persistence displays and
accurate room-scale tracking,
with two tracked controllers.
We consider this a magical
threshold for VR.
I like to think of it as kind of
similar to how when you first
used a smartphone with low
latency accurate touch, it
really felt magical.
That's what this magical
threshold is, for VR.
Once room scale VR really
clicked for us, we knew that we
could build VR games and VR
content.
We license aspects of our VR
technology like base stations,
headset lens designs, and so
forth, non-exclusively to
partners like HTC and LG.
And we have a big program that
licenses Lighthouse tracking
technology to lots of different
partners.
Our approach to the software
stack, which you've heard a
little bit about already, is
called the SteamVR runtime.
SteamVR has an application
model, up above, and a hardware
and driver model, down below.
Our goal here, is to promote
experiments in VR, because we're
really in the early days of what
VR is and what the content's
going to be like.
We wanted to make this model so
that people could experiment in
VR hardware and content and
reduce risk.
So, you can go out, maybe not
you, but some of you can go out
and design a new headset or a
new tracking system, or new
controllers.
And then, you can come to the
OpenVR platform, and you can
write a driver that plugs right
in.
And you'll have access to all
the existing content that's
already running.
That will give you real world
tests that make your hardware
better.
Because you and your customers
can compare your new idea
directly with other hardware
that's already out there.
So, hardware developers don't
have to develop custom content
and content developers don't
have to bet on which hardware
will win.
They can just focus on making
great content.
So, applications link to the
OpenVR framework.
It's a really tiny library, and
all it knows how to do is find
the runtime that's currently
installed.
So, it finds the VR client
library, which is a shared
library that either
connects to or launches the
other runtime processes of
SteamVR.
The vrmonitor process.
You're going to see a lot of
that little window.
It's a UI and settings
application.
It shows you the state of any
attached headsets and
controllers, and the tracking
sensors.
The vrserver is in charge of
keeping track of drivers and
loading alternate drivers.
And it puts poses and other
information into shared memory,
so that your application and
the rest of the SteamVR runtime
have access to it.
The vrcompositor, which you just
heard a little bit about, is
sort of like the window
server.
It draws scenes and overlays to
the headset.
And it corrects images for lens
distortion and color.
And one of the things that's
sort of under the hood, that you
may not understand, is it
corrects for smearing and
ghosting under motion.
And it also fades into a stable
tracking area when applications
fail to meet frame rate,
because we don't want people to
fall over or bump into
something.
So, the vrcompositor
communicates with Metal, like
you heard.
It presents directly to the
headset through the Direct to
Display Metal 2 API.
Let's see.
Last, but not least, the mighty
vrdashboard.
That is a piece of UI that lets
you select applications.
It lets you control volume and
other system settings.
We provide a default one that
shows you your Steam library and
lets you choose applications.
But there's actually an API,
so you can write your own
dashboard application, as well.
So, Valve and Apple started
working together more closely
about a year ago.
And our port to Metal from
OpenGL didn't cost us very
much.
Metal is a really cool API.
And it was critical for us to
get VR up and performing.
Our biggest request to Apple, a
year ago, was for this Direct to
Display feature.
Because it's critical to ensure
that the VR compositor has the
fastest time predictable path to
the headset display panels.
We also really needed
super-accurate, low-variance VBL
(vertical blank) events, so
that we could set the
cadence of the VR frame
presentation timing, and we
could predict those poses
accurately.
Predicting the pose accurately
is actually more important than
the time between when motion
happens and when the display
happens.
If we know when it's going to
be, that's what actually matters.
Finally, we hit some speed bumps
around inter-process and
inter-thread synchronization.
Once everything else was working
really well, Metal was blazing
fast, we had really tight VBL,
but we still were having some
synchronization problems.
But Apple helped us find better
ways to signal and synchronize
with low scheduling variance
between all the processes and
threads involved.
Okay. So, my diagram of a VR
frame is a little more
complicated.
Most of you are never going to
look under the hood, this far.
But I thought I'd show it to
you, anyway.
So, the low-persistence OLED of
the HTC Vive uses global
illumination.
All of the pixels of the display
flash for a tiny period, all at
once.
And this is common in VR,
because heads move really
quickly.
And we want to make sure that
the image doesn't smear or tear
in front of the user.
So, the panel illuminates for
about two milliseconds, one
frame after it is presented by
the GPU, because the panel takes
time to charge before it can do
that global illumination pulse.
We're going to follow this red
frame through this sequence.
So, way out here at the end,
that's the photons coming out.
Because of this timing,
applications typically pick up a
pose, like you heard before,
from the compositor's
WaitGetPoses.
So, WaitGetPoses stalls and
returns with a pose for that
future firing of photons.
So, the rendering is happening
here, and you present it there,
here in the middle.
But the photons don't go out
there.
So, we have to predict a pose
about 25 milliseconds out.
25 milliseconds is two
frames, plus the little tiny bit
of running start.
And you heard how running
start's very important, because
we want to give you as much of
that 11 milliseconds of GPU to
get the most excellent pixels up
in front of the users, that you
can.
So, one last thing that's
happening, here.
You see this frame actually
stretches back all the way to
the beginning of the slide.
And that's because this
application, and your game
engine might be doing this under
the covers, is kind of more
complicated.
It has some physics and input
event processing that it needs
to do.
And that's costly work that's
going to take some CPU time,
over here.
But that code actually needs
pose information, also.
It needs to know when the
buttons were pressed and where
the controllers were.
Maybe it's interpolating
something having to do with
motion, or you're blocking
something, or you're firing
something.
So, this thread is actually
going to wake up at around the
same time, because WaitGetPoses
gives it this important sync
point.
But it's going to be calling a
different API, because it's
trying to get a pose much
further out.
It's looking 36 milliseconds
out, and so, it's going to be
calling
GetDeviceToAbsoluteTrackingPose.
So, I tell you this just so you
know that OpenVR has some pretty
deep APIs for you to tune your
applications, so that you're
predicting where the headset and
where the controllers are going
to be very accurately, based on
where your code needs them to
be.
So, the point of WaitGetPoses is
that it gives you a predictable
point in time and a running
start, so you always know when
those photons are going to come
out.
So, last but not least, let's
talk about what you need to do
to get up and running with
SteamVR on macOS.
So, first of all, SteamVR is a
tool in Steam.
As a developer, you get
started by installing Steam and
registering for a free account.
And if you don't use Steam, you
should.
Next, install SteamVR itself.
SteamVR is under the Library
menu, in Tools.
Search for SteamVR, right-click
on it, choose Properties,
choose the Beta tab, and opt
into the beta.
For now, it's a beta.
Then, install it.
We'll be keeping SteamVR up to
date as we fix any bugs that you
find.
Finally, you want to download
the OpenVR headers and the
framework from GitHub.
And I've put a link right up
there for you in the slides.
So, here's the funky part.
You need to include that OpenVR
bootstrapping framework inside
of your application.
The OpenVR framework that you
link to conveys the version of
the runtime interfaces that
you've built and tested against.
And that allows us to upgrade
the runtime and to version
forward gracefully, because we
move this forward quite
actively.
In Xcode, instead of just adding
the framework to your link
phase, go into General Settings
and make it an embedded binary,
so it will be installed into the
Contents/Frameworks portion
of your application bundle.
Okay. Finally, we really want
your feedback.
So, we put some things right
into the UI of vrmonitor.
There's a pointer to SteamVR's
support site and the community
hardware discussions.
And you can report a bug, create
a system report, and send it to
us or probably send it to me.
You can reach me at
natb@valvesoftware.com, but I'd
rather you use the tool.
And with that, thanks very much.
I'm really looking forward to
what you guys make with VR.
And thanks, to everybody at
Apple for making VR shine on
macOS.
Thank you.
[ Applause ]
>> Thanks, Nat.
It's been great working with
Valve, and I'm still astounded
by what we've been able to
achieve over the last year.
All right.
Let's move on to talk about the
external GPU support that we've
added with macOS High Sierra.
So, an external GPU is a
standalone chassis with a
desktop-class GPU in it, that
you can plug directly into your
host system via Thunderbolt.
And as noted previously, the
primary motivation here, was to
enable developers like you to
build great VR apps using a
broader range of Mac hardware.
There's a great workflow story,
here, where you can use your
MacBook Pro with an external GPU
to get the rendering horsepower
that you need for VR.
But of course, there's also
additional performance benefit
to be had for other GPU bound
cases, like games and pro apps,
as well.
And as you saw on Monday, we've
been partnering with Sonnet and
AMD to offer you an external
graphics developer kit with an
AMD Radeon RX 580 GPU in it.
This kit is optimized for use
with all our Thunderbolt 3
capable Macs, and is available
for purchase through our
developer program, today.
All right.
Let's dive right into how you
identify the external GPU.
This device enumeration code
should look very familiar to
you.
CopyAllDevices will give you all
the Metal devices in the system.
And then, you can identify the
external GPU by simply looking
for the removable property in
the device.
This is very similar to how you
previously identified the low
power devices on our platforms.
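As a short sketch, that check is just a filter over the device list:

```swift
import Metal

// Enumerate every Metal device in the system, then pick out the
// external GPU via its removable property.
let allDevices = MTLCopyAllDevices()
let externalGPU = allDevices.first { $0.isRemovable }
// For comparison, the integrated GPU is still found via isLowPower.
let lowPowerGPU = allDevices.first { $0.isLowPower }
```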
Now, let's talk a bit about
Thunderbolt bandwidth
capabilities.
So, Thunderbolt 3 offers twice
the theoretical bandwidth of
Thunderbolt 2, which is great.
But you have to keep in mind
that this is still a quarter of
the bandwidth of the PCIe bus
available to the internal GPUs
in our platforms.
So, this is important.
You have a choice, now, between
using the internal GPU with a
high bandwidth link, or a high
performance external GPU with a
link at about a quarter the
bandwidth.
So, you need to treat the link
and the GPU as a pair when
deciding which GPU you use.
Additionally, users can now
attach displays to different
GPUs.
And in this environment, there's
a penalty to render on one GPU
and then display on another, as
that data needs to be
transferred across the link.
So, where your content is
displayed clearly is a huge
consideration when you decide
which GPU you want to use, as
well.
So, there's additional
complexity here.
But fortunately, there are a
couple of simple things that you
can do to make your app a good
citizen in a multi-GPU
environment.
So, let's start with GPU
selection.
The best advice that we can give
you is to render on the same GPU
that's driving the display your
app is on.
I call this the golden rule of
GPU selection.
So, let's extend this and build
a decision tree.
So, if the content your app is
rendering will be presented to a
display, you want to select the
GPU that's driving that display.
This is our golden rule.
However, if your app is doing
compute or other offline
rendering operations, then you
need to decide if you prefer to
use the low power GPU, if it's
available.
This can be particularly useful
on our portables, where
selecting this device can have a
substantial battery savings.
But of course, if you need the
GPU horsepower for things like
VR, you're going to want to
select the external GPU.
So, let's get back to our golden
rule and find out how you
identify the Metal device that's
driving a particular display.
Well, it turns out that this is
really easy to do.
There's an existing Core
Graphics API that will give you
this device.
You simply have to get the ID
for the display that your window
is on, by querying the
NSScreenNumber.
And then, call
CGDirectDisplayCopyCurrentMetalDevice
to get the Metal device
that's driving that display.
It's that simple.
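In Swift, assuming `window` is your app's NSWindow, a sketch of that lookup might be:

```swift
import AppKit
import Metal

// Find the Metal device driving the display a given window is on.
func deviceForWindow(_ window: NSWindow) -> MTLDevice? {
    guard let screen = window.screen,
          let screenNumber = screen.deviceDescription[
              NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber
    else { return nil }
    let displayID = CGDirectDisplayID(truncating: screenNumber)
    return CGDirectDisplayCopyCurrentMetalDevice(displayID)
}
```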
All right.
Now that we've established that
each display can be attached to
a different GPU, your app will
need to handle GPU migration as
your window is moved across
displays.
So, you can do that by
registering for our existing
notification handler called
WindowDidChangeScreen.
So, let's take a look at what
you do with this notification
handler.
So, you'll want to start by
finding the Metal device for the
display your app is now on, by
calling the core graphics API
that we previously discussed.
You can early out if it's the
same device that you're
currently rendering to, since no
GPU migration will be required.
And then, you'll want to perform
your device migration and switch
to using the new device for all
your rendering.
So, that's how you use a display
change notification to handle
GPU migration.
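Putting that handler together, here's a rough sketch; it reuses `deviceForWindow` from the earlier example, and `migrateRendering(to:)` and `currentDevice` are placeholders for your own migration code and state.

```swift
NotificationCenter.default.addObserver(
    forName: NSWindow.didChangeScreenNotification,
    object: window, queue: .main) { notification in
    guard let movedWindow = notification.object as? NSWindow,
          let newDevice = deviceForWindow(movedWindow) else { return }
    // Early out: the new display is driven by the GPU we already use.
    if newDevice === currentDevice { return }
    // Otherwise, migrate resources and render with the new device.
    migrateRendering(to: newDevice)
    currentDevice = newDevice
}
```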
But what about the case where
the external GPU is plugged in,
or unplugged from your system?
Well, Metal 2 introduces three
new notifications to help you
with this case.
These are DeviceWasAdded when an
external GPU was plugged in,
DeviceWasRemoved when it's
unplugged, and
DeviceRemovalRequested when the
OS signals an intent to remove a
GPU at some point in the future.
So, let's take a look at how you
would register for, and then,
respond to these notifications.
So, you'll want to use the new
CopyAllDevicesWithObserver API
that we've introduced with Metal
2.
This will let you register a
handler for these new device
change notifications.
In this case, we're simply
invoking a function called
handleGPUHotPlug.
So, let's take a look at it.
It's really straightforward.
All you have to do is check for
and directly respond to each
notification.
But I want to point out a couple
of things, here.
The first is that your app
should treat the
DeviceRemovalRequested
notification as a hint to start
migrating off the external GPU.
And second, if your app did not
receive the
DeviceRemovalRequested
notification, then it should
treat DeviceWasRemoved as an
unexpected GPU removal.
So, an unexpected GPU removal is
when your external GPU is
disconnected or powered down
without the OS being aware.
So, this is the equivalent of
somebody reaching into your
system and yanking out that GPU.
And since the hardware is no
longer there, some Metal API
calls will start returning
errors.
So, you'll want to add defensive
code to your app to protect
against this, so it can survive
until it receives a migration
notification and can
gracefully switch to another GPU
in the system.
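Here's a sketch of all of that wired together. The migration functions are placeholders for app-specific logic, and the observer should be retained for as long as you want notifications.

```swift
import Metal

var removalWasRequested = false

// Register a handler for GPU hot plug notifications (Metal 2).
let (_, deviceObserver) = MTLCopyAllDevicesWithObserver { device, name in
    switch name {
    case MTLDeviceWasAddedNotification:
        // An external GPU was plugged in; consider migrating to it.
        considerNewDevice(device)                 // app-specific
    case MTLDeviceRemovalRequestedNotification:
        // A hint from the OS: start migrating off this GPU now.
        removalWasRequested = true
        migrateOffDevice(device)                  // app-specific
    case MTLDeviceWasRemovedNotification:
        if !removalWasRequested {
            // No prior request: treat this as an unexpected removal
            // and fall back defensively to another GPU in the system.
            recoverFromUnexpectedRemoval(device)  // app-specific
        }
        removalWasRequested = false
    default:
        break
    }
}
// Keep `deviceObserver` alive to continue receiving notifications.
```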
It's also worth pointing out
that if you had any transient
data on the external GPU's local
memory, your app may need to
regenerate this, as it's no
longer there.
All right.
Now, on to some best practices.
So, we recommend that you don't
immediately release devices
after you migrate, as users tend
to move windows back and forth
across displays.
Instead, employ a lazy teardown
policy with some reasonable
timeout.
Additionally, if your app has a
complex UI with a number of
windows, you may need to
handle GPU migrations separately
for each window.
And then, finally, we recommend
that you profile your app in
multi-GPU situations, to ensure
that you're not needlessly
transferring data back and
forth, across the GPUs.
All right.
Let's tie it back to VR with
some best practices specific to
this use case.
First and most obvious, make
sure you attach the VR headset
directly to the external GPU.
Less obvious, but equally
important, you want to make sure
that your app is presenting its
UI or the mirrored VR content to
a display that's also being
driven by the external GPU.
This will ensure that both the
VR compositor and your app are
rendering with the external GPU.
And finally, make sure you cache
as many resources as possible on
the external GPU's local memory,
so that you're not streaming
data across the Thunderbolt
link, unless absolutely
required.
All right.
So, this brings us to the end of
our session.
So, let's quickly review what
we've covered, today.
So, the big news is that you can
now develop VR content on
macOS.
We've added built-in support for
the HTC Vive headset.
And we've partnered with Valve
to enable an optimized Metal
version of the SteamVR runtime
on our platform.
Additionally, two of the most
popular game engines, Unity and
Unreal Engine 4, will be bringing
VR support to the Mac in the
near future.
But of course, we also encourage
you to build your own native VR
app.
And finally, we added support
for external GPUs.
And we hope that this motivates
you to ensure that your app is a
good citizen in a multi-GPU
environment, so that you can
take advantage of that
additional GPU horsepower.
We're really excited to see what
you guys are going to be able to
build with this new technology.
So, be sure to check out the
other sessions on the Metal
Track.
We had a great Introducing Metal
2 session, yesterday.
If you didn't get a chance to
see it, I encourage you to watch
the video online.
We have two additional sessions
in the afternoon, tomorrow,
Metal 2 Optimization and
Debugging, and Using Metal 2 for
Compute.
We hope to see you there.
Thank you, very much, and have a
great WWDC.
[ Applause ]