WWDC2004 Session 209

Transcript

Kind: captions
Language: en
Welcome to Wednesday at WWDC. This is Session 209, Mac OS X OpenGL In Depth. I'm Geoff Stahl, and we're going to be going into OpenGL on the Macintosh today. So first let's talk a little bit about what we will and won't cover in this session. This will not be an OpenGL 101 session; there are a lot of references for that on the web and in books. At WWDC we want to take the time to go through what differentiates the Mac OS and what's important for you to know about the Mac OS, rather than trying to teach you exactly how to use OpenGL in a day or in an hour, which is probably a difficult task.
So if you're interested — you've just heard about some of the uses of OpenGL, you've seen the keynote, and you say, wow, that's a great technology — one thing you can do is go to the opengl.org site. It's a great reference: it has a lot of documentation, it has news about what's going on with OpenGL, and it has links to all the specifications and to the extension registry with the extension specifications. This is a key reference when you're working with OpenGL, working with an extension, or working with the API. Also, if you're a beginner and you want to learn a little about OpenGL, Google is a great resource. I searched for "OpenGL tutorial" the other day, and there are some really well-put-together tutorials for just starting out with OpenGL which will show you how to get going, and a lot of them actually have Mac OS implementations, with Mac OS code for you to use, so you can immediately get going on the Mac. Also, for starting with OpenGL, there's the OpenGL Programming Guide, which is affectionately called the Red Book; that's basically a step-by-step tutorial about what's in OpenGL. There are also things you should use as references, like the OpenGL specification and the Blue Book, which is a reference guide. I would not recommend those as learning tools, but while you're using OpenGL they're definitely something you should have on your desk, or on your computer in PDF form.
So now, what will we cover? We will cover OpenGL on the Mac OS. We're going to give an update and talk about what's new on the Mac OS since we talked last year. Then I'm going to move into OpenGL on the Mac OS and talk about the architecture, and talk a little bit about OS-dependent data — this kind of mystic data that you need to understand, things like virtual screens and pixel formats that you may be using in your OpenGL applications but may not quite understand: exactly how they fit in, and how you can best use them to your advantage to write great apps. Then we'll talk about interfaces, and that will be kind of a review of the interfaces you can use depending on where you're coming from and what kind of application you have. Finally I'll wrap up with a few techniques to take advantage of some of the great things you can do with OpenGL on the Macintosh.
So let's start with the OpenGL update: where we've been and where we're going. One thing you'll notice, if you're familiar with OpenGL on the Macintosh, is that our updates are slightly different from a regular OS update. For a regular OS component you may see a major feature increase, like in Panther, and then if there's a bug or two they'll fix it in a software update. Well, because of the GPU, because of the driver vendors, and because of the rate of increase of hardware capabilities, we're continuously trying to rev OpenGL, trying to get the features that you guys need to you as soon as possible. What this means is you'll see our update cycle has used software updates an awful lot to get features to you. Just recently, Mac OS X Panther 10.3.4 was a software update, and it was a major OpenGL update, and you'll continue to see OpenGL updates as we move forward. So what you have right now on the disk with the Tiger seed is actually the current snapshot of OpenGL. As we move forward there will be more snapshots, more seeds, more updates, and those kinds of things as we move toward Tiger — so expect some of the features I'm going to talk about to start rolling out as we move closer to Tiger.
So what have we been doing in the past year? Well, a lot. Since you probably can't read all this and take it in, let's break it down and talk about it in three different categories: we'll talk about new features, we'll talk about performance, and we'll talk about some of the bug fixes. At the end we'll talk about looking forward and where we're looking to go. So in Panther a couple of key features were added which you can take advantage of. First up, pixel buffers. Pixel buffers basically adopted the industry-standard off-screen accelerated rendering API, with a little bit of Mac OS flavor to fit into our API. Since it is a windowing-system API, it has to deal with the windowing system, and it allows you to do things you couldn't do before with off-screen rendering. What this means is you do not have to bring a window up: you can do all your calculations off screen and then look at the content. Things like Core Image can use pixel buffers to great advantage — do all the calculations off screen, get the acceleration of the GPU, and then get the result and display it or use it any way you need: form a texture for a different image, display it, or save it off as maybe a movie or something like that.
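The pixel buffer flow described here can be sketched in CGL roughly as follows. This is a hedged sketch, not the session's demo code: the sizes, attributes, and texture target are purely illustrative, it is Mac OS X-only, and error handling is abbreviated (these pbuffer calls were later deprecated in favor of framebuffer objects).

```c
/* Sketch: accelerated off-screen rendering with a CGL pixel buffer.
   Assumes Mac OS X with the OpenGL framework available. */
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

int main(void)
{
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,               /* prefer a hardware renderer */
        kCGLPFAPBuffer,                   /* must be pbuffer-capable    */
        kCGLPFAColorSize, (CGLPixelFormatAttribute)32,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix;  GLint npix;
    CGLContextObj ctx;      CGLPBufferObj pbuffer;

    if (CGLChoosePixelFormat(attribs, &pix, &npix) != kCGLNoError)
        return 1;
    CGLCreateContext(pix, NULL, &ctx);

    /* A 256x256 RGBA pbuffer we can later bind as a texture. */
    CGLCreatePBuffer(256, 256, GL_TEXTURE_2D, GL_RGBA, 0, &pbuffer);

    /* Route the context's drawable to the pbuffer and render off screen. */
    GLint screen;
    CGLGetVirtualScreen(ctx, &screen);
    CGLSetPBuffer(ctx, pbuffer, 0, 0, screen);
    CGLSetCurrentContext(ctx);

    glClearColor(0.f, 0.f, 1.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);     /* ...draw your scene here... */
    glFlush();

    CGLDestroyPBuffer(pbuffer);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
```

Once rendering is done, the pbuffer contents can be bound as a texture in another context (CGLTexImagePBuffer) or read back with glReadPixels — exactly the "render off screen, then use the result" pattern described above.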
Other things we added: floating-point pixels. People didn't make a really big deal about them — it's on our web page that we have the extension — but you'll notice, in some of the pixel formats that we'll talk about later, you can ask for floating-point pixels. What this means is you can actually do calculations and have the result be floating-point numbers, instead of just RGB values. This means you can render a data set — we're going to talk about shading later — you can take a vertex or a fragment shader, have the results of a rendering end up as floating-point pixels, and use that as some kind of data set. So you can render a procedural bump map, or something like that; the sky's the limit as far as using floating-point pixels and the precision you can use there. Also in Panther we added a lot of GLUT enhancements, which I'll go over a little bit later, focused especially on the scientific community, allowing people to take advantage of some of the unique capabilities of the Mac OS: some of the desktop capabilities, some of the integration with other peripherals. Since Panther, we've done a lot of work in the software updates. You can see a list of some of the extensions we've added, as well as the features we've added — and this is just the features; next I'll talk about the performance updates. Again, what we're trying to do is roll out these features as the ARB approves them, as they're available in drivers, and as we're able to implement them and get them out to you, for as wide an audience of users as possible and as soon as possible.
So what that means is you'll see things like this continue as we move forward. Things that were added in software updates: occlusion query; vertex buffer objects, a much-requested addition alongside vertex array range, a much-requested way of optimizing the transport of vertices up to the graphics card. As I go through the presentation I'll note some other sessions you might be interested in. Tomorrow at ten-thirty — I'm not sure of the room; I believe it might be Nob Hill, but I'm actually not sure — there's an OpenGL optimization session, and then in the afternoon there's a session actually showing you how to use some of our tools. They'll talk specifically about things like vertex buffer objects and how you can use them in your application. Then things like point sprites, non-power-of-two textures, depth bounds test, blend equation separate, and texture mirror clamp: these are all extensions, features that were added to OpenGL since Panther shipped. And Tiger — one of the main things in Tiger, which we've kind of already announced, will be the OpenGL shading language. We're working really hard on OpenGL shading language support; you should see it coming soon, so you can start working with the shading language. There will be full support for the shading language — depending, of course, on whatever hardware you have supporting it — but there are also hardware and software paths for it, so you should be able to use the full API in your application.
So, performance. Again, we were really busy. One of the key things we've heard is that you really want as much performance as possible out of the API. For Panther there were things like static geometry optimizations with display lists. In a lot of instances — we're the largest UNIX vendor out there — scientific applications have come to the Mac OS and said, hey, I really need to use a lot of static geometry; molecular modelers, for example. So we've optimized display lists to be processed into vertex array ranges, and that allows you to use static geometry in its optimal form. Improved hardware acceleration for texture copy — glCopyTexImage: the data stays in VRAM on the card, with improved speed there. Optimized color conversion, and texture compression. Texture compression is something that makes people say, texture compression? That's that ugly thing that you get poor textures out of, and it won't make my app look good. Well, it turns out some of the texture compression schemes now are good enough that you can have less data and higher quality images, because they allow you to use larger images in your app than you could before, given your data size constraints. So texture compression is something to look at, especially if you're dealing with large amounts of image data. And streamlined mipmap generation — again, let's accelerate everything as much as possible so you can get the highest quality images and the highest quality texture handling.
Panther software updates: since Panther shipped, in the past year, we've done things like more immediate-mode optimizations. Again, scientific applications coming to the platform need to use immediate mode — they have cross-platform, source-compatible code, and they don't always have the option to change to a vertex array range or a display list; they may want to stay in immediate mode — so we've optimized that. We've improved handling of pixel data throughout the system, working very hard to make sure we have an optimal pipeline for getting pixel data through. We also made vertex program emulation more robust, with some speed enhancements. What that means is: we understand that some of you want to run vertex programs without the hardware support, with that as your only path, and we're working to improve that as much as possible, to get you the best CPU support for running vertex programs. Then, obviously, one thing we always look at is bandwidth improvements throughout the whole system. And last but not least is the addition of asynchronous texture fetching as a feature. John, in a session tomorrow, will talk about options for getting textures back from the card. Again, the GPU is your coprocessor; it's a second processor on the system. You have a CPU and a GPU, you want to load-balance them, and to do that you need to be able to get the data that you processed on the GPU back off the card. Asynchronous texture fetching, similar to OpenGL's other asynchronous APIs, will allow you to start the retrieval, use the GPU to push the data out without using CPU power, and then later get the data once it's retrieved. What this means is your application does not have to block at the point where it's trying to copy the data back, so it's something to pay attention to if you're moving large amounts of data back and forth.
So, Tiger and future software updates — what we're looking at as we move forward. We're obviously going to continue to improve resource handling. What you may have gathered from all the talks we've done is that there are a lot of clients of OpenGL right now. We have tons of people using OpenGL on the system, from your applications to the window server to Core Image to Core Video — lots of things you see on the screen are now running through OpenGL. So it's really important to us, from an OS standpoint, to make sure resource handling is optimized, which benefits you guys: you get optimal resource handling for your application, and you gain those same benefits. We're also going to look at optimizing the vertex and fragment program and shader implementations. As we move forward with GLSL, the OpenGL shading language, and with vertex programs and fragment programs — the assembly-level languages — we're going to optimize that pipeline and ensure that we have optimized software support for those. And then there's the announcement we made the other day: with new GPUs, the NVIDIA 6800 Ultra is a great card to have on the platform, and we'll continue to work with both ATI and NVIDIA as their new products come out to get them to the Macintosh as fast as we can for you guys.
So, bugs — there are always bugs that you guys run into: a corner case, something you may be doing differently than what we're doing. We want to hear about that. We fixed hundreds of bugs in Panther and since then in the software updates, but one thing we want to do is make sure we know what problems you have. So this is a call out to you guys: if you have a bug, make sure it's in Radar — make sure you've filed it in Bug Reporter on the ADC website — and that we know about it. If you have questions on it — you've probably seen our emails on the websites — you can always query us, either through the website or directly, and see what the status of things is. We want to know what problems you're having so we can correctly prioritize and make sure we're covering the things that are causing problems.
So, looking forward: from today toward shipping Tiger and beyond, one thing we really want to do — we feel we now have a mature system; we're going to continue adding features and adding hardware support, but we really want to focus on quality. We have new test suites that we're implementing on our end. We have unprecedented use in Tiger: as I've talked about, things like Core Image, Core Video, Quartz Extreme, iChat AV — those are all built on top of OpenGL. We can't build these apps — these apps can't be reliable — unless we have a quality, bug-free OpenGL stack. So it's really important to us, we're working really hard on it, and I just want to make sure you guys know that's what our focus is: get us the information, and we'll continue to try to fix any problems you have. The new features are produced through working groups, so you're seeing things — I'm not even sure what the spec file name will be, but render target, for example, or uber buffers — that we're working through very diligently with the working groups, to make sure the spec is a good spec for developers to develop on. And as soon as things like that are ratified, we're going to start implementing them, as the request and need arises for the different specifications. If there are specifications or extensions that you're particularly interested in, or that would be particularly beneficial to your application, please let us know: go to Bug Reporter, file a feature request bug, and let us know that your application is either blocked by this feature or that you're interested in using it in the future.
One thing about the future: when is the future for an application? You may be working on applications right now — that's great; we want to hear about bugs, we want to hear about features — but obviously there's some lead time for us to implement things. So think ahead, think about your next application: hey, cool, I'm going to use the shading language, I'm going to use render targets in my next application, this is really important to me, I need that on the system. Let us know ahead of time; that will give us time to implement things and to get caught up with where you guys are, where you want to be, and what you want to use on the Mac OS. So that's the end of the update; that's kind of where we are. We've brought everyone up to speed, so we know where we came from last year. Now let's step back and talk a little bit about the system itself. One thing we haven't really covered before — in previous years I've talked about interfaces and how to do a few techniques here and there, but we really haven't talked about the architecture of OpenGL on the Macintosh: what makes it great, what makes it different from architectures you may be used to dealing with, and how you can take advantage of that.
So this is my big sentence here: it's a multi-client, multi-threaded, multi-headed, high-performance graphics API with virtualized resources. So it's not your parents' graphics API, nor is it our competitors' graphics API. The key here is that there are a ton of clients using it, it has multi-threading capabilities, we're trying to eke all the performance we can out of it, and there are things you can do in your application to improve your app and make it run great on OpenGL. So we're going to talk about the OpenGL driver model — how it's architected and how that matters to your application — and then we're going to go into a little bit about the framework model and how it fits with your application; even if you're not using OpenGL yet, it gives you an idea of where you need to start.
So the first thing I want to talk about is a little bit about the framework interface, and then how this fits into the driver model. The way you read this particular slide is: your application will always use OpenGL — if you're an OpenGL application, that's kind of by definition. The second part, on your left side, is which interface you're actually accessing — which windowing-system-dependent interface. The reason I have it stacked like this is to show you, just for your information, where things are and what things are built on. At the bottom layer we have CGL. CGL is kind of our core OpenGL interface; it's the base windowing-system interface, and applications can use it directly — an application like a full-screen application might use it directly. Built on top of that are both AGL and the NSOpenGL implementations, both NSOpenGLContext and NSOpenGLPixelFormat. So if you're in a Cocoa application, you want to look at the NS classes; if you're a Carbon application, AGL is where you want to go. But this also says you don't need to use CGL — it's not required that you use the lowest level; you can use the next level up, or further. If you're a Cocoa application and you really want to use OpenGL, but you don't really want to get into contexts and pixel formats, you can use NSOpenGLView directly. What that allows you to do is skip the fact that you need to handle pixel formats or contexts: you can go into Interface Builder, use the OpenGL widget, and just directly use that and link it into your application. It simplifies the process of getting up and running, so that's something you should look at if you're a Cocoa application without a lot of special constraints. Above that, GLUT was actually built on top of our Cocoa interface, so GLUT is a framework, but it's really kind of an application, or a client, of the NSOpenGL classes.
One thing I want to mention about GLUT right now: besides being a framework, our full GLUT implementation is also up on the sample code site. We've released all our code there, so if you ever want to see how we do things, or how things are done in the GLUT implementation, you can always download it and look at it. So again, your application will reference some windowing-system API — GLUT, NSOpenGLView, NSOpenGLPixelFormat and NSOpenGLContext, AGL, or CGL directly — and it will also reference OpenGL when it hooks into the system. What that means is you have an application, and below it is a framework interface, which we just talked about — you pick which interface you need — and you also reference OpenGL, and below that is the OpenGL engine.
So we talk sometimes about the OpenGL engine — the part that actually does the rendering. The OpenGL engine is what makes this kind of multi-client; plus, it's multi-headed. It's multi-client if you look at it as the top of the tree: multiple applications come down into it. Multi-headed is the bottom of the tree: multiple renderer targets. On the bottom we have a software renderer, we have an NVIDIA renderer, we have an ATI renderer, and you can take advantage of all of these by the way the engine interfaces into the drivers. All of these exist on the system. We do the bulk of the work in the OpenGL engine — that's the common work; we separate out all the common functionality, all the common function handling, and put it into the engine. That is then dispatched through a very defined interface, which we call the GLD interface, to the GLD plug-ins: you have a software plug-in, an ATI plug-in, and an NVIDIA plug-in. This allows you to do things like have two video cards in your system and seamlessly move windows back and forth between the video cards.
What it also means — and I'll talk about this later — is that there are some things you should realize about when windows are moving: events you should handle and things you should look out for, as far as whether you are changing renderers. You're not locked to a renderer — you could lock yourself to one, but normally you're not — and your capabilities could change in flight in your application. So this is something your application should take into account: you could have one card that's a high-end, very capable card, and someone could slide the window across to a different monitor driven by a different card, and you could change renderers in that case. You need to handle that situation and not try to send commands that aren't supported on that particular video card. These things are particular to the rasterizer, and the rasterizer is directly hooked into the hardware.
Interesting to note: I have hardware down here on the bottom, and you might say, well, the software renderer is not hardware. The software renderer obviously has the CPU as its hardware. So again, we're talking about GPUs and CPUs being kind of first-class citizens here: the ATI and NVIDIA renderers work with their GPUs, and our software renderer works with the CPU. It might not be the best thing for all applications, because you're using up a lot of CPU resources rendering, but in some cases that may be what you want. You may want to use the software renderer either for a capability it has that some other renderer may not have, or as your fallback, and just move seamlessly onto the CPU.
So I've thrown out some terms here: I've talked about pixel formats, I've talked about contexts, I've talked about screens and moving between monitors. Those things are based on OS-dependent data. These are things you won't find on another platform — you may find their versions of them, but you won't find these particular definitions, and these particular definitions are a little Mac-specific. I mean, while a context may exist on a Windows platform or on a Unix platform, the context has some specific things you can do on the Mac platform. So what kinds of things are we talking about here? We'll talk about virtual screens first. Virtual screens are a great thing, because when I mention virtual screens to someone, I see a lot of heads nodding — yeah, yeah, virtual screens — and no one really understands what you're talking about, because the virtual screen is this mysterious bucket. It's a parameter for, say, choosing a pixel format, but "I just throw a null in there all the time, so I don't really know what I'm doing" — that kind of thing. And you're not alone; everyone does that.
You know, when I was putting this presentation together, I had to go back and look at the reference material to ensure I had it correct for the presentation. It's one of those things you use once and don't care about, but there are some powerful things you can do with virtual screens, so I'll talk about those in a minute. Pixel formats: I think anyone who has used OpenGL knows and understands the pixel format, but we'll talk a little bit about what the pixel format really is, how you can think about it, and how you can use it to your advantage in your application. Contexts: the context is basically the state bucket for OpenGL — if you've used OpenGL, you understand about contexts and controlling the rendering. And finally we'll talk about drawables and how to think about drawables. These four things are the only real OS-dependent data you need to know: the virtual screen talks about hardware and what renderer you're using; the pixel format is basically a specification list for your application to say what you want as far as capabilities; the context is OpenGL state and your rendering target; and the drawable is where you're actually drawing the pixels to. That's really the top level — that's all you really need to know — but let's talk about the details and pick up some more. If you remember nothing else in the presentation, remember that, and I think when you look at the APIs, understanding more about how these pieces work and fit together will make them make more sense.
Just one note on this: I'm going to use CGL calls for the presentation, just for consistency's sake, so I don't have to list multiple API versions up here. But this applies to AGL or NSOpenGL equally; I just chose to use CGL for consistency in the presentation.
So let's talk about the data flow — again, something people may not understand. We'll start with the application. In this case we have a slightly different color application, but it's still the same application. What is it going to do first when it uses OpenGL? It's going to create a pixel format. The pixel format is this thing you give a little thought to: you give it some specifications and you're done, right? You don't need to worry about it. Well, what does that actually do? It carries two pieces of information that are important. One is your renderer attributes, and those determine your virtual screen list. The second is your buffer attributes, which determine your surface definition. Buffer attributes are things like: do I want a depth buffer, how deep are my color buffers — it basically makes a definition of what you'll accept for a surface.
For example, if you say "I want a depth buffer" and a renderer doesn't happen to have a depth buffer, that's going to prevent it from being selected for your renderer list. You can specify things simply — like accelerated versus software renderer — or you can pick a particular vendor. If, for example, you know that your application only runs on an ATI system, you can pick ATI, or the same for NVIDIA if it only ran on an NVIDIA system: you can actually specify in your pixel format that you'll only accept an NVIDIA or an ATI renderer. Those are renderer definitions, and what they do is build the list of possible renderers, which are basically the possible virtual screens. Then you build a context. You need a pixel format to build the context — the context basically attaches to the pixel format, and the pixel format transfers this information into the context. When you think about it this way you can say, aha, now I understand why, when you share attributes and share things between contexts, they have to have basically the same virtual screen list: the virtual screen list in the pixel format defines which renderers can be used. So, for example, if you want to share textures, you have to have renderers that are compatible — a list of renderers such that the contexts can be rendered in the same places.
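In CGL terms — to match the calls used in this session — the pixel-format-then-context flow looks roughly like this. A hedged sketch only: the attribute choices are illustrative, it is Mac OS X-only, and error handling is abbreviated.

```c
/* Sketch: build a renderer/buffer specification (pixel format), then a
   context that inherits its virtual screen list. */
#include <OpenGL/OpenGL.h>
#include <stdio.h>

int main(void)
{
    /* Buffer attributes narrow the renderer list: asking for a depth
       buffer excludes any renderer that cannot provide one. */
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,
        kCGLPFADoubleBuffer,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;

    if (CGLChoosePixelFormat(attribs, &pix, &npix) != kCGLNoError || !pix) {
        fprintf(stderr, "no matching renderer\n");
        return 1;
    }
    printf("%d virtual screen(s) in this pixel format's list\n", (int)npix);

    /* The context is created from the pixel format and carries over its
       virtual screen list; a shared context would have to be compatible. */
    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL /* no shared context */, &ctx);
    CGLSetCurrentContext(ctx);

    /* ...attach a drawable, render... */

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
```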
Moving forward, a drawable comes in. A drawable can be anything: it can be a window, an off-screen, a pbuffer, a full screen. Drawables aren't special — set pbuffer, set drawable, those are basically the same kind of call: they set the information the drawable has into the context when you attach it. A drawable contains surface dimension information, and what happens when you attach it is that now, instead of having a list of possible virtual screens and definitions of surfaces, you actually create surfaces. When you attach, hardware surfaces are created, and those surfaces are created from both the pixel format information and the drawable size — that's when your buffers are allocated. When you actually set the drawable, you also set a current renderer: that takes the virtual screen list, looks at the characteristics of the drawable — whether it's on-screen, off-screen, or whatever — and then selects an appropriate renderer.
For example, if you attach to an off-screen, it'll select the software renderer; or if you attach something that's on a monitor powered by an ATI card, it'll select an ATI renderer. That's the process — attaching the drawable does that for you. And then finally, let's look at the actual flow of data through it. You take the application: it makes an OpenGL call, which goes to the context. The context contains your state, and it applies that state; the context also knows which renderer it's going to use, so it sends the commands to a specific renderer, which then draws into the target surface, and you see it on the drawable. This is kind of a logical definition — the actual data flow is not quite like this — but you can see logically how it works and how you can take advantage of it in your applications.
So let's go back and talk about the specifics. When we showed the virtual screen, it's a renderer associated with the pixel format. The reason I'm saying it's associated with a pixel format is that virtual screen numbers are not absolute: a pixel format just has a list of them — 0, 1, 2, 3, 4, or whatever. If you have another pixel format, it also has a list — 0, 1, 2, 3, 4 — but it doesn't mean that 0 and 0 are always the same one. That's an error someone can easily make: they will likely be the same one, but there's no guarantee between two pixel formats that a virtual screen of a certain number is exactly the same virtual screen. So a pixel format may have multiple virtual screens, and you can find out how many virtual screens — how many renderers — a pixel format has through the virtual screen count. Normally, anyone out here with a PowerBook who wants to run their OpenGL application will see, when they call choose pixel format and look at the list, probably two: they'll probably get a software renderer and a hardware renderer. We're going to prefer to use the hardware renderer, but you also have the software renderer in the list — probably two, in most cases.
will screen the screen and is actually
current which is cricket virtual screen
call will give you that it's actually
associated with the current renderer so
if your current virtual screen changes
your renderer has changed that's a key
That's a key point here, so let's think about this. If you're in an app and you have a window on one screen, you drag it across to the other screen, and you look at the virtual screen as you drag it, you'll see a virtual screen change. That means you actually changed renderers; that means your capabilities changed, and you should handle that. It also means you're only on one current renderer at certain times; you're only going to reference one certain renderer. And the last caveat there: obviously with dual-headed cards you could have two 30-inch monitors hooked to two GeForce 6800s, and both of those will be on the same virtual screen, so as you drag a window between those two displays you never see a change. Your application doesn't have to do anything; everything seamlessly works perfectly, no work required on your end. So let's talk about some developer notes about this.
I put some things together which may be useful to you in your application. A virtual screen change, as I said, equals a renderer change. To respond to this, check your settings, check your capabilities. You may have just lost, say, the GLSL shading language: you had it on one card and didn't have it on another card, and losing that capability would actually change the path you want to take in your application.
Sharing contexts: there's a tech note, a QA, on this; you might want to look at that if you're sharing contexts. Contexts can share, but they must have, let me get this right, pixel formats with identical virtual screen lists. So what does that mean? You can create multiple pixel formats, use them to create contexts from different pixel formats, and share those contexts, but they had better end up with the same virtual screen lists. The best way to do this is one of two ways: either use the same pixel format, which is guaranteed identical to itself, or, the second way, actually specify all the attributes that control virtual screens, like renderer ID, software renderer, accelerated, no recovery, those kinds of things, in your pixel format, and make sure they're the same between the two pixel formats you're using. That's another way of getting identical ones.
One thing you may want to do is get the current renderer ID. Some people want to look at their application and say, I want to know what it is, or I want to give the user some feedback that they're rendering on the ATI 9800 card. For this, you can get the virtual screen, and if you keep the pixel format around, you can then do a describe pixel format for the renderer ID, and the renderer ID will match the renderer IDs that are listed in our headers; they'll tell you which renderer you have. So if you wanted to specially code some handling in your app for different renderers, you can just check the renderer ID and understand which renderer you have. The caveat here is that you need to keep the pixel format around.
So let's move to pixel formats now. Now we've talked about screens; we'll talk about pixel formats. A pixel format is basically the OpenGL renderer capability, buffer, and buffer depth specification. We talked about that already, and I've covered a lot of this stuff, so I'm not going to spend a whole pile of time on the slide; I'll spend more on specific things you can do in your app. So, some tips on selection of attributes. If you require an accelerated renderer, if your application wants to say I don't want the software renderer, what do you need to do? You need accelerated, and you need no recovery. What this says is: get an accelerated renderer, and secondly, don't give me a software fallback. If this succeeds you have an accelerated renderer, and there's no case where you ever go back to software; in that case you'll just stop rendering. So if you only want an accelerated renderer, use this. If you want to force the software renderer, for example as a test, that's a good test for debugging: is it a renderer problem, a problem with an ATI card or an NVIDIA card, or is it a problem in the OpenGL stack and framework? In this case you can set the renderer ID to the generic renderer, and that forces the software renderer, forces the software path. It's a great test for your application to figure out whether a problem lies with us or maybe lies with your application.
Rendering to system memory: there's been a little confusion about off-screens versus pbuffers. We talked about pbuffers a little at the beginning, and I'll talk about pbuffers again, but off-screen has been around forever in the attribute list, and I think it's been around in other operating systems' attribute lists forever too. Off-screen, you can just say in your mind, equals software: the software renderer and system memory. It's always going to be the software renderer, and it's going to be system memory. So if you want to render to system memory, not to VRAM, use off-screen. Think of it as rendering off the screen into RAM, not as accelerated off-screen rendering. What else have we got?
Minimum policy, maximum policy, closest policy: what the heck do those things do? Basically, they're referring to buffer depths, the color buffer depth and the depth buffer depth, the depths of those actual buffers. Minimum policy says the information you give is the minimum allowed: if I want a 32-bit depth buffer, minimum policy with 32 bits means don't give me a 16-bit one, I absolutely want 32 bits, I won't run without it, and I'll fail creating a pixel format otherwise. Maximum policy is slightly different: maximum policy says prefer the deepest available, but there's no limitation; it's not a limitation saying this is the minimum I'll take for buffer depths. And closest policy again changes the choosing behavior to select whatever is closest to what you specified: no matter whether what you specify is an exact match or not, it'll find whatever's closest to what you're looking for.
used we're not deprecating these things
but this is stuff that day has been held
around since the beginning of OpenGL on
that on Mac OS and so backing store
pretty much everything turn our backs
OMP save all of our render is going to
be MP say for a bus multi screen and
compliant we all all of our OpenGL
renders now are compliant so those
things we don't normally need to specify
So, contexts. I think of a context as a big bucket of OpenGL state. A context moves around, and your state moves with it; if you change contexts, you're basically changing your rendering state. If you modify state in the context, you're throwing the state into the bucket, and it stays in the bucket until you take it back out or change it. So: state machine, big state bucket, context. The context is then going to use that state with your commands to draw something. You send a draw command; it looks in the bucket and goes, okay, I have this state and this state, I'll draw that pixel like this. So think about it that way: a context is basically a big state bucket. Also, everything that you send to OpenGL goes to the current context; it's not magical. So if you don't know what your current context is at some point, be careful. If you have a multi-threaded app and you're changing contexts on multiple threads, be careful, because OpenGL is going to use what it thinks is the current context. Some of the mistakes we see, especially when people are doing multi-threading, are assumptions about what the current context is. That brings up the threading notes. We recommend that you use a single thread to issue OpenGL commands. It's not a requirement; we recommend it because we see a lot of issues with people who spread OpenGL commands across threads, don't synchronize correctly, and have contexts changing behind their backs, so to speak, and then they reference an object, like a texture object, that may not be current or may not exist in the context that's now current, which they weren't aware of. If you do want to issue from multiple threads or to multiple contexts, use standard threading controls: mutex locks, critical sections, whatever you need to do, use your standard threading controls. Never use OpenGL to sync. Test fence, wait fence, copy whatever, read whatever: these are not synchronization mechanisms. Using OpenGL commands to block in one place and then issue commands somewhere else is not what you want to do.
So let's talk about something new in context parameters. One thing we've added for Tiger is back buffer size control, and what this will allow you to do, especially for people with a video application, is handle the case where they know the content is only a certain size: I know that in this case I have 720 x 480 content, it's never going to change, it doesn't matter what the window system does, it doesn't matter how big the person drags the window, it's 720 x 480 content. There are a lot of applications that may have an image or something in the back buffer that is sized to the source material, not to what the application window is doing. You can use this parameter, surface backing size: you enable it, and it allows you to use a fixed-size back buffer, where we'll scale the image on swap automatically, with a variable-size frame buffer image. So when you drag something larger in the frame buffer, it doesn't mean all your buffers in the back are changing. That can be really helpful, especially in a video application or when you're source-centric.
I'm going to walk through some more context parameters really quickly. Retrace synchronization: the swap interval, so if you need to sync to the display retrace, the swap interval is the key. Surface opacity, transparency control: we've shown this before; I think there's a BoingX demo on the sample site, and it shows you surface opacity and transparency control. You can make OpenGL render above something on the desktop; you actually have a window there, but you're not seeing the actual window itself. That's something you can use. And drawing order: you can order the drawing above or below the actual window surface. So you can use those for combining OpenGL with some other OS features.
So let's talk about drawables. A drawable is actually pixels, in RAM or VRAM; a drawable is going to be the window, the pbuffer, the off-screen, or the full screen. Again, think about this as one kind of common thing: they all behave basically the same way. There are a couple of different routines you need to use for them, but it's all basically the same behavior. One thing I want to highlight here is actually creating a drawable from a display. In this case, in this little code snippet, we make the CG call to get the OpenGL display mask, and that gives us the mask for the main display ID, for example, in this case. So what we're doing is asking: what is the OpenGL display mask for our main display? We then use that in our pixel format attributes to basically say that's the display we want to render on. You can see I use the display mask attribute here, with the display mask that I got from the return of that function call, and then you call choose pixel format. What that basically does is set that as your display mask, your single display for rendering, so you can control which display it is. And, for example, you can use this little trick to actually get information about a display before you're showing a window. No window is created here: you can create a context, set the context current, and make OpenGL calls like get string and those kinds of things, and they'll actually get you valid data without ever having a drawable or a window.
So let's talk about the interfaces, and the good news about interfaces this year is that we've covered them before. You can look at the previous sessions, and we have some documentation on the web for that; we have QA 1269 on the OpenGL interfaces, so I'm not going to spend half an hour on interfaces. CGL, we've talked about that a little bit: Core OpenGL, the lowest-level interface, and the basis for AGL and NSOpenGL. It's full screen only; there are no windowing system connections there, so you can't draw to a window or attach to a window, but you can do things like pbuffers with CGL. For context setup, see last year's presentation; there are a lot of samples on the web, and in this case the GL Carbon CGL fullscreen sample is on the web right now. You can download that, and it goes through all the setup for a CGL application. AGL: AGL is the Carbon interface, and there's no reason to think of AGL as some strange thing. How does AGL fit with everything else? AGL, Carbon, good. It has windowing and full screen support, so you can do windows, obviously, with Carbon, drawing to an existing window, or you can use AGL full screen, which will do the full encapsulation and capture the screen for you. Again, same thing: see last year's presentation, or the GL Carbon AGL window sample as an example of how to do AGL setup. I don't want to spend half an hour going through things here that I think you all can work through, ask questions about, and read the documentation on; I think we've covered that stuff well previously.
NSOpenGL: a little bit more depth here. There are two ways to handle NSOpenGL: one way is kind of the NSOpenGLView, and one way is using the pixel format and context more directly. Why do you do one or the other? That's one of the things that people may not understand. You're in Interface Builder and you have the NSOpenGLView; well, do I really want to use that? I'm not sure. What you do is look at what you need from the relationship between a context and a pixel format. For NSOpenGLView there's one pixel format and one context, set up behind the scenes, and you can't control that. If you have a different kind of relationship, if you need multiple contexts or multiple pixel formats, or you change some things up, you may want to do a custom subclass of NSView. The good news, again, is there are samples on the web to do both of these things, so you can see how to make a custom subclass; it should be basically skeletoned out for you, and you can use that just as easily as you would NSOpenGLView. If you look at that particular sample, the custom Cocoa OpenGL sample, there's a file in there that is basically the equivalent functionality of NSOpenGLView in one particular .m file, and what you can do is modify that as you need for your custom class. It basically brings you back to the point where you'd be with NSOpenGLView. Now let me talk about GLUT for a minute, excuse me.
What GLUT solves is cross-platform source compatibility. If you want a full app that runs without having to do anything between the OSes, Windows to Unix to Mac OS, and also Linux, I shouldn't leave out Linux, you can use GLUT, and GLUT contains a windowing system and an event system. That's the good news. The bad news about GLUT is that it's fairly old in design, it's a number of years old now, and it's actually fairly limited in what you can do. There are some amazing apps that have been done with GLUT, but I think if you talk to those authors, at some point you're almost fighting upstream against what GLUT wants you to do. Great for prototyping, great for an application that can work within the constraints of the API; probably not great for an application with a full GUI that wants to do some of the more complicated user interactions. There are things in GLUT that are specific to Mac OS, which is why I want to talk about it here. There's GLUT sample code, as I mentioned, and in the GLUT sample code you can go to the readme, and it goes through all the things we've added to GLUT on Mac OS. Some of the highlights: use extended desktop, on a per-application basis, which means you can actually have a window that spans the full desktop, and you can do that for something like a full screen. We have a developer doing stereo applications by having a window stretched across two displays, using two projectors, putting two screens up, with one frame of the stereo image on one screen and one frame of the stereo image on the other screen. So as far as the video card is concerned, it's seeing one image rendered out to two projectors; from the user's standpoint it looks like two separate projectors and a stereo image, and it works great. From the development standpoint it's just one big image with two frames rendered side by side. Use Mac OS coordinates: you can use negative coordinates when specifying windows, where GLUT normally would clamp to 0,0, so this allows you to move windows on a desktop with negative coordinates. Capture single display: if you have a GLUT application that you only want on one display, full screen, you can use that. And stereo: stereo support is in GLUT. We have UTF-8 string support for input, so you can use foreign keyboards. And we have an atexit handler, so if you have to do cleanup: GLUT normally exits without telling you about it, but you can set up the atexit handler so it calls your code prior to GLUT exiting and you can do your cleanup. Again, the GLUT basics sample code is on the web, so you can look at that for the basics of using GLUT. So, on to the final section.
The final thing I'll talk about here is some techniques for using Mac OS, some things that again may not be obvious, may not be clear, and some of the new things that we have for Tiger. I'm going to talk about detection of functionality, OpenGL macros, some texture loading stuff, context sharing, and finally pixel buffers and some remote rendering functionality. I think I've gone through detection before, and there are some great samples for this, and there's a tech note specifically on this. The key for detecting functionality, and this is kind of my soapbox, is: don't ignore it. You normally see texture rectangle, for example; most cards other than the Rage 128 have support for texture rectangle. But don't ignore the fact that some user might be running your application on a Rage 128. So check for that functionality: look at the extension string, use gluCheckExtension to make sure it's supported, and understand what functionality needs to be supported for your essential features. For example, we talked about the OpenGL shading language: ARB shading language 100, that's version 1.00, and the 100 is in the name of the extension. If that extension is not available, then the shading language will not be available; and once you see that extension, you can expect the shading language to be available. The same goes for all of the new functionality: if you check it before you use the functionality, it should not be a problem for you.
OpenGL macros: again, something that some people are not aware of. The Mac OS PowerPC architecture has a slightly higher overhead per function call than an Intel architecture. So does that really matter to you? Well, for most applications, probably not, from the OpenGL standpoint. But if you're using immediate mode, you're really pounding the API hard, and you're making millions of function calls per second, this may be a significant portion of your overhead. We've provided you with the CGL macros, which eliminate one level of function calls, and for applications that really are in immediate mode, actually sending a lot of function calls to OpenGL, this can be a significant speedup. Other folks who are using vertex arrays and display lists, and actually don't have a high call frequency, may not see any speedup at all from this. We'll discuss more specifics about the actual use of it in the optimization session. A couple of things you need to know here, though, as far as using it in your application. You now need to track the current context: we eliminate the idea of looking up the current context inside the macros, so you need to track it yourself and understand what your current context is, and that means you have to ensure your threading issues are handled. You can use the CGL or AGL macro header: the CGL header for NSOpenGL, and the AGL header for Carbon. The key to understanding this is very, very simple. There are two variables, one for AGL and one for CGL, that are defined in the macro headers: one is agl_ctx and one is cgl_ctx. You define these to be equal to your current context. What the macro header does, if you look at it, is substitute these into an indirection: it will directly use that context to look up a dispatch table and call into the OpenGL function. So once you include the header and set that variable, you don't need to do anything different. What that means is: you include the AGL macro header, you've now set agl_ctx to be myContext, which is your current context, and, amazingly enough, the rest of your code is the same. Not everyone here needs to do this, but if you wanted to, you could probably put this into some of your OpenGL code within a few minutes.
Again, understand what your current context is, because normally we track the current context for you, and when you change it here, you need to make sure you update the variable. Now let's talk a little about texture loading in Cocoa.
One thing we talked about is how you need a bitmap representation to make a texture, and it may not be obvious how to do that from Cocoa. So I'm going to go through getting a bitmap image rep from a view and then texturing from that image rep, and at the end, after that, I'm going to talk about Image I/O. Image I/O is a new thing for Tiger and allows you to do some really cool things. Basically, as I walk through this, I'm going to talk through the top bullets, and at the bottom is the code for reference. By the way, there's a Q&A on this, so there's no need to jot down all the code instantaneously; just listen, and look at the QA, which is on the web, after the session. If you have NSView contents you want to use, anything you want that's in a view, what you can do is allocate an NSBitmapImageRep and initialize it with the focused-view-rect initializer: you have your view focused, you initialize it, and you provide the bounds. What that does is basically give you a bitmap representation of that image source. You then move along, and you're going to texture with that. To texture with it, you need to set your pixel store state. You need to ensure the row length of the texture is handled, so I use the unpack row length, and you also need to ensure the alignment: a lot of times you may have RGB data, for example, just RGB without the alpha, and you want to make sure that you don't have a 4-byte alignment implied there. Then you're going to generate some textures, bind that texture, and set the texture parameters. The parameter I'm going to set here is the min filter, to linear in this case. The texture range is kind of superfluous in this particular case, but realize that we're dealing with a single image without mipmaps, so you always want to make sure you don't have a mipmap min filter parameter set. Then finally, one thing I do is use the bitmap to get the samples per pixel, and that tells me how many samples, three or four; basically, in this case, that means I'm handling RGB or RGBA, and I just texture from it. It looks a little bit complicated, but basically all we're saying is: use either RGBA8 or RGB8 for the internal format, and use either RGBA or RGB for your texturing parameters. It's pretty simple. This is done in, you know, maybe 15 lines of code total; it's written up in a QA, and you can just drop this into an NSView to grab a texture from almost any view or any bitmap image rep. So it's a really strong way to simply get that information from an NSBitmapImageRep.
Image I/O: new for Tiger. There's an Image I/O session directly following this session; I'm not sure exactly what room it's in, but look at it, because Image I/O is pretty cool. There are some things that Image I/O handles, like floating-point images and HDR, high dynamic range, images, and what that allows you to do is have one single source, in Cocoa or Carbon, to get images that you may not have been able to handle otherwise. It kind of replaces the QuickTime image importer, if you want to think about it that way, and allows an even larger set of images. So, for example, you can do things like load a floating-point image into a floating-point texture and use that, you know, to draw into a floating-point back buffer with a shader, and you have a floating-point path from the image to your final destination without ever having an RGBA integer step in there. So I mean, this is really powerful: it allows you to do great manipulation with data, great use with high dynamic range images, and those kinds of things. What you're going to do here is just new API, and I'm going to cover this reasonably quickly. There will be a DMG available for the session tonight, and the sample using this will be on there, so you can grab it and look at the code then. But basically, what you're going to do is create an image source: you're going to use a URL to get an image, and you're going to create an image at index 0 in this case, so basically you'll get an image for the first image from that URL. We're going to get some information about it, the width and the height, set a rectangle, and also allocate some data based on this rectangle. So now we have the image, we have information about it, and we've allocated a rectangle big enough. Now what we're going to do is make sure we're covering color matching, which is another great thing Image I/O does: it maintains color information for the image, so you can make sure your image is color-correct coming in. Excuse me. We're going to create a bitmap context, we're going to draw the image into the context, and then we're going to release the context. So now we have the bitmap data, and if you look at this, all the way down at the end at the tex image 2D, there's that data parameter I created. What that means for you is that all you need to do is allocate the data: you basically get the information from the URL, set up a few things, draw it, and then you can texture from it just like we did before. This is basically exactly the same kind of texturing code you saw before, real simple. We've got eight lines of code for Image I/O, and then the texturing code is six lines of code. And I mean, this supports things like OpenEXR and those kinds of things, so it supports images that you would have spent a lot of time making sure you handled correctly. A great thing to look at, and I encourage you to go to the session after this session.
Context sharing. A couple of final things about contexts, and we talked about this before. Remember, this is where the virtual screen lists have to be the same: you have to be using the same set of renderers to share contexts. You share objects in the contexts: the textures, vertex programs, fragment programs, display lists, vertex array objects, and vertex buffer objects, those kinds of things, are the things you share. The state of the context itself is not shared. You're not sharing what your current color is, you're not sharing what your texture coordinate settings are, and those kinds of things; you're only sharing the actual objects and their state parameters. As far as the virtual screens configuration, we talked about that. One thing you can do is use a single pixel format to create your contexts, or create a single shared context initially and build all your other contexts out of that, so you have one you've captured and you build everything else out of it. Those can be thrown away: the sharing is peer-to-peer, so you can change things around, and the last thing you do in your application can be to throw away that initial context you created. And here's a simple code example: we create a pixel format and a context, we then create a second pixel format using the same display, and in this case you can see, with the AGL context, we create the second context shared with the first. Pretty simple. The context sharing tips Q&A covers all this; it's a good reference.
so last thing I think I were just on
time here is pixel buffer so pixel
buffers Apple open apple pixel buffers
extension string it works very much like
Windows pixel buffers with some slight
changes because the window in the
Windows operating system obviously is
not the same as Mac operating system and
some of their
pieces there HTC's and those things
don't fit quite with our concepts and
pixel offer so we uh we modified it but
have the basic same logical instead of
but just have some different different
data calls out basically
hardware-accelerated off-screen
rendering we talked about that already
remember talking about drawables it all
the same you're attacking it just like a
drawable so in this case you're in your
set p buffer use septi buffer as you is
a call and that's just like set drawable
also one thing that's new here is the
support for remote rendering you can
actually use pixel buffers remotely and
render on a different machine so for
example let's say you have an application
that wants to do a render farm or wants
to render on multiple machines and
gather the information back up what you
can do is attach to a different
machine SSH in and run an application over
there no one needs to be logged in there
doesn't need to be a monitor attached to
the machine you can render using the
hardware acceleration and then you can
retrieve the image and do whatever you
like with it I'll demonstrate that in a
minute so moving on finishing this
out basically what the pixel buffer is going
to allow you to do is render to
something and directly texture from that
without having any extraneous copies in
there so the call you'd use there
glTexImage2D well in this case
it's CGLTexImagePBuffer same kind of
call if you look at it it's almost
exactly the same set it as a drawable and texture
from it just like you texture from a
texture and to talk about sharing object
resources and state can be shared with
the p buffer it can be shared with a
context on a window drawable fullscreen
drawables can't be shared with p buffers
at this time we have a CGL reference and
also again on the disk image which is
going to be up on the seed site for WWDC
tonight there'll be preliminary
documentation for how to set up p
buffers that covers all the setup covers
all the API and covers some
pseudocode that I'm about to go through
so I'm going to walk through this pixel
buffer usage again this is covered in
the documentation that you can readily
access but basically you create the
context and pixel format we've talked
about that you're going to create the p
buffer which is just like creating a
window then you're going to set the
p buffer as a drawable using
CGLSetPBuffer again setting a
drawable same as everything else to
render on the p buffer you set the
current context with the p buffer's context
same thing you do normally for rendering
then you draw with OpenGL
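the setup steps just listed, sketched with CGL calls from Apple's OpenGL framework; the pbuffer size, texture target, and internal format here are just example values, and error checking is omitted.

```c
#include <OpenGL/OpenGL.h>   /* CGL */
#include <OpenGL/gl.h>

CGLContextObj     ctx;
CGLPixelFormatObj pix;
CGLPBufferObj     pbuffer;

void setupPBuffer(void)
{
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,
        kCGLPFAPBuffer,            /* pbuffer-capable renderer */
        (CGLPixelFormatAttribute)0
    };
    GLint npix, screen;

    /* create the pixel format and context, as discussed earlier */
    CGLChoosePixelFormat(attribs, &pix, &npix);
    CGLCreateContext(pix, NULL, &ctx);

    /* create the pbuffer -- just like creating a window */
    CGLCreatePBuffer(256, 256, GL_TEXTURE_2D, GL_RGBA, 0, &pbuffer);

    /* set the pbuffer as the drawable, same as any other drawable */
    CGLGetVirtualScreen(ctx, &screen);
    CGLSetPBuffer(ctx, pbuffer, 0, 0, screen);

    /* make current and render with plain OpenGL */
    CGLSetCurrentContext(ctx);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw ... */
}
```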
to set up the texturing again you're going to
create a texture object you're going to
bind to the texture object standard
texturing stuff you're going to set
the texture parameters and then you're
going to create the p buffer texture
which is CGLTexImagePBuffer instead
of glTexImage2D
so again create bind set
parameters and the only call that's
really different here from normal
texturing is that you're actually going to
use the p buffer as the texture source
then you're going to draw with the p buffer
texture you're going
to bind to the p buffer texture
object enable the appropriate texture
target with glEnable and draw
primitives with appropriate texture
coordinates so again you're going to set
your drawable and then you're going to
draw
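the texturing steps, again as a hedged sketch, assuming ctx and pbuffer are the CGL context and pixel buffer created earlier; GL_FRONT as the source buffer is just an example choice.

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

extern CGLContextObj ctx;       /* created during pbuffer setup */
extern CGLPBufferObj pbuffer;

GLuint tex;

void textureFromPBuffer(void)
{
    /* standard texture object setup: create bind set parameters */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* the one call that differs from normal texturing: source the
       texels from the pbuffer instead of glTexImage2D -- no copy */
    CGLTexImagePBuffer(ctx, pbuffer, GL_FRONT);

    /* then draw with it like any other texture */
    glEnable(GL_TEXTURE_2D);
    /* ... emit primitives with texture coordinates ... */
}
```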
and the destruction is pretty much
the reverse of the creation we're going to delete
the texture object set the current context
we're going to destroy the p buffer
destroy the context and the pixel format
and finally set the context to NULL one
thing in here that's interesting is the fact
that I'm destroying the texture object
first it's not required but it's a good idea if
you don't destroy the texture object first
and you've actually destroyed data that
the texture object references by
destroying the p buffer it can cause
crashes in your application it would
be illegal to use the texture
once you destroy the p buffer but I
do it this way it's kind of
a safety thing there's no reason that
that texture object should persist
after you've destroyed the p buffer
so doing it this way will save you some
trouble later on
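the teardown order described, as a sketch, assuming variables named ctx, pbuffer, pix, and tex hold the context, pixel buffer, pixel format, and texture object created earlier.

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

extern CGLContextObj     ctx;
extern CGLPBufferObj     pbuffer;
extern CGLPixelFormatObj pix;
extern GLuint            tex;

void teardownPBuffer(void)
{
    /* delete the texture object first -- not required, but if the
       texture still referenced pbuffer storage after the pbuffer is
       destroyed, using it could crash the application */
    glDeleteTextures(1, &tex);

    CGLSetCurrentContext(ctx);
    CGLDestroyPBuffer(pbuffer);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    CGLSetCurrentContext(NULL);
}
```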
now I've talked about
headless and remote rendering a bit
remote p buffer is an additional new
pixel format attribute for the pixel buffer
drawable on the
remote machine you don't log in
a monitor is not required
you SSH to the remote machine the
reason we have SSH in place is to
maintain security we just don't want
someone to be able to render on your
machine or use a copy-pixels call to get
information from the machine so you
authenticate like you
would normally and then run the application
on the target machine using the remote p
buffers and finally you're going to retrieve
the content over scp as appropriate
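the remote case only changes the pixel format request; a sketch, assuming the kCGLPFARemotePBuffer attribute name from Apple's CGL headers.

```c
#include <OpenGL/OpenGL.h>

/* same setup as the local case, but request a remote-capable pbuffer */
CGLPixelFormatAttribute remoteAttribs[] = {
    kCGLPFAAccelerated,
    kCGLPFAPBuffer,
    kCGLPFARemotePBuffer,      /* usable with no user logged in and no display */
    (CGLPixelFormatAttribute)0
};
/* then CGLChoosePixelFormat / CGLCreateContext / CGLCreatePBuffer
   exactly as before; run the tool over ssh on the target machine and
   copy the rendered images back, e.g. with scp */
```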
and we're into a demo so we're going to
bring up demo two I believe if I got the
right one yes demo two so the only
reason I'm showing you
this is because this is the images
folder and the images folder is empty
so this is my target machine I'm going
to log out of this machine and then go
over to demo one let's wait for it to log
out so I'm logged out no users
on there we wouldn't even need a display
attached actually let's log into the demo one
machine or switch over to the demo one machine
here okay so now we're on demo one and
what you see is I have a terminal here and
I'm going to SSH into that machine
and so now I'm at the
base of the other
machine's directory and what you'll
notice the piece on it I care about
is an application called remote renderer
which you'll notice is in the application
list so I'm just going to run that
the whole application is available as
sample code
let me make it bigger is that better so I'm going to
run that application remote renderer on
this remote machine so now I've logged
into it run the application it's created the
p buffer it's starting the rendering
it's running a number of frames I'm waiting
for it to render the render is complete and
now I'll exit from that machine so
I've run my application I go
over to the other machine back
to demo two still logged out so we
log back in and see what happened here
so now this was the images folder this
is where I started out and you notice I've
rendered about 120 images here no
exciting content but you'll see the
results of this rendering
so basically I just rendered the kind of
standard spinning cube thing remotely
nothing on the display was touched
no other windows needed to
appear you just do this so you could set
up a render farm with a number of
machines and you can just render content
on those machines and then retrieve it
back put it into a movie or put it through
image processing all through the remote
rendering API it's a really good API to
extend your rendering capabilities
beyond the machine that you have now we
can go back to slides and get ready to wrap
up so what did we talk about we talked about an
update OpenGL is in continuous
improvement we're
continuously trying to get updates to
you you'll see them in software update
you'll see new features we really want to
focus on quality going forward the
architecture multi-client multi-headed
virtualized resources a lot of
folks are using it we're stressing the
system pretty hard that's good for us
that sets the quality bar pretty high and
you guys can take advantage of
the fact that we have
this virtualized system with OS dependent data
we talked about some specific things
virtual screens which are renderers pixel
formats contexts and drawables the things
you need to know
and finally
in that section we talked about
interfaces CGL NSOpenGL GLUT AGL the
interfaces you need for writing
applications and we talked about some
functionality some things that you might
need to write applications things
that are new so new on the
WWDC seed I believe is a new updated CGL
reference which covers pixel buffers
it's all there there's also the session
disk image which has information for you
also I want to point you to some things
for tomorrow and later the image
session which is following this today
tomorrow at ten-thirty is the
optimization session tomorrow in the
afternoon after lunch is a
second optimization session where we're
talking about using our tools which is
really great for folks who haven't used our
tools and Friday morning is the
GLSL OpenGL shading language session
talking about the OpenGL shading language on
Mac OS X