WWDC2004 Session 215
Transcript
Kind: captions
Language: en
Good afternoon, and happy Canada Day. Thank you for coming. I'm Tim Cherna, and I manage the QuickTime video foundation team. Together with my team and the Quartz effects team, we're going to talk about new directions in QuickTime performance, which is really about the integration of QuickTime and Core Video on Tiger. We're going to talk about a new video pipeline we've been working on; it lets you take advantage of the GPU for video acceleration and video effects. We'll show you how to adopt Core Video for high-performance rendering and timing, and we'll show you how to move away from some of the QuickDraw data structures that QuickTime uses. Today we're going to start out with some simple ways to take advantage of the new pipeline, using HIMovieView for Carbon developers and QTMovieView for Cocoa developers. We'll also show you how to customize your pipeline using visual contexts and Core Video, and then finally how to integrate this with OpenGL and the new Core Image.

Before I start, I want to talk a little bit about how QuickTime did rendering before Tiger. Movies would render into GrafPorts and GWorlds: GWorlds would be in off-screen memory, and GrafPorts would be on-screen windows. QuickTime could use a transfer codec to go from an intermediate pixel format to a final one, and for certain formats we could accelerate the decoding in hardware. But it's important to note that if we were going to an off-screen GWorld, the movie would always play synchronously, so you wouldn't benefit from the asynchronous scheduling in QuickTime. That stack looks like this: you have the movie on top; underneath that it uses the Image Compression Manager and a primary codec, like DV or MPEG-4, to either go to the destination directly or go through a transfer codec, and in some cases it would accelerate that by going directly to the hardware. One thing to note is that the entire stack, from the movie down to the final destination, is owned by QuickTime; there wasn't really any layering.
One of the challenges we had with that architecture was that it was really hard to play a movie into OpenGL. Well, it's easy to do a bad job — it just wasn't efficient. About two years ago at WWDC we showed a solution: write your own custom transfer codec to get the drawing notifications from QuickTime, so you could synchronize it with OpenGL. But it only worked for certain movies — movies with a single video track and certain specific codecs — and it wasn't very efficient, so that didn't work so well. So we decided to build a new video pipeline in Tiger, and we had some goals. The first goal was good integration with OpenGL; we wanted to support multiple buffers in flight; we wanted to separate the decode and presentation logic; and we wanted to layer this on top of the Cores. This is basically the diagram that we showed in the graphics and media overview — the Cores being Core Graphics, Core Image, and Core Video — and I wanted to show you how we're going to build this pipeline, so we made a slightly more colorful version of it.

How do we build the pipeline? We want to get this movie to the graphics hardware. The first thing we do is use QuickTime to source and decode the data, like we've normally done. We've chosen OpenGL to do the rendering to the display, and we've created these visual contexts as a structure to source textures from the movie for downstream rendering. We use Core Video for timing and buffering services, we can use additional OpenGL functionality for transformations of the images as they go through, and of course we can add Core Image for image-based effects. These last two modules I've shown are where your application can customize the effects pipeline; this is where you as developers can take advantage and do whatever you want to do, and we'll show you some of that coming up. So this is the architectural stack that we've built — once again, this is the video pipeline — and the other people on the team will talk about its various parts. To start out with some simple solutions, I'd like to bring up Sean.
Thank you. So, you want to get video in your application — what's the easiest thing you can do? Maybe you don't care about QuickTime or OpenGL and textures and all that; you just want to get your video playing in some window. What's the easiest thing? Cocoa developers can use QT Kit — there was a whole session this morning on how to use QT Kit — and there's QTMovie and QTMovieView. If you use QTMovieView in your application, you get all the acceleration and all the benefits of Core Video underneath, without having to deal with any of this new stuff.
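In code, the Cocoa path is just a couple of lines — a minimal sketch, assuming a QTMovieView outlet named movieView wired up in a nib, and a hypothetical file path:

    #import <QTKit/QTKit.h>

    - (void)openAndPlayMovie
    {
        NSError *error = nil;
        // QTMovie loads the file; QTMovieView handles rendering, timing,
        // and the controller UI on top of the new pipeline.
        QTMovie *movie = [QTMovie movieWithFile:@"/path/to/movie.mov"
                                          error:&error];
        if (movie) {
            [movieView setMovie:movie];
            [movieView play:nil];   // same as pressing play in the controller
        }
    }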
Similarly, Carbon developers can use the HIMovieView. This is a replacement for the Carbon movie controller, and it also gives you all the benefits. Here you can see the movie view takes care of the visual context, it manages the OpenGL rendering, and it deals with Core Video doing all the timing, and so forth. Since HIMovieView wasn't covered in depth in the other sessions, I'm going to go over it quickly here.
This is now the preferred way to put movies in your Carbon windows. It's a full-featured HIView, unlike the Carbon movie controller, so it works in your composited windows, and you can also use Interface Builder and put it in your nib files. And of course it uses a visual context, so you get live resizing and GPU acceleration. Again: use this instead of the Carbon movie controller. Here are some of the APIs. You create one of these with HIMovieViewCreate; you can also use HIObjectCreate and pass it the class ID of the HIMovieView, or, if you're using Interface Builder, create a custom HIView with that class ID inside Interface Builder, and you'll get it in your nib file. Once you have one, you can use the set-movie call and the change-attributes call to change the movie in the HIMovieView. This is a difference from the Carbon movie controller, where it was the same movie from the time you created it to the time you destroyed it — you couldn't change the movie. And there are some attributes you can change at run time: you can turn editing on and off, or tell it to start listening to cut, copy, and paste, and so forth.
So that's how you use it. Since it's an HIView, it makes heavy use of Carbon Events. For example, you can use the control's get-optimal-bounds event, and it'll tell you how big the movie wants to be, taking into account the size of the controller bar if that's visible. Sometimes movies change size, so you can listen for a new event, the movie view's optimal-bounds-changed event, and when you get that event you know to lay out the window again — because the HIMovieView doesn't know how you want your window laid out; it's not going to do that for you.
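A rough sketch of the creation side, assuming the HIMovieView API as it appears in the Tiger headers (check HIMovieView.h for the exact attribute constants on your seed):

    #include <Carbon/Carbon.h>
    #include <QuickTime/QuickTime.h>

    static HIViewRef AddMovieViewToWindow(WindowRef window, Movie movie)
    {
        HIViewRef movieView = NULL;
        // Create the view with a visible controller bar.
        OSStatus err = HIMovieViewCreate(movie,
                                         kHIMovieViewControllerVisibleAttribute,
                                         &movieView);
        if (err == noErr) {
            HIViewRef contentView = NULL;
            HIViewFindByID(HIViewGetRoot(window), kHIViewWindowContentID,
                           &contentView);
            HIViewAddSubview(contentView, movieView);   // composited window
            HIViewSetVisible(movieView, true);
        }
        return movieView;
    }

    // Later, the same view can take a different movie, or new attributes:
    //   HIMovieViewSetMovie(movieView, anotherMovie);
    //   HIMovieViewChangeAttributes(movieView,
    //       kHIMovieViewEditableAttribute, /* attributes to set */
    //       0);                            /* attributes to clear */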
Alright, with that I'll hand it over to Jim Batson, and he'll show us some of these in action.
Thank you. Can we have a demo machine? Great, thank you so much. I'm Jim, and I'm going to show you some simple applications that use the simple solution for showing accelerated video. We'll start off by looking at the Finder, the canonical example that's been used for the overview already. I'll navigate in column view over to someplace where I've got some movies, and you'll see the preview you're used to seeing inside the Finder over here, and of course you can play the movie. I'm going to jump around — here, we'll get someplace more interesting — and you can scrub. One thing to note here is that there's a lot more metadata in Tiger than in Panther about the movie being displayed in this panel: you see the duration, the width and height, and even the codecs and media types being used in the movie, which is a nice addition in Tiger. You can also grow the movie while it plays — this is actually one of the HD movies, so it's pretty large.
This is HIMovieView being used inside the Finder to handle all this, and you can see it's scrolling and updating and doing all the normal behavior you'd expect. Of course this works for movie types other than your normal audio and video: bring up a VR movie — see it here, scroll over — and you can still control it; you have the normal VR controls, and all of that works nicely. Another place in the Finder where you'll see HIMovieView being used is inside the preview pane itself. You'll notice over here that you not only have the extra information I mentioned before in the general area, you also have the movie being previewed, and it plays well there too.
Okay, next I'm going to show HIMovieView Player, which is a very simple Carbon player built on top of HIMovieView. First we'll launch it, and I'll open a movie — we'll start off with a familiar Harry Potter movie. You can just click around and see the controller come up, you can use it, you can make the controller visible or invisible while it's running, and you can make it editable if you want. I don't know if you can see it, but the thumb showing where you are in the movie has changed to the editing thumb. I've also added some other controls out here, just using the normal HIView technique of having a button and sending a message to the view, to implement play and stop buttons. So if you didn't want the controller, you could implement your own UI for controlling the movie.
Something else the HIMovieView supports is focus. I've added a text field — I don't know if you can see it or not, but there's a text field next to the movie. I'm going to blow it up a little bit, using the Command-Option-plus key to zoom in (and minus zooms out). You can see the text field is highlighted; I can tab over to the text field and start typing, and hitting the space bar doesn't do anything special. But if I tab over to the movie view and hit space, it starts the movie, as you'd expect — it supports focus. And Command-Option-minus to zoom back out. Now, you might have noticed that the controls up here weren't being tabbed to. Just in case you're trying that and are curious why it doesn't work on your machine, I'm going to turn it on: I'll come up here and find where to turn on the tab key — I want "tab to move between all controls and windows". Now when I go back to the HIMovieView Player, I can tab over to, say, the controller-visible checkbox and turn it on — at least you can see it's tabbed over. Okay.
One thing to realize is that this isn't just for playing movies, and not only for applications whose main purpose is to play video content; QuickTime also gets used a lot for playing little animations. I'm going to bring up an alarm clock — you might recognize it; it's the alarm clock from iCal — and that animation is... yeah, it's just a QuickTime movie. So now that you've got HIMovieView, it's really easy to add little animations to your Carbon apps. And I want to end this part of the demo with a kind of classic QuickTime movie, to show that other kinds of animations also still play; it's not just the new H.264 or video-and-sound movies. Is there any audio? Okay, well, I'm going to quit it now, because the punchline isn't as good without audio. Okay.
Now I'm going to move over — we talked before about HIMovieView, but there's also QuickTime Player, and QuickTime Player is built on top of QT Kit. I'm just going to show you QuickTime Player briefly by bringing up a movie; we've seen this QuickTime Player several times this week already. I just want to show you resizing up and down — it's taking the power of the visual context and exposing it — and it can also go full screen. We'll go for that. [movie audio plays] Okay, that's enough; I'm sure you've seen enough of this one this week. One trick here: if you want to know whether a movie view is a QTMovieView implemented by QT Kit, you can Control-click it, and you'll get a little pop-up menu. We'll come back to that later in another example.
Next, I want to go back to a little earlier this morning, when Adrian showed a version of the QuickTime plug-in that he created very quickly, leveraging the new WebKit facility to add JavaScript bindings and the new Safari plug-in capabilities that you can apply to your Objective-C class. I've already brought the code over. What I want to show you here, briefly, is that the Info.plist is where the MIME type that you support is described. If you use this technique to create plug-ins that use movies, first of all, stay away from the standard MIME types; this is really intended to let you create rich, complicated plug-ins that happen to play movies. We'll take a quick look at the sample HTML, just in case you're wondering how I get to exactly this MIME type even though the embed source is a movie: you actually specify that this particular embed is this type, and Safari will find your plug-in and call it. We'll just quickly run Safari and show that page, and here we have the plug-in being used. It supports these buttons down here that call some JavaScript. This movie has been shown a few times this week already — it was shown yesterday, and there will be a full showing, I understand, in the next session. Let me just skip around here a little bit. The idea behind being able to call QuickTime directly inside a plug-in is that you can add additional features like full screen: directly from a plug-in you could take your movie full screen, which is kind of cool.
[Music]
The last thing I want to show you — another reason I showed you the plug-in — involves Dashboard. If you saw Monday's session, Dashboard is driven by HTML for the UI elements, and underneath it talks to JavaScript and to plug-ins. Of course you can use this technology to do things with QuickTime inside Dashboard. So I've got a simple Dashboard gadget here; it's a simple movie player which will play the different conference movies — we'll just stay on 2004 here — and here's a play button: it starts playing; stop; play. What's also nice about Dashboard for this one is that when you make the widget go away, it pauses, and when you come back, it picks up where it paused. And just to show what QT Kit gives you, it also supports full screen inside Dashboard — that works.
[Music]
One other reason I showed you that widget: if you take your SDK back and try to do this yourself, it won't work — not with this version. The problem is that with the new QuickTime using OpenGL, creating the GL surface with a window from Dashboard currently fails, so I had to work around that; of course it will be fixed. So, trying to save you some grief: don't try this at home just yet. Okay, with that I'm done. Just as a final wrap-up: one of the good things about the new HIMovieView and QTMovieView is that they let you add QuickTime playback to the rest of the OS; we're well integrated now with Carbon and with the nice new classes in Cocoa. Back to Sean.
Alright, enough of the easy stuff — now you want to customize your pipeline. Here's our diagram of the pipeline again. I'm going to focus on the QuickTime side of it, and later I'll hand it over to Ken and Frank to talk about the other pieces. So why would you want to customize it? Maybe you want to perform some image processing — you want to use that cool new Core Image stuff — or you have a heads-up display, you want to draw an FPS counter, or you want to put cropping boxes over the video or something; maybe you have a game and you want the movie playing inside the overall scene. Well, you can't use HIMovieView or QTMovieView to do these things; they're just designed for a single rectangular window, or a view in a larger window. So you're going to need to use OpenGL, Core Video, and the visual context. So how does this pipeline work when you don't have the view?
The first thing in here is the movie — first, you're going to need a movie. You may be thinking: I can just call NewMovie, or NewMovieFromFile, NewMovieFromScrap, NewMovieFromDataRef, NewMovieFromDataFork, and so on — you have ten choices. Well, now you have eleven. Think of this one as "new movie from anything": all the parameters are passed in through a list of properties, and all the results are returned through properties, so it's a superset of everything else — you can get the same functionality as all those other NewMovie calls with this new one. But the biggest difference is that you can now create a movie that does not inherit the current GWorld. That's the subtle semantics of all the other NewMovie calls: whatever port happened to be current — the last call to SetPort — your movie would get that, and that's where it would render. So if you just created a movie and started playing it, it might draw all over your screen or something. With this call you can specify a visual context to use upon creation, and you can set it to nil and have it render nowhere. This is the recommended way to create all movies, and in fact, if you pass all zeros to the function, it behaves exactly like calling NewMovie with zero.
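A minimal sketch of that call — the property classes here are the shipping QuickTime 7 constants, and error handling is trimmed:

    #include <QuickTime/QuickTime.h>

    Movie OpenMovieWithoutGWorld(CFURLRef url, QTVisualContextRef visualContext)
    {
        Movie movie = NULL;
        Boolean active = true;
        QTNewMoviePropertyElement props[] = {
            { kQTPropertyClass_DataLocation, kQTDataLocationPropertyID_CFURL,
              sizeof(url), &url, 0 },
            // visualContext may be NULL: the movie then renders nowhere.
            { kQTPropertyClass_Context, kQTContextPropertyID_VisualContext,
              sizeof(visualContext), &visualContext, 0 },
            { kQTPropertyClass_NewMovieProperty, kQTNewMoviePropertyID_Active,
              sizeof(active), &active, 0 },
        };
        OSStatus err = NewMovieFromProperties(sizeof(props) / sizeof(props[0]),
                                              props, 0, NULL, &movie);
        return (err == noErr) ? movie : NULL;
    }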
The next piece is the visual context. So what is this thing? It's the abstraction of the video rendering destination for QuickTime movies — a replacement for the GWorld. This is where we get our new rendering performance from: we can take advantage of the GPU, we can have multiple buffers going at once, and we're not restricted to certain movies. One of the fundamental bottlenecks of the old QuickDraw rendering model was that everything went through a single buffer: the decompressor would decode into a buffer, then the hardware would start pulling out of that buffer onto the screen, and the decompressor couldn't start writing into it again until the hardware was finished reading from it. That was just a fundamental bottleneck. Now we have multiple buffers — the decompressor can start decompressing into a completely different buffer while the hardware is using the other one — so we got rid of that restriction. And we're not restricted to movies that have single video tracks with single codecs that support the right particular format and aren't transformed or something; you can use any movie with the visual context. And because we've decoupled the decoding from the presentation, you get more asynchronous behavior: our video media handler will actually be decoding ahead of what you're actually displaying. So how do you create this thing?
Well, in Tiger we're shipping an OpenGL texture context. The visual context is actually an abstraction — an abstract base class, in object-oriented terms; there could be many different kinds of visual contexts — but for now we're shipping the OpenGL texture context, and it gives you a stream of OpenGL textures for your video frames. So you're going to need to set up OpenGL first, and we use the Core OpenGL (CGL) objects to create this thing: a CGL context and a CGL pixel format. Most of you probably don't use CGL directly, so you'll be using Cocoa or Carbon to do that. Cocoa developers can use NSOpenGLContext and get the underlying CGL context; same with Carbon, using AGL — but the Carbon part is new in Tiger: there's a new function to get a CGL context from an AGL context, and the same for pixel formats. Here's a little example of creating the texture context (sketched below). Note that the textures that come out of this thing can only be used with the CGL context you pass in, unless you create a shared context, which is a more advanced OpenGL topic. So now you have these two pieces.
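Roughly, the creation looks like this — a sketch, where the CGL context and pixel format would come from your NSOpenGLContext (-CGLContextObj / -CGLPixelFormatObj) or from AGL's new Tiger accessors:

    #include <QuickTime/QuickTime.h>
    #include <OpenGL/OpenGL.h>

    QTVisualContextRef CreateTextureContext(CGLContextObj cglContext,
                                            CGLPixelFormatObj cglPixelFormat)
    {
        QTVisualContextRef textureContext = NULL;
        // The textures this context produces are only valid in cglContext
        // (or contexts shared with it).
        OSStatus err = QTOpenGLTextureContextCreate(kCFAllocatorDefault,
                                                    cglContext,
                                                    cglPixelFormat,
                                                    NULL,  // no attributes
                                                    &textureContext);
        return (err == noErr) ? textureContext : NULL;
    }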
You want to connect them together, so you call SetMovieVisualContext. This is a replacement for SetMovieGWorld, and it's how you direct a movie to use your visual context. Note that you can't have two movies playing into the same visual context at the same time, so this call will fail if another movie is already connected. If you want to play multiple movies, you're going to have to do the compositing yourself: create separate texture contexts, get the textures, and use OpenGL to composite them. This call can also fail if your hardware does not support the size of the video — it's just not sufficient for that movie; more on that later.
Okay, calling SetMovieVisualContext with NULL is also a little special. Unlike SetMovieGWorld with NULL, this will actually tell your movie not to draw. SetMovieGWorld, when you passed NULL, would tell the movie to use the current GWorld, whatever that happened to be — just like the NewMovie calls — but this is actually going to tell it not to render. And one important note: when you have a movie that's targeted at a visual context and for some reason you need to target it at a GWorld, first go through a NULL visual context to actually make it stop using the visual context before switching. Important little detail here.
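In code, that whole dance is short — a sketch assuming movie, textureContext, and gworld from the earlier steps:

    // Direct the movie at the texture context.
    OSStatus err = SetMovieVisualContext(movie, textureContext);
    if (err != noErr) {
        // e.g. another movie already owns this visual context, or the
        // hardware can't handle video this large.
    }

    // Later, to retarget the movie at a GWorld, detach it first:
    SetMovieVisualContext(movie, NULL);   // stop using the visual context
    SetMovieGWorld(movie, gworld, NULL);  // then it's safe to switch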
Okay — the ICM can drive a visual context as well, not just a movie. You can use ICMDecompressionSessionCreateForVisualContext, and this gives you access to the visual context at a lower level than the Movie Toolbox. There'll be more on this ICM topic tomorrow at 3:30, in "Next-Generation Video Formats for QuickTime". Alright.
So you have these two things and they're connected — how do you get the textures out? There are three API calls you need to use. First, on the QT OpenGL texture context, there's "is new texture available" — it's a long name. You pass it a timestamp, asking: is there going to be a new texture for this time in the future, different from the previous one I received? So you can see this is sort of a polling model: you ask it periodically whether something new is available. When that returns true, you move on to the next API, "copy texture for time": this will pull out that texture and give you a copy of it for the same timestamp you used in the previous function. Note that this uses the Core Foundation retain and release semantics, so be sure to release those textures when you're done with them; otherwise you'll chew through your video card's memory in no time. The last one is the texture context's "task" call, which is used to give the texture context a little time to do some housekeeping — it gets a chance to check on outstanding textures and see if they're ready to be reused. It's not thread-safe, so more on thread safety later.
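Put together, the per-frame poll looks roughly like this — a sketch using the names these calls shipped under in QuickTime 7 (the session's pre-release names differ slightly), with a hypothetical DrawTexture standing in for your OpenGL code; outputTime comes from the display link callback described next:

    void RenderOneFrame(QTVisualContextRef textureContext,
                        const CVTimeStamp *outputTime)
    {
        if (QTVisualContextIsNewImageAvailable(textureContext, outputTime)) {
            CVOpenGLTextureRef texture = NULL;
            OSStatus err = QTVisualContextCopyImageForTime(textureContext,
                                                           NULL, outputTime,
                                                           &texture);
            if (err == noErr && texture != NULL) {
                DrawTexture(texture);            // hypothetical OpenGL drawing
                CVOpenGLTextureRelease(texture); // CF semantics: release it!
            }
        }
        QTVisualContextTask(textureContext);     // housekeeping; not thread-safe
    }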
With that, I'll let Ken talk a bit about the textures you get out of this thing.
Thanks, Sean. Alright, so now we get into Core Video a little bit, and I'll give you guys a quick overview of what that's all about. You've seen this diagram a couple of times before; I'm mainly going to go over the Core Video timing and buffering aspects down here. There are two main pieces to Core Video today. There's buffer management: we wanted to have a common data-interchange format between QuickTime and OpenGL. That was always really tricky before — GWorlds were very non-optimal; you could get the bits out of a GWorld and into a texture, but it was kind of a pain in the butt, and if you didn't write the drivers, knowing how to do that correctly is kind of tricky. The other part of it is display synchronization: in this whole scheme, you want to know when to go and ask QuickTime, "when should I ask for the next frame?" — and I'll talk about that in a minute.

So first, a little more detail on the buffering model. All the buffer objects that you get out of QuickTime are Core Foundation-based objects, so as Sean said, use CFRetain and CFRelease on them. There's sort of an abstract base class, if you will, that defines a couple of interesting behaviors: there's this concept of buffer attachments for metadata — things like timestamps might go there, colorspace information, that sort of thing. There aren't any really defined yet, but we expect that there will be at some point. Then there are a couple of concrete buffer types you'll run into in the Tiger timeframe: CVPixelBuffer and CVOpenGLTexture — and in the diagram you can sort of see how that works. Just real briefly on CVPixelBuffer: this is how, internally, the ICM decodes memory-based data. For a codec, it needs to be able to put things into main memory — it doesn't want to put them right into VRAM — so it'll use a CVPixelBuffer for that. I guess that's all I had to say there, sorry. It's also the foundation for the new ICM APIs — again, tomorrow at 3:30 Sam will talk about that a little bit more. But here's the one that's interesting for you guys: the CVOpenGLTexture object.
This basically represents a high-level abstraction — a wrapper, whatever you want to call it — around an OpenGL texture. Its job, internally, is to deal with the details of getting the texture data into OpenGL: if it's YUV, which YUV is it; is it RGBA, is it ABGR, is it ARGB, whatever — it figures out how to get that into OpenGL. It knows about all of our custom extensions that we use in Quartz, you know, to avoid the memory copies, that whole thing. The API is pretty straightforward: you can get back the texture target — these days it's pretty much always going to be texture rectangle — the size, the texture ID, that sort of thing, and you can use OpenGL to query additional internal information if you care. The one thing I want to point out is that you should really ask us for the texture coordinates, using either of these two calls here: if it's something like DV video, the texture might be 720 x 480, but you may really only want to use a sub-region of that, because there are undefined regions and you don't want to display garbage to users.
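A sketch of those accessors in use; the clean-coordinate call fills in the four corners of the usable sub-region:

    #include <CoreVideo/CoreVideo.h>
    #include <OpenGL/gl.h>

    void BindVideoTexture(CVOpenGLTextureRef texture,
                          GLfloat lowerLeft[2], GLfloat lowerRight[2],
                          GLfloat upperRight[2], GLfloat upperLeft[2])
    {
        GLenum target = CVOpenGLTextureGetTarget(texture); // texture rectangle
        GLuint name   = CVOpenGLTextureGetName(texture);
        glBindTexture(target, name);
        // Ask Core Video for the clean sub-region rather than assuming
        // the full encoded width/height (DV has undefined edge regions).
        CVOpenGLTextureGetCleanTexCoords(texture, lowerLeft, lowerRight,
                                         upperRight, upperLeft);
    }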
The next part of Core Video is the display link, and this is sort of responsible for driving the timing of the entire system. Overall, with this new visual-context scheme, it's kind of a pull model, sort of like Core Audio: the idea is that every now and then you go and ask QuickTime, "hey, is there a new frame available for some upcoming display time?", and the display link's job is basically to tell you when you should go and make that call. It provides timestamp information about when the next vertical blanking period is going to happen on whatever display you're on. Contrary to popular belief, this does apply to flat panels as well; they really do have VBL-style timing, even if they don't really have a blanking period. The timestamp it gives you is a little data structure that's actually required by the QuickTime visual context, so it knows when the next display time is going to be. The interesting thing is that it's not the time right now, when the callback is made; we're actually trying to estimate when the next display period on your display will be. So it's a little bit forward in time, but that gives you time to do your OpenGL rendering setup calls, Core Image, whatever you want to do, before the display happens. There are separate render and display callbacks that happen one right after the other; you can use that however you'd like — we might define some more behaviors there in the future — but it's pretty straightforward. All of these callbacks happen on a dedicated high-priority thread, sort of like Core Audio. We're not so high that we'll bump off the window server or Core Audio or screw anything like that up, so don't worry there; it's similar to the high-priority thread that we use deep in the DVD Player. You can create one of these things from a CGDirectDisplayID — or create multiple ones — and if you move from one display to the other, there are calls to switch back and forth.
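A sketch of the setup, using the single output callback as Core Video shipped it (the session describes separate render and display callbacks on the seed):

    #include <ApplicationServices/ApplicationServices.h>
    #include <CoreVideo/CoreVideo.h>

    static CVReturn MyOutputCallback(CVDisplayLinkRef displayLink,
                                     const CVTimeStamp *inNow,
                                     const CVTimeStamp *inOutputTime,
                                     CVOptionFlags flagsIn,
                                     CVOptionFlags *flagsOut,
                                     void *userInfo)
    {
        // inOutputTime is the *estimated next* display time: hand it to the
        // visual context poll and render on this high-priority thread.
        // e.g. RenderOneFrame(textureContext, inOutputTime);
        return kCVReturnSuccess;
    }

    CVDisplayLinkRef StartDisplayLink(void *userInfo)
    {
        CVDisplayLinkRef link = NULL;
        CVDisplayLinkCreateWithCGDisplay(CGMainDisplayID(), &link);
        CVDisplayLinkSetOutputCallback(link, MyOutputCallback, userInfo);
        CVDisplayLinkStart(link);   // callbacks begin on a dedicated thread
        return link;
    }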
To give you sort of a graphical idea of what goes on here: the running thread basically picks up information from I/O Kit that says, hey, here's when the last VBL happened, and here's the time span since the previous one, so we can estimate when the next one is going to happen, and then feed that timestamp to you. That triggers the callbacks, and then you can go all the way up through your custom pipeline, ask QuickTime for the next frame, and start pulling it back through all the other custom effects. With that, I'll bring Sean back up, and he'll give you a demo of how this stuff works.
Alright — now that you've seen the pieces of the pipeline that are required — the visual context, the buffers, the display link — I'll show you a little app that uses them. You may have seen this in an earlier session this week. This is using the display link to get its rendering callbacks, and it's pulling the textures out of the visual context, like we talked about, and rendering them with OpenGL. Since we're using OpenGL, it's pretty easy to spin the texture around, rotate it, and so forth. And surprisingly, where before you would render one frame and then it was gone, because we have multiple buffers now you don't have to use a buffer and immediately get rid of it — you can just hold on to it. So I have an array of these, keeping the old textures around, and you can see we're rendering many, many video frames on every single pass. You can see here the dimensions of this movie playing at full rate, and we're rendering 46 frames every single pass — more than that — and these are high-definition frames. So you can see, once you have the video up on the graphics card, you can do a lot with it, because there's a lot of horsepower there. [movie audio plays] So there's that.
So I was showing the SetMovieVisualContext API that we're using; we also added capabilities to use the ICM APIs. This next one is using a sequence grabber, extracting the frames from a DV camera and feeding them directly into the ICM, which has been connected to the visual context. Here — ignore the 0.0 here for a moment — look at that, almost 300 frames. This will be sample code; it's not on the DVD, it's not downloadable yet, but in a week or two we'll have it for you. So there's that. Alright, back to the slides. Okay — now for Frank; he's going to show you how to use OpenGL to do more interesting effects. — Thank you, Sean.
Good afternoon. I'll talk a little bit more now about the benefits we can add to this new pipeline by customizing it, and for that we use OpenGL processing. Let's go to the next slide — there we go. You've seen this diagram; I hope everybody has it in mind by now, and I'll talk a little more about the gray area here. So what can we do with OpenGL when we use it in our new video pipeline? The added benefits that we bring in by using OpenGL: first of all, there's blending, so you can composite things on top of each other and use the blending effects that you have in OpenGL — I'll show that a little later in the demo. The other part is geometry: think of putting this into a game, for scenes where you want to play video — you've seen in the earlier demos how you can twist things around and do all kinds of funky stuff with it. So, an overview of what this is really about.
First: you don't have to be an OpenGL guru to really use our stuff. As Ken pointed out earlier, we take a lot of the pain away from you by doing all the texture management for you, so this is really, really easy to use. The other really important thing to keep in mind is that we free up the CPU: the additional processing happens on the graphics card, so the CPU is now more free to decompress video — we drop fewer frames, and you can play more streams at the same time. On the other hand, as we've seen with the resizing, your UI is more fluid and you get better feedback, because the CPU has more cycles to spend on your user interaction. That makes live resizing and zooming really, really easy for you to do.
For those who are new to OpenGL, let me go a little bit into the terminology — this is the very five-second overview. OpenGL normally draws primitives, like rectangles; that's the basic foundation of what it draws with. What we can do with images is what we call a texture — you've heard that terminology a couple of times — and it pretty much means we skin those drawing primitives, and that's how they end up on the screen. You've probably missed all the OpenGL talks by now, but there's a lab session later on where you can get your fingers on what OpenGL is all about. As was already pointed out, thread safety is an important thing: since OpenGL is not reentrant, we have to take care of thread safety ourselves, and for that we can use pthread mutex locks, or NSLock, or a shared context, which you can use from multiple threads. And Core Video, as we've already seen, uses a separate thread, so you would normally run into exactly these thread-safety issues. You also have to use these locks when you use our new API, the QT OpenGL texture context — though at least one call, "is new texture available", is thread-safe, and you can call it outside the locking part. And for those of you who want to use AppKit, I want to point out that you have to override the update call and wrap it in a lock, because otherwise AppKit will make some OpenGL calls of its own and you'll run into thread-safety issues.
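A minimal sketch of that override, in an NSOpenGLView subclass, with a hypothetical renderLock shared with the display-link thread:

    // In your NSOpenGLView subclass:
    - (void)update
    {
        [renderLock lock];   // same NSLock your render callback takes
        [super update];      // AppKit touches the GL context here
        [renderLock unlock];
    }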
Getting a little deeper into how the whole thing works: you've seen the pieces, and I want to give you a quick overview of how we really get the whole thing to the screen. In the first part I'm setting up the display link: you can see I create one, I set up my two callbacks for the render and the display parts, and after that all I have to do is start it. Now I have my timing service running, and I have the two callbacks that will continually get called from the display link. On the rendering path, we've seen all the pieces that come together: this is my render callback, and all I'm doing here is checking whether there's a new texture available. If yes, I can throw away my old one — I don't need to keep it, since I'm not doing the trailing effect that Sean showed — so I just throw the old texture away, grab the new one, and I'm ready for the next step: bringing it up to the screen.
So how do we bring it to the screen? There's a little more detail here. In the first part, I have to make sure that I clear out whatever was on the screen — I'm just giving you an overview here; there are details you might have to look into, depending on what you're doing. In the second part, I bind the texture. What does that mean? OpenGL now knows that this is the texture I want to draw — the one I'm skinning my whole rectangle with. Then I draw a rectangle — simply a quad, as you can see, with the coordinates mapping the texture coordinates to my rectangle — and that's all I need. In the last step, voilà, I bring it to the screen with a flush, and we see the image on the screen. That's how simple it is to draw.
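In the same spirit as the slide, the draw step is roughly this sketch — assuming the clean texture coordinates were fetched earlier and the projection maps the quad corners to the window:

    #include <OpenGL/gl.h>

    void DrawFrame(GLenum target, GLuint name,
                   GLfloat ll[2], GLfloat lr[2], GLfloat ur[2], GLfloat ul[2])
    {
        glClear(GL_COLOR_BUFFER_BIT);   // 1. clear whatever was there

        glEnable(target);
        glBindTexture(target, name);    // 2. bind: "skin" the quad with the frame

        glBegin(GL_QUADS);              // 3. one quad, texture coords -> corners
        glTexCoord2fv(ll); glVertex2f(-1.0f, -1.0f);
        glTexCoord2fv(lr); glVertex2f( 1.0f, -1.0f);
        glTexCoord2fv(ur); glVertex2f( 1.0f,  1.0f);
        glTexCoord2fv(ul); glVertex2f(-1.0f,  1.0f);
        glEnd();

        glFlush();                      // 4. voilà: push it to the screen
    }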
Now I'll show you the whole thing in the demo. Demo machine, please. Okay.
This is a little sample application that I wrote; I call it Live Video Mixer. What I'll bring in now are three video files of a little pool game that we shot with three cameras at the same time, so I'm trying to imitate a studio here. What I can do is play the movies — I have the different camera angles in the bottom part here — and up above it does the compositing, so I can superimpose the close-up of that shot. Say I actually want to see the other camera angle — see, yes, he's struggling with this spot a little bit — and I can do this blending, and you see it's no problem to run this on any kind of CPU. And I can use fun stuff like what we call multi-texturing: I can use masks and put this video into funky shapes, like this channel here. And now they're laying on top of each other — okay, let me take this one and move it up in the scene; you see how fluid this is while the movie is playing back. I'm really playing back three streams here — this one is a semi-transparent shape; let me position it into this corner — and even the background I can move up and down the whole time. With that, we'd like to go back to the slides.
To quickly summarize what you've seen: I used the display link to get precise timing, and that helped a lot — I've done these kinds of applications before, and I can tell you they used to take tons of code; here it's really just a few lines. For the compositing I used the GL blending part — I'm just throwing in a little terminology for you, so you can find it later in the books. I showed you how to do masking using multi-textures, which makes this really easy — it was a pain to do beforehand. The resizing we can do simply with glViewport; I normalized the coordinates, which makes it very easy to work in different coordinate spaces. And with these little ingredients I can create a really easy application that shows a little bit more fun with video. With that, I'll hand the stage back to Ken.
Alright, thanks. So this is sort of my favorite part of this. You can do all this fun OpenGL stuff, but earlier this week you saw Core Image doing all this cool effects processing — so how are we going to get that into this whole new pipeline? Integrating Core Image is actually very straightforward. I'm just going to briefly cover a little bit of the Core Image API here, in case you guys missed the session. In this case, you create a CIContext with your OpenGL context and pixel format. Then, once you've got that CVOpenGLTexture object out of the QuickTime visual context, you basically need to create a CIImage to represent it, and Core Image has a very nice API for creating a CIImage out of an arbitrary OpenGL texture. In this case I basically fetch out the texture's name, its size, and whether it's flipped — that has to do with whether the origin is the upper or lower left-hand corner; most of the stuff coming out of QuickTime will be flipped — and then create the CIImage with all those parameters. Once you've got that, you can run it through a CIFilter just like any other CIImage: you set the CIImage you created as the input — in this case just the input image to the filter — and then you can pull the result right back out again as another CIImage. Then you basically use Core Image like you would anywhere else; in this case I'm using drawImage:atPoint:fromRect: — I think that's the method name. And again, you'll note that I'm using the clean rect in this case, to make sure that I'm only rendering the sub-region that's defined by the original CVOpenGLTexture.
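Pulled together, the Core Image path looks roughly like this — a sketch; the CIHueAdjust filter and its angle are just illustrative stand-ins for whatever effect you want:

    #import <QuartzCore/QuartzCore.h>
    #import <CoreVideo/CoreVideo.h>

    // ciContext was created once, up front, with the same CGL objects:
    //   [CIContext contextWithCGLContext:cglContext
    //                        pixelFormat:cglPixelFormat options:nil]
    static void FilterAndDrawFrame(CIContext *ciContext,
                                   CVOpenGLTextureRef texture)
    {
        // Wrap the QuickTime-provided texture in an (immutable) CIImage.
        CIImage *input =
            [CIImage imageWithTexture:CVOpenGLTextureGetName(texture)
                                 size:CVImageBufferGetDisplaySize(texture)
                              flipped:CVOpenGLTextureIsFlipped(texture)
                           colorSpace:nil];

        // Run it through a filter; CIHueAdjust is just an example.
        CIFilter *filter = [CIFilter filterWithName:@"CIHueAdjust"];
        [filter setValue:input forKey:@"inputImage"];
        [filter setValue:[NSNumber numberWithFloat:0.5f] forKey:@"inputAngle"];
        CIImage *output = [filter valueForKey:@"outputImage"];

        // Draw only the clean sub-region of the original texture.
        [ciContext drawImage:output
                     atPoint:CGPointZero
                    fromRect:CVImageBufferGetCleanRect(texture)];
    }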
A couple of notes on this. First, CIImages are immutable, so every time you get a new frame out of QuickTime, you're basically going to have to create a new CIImage for it. There's a little bit of trickiness with that, though: both the CIImage and the CVOpenGLTexture will have a reference, if you will — not in a CFRetain or Objective-C reference-counting sense, but they'll both be referencing the underlying OpenGL texture object — so they sort of need to come and go together. If you're going to keep your CIImage around, make sure you keep the underlying texture that you got from QuickTime around at the same time. One way you might simplify that for yourself, if you're a Cocoa programmer: you could subclass CIImage to do the CFRetain for you, and then when you release the CIImage, it can let the texture go away.
To do a little demo of this, I was trying to think of interesting demo ideas, and an idea we tossed around a month ago — to not do sepia or anything like that — was to do some underwater video color correction. I happened to be on vacation a couple of weeks ago and shot some underwater video, and these are a couple of sample images from it. The first image, on your left, was taken at about 17 feet or so; the one on the right is down around 50 feet or so, and you can see that there's a color shift — more and more of the red disappears as you go deeper. The other interesting thing is that I needed some way to calibrate this: at what depth am I, and how much correction am I going to do? Oops — go back to the slides real quick; slides, please — thanks. The color matrix is a good match for doing the color correction, but I still have to get the depth information. That's where my trusty little dive computer comes in: the neat thing about it is that every 10 seconds it records what depth I'm at; I can hook up a little serial cable, pull that information back out, and get a dive profile with it. And I'll use OpenGL for a little gratuitous heads-up display in the demo.
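The correction itself can be as simple as scaling the red channel with CIColorMatrix — an illustrative sketch, where redGainForDepth is a hypothetical helper doing the demo's linear interpolation:

    // inputImage is the CIImage wrapped around the current video frame.
    CIFilter *matrix = [CIFilter filterWithName:@"CIColorMatrix"];
    [matrix setDefaults];   // identity green/blue/alpha and bias vectors
    [matrix setValue:inputImage forKey:@"inputImage"];
    CGFloat gain = redGainForDepth(currentDepthInFeet);   // hypothetical helper
    [matrix setValue:[CIVector vectorWithX:gain Y:0.0 Z:0.0 W:0.0]
              forKey:@"inputRVector"];                    // boost red only
    CIImage *corrected = [matrix valueForKey:@"outputImage"];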
So, demo machine on, please. Alright — I have this demo, lovingly called Coral Video; I think it was Tim's idea. What this shows is a little video clip that I shot on one of the days: I start out somewhat shallow, again about 20 feet or so, just sort of swim along the coral, and then end up down here on the side a little bit. You'll notice I can play it — you can hear me breathe; everybody thought I should leave that in there, but I don't think the audio is on. Alright, enough of that.
So down here on the right I've got the dive profile: here's where I basically got in the water, then swam around a bit, up and down all over the place, and right here I put a couple of bookmarks in the dive computer to mark where I was going to shoot the video clip, so I could find it again. I need to do a little bit of a manual calibration step here and say, okay, this is about the start point of the clip — the dive computer's bookmarks are only accurate to 10 seconds, so there's a fudge factor in here for sure. Now that I've set the current time, you can see that as I move around, the dive-profile display matches where I am in the clip. So I can go to the beginning here, and I have all these little color-correction controls I can use — I can play with the hue, saturation, brightness, that sort of thing — but in this case I really just need to add in a little bit of red, and sort of calibrate that as my start point, along with the depth. Then as I move along to the end here, I need to redo it, because I'm deeper, and I need to add a little bit more red back in to make it look right. This is not scientific by any means — nobody should use this code to do this for real; it's just complete linear interpolation — but the sort of cool part now is that if I hit play right here, as I go down the side of the coral and swim around, you'll see that the colors basically stay correct as I descend — from, I think, about 25 feet down to about 50 feet of water, in the space of about a minute. It just keeps the colors consistent the entire time. Normally, when I shoot video like this, I'd do it with a little special red color-correction filter, and I wouldn't have to do it in software, but I thought it would be cool to see how good a job I could do with Core Image. Just to give you an idea, this is what the original video looks like — there's a pretty noticeable fix-up here — and this is using another little CI transition, a little wipe transition, that just stops halfway through to show both sides. And again, that little depth display in the upper left is rendered with OpenGL — just another little heads-up texture. So that's about it; back to the slides, please.
Alright. The summary on that: as I've shown, it's pretty easy to get this stuff into Core Image — once you've got it out of QuickTime, you basically just feed the video and the color-correction data into Core Image; really, really straightforward — and I used OpenGL in this case to do an additional heads-up display. Actually, there's one thing I didn't mention before, and it'll be in the sample code: I wrote a cheesy little deinterlace filter in Core Image as well; it's in that app — just a little side note I forgot to mention earlier.
So, a couple of caveats with this whole thing — here's the bad part. There are limitations to using all this new pipeline stuff. To use visual contexts at all, you basically need to be on Quartz Extreme-class hardware, and that's mainly because, again, we need texture-rectangle support — very little video is 256 x 256 or whatever. There are also size limitations: if you're on an older piece of hardware, it might not have enough memory, or it might not be able to support a texture resolution as big as the video you're trying to pump through it, so that's something else to watch out for. For doing the Core Image stuff, again, as was shown earlier this week, you basically need a Radeon 9600 or higher, or an NVIDIA GeForce FX or higher. One thing you can do, though: if the video coming out of QuickTime is going to be too big for your VRAM limitations or whatever, you can set the movie box smaller, and QuickTime will software-scale it down; then you'll be able to feed that through GL without using as much VRAM, with your smaller texture size.
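A sketch of that trick; halving both dimensions here is an arbitrary choice:

    Rect box;
    GetMovieBox(movie, &box);
    box.right  = box.left + (box.right  - box.left) / 2;   // half width
    box.bottom = box.top  + (box.bottom - box.top)  / 2;   // half height
    SetMovieBox(movie, &box);   // QuickTime software-scales before upload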
With that, let me bring it over to Tim for the summary.
[Applause]
Wrong way — I want to see my name again. So we basically built this — this is the architectural stack in Tiger — and we built this pipeline; you've seen this diagram. The important thing is that you can basically use the movie views for high-level access, where you don't need to get into the details; but if you want to customize your application — you want to make it a special application — you can use the full pipeline to distinguish your app from another one and take advantage of OpenGL and Core Image. Use the seed: everything you've seen here is basically working in the seed, with a couple of notes. The HIMovieView that's shipping — the one the Finder is using, the one that you can use — is functional, but it doesn't actually use visual contexts yet; we had a couple of integration issues that we still needed to work through, and we'll do that for Tiger. You'll find that some movies — for example, a movie you've rotated using QuickTime — might not play exactly right; that's a known issue. And the call that you need to get a CGL pixel format from AGL currently returns NULL, and we need that pixel format, so that's currently not working — a little note. So, for more information:
Documentation: you can get the information on visual contexts from the QuickTime documentation that's in the Tiger docs on your CD, and you'll be able to download the sample applications that you've seen — some of them are already up there today in the Session 215 package, and some will be updated as the weeks go on. For the hands-on lab — basically in the back, the graphics and media lab — tomorrow morning there will be a bunch of people from the GL team who can help you, now that you all want to know how to do GL; in the early afternoon there will be people from Core Video, Core Image, and the QuickTime team that's been doing the visual context; and at the end of the day, after the session on the new ICM APIs and B-frame coding technologies, there'll be more people from QuickTime. Upcoming sessions: if you stay in this room, or come back into this room, you're going to hear the update on audio and more information about audio capture, as well as Sequence Grabber changes. And also tomorrow afternoon, again, Next-Generation Video Formats will talk about H.264 and the changes to the Movie Toolbox and the ICM to support B-frame-coded video. If you're interested in being seeded with QuickTime as we start seeding the next version, send your name, company, products, and technology interests to the QuickTime seeding address: quicktime seed at apple.com.