WWDC2001 Session 402
Transcript
Kind: captions
Language: en
My name is Tim Cherna, I'm manager of the QuickTime Pro Video team at Apple, so we're going to talk about pro video. So why pro video? Well, it's good to have a team for pro video. QuickTime has a lot of customers, as you've probably seen, either by seeing Tim's session this morning, or the interactivity session that was just before mine, or the broadcasting session that follows. So QuickTime has a lot of customers, but it's also the foundation of a lot of core high-end video technology shipping in apps from both Apple and other companies, such as Adobe and Media 100, so it has unique demands in that space. The demands tend to be: it must have high quality, it must have high performance, and it has to work well with the hardware that you get. We have hardware now from companies like Matrox and Pinnacle and Digital Voodoo and Aurora, which work using QuickTime in the apps we talked about. And you've got to have consistent results, because you're going to take this material and maybe go on air with it or make it into a movie, so it's very, very important. So that's why we're specializing in pro video; that's why we have a pro video team. So what about you guys?
This session is targeted toward developers who are writing video editing or video processing applications; that can be from the high end down to simple ones which can make the experience of using things like iMovie easier. It's also targeted toward codec writers who are writing either software codecs or hardware codecs, and this will give you some extra information. So, the things we're going to talk about today: some improvements we've made in QuickTime 5 which make your rendering experience better; improvements for supporting hardware cards such as the ones I talked about; ways that you can take advantage of asynchronous operations and multiprocessing on our high-end Macintosh systems; some future things we're going to do with our effects architecture; and finally some help in migrating your video hardware toward Mac OS X as a platform.
about rendering and when I talk about
rendering I'm really talking about
taking some compressed material
decompressing it and typically then
you'd apply some sort of effect to it a
video effect let's say a blur and then
we can press it back to the original
material that's a typical workflow
experience or experience inside a video
editing app where you looks like I
wanted to do a cross dissolve between
two streams of DV and you would render
it so you decompress the two streams of
DV you combine them
Yury compress it back to DV and then you
can send it out firewire so the
improvements we've done in that space
are we've improved some gamma processing
we've also added a new pixel format
called our 408 and of course we've
improved the DV codec which is something
that Tim talked about and I'll show you
some of the results we got there so my
My favorite first topic is gamma, and I'm going to give a little overview of gamma. Basically, when I'm talking about gamma, I'm talking about the nonlinearity of intensity reproduction: you have an input value and you have an output intensity, and there's a relationship between the input value and the output intensity, and it's not always linear. Like the diagram at the right, it has a curve, a power curve. And this relationship could apply to a camera, a video camera; it could apply to just a CRT monitor, or maybe an LCD monitor; it can also apply to the system as a whole, for example the Macintosh, or your television. So the issue is that there's a different gamma for the different systems that QuickTime deals with: video has a gamma established at 2.2, on the Macintosh the gamma is established at 1.8, and Windows basically uses the native value of the CRT, which is 2.5.
So why is that a problem for video rendering? Because QuickTime knows that video such as DV is at 2.2, and it wants to make it look correct when we display it on the Macintosh. For applications such as iMovie, when you're doing the preview on the desktop, we do a gamma correction stage to bring the image closer to what it would look like if you had an NTSC monitor next to your Macintosh monitor. That works really, really well, except it makes some problems for video rendering. I'm going to show you the gamma correction I'm talking about afterwards, in my demo.
So the solution we've come up with is that we now allow applications to specify the gamma that they want. They can say: give me whatever the source gamma was, or make it gamma 2.0, or make it gamma 2.2. Not only can you specify what the gamma should be via the gamma APIs we've added, you can also find out what the gamma actually was, which is really useful: did it work? And codecs can specify the gamma that they prefer. They can say: my private or custom compression format has a gamma of, let's say, 2.2, and therefore when you decompress it, if you ask for the source gamma, it'll be 2.2. So there's no more guessing: you can choose the gamma that you want to process your video rendering in, and you get what you want.
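To make that concrete, here is a minimal sketch of how an application might pin its rendering buffer to the source gamma; the function and constant names follow the pixel-map gamma accessors described for QuickTime 5's ImageCompression.h, so treat them as assumptions to verify against your own headers.

    /* Sketch: request "whatever gamma the source is in" on the GWorld we
       decompress into, so a decompress/recompress cycle does no gamma
       conversion at all. */
    static OSErr SetUpRenderGamma(GWorldPtr renderWorld)
    {
        PixMapHandle pm = GetGWorldPixMap(renderWorld);

        OSErr err = QTSetPixMapHandleRequestedGammaLevel(pm, kQTUseSourceGammaLevel);
        if (err != noErr) return err;

        /* Alternatively, pin the buffer to video gamma (2.2):
           QTSetPixMapHandleRequestedGammaLevel(pm, kQTCCIR601VideoGammaLevel); */
        return noErr;
    }

    /* After decompressing a frame, you can ask what gamma you actually got:
       Fixed actual = QTGetPixMapHandleGammaLevel(pm);   // 0x00023333 == 2.2 */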
So, pixel formats. The key thing about rendering is that you go from a compressed format to some sort of pixel format that you're going to do the rendering in, and the typical choices out there would be RGB or YUV. RGB has the advantage that it's native for graphics, a lot of people have used it for many years, and it has an optional alpha channel, so you can use it to do compositing fairly easily. YUV has the advantage that it's native for video, and typically it's stored in a 4:2:2 format, which means that there are two samples of luma for every chroma pair; the U and V refer to the chroma and the Y refers to the luma. So it's subsampled, which means it's good for storage: it represents what your eye can see, in other words your eye is more sensitive to luma than to chroma. But it's basically hard to render with, which is kind of my next slide, and of course there's no alpha channel. So, the problems with RGB and YUV. RGB has the extra color space conversion to go back and forth from YUV, since the data typically is natively YUV for video; it can also clamp the video, because the RGB color space is smaller than the YUV space. With YUV, the problem is that there's no alpha channel, so if you want to do compositing you're kind of out of luck; it's not really friendly for that. It's also hard to process because it's subsampled: if you just want to move your YUV image by one pixel, all of a sudden you have a problem, because you have to deal with the luma and the chroma separately, since the chroma was subsampled. And also, the standard black value for YUV video is 16, so every time you do an operation on YUV you're busy adding and subtracting 16. So we didn't quite like that.
So QuickTime came up with r408, and r408 is really nice. It's video friendly, which means it's YUV based. It's not subsampled: it's 4:4:4:4, so it has an alpha channel as well; it's alpha plus YUV. The other key thing about it is that the Y value is offset so that black is 0, so if you want to do a dissolve between two r408 values you just average the Y values and you've got your halfway point. And the positions of the components in r408 basically match up with the components in ARGB, so if you're just migrating your blit loops it's pretty easy. So it's a good format, and you can use it to improve your quality.
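As an illustration, here is a small sketch of creating an r408 offscreen to render into; the pixel format constant is the one the QuickTime headers use for 'r408', so double-check the name against your SDK.

    /* Sketch: build an r408 offscreen GWorld to render effects in.
       'r408' is 4:4:4:4 alpha+YUV with black at luma 0. */
    static OSErr MakeR408RenderWorld(short width, short height, GWorldPtr *outWorld)
    {
        Rect bounds;
        SetRect(&bounds, 0, 0, width, height);

        /* QTNewGWorld accepts a QuickTime pixel format where plain
           NewGWorld only took a bit depth. */
        return QTNewGWorld(outWorld,
                           k4444YpCbCrA8RPixelFormat,   /* 'r408' */
                           &bounds,
                           NULL, NULL, 0);
    }

With black at zero, a 50 percent dissolve really is just averaging bytes from the two sources, which is part of why it renders so quickly.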
And that's what we did with the DV codec. For rendering, we improved it by using the gamma and r408 support that I talked about. We also went in and improved the quality consistency, and the main things we wanted to achieve were consistent results between a G3 and a G4: we figured if you've got a faster machine, it should look as good, or the same, as it would on a G3. We improved the color fidelity, and we reduced the losses when you do multigenerational rendering; when you render a clip in an editing application, you don't want the rendered material to look worse than the original source. And we also improved the performance, by both improving the code path and making it accelerated on MP Macintoshes. So now I get to show you a little demo of this stuff working. Can we switch to demo four, please?
All right. What I'm going to show is a little application, and what I'm doing in this application is taking a DV clip and decompressing it to an offscreen in a pixel format that I can choose, taking the resulting decompressed image and recompressing it back to DV, and then I take that DV frame and redecompress and recompress it and keep doing that; in my test I do it sixty times. So I can see the results of a multigenerational render, and see the losses that we have with the DV codec. So let me just open up a file. This is Andrew, Kevin Marks's son, and he's holding a ball which he's moving, so you can see over here there's a lot of motion, and that has interesting effects on the DV compression and decompression that we're testing. And you can see there's a lot of detail in his hair. So he's our test clip for today. The first thing I want to show is why we gamma correct DV. This is the source clip as DV, and right now we're gamma correcting it so that it looks good on a Macintosh monitor. I can turn that off, and now that it's off it looks a lot brighter. What I'm doing here is using the gamma APIs to specify what I want the gamma to be, not of the display, but of the pixmap, the port's pixmap. So I can switch it to video gamma, and it's not gamma correcting, so it looks brighter, too bright; I can set it back to the default gamma and it looks dark. So natively it's going to use the default gamma, and we're going to show how you can use the gamma APIs to fix up your rendering.
So the first test: I render the clip through 2vuy, and when I do that, you'll see the resulting clip degrades quite a lot at every step, because I didn't actually say that I wanted to use 2.2 as my gamma. It's converting on every decompression, but it's not properly converting back on the compression. Now I can do the same test through 2vuy, and now you'll see that it looks basically perfect. I can play it, and it looks perfect; I can scrub through it. The difference is that I used the gamma API to say: please decompress this at 2.2. So when I recompress it, there's no gamma shift; I've avoided any gamma processing at all in that rendering cycle. Now I want to talk a little bit about the pixel format that I've chosen to use.
You're going to see two impacts of choosing r408 over RGB: the first impact is performance, and the second one is quality. I have two seconds of video that I've just rendered, basically doing this multigenerational test, one frame at a time, 60 frames. My two seconds of video took 2.4 seconds to process on this machine through RGB, so that's just a little under real time, and you can see, if I go to the last frame, some artifacts appear because of the losses going through RGB. So that's not good. We did the same test going through r408, which is the YUV format, and the first thing that's pretty impressive is that it takes 1.4 seconds, so that's faster than real time to do the decompression and recompression. The other thing that's really notable is the quality: I can't see that image change; in fact it really doesn't change at all. Just as a comparison, I ran the same test on QuickTime 4.1.2; everything so far was shown on QuickTime 5. On QuickTime 4.1.2, the first thing, I guess you can't read it from there, is that it took 5.8 seconds to do the same test, versus 1.3 seconds on QuickTime 5, so you can see the improvement in performance. The other thing is that as we play it, you can see there are quite a lot of artifacts that we had in 4.1.2; QuickTime 5 is much, much better. So let me just quit these things so that the next demos are good, and that's pretty much all I'm going to talk about. Can we go back to the slides, please?
So now I'm going to ask Jean-Michel to come up and talk about some improvements in hardware support.
Hi, my name is Jean-Michel, and I work in the QuickTime pro video group. I'm going to talk about the new features that we have added in QuickTime 5 in order to improve hardware support. So the first thing that needs to work is the remote control... okay, okay, that's what we call hardware improvement, right? Okay. So the first thing: QuickTime used to assume that all codecs would decompress right away, and for a software implementation it's pretty easy to understand that you can start decompressing whenever you want. When you have to deal with a piece of hardware, it's much more difficult. Usually, third-party developers have managed to deal with this issue, because the time it took to set up their first decompression was not that much. But during the development of QuickTime 5 we ran into some third parties who were trying to bring up their hardware to support QuickTime, and their time to decompress the first frame was quite huge. What was happening is that QuickTime was getting upset, because it was totally unable to understand that the first frame was going to take a while to show up on screen, but the next ones after that would be fine. That's a concept that we didn't have before.
So the solution was to make QuickTime aware of this latency, and the way you report this latency is by using a new API that we have added on the codec side, which is called ImageCodecGetDecompressLatency. Basically, your codec reports its internal pipeline duration, and makes QuickTime aware that it's going to take you a long time to start decompressing the first frame; the ones which come after that will be fine. As soon as QuickTime uses a codec which reports latency, what we do internally is start your video track earlier, and the movie starts when your hardware pipeline is totally full, so you have a chance to decompress the first frame at the right time. So that's the new latency support in QuickTime 5.0.
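On the codec side, a sketch of reporting that pipeline depth might look like the following; the TimeRecord-based signature is assumed from the ImageCodec component interface, so verify it against ImageCodec.h, and the globals type here is just a placeholder.

    /* Sketch: respond to the GetDecompressLatency selector by reporting
       the depth of your hardware pipeline.  "MyGlobals" is hypothetical. */
    pascal ComponentResult MyCodec_GetDecompressLatency(MyGlobals storage,
                                                        TimeRecord *latency)
    {
        /* Say the hardware pipeline is 4 frames deep at NTSC rates:
           4 * 1001 units on a 30000-per-second timescale. */
        latency->value.hi = 0;
        latency->value.lo = 4 * 1001;
        latency->scale    = 30000;
        latency->base     = NULL;   /* no particular time base */
        return noErr;
    }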
We also extended this latency mechanism to audio tracks. The concept is identical: QuickTime talks to audio devices through the sound output component, so we've added a new selector called siOutputLatency, used with SoundComponentGetInfo. Same thing there: if your audio device has some internal pipeline, you just need to report that to QuickTime, and we will offset the audio track as well, so QuickTime can deal with different latencies between the audio track and the video track. The only assumption that we still have is that all the video codecs in a given video track need to report the same latency.
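A sketch of the sound-output-component side follows; the dispatch signature and the payload type for the selector are assumptions here (check Sound.h for the real definitions), and "MyGlobals" is again a placeholder.

    /* Sketch: report pipeline latency when asked via siOutputLatency. */
    static ComponentResult MySoundOutput_GetInfo(MyGlobals globals,
                                                 SoundSource source,
                                                 OSType selector,
                                                 void *infoPtr)
    {
        switch (selector) {
        case siOutputLatency:
            /* Assumed payload: e.g. our DSP buffers hold 4096 samples
               before they reach the output. */
            *(long *)infoPtr = 4096;
            return noErr;
        default:
            return siUnknownInfoType;
        }
    }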
So, another assumption that QuickTime made before 5: when you have a system with multiple codecs able to decompress the same kind of data, we needed to choose one, right? And the one we chose, of course, was the fastest one, because every codec is supposed to report its speed. Internally, if you had two DV codecs installed on your system, we would get the speed for each of these codecs and use the one claiming to be the fastest. Of course, this scheme assumes that codecs don't lie. Well, they do. It's too bad, but they didn't have much other option. Basically, what happens is that you pay for a piece of hardware, you stick it in your system, and it wants to be the one that QuickTime is going to use by default, and the only way to make that happen was to claim to be faster than, for instance, the software implementation. And it was getting worse, because when you start having two pieces of hardware in the same machine, everybody was trying to look at the other guy, figure out their speed, and claim to be faster than that. Of course that's not really a very good solution, and internally we used to call it the codec speed war, with everyone trying to claim they were faster than everyone else.
So what we did in 5 was to finally let the application decide which codec it wants to use, and when; so we've ended this codec war, at least we hope so. The way an application can specify its preferred codec is to use this new API called MediaSetPreferredCodec, where you provide QuickTime a list of codecs you prefer to use. Internally, what QuickTime is going to do is still sort all of them by speed, and at the end of the sort we put the codecs you gave us in that list at the top of the list.
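As a rough illustration only: the call name and signature below are reconstructed from the session, not copied from shipping headers, and the 'MYHW' manufacturer code is hypothetical, so verify the real declaration in Movies.h before using it.

    /* Hypothetical sketch: put our hardware DV decompressor at the
       top of QuickTime's preference list for a movie's video media. */
    static OSErr PreferMyHardwareDVCodec(Movie movie)
    {
        Media media = GetTrackMedia(GetMovieIndTrackType(movie, 1,
                          VideoMediaType, movieTrackMediaType));

        /* Locate our specific DV decompressor by manufacturer code. */
        ComponentDescription want = { decompressorComponentType,
                                      'dvc ', 'MYHW', 0, 0 };
        Component c = FindNextComponent(NULL, &want);
        if (c == NULL) return paramErr;

        CodecComponent list[1];
        list[0] = (CodecComponent)c;

        /* Assumed signature: media, count, ordered preference list. */
        return MediaSetPreferredCodec(media, 1, list);
    }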
So it's definitely a much better solution than the speed information, which was the only information we had before in QuickTime, and it makes application setup much easier when an application is deciding, for a user's project, which piece of hardware or software to use. You may have your system set up, for instance, doing FireWire DV input, and have another piece of hardware which is capable of sending DV data to an analog output, and you really want to let the user and the application be able to select which one to use at any point. So, just one more thing about hardware codecs.
If your hardware has implemented a custom compression type, what happens is that when you create your content with this codec in your movie, if your end user has the hardware installed in their system, you're fine, you can play back this movie. If you try to have this content play on a system which doesn't have your hardware, then you need to provide a software implementation of your hardware codec. The problem is that the user has no idea what to look for when they run into a movie like that: they don't know where the content was created, and they don't know which company makes which codec, so it's quite a bad user experience. So the solution is to use a new mechanism in QuickTime 5 that does automatic component download. All you have to do, if you have custom hardware today, is register your software implementation with Apple, and we will deliver it directly from us as soon as your end user runs into it. So that's pretty much it about hardware and QuickTime 5.0; now let's talk about MP and QuickTime on multiprocessor Macs.
Thank you, Jean-Michel. My name is Sam Bushell, and I'd like to take a little time to talk to you about QuickTime on multiprocessor Macintoshes. Now, multiprocessor Macintoshes are great, right? And they're great because they have more processors: if you have more processors than the other guy, then you win, right? Well, maybe. In practice, people want to buy a machine with two processors because they'd like everything to run twice as fast. It turns out, if you're an engineer, you probably have some idea of why it doesn't quite work that way, and so as engineers we have to do a little bit of work to make this hope satisfiable. Now, sometimes the user is running more than one application at the same time, and maybe several of those applications are doing compute-bound tasks. In that case, on Mac OS X we automatically get symmetric multiprocessing, which will schedule and run all of the applications that have work to do, so that side of the problem is pretty much sorted out for us on X. But sometimes only one application is doing any work; in that case we have a little bit more work to do, to divide that work up across the available processors. Now, in the QuickTime case there are a bunch of different bits of work to be done on the system, but the majority of them tend to be done by codecs, and so the work we've done with QuickTime to support multiprocessor computers and multiprocessing is primarily focused on making the codecs run faster. So it's a team effort.
In QuickTime, generally, if you have an application that uses QuickTime and there are some codecs involved, the application calls QuickTime, QuickTime calls some component, the codec runs for a while doing some work, and when it's done it returns to the application. So let's look at how this team effort might be made faster to take advantage of a dual-processor computer. If you're lucky, you might be able to take the work that the codec is doing and divide it evenly across two processors. If you're not so lucky, that might not be applicable, but it might be possible to run that decompression, that codec work, asynchronously, and let the application do some other work at the same time, maybe some other decompression for the next frame. In more detail, this is the first approach: if you can split up your work across a bunch of multiprocessor tasks, then they can all be run at the same time, and when they're all done, they all return. So this is still a synchronous API: the application asks you to do the work, and when you're done you return, and you've taken up all of the available CPUs in the meantime. This is the best situation, because the applications don't need to be revised, they can keep using those synchronous APIs, and high performance gains are possible, as we've demonstrated with the DV codec. The trouble is, it's harder, and it's not something you can easily do with all algorithms: sometimes step one has to be done before step two, and step two before step three, so you can't do steps one, two, and three all at the same time. In those situations you need to take a step back and reevaluate how you'd like to go, and maybe it's okay to run the entire job that you want to do in a single MP task, asynchronously from what the rest of the application is doing. This is a smaller change to the codec, and in fact it can be a really small change if QuickTime can help you out. The trouble is, it doesn't actually make that task any faster; it takes just as long. Maybe if the application has something else to do, then it's a win overall, but the corollary is that in order to take advantage of this situation, the applications do need to be revised, and maybe restructured, to use asynchronous APIs. So in QuickTime 5 we've used both approaches to taking advantage of multiprocessor computers.
We have revised the DV compressor and decompressor, as you've probably heard a number of times by now, to split up their work across the available processors in the computer, and we've revised some of the other compressors and decompressors in QuickTime to be able to run asynchronously. And I have a little demonstration of this that I'd like to show you, which is over here on demo four. Now, this is an application that I wrote for debugging and analysis purposes, but I'd like to use it here as a technology demonstration, to give you an idea of something that might be an applicable use of both of these kinds of technologies, both method A and method B, for splitting up work on a dual-processor computer. What we have back here is a dual-processor 500 MHz G4, and my application; you probably can't see all of the text on the screen, but it doesn't matter, it's not very interesting apart from the fact that it has a bunch of different bit rates listed. And we have a DV camera here, and it's pointed at you, and this is what you look like. I can wave this around so you can see that it's live; you can wave hello, see, look, there are people waving. So what we're doing is taking input from the camera coming in over FireWire; we're decompressing frames to a 2vuy buffer; we're then scaling those 2vuy frames to three different sizes at four different frame rates; and we're recompressing those using H.263 at a bunch of different bit rates, simultaneously, on the same machine. So different pieces of this work are being accelerated in different ways: the DV decompressor is automatically splitting up its work into what can be done by several processors, and the H.263 compression is being done asynchronously, so although each compression activity can only be running on one processor at a time, we might be compressing several different frames at these different sizes at once. The bit-rate goals it tries to meet correspond more or less to a bunch of different modem rates: you might have the first, the 12 kilobit-per-second video, for a 28.8K modem; 24 kilobit-per-second video for a 56K modem; something toward 80 kilobits per second for dual ISDN or some hundred-odd-kilobit connection; and something higher as well, at a full 30 frames per second. If you're close enough, you can probably read that we're not currently achieving 25 frames per second, which makes this look like a foolish demo, except that I can point out that the reason it's lagging behind is that it's showing you the answers: if I turn off the preview, so that you don't get to see yourself on the screen, then we do reach 30 frames per second pretty efficiently. So there's actually quite a bit of CPU left available on the machine. What this demonstrates is that you could take QuickTime 5.0.1 and a dual-processor 500 MHz G4, prepare multiple compressed video streams, and probably broadcast them to streaming reflectors, which would go out to a wide range of people, all on the one machine. That would be a useful product. And that's my demo.
So, among the developers here, some of you are probably writing applications. For those of you who are, the thing you can do on multiprocessor Macintoshes is call QuickTime using the asynchronous compression APIs instead of the synchronous ones. If you're a codec author, then you might want to accelerate your codec to take advantage of multiprocessor machines, either using approach A or approach B; it's up to you. Let's have a look at these asynchronous compression APIs first.
QuickTime has always had an asynchronous mode for the Image Compression Manager's compression API. There are lots of parameters that describe what you want to compress, and how you want to compress it, and so forth, and the very last parameter is a completion routine. You can pass nil, in which case the call won't return until it's done, or you can pass a completion proc and refcon, in which case the call is allowed to return immediately, and when the compression activity is actually done it will call your callback routine, and you'll know. This is safe to use even if a codec doesn't actually support asynchronous compression; in that case it will return after calling your callback, after everything is done.
here because a lot of people who want to
write compression applications want to
use a higher-level service called the
standard compression component this is
the component that provides a nice
friendly dialogue box that you've
probably seen a hundred times and that
component only had a synchronous API so
in QuickTime 5 we've added an api
analogous to the asynchronous
compression in the ICM i've said the
word compression a lot of times in that
slide so if you want to read more about
that i recommend looking at the booklet
down 5 documentation there's lots of
good stuff there
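A sketch of the standard-compression analogue follows; the call names (SCCompressSequenceFrameAsync, SCAsyncIdle) are the ones the QuickTime 5 documentation describes, so confirm the exact signatures in your headers.

    /* Sketch: same shape as SCCompressSequenceFrame, plus a trailing
       completion record.  The out-parameters must persist until the
       completion fires. */
    static OSErr CompressFrameViaStdCompression(ComponentInstance sc,
                                                PixMapHandle src,
                                                const Rect *srcRect,
                                                ICMCompletionProcRecordPtr done)
    {
        static Handle data;         /* out: compressed data handle */
        static long   dataSize;
        static short  notSyncFlag;

        OSErr err = SCCompressSequenceFrameAsync(sc, src, srcRect,
                                                 &data, &dataSize,
                                                 &notSyncFlag, done);
        /* While waiting, give the component time to work if it needs it:
           SCAsyncIdle(sc); */
        return err;
    }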
If you're a codec author, then as I said, you have two choices. You can accelerate your codec by calling the multiprocessor APIs yourself: creating some tasks, and when we call you to do some work, splitting that work up across your tasks, waiting until they're all done, and then returning, or calling the completion routines. That's great. If you do that, you're pretty much on your own, although there are some pitfalls I'm going to warn you about in a second that you should be careful to avoid.
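A toy sketch of that first approach, using the Multiprocessing API to split one band's rows across two tasks, is below. A real codec would create its worker tasks once and feed them work through queues rather than creating tasks per band; the HalfJob structure here is just illustrative.

    #include <Multiprocessing.h>

    typedef struct {
        UInt8 *baseAddr;
        long   rowBytes;
        long   firstRow, lastRow;
    } HalfJob;                        /* hypothetical work descriptor */

    static MPQueueID gDoneQueue;

    static OSStatus HalfWorker(void *param)
    {
        HalfJob *job = (HalfJob *)param;
        /* ... decode rows job->firstRow .. job->lastRow of the band ... */
        MPNotifyQueue(gDoneQueue, NULL, NULL, NULL);   /* tell the caller */
        return noErr;
    }

    /* Split one DrawBand's rows across two MP tasks and block until both
       halves are finished, then return synchronously as before. */
    static void DrawBandOnTwoCPUs(HalfJob *top, HalfJob *bottom)
    {
        MPTaskID t1, t2;
        void *p1, *p2, *p3;

        if (gDoneQueue == NULL) MPCreateQueue(&gDoneQueue);
        MPCreateTask(HalfWorker, top,    65536, NULL, NULL, NULL, 0, &t1);
        MPCreateTask(HalfWorker, bottom, 65536, NULL, NULL, NULL, 0, &t2);
        MPWaitOnQueue(gDoneQueue, &p1, &p2, &p3, kDurationForever);
        MPWaitOnQueue(gDoneQueue, &p1, &p2, &p3, kDurationForever);
    }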
If you take approach B, running the entire activity asynchronously and then calling the completion routine when you're done, running the activity on an MP task, that is, and if you're writing a decompressor and it's based on the base image decompressor, then QuickTime can help. All you need to do is write a little bit of code that promises that your DrawBand call is MP-safe. Since DrawBand calls for video codecs generally already have to be interrupt-safe, because they might be called at deferred-task time, this isn't a big leap: you really shouldn't be calling any other APIs besides functions defined in your own sources in your DrawBand call, and generally those things, as long as they're PowerPC native, are also safe to run in MP tasks. So if you write a little bit of code that promises everything's cool there, then we'll run you in MP tasks, if applicable, if we're on a multiprocessor machine and trying to do an asynchronous decompression.
So there are three pitfalls I'd like to point out, so you have them in mind and can avoid them moving forward. I've said that you shouldn't call anything in DrawBand. Some of you who have read the documentation on Apple's multiprocessing APIs will say: no, wait, I've read about this, you're allowed to call these memory allocation routines in an MP task, and on Mac OS 9.1 and later you're allowed to call all sorts of other things, like the file system. Well, you're not allowed to do that from a codec's DrawBand routine. The reason is that the remote procedure call which implements some of these allocations and other calls can only be serviced when someone in the Blue task calls WaitNextEvent or one of its friends. If that doesn't happen, you can deadlock, and sometimes codecs are called in situations where we can't let anyone have a chance to call WaitNextEvent. So if you need to allocate memory, do it beforehand, before doing anything in DrawBand. The second pitfall: avoid using your own calls to the MP APIs entirely in a codec, unless you're running on Mac OS 9.1 or later. The reason for this is that any page faults you hit, if you're unlucky, can only be serviced in one of those WaitNextEvent calls, and as I said, those might not happen. This is fixed quite nicely in Mac OS 9.1, and it's not a problem at all on Mac OS X. Finally, if you're writing a decompressor and you divide your work up into MP tasks, you should be careful that you don't have one of them write to one part of the screen while the other writes to another part of the screen, because you could see unpleasant tearing artifacts, which can be annoying. Well, if you do that, you'll see. That's all for me; I'd like to hand over now to Tom Dowdy. Thank you.
Thank you. So, I'm yet another member of the professional video team, and what I'm going to talk to you about today is some upcoming changes we've got for the QuickTime effects architecture. Now, up to this point we've been talking about things that are available in QuickTime 5 that you can take advantage of today; we're going to be showing off some stuff that's coming in future versions of QuickTime. The reason we're going to talk about it today is that some of these features you'll be able to get ready for in advance of the code actually being available. This is of interest to you either if you're a developer that takes advantage of the built-in QuickTime effects, or if you're a developer who creates QuickTime effects yourself. We're going to be talking about two new optional specifications that effects can provide, and applications can take advantage of, to allow the grouping of effects into classifications or groupings that make sense either for the user or for you in the application. We're also going to talk about a feature called effect presets, which allows an effect that has a large, complicated set of parameters to provide the user with a very simple user interface for getting at them. And that's enough of that; let's do a demo right away.
Okay, demo four, please. So, there's no point in showing you something new without showing you what was there before. This is the existing effects dialog; it's been provided since QuickTime 3. It's a standard way for applications to get to the parameters and features of the QuickTime effects. As you can see, effects can have a large number of various parameters, and it's nice to have a standard way to provide a user interface for this. But another thing you might notice is that we do tend to have an awful lot of effects here, and a big long scrolling list is no fun for anybody, particularly if you can't make it any bigger, because you can't resize the dialog or anything else. Well, let's get rid of that and solve the problem. One of the first things you'll notice is that the list on the left-hand side here has been grouped into classes of effects. Another thing you'll notice is that we've got plenty of screen real estate now, so let's just make that dialog nice and big and widen it out; plenty of room now. Yay, we're in the nineties. So what the user can do is choose the particular kind of effect they might be wanting; for example, filtering: they can see the effects that are classified as filters. I'll go ahead and pop these all open and populate the whole dialog. This is information that's provided by the effects, and it's available both to this particular dialog and also to your application. One of the other things I should point out, as we're going along here, is that you might see some other features that aren't exactly in the list that I originally had, for example the resizing and the regrowing of the split bars; well, you can pay attention to those and decide whether or not we're actually going to do anything with them. We talked about presets, so here's an example of an effect that's using the new preset features. This is a slide effect, and it provides two presets, for a slide down from the top or a slide up from the bottom. The user can simply choose the preset they want; they've got a picture that more or less shows them what's going to happen, and a name that probably helps them as well. What's actually behind each of the presets is the full list of parameters that are available to the effect. The user can go see that as well, and see that this particular slide is going with an angle from 0 to 0, which is a slide from the top. Users can still go to this optional parameter section here, with the custom settings, and make a change, for example set the starting angle here, and then set the ending angle to something nice and big, and now we've got a slide that's kind of spiraling around.
Another example of an effect that uses the presets is a new channel composite effect; who knows whether we'll ship this or not, I'm just showing it. This is an effect that combines channels from multiple sources to produce a new source. This is often done when you have mattes that have been pulled from video, particularly in the professional video or film market, and you want to combine them to obtain an alpha track that you're then going to use to composite with other tracks. When you do this, the mattes are sometimes pulled positive or pulled negative; the actual alpha value may already be in the alpha channel, or it might be down in the RGB values. So we provide mechanisms for selecting these basic options that users would need, but hidden behind the presets are all the actual parameters that are used. So, for example, you can pull the values from different channels and different things and get very strange combinations. Once again, there's no reason for the effect to have been written with only the limited set of presets that we see here, but those presets are the most commonly used ones, so that users who don't need the crazier or optional features of the effect don't need to be concerned with them. And I think that's all I have to show here; let's go back to the slides.
So, we talked about the major and minor classes of effects, or groupings. What are they? Well, the major class is used by applications that need to filter the lists of effects that are presented to the user; in other words, to limit those effects to only the ones that make sense for a particular application's market segment. The minor class is used for grouping effects together: in the demo you saw, where we had the effects grouped into twist-down triangles, those were the groupings, and we're using minor classes to define them. As I said, the effect major class is used for filtering. For example, if you're an application that provides a slideshow-type service to the user, you probably want to show the user only transitions that make sense for going from one slide to the other. Now, up until this point there's been no way to tell the difference between a two-source effect that performs a transition operation and a two-source effect that performs a compositing operation, like a chroma key; you just had to either limit the user to two-source effects, or show them all the effects and hope the user figured it out. With the major class, effects have been divided, or classified, as to what their function is in this sense, and you can limit the scope of what the user has to choose between. Like most of the options that are in effects, the major class is defined by an atom that's placed in the effect description container; you see here listed the class type and ID for this,
and the values that we provide. Now, in the case of the major class, because applications are essentially going to be hard-coding themselves to certain classifications, this list is rigid: it has to be defined by Apple and agreed upon both by the components being implemented and by the applications that want to use it. So we've defined this list here, which you can make use of. Now, if you and your effect don't define what your major class is, you're going to be grouped into miscellaneous, which may or may not be what you want. So if you're an effect developer creating effects today, you're going to want to add this atom, and if you're an application developer that needs to limit the effects the user sees, to make the user's experience easier, you can take advantage of this to do so.
The effect minor class is used for the UI grouping, like I said; to reiterate, that's what made the twist-down triangles in the dialog you saw. Once again, no surprise, it's an atom placed in the effect description that describes the minor class; you see here the atom types and IDs that are used for it, and the contents is one of a number of values that Apple has defined.
We welcome input from third-party developers, and we've already solicited some, as to what types of groupings you would like to see. For any of the standard ones you see here that Apple is providing, the dialog will automatically interpret them and provide the strings to display to the user within the standardized dialog. So here are some, and here are some more, and here are some more. And once again, if you don't tell us what kind of minor class you are, then you're miscellaneous.
If you aren't happy with these: unlike the major class, the minor classes can be extended by the effect. You can supply a custom string that corresponds to your minor class's name; the types and IDs of that atom are specified here, and that string will then be used in place of the OS type for your minor class, which you probably wouldn't want to see in a scrolling list. If, however, you specify a minor class that's one of the standardized ones, we'll be supplying the string for that, so it's not something you need to worry about.
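Putting the class pieces together, a sketch of tagging an effect might look like the following. The mechanism (QTInsertChild into an atom container) is standard QuickTime, but the atom type constants and class values shown on the slides aren't in the transcript, so every four-character code below is a placeholder, not the real value.

    #define kEffectMajorClassAtomType  'mjcl'   /* placeholder type */
    #define kEffectMinorClassAtomType  'mncl'   /* placeholder type */

    static OSErr TagEffectClasses(QTAtomContainer effectDescription)
    {
        OSType majorClass = 'tran';   /* placeholder: "transition" class */
        OSType minorClass = 'wipe';   /* placeholder: "wipes" grouping   */
        OSErr err;

        err = QTInsertChild(effectDescription, kParentAtomIsContainer,
                            kEffectMajorClassAtomType, 1, 0,
                            sizeof(majorClass), &majorClass, NULL);
        if (err == noErr)
            err = QTInsertChild(effectDescription, kParentAtomIsContainer,
                                kEffectMinorClassAtomType, 1, 0,
                                sizeof(minorClass), &minorClass, NULL);
        return err;
    }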
And finally, effect presets. Effect presets are atoms, once again, in the parameter description container that is defined by the effect. If you're an effect that wants to take advantage of these things, you can place these preset atoms in your effect parameter description, and the standardized UI that you just saw will make use of them, as will any applications that have been revised to take advantage of them. One of the nice things about everything we've talked about so far is that you can place these atoms in your effect descriptions today: since no one's looking at them, they're ignored, and when software has been revved to take advantage of them, then they can appear. In the case of the preset, there are three things you need to place within the atom. You place the name; obviously that's the name displayed inside the dialog you saw in my demonstration. There's a preview PICT that you need to place there; that's the picture displayed when the user has selected your particular preset. The important thing about the picture is that it is a picture, and it does have to be at least 86 by 64 pixels in size: if you make it smaller than that, we are going to scale it, but it can look pretty chunky; it can be bigger, in which case it's going to be scaled down. And finally, you have to have contained within the effect preset all of the parameters that are necessary to cause that preset to become active. If you have three parameters, you need to supply the three parameter values that go along with that preset; if you have a hundred and six parameters, you need to supply the 106 parameter values that go along with that particular preset. You can't just leave out parameters that, quote-unquote, don't matter for this particular preset, because there does need to be a correspondence between the values that are present when the user has selected the preset and what they see when they go into the custom effect portion of the dialog; they'd like to see those values populated.
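A sketch of building one such preset follows: an enclosing preset atom holding a name, a preview PICT of at least 86 by 64 pixels, and a complete set of parameter values. As with the class example, all of the atom type codes here are placeholders for the real ones, and the 'angs'/'ange' parameter atoms stand in for whatever parameters your effect actually defines.

    #define kEffectPresetAtomType      'prst'   /* placeholder */
    #define kEffectPresetNameAtomType  'pnam'   /* placeholder */
    #define kEffectPresetPictAtomType  'ppct'   /* placeholder */

    static OSErr AddSlideFromTopPreset(QTAtomContainer params, PicHandle preview)
    {
        QTAtom presetAtom;
        Str255 name = "\pSlide From Top";
        long   startAngle = 0, endAngle = 0;   /* every parameter gets a value */
        OSErr  err;

        err = QTInsertChild(params, kParentAtomIsContainer,
                            kEffectPresetAtomType, 1, 0, 0, NULL, &presetAtom);
        if (err != noErr) return err;

        QTInsertChild(params, presetAtom, kEffectPresetNameAtomType, 1, 0,
                      name[0] + 1, name, NULL);

        HLock((Handle)preview);      /* preview must be at least 86 x 64 */
        QTInsertChild(params, presetAtom, kEffectPresetPictAtomType, 1, 0,
                      GetHandleSize((Handle)preview), *preview, NULL);
        HUnlock((Handle)preview);

        QTInsertChild(params, presetAtom, 'angs', 1, 0,
                      sizeof(startAngle), &startAngle, NULL);
        QTInsertChild(params, presetAtom, 'ange', 1, 0,
                      sizeof(endAngle), &endAngle, NULL);
        return noErr;
    }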
And with that, I'm going to turn it back over to Jean-Michel, who's going to talk a little bit about hardware and OS X.
Okay, I'm back again. Apparently somebody figured that with my first few slides, without any graphics and without any demo, I couldn't put the whole room to sleep, so they're giving me another chance this time. We're going to talk about hardware again, but this time related to Mac OS X, and what I'm going to try to do is explain, if you are coming from the Mac OS 9 world only, and you have been doing stuff that was totally legitimate to do there, or at least nobody was preventing you from doing it, how you should move your hardware component to be able to run on Mac OS X. So first, let's get the bad news out of the way. You no longer have direct access from your component to your own hardware; that's definitely something you could do on 9, and you can no longer do it. As soon as you run on OS X you need a driver layer abstraction. Before, and when I say before I mostly mean on 9, you could make all these accesses from your own component, because 9 basically had a flat address space where everybody could access anything, anywhere in the entire system. Having a driver is no longer optional on X. And the last bit of bad news is that bringing up your own hardware and debugging it is a little bit more complicated, because basically you're going to have to live with two different pieces: one is in the kernel space, which is your driver, and you need one kind of debugging tool for that; the other is the component itself, which lives in the user space, and you have a different kind of tool to debug that. So bringing up your stuff is going to be a little bit more complicated than before.
So let's talk about the good news now. Well, if you already have an existing driver layer on 9, or at least a library to manage your hardware, I'd say you're pretty much all set: all you're going to have to do is move this piece to X, and your component should be up and running. If you don't have one today, even on 9, we do recommend that you go through the exercise on 9 before you move to X, because not only will you be able to make sure that you've isolated all your hardware access, but on top of that, moving forward you'll be able to maintain the same version for 9 and X, which is something your customers will appreciate. And the last thing: when you're done, you'll probably never want to come back to 9. I mean, the memory protection scheme built into X is definitely going to help you figure out all the issues you have been fighting for all these years.
So what's the Mac OS X driver model? Well, as I said before, the driver belongs to the kernel space, and unfortunately components live in the user space. If your hardware is based on an existing family, like USB or FireWire, what you should do is provide a driver using that family, so the amount of stuff you have to write in your driver is much smaller than bringing up a full driver implementation. If you deal with these kinds of devices, you should take advantage of what OS X provides in terms of families; they help you with a bunch of stuff. If you have a very high-end system that OS X has not been able to model, because we don't really know what you are doing there and it's way too complicated to put that on the other side, then you don't have that many choices: you have to implement an I/O Kit library or a CFPlugIn, and the driver API you come up with is your full responsibility. Nobody is going to decide what kind of API goes through that driver; it's up to you. And that's why I was talking about 9: if you can validate all this stuff running on 9, your life will be much easier when you move to X.
So, the big no-nos on Mac OS X. As I said, you cannot read or write your registers anymore, or even physical PCI memory, from within your component: it doesn't belong to your address space, and the OS will not let you do it. The other point, which is less obvious when you're trying to bring up your hardware on X, is that QuickTime has some completion procs for some components, mostly codecs, video digitizers, and all this stuff, and these completion procs belong to the user space; they are within QuickTime, which is in the user space as well. If you were calling these completions from your own interrupt handler on 9, everything was fine; you can no longer do that on X. You're going to have to come up with some new mechanism, which we'll talk about in a moment. And the last thing is that you cannot hold the CPU anymore. That used to be something you could do from a component, on OS 9 at least, but this OS is a preemptive OS, so its threads will preempt you: not only may you end up calling another component, but your own component may be called from another thread. So if you expect to hold the CPU to execute a couple of actions on your hardware, that is not possible from within your component anymore.
you are wrong your driver I said and you
are going to have to create a thread at
least in your component to call only
squeak tempo completion product in order
to manage to live in the same user space
how you do that basically there is some
services from the from the US which
allows you to send messages via mike
pirat
so your kernel I mean a thread will be
able to wake up your usual userspace
thread and you will be able to call the
QuickTime Pro
QuickTime completion products safely in
there so
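A minimal sketch of that pattern follows: a user-space thread, owned by the codec component, that sleeps on a Mach port until the kernel driver signals "frame done", then calls QuickTime's completion proc from user space. How gFramePort gets wired to your driver's notification, and the single-outstanding-frame bookkeeping, are assumptions kept out of the sketch; error handling is omitted.

    #include <pthread.h>
    #include <mach/mach.h>
    /* plus the QuickTime headers, for ICMDecompressComplete */

    static mach_port_t              gFramePort;        /* fed by your driver */
    static ImageSequence            gCurrentSequence;
    static ICMCompletionProcRecord  gCurrentCompletion;

    static void *FrameDoneThread(void *arg)
    {
        struct { mach_msg_header_t head; mach_msg_trailer_t trailer; } msg;

        for (;;) {
            /* Block until the kernel-side driver sends us a message. */
            mach_msg(&msg.head, MACH_RCV_MSG, 0, sizeof(msg),
                     gFramePort, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

            /* We're now on our own user-space thread, so it's safe to
               tell QuickTime that the pending decompress finished. */
            ICMDecompressComplete(gCurrentSequence, noErr,
                                  codecCompletionSource | codecCompletionDest,
                                  &gCurrentCompletion);
        }
        return NULL;
    }

    static void StartFrameDoneThread(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, FrameDoneThread, NULL);
    }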
A couple of things to know about this stuff. Crossing the kernel boundary is not totally free; it doesn't take that long, but it takes some time, so when you're bringing up your hardware, make sure that you minimize the number of calls you make to your own driver, especially during the initialization process. Don't try to come up with an API where you're setting one bit at a time, or it's going to take forever to get your hardware ready. The last thing: threads are very cool once you understand what you are doing, but you should never forget that as soon as you add a thread on your component's side, you're going to add some load on the other side. So before going ahead and creating a new thread, think about trying to share threads between all the different pieces of component that you expose to QuickTime.
Well, I guess that's pretty much it for our QuickTime professional video session. We have a couple of other sessions running today and tomorrow, and the one you should definitely go to is the QuickTime feedback forum, which is usually pretty packed, so if you have any questions you want us to answer, you should definitely be there, and be there early, because you might not be able to get into the room. If you have any questions about the stuff we've been talking about in this professional video session, you should contact Jeff Lowe, at apple.com, and he will be able to get you in touch with us. If you need more detailed information about all the stuff we've been talking about, you should definitely go to developer.apple.com/quicktime; we have full documentation about what's new in QuickTime 5, and you will find in there all the stuff we were specifically talking about. There is also another place on this website, called the Ice Floes, where you'll find some explanation of all this rendering, the pixel formats, and the effects architecture that Tom has been talking about. And if you didn't quite understand all the stuff that Tim was talking about, about YUV space, gamma, r408, and all the issues behind that, we do recommend that you read the book from Charles Poynton called A Technical Introduction to Digital Video; there is a lot of information in there to understand why it's so painful to go through this whole rendering process. And the last thing is about the QuickTime Live event this October in Beverly Hills, California. If you do anything with QuickTime, you should definitely go: it's a chance for you to meet other third-party developers, it's a chance to meet all the QuickTime engineering crew, who usually go there, and it's a chance to hear about what's new in QuickTime. And that's pretty much it. Thank you very much.