WWDC2003 Session 005
Transcript
Kind: captions
Language: en
Good afternoon. Thanks for joining us this afternoon. I'm Brett Halle, the director of Pro Applications engineering. Today we're going to spend a few minutes talking about how to interface professional video hardware to Final Cut Pro 4. One of the big challenges in the video production space is how to get all that data into the computer in an effective way—the video, the audio, and all the various pieces—and then, when you're dealing with that much data, be it standard definition or high definition, how do you manipulate it, manage it, display it, and do things like effects in a very efficient, very quick way. To talk about how that is done and how you can produce products that work with Final Cut Pro 4, I'd like to invite Ken Carson, the Final Cut Pro engineering lead.
Good afternoon. We just released Final Cut 4 a couple of weeks ago; we introduced it at NAB this year, and with that we've really shown our commitment to the high-end space for video production with a whole suite of tools. Today we're here talking about how we can interface hardware with our application and what you can do to help us produce even better tools for the industry. We'll be covering a number of topics here. There's a good number of you in the audience who know more about this than I do at this point, having developed a number of products for us, and there's a bunch of you who are probably new to this, so I'll be trying to split the difference between those of you who know it all and those of you who don't know much about it. We'll be describing the two kinds of video hardware that work with Final Cut: those that just do input and output, as opposed to those that do a lot of effects processing on board, beyond what the host computer could do. We'll be talking about RT Extreme, which is our new software model for effects processing, and we'll be talking about the AV playback architecture, which is all built around QuickTime. As an extension to that, we use an enabler file—the RT enabler—to let Final Cut know about the hardware's capabilities; we'll cover that and talk about the opportunities for developers around our product. Following that we'll have Giovanni come up and talk about the uncompressed codecs that we've developed and released with this version, and the Pro I/O protocol and drivers that we've developed.
With Final Cut 4—actually all versions of Final Cut, really—it's built on QuickTime, and that provides a lot of flexibility in how it works and what kind of hardware can be attached to it, as opposed to everything coming from us and you getting only what we provide. Through QuickTime we can handle a wide range of resolutions—for example 320x240 JPEG all the way up to HD online—using hardware solutions or software versions. So if you buy a computer and Final Cut Pro, that alone is enough to let you work with the software versions: for example OfflineRT, DV, and FireWire out for DV. Adding on hardware gives you the ability to work in SD and HD, and that can be either plain I/O or accelerated versions.

Here's a little history on how RT effects have come about within Final Cut. Final Cut 1 came out in the spring of '99, and the ability to do real-time effects was a really obvious missing piece. Version 2 took a little while to get there; that was the first version of Final Cut we released with real-time effects based on QuickTime—that model used hardware acceleration for the effects. Later that year we released Final Cut 3, which introduced the G4 real-time effects. For that we took the model that we had developed for hardware with QuickTime and applied a software engine to it, in turn replacing the hardware that would have done the decompression and the effect with software, so that you don't need to add any additional hardware just to do a simple effect. And this spring we introduced Final Cut 4 with the RT Extreme model, which really enhances what we can do in software: we are now handling full-frame images, different resolutions, a lot more effects, and we've expanded it to be able to use the video out with our software effects—in version 3 there was no monitoring on external devices—and we've also extended the number of effects that are available to the hardware developers as well.
To go back through the two different kinds of video hardware—this is a really big distinction—you can build a board that has a lot of processing power built into it and perform the effects in the hardware, and if you go with a system like that you can provide a lot of capabilities that just aren't possible on the base CPU. On the other hand, you can also build a system that just provides the video I/O: you take just the computer and the software. The computers come with FireWire, so if you're working with DV you're all set, but if you're trying to work in standard definition or HD video there is no plug on the back of a Macintosh that lets you connect directly to video monitors or tape decks, so you have to have a piece of hardware in between. That hardware can be as simple as just doing the conversion from the uncompressed video to the analog and digital video world. Now I'm going to go through some of what's new in our RT effects engine, RT Extreme.
One of the new things with the effects is that all of the FXScripts are now capable of being real time, with the exception of those that read particular frames back, and that's a very small set of them. FXScript is the scripting language that all of our effects have been written in; if you're familiar with the program, the FXBuilder tool is built into the application and you can see how all of these scripts are written. Another aspect of the new effects in version 4 is the variable-speed effect, which lets you do keyframed speed and variable-rate speed in real time. We've taken advantage of OpenGL for a number of our effects-processing steps; for those of you who have been catching some of the OpenGL sessions, you can see there's a lot of capability in the GPU, and we're taking advantage of that now. Beyond version 3—where we had just OfflineRT and DV as the file formats supported for real time—in version 4 we've added the uncompressed formats as well. At this point we only enable those when you connect a hardware board, since there is no I/O directly available for the uncompressed formats otherwise. Internally, the video processing we do is generally YUV based; in this version we've optimized it so that we use a combination of 4:2:2 processing and 4:4:4:4 processing, based on the effects being performed, to get you the best performance. Another feature that's been added is real-time 3:2 pulldown insertion. With Cinema Tools bundled in the application, and the advent of new cameras like the Panasonic DVX100 that put 24 fps material onto an NTSC DV tape, we can capture that 24 fps data, do true 24 fps editing and playback, and reinsert the 3:2 pulldown so that you can display it out to an NTSC device.
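To make the pulldown idea concrete, here is a minimal sketch of the classic 2:3 cadence: every four 24 fps frames (A, B, C, D) are spread across ten 59.94 Hz fields, i.e. five 29.97 fps NTSC frames. This is purely illustrative and is not Final Cut's internal code.

```c
#include <stdio.h>

/* Illustrative 2:3 pulldown cadence: four film frames (A B C D) become
 * ten interlaced fields, i.e. five NTSC frames.  Field counts per film
 * frame follow the common 2-3-2-3 pattern; a sketch only.             */
static const int kFieldsPerFilmFrame[4] = { 2, 3, 2, 3 };

int main(void)
{
    const char *film = "ABCD";
    char fields[10];
    int f = 0;

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < kFieldsPerFilmFrame[i]; j++)
            fields[f++] = film[i];

    /* Pair the fields into NTSC frames: A/A, B/B, B/C, C/D, D/D. */
    for (int v = 0; v < 5; v++)
        printf("NTSC frame %d: fields %c%c\n", v, fields[2 * v], fields[2 * v + 1]);
    return 0;
}
```

Removing pulldown on capture is the reverse of this mapping, which is why 24 fps material can round-trip to and from an NTSC deck.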
Now I'm going to come over and demo a few things and highlight what's unique in this version that the hardware developers are going to need to know how to get to. I've got a sequence here—you can see the various tracks of video and audio—and I'll just play through it once and then describe a few things that are going on. You can see the effects being performed here; none of this is rendered. First off, over here in the RT pop-up we've got a number of settings. This being DV, we've enabled a choice of qualities for playback. If I switch up to high quality, you'll see that more of the things in the timeline will not be real time—if you look at the colors over here, the sections that are red indicate it doesn't think they can be real time. It thought they were until I switched to high quality, because this other clip is now beyond what we can guarantee will be real time in software. But we've also added another feature called Unlimited RT. This removes the time constraints that we've built in for figuring out what's possible on a particular machine. In general we have to be pretty conservative with those estimates because different machines have different kinds of disk systems; if you have short clips it's possible to play through something more complicated, but if you try extending that clip it won't work, so in the normal model we are pretty conservative about it. If I play through this section where we've got multiple clips layered on top, you can see that even at full frame it is real time, and we weren't dropping any frames.
I want to highlight the different render bar colors that we've got up here—a change from version 3. Over here on the left we've got this section with the cross dissolve that's being performed in real time; now that I'm at full resolution, it's coming out in a kind of grayish-green color, because this is full quality and there's no advantage to rendering it. If I switch back to the medium quality mode you'll see it as bright green, which is what we used to use for the real-time effects. Over here, this section—now that I'm in Unlimited RT—shows up as orange, to show you that it's beyond what we think is really going to work correctly, but we'll let you do it. And this section over here is yellow; this is a proxy effect. With a proxy effect, if I'm scrubbing through you can see it has a soft edge on it, but we're not able to perform that part in real time, so we show it in yellow, and as it plays you see it being performed with a hard edge.
One of the things we've got in this version is the ability to work with multiple codecs that handle the same format. Over here in System Settings, under Effects Handling, we've got on the left side a list of all the formats that have enabler entries saying they can be real time in some form, and over on the right a pop-up that lists the different manufacturers who are providing drivers for that format. This is an NTSC DV sequence, so we're using the Final Cut Pro codecs, and I added an enabler that lists a couple of others here. If I choose another codec—or choose None—and come back, we'll see the timeline change from everything being real time to being red and requiring rendering, because nothing is going to perform those effects. This was a problem previously, in particular with DV, where a couple of different vendors had DV drivers: we would disable our DV if we found one of them installed, and they needed external control panels to turn their versions on and off in order to turn ours back on. I'm expecting that with version 4 a lot of people are going to be providing uncompressed solutions that will overlap here, and this lets people choose within the application who's going to be running the effects. One other thing I wanted to highlight as a new feature is floating-point rendering. We don't do floating-point processing in real time, but the option is over here in the Video Processing tab to choose the floating-point rendering engine. If you're working with 10-bit uncompressed video you'll want to do this to preserve the image quality; otherwise you'll be going through 8-bit paths. All right, let's go back to the slides.
One of the important things to remember about RT Extreme and using software effects is that they'll just keep getting better. What we've got here is a dual G4 1.42 GHz machine, but you can imagine that with what's been introduced here at the show with the G5, and with the software improvements in Panther, the capability of the system will just keep improving. On the hardware side, all the things we've been adding for software effects really translate as well. For example, the variable-speed effects are available for hardware, and in addition there's the ability to mix formats in a sequence, which we've added and which can be enabled with the hardware products at this point. Previously, if you had set up a sequence for, say, uncompressed and you wanted to drop a DV clip into it, you would need to render that clip in order to play it back—even if it was possible for you to decompress it in real time, we weren't able to get the data queued up properly without a glitch between the formats. In this version we're able to do that using a new filter that we've developed, which I'll get into later.
On the floating-point rendering: this is 128 bits per pixel, so 32 bits per channel. We're doing this as YUV plus alpha, and it's what we call the r4fl format. This is a variation on the r408 format, which we use internally for our YUV-plus-alpha image processing. r408 is 8-bit, pretty much based on standard 601-style YUV, but it's a fully sampled 4:4:4:4 format, and r4fl, with 32 bits per channel, uses approximately the same layout. Of course, videotape machines at this point don't handle 32 bits per channel—they top out at around 10—so as a more efficient format for storing that kind of data there's the v210 format. v210 stores ten bits per channel, packed three components to a 32-bit word, so a group of six 4:2:2 pixels fits into 16 bytes—something like that.
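As a rough illustration of these packings, here is a small sketch of unpacking one v210 group and widening a 2vuy pixel pair. The v210 word layout shown is the commonly documented one; treat it, and the simple shift used for widening, as assumptions for illustration rather than Apple's optimized conversion code.

```c
#include <stdint.h>

/* Assumed v210 layout: each little-endian 32-bit word carries three
 * 10-bit components, so a group of six 4:2:2 pixels (12 components)
 * occupies 16 bytes.  Component order per group:
 *   Cb0 Y0 Cr0 | Y1 Cb2 Y2 | Cr2 Y3 Cb4 | Y4 Cr4 Y5                   */
static void unpack_v210_group(const uint8_t *src, uint16_t out[12])
{
    for (int w = 0; w < 4; w++) {
        uint32_t word = (uint32_t)src[4 * w]
                      | (uint32_t)src[4 * w + 1] << 8
                      | (uint32_t)src[4 * w + 2] << 16
                      | (uint32_t)src[4 * w + 3] << 24;
        out[3 * w + 0] = word & 0x3FF;
        out[3 * w + 1] = (word >> 10) & 0x3FF;
        out[3 * w + 2] = (word >> 20) & 0x3FF;
    }
}

/* 2vuy stores 8-bit components as Cb Y0 Cr Y1 per pixel pair; a left
 * shift by two is a crude widening toward 10-bit values.              */
static void widen_2vuy_pair(const uint8_t *src, uint16_t out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = (uint16_t)src[i] << 2;
}
```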
Now, we've developed a lot of optimized code for doing the conversions between r4fl, v210, and 2vuy, and we're making that available to developers so that we get consistent results. We thought about keeping it all on our side—having the interface read data in either the 2vuy or v210 format and doing the conversion on our side, where it'd be a lot simpler—but that would mean one more buffer copy. If we give the code to you, you can put it in your components, and the buffer copy on the decompress can do the translation at the same time. One of the other advantages of using the QuickTime format in between is that developers can write drivers for any format they want. You're not restricted to the v210 or 2vuy formats that we've developed for uncompressed; you can come up with a 16-bit-per-channel format if you'd like, or with other, more highly compressed formats as well.
Here's a block diagram that shows how we perform the effects. This is pretty much the QuickTime model of how you do things. On the left-hand side we've got the video path, and in the center is the audio path. The video goes through the QuickTime movie, then the codecs process it and send it out to the hardware, and the hardware interfaces to the external world—your tape machines or monitors. The pieces shown in gold are the parts that you would be writing; the blue parts are part of the system software, and the application, Final Cut, is up at the top. The audio path in the middle is sort of the same. In the standard QuickTime model the audio would be driven directly off the QuickTime movie; within Final Cut it's a somewhat roundabout path, in that the QuickTime movie is being called and is driving it all, but it goes back through Final Cut to provide the data directly to Core Audio, so that we get multi-channel output.
Now, the standard real-time effects that you'll be using—the first three I'll talk about—are based on effects that QuickTime has defined: the composite, alpha gain, and cross dissolve. The composite is selected within the Final Cut interface through the layering of tracks: if you've got more than one layer of video, they get combined through the composite effect. Alpha gain is really the opacity filter: it takes a fully opaque track and applies an opacity to it, so that if you then layer it on top of something else it blends into the background. And the cross dissolve is a transition: it lets you take two clips, put them next to each other, and dissolve between them. Building your effects off of the standard QuickTime effect components makes these a lot simpler to develop, and this is really the core set for being able to do anything in real time.
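Numerically, these three primitives are simple per-pixel mixes. The sketch below shows the idea for a single 8-bit component; the straight-alpha math and value ranges are assumptions for illustration, not QuickTime's exact implementation.

```c
#include <stdint.h>

/* Cross dissolve: mix two sources by transition progress t in [0,1]. */
static uint8_t cross_dissolve(uint8_t a, uint8_t b, float t)
{
    return (uint8_t)((1.0f - t) * a + t * b + 0.5f);
}

/* Alpha gain (opacity): scale a layer's alpha by an opacity in [0,1]. */
static uint8_t alpha_gain(uint8_t alpha, float opacity)
{
    return (uint8_t)(alpha * opacity + 0.5f);
}

/* Composite: blend foreground over background using the (possibly
 * gained) alpha, treated here as straight alpha for simplicity.       */
static uint8_t composite_over(uint8_t fg, uint8_t bg, uint8_t alpha)
{
    return (uint8_t)((fg * alpha + bg * (255 - alpha) + 127) / 255);
}
```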
In addition to those, we've developed a set within Final Cut that provides enhanced capabilities. For example, in version 3 we introduced the color correction filter and provided real-time effects to correspond to it, so we've defined the color math and that's available as well: we built a filter that performs it and an effect script that provides the UI to control it. The speed filter lets you define variable-rate speed and play it back in real time. This can be done either as a single track, where we're just holding frames and playing them out over the new duration, or as a dual-track playback where you blend the results. In the dual-track version we provide you the frame that's a little bit before the point in time we really want to show and the frame that's a little after that point in time, and you can blend the two based on the ratio of how far each is away from the optimal time.
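As a rough illustration of that dual-track case, a blend weighted by how close each supplied frame is to the requested time might look like the sketch below. The buffer layout, 8-bit samples, and linear weighting are assumptions for illustration, not Final Cut's internal code.

```c
#include <stddef.h>
#include <stdint.h>

/* Blend the frame just before the requested time with the frame just
 * after it, weighted by temporal distance.  'before' and 'after' are
 * assumed to be same-sized 8-bit buffers (e.g. 2vuy); t0 <= t <= t1.  */
static void blend_speed_frames(const uint8_t *before, double t0,
                               const uint8_t *after,  double t1,
                               double t, uint8_t *out, size_t bytes)
{
    double w = (t1 > t0) ? (t - t0) / (t1 - t0) : 0.0;  /* 0 = before, 1 = after */
    for (size_t i = 0; i < bytes; i++)
        out[i] = (uint8_t)((1.0 - w) * before[i] + w * after[i] + 0.5);
}
```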
The RT data filter is something we developed as a way to get around the problem I was describing before about using multiple formats in one sequence, and it also simplifies the development of the playback side. In the standard QuickTime model, if you're playing normal video without effects on it, you're going to get a call per frame to decompress each of those frames, handle it, and pass it through. When you're performing an effect, the effect is really treated as one frame of very long duration—one sample, really—and you're responsible for producing all the in-between frames based on the timing provided. With this data filter, Final Cut will construct the sequence for you so that everything in the movie flows through the effects model: even if you have no other effects, you always have the data filter at the base. It also provides a way for us to pass you some data that is otherwise not easy for you to get—for example, the frame rate. That's one of those tricky things: QuickTime is so flexible that it doesn't really care what the frame rate is, but in the video world the frame rate really matters and doesn't change, and your hardware really needs to know what it is. Sometimes it's not as obvious as it should be when you try to figure it out from the QuickTime movie, so we can give you that directly through this filter.
The RT stills effect lets you include stills in the sequence. Real-time stills are set up so that we render them when we construct the movie and put them into RAM, so on playback there's no delay accessing them from disk, or creating them if they're generators. The interface lets you control how much of the application's memory you dedicate to the still cache, and we've got a lot of code in there to handle releasing and reallocating the stills as you switch which sequence is in front. Of course, if you're using more stills than fit in memory and you're switching, they have to be regenerated each time you switch tabs, so there's a price to that. And then the motion effect implements the controls that are in the Final Cut Motion tab, so it gives you the ability to do scaling, centering, rotation, cropping, distort, drop shadow—those are the sorts of things that are in the motion effect.
When we take the effects that you've described in the Final Cut timeline and convert them into a QuickTime movie, we have to break down the hierarchy of the effects as they're going to be processed. This is an example where we've got a moving still over a cross dissolve of two color-corrected DV sources. In the upper left we've got the still effect, which is essentially a generator in the QuickTime model because it has no sources. Below that you can see the two video streams, which feed the color correction effects; those two color correctors feed the cross dissolve, and the still with the motion on it goes through and is composited on top. But within the QuickTime movie structure it actually looks a lot more like our timeline, in that you've got a base video track and then tracks for the various sources. Over in the movie world you're going to see just the composite there and the two DV sources. So, for example, the composite over there on the right is the only thing you would really see directly in the movie, and when we try to play back the movie, QuickTime goes around and asks all of the installed components who can handle that format for that particular effect—in this case the composite blend effect—looking for anybody that can handle it. If you're writing these components, your component should say it can, but when that happens you're then responsible for handling all the things below it. So you need to examine the tree, make sure it's all stuff that you understand, and then expand it out and map it onto whatever hardware or software components you have that can perform those pieces.
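Conceptually, a component that claims an effect has to walk the tree handed to it and only accept the request if every node is something it can map onto its own hardware or software paths. Here is a hand-wavy sketch of that check; the node structure, the placeholder effect identifiers, and the capability helper are all hypothetical, not QuickTime data structures.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical in-memory form of the effect hierarchy: an effect node
 * with its sources as children, or a leaf (effectType == NULL)
 * representing a video source or generator.                           */
typedef struct EffectNode {
    const char         *effectType;   /* placeholder identifiers below */
    struct EffectNode **sources;
    size_t              sourceCount;
} EffectNode;

/* Hypothetical capability table for illustration only. */
static bool we_can_perform(const char *effectType)
{
    return strcmp(effectType, "composite") == 0 ||
           strcmp(effectType, "dissolve")  == 0;
}

/* Accept the top-level effect only if the entire tree below it is made
 * of effects and sources we know how to schedule.                      */
static bool can_handle_tree(const EffectNode *node)
{
    if (node == NULL)
        return false;
    if (node->effectType != NULL && !we_can_perform(node->effectType))
        return false;
    for (size_t i = 0; i < node->sourceCount; i++)
        if (!can_handle_tree(node->sources[i]))
            return false;
    return true;
}
```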
Okay, the capture architecture is very much like the playback architecture—just a few pieces are replaced and the data flows the other direction. You've got the Sequence Grabber really driving the QuickTime side of things, and the pieces that you need to write are the video digitizer and the Core Audio device. On the audio side it flows through the Sound Manager and Core Audio; again, audio is the center path and video is on the left, and as usual device control is off to the side as a separate path. In most cases that won't affect anything you'd be writing—the application talks straight to the devices through something like a USB-to-serial converter, or, if you're using a FireWire protocol, it could be AV/C control. So again, this is pretty much the standard QuickTime model for how to do things; if you're trying to figure out how to write QuickTime drivers there's a lot of information available on the QuickTime website, and the IOKit layer for talking to the hardware is also standard for the device drivers. But there are some things you need to know to work with Final Cut that are a little different.
One of them is getting the AV sync accurate on playback. I suppose you made it to the QuickTime session yesterday afternoon—Sean Michelle was describing the new APIs that will make this a little better in the next version of QuickTime. Unfortunately that's not part of Final Cut 4 at this point, though we will be switching to it at some point. As it stands now, since the audio and the video are running through those two separate paths that we saw, they're independent, and you have to do some work to make sure they come back together and are in perfect sync when you play back. The playback side is pretty straightforward: as playback begins, the first frame of audio that you receive and the first frame of video that you receive should be lined up—even though they arrive at separate times, we know and you know that those are the two things that need to be played together to begin playback.
For capture it's a similar sort of problem, and that's not yet addressed in the new version of QuickTime, so the way we've worked around it is to have you insert a known pattern into the audio. Since the audio starts streaming before the actual video frames begin to appear, you just insert that pattern—we provide a call to give you the pattern to use—into your audio up until the point where you begin to capture. Once again, in your drivers you know exactly which video frame you're starting to capture with and what audio corresponds to it, so when you get to the first frame of audio that corresponds to the first frame of video you're capturing, stop inserting our pattern and put in the real audio stream. After the capture we'll walk through the audio at the beginning, strip off everything that matches the pattern, and line up the next sample of audio with the beginning of the video. And as I said, the VTR control would generally be through external devices like USB.
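A rough sketch of that post-capture cleanup step: scan the start of the captured audio, drop everything that still matches the known fill pattern, and treat the next sample as coinciding with the first video frame. The 16-bit sample format and function name are illustrative; the real pattern comes from the Final Cut call Ken mentions.

```c
#include <stddef.h>
#include <stdint.h>

/* Skip the leading fill samples that match the known pattern; what
 * remains starts at the sample that lines up with the first captured
 * video frame.  Pattern contents and sample format are assumptions.   */
static size_t first_real_audio_sample(const int16_t *audio, size_t count,
                                      const int16_t *pattern, size_t patternLen)
{
    size_t i = 0;
    while (i < count && audio[i] == pattern[i % patternLen])
        i++;
    return i;   /* index of the first sample of real program audio */
}
```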
Now, one of the other areas that is different from the standard QuickTime model is the RT enabler. This is what's used to let Final Cut know the capabilities of your RT system. Enablers are implemented as FXScripts, using the FXScript language, and they're run as startup scripts, as opposed to the normal scripts that are interpreted and run to perform effects in real time. The first important aspect of the enabler is the effect mapping. What the effect mapping accomplishes for us is the ability to use the same filters, UI, and effect scripts for real time as we use for non-real time. That means users don't need to learn different sets of filters or choose different filters depending on their hardware configuration; they just edit the way they always do. Within this effect mapping, your script defines which of our software FXScripts you have hardware or other acceleration for, and you describe which of your QuickTime components get called in place of the effect scripts. One of the big advantages, as I said, is that there's only one set of filters and effects that people need to learn. But another aspect is that you can take a project file you built on a high-end system with lots of hardware, move it over to, say, a PowerBook, and work on a portable system that doesn't have access to your hardware, and the filters still perform correctly—because they were all built off of the FXScripts, we just fall back to the FXScripts and you see the right result. It works the other way too: somebody could develop their project in offline mode without the hardware and then go into an online suite and, using the features and capabilities you've developed, get things in real time.
The second biggest aspect of the RT enabler is effect costing. Effect costing is necessary to be able to predict, before we play something back, whether it's going to require rendering or can be played in real time. At first glance that seems like a simple problem, but in the middle ground it's not at all obvious whether something will really perform in real time or not. If you've got a piece of hardware with three physical decoders on it, you can pretty well guarantee that a fourth stream of video is not going to work—that's a simple aspect of the costing. But if you're doing it in software, or with a lot of the hardware these days, which is more general purpose, it's more a matter of how many other things it is trying to do that limits the capabilities. So we've got two models you can use: one is based on a time budget and the other on a resource budget, and you can use both at the same time. The resource budget is like the example I gave, where you've got physical decompressors and can't handle more than three streams of video, so you put in a limit saying you can only do three. The time budget, on the other hand, is a model where you describe how long each of the effects takes and how much total time you're willing to spend performing the effects; before we construct the QuickTime movie we run through it, add it all up, and see whether all the pieces exceed the limit you've set. If it exceeds the limit we put up the red render bar—sorry, this needs rendering, we can't do it in real time; otherwise we give you the green render bar and it plays back in real time.
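The two costing models reduce to simple bookkeeping before playback starts. Here is a sketch of the idea; the structures, per-effect millisecond costs, and stream limits are invented for illustration, and the real numbers would come out of the profiling you do for your own enabler.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    double costMs;      /* time this effect/decode takes per frame      */
    int    streamsUsed; /* physical decode streams (or similar) it uses */
} EffectCost;

/* Time budget: the per-frame work must fit in the frame interval.
 * Resource budget: e.g. no more than three hardware decode streams.    */
static bool fits_realtime(const EffectCost *effects, size_t n,
                          double frameBudgetMs, int maxStreams)
{
    double totalMs = 0.0;
    int    streams = 0;
    for (size_t i = 0; i < n; i++) {
        totalMs += effects[i].costMs;
        streams += effects[i].streamsUsed;
    }
    return totalMs <= frameBudgetMs && streams <= maxStreams;
    /* true  -> green render bar (plays in real time)
       false -> red render bar (needs rendering)                        */
}
```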
There are a couple of changes we made to the enablers in version 4. One is providing the manufacturer name to show up in the Effects Handling tab: if you don't add the new entry to the enabler, we don't know what name to show over there where I showed you, and we'll just put in the manufacturer FourCC, which isn't very useful to your customers. Another enhancement: if you look at the Final Cut enabler—there's one inside our application—you can see it's a pretty big file, and we've added some commands that remove the redundancy and make it easier to write and maintain the pieces you're putting in there. For example, if you're defining all of the capabilities for NTSC DV, you don't have to duplicate all of that again to do PAL; with the new commands it's pretty easy to just duplicate it and edit what's different, rather than repeat a lot of the data.
Another new thing we've added in this version is a debugging enabler, which spews data to the console about what's going on while we're interpreting the effects and figuring out what we're going to build in the QuickTime movie. One of the trickiest things has been when you build your effect, you've got all your components installed, you think everything's set up, you put it all together—and you get a red render bar. Okay, why isn't it real time? There are a lot of reasons it could not be real time, and short of you getting together with us, with your debugger and our debugger, it's been pretty tricky to figure out some of these cases. This way we're providing more information about what we're doing on our side and what we thought didn't work, and that's going to help a lot. We'd also like feedback on other sorts of things we could put in there that would help you out.
that'll help you out so here's an
example a very simple example of what's
necessary to enable video out and the
uncompressed software path in version
four so the first first line is setting
the enable to the UI effects flag and
that enables software processing of
uncompressed video in real time within
the application the second switch the
enable software RT video out flag lets
us call your video out with our software
effect so this makes on the RT enabler
looks simple and if you're building just
an i/o device this is all you really
need to be able to do
you college effects processing will be
handled through our software we've
already got a complex in a blur that
describes how to do all of the effects
that we perform in this case the
parameters one at the beginning of the
enable software or key video out is a
description of what the relative cost of
your video audio is compared to our
standard versions so you need to do a
little profiling to see what scale
factor to put in there so that our
costing works out right but otherwise
it's pretty straightforward so the
So, the opportunities in this area to support Final Cut: hardware acceleration is always going to be able to provide more effects and better quality. In the SD world there's the CineWave, for example, which can now do a number of streams of full-quality SD with a wide variety of the effects we have in Final Cut. In the HD world the bandwidth requirements make hardware a real necessity. There are also a number of products already that provide 3:2 pulldown in real time on the video outs, so we play it back as though it's 24 and they provide a spigot that converts it for playback on standard video. Another thing you can do with hardware is provide other kinds of codecs—MPEG-2, Motion JPEG, who knows, you can do whatever you want. With RT Extreme, what that really demands is an I/O style of device: if you're going to use the software effects built into Final Cut, all you need to provide is the interface to the video hardware, and that can be done either through PCI or FireWire. There are a number of boards out there now doing this with PCI, and Giovanni is going to be talking about how you can do it with FireWire. One of the things the I/O device provides is a way to connect VTRs, but another is to give you true-color monitoring on NTSC or PAL video devices.
Another big area of opportunity—this isn't exactly hardware-specific—is FXScripts and After Effects plug-ins, which fit nicely into the effects model within Final Cut. To get back to HD, this is really the biggest opportunity area I see coming up. As RT Extreme gets faster and faster, taking over the SD video world is within sight—we're not there yet, and there's still a lot hardware can do in that area—but in HD it's going to be a while: there isn't anything that gets you high-quality HD out of the computer as it stands today. The advantage is that it's the same basic architecture as SD; it's just bigger, wider, higher, faster. And of course the new hardware we've been introducing—the G5 with PCI-X—will assist you with the high bandwidth needs in this area, and the Xserve RAID is a good way to get a lot of data quickly off of the disks. All right, I'd like to introduce Giovanni Agnoli to take over and talk about the uncompressed codecs that we've developed.
[Applause]
Hi, everyone. My name is Giovanni Agnoli, and I'm going to be talking about two things today. The first is the uncompressed 4:2:2 codecs that we've included in the Final Cut 4 release, and after that I'll be talking about Pro I/O. In the 4.0 release of Final Cut we've added standard 8- and 10-bit uncompressed Y'CbCr 4:2:2 codecs, and these are also available to developers for inclusion in their products—say you wanted to ship a PCI board that works with After Effects. The main reason we did these codecs is so that we could facilitate the exchange of uncompressed data and remove the need for vendor-specific codecs. We saw a lot of problems with users—Final Cut users—trying to exchange uncompressed data: they needed to go and send the friend who wanted to read the data the vendor-specific codec that had created it. Of course these codecs are tuned to take advantage of the latest hardware—we've done a lot of work to improve their speed by writing AltiVec code—and another big point is that Final Cut 4 supports them natively. The formats of these codecs are pretty straightforward, and they've actually been around for quite a while: they were originally described by Chris Perazzi in December '99 in QuickTime Ice Floe 19, and I've put the link up here so you can check it out. We commonly refer to the codecs by their FourCCs—Ken talked about the 2vuy and v210 formats—and that's essentially what we're talking about here.
The 8-bit and 10-bit uncompressed Y'CbCr codecs have a lot of capabilities, but the thing you're probably most interested in is what they compress from and decompress to. The 2vuy codec supports the r408 and v408 FCP formats as well as 32-bit ARGB and 2vuy; the v210 codec adds support for the FCP r4fl format, which is the 32-bit floating-point format, and b64a, which is the After Effects 16-bit RGB format. We also support the QuickTime gamma-level APIs in these codecs, so if you have a destination pixmap with a different gamma than the source image, we'll do the gamma conversion for you. And as I said before, they've been optimized for the latest hardware.
The main reason we're telling you about this is that we want you to use these codecs. The benefits: highest quality—it's uncompressed, not a lossy codec—and a common file format among our user base, which eases the interchange of data among users and, we believe, will improve their workflow dramatically. It's also important that you support these formats if you intend to use RT Extreme, because RT Extreme is tuned for them: we use the 2vuy format internally to render, so if it has to convert to another format, that's an extra copy, which means a loss of performance. And best of all, you're getting free optimized code—you can't get better than that. We're going to give these codecs to you to distribute if you'd like, and that means less work for you as a developer; you can focus on doing something cool with your hardware instead of writing codecs.
The next thing I'm going to talk about is Pro I/O, which is a new way of getting audio and video in and out of the Macintosh. First I want to talk a little bit about the philosophy behind Pro I/O and why we went about doing this. As Ken talked about previously, we see this trend of ever-increasing processing power in the CPU and GPU and decreasing cost of storage, and that led us to develop RT Extreme, which takes advantage of those capabilities in the host machine. With that, we needed a mechanism for easily getting audio and video in and out of the Mac, primarily interfacing with professional devices such as high-end VTRs and cameras. When we went to figure out what kind of bus we were going to use, we decided on FireWire because of its easy plug-and-play capabilities—it's really simple; instead of having to plug in a PCI card or whatnot, you just plug it in, similar to a DV camera.
So what is the Pro I/O architecture? It's basically got two parts: a data transfer protocol specification and FireWire unit specifications. The data transfer protocol specification mainly deals with how to send uncompressed audio and video over the 1394a bus. In that uncompressed stream we send synchronized audio and video: the video can be NTSC or PAL, 8- or 10-bit Y'CbCr 4:2:2, and we also send eight channels of uncompressed audio, which can be 16- or 24-bit at a variety of sample rates. That all comes out to about 230 megabits per second, and once we add protocol and packetization on top of that we're really pushing the limits of the 1394a spec. But we believed it was important to fit all of this into the 400-megabit-per-second spec, just because of the wide variety of machines out there that have these connections on them.
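That roughly 230 Mbit/s figure is easy to sanity-check. The sketch below runs the arithmetic for 10-bit NTSC video plus eight channels of 24-bit, 48 kHz audio; the 720x486 frame size and the v210-style 16-bytes-per-6-pixels packing are my assumptions, and protocol or packetization overhead is not included.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed payload: 720x486 10-bit 4:2:2 video at 29.97 fps packed
     * v210-style (6 pixels per 16 bytes), plus 8 channels of 24-bit
     * 48 kHz audio.  No protocol/packetization overhead.              */
    const double width = 720, height = 486, fps = 30000.0 / 1001.0;
    const double bytesPerLine  = width / 6.0 * 16.0;          /* 1920   */
    const double videoBytesSec = bytesPerLine * height * fps;
    const double audioBytesSec = 8 * 48000.0 * 3.0;           /* 24-bit */

    double mbps = (videoBytesSec + audioBytesSec) * 8.0 / 1e6;
    printf("payload ~= %.0f Mbit/s\n", mbps);   /* about 233 Mbit/s */
    return 0;
}
```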
Getting back to the protocol specification: it also covers things like activation, the framing of the data, and some flow-control techniques.
The second part of the architecture is the FireWire unit specification. If you don't know much about FireWire, there are a lot of resources on the web, but a unit specification basically allows the device to define its capabilities on the bus. We have three units. There's the AV unit, which does most of the heavy lifting—it handles all the transfer of uncompressed audio and video. We have a serial unit, which allows us to transmit RS-422 over FireWire; this is really useful for deck control. And the third unit we've put into the architecture is the firmware upgrade unit, which allows you to upgrade your device at some later date—so if you want to upgrade the firmware, there's a unit that lets you do that.
So what are the developer opportunities with Pro I/O? Well, what we're trying to get is you developers to build Pro I/O devices—we really think there's a broad market for these things, and we've had a lot of success with the first devices already. You could develop an analog-only device that would convert component or composite signals and XLR audio into digital data and send it over the FireWire bus, or you could develop a digital I/O device that takes in SDI, AES data, and so on, or you could do a device that does both. Or maybe you have an idea for some other connection that we haven't thought of—we'd be happy to talk to you about that and update the specifications to include it.
The first Pro I/O device out there today is the AJA Io, and it's kind of the Cadillac of devices. I just put this up here as an example to show you what these devices look like. Basically we have a bunch of analog and digital connections—pretty much anything a professional video user could want to connect to their existing VTRs or cameras—and then, in the lower middle part of the screen, there's a FireWire connection; that's the bus connection to the Mac. So we're basically capturing or playing back data through all these different connections and transferring it over the FireWire bus to the Mac. There's also an RS-422 connection there for deck control, which, if you're in the pro video space, you know is absolutely necessary.
Along with the architecture we're also providing drivers. The Pro I/O driver release is a standard set of QuickTime drivers for Pro I/O devices, so if you make a device that follows the spec, it should work with these drivers. In that set there's a vdig and an audio plug-in for capture of audio and video, and there's a video out component, a transfer codec, a clock, and a Core Audio HAL plug-in output device for playback. It's basically the QuickTime driver set, and these should interoperate with any QuickTime-capable application—the idea is that these drivers should work not just with Final Cut but with other applications as well. Additionally, we support the Final Cut-specific APIs that Ken talked about earlier, covering AV sync and deck control—those are the Final Cut pieces; of course we do the extra bit to support our app. In addition there's an Apple FireWire serial driver, which allows you to send serial data over the FireWire bus to the Pro I/O device. And of course it's all optimized for the latest hardware—lots of good AltiVec loops and so on—and best of all, it's out there right now; it was released yesterday.
I took Ken's capture architecture diagram and grayed out all the non-Pro I/O parts, just to show you what we're providing in the driver package and what you as a developer need to do. If you look down there, there's the Pro I/O hardware—that's what developers need to build—and then you can see we provide a Pro I/O vdig and a Pro I/O HAL plug-in to get audio and video into Final Cut or any QuickTime-enabled application, and we also have the Apple FireWire serial driver for deck control. All of those talk through the IOFireWireFamily, which sends the requests across the FireWire bus. Playback is very similar: we have the transfer codec, the Pro I/O clock, and video out, and we have the HAL plug-in for audio out; again they communicate through the IOFireWireFamily to the Pro I/O hardware, and the Apple FireWire serial driver is there for any deck control.
Obviously I can't tell you how to build a device right here, so if you want more information on uncompressed 4:2:2 codec licensing or Pro I/O licensing, or you want to take part in an upcoming Pro I/O kitchen, you should contact Jeff Lowe, our WWDR representative, and he will help get you the right information. I think I'm going to call Brett up; he's going to wrap up for us.
yep
[Applause]
So we've spent some time going over all the various ways you can get involved in developing hardware for Final Cut 4. We strongly encourage you to check out a number of other sessions. Some of them are coming up, and others happened earlier in the week—if you missed those they should be on the DVD. We will have a session tomorrow dealing with plug-ins, and I encourage you to go to that if you're interested in creating, for example, effect scripts or things like After Effects modules; we'll be talking about the various plug-in models there. We will also have a feedback forum for Final Cut on Friday afternoon at five. As already mentioned, Jeff Lowe is the guy if you're interested in any of the professional film and video area; he can help you. As mentioned, we will be having a kitchen later this year to get into the details of how to actually make one of these devices and what the protocols and wire formats and so forth are. There are a number of places you can get more information—obviously you could take a while to scribble all this down, but I believe it's all up on the web for you—in addition to the standard material that's out there on video formats. And if you haven't had a chance, there are QuickTime sessions, audio sessions, and IOKit sessions that will be very important for you in terms of driver development.