WWDC2004 Session 704

Transcript

Kind: captions
Language: en
...and we're here today to talk about pre-processing principles, and we're very, very lucky to have, I think, one of the premier folks in this area, Ben Waggoner. He's from Ben Waggoner Digital; everybody knows Ben, Ben's been around for a long, long time. We're going to go for about an hour, and we will hold all questions to the very end of the session. When we do the questions, please come up to the microphones so we can catch your voice on the recording. Okay, thank you.
Okay, let's dive in. How many people saw this session at the 2003 show? That's how many of you? Okay, that's good, because I only found out I was doing this about five days ago, so it's the exact same slides, but the content will be new, with my own spin on things. So, it works now, and it's you, and there's me, and it's all good. So today we're talking about pre-processing.
You can think of pre-processing as everything you do between the source frame of video and the frame that actually goes into the codec. So it's a pretty simple definition, but it's a big stage. The focus today is going to be on web video; I'll mention MPEG-2 stuff here and there. Who cares about pre-processing for DVD? Okay, I'll be talking a fair amount about pre-processing for DVD. Who cares about pre-processing for web, for CD-ROM? Anyone doing pre-processing for high definition? Oh, one, yeah, okay. It's cool stuff; we'll talk about that.
I spent far too much time working with a bunch of D-5 tapes in HD last fall. So the first half of the session is going to be talking through slides and showing some screenshots and that kind of stuff; the second half I'm going to be doing some demonstrations. I've got a lot of source clips here and a lot of tools here, so I'll let you guys pick the demonstrations you want to see. So the more extroverted among you, start planning what your questions are going to be. The focus here is just hands-on stuff that's going to let you get better quality video up on the web, or on your DVD, or wherever you're doing it.
We're trying to increase the bang for the bit out of our digital media. So we're going to define pre-processing in a little more detail, explain why it matters, talk about the core techniques, and I'm going to do some demos of some cool stuff on the Macs over there, mainly web video but some about DVD as well. And I'll do more DVD than I was planning on, because you guys care about it a lot; there are actually some pretty cool tricks for DVD you can do these days, like 704-wide encoding. I'll talk about that.
So, pre-processing. As I mentioned before, think of your video stream as a series of frames. You've got DV, you've got a whole bunch of 720 by 480 frames. Anyone here mainly work in PAL? You guys, okay, so 720 wide by either 480 or 576 tall frames. For your web video, maybe it's 320 by 240; if you go to DVD, it's probably the same size as your video source. You're doing everything to transform the source frame into the optimum output frame for the codec. This is both the most artistic part of compression, because stylistically you have to make some final decisions, process trade-offs, that kind of stuff, and it's often the hardest part. You can just type in the data rate value you want to have, but knowing what's the right luma level for it to look good, that requires some thought. On a typical project, unless it's kind of high-volume stuff, I probably spend maybe 90% of my time in the compression process on pre-processing for challenging content, just because the other stuff is pretty easy: you know the codec you want, the data rate you want, you know the audience. But you can tweak and tweak pre-processing if you're incredibly quality-obsessive, which I recommend you all become, because there's too much bad web video out there. So why does it matter? It's all about maximizing the bang for the bit. You want to get the maximum communication value per unit of data possible to your end users, and it really does matter a lot. Correctly processed video at 300 kilobits can be way better than badly processed video at a thousand kilobits. You can also think of it as almost like you're buying your customers more bandwidth by treating your bits better. So you want to make sure every pixel has data that matters, every bit is creating something you care about, and you're not wasting bits and pixels on things that aren't actually communicating to the end user.
So here are a couple of frames. Let's see, is this scaling correctly? Okay. This is just from a movie trailer, the Biker Boyz movie trailer. Anyone see Biker Boyz? You neither? Any good? Yeah, I don't think so, that's the problem. But I had the trailer, and it's often good to work with not-very-interesting content when you're experimenting, so you don't get distracted by plot and that kind of stuff. So this is Biker Boyz. Here's the source frame; it looks like a pretty typical interlaced frame. The projector's kind of squashing it a little bit, but you get the idea, and because it's interlaced you wind up having the two different fields wherever there's motion, and all that kind of stuff. Now if we process it correctly, a couple of things: we'll go from that to this, and the shape of the frame gets changed a little bit. If we take the source frame, don't modify it at all, and encode it, we get this, a Sorenson Video 3 Pro encode, and that looks terrible. Why does it look terrible? Because modern codecs, based on DCT and things like that, do really well with gradients, but sharp edges take a lot of bits to encode. So when you've got the thin horizontal lines of fields, you wind up having that be very hard to encode, and almost all your bits wind up trying to encode the lines you don't care about instead of going to the content you do. Same data rate, pre-process the frame, and we get this instead. Is it perfect? No, but obviously from an end-user perspective this is a lot better than that at the same bitrate. So the end user gets a better experience, no sacrifice.
All right, and that's not really an exaggerated case. I see a lot of people trying to do this on the web, and I wish they would not. One of the reasons why QuickTime has such a reputation for having high-quality technology is really because the people at Apple who do the QuickTime movie trailers are so much more competent than the people who do the movie trailers you can find at WindowsMedia.com or Real.com. It's much more about pre-processing than codecs in the quality they give you.
So the most critical technique I'll mention here is deinterlacing. With interlaced video, your even lines and your odd lines contain information that's temporally separated by half a frame's duration. So if you're at 30 frames a second, the two fields will each have an image a sixtieth of a second apart. That's interlaced video; obviously progressive video doesn't have that going on at all. Anyone here not grasp interlaced video? Who's mainly a video person? Who's mainly a computer person? All right, does anyone not grasp interlacing at this point? All right, okay, you'd be too embarrassed to tell me if you didn't, but at least I can now claim I asked. And what happens, as in that one frame you saw, is that when you have a lot of motion, you wind up with this kind of crosshatch effect in areas where there's a lot of motion; elsewhere it looks normal.
A computer display is always progressive, so if you're going to a computer device, it's always progressive. Projectors are pretty much always progressive. There really aren't very many truly native interlaced display devices being designed at this point; we have legacy televisions and that kind of stuff, but clearly everything is turning into progressive display devices, and interlaced content is just getting converted to progressive on playback. So the future is progressive, that's my feeling, and if you're delivering to any kind of web codec, you need to deinterlace, because none of the web codecs we care about support an interlaced mode. If you leave it interlaced, it just looks stupid. Even before you compress, it'll look stupid, because if someone throws a baseball, instead of one baseball you see two baseballs that each have half their lines. That's very confusing. And also, because the codecs find those sharp lines on the edges of the moving objects so hard to encode, almost all your bits wind up getting spent on the stupid part of your image, and very few are left for actually making the image look good. So, a big degradation in quality.
Of course, progressive content doesn't need to be deinterlaced, because it's already progressive. If you're delivering on DVD with MPEG-2, that's also a fielded medium, so you're just going to keep the same field mode: if you have interlaced source and you make a DVD, you keep it interlaced, and if you have a progressive source, you make a progressive MPEG-2 file for the optimum results. These days most modern Macs' DVD Player will automatically deinterlace on playback of interlaced content; on a lot of older systems, depending on your graphics card, it doesn't always do it. I'm not sure what the actual rules are on that, but it used to be that the Mac DVD Player couldn't play interlaced content very well at all, and it seems to be a lot better in more recent versions.
So the basic method of deinterlacing, if you will, is just: okay, I've got my even lines and my odd lines with a different image in them, so I'll just throw out all my even lines, or throw out all my odd lines, and then process the image from there. So if you have a 720 by 480 DV frame, essentially you're just throwing out half the lines; you're left with a 720 by 240 frame, and then it gets stretched or squished or whatever, processed like any other kind of normal Photoshop-style image. And that works. The problem is you're throwing away half your image data. If you're doing a little small web video, that's not a problem; QuickTime Broadcaster does that internally, and if you're doing 320 by 240 or less for broadcast, it's not a huge quality drop. But if you're going to go to bigger frame sizes, you can actually wind up with a lot of compression artifacts, because your frames have to get stretched: if you're going to do 480-line output from what's internally 240 lines, you wind up doubling the height, and with typical scaling you get blocky artifacts.
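The field-discard method he describes can be sketched in a few lines. This is a toy illustration with hypothetical helper names, not any tool's actual algorithm; a frame is just a list of rows, top row first.

```python
# A minimal sketch of "discard" deinterlacing: keep only the even-numbered
# lines (one field), then line-double back to the original height. The
# duplicated lines carry no new detail, which is why scaling a discarded
# field back up to a big frame size looks blocky or soft.

def discard_deinterlace(frame):
    """Keep one field (the even lines), halving vertical resolution."""
    return frame[0::2]

def line_double(field):
    """Naive vertical upscale: repeat each kept line twice."""
    out = []
    for row in field:
        out.append(row)
        out.append(row)          # duplicated line, no new detail
    return out

# A 4-line "frame" where even and odd lines hold different field images:
frame = [["even"], ["odd"], ["even"], ["odd"]]
field = discard_deinterlace(frame)     # 2 lines: the even field only
restored = line_double(field)          # back to 4 lines, half the detail
```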
So if you're really doing deinterlacing, what you want to be doing is what's called adaptive deinterlacing, and all the tools we care about these days support some flavor of it, under lots of different names. The basic idea of adaptive deinterlacing is to detect the parts of the frame that are moving and deinterlace those, but the parts of the image that aren't moving, where there's no temporal difference, if something doesn't change from field to field in that time, it doesn't need to be deinterlaced, so the adaptive deinterlacer will leave those parts alone. So in a case where someone's throwing a baseball, yeah, you lose half the resolution of the baseball, I'm sorry, there goes half the resolution of the baseball, but if someone's throwing a baseball against a big static background, you know, a fence or whatever, the background fence won't get deinterlaced and will remain sharp. And that works well for our visual system: we can either detect motion or detail, but we can't really see fast-moving detail. The adaptive deinterlacer will occasionally guess a little wrong, but most of the modern implementations, 99% of the time, are going to give you the right result. Sometimes, like with scrolling credits, you might see some kind of weird results; that's probably the worst case for adaptive deinterlacing, so try not to have scrolling credits. I've actually sometimes gone through and re-implemented the credits, just typed them in again and re-rendered them progressive, just to get around that problem.
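The adaptive idea, detect motion, deinterlace only where it occurs, can be sketched as a per-pixel pass over one luma plane. The threshold value and the simple neighbor-averaging here are illustrative assumptions, not any shipping tool's implementation.

```python
# Toy adaptive deinterlace on a luma plane (list of rows of ints). Where
# adjacent lines (which belong to opposite fields) differ by more than a
# threshold, we assume motion and replace the pixel with the average of the
# lines above and below; where they match, the full-resolution pixel stays.

MOTION_THRESHOLD = 10  # arbitrary demo value

def adaptive_deinterlace(frame):
    out = [row[:] for row in frame]
    for y in range(1, len(frame)):
        for x in range(len(frame[y])):
            if abs(frame[y][x] - frame[y - 1][x]) > MOTION_THRESHOLD:
                # Moving area: interpolate from the surrounding lines
                below = frame[y + 1][x] if y + 1 < len(frame) else frame[y - 1][x]
                out[y][x] = (frame[y - 1][x] + below) // 2
    return out

# Static background (the column of 50s) is untouched and stays sharp;
# the moving column, where the two fields disagree, gets smoothed:
frame = [[50, 200], [50, 0], [50, 200], [50, 0]]
result = adaptive_deinterlace(frame)
```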
Now, the most important thing for you NTSC folks is film content, and you PAL people can happily and pridefully ignore this part, you don't have this problem: inverse telecine. Film runs at exactly 24 frames a second; video runs at not exactly 30 frames a second, it really runs at 29.97, and each of those frames has two fields in it. So when film gets converted to NTSC video in a telecine machine, what happens is what's called 3:2 pulldown. The first frame of film becomes three fields of video, the next frame of film becomes two fields of video, then three, two, three, two, and the math works out: your 24 images become 60 fields per second and you're good to go. And it works about as well as you'd expect. Of course, the motion is never quite smooth, because some source frames last three fields of video and other ones last two fields of video, so motion that would have been very smooth in the original winds up a little bit jerky, because the duration of each frame on playback is a little bit off.
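The 3:2 cadence he describes is just field bookkeeping, and can be sketched directly; the function name is mine, not a real API.

```python
# 3:2 pulldown: each film frame contributes alternately 3 fields, then 2,
# so 4 film frames become 10 fields (5 interlaced video frames), which is
# how 24 frames/s becomes ~30 frames/s (60 fields/s).

def three_two_pulldown(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 3 if i % 2 == 0 else 2      # 3, 2, 3, 2, ...
        fields.extend([frame] * copies)
    # Pair fields into interlaced video frames (top field, bottom field)
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

video = three_two_pulldown(["A", "B", "C", "D"])
# Frames that mix two film frames, like ("A", "B"), are the interlaced-
# looking ones you see when stepping through the file in a player.
```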
And that's why when you watch movies on PAL, horizontal motion, pans, that kind of stuff always looks a little bit smoother than the same movie would look on NTSC. The way PAL conversion does it is that the 24 frames a second are just sped up 4 percent to 25 frames a second, and it simply remains 25-frame-per-second progressive. That's so easy; I should move to London, because half of my life is dealing with NTSC weirdnesses like this. But we've got to do it right, and it's hard to do, and that's why you can make money charging for doing video work.
So what you wind up with when you've created a file this way, and we've seen this a lot, is a file where you'll see three progressive frames and two interlaced frames, repeating, and I'll show you a sample of that in a few minutes. That's the easy way to test: just open it in QuickTime Player, go through frame by frame in a section with motion, and you'll see three progressive frames, two interlaced frames, three progressive, two interlaced. That's what you'll see. And the nice thing is, if you have a tool which has an inverse telecine algorithm, and we have several on the Mac, it'll be able to reverse that process. Instead of having to deinterlace and throw image data out, it's able to recreate the original source frames; it reassembles both fields into the original 24-frame-a-second video. And that's great for two reasons: one, we keep the full image data, and two, we're able to restore the original time base, so we can actually encode at 24 frames a second instead of doing the 3:2 pulldown thing. So you get smoother motion from the same source on computer playback than you would have had on video playback, because every frame will have the same duration of exactly one twenty-fourth of a second.
Now, one complexity: when film is transferred, it's really slowed down to 23.976 frames a second, to match the way that video's nominal 60 fields is actually 59.94. The details don't matter that much, but for most tools, if you're encoding using inverse telecine and the tool doesn't support changing the time base, you have to actually encode at 23.976 frames a second as a magic number; other tools let you change the time base, and you can easily switch that around. And this applies if you have film kind of content, which is pretty much any music video, movie trailer, feature film, or primetime drama; those are all going to be content created either on film or with a 24p high-definition camera. If you have content like that, that winds up with 3:2 pulldown, and you have inverse telecine available, it absolutely pays off hugely in terms of output quality.
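Those "not exactly" NTSC numbers are exact fractions, and the 23.976 magic number falls out of them; a quick check:

```python
# NTSC rates as exact fractions: video is 30000/1001 frames (60000/1001
# fields) per second, so telecined film is slowed from 24 fps to
# 24000/1001 ~= 23.976 fps, the magic number many tools make you type in
# when using inverse telecine.

from fractions import Fraction

ntsc_frame_rate = Fraction(30000, 1001)   # ~29.97 fps
ntsc_field_rate = Fraction(60000, 1001)   # ~59.94 fields/s
film_rate = Fraction(24000, 1001)         # ~23.976 fps after slowdown

# 3:2 pulldown: each pair of film frames yields 5 fields (3 + 2),
# so the field rate is film_rate * 5/2 -- exactly the NTSC field rate.
fields_per_second = film_rate * Fraction(5, 2)
assert fields_per_second == ntsc_field_rate

slowdown = film_rate / 24                 # about 0.1% slower than true 24
```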
One complexity is when someone has source that came out as 24p but then got edited in a fashion that doesn't keep track of where the original frames were. Now, Final Cut makes this trivial: you just have a 24p project, take a 24p source, it'll do it frame-accurately, and when you export it with 3:2 pulldown, it'll keep what's called the cadence of the three and the two consistent. However, if someone just takes telecined video files and puts them in a Final Cut project at a video frame rate, an interlaced 29.97 project, when they do edits they're not going to wind up putting the edits exactly where the edit would have been in the film originally. Then when you take that into a tool, you can wind up with issues where the tool just can't figure out where the frames were, because what's called the cadence is broken: instead of getting three-two, three-two, you get three-two, three-two, three-two-one, or three-two-two-two-three, kind of weird patterns where video edits got dropped in. Some tools just completely fail, like After Effects, when given that kind of content; other tools, like Cleaner, deal with it pretty robustly. Ideally you just have content where it's done correctly.
Next, cropping, and cropping is a place where I see people messing up a lot. You'll see a lot of web video out there where there's a few pixels of black on the left and the right, or on the top or the bottom, something like that. The reason for that is that video monitors and televisions don't show the edge of the video signal, almost by definition. A consumer TV does not have an underscan mode; you just wind up with a safe area around the edge that you know is going to get left out. Obviously, when you play back on a computer screen, it's going to give you every single pixel; the upper-left-hand corner pixel is going to be shown on the screen, or else it's horribly miscalibrated. Because of that, when you're converting content composed for video, there may be junk around the edge of the screen that would never be seen on a television but that would show up when you look at the same frame on a computer. And the simple thing there, to remove that, is to crop it out.
At a very minimum, you want to crop out edge blanking, the thin black lines at the top, the bottom, the left, the right. Often DV source has no blanking at all, but typically analog source is going to, and source that's been captured at 486 lines tall is almost certainly going to have a few black lines at the top. And the lower the resolution you're going to, the more you can crop, because video is shot for the safe area, which is the region that's known to work on all televisions. No cinematographer is going to put critical content within ten percent of the edge of the screen, because they know that plenty of TVs are going to either give you a distorted image or no image at all at the very edge of the screen. So they're not going to put anything critical there: your lower-third text is not going to intrude into the ten-percent boundary around the screen, the actors' heads aren't going to be in there, all that kind of stuff. So the lower the resolution you're going to, the more aggressively you can actually crop into the safe area, making the foreground objects a little bit larger.
Another thing you always want to do is crop out letterboxing. There's no point in sending black bars to a customer for most stuff, certainly not for web video, because you can make your web video any size. If you leave any kind of black bar in there from letterboxing, you're just spending bits and spending CPU cycles on playback on nothing; it's much better to just crop those out and call it done. Also, because many codecs, especially at low bitrates, give you artifacts with sharp lines, that very sharp black line of the letterboxing winds up messing things up a little bit; you can get distortion along that edge. When you go to DVD, of course, DVD only supports up to 16:9 anamorphic, so if you have any kind of film source that's wider than a 16:9 aspect ratio, and most films are, like 1.85:1 or 2.35:1, you're going to leave in some letterboxing, and that's inevitable on DVD. But for web video, you never need to have letterboxing in your video.
For reference, this is the five-percent boundary here, called action safe, and this is the ten percent, called title safe. The general rule of thumb is: in the action safe area, motion will be seen as motion; anything outside it is fair game, it may not be presented at all. The viewer will see things moving out there, but if it's text, you might not be able to read it, it'll be distorted around the edge. Inside the title safe area the image should be pretty clean, you're theoretically not going to get any distortion at all, so text should be visible, all that kind of stuff. So there should never be anything critical outside action safe, you should assume everything inside title safe is going to be critical, and depending on the content, somewhere in between is kind of the range where things get critical or not. If you look at that, if you can imagine, where are we, there we go, got the pointer: if you look at the bounding box there, if you're doing a little 160 by 120 web movie promo or that kind of stuff, there's a difference between having a frame that shows all their stuff, and the drums are kind of neat, and his head, versus cropping down to that box, where you can see his hands better and his facial expressions better, and that can help out a lot.
Next is scaling. Scaling is taking our source after we've cropped it (the crop is saying: don't take the pixels outside this box into consideration when you're doing your scaling) and changing the size of the bitmap to the output bitmap that actually gets handed off to the codec. Two things: one, especially for web video, you need to make the video smaller to play back on the web, and two, you're also going to be correcting for aspect ratio here. One thing that typically freaks out web and computer people is when I start talking about non-square pixels, because in the computer world the idea of a non-square pixel is like talking about a square wheel. But in the video world, all the professional video formats use pixels that are rectangular in shape. So 720 by 480 can be either a 4:3 aspect ratio or a 16:9 aspect ratio, but if you just looked at 720 by 480 as square pixels, that would actually be a 3:2 aspect ratio, and that never actually occurs. So any kind of DV file is always distorted, either squished or stretched horizontally, depending on the format. When you're going to a web video format, which is going to be square pixel, and almost all of them are going to be square pixel, you need to correct for the fact that the source is non-square pixel. To make it square pixel, you kind of stretch it or squish it in order to make sure you correct for that.
The basic goal is: if there was a circle on the video monitor, then on the computer monitor, in playback after compression, you want it to be a circle as well. That's pretty straightforward, but I see a lot of stuff out there that's stretched about ten percent too wide on the web; it's quite common. People figure, okay, I've got 720 by 480, I'll cut it in half, it'll be 360 by 240, and there we go, I'll put it up on the web. You can assume any time you ever see a 360 by 240 web video file that someone did it totally wrong, because they did not get the aspect ratio correct. If you have 4:3 source, you want your output resolution in square pixels to also be 4:3. So for 4:3 source, 320 by 240 would be good, because 320 divided by 240 reduces to 4:3; so would 512 by 384, or anything where you have 4 units wide by 3 high. If you do 360 by 240, everything's going to be ten percent too wide. If you deal with actors very much, when you make them look about ten percent fatter, they complain a lot, so that's a bad win there, and your circles are ovals, all that kind of stuff.
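The aspect-ratio arithmetic above is easy to check in code. This is a small sketch with a made-up helper name; the sizes match the examples in the talk.

```python
# Picking a square-pixel output size that preserves the source's display
# aspect ratio: the stored resolution (720x480) is ignored, only the
# intended aspect (4:3 or 16:9) matters.

def output_size(aspect_w, aspect_h, target_width):
    """Square-pixel frame size for a given display aspect ratio."""
    height = target_width * aspect_h // aspect_w
    return target_width, height

assert output_size(4, 3, 320) == (320, 240)    # 4:3 DV -> 320x240
assert output_size(4, 3, 512) == (512, 384)    # larger 4:3 size
# The naive "half of 720x480" gives 360x240 -- a 3:2 frame, ~10% too wide:
assert 360 / 240 != 4 / 3
```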
So you just want to make sure, when you're going to a square-pixel output format, that your output resolution matches your source frame's aspect ratio. Two great examples for web video: if you're coming from 720 by 480, typical DV content, you want a 320 by 240 for 4:3, and for a 16:9 source, 432 by 240 works just fine. The only difference here is the aspect ratio of the source file; the resolution is 720 by 480 in both cases. And if we're doing PAL, these are both good numbers as well, because again, even though PAL is 720 by 576, it's also either 4:3 or 16:9, so you need to have a 4:3 or 16:9 output frame size. Some codecs don't like odd numbers and that kind of stuff; as a rule of thumb, as long as the height and the width are both divisible by 16, you're in pretty good shape with Sorenson and MPEG-4 and that kind of stuff.
MPEG-2 has a few very specific frame sizes it supports for DVD, and you just have to pick one of those. Typically, when you're doing a DVD encode, you don't have to worry about this at all; you're not activating any scaling at all. If it's a 16:9 720 by 480, you're going to go to a 16:9 720 by 480. There are very few cases, like if you have a 720 by 486 source, for example, where you crop from 486 down to 480. And it's really important, when you move from a 486 source to a 480-line target, that you don't scale it but crop it, because the 486 is actually grabbing 6 more lines out of the video signal than the 480 does. If you scale it, you'll get a little bit of distortion and a little bit of loss of image quality as well. So in the real world, if you have a 486 source, like an SDI capture or something like that, and you want to make a DVD out of it, you want to crop 4 lines off the top and 2 lines off the bottom. You may be tempted to crop 3 and 3, because those sound like the right numbers, but if you crop an odd number of lines, depending on the tool, you may or may not wind up reversing your field order, so the odd lines become even lines, and then when you play back, things look particularly ugly. It's so much better to just pick four off the top, two off the bottom, so you don't have to worry about that change happening.
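The 486-to-480 rule can be written as a checked crop. This is a sketch of my own, not any tool's behavior: it simply refuses odd top crops on interlaced source, since those would swap which field comes first.

```python
# His 486 -> 480 rule: crop 4 lines off the top and 2 off the bottom.
# Cropping an odd number of top lines would reverse the field order on
# interlaced source, so this sketch rejects odd top crops outright.

def crop_lines(frame, top, bottom, interlaced=True):
    if interlaced and top % 2 != 0:
        raise ValueError("odd top crop would reverse field order")
    return frame[top:len(frame) - bottom]

frame_486 = list(range(486))               # stand-in for 486 source lines
frame_480 = crop_lines(frame_486, 4, 2)    # 720x486 capture -> 720x480
assert len(frame_480) == 480
assert frame_480[0] == 4                   # still starts on an even line
```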
And the quality of your scaling algorithm matters as well. The professional encoding tools we use have a bicubic kind of scaling. If you just do a QuickTime export from QuickTime Player, going from a really big frame to a really small file, you'll often get a little bit lower-quality scaling, just because most of the time it's not meant as a professional encoding tool; it's the player itself. Compressor will give you a much better result than QuickTime Player with the exact same settings, because there's a higher-quality scaling algorithm.
So whether for web video or anything, really, our goal is: if we're going to scale, we only ever want to scale down, because any time you scale up, you're interpolating. That's like going into Photoshop or After Effects and trying to make something bigger than it was: it'll always get soft, and it will often get blocky. It's not a good experience. When you're scaling down, you maybe lose some detail, but the resulting image will at least be sharp. So let me walk through a scenario here. You have a 720 by 480, you're doing a non-adaptive deinterlace, or you have content where the entire frame is moving. If you have a case where the whole video frame is moving at once, adaptive deinterlacing doesn't pay off at all, because all adaptive deinterlacing does is leave alone the parts of the frame that aren't moving; if the entire frame moved, you're just going to lose one of the two fields anyway. If we do the ten-percent safe-area crop, that would take us down to a 648 by 216 size, and then we convert from there to a 320 by 240. It seems like going from 720 by 480 to 320 by 240 should be scaling down by a lot, but after we've deinterlaced and after we've cropped, we're actually scaling up vertically, even though we're scaling down horizontally.
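That scenario is worth checking as plain arithmetic; the dimensions below follow the steps just described.

```python
# The walk-through as arithmetic: discard-deinterlace halves the height,
# a 10% safe-area crop trims both axes, and the final 320x240 rescale is
# actually a vertical *upscale* from the 216 lines that remain.

w, h = 720, 480
h //= 2                              # discard one field: 720 x 240
w, h = int(w * 0.9), int(h * 0.9)    # 10% safe-area crop: 648 x 216
out_w, out_h = 320, 240

assert (w, h) == (648, 216)
assert out_h > h                     # scaling UP vertically (240 from 216)
assert out_w < w                     # while scaling down horizontally
```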
That's one of the tricky things about pre-processing: when you're working with interlaced source, vertical resolution is vastly more important and tricky than horizontal, because you have all the horizontal detail you could possibly want, but the vertical is really what you're trying to preserve. So it comes down to working hard to preserve as much vertical detail as you can: you typically don't want to crop even one extra line of vertical if you don't need to, and then crop horizontal as necessary. So when you're going to 320 by 240 or higher from NTSC, or 384 by 288 or higher with PAL, you want to crop as little as possible. You're definitely always going to crop out edge blanking and/or letterboxing, because there's no data there, but you don't want to crop any extra stuff in the safe area. And you want to use adaptive deinterlacing if it's true interlaced source, and if it was film source with 3:2 pulldown, you definitely want to use inverse telecine and restore the true progressive mode. And with inverse telecine, even if you have a frame that's fully in motion, because it's just recreating the original source frames, that's going to work
for you just fine. Okay, next is luma adjustment. Luma is basically, but not quite, the same thing as brightness; I recommend you read Charles Poynton's book if you care about what the differences are. Instead, think about video filters in two classes: there are filters that affect luma, the brightness, and filters that affect chroma, the color, and it's best to think about them separately. Typically you're going to do a lot of work on luma in a lot of cases, but typically you don't really mess with chroma very much, because it tends to survive the process a little better, and also we see mainly luma, so that's where it pays off. The classic luma filters are contrast, brightness, and gamma; the Levels filter in After Effects and tools like that also adjusts luma.
So this is a complex issue. If anyone's been doing QuickTime for a while, you probably have it in your head that you have to raise the gamma a bunch when you're encoding on a Mac for PC playback. Anyone doing that? That rule of thumb, you guys all knew? All right, okay, you know what I'm talking about. All right, and if you weren't doing that and you didn't know about it, you don't need to feel bad, because you largely don't have to do it anymore. There are two classes of codecs in QuickTime: the ones that will correct for gamma on the fly, and the ones that won't. So, for example, with the Sorenson Video 3 codec, if you encode the file on a Mac and play it back on a Windows box, it'll appear darker on the Windows box than on a Mac. If you use the MPEG-4 codec, it'll appear the same brightness on a Windows box as it will on a Mac; it automatically corrects for the gamma. This is confusing, unfortunately, so you have to know which is which. That's one of the reasons I recommend, if you're doing QuickTime for Mac users, use Sorenson; it's a better codec in a lot of ways. But if you're really trying to make a QuickTime file for a cross-platform audience, the MPEG-4 codec has the advantage that it'll correct for gamma, so it won't appear too dark on a PC. It's also possible to use QuickTime movie alternates to make different versions of the file with different gamma for Mac and Windows and automatically switch between them on the web, but that's for a different class.
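The Mac/PC gamma mismatch he's describing can be sketched as a per-pixel remap. This is an illustration only, assuming the era's nominal display gammas of 1.8 (Mac) and 2.2 (Windows); it is not any codec's actual transfer function.

```python
# Why the same pixel values look darker on a 2.2-gamma Windows display
# than on the older 1.8-gamma Mac default, and the kind of gamma bump a
# gamma-aware codec applies automatically to compensate.

def adjust_gamma(value, source_gamma, target_gamma):
    """Remap an 8-bit luma value from one display gamma to another."""
    linear = (value / 255.0) ** source_gamma
    return round(255.0 * linear ** (1.0 / target_gamma))

mac_midtone = 128
for_windows = adjust_gamma(mac_midtone, 1.8, 2.2)  # lifted to compensate
assert for_windows > mac_midtone                   # midtones get brighter
```

Note that black and white are unchanged by the remap; only the midtones shift, which is why gamma (not brightness) is the right tool for perceptual brightness changes.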
So, on to the filters. First we have brightness and contrast, and these are often grouped together. Contrast exaggerates how different a pixel is from gray; I'll get to that in a second. Brightness just shifts the entire luma range; it just adds X amount. If you do a brightness of plus 20, every pixel will become 20 brighter, so a pixel value of zero will become 20, and a pixel value of 200 will become 220.
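As a rough sketch of what a brightness filter does (my own illustration, not any particular tool's code), with 8-bit luma values clamped to the 0-255 range:

```python
def apply_brightness(pixels, amount):
    """Shift every luma value by `amount`, clamping to the 8-bit range."""
    return [max(0, min(255, p + amount)) for p in pixels]

# Brightness +20: everything shifts up uniformly.
print(apply_brightness([0, 100, 200], 20))   # [20, 120, 220]
```

The clamping is the important part: it's why positive brightness can never push whites past 255, and why negative brightness crushes near-black noise all the way to zero.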
Why people get in trouble with brightness is that people say, "I want my video to look brighter, so I'll turn the brightness up," and that's actually almost always the wrong thing to do. You virtually never want to raise brightness as a filter itself; if you want the video to seem perceptually brighter, what you need is the gamma filter, which we'll talk about next. Because brightness just adds to the entire range, what was black can't be black anymore: with just one unit of brightness, what was a black of zero becomes one, and all of a sudden your black becomes sort of a dirty gray. So if you're using the brightness filter, you almost only ever need it with negative values, and the goal of using brightness is to make elements that should be black, like white text on a black background for a title card, or a fade to black, that kind of stuff, go all the way down to zero; you might turn the brightness down a little bit so it all goes down to zero.
Now, typically with rendered graphics, like you're rendering out of Final Cut or whatever with a black background, it's gonna stay black throughout, so you don't have to worry about that. But if you have any kind of analog source, the luma of each pixel will be a little bit randomized, so even if they were all zeros when it was rendered, it goes out to Beta SP and comes back again and you get some zeros, some ones, some threes and fives, that kind of stuff. So you can use just a little bit of brightness, say brightness minus 5: the fives go down to 0, the twos go down to zero, the zeros stay at zero, and this sort of noisy background becomes black all the way. So a slight amount of negative brightness can really help make it crisper and add a vibrancy to it. And also, if you have a case where you have a black background that's really noisy, making it really black means it's a big rectangle of value zero over and over again, which is very easy to compress; totally random analog noise is actually kind of hard for a codec, so it'll actually encode better by doing that.
Contrast: what it does is exaggerate how different a pixel is from gray. So on total mid-gray, contrast has no effect; absolute black and absolute white get the most effect, and the closer you get to either black or white, the more effect contrast is gonna have. Now, a few years ago, when you were doing encoding with QuickTime for the web, you almost always had to add about a plus 27 contrast value in order to get your blacks to come out as black going from a video source to computer output. The good news is that QuickTime now handles all that in the background. Even so, I still see people doing this, and then you wind up with a double contrast effect and really crushed blacks and whites. So again, with really clean digital content you normally don't need to add contrast anymore, but it is useful for analog source, because again it helps push the blacks a little bit more toward black, and if you have a little bit of analog noise it's gonna flatten the whites into a flatter white, and it'll just seem a little more vibrant and encode better.
So when I'm using these filters, it's almost always because I'm trying to make my blacks blacker. My general rule of thumb is I'm gonna use one unit of brightness for every unit of contrast I'm gonna have. So if I have a choice between using a -10 brightness to get my blacks black, or a -5 brightness plus a +5 contrast, the combination of plus 5 and minus 5 will give me the same black effect, but it'll leave the whites about the same; in effect, it won't darken the image overall the way the brightness filter alone would. We'll take a look at some of that later on, but as a rule of thumb, you don't want to use only brightness or only contrast to crush your blacks; you want to use a combination of them, and that'll leave the rest of the luma range a little more intact.
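That rule of thumb can be sketched numerically. This is my own simplified model (a flat shift for brightness, then a linear scale pivoting around mid-gray 128 for contrast), not any shipping tool's math:

```python
def bc_filter(pixels, brightness, contrast):
    """Apply brightness (flat shift) then contrast (scale around mid-gray 128),
    clamping to the 8-bit range. A simplified model of the classic filters."""
    out = []
    for p in pixels:
        v = p + brightness                            # brightness: uniform shift
        v = 128 + (v - 128) * (1 + contrast / 100)    # contrast: push away from gray
        out.append(max(0, min(255, round(v))))
    return out

noisy = [0, 3, 5, 128, 250, 255]
print(bc_filter(noisy, -10, 0))   # [0, 0, 0, 118, 240, 245]  blacks crushed, whites dimmed
print(bc_filter(noisy, -5, 5))    # [0, 0, 0, 123, 251, 255]  blacks crushed, whites stay bright
```

Both settings crush the 3s and 5s of noise down to zero, but the -5/+5 combination leaves the bright end essentially untouched, which is the point of the one-for-one rule.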
The other luma filter to care about is gamma. Now, I give them in this order because all truly virtuous processing tools put the gamma filter after brightness and contrast. When you've laboriously used brightness and contrast to get your black down to zero, you want it to stay at zero, and gamma applied after that is not gonna move it off zero. If you have the gamma filter before brightness and contrast, you wind up having the gamma filter change where the zero lands, and it gets much more complex. So, anyone out there making compression tools, take note.
All right, so gamma is almost the inverse of contrast, in that it has the most effect in the middle grays and no effect at the extremes. When people say they want it to look brighter, gamma is really what they're talking about, because gamma makes mid-tones brighter but leaves blacks black and whites white. So if you're just trying to make a video clip look brighter or darker, the gamma filter is the place to start.
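A minimal sketch of a gamma adjustment on 8-bit luma (my own illustration of the principle, not any particular tool's implementation):

```python
def apply_gamma(pixels, gamma):
    """Raise or lower mid-tones while pinning 0 and 255 in place."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

# gamma > 1 brightens the mid-tones; pure black and pure white are untouched.
print(apply_gamma([0, 64, 128, 255], 1.4))   # [0, 95, 156, 255]
```

Notice that 0 stays 0 and 255 stays 255, which is exactly why gamma is the right tool for "make it look brighter" where brightness is the wrong one.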
Now, the complexity here: Macs by default use a gamma of 1.8; video usually gets a value of 2.2; Windows machines use something between 2.2 and 2.5, and it's not really well defined. It used to be you probably ought to make different files for Mac and Windows because of all this. The codecs inside QuickTime that use what's called the 2vuy color space will automatically correct for the local gamma: the file is stored internally at 2.2, and if you play it back on a Mac, QuickTime will just assume the display is 1.8; play it back on a Windows machine, it'll assume 2.5 and play back correspondingly. The complexity is that it's not actually reading the real gamma value: if you've gone into your monitor's control panel and specified a different gamma, it's gonna ignore that and just assume every Mac in the world is at 1.8; if you told the system differently, it doesn't care. Same thing with QuickTime on Windows: it has no way to get the real value, so it just assumes 2.5. But still, it's a good thing in general. So the good news is, if you want to make a single file that looks the same on Mac and Windows, you use a 2vuy codec, of which MPEG-4 is the best distribution option right now. If you are using Sorenson, and you make the file on a Mac and want it to play back well on Windows, you do need to add about a plus 30 gamma so it looks right cross-platform. Okay, the next filter we'll look at is noise reduction.
Noise reduction is kind of like a smart blur filter. The idea behind it is to try to take out the parts of the image that aren't image but noise, meaning grain or random analog stuff, and blur those out, while trying to keep the actual content (sharp lines, text, all that stuff) intact. These are hard things to do; even the best algorithms will blur more than you want them to, but it's better than just throwing a big Gaussian blur over the entire image. The way different tools implement this varies a lot: some have things like grain-killer or grain-suppress modes, which can work sometimes and not other times; there are also temporal noise reduction filters, and how well those work varies a lot.
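A toy sketch of the temporal idea (my own illustration, far simpler than any shipping filter): if a pixel barely changed between frames, assume the change is noise and average it away; if it changed a lot, assume it's real motion and leave it alone.

```python
def temporal_nr(prev_frame, cur_frame, threshold=8):
    """Blend each pixel with the previous frame only when the difference
    is small (likely noise), preserving pixels with real motion."""
    out = []
    for prev, cur in zip(prev_frame, cur_frame):
        if abs(cur - prev) <= threshold:
            out.append((prev + cur) // 2)   # small change: average the noise out
        else:
            out.append(cur)                 # big change: keep the new pixel
    return out

print(temporal_nr([10, 10, 200], [14, 10, 60]))   # [12, 10, 60]
```

The threshold is the whole trade-off: set it too high and real low-contrast motion smears; too low and the grain stays.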
The thing is, if you have source that's got really bad analog noise, you're pretty much hosed; these filters can take you from bad to nearly mediocre quality, but you're never gonna actually get good quality output by using noise reduction on noisy source. It's better than nothing, but it's much better to have clean source to begin with; you can't ever fake a clean source, just like you can never make real high definition out of standard def. Okay: audio normalization.
For the most part, audio does not require a lot of pre-processing, especially if it's already been mixed for TV or DVD or the like; it's gonna encode pretty well for the web at reasonable bit rates. The one thing I do want to do is audio normalization, if you have a clip where the overall level is just too low. What normalization does is find the loudest single sample and then either raise or lower the overall volume, keeping the dynamics intact but changing the overall amplitude to a specified value. Typically -3 dB is good for most modern codecs. Some QuickTime tools used to default to about -6 dB, because the old QDesign Music codec had trouble with things near peak, but you don't need to care about that anymore; -3 is just fine these days. You don't want to go all the way to a 0 dB peak, because a codec is an approximation: you can wind up asking it to reproduce a digital value that's not possible to express within the codec, which can give you a little audible distortion. So a good rule of thumb is -3 dB.
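The step he describes (find the loudest sample, scale everything so it peaks at -3 dBFS) can be sketched like this; my own illustration, not any particular tool's code:

```python
def normalize(samples, target_db=-3.0):
    """Scale float samples (-1.0..1.0) so the loudest peak hits target_db dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]                    # silence: nothing to do
    target = 10 ** (target_db / 20)          # -3 dBFS is about 0.708 linear
    gain = target / peak
    return [s * gain for s in samples]

quiet = [0.0, 0.1, -0.25, 0.2]
loudened = normalize(quiet)
print(round(max(abs(s) for s in loudened), 3))   # 0.708
```

Because it's a single gain applied to every sample, the dynamics are untouched; only the overall amplitude changes, which is exactly what distinguishes normalization from compression.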
You might use some other filtering, but typically that's going beyond the realm of pre-processing and off into doing some kind of serious audio work to clean up bad source, audio restoration stuff. One thing that can help is a compressor, basically making the quieter parts of the audio louder, or evening out the loud parts. If you're targeting 3GPP kinds of playback devices, playback on a cell phone, subtle dialogue is not gonna be audible for most people, so limiting the dynamic range there can help a lot. Notch filters, to take out, say, hum in the background, can work quite well; hiss removal too. Your compression tools mostly don't have this kind of filtering built in, so if you have clips that need this kind of stuff, you're diving into some kind of pro audio tool, Peak, Cubase, things like that. So that's the wordy overview; now let's go over to the demo machine. Hm, that didn't look promising... okay. So this is the interactive portion. We've got about 36 minutes left here; I've got film source, I've got DV, I've got a couple of PAL clips here, I've got every compression tool known to humanity, so I'll let you guys pick the demonstrations you want to see. Does anyone have a pre-processing kind of project they're working on, with a particular tool they want to see how to do that kind of stuff in? Has someone got a question for me, or a technique to demonstrate? We've got microphones; if you could go to the microphone back there so we can get it recorded in the audio.
Hi, I'm Scott Thompson from NewTek, and I'd just like to see some DV source, maybe, compressed down to maybe 320 by 240. Okay, do you have a particular tool you use for the processing? I use Media Cleaner once in a while, if I do that. Okay, sure, we can do that in Cleaner. Wait a minute... Cleaner. This is Cleaner 6. Has anyone from Discreet been around at the show at all? We haven't had a new release; there's a beta of Cleaner 6.0.2, about a year old now, and that's the last thing we've seen. We haven't actually had a real release of Cleaner, even a point release, in about a year. So I fear Cleaner may be done, but it's still useful for a lot of stuff, and they have a pretty good design for pre-processing. I'll open up this clip here, a NASA movie; it's a pretty interesting one. Let me play a little bit of it. Okay, so there are a couple of things to point out here. So this
is a DV file, and it has blanking on the edges; see the little black line there? And we're also letterboxed. So the first thing we want to do is crop that stuff out. This is a 4:3 source; if it were displayed at 640 by 480 we'd get an image like that, and what Cleaner lets us do is tell it, okay, show it to me as a 4:3 source clip, and it'll show it to me correctly; if it's 16:9 you can flag it like that as well. Okay, the first thing I'm gonna do is crop it. I select the crop filter here, grab it, and draw a box; that's good to go. Now, one thing you want to watch out for in clips is that if they come from a lot of different source files, you often need to wind up scrubbing through and looking to see if there are any other frames that don't match. Okay, looks good there... this seems like some archival footage, and often you'll have things where the framing varies a little bit. Those look good... oh, see, right here in this frame, even though all the other frames are good, there's actually a little bit more letterboxing in this frame, so I have to go in a little bit tighter here. Another thing to watch out for: you'll often see these monitor errors at the very edge; see a little bit of distortion there? See how the first line is a little bit whiter than the line below it? So you want to crop at least one pixel from the top and bottom, just to get rid of that distortion where the top and bottom lines are off a little bit. So here we're cropping about one or two lines from the top, and that's grabbed us our whole thing there. I'll pause it... did I get it wrong?
[Music]
All right, so, cropping... and can it constrain? Yeah, you can also just do a 16:9 if you want. If I were going out to DVD I would definitely do 16:9, but I don't know if this is quite that. So in this case, if you know you want to output to exactly 16 by 9, you can set that value, or you can even do a custom aspect ratio if you happen to know it's a particular thing; this keeps me at 16:9. And in case one frame needs different cropping from a lot of the other frames: most compression tools only let you do global settings, it's one setting overall. If you really need to do a lot of per-shot tweaking, I tend to dive into After Effects and do all those corrections there, if I need different processing on a per-frame basis, with that being pretty labor intensive and expensive. So that's how we set the crop up.
The processing side is pretty straightforward. For the deinterlace, let's say I want to try the adaptive mode; that gives me a pretty good deinterlaced image. Size it to 320 by 240, like that. Cleaner by default has a sharpen filter on, and you do not want to use the sharpen filter for most pre-processing: sharpen makes things a little crisper before you encode, but sharpen adds noise as well, so it'll typically give you more artifacts on playback, because it'll tend to exaggerate any noise along with the detail. So always leave sharpen off. The default noise reduction in Cleaner is pretty good; you can leave that on. And then check out these filters here: gamma, brightness, and contrast. The default settings are somewhat random; they were designed for Cleaner 5, and Cleaner 6 has a different processing mode for how it handles video, so you actually wind up wanting different values.
Okay, there's a great little utility you get with Mac OS X, in the Utilities folder, that I use all the time for this kind of stuff, called DigitalColor Meter. If you've got Mac OS X, I think from 10.1 on, it just comes free with the OS. What it does is, you just point at a place on the screen and it will tell you the RGB value at that point. Cleaner really ought to incorporate this built in, it would be very, very useful, but it doesn't. So what we can do here is just go over here, look at the output values, and see how they look. We can play the video, pick a frame like this, get a preview window... okay, let me switch to the preview window here.
Hello? Did I not... oh, I didn't hit apply. Yes, I haven't used Cleaner as much lately; the tool that was dominant through my career for a long time has been getting so buggy these days that it's kind of hard to use it the way I used to. Well, I hit apply; this may be one of the bugs I was talking about. All right, it's still showing us the full frame... it's just not keeping that setting for some reason. Oh, I know: it's constraining, because it's scaling to what the crop said. Okay, there we go.
So this is where you can pick up some of these black level issues. In the output here you can see these values actually aren't all the way down to zero, and our goal, as I mentioned before, is to have the black levels down to zero; I'm gonna show how we do that in a little bit. So these are the default, somewhat random settings in Cleaner; I'll turn those off. By default we get values of 8 and 9 and that kind of stuff, and a good starting point I tend to use is minus 5 brightness and plus 5 contrast; that's not a bad starting point. That puts those values at zeros and twos, quite a lot better. Yeah, maybe I'll take it to an 8 and a contrast of 8... and it's a couple of ones and zeros around there. Okay, there are a couple of errant values left, but it's close enough to live with; I suspect they were doing that kind of processing on their end. So your blacks are good. You also want to make sure that you're going back and checking further frames, because Cleaner and other tools, Compressor included, only give you a global setting... oh, did you forget my crop? It's just killing me here.
All right... of course we're doing 16:9, so it's actually 320 by 180; I confused myself by giving you 320 by 240 there. Okay, so here's before and after; the A/B slider shows the effect of the image processing. There you're seeing the effect of having thrown in that brightness and contrast; it looks a little bit darker overall. Ideally, if you're doing image processing, it shouldn't feel like you're modifying the video; it should feel like you're peeling a layer of grime off it. That's the effect you're trying to go for; it shouldn't seem overly dark, that kind of stuff. So, before and after: it's a little bit darker, but there's a little more richness to it; the original video had kind of weird black levels. All right, that looks pretty good, and I'm pretty happy with that as my image. Then on the audio side, I can just do a normalize to 90%, and 90% in Cleaner is about minus 3 dB, so we're good to go. And that's pretty much all we need to do for pre-processing in Cleaner. So did that look like what you were talking about? Do you need any questions answered about that, or specifics? All right, cool. Next, over there.
Yes, we are aware of the normalize function in Cleaner, and I'm also aware of it in Logic, for example, but we have a lot of video coming in to Final Cut Pro, and our users really don't know how to handle normalization. Are you aware of a plug-in, an audio plug-in for normalization, that we could plug into Final Cut Pro? Because we have a lot of dirty audio coming in with video, and our users don't really know how to handle normalization manually. I can't imagine that someone hasn't done one of those, but I can't name one off the top of my head. Does anyone know of a normalize plug-in for Final Cut? Say that again... oh, Waves. Yeah, Waves. But is it really a normalize plug-in, as simple as this? Because Waves usually has too many variables. Yeah, you can definitely do it: basically, normalization is like a compression where you're not touching the dynamic range at all, so if you have a compression filter where you can turn the dynamic-range adjustment down to nothing and leave everything else alone, you'll get the same effect. Yeah, Waves does a good job; we've used the L1 for years. Okay. It's also perfectly reasonable to just take the audio into another audio tool, pre-process it there, and then import it back at that point. I mean, something simple like the version of Peak that comes bundled with Final Cut can do a normalize just fine. So yeah, with Final Cut you have that at least.
Hi, my name is Daniel Benner, University of Texas. I recently had a project where we had a bunch of source video that was shot in 1985 on S-VHS, and I wanted to know if you could suggest some best practices for importing that in a way that it can be edited in Final Cut without having to render all the time. Okay, so you've got S-VHS tape. It's pretty straightforward. I'm in love with the AJA Io systems: for a grand or so they have the smaller one, which is analog-only (there's also a full one with SDI and analog). So you get an Io, it'll take your S-video and your XLR audio, and if you have a professional or industrial S-VHS deck, like my Panasonic, which is device-controllable with XLR audio out, you plug that into the Io, and it goes over FireWire into your Mac, into Final Cut. You'll be able to do device control and uncompressed capture with balanced audio off your S-VHS tape straight into Final Cut, and then on a G5 you have multiple real-time effects on uncompressed.
Rather than capturing with the uncompressed codec, would it be bad to take the S-video from the S-VHS deck, go into the back of a DV deck, and then capture via FireWire? That one, yes. Because the DV codec is only 25 megabits and uses a very limited form of the color space. DV is a fine acquisition format; if you're just shooting things in the world with DV, it's fine. But if you try to convert anything to it that's got analog noise in it, or motion graphics, that kind of stuff, the DV codec is really not robust enough to be the second generation of anything. So yeah, you can do it, it would work, you'd get video out of it, but the rig I'm talking about will give you substantially higher quality. And also, if you have a good S-VHS deck, then a good time base corrector and all those analog things which most of us have unfortunately forgotten about come back into play. Doing good analog means you spend a bundle on cables and proc amps and all that kind of stuff. But it's not too bad; with the AJA Io system I'm actually very pleased: it's a straightforward, cheap setup you can plug into a laptop, all that kind of stuff, for doing that kind of work. Does that convert it to DV? It can if you want to convert; you can leave it as uncompressed 8-bit or 10-bit, or it can convert to DV50, DV25, whatever you want, and Motion JPEG, I think. Thanks. Great.
Hi, I'm from Russia. I'm using Cleaner all the time, but I'm concerned about its future. What kind of equivalent product would you advise? The products that are actively under engineering for Mac QuickTime encoding tools: obviously there's Apple's Compressor, which is pretty good for some stuff, but it's got some limitations around capabilities like 2-pass VBR with Sorenson and that kind of stuff. As the H.264 codec becomes dominant in QuickTime over the next year, I would expect Compressor to become relatively more useful, because it'll have access to a codec that's a lot more competitive than what it has right now. Sorenson Squeeze has a major, major new version that's been announced; it just went into beta and should be out in a few months. Squeeze 4 is a pretty powerful tool really aimed squarely at the Cleaner space, and Sorenson is working hard to make that work; they've got H.264 and all kinds of stuff in there, so that looks pretty promising as well in a few months. And there's Popwire's Compression Master, which is out now and is really good: it's mainly an MPEG-4 tool, it can make great MPEG-4 inside both .mov files and .mp4 files, and also 3GPP files for the handset stuff. If you want to use the MPEG-4 codec in almost any flavor, Compression Master is my favorite tool right now. But if you want to make really good MOV files, Cleaner is still the best thing out there; it's got some unique features only it has, you know, peak data rate limiting for CD-ROM capacity, an automatic audio sync fix for B-frame content, that kind of stuff. So as long as I need to create legacy QuickTime content, I'm gonna keep on using Cleaner, keep it around on the hard drive for years to come, even though it seems unlikely at this point that I'll ever see any more releases for it, or even bug fixes. I don't know; Discreet says it's got some engineers working on something related to Cleaner, but they won't say what, or how, or even which version, that kind of stuff. The beta of 6.0.2 came out almost a year ago, and they haven't even released it for real yet; it's still been in beta all this time, and that was required for Panther compatibility. We're talking about Tiger now, so it's not a very good sign. Thank you.
Yeah, so, we're getting there. A comment about the VHS-to-DV thing: the early Sony decks and things that have that pass-through do head-to-head transfer without cables, and it has a TBC and some other things in there too, so it actually does some sweetening of the signal as well. Yeah, so there are ways to make it work with DV; it's just the DV bitstream itself. I mean, if you're on a budget and you just need the video in any form, it can work, but if what you care about is really high quality, there are some limitations. I've actually got a project here, my infamous "VHS ugly" file, where I decided: what is the worst, nastiest analog garbage you can imagine? Which was hard, actually; I had someone make me a fourth-generation EP-mode VHS dub, as bad as you could possibly get, on purpose. It was a problem, too, because, you know, I have a decent S-VHS deck, but it only plays SP-mode tapes; no professional deck ever supports the LP modes. I had to go find an actual consumer VHS deck, which I hadn't used for years, drag it up from my basement, and capture from it; and I captured via composite just to make it extra special. So I can talk a little bit about what we can do to make this thing better, which is not a whole lot. Let me think here... yeah, it's a bad clip, isn't it? It's got some good music with it, too.
One thing about video that's important to realize is that no matter how bad the frames are, the motion is always really good, even on a really bad home-edition S-VHS thing. So the thing you really do with bad quality video like this is make the frame size small and keep the frame rate high, because you've got sixty fields a second of motion here. Say 320 by 240 at 60 frames a second: you shrink it down a lot, and that'll help average out some of that stuff. So let me drop into After Effects and show you what you could possibly do to make this interesting clip work. After Effects is overkill for a lot of commercial stuff, but if you need to do weird kinds of video processing, it's still kind of the Swiss Army knife tool, and the new version is pretty good; it actually comes with the Synthetic Aperture Color Finesse plug-in, which is kind of beyond pre-processing, but if you need to take a shot that was shot badly and really clean it up, it's a really wonderful plug-in. It's also available for Final Cut, I believe, though not bundled. So let's just open up this horrible piece of tripe here. Okay.
Yeah, question... is it lower field first? I don't know; I think it's lower field first. So, when you're working with content inside After Effects, "preserve edges (best quality only)" means it does a smarter deinterlace. It's not a clearly defined thing, but if you're using After Effects for pre-processing, you want that turned on. You can also do 3:2 pulldown removal, in a very nasty UI where you have to guess the cadence; 6.5 is better than the past versions, but After Effects totally cannot deal with any kind of clip with a cadence break. So if you have a two-hour movie and there's one field that's off in the middle of it, it can't do inverse telecine; the entire clip you work with has to have a perfectly straight cadence from the start through the entire thing. You guessed it... no one does? No, me either.
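For reference, 3:2 pulldown spreads 4 film frames (A B C D) across 10 video fields as AA BB BC CD DD, and inverse telecine just reverses that mapping. A sketch of the idea, assuming a clean, unbroken cadence (which, as he says, is exactly what After Effects requires):

```python
def inverse_telecine(fields):
    """Recover 4 film frames from every 10 fields laid out in the
    standard 2:3 cadence: AA BB BC CD DD."""
    frames = []
    for i in range(0, len(fields), 10):
        group = fields[i:i + 10]
        if len(group) < 10:
            break
        # Field positions holding one clean copy of each film frame.
        frames.extend([group[0], group[2], group[5], group[7]])
    return frames

fields = ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(inverse_telecine(fields))   # ['A', 'B', 'C', 'D']
```

One field out of place shifts every position after it, which is why a single cadence break partway through a clip defeats this kind of fixed-pattern removal.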
All right. So let's say I was going to try a really small version of this; I can make a 640 by 480 comp to start. Now, one neat thing you can do is actually make the composition 60 frames a second: if you have interlaced source video, that 29.97 interlaced is actually 60 fields a second of data, and I can make 60 progressive frames out of that on output, really 59.94, like that. Once I've done that, I'm actually able to preserve the full motion.
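What he's doing here, one progressive frame per field so the frame rate doubles, is basically bob deinterlacing. A toy sketch (my own illustration), with each frame stored as a list of scan lines:

```python
def bob_deinterlace(frames):
    """Turn each interlaced frame into two frames, one per field,
    rebuilding full height by line-doubling each field's own lines."""
    out = []
    for frame in frames:
        for start in (0, 1):                 # 0 = top field, 1 = bottom field
            field = frame[start::2]          # take every other scan line
            full = []
            for row in field:
                full.extend([row, row])      # line-double back to full height
            out.append(full[:len(frame)])
    return out

clip = [['t0', 'b0', 't1', 'b1']]            # one 4-line interlaced frame
print(len(bob_deinterlace(clip)))            # 2: the frame rate has doubled
```

Real filters interpolate the missing lines rather than repeating them, but the structure is the same: every field becomes its own output frame, so 29.97i in becomes 59.94p out.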
So if your machine can handle doing things in After Effects, you can work at full res in the comp and then do a nested comp at the final output resolution, or just set the comp to the output resolution. It's often most informative to actually make your comp the size you want to output, so if you're making a little web video thing like this, work at the lower size; you just need to go through and obviously deal with the scaling right. And since scaling in After Effects is kind of weird, you don't really crop per se; you just scale it up and position it right to achieve the crop you want. That's not quite enough; we'll do 51, like that.
Another thing about this clip: it's got what's called tearing, VHS noise, which is all this stuff at the bottom. It's kind of off at an angle here, and when you play it back it looks stupid; you get this bar of VHS garbage at the bottom of the picture. So I'm gonna zoom in a little bit here... so, 1500 left... yeah, pretty straightforward, and we'll get something like that.
The feature that people don't steal enough from After Effects is the levels filter. This is probably the best visual processing filter in the history of humanity, because it gives you an integrated histogram with black and white points. So if you have a video clip that's not all the way to black or all the way to white, you're actually able to just see it in the histogram: okay, this clip here is full range, it's not a problem, but you can look at your mid-tones and say, okay, I actually want to be over there; or if there's a floor of noise, you know where you want the black point to be, all that kind of stuff. It gives you really nice feedback. This one is full range, so it doesn't need it much, but generally for video like this a little gamma boost often works well; 1.1 on the gamma gives it a little more presence. You know, it depends on the clip.
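The levels operation itself (black point, white point, and gamma in one pass) can be sketched like this; a simplified model of the filter he's describing, not Adobe's actual code:

```python
def levels(pixels, black=0, white=255, gamma=1.0):
    """Map the [black..white] input range to the full 0..255 output range,
    with a gamma adjustment on the mid-tones in between."""
    out = []
    for p in pixels:
        v = (p - black) / max(1, white - black)   # normalize the input range
        v = min(1.0, max(0.0, v))                  # clamp outside the points
        v = v ** (1 / gamma)                       # mid-tone gamma
        out.append(round(v * 255))
    return out

# Raise the black point to 16: the noise floor at 5 and 16 goes to true black.
print(levels([5, 16, 128, 255], black=16))   # [0, 0, 119, 255]
```

Setting the black point just above the noise floor you read off the histogram does in one control what the brightness/contrast combination approximated earlier.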
Yeah, we're there now. There's really only a little you can do on this; it's never gonna look good, but actually at a hundred percent size, scaled down, it isn't as horrible as you could imagine. Let's do a preview there. The one problem is you have a lot of difference per frame; there's temporal noise, because the noise varies from frame to frame, and you can sometimes get better results, if you've got the Pro version, with the Remove Grain filter; on some kinds of video noise it can actually do a decent job. I won't belabor you with the incredibly complex edits and things you can do with it.
you can see inside the the preview box
there and get a little bit a little bit
less effect there and get turn anymore
and the word that really comes in is
because grain is ran if every frame or
using the grain suppression of filters
so try to find errors that are totally
different for him to frame and suppress
those leaving with the actual underlying
motion is and again it's already be
perfect but it'll give you a little bit
their quality and take out some of those
errors there and so if you have and this
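The frame-to-frame randomness he describes is exactly what a temporal filter exploits. Here is a deliberately crude sketch, a per-pixel median over three frames (real grain-suppression filters like Remove Grain are motion-compensated; this one is not):

```python
import numpy as np

def suppress_grain(frames):
    """Crude temporal grain suppression: grain is random from frame to
    frame while the picture is correlated, so a per-pixel median over a
    3-frame window keeps the image and rejects uncorrelated noise."""
    frames = np.asarray(frames, dtype=np.float32)
    out = frames.copy()
    out[1:-1] = np.median(
        np.stack([frames[:-2], frames[1:-1], frames[2:]]), axis=0)
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
still = np.full((8, 64, 64), 128.0)                        # a static scene
noisy = np.clip(still + rng.normal(0, 20, still.shape), 0, 255)
clean = suppress_grain(noisy)
```

On a static scene the median frames land much closer to the true value than the noisy input; on real footage, motion makes the trade-off harder, which is why he says it's never perfect.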
And this selection will give me a full 60 frames a second of output. So if you do have, you know, that old content you're trying to put on the web, emphasize the frame rate, because you've got so much frame rate, and just shrink it down to the point where the artifacts in the content kind of disappear. But it's always gonna be garbage in, garbage out; for really bad source you can't make it good again, you can just make it less bad. Even mediocrity is typically out of reach unless you're gonna rotoscope the whole thing, really creatively use the source to paint over; effectively that's the only way to do it in a lot of cases. Okay, next question.
Yeah, the microphone. Hi Ben, just a quick question: you mentioned MPEG-4 a couple of times, but do you mean MPEG-4 Part 2 there?

Yeah. Right now, when I say MPEG-4, I mean MPEG-4 Part 2, and if I mean MPEG-4 Part 10 I'll say H.264. That'll probably change over time; five years from now, when we say MPEG-4, we'll mean H.264, since Part 2 never got all that much traction. Clearly the entire MPEG-4 industry is waiting for H.264, and that's gonna be the mainstream application of it, because it's just so much better. But today QuickTime is only Part 2, Simple Profile, so that's what we're using for stuff right now.

Thank you.

Let me turn this off, because it's distracting when I play back video.
Yeah, one thing to bear in mind is that Apple's built-in MPEG-4 encoder in QuickTime is, to put it charitably, more speed-optimized than quality-optimized. So even if you want to make a .mp4 file, or an MOV file with the MPEG-4 codec, there are other tools, like Squeeze and Compression Master, that'll give you a lot better quality at lower bit rates than Apple's encoders. Even if people have had questions about MPEG-4's compression quality, there are other tools making compatible bitstreams that can give you better results. Both Squeeze and Compression Master have a two-pass encoding mode; you can tell it to go really slowly at really high quality, while the QuickTime encoder is really tuned for, like, massive sessions in real time, real-time broadcast kinds of applications. It works great for that, but it doesn't have a slow-and-sweet mode, which some third-party products do.
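The payoff of two-pass encoding can be sketched in a few lines: pass one measures how hard each frame is, pass two spends the bit budget in proportion. This is the idea only, not how Squeeze or Compression Master actually implement their rate control:

```python
def allocate_bits(complexities, total_bits):
    """Miniature two-pass rate control: the first pass records each
    frame's complexity, the second pass divides the bit budget in
    proportion, so hard frames get more bits than easy ones."""
    total = sum(complexities)
    return [total_bits * c / total for c in complexities]

print(allocate_bits([1.0, 3.0, 1.0], 1000))  # [200.0, 600.0, 200.0]
```

A one-pass real-time encoder can't do this, because it has to commit to a bit rate before it has seen the hard frames coming.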
So you can make a QuickTime-compatible MPEG-4 file at a lot better quality than a lot of the stuff we see out there right now, which just uses the Apple exporter. Okay, who's next?
Someone's got to ask for demos, otherwise I won't have anything to show in Compressor. Oh, here we go.

Okay, so: an example of using a mask to do some specific compression, where you're masking out...

A mask, like, you mean...?

Yeah, maybe I'm not saying that correctly, but to soften a portion of the image field so that it compresses...

Yeah. It's almost never worth it in the end, because you're tuning the compression of multiple frames as if they were single images; you'd need something like motion tracking to keep the mask in place. I honestly haven't had a case of video where that was worth doing for years, because typically any kind of error in the mask is gonna hurt overall. I mean, sometimes I do do per-frame processing.

Actually, what I'm asking for, a good example might be if you had a video of a talking head, and they shot it against a moving background, like leaves blowing in the wind, and it's a constant shot. You might use After Effects to mask the person out and blur the background, so it looked like it had a shallower depth of field or something.

Yeah, I mean, you'd do that like you would do it in After Effects. Actually, in After Effects, I did have a case last fall, on this high-def project, where I needed to repair damaged D-5 tapes: there were some macroblock errors, so you'd have like a 16 by 16 block of the frame where the video wasn't right, often in only one field. In After Effects I actually got to rotoscope some of that; you can go in and hand-paint with the clone stamp tool on a per-frame basis, and it's surprisingly powerful. I mean, on a dual G5 you can do really good real-time rotoscoping kind of stuff with it, so that's definitely in there. I personally use After Effects a lot, so I don't really do much mask stuff.
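For the curious, the talking-head idea from the question looks roughly like this. A hypothetical sketch: the crude block-average blur stands in for a proper Gaussian, and the mask is assumed to be supplied (hand-drawn or tracked in After Effects):

```python
import numpy as np

def cheap_blur(img, factor=8):
    """Very cheap blur: block-average down by `factor`, then blow back up."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def soften_background(frame, mask, factor=8):
    """Blur everything outside the mask (mask 1 = keep sharp).
    A flat background costs the codec far fewer bits than moving detail."""
    return np.where(mask > 0, frame, cheap_blur(frame, factor))

rng = np.random.default_rng(2)
frame = rng.random((64, 64)) * 255                   # noisy "leaves" background
mask = np.zeros((64, 64)); mask[16:48, 16:48] = 1    # the "talking head"
out = soften_background(frame, mask)
```

As he says, in practice keeping the mask registered over hundreds of frames usually costs more than the bits it saves.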
Kind of a question: I'm wondering if you could pontificate on this. As content creators transition to using HD and much higher-quality cameras and acquisition, like the Varicam, working in and shooting progressive, square pixels, giving us much higher-quality content to start with, will we be out of a job?

Pre-processing for HD is much easier than for standard definition, because HD is always analog... sorry, always digital, and analog is half the problem. Pre-processing non-square pixels is a big issue as well, and HD is almost always square pixel, which makes it a lot easier. But you've still got some complexities, you know; there's converting from 720 to 1080.
And computer playback: everyone who's showing off hi-def computer-based playback today, it's all 24p. If you have 60i source, you typically have to deinterlace, something like that, because today's codecs, and computer decoders, don't do a great job with interlaced content. MPEG-2 will work, but it takes a pretty beefy machine, and real-time MPEG-2 deinterlacing to a computer screen for 60p playback is a nightmare. So there's still stuff you need to do: deinterlace it all, sometimes add a little bit of letterboxing if it's more than a 16 by 9 aspect ratio.
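The 60i-to-progressive step he mentions is, at its simplest, "bob" deinterlacing. A toy version, line-doubling each field instead of the real interpolation a production filter would use:

```python
import numpy as np

def bob_deinterlace(frame):
    """Bob deinterlacing: split one interlaced frame into its two fields
    and stretch each back to full height, so 30 interlaced frames per
    second become 60 progressive ones."""
    top = frame[0::2]      # field one: even lines
    bottom = frame[1::2]   # field two: odd lines
    # Line-double each field (a real filter would interpolate).
    return np.repeat(top, 2, axis=0), np.repeat(bottom, 2, axis=0)

interlaced = np.arange(480 * 640, dtype=np.int32).reshape(480, 640)
f1, f2 = bob_deinterlace(interlaced)
print(f1.shape, f2.shape)  # (480, 640) (480, 640)
```

Doing this well (and in real time, at HD sizes) is exactly the "pretty beefy machine" problem he describes.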
It just occurred to me that the difference between what a less experienced user might get from compressing a really high-quality HD source down to web with, you know, the default settings in some of these tools, versus what we might be able to get, that difference is much smaller with much higher-quality source.

Yeah, though there are subtleties. There's also the fact that HD and SD use different color spaces; bear in mind that HD uses the 709 color space, and SD and almost all computer playback use 601, so you actually have to do a transform in there to get the colors to come out matching perfectly accurately. That's handled transparently in a lot of tools, but not always, so having to get that right is an issue.
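The 601-versus-709 mismatch comes down to different luma coefficients. A minimal sketch of the re-matrix, using full-range math only and ignoring the chroma subsampling and video-range offsets a real converter must handle:

```python
import numpy as np

def ycbcr_matrix(kr, kb):
    """RGB -> Y'CbCr matrix for the given luma coefficients (full range)."""
    kg = 1.0 - kr - kb
    return np.array([
        [kr, kg, kb],
        [-kr / (2 * (1 - kb)), -kg / (2 * (1 - kb)), 0.5],
        [0.5, -kg / (2 * (1 - kr)), -kb / (2 * (1 - kr))],
    ])

BT601 = ycbcr_matrix(0.299, 0.114)    # SD and most computer playback
BT709 = ycbcr_matrix(0.2126, 0.0722)  # HD

def convert_601_to_709(ycbcr):
    """Decode with the 601 matrix, re-encode with 709."""
    return BT709 @ (np.linalg.inv(BT601) @ ycbcr)

pixel = np.array([0.5, 0.1, -0.2])    # [Y', Cb, Cr]
print(np.round(convert_601_to_709(pixel), 3))
```

Neutral grays pass through unchanged, but saturated colors shift visibly if a tool skips this transform, which is why getting it right matters.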
Another thing that happens is that a lot of HD for playback gets compressed horizontally; you can encode at 1440 by 1080 just to make it a little bit easier to do.
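That horizontal squeeze is just a pixel aspect ratio; a 4:3 PAR is what turns 1440 coded samples into a 1920-wide picture (illustrative helper):

```python
def display_size(coded_w, coded_h, par_num, par_den):
    """Display dimensions of anamorphically coded video: the coded
    width times the pixel aspect ratio."""
    return coded_w * par_num // par_den, coded_h

print(display_size(1440, 1080, 4, 3))  # (1920, 1080)
```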
Yeah, I mean, HD is massively easier to compress than standard def. In the real world you need a lot more computer power for it, but, you know, that's coming along as well. It's amazing to me; part of it is that I'm pretty much only working on the hard stuff, so I've had some HD projects that required a lot of pre-processing, but in general, yeah, you can pretty much drag it in and say go, and you're done, because the content is almost always progressive, and it's almost always full frame and square pixel, all that kind of stuff.