WWDC2014 Session 508

Transcript

>> Good morning and
welcome to session 508.
I'm Brad Ford.
I'm an Engineer in the
Camera Software Group.
And hopefully you're not
here to learn about Swift,
because we're going to
talk about cameras today,
specifically camera developments
on Yosemite and iOS 8.
If you want to hear about that
you've come to the right place.
If you're new to Camera
Capture in general on OS X
or iOS we invite you to review
our past sessions from WWDC.
They're available right
within your WWDC app or online
at developer.apple.com.
They provide great background
information for today's talk
and they also show you the
progression of our API set
over the last four years.
And seeing as we're so close
to lunchtime we thought we'd
present you a little menu
of our own.
We'll begin with
a light appetizer
of Yosemite camera developments
and iOS updates followed
by our manual camera
controls main course
and then we'll finish things off
with some tasty bracketed
capture for dessert.
There is a lot to digest
here so let's get going.
First up is Capture in AVKit.
AVKit is sort of like
AppKit except it's for audio
and video thus the name AVKit.
It is to AV Foundation as
AppKit is to Foundation.
So, it provides view
classes and standard UI
for common media operations
like media playback
and now for capture as well.
Here's a first look at
what AVCaptureView looks
like on OS X.
It's a standardized UI for
capture and it's built on top
of AV Foundation's
capture classes.
If it looks similar to QuickTime
Player 10, that's no mistake.
That's because QuickTime Player 10
actually uses an AVCaptureView
in Yosemite.
Let's take a quick look
around the feature set.
It provides a nice HUD
with standard UI for record
and volume controls and
an optional drop down menu
for audio and video
capture device picking.
Now, for a quick refresher
on how AVFoundation's
capture classes work.
At the center of our
capture universe is an AV
capture session.
That's the object that you
tell to start and stop running.
It doesn't do anything
very interesting, however,
until you provide
it with some input.
We represent these
as AVCaptureInputs.
Here I have an AVCapture camera
and a microphone as devices
and the data needs
to flow somewhere.
And we represent these
as AVCaptureOutputs.
Here I have a concrete
AVCaptureMovieFileOutput,
which is used for
writing movie files.
The connections from inputs to
outputs flow through the session
and are referred
to as AVCaptureConnections
in our API set.
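As a rough sketch of that object graph (in modern Swift, which is not how it appeared in 2014; error handling is trimmed and the default-device choices are just placeholders):

```swift
import AVFoundation

let session = AVCaptureSession()

// Inputs: capture devices wrapped in AVCaptureDeviceInputs.
if let camera = AVCaptureDevice.default(for: .video),
   let cameraInput = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(cameraInput) {
    session.addInput(cameraInput)
}
if let mic = AVCaptureDevice.default(for: .audio),
   let micInput = try? AVCaptureDeviceInput(device: mic),
   session.canAddInput(micInput) {
    session.addInput(micInput)
}

// Output: a movie file output; the session forms the AVCaptureConnections
// from the inputs to this output for you.
let movieOutput = AVCaptureMovieFileOutput()
if session.canAddOutput(movieOutput) {
    session.addOutput(movieOutput)
}

session.startRunning()
```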
Now, let's refer all this
back to the new AVCaptureView.
How does that all work?
Well, in the default
case, the simple case,
you just instantiate an
AVCaptureView and all
of this is taken
care of for you.
You either instantiate it
or drop it into your NIB
and it will manage that
AVCapture session for you.
All you need to do is implement
a single delegate method
to make recording work
and it looks like this.
There's a single
method, which gets called
when someone clicks
on the Record button.
In the simple case all you need
to do is call
startRecordingToOutputFileURL
on the file output and you're done.
You have a fully
functioning recording UI.
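A minimal sketch of that delegate method in Swift (the Swift spelling of the delegate method has changed across SDK versions, and the temporary-file destination here is just an example):

```swift
import AVKit
import AVFoundation

class RecordingController: NSObject, AVCaptureViewDelegate,
                           AVCaptureFileOutputRecordingDelegate {

    // Called by AVCaptureView when the user clicks the Record button.
    func captureView(_ captureView: AVCaptureView,
                     startRecordingTo fileOutput: AVCaptureFileOutput) {
        // Example destination URL; pick your own location.
        let url = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("capture.mov")
        fileOutput.startRecording(to: url, recordingDelegate: self)
    }

    // Required by AVCaptureFileOutputRecordingDelegate.
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // Handle the finished movie file here.
    }
}
```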
The second case is the custom
or slightly more
complicated mode of running.
In this mode of running
you provide your own
AVCaptureSession
configured to your liking.
So, you could set it with a
custom AVCaptureSession preset,
a custom frame rate,
anything that you'd like.
AVCaptureView will
still manage the inputs
for you and provide the UI.
That's it for AVCaptureView.
Let's move on to a great new
feature called iOS Screen
Recording on OS X.
Try to wrap your
brain around that.
New in Yosemite you can plug
your iPhone or iOS device
into a Mac and it shows
up as a selectable camera.
So, you can do stuff like
this in QuickTime Player.
You have the standard
recording UI.
[ Applause ]
And you can record what's
happening on your iOS screen
and then you can publish
for instance a how-to video,
give something to your mom to
show her how to do something
or you can do an app
preview, which we'll talk
about more in a minute.
There are some special
considerations, however.
First of all, iOS
devices are presented
as CoreMedia IO DAL
plug-ins the same way
as any third party
camera interface.
But, we consider them special,
because they're really a screen
grabbing device not a live
camera feed.
So, we don't want to
confuse shipping apps
by having this weird camera show
up suddenly and unexpectedly.
Therefore, you need to opt
in to get this behavior
to see iOS devices as
AVCapture devices in your app
and this is how you do it.
There's a single property
call that you make
on the CoreMedia IO
system object telling it
to allow screen capture devices.
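That property call looks roughly like this (a Swift sketch against the CoreMediaIO C API; the constants and function are the real ones, the wrapping is mine):

```swift
import CoreMediaIO

// Opt in to seeing plugged-in iOS devices as capture devices.
var prop = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyAllowScreenCaptureDevices),
    mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
    mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster))
var allow: UInt32 = 1
CMIOObjectSetPropertyData(CMIOObjectID(kCMIOObjectSystemObject),
                          &prop, 0, nil,
                          UInt32(MemoryLayout<UInt32>.size), &allow)
```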
Once you've opted in, if
you iterate through the list
of AVCapture devices you'll
find your iOS devices there.
And as you plug them in or
unplug them they'll come and go.
There is in fact, a whole
session devoted to this topic
and it's tomorrow at
3:15 in Russian Hill.
Also, if you'd like to
learn more specifically
about how the DAL part
works, we invite you to come
and visit us in the labs.
All right, that's
it for Yosemite.
Let's move on to our iOS
8 Capture Enhancements.
First up, machine-readable
codes aka barcodes.
Last year we introduced support
for barcode detection during
real-time video capture
and we support a long list of
barcodes as you can see here.
In iOS 8 we're supporting
three new symbology types,
Data Matrix, Interleaved
2 of 5 and ITF14.
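If you already have a metadata output on your session, picking up the new symbologies is a matter of adding the new type constants (a hedged sketch; `session` and `myDelegate` are assumed to exist elsewhere, and the spellings shown are the modern Swift renamings):

```swift
import AVFoundation

let metadataOutput = AVCaptureMetadataOutput()
if session.canAddOutput(metadataOutput) {
    session.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(myDelegate, queue: .main)
    // The three symbologies added in iOS 8, alongside an existing one.
    metadataOutput.metadataObjectTypes = [.qr, .dataMatrix, .interleaved2of5, .itf14]
}
```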
Next up, we're continuing
with our efforts
to provide greater
transparency to users.
If you'll recall in iOS
7, for the first time,
these dialogues showed up.
The first time your app tries
to use the microphone you
get the microphone dialogue
and in some regions the
first time your app attempts
to use the camera, it would
show the second dialogue.
We only showed that
dialogue in regions
where it was required by law.
In iOS 8, however, we're
requiring user consent
to use the camera or
mic in all regions.
A couple of reasons for that,
it's a good thing for you
as developers, because you
get a more consistent behavior
across all regions.
It's easier to debug your code.
And for us it's great, because
we have a more consistent
platform experience
and people feel
like their privacy
is being respected.
You can refer back to
last year's session 610
for coding examples of how
to deal with these dialogues.
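For completeness, one hedged sketch of handling that consent flow with the AVCaptureDevice authorization calls (the `startCamera()` helper is hypothetical):

```swift
import AVFoundation

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    startCamera()                          // hypothetical helper
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if granted { startCamera() }
    }
default:
    break                                  // denied or restricted; show your own UI
}
```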
All right, I hope you
enjoyed the appetizer,
because it's time
for the main course.
We've done this for
four years now, I think.
AVFoundation came
out four years ago.
And so for four years you've
been coming to the labs.
You've been talking to us,
filing enhancement requests,
filing bug reports and
we read all of them.
There have been some
really interesting
enhancement requests.
In fact, there have been a
lot of enhancement requests.
No, like a lot of enhancement
requests and it turns out a lot
of you want the same
things and we listen.
We read all of those bug reports
and so we know what the
majority of you want.
Your top two feature
requests were direct access
to the H.264 video encoder
during real-time capture
and manual camera controls.
Good news, you're getting
both of them in iOS 8.
[ Applause ]
So, H.264 video encoder.
We're introducing
support for access
to the hardware H.264 video
encoder via the video toolbox
APIs, which have already
been available on Mac OS X,
but now they're available
on iOS as well.
What this means for you
as a capture client is
if you're using a video
data output you get
uncompressed buffers.
Now, you can compress them.
You can do I-frame
insertion, bitrate adjustment,
choose what kind
of GOPs you want,
a whole lot of features
at your disposal.
There is in fact, too much
to talk about at one session
so we're going to do it again
tomorrow, Thursday 11:30
where we'll just talk about the
specifics of H.264 encoding.
Don't miss it.
Now onto the meat of the matter,
which is manual camera controls.
Our aim here is nothing short of
making iOS the premier platform
for computational
and pro photography.
Now, when I say manual
controls what does that evoke?
What sort of picture
comes to your mind?
Is it something like this,
lots of dangerous
looking knobs and buttons?
Well, it's true.
Manual camera controls
aren't for everyone.
It's the age-old problem,
automatic versus manual.
Well, our automatic controls
work great for most apps
and manual controls
offer a greater degree
of creative control and
more freedom to experiment.
But, with great power
comes great responsibility.
So, while we're providing the
power you must provide the
responsibility and
the common sense.
The manual shifter sure
makes for a fun drive,
but it's not going to prevent
you from going from second gear
into fifth or forgetting
to push the clutch in
and grinding your gears.
You see where I'm
going with that.
All right, we're going to talk
about four things in particular,
focus, exposure, exposure
compensation and white balance.
Again, referring to our
diagram of AVCapture objects all
of the manual control
APIs I'm going to talk
about are implemented
in a single class,
the AVCaptureDevice.
First up is manual focus.
Focus refers to the sharpness
of objects in the frame
and we have a great
autofocus mechanism.
Its job is to try to keep the
most important things sharp.
But, with manual focus you've
got some more creative control.
For instance, you could allow
a pro photographer to soften
up the image such as right here
or do a dramatic focus pull.
You as a developer might want
to develop your own
focus stacking algorithm,
so you can pull different
objects in and out of focus.
If you're a scientist or a
writer of medical applications,
you might want
to programmatically
move the lens position
around for experiments.
Here's how it works.
You've got a subject, here
it's the candle on the left
and you've got a
sensor on the right.
In the middle there's a lens
and the job of the lens is
to focus light rays
onto the sensor.
Focus is altered by
moving the lens nearer
or farther from the sensor.
And the farther the
lens is from the sensor,
distant objects look sharper.
So, here we've got a problem.
The candle image is blurry
because our lens is focusing
light rays from the candle
in front of the sensor.
But, as we move the lens
closer to the sensor we see
that the candle image
becomes sharp,
because the light rays
converge in the right place.
Now, let's talk a
few focus terms.
First is depth of field.
This is the distance between
the nearest and farthest objects
that can be judged
to be in focus.
At the near end we have
what's called macro
and that's the closest distance
at which the lens can focus.
At the far end is infinity.
Somewhere in between there is
a sweet spot called hyperfocal
distance and that's the distance
that maximizes your
depth of field.
Because if you find this
position and focus there,
everything from infinity to half
of the hyperfocal distance
is going to be sharp.
The last is lens position
and that's what makes
all this magic happen.
When you move the lens position
you are moving the distance
of the lens from the sensor
and therefore altering focus.
Now, quickly let's talk
about what you can do
already in iOS 7 and earlier.
We provide great automatic
controls in a AVCaptureDevice.
Three modes: locked,
one shot autofocus,
which sweeps the lens
position through all ranges
until it finds sharp focus
and then parks it there.
And continuous autofocus,
which gives the camera freedom
to refocus anytime it
thinks the scene has become
sufficiently blurry.
We also provide, there
we go, a focus point
of interest, which is settable.
It lets you tap to focus
on a particular area
so that it becomes sharp.
And you can also know
when the lens is moving
by key value observing the
isAdjustingFocus property.
Last year we offered
some specialty modifiers
for autofocus.
The first is range restriction
and that hints the AF algorithm
to limit its search
to a particular range.
The near range is good if you
have something that only wants
to search up close,
for instance,
a barcode scanning app.
We also have the far
range, which is good
for distant objects such
as oh, barcodes painted
on the sides of buildings.
And then finally we have smooth
autofocus, which is a modifier
that slows down the AF
algorithm and steps it
in smaller increments so that it
avoids the throbbing artifacts
that you don't want to see
when you're recording a video.
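Put together, the iOS 7-era controls just described look roughly like this (a sketch assuming `device` is your active AVCaptureDevice; always lock for configuration first and check the supported flags):

```swift
import AVFoundation

do {
    try device.lockForConfiguration()

    // Tap to focus: the point of interest uses (0,0)-(1,1) coordinates.
    if device.isFocusPointOfInterestSupported {
        device.focusPointOfInterest = CGPoint(x: 0.5, y: 0.5)
        device.focusMode = .autoFocus            // one-shot sweep, then park
    }

    // Hint the AF algorithm to search only up close (e.g. a barcode scanner).
    if device.isAutoFocusRangeRestrictionSupported {
        device.autoFocusRangeRestriction = .near
    }

    // Slow, small-step AF to avoid visible pulsing while recording video.
    if device.isSmoothAutoFocusSupported {
        device.isSmoothAutoFocusEnabled = true
    }

    device.unlockForConfiguration()
} catch {
    // Could not lock the device for configuration.
}
```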
Now, new in iOS 8 we
allow full manual control
of the lens position when you're
locking focus and we allow you
to key value observe
the exact lens position
at any time, in any focus mode.
But, I think a demo is worth a
thousand words so let me call
up Aparna from the
Camera Software Team
to give you a demo.
Take it away, Aparna.
[ Applause ]
>> Thanks Brad.
Hi everyone.
I'm Aparna.
I'm an engineer in the
Camera Software Team.
I'm very excited to be
here and show the demo
for manual focus control.
So, I have the app on here
and there are two modes,
Auto and Locked.
The Auto mode is the same as today,
but I have the lens position
slider here whose value is
updated by key-value observing
the lens position
property during autofocus.
So, as you can see I'm trying
to focus on this flower here.
The lens position is changing.
Now, if I want to focus at Brad
here, I have to move my phone
and bring Brad in focus.
Now, let me switch to
the new Locked mode.
Here I have full control
on the lens position
so I can move my slider
and bring my target
of interest in focus.
So, I am going to move the
slider here and frame a scene
with Brad in focus
and I can take a shot
and I can change
the slider position,
I mean the lens position and
bring the flower in focus.
So, that's manual focus
control in iOS 8, thank you.
[ Applause ]
>> Great, thank you Aparna.
That magic was all provided
for by this magical
lensPosition property,
which was being key-value
observed and that's how you saw
that slider flying all over
the place every time the lens
was moving.
It's a key-value
observable property
and its range is
from zero to one.
Zero means macro or the
closest that the lens can focus.
One means completely
stretched out or the farthest
that it can park,
which may be infinity
or it may be beyond infinity
and we'll talk about that later.
Instead of a simple setter
property AVCaptureDevice
provides a compound setter,
setFocusModeLockedWithLensPosition:completionHandler:.
That's a mouthful, but it
does three important things.
It locks focus at an
explicit lens position
and it calls you back when
the command has completed
and it does so with a timestamp.
And that sync time is
the presentation time
of the first video frame,
which reflects your change.
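In Swift that compound setter looks roughly like this (a sketch; 0.3 is an arbitrary lens position, the device must be locked for configuration first, and the Swift spelling shown is the modern renaming of the selector above):

```swift
device.setFocusModeLocked(lensPosition: 0.3) { syncTime in
    // syncTime is on the device's clock: the presentation time of the
    // first video frame that reflects the new lens position.
    print("Lens position applied at \(syncTime)")
}
```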
This is far superior
to the adjusting focus
key-value observation
because you know exactly
which frame you can
synchronize this with.
Now, the sync time is
on the timeline or clock
of the AVCaptureDevice.
That's important, because if
you're going to synchronize it
with things coming out of
an output you will need
to sync the time to
a different clock.
AVCaptureVideoDataOutput
buffers are going to be
on the AVCapture session's
master clocks timeline.
The way you would do it is
by using CMSync services.
Here I've taken the time
provided and the master clock
from the session and I call
CMSyncConvertTime to go
from the device clock
to the master clock
and now I have my
synchronization point
for output buffers.
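A sketch of that conversion (assuming `session` is your AVCaptureSession and `videoInput` its camera input; taking the first port's clock is a simplification, and the CMSyncConvertTime argument labels vary by SDK version):

```swift
import AVFoundation

// Convert the completion handler's sync time from the device clock to the
// session's master clock so it lines up with video data output buffers.
func convertToMasterClock(_ syncTime: CMTime) -> CMTime {
    guard let deviceClock = videoInput.ports.first?.clock,
          let masterClock = session.masterClock else { return syncTime }
    return CMSyncConvertTime(syncTime, from: deviceClock, to: masterClock)
}
```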
Next, there's a special
lens position parameter
that you can pass to that
setter called current.
And what that does is say I want
you to set the lens position
to exactly where it is right now
and then tell me
when you're done.
So, it will lock it in
the position where it is,
but it does so while
avoiding race conditions.
Imagine if you tried to get the
lens position and then set it.
Well, if the lens is moving
at the time you may
have actually set it
to the wrong point, because
it will now jump back
to an undesired position.
So, the following
two are equivalent.
If you set it with the new
flavor using the current
property and no
completionHandler it's the same
as calling the old
focus mode locked.
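In other words, roughly (the special value is AVCaptureLensPositionCurrent in Objective-C, AVCaptureDevice.currentLensPosition in modern Swift):

```swift
// Lock at whatever position the lens is in right now, with no get-then-set race.
device.setFocusModeLocked(lensPosition: AVCaptureDevice.currentLensPosition,
                          completionHandler: nil)
// ...which is equivalent to the pre-iOS 8 way of locking focus:
device.focusMode = .locked
```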
Now, why did we choose to
go with unit-less values,
scalar values from zero
to one, rather than a distance
in meters, for instance?
The reason lies in
our implementation
on focus on iOS devices.
The lens is physically moved
with a spring and a magnet.
That means that there is
some hysteresis involved
when it moves, or bounce, and
that bounce prevents precise,
repeatable positioning
at a particular distance.
And also, that means
that gravity will
affect spring stretch.
So, a lens position of
.5 will be different
if you're pointing it straight
out versus up versus down.
Also, the lens position distance
may vary from device to device
or over time as the
spring stretches out more.
For all these reasons,
we chose to go
with the scalar value instead
of an absolute distance.
And I'd like to caution
you to not try
to correlate lens position
with a particular distance,
because as I said,
it changes depending
on gravity and other factors.
Next up is how to help
users achieve sharp focus.
One inherent problem with having
a small device is that it has a
small screen
and AVCaptureVideoPreviewLayer
is not at the resolution
of the buffers that you're
getting from the camera usually.
It's at screen resolution
meaning that it's scaled down.
So, how can you help people
know if they're in good focus?
A couple of possible techniques:
using AVCaptureDevice you could
zoom the video preview up
so that people
can see larger pixels
and make a better decision,
or if you're using a video data
output you're getting the buffer
so you could compute
your own focus score
and then highlight the
areas that are sharp.
That's sometimes
called focus peaking.
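One hedged way to do the first technique is with the device's zoom factor (a sketch; the factor of 4 is arbitrary and is clamped to the format's maximum):

```swift
// A focus-check "punch in": zoom the capture device so the preview shows
// larger pixels. Assumes `device` is the active camera.
do {
    try device.lockForConfiguration()
    let zoom = min(4.0, device.activeFormat.videoMaxZoomFactor)
    device.ramp(toVideoZoomFactor: zoom, withRate: 4.0)   // or set videoZoomFactor directly
    device.unlockForConfiguration()
} catch { }
```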
That's it for our first
third of manual controls,
now onto manual exposure.
Exposure refers to the
brightness of an image,
which means how much light hits
the sensor and for how long.
Our auto exposure tries really
hard to keep the scene well lit.
That's its job.
With manual exposure again
you've got some more freedom.
You can go for a more
stylized look like here.
You could go for instance, from
something unrealistically bright
to something kind of ghostly
dark or somewhere in between.
You could get some motion blur.
You could provide some grain.
Manual exposure also allows you
to devise your own alternate
auto exposure algorithm
to Apple's if you prefer.
Now, when talking about exposure
you have to draw a triangle.
That's the way we do things.
There are three components
to exposure
and they are shutter speed,
ISO and lens aperture.
First up, shutter speed.
Shutter speed is the length
of time that the shutter is
open to let light in.
In a conventional camera
there is a physical shutter,
which opens and for all
the time that it's open,
it's letting light into
that dark room and then
when it closes that's
all the light you get.
So, looking at the picture
on the left you see
a short exposure.
It lets in less light, because
it's open for less time,
but the image is crisp.
The motion is crisp compared
to the image on the right,
which lets in more light so
it can be brighter.
But, it also lets
in more motion blur,
because it's open for longer.
Long exposures are great for
photographing stationary scenes
in low light, but they're not
so good when shooting things
that move around a lot
like, say, my kids.
Shutter speed is
measured in seconds.
The second one is ISO and
that's a borrowed term
from film photography if
you've ever done that.
It refers to the sensitivity
of the chemicals in
the film to light.
So, a higher ISO film
is more sensitive
to light than a low ISO film.
And as you can see from the
higher ISO image a higher ISO
number means it's
going to be brighter.
It's more sensitive.
But, it does that at the cost
of introducing some noise
or grain into the photo.
So, as you can see
in the cloth flower
on the right there is definitely
more grain to the image,
more noise compared to
the one on the left.
The third component
in the exposure triangle
is lens aperture.
It's the size of
the lens opening,
meaning how much can
it physically open
to let more light in.
On iOS cameras, the
lens aperture is fixed
on all products that
we have shipped.
So, that means practically on
iOS the only two things we have
to play with are
shutter speed and ISO.
Now, what do we support currently
in iOS 7 and earlier?
We have two exposure modes that
we support: the locked mode
and the continuous mode.
And continuous, as
you might expect,
just continuously adjusts
the scene to get it well lit.
There is also an
exposurePointOfInterest settable
property that you can
use to tap to expose
if you've got a complicated
scene with various objects
that are light and
dark you can tell it
which thing you want
to expose on.
And finally, you can know when
exposure is being adjusted
through this key-value
observable property.
Now, new in iOS 8 we're
introducing support
for fully manual
exposure or what we call
in the API, custom exposure.
In custom mode you can get,
set and key-value observe
ISO and shutter speed.
Now, we refer to shutter
speed as exposure duration
in the API set since our cameras
don't have a physical shutter.
I'll use those two
terms interchangeably
from here on out.
I feel a demo coming on.
Let's have Matt Calhoun come up
and show us how manual
exposure works in AVCam, Matt.
[ Applause ]
>> Thank you Brad.
My name is Matthew Calhoun.
I'm also an engineer on
the Camera Software Team
and I'm very excited to show
you our manual exposure API
in action.
So, let me switch over
to the exposure mode
of our demo application
and we've got this nice
romantic scene prepared.
And you'll notice
that-let me go back
into focus mode and
put us in auto.
Now, we're in focus.
You'll notice that there
are a few more controls
in the exposure view.
But, I'd first like
to draw your attention
to the middle two sliders.
There's Duration
and there's ISO.
And Duration, as Brad said, you
can think of as shutter speed.
And ISO, if this were a film
camera it would be film speed,
but in this case
it's the gain applied
to the signal coming
off the sensor.
And you'll see since we're
in Auto Mode right now
as I move the device
around you will see duration
and ISO changing.
That is the auto exposure
algorithm in action trying
to achieve a perfect
exposure by changing duration
and ISO as the scene changes.
And as it does that I'd
like to draw your attention
to the slider above
Duration, which is Offset.
So, that's sort of
our exposure meter.
That represents the difference
between our target exposure
and our actual exposure
and since we're in Auto
Mode that should be hovering
around zero all the
time, unless we're
in some extreme lighting
situation where we can't meet
the target exposure.
So, I'm going to lock the tripod
on this scene now and you'll see
that we're pretty well
exposed on the flowers now.
But, what if I want to enhance
the drama in this scene?
Well, that's exactly what
manual controls are for.
So, I'm going to switch
over to Custom Mode
and you may have noticed that
the sliders, the Duration
and ISO sliders became
enabled now.
So, that means that I can let's
see, let's get as little noise
in this scene as possible
by lowering the ISO.
And then since this
scene is static and I'm
on a tripod I can crank up
the Duration or lower and sort
of get what wouldn't be
considered a perfect exposure,
but may be perhaps a
more artistic exposure
and that is manual
exposure, thanks.
>> Thanks Matt.
[ Applause ]
Great demo.
So, let's talk about how
continuous auto exposure works.
I want to do that, because to
understand manual exposure you
really need to visualize how our
auto exposure internals work.
Here's a peek under the hood.
First, there is an auto
exposure or AE block
and its job is calculating
ideal exposure and it's fed
with metering stats
continuously.
So, it knows how far off of
that exposure target it is.
Then its job is to calculate
the right mixture of ISO
and duration to get
the scene well lit.
That's the AE loop,
constant feedback
through the metering
stats, constant adjustment
through ISO and duration.
In the locked exposure
mode, which we've had now
for several releases, that
AE block is still there.
It's still active.
So, it's still able to
set ISO and Duration,
but the difference is
the metering stats engine
is disconnected.
So, ISO and Duration are
never going to change,
because the metering
isn't changing.
Now, the new custom exposure
mode, in iOS 8 we allow you
to manually control
ISO and Duration
with a new mode called Custom.
You set ISO and Duration
together in that one setter,
which by now should
look pretty familiar
and there's a completionHandler
that's fired on the first frame
where the change is reflected.
There are two special
parameters.
The first one means
don't touch the Duration.
Keep it where it is.
I only want to mess
around with ISO
and the second one is
keep the ISO where it is.
So, you could have a slider UI
where you're only adjusting ISO
or only adjusting Duration.
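For example, adjusting only ISO while leaving duration alone (a sketch; 200 is an arbitrary ISO, which should be clamped to the format's range as discussed next, and the "current" constant is AVCaptureExposureDurationCurrent in Objective-C):

```swift
device.setExposureModeCustom(duration: AVCaptureDevice.currentExposureDuration,
                             iso: 200) { syncTime in
    // First frame that reflects the new ISO.
}
```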
Now, we have supported
ranges: min and maxes for ISO
and exposure duration.
Those vary according to the
device format so take note.
They're implemented in the
AVCaptureDeviceFormat not
in the AVCaptureDevice.
Min and max ISO and min and max
ExposureDuration tell you the
limits that you can use when
setting these two properties,
and do use them; otherwise we'll
throw an exception if you try
to set an out of range value.
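So a hedged setter wrapper might clamp against the active format first (`wantedISO` and `wantedDuration` stand in for whatever your UI produced):

```swift
// The limits live on the active format, not on the device.
let format = device.activeFormat
let iso = max(format.minISO, min(wantedISO, format.maxISO))
let duration = CMTimeClampToRange(wantedDuration,
                                  range: CMTimeRange(start: format.minExposureDuration,
                                                     end: format.maxExposureDuration))
device.setExposureModeCustom(duration: duration, iso: iso, completionHandler: nil)
```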
Also, as you might expect, we
have three observable properties
for the three elements in
the exposure triangle: ISO,
exposureDuration
and lensAperture.
LensAperture I'll remind you,
you can key-value
observe it all day long.
It's never going to change.
Now, how does custom
exposure mode work?
Well again, we have that
auto exposure block.
It's still there.
The metering stats are connected
so it's being fed
real-time stats
about how far off it
is from ideal exposure.
So, it can still
compute a target offset,
which you can use as a meter.
The difference is ISO
and Duration are cut off
and that's where you come in.
You get to set them
directly using these APIs,
but you can still use the target
offset to know how you're doing.
All right, now let's
talk about a close cousin
of manual exposure, which
is exposure compensation.
Exposure compensation
is a modifier
to our auto exposure block.
It lets you bias the AE
algorithm towards something
slightly darker or
slightly brighter.
It's kind of like a gentler,
kinder version of
manual controls.
So, if you don't want to
get your hands really dirty,
but you still want to affect
our AE algorithm somewhat,
you can use this instead.
So, it biases the decision of
the AE algorithm as I said,
towards something brighter or
darker and this can be used
in the Continuous AE mode
or in the locked mode.
The way it's expressed
is in f-stops.
If you've used a DSLR you'll
be familiar with this.
An f-stop or exposure value
is double the brightness
if you're going in
the positive direction
and it's half the
brightness if you're going
in the negative direction.
I know you all are going
to have a conniption fit
if you don't get another demo
right now so take it away Matt.
[ Applause ]
>> Okay, so let's go back
into Auto mode and we're going
to do something similar
to what we just did
with Manual controls,
except this this time I
don't really want to think
about a specific shutter
speed or a specific ISO.
I just want the AE
algorithm to handle it.
But, I do want to make the
scene lighter or darker.
So, that's exactly what
this last slider is for.
I simply move it to the left
to lower the exposure target
or move it to the
right to increase it.
And that exposure target is
locked in at a higher value now.
So, you'll see as I
move the device around,
let me lower it a little bit.
You should see Exposure
and Duration still updating
in response to scene changes.
But, they are being updated to
meet a lower exposure target.
So, I also would like to
show you how this works
in Locked mode.
It's very similar except that
in Locked mode we only respond
to changes in the bias.
So, Duration and
ISO will get updated
if I change the exposure target.
But, once I stop changing that
target if I move the device
around and point at
different scenes, Duration
and ISO will not change.
That is exposure compensation.
>> Cool.
>> Thanks.
>> Thank you Matt.
[ Applause ]
>> Exposure compensation is
supported in all exposure modes.
And by now you should be very
familiar with this pattern.
We have a compound setter,
setExposureTargetBias.
We don't call it exposure
compensation in the API.
We call it target bias.
And you're provided
a completionHandler
on the first frame where your
change has been reflected.
The supported range, min and
max ExposureTargetBias are a
constant right now
on all devices.
The min is minus eight
and the max is plus eight,
but that may change in the
future so we do encourage you
to use these min and max
TargetBiasValues before
setting.
We will throw an exception if
you use an out of range value.
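A small sketch of setting the bias safely (one stop brighter here, clamped against the device's published range before the call):

```swift
let bias = max(device.minExposureTargetBias, min(1.0, device.maxExposureTargetBias))
device.setExposureTargetBias(bias) { syncTime in
    // First frame that reflects the new bias.
}
```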
There are some key-value
observable getters.
ExposureTargetBias is
again what we're calling
exposure compensation.
That will never change
unless you change it.
So, exposureTargetBias
is zero unless you set it
to something else.
The exposureTargetOffset,
as Matt just showed you,
is the thing that hovers
around zero usually if you're
in a well-mannered environment.
It's able to do a good enough
job to get the scene well-lit,
and when it's well-lit,
targetOffset is at zero.
When you're at some extreme
condition the targetOffset might
move around.
You can use it as a meter.
Now, let's talk about how
exposure compensation works
with our familiar diagram now.
In Continuous Auto
Exposure mode,
we have again the metering stats
providing real-time information
to the block and its
setting ISO and Duration.
Now, you can bias that
decision of the AE block
by setting a new target bias.
The AE block uses those metered
stats plus your bias to set ISO
and Duration to a
new Target Offset.
Now, that offset is
influenced by your bias.
So, if you set a bias of plus
one then the Target Offset will
hit zero when the plus one
target has been reached.
Now, let's look at the
Locked Exposure mode.
And this is pretty cool
that even though it's
locked you can still kind
of influence the exposure.
The metering stats are
disconnected so normally ISO
and Duration will
not change at all.
It's locked.
But, you can still set
bias and when you do
that it will now need to
fluctuate ISO and Duration
to meet that bias and
it will report all
of that in the Target Offset.
If you'd like to see a great
use of exposure compensation,
look no further than Apple's
own camera app in iOS 8.
You might have noticed by now
that if you tap, when you tap
to expose, you're
presented with a new UI.
It's sort of a brightness
UI with a little sun in it
and that shows you that
you can swipe up or down
to bias exposure up or down.
So, it's influencing
the AE algorithm
to pick something a little bit
brighter or a little bit darker.
Try it out.
It's really pretty cool.
All right, two down, one to go.
Let's talk about
Manual White Balance.
White balance is all
about making the color
in your photos look realistic.
Sounds simple enough
right, but it's not.
Different light sources have
different color temperatures.
For instance, daylight
casts kind
of a bluish tint whereas an
incandescent light would give
you something warmer
like a yellowish tint.
Now, our brain is really good at
adjusting to these color tints,
but cameras can't do that.
So, under a blue light source,
your camera needs to compensate
for that by boosting
up the opposite colors,
to boost up the red
to compensate.
All right, this is the nerdiest
slide you'll see here today.
This is a CIE 1931
chromaticity diagram.
You don't need to know that.
But, it shows us all the
colors visible to a human being
from pure blue, to bright
green on top, to bright red.
And any point on the diagram
can be plotted with a little X
and Y value, as shown
here by these axes.
Take into account here that this
is 2D color we're talking about,
so brightness is orthogonal.
We're just talking about color.
So, on the X axis and
on the Y axis from zero
to one you can plot any
point on that color diagram.
And as you can see there are
going to be some crazy values
that are outside the range of
human visibility and, therefore,
they can't be faithfully
represented
or reproduced by a camera.
Also note that there is a
narrower range there that's
represented by a
tight little curve
and that curve there is
called the Planckian locus.
You also don't need
to know that.
But, every time you say
Planckian locus an angel gets
its wings.
This little curve here
expresses the color temperatures
in degrees Kelvin with higher
numbers on the blue side,
that's hotter and lower
numbers on the red side
in degrees Kelvin again.
And these are useful for
typical lighting situations.
In typical lighting situations,
the light will go right
along this nice curve.
Sometimes though you have
mixed lighting conditions
and it's not quite
as cut and dry
as just moving along that curve.
Sometimes you need to shift
a little bit to deviate
from the nice curve,
because there is a little bit
of a green shift
or a magenta shift,
and that's what those little
tick marks are up there.
So, the Planckian locus is
talking about temperature
and the little hash
marks off of it are tint.
That's adjusting for red
or magenta or green shift.
Now, what possible
uses might you have
for manual white balance?
Well, our auto white balance
does a pretty good job.
It's trying to guess what
your lighting source is
and compensate for casts
that might be coming off
of that lighting source.
But, it's not perfect.
It might make mistakes.
If you have manual white balance
you can do a manual temperature
and tint UI.
You could come up
with some presets
for standard lighting conditions
and give the power
back to the user.
You could use a gray
card to assist
in doing a neutral white
balance or you just might want
to do some crazy color cast
and have little green
men in your pictures.
iOS devices compensate
for color casts
by boosting the opposite
color gain.
So for instance, if a scene
has too much blue it's
over on the left side, then
the red gain must be boosted
up a lot and the green
a little to compensate.
These gain values are calibrated
for our devices so they are said
to be device-dependent
gain values rather
than device-independent values
such as are represented
on this diagram.
Okay, what do we
already support?
Similar to exposure,
we have a locked mode
and a continuous
whiteBalanceMode,
which continuously
tries to adjust
for the lighting
source conditions.
And we let you know when the
white balance is being adjusted.
New in iOS 8 we give
you full manual control
of the device RGB gains.
This isn't, you know, just a
baby sort of temp and tint UI.
This is full control.
You can key-value observe
those device RGB gains.
And we also have support for
white balance using a gray card.
We'll get into that more later
and we provide conversion
routines to get you
from chromaticity values,
X and Y values to our gains
or from temperature tint
along that Planckian locus.
Another angel just
got its wings.
All right, I think it's
time for a demo of this.
Let's bring up both Matt
and Aparna to give us a demo
of manual white balance.
Take it away guys.
[ Applause ]
>> Okay, let's put exposure back
into Auto mode now and switch
over to the white balance view.
And I'd like to show you just a
few of things that are possible
with the manual white
balance API.
So, we got a common theme
here, we've got a Mode switch
at the top and then a couple
of sliders and then a button
that I'll talk about
in just a moment.
The sliders are for temperature
and tint and, as Brad explained,
one goes between yellow cast
and blue cast and one goes
between green and
purple or magenta.
And since we're in Auto
mode and we have somewhat
of a mixed lighting
situation here in Moscone,
you can see the auto white
balance algorithm subtly
changing these values as
I move the device around.
But, what's more fun is
to go into the Locked mode
so that we can change
them ourselves
and let me point at
the flowers here.
So, in manual mode, you can
really see that we can go
between a yellowish cast, a
bluish cast and, with tint,
we can go between a magenta
cast and a greenish cast.
So, now let's go back to
Auto mode and let's talk
about another use case that
we've enabled, which is the use
of a gray card to
lock white balance
at an appropriate value
for a neutral gray.
So, you'll see down here,
we've got a yellow gray button.
How's that for cognitive
dissonance?
So, normally the auto white
balance algorithm does not
assume that you are pointing
your camera at a gray card.
In fact, it really has
no way of knowing that.
So, that's what this
gray button is for.
When we tap this
button we are essentially
telling the API, "Okay,
we are looking at a gray card.
Give us the correct
values and then we're going
to lock at those values."
So, you should see a subtle,
in this light, change.
Let me go back to Auto and again
a subtle change in the color
of the scene when I
tap the gray button.
And notice that we've
gone automatically
from Auto to Locked mode.
That is manual white balance.
Thanks, Brad.
[ Applause ]
[ Silence ]
>> Awesome demos.
Alright, manual white
balance is my favorite
of the three new camera control
APIs and that's because we got
to introduce some new C
structs, always a good day
when you can introduce
a C struct.
We don't want you to set red,
green and blue individually.
They need to be set as a team
so therefore we need a struct
for red gain, green
gain and blue gain.
You set them all at once.
Of course, there is a
max that you can set.
On all our devices
it's currently four.
That's the max white
balance gain you can set.
And so the legal range
is from one to four.
That may change in the future,
so do use maxWhiteBalanceGain.
Also as you saw from the UI that
Matt and Aparna just showed you,
the temp and tint sliders were
moving around in Auto mode.
That's because they
were observing the
deviceWhiteBalanceGains
and then converting them
into temperature and tint.
So,
these deviceWhiteBalanceGains
are key-value observable
and update constantly in
the Auto mode.
So, here is what
the API set looks
like for manual white balance.
You set the white
balance mode to "locked"
with explicit RGB gain values.
Again, these are
device-dependent gain values not
device-independent
chromaticity values.
There is also a current special
parameter that you can set
if you just want to lock
where it is right now.
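A minimal sketch of the locked case (the special "current" gains value is spelled AVCaptureWhiteBalanceGainsCurrent in Objective-C):

```swift
// Lock white balance wherever the auto algorithm has it right now.
device.setWhiteBalanceModeLocked(with: AVCaptureDevice.currentWhiteBalanceGains,
                                 completionHandler: nil)
```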
Now, about those
conversion routines.
Of course, we need
structs for them too.
When we're talking about
chromaticity values or the X
and Y on that chromaticity
diagram they go from a min
of zero to a max of one.
Remember, not all
of them will fall
within human perceivable color
and then there's also
temperature and tint
as a struct, which
can be set together.
Temperature is a floating
point value in Kelvin
and tint is an offset toward
green or magenta, from
minus 150 to plus 150.
Positive values go in
the green direction,
negative in the magenta
direction.
Now, to convert them
unfortunately you need
to be very verbose so I need
to use three lines per
function call, but bear with me.
The first one is, when you
have RGB gains from the device
and you want to turn
those into X,
Y chromaticity values,
you call this.
To go in the other direction,
you can provide X and Y
values and we'll convert them
into our device-dependent
RGB gain values.
Also, you can take our gains
and turn them into temperature
and tint and vice versa.
Now, note that our conversion
routines are more accurate the
closer you are to that curve,
to the Planckian locus.
The farther away you get the
crazier results might get.
As I said, some X
and Y temperature
and tint combinations will yield
out of range RGB gain values.
But, we're not going
to hide that from you.
If you call a conversion method
with a crazy X and Y value,
we are going to convert it
to the corresponding device RGB
gain value without clamping it.
But, that might not be a
legal value that you can use
when setting the white balance
gains on the AVCaptureDevice.
So, when using these conversion
utilities you must do your own
range checking.
You must check for out of range
values otherwise an exception
will be thrown.
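A hedged sketch of that round trip, with the range check done by a hypothetical `normalized` helper (4500 K with zero tint is just an example value):

```swift
import AVFoundation

// Clamp each channel of a gains struct into the legal 1...maxWhiteBalanceGain range.
func normalized(_ gains: AVCaptureDevice.WhiteBalanceGains,
                for device: AVCaptureDevice) -> AVCaptureDevice.WhiteBalanceGains {
    var g = gains
    g.redGain   = max(1.0, min(g.redGain,   device.maxWhiteBalanceGain))
    g.greenGain = max(1.0, min(g.greenGain, device.maxWhiteBalanceGain))
    g.blueGain  = max(1.0, min(g.blueGain,  device.maxWhiteBalanceGain))
    return g
}

// Temperature/tint -> device gains, clamped, then locked.
let tempAndTint = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(temperature: 4500,
                                                                       tint: 0)
let gains = device.deviceWhiteBalanceGains(for: tempAndTint)
device.setWhiteBalanceModeLocked(with: normalized(gains, for: device),
                                 completionHandler: nil)
```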
Now, let's talk about
the gray card support
or what we call Gray world.
What is Gray world?
It is not the land of
perpetual depression.
It is-think of it
as an alternative
to our auto white
balance algorithm.
AWB is very complicated.
That's because it needs to
make guesses all the time
about what the lighting sources
are, but it can be tricked.
If for instance, you have
predominantly red in your scene
like a scene of autumn
leaves it might think
that that's your lighting source
when in fact it's just
predominantly the color
in the scene.
It needs to make
those assumptions
and sometimes it guesses wrong.
Well, you can remove the
guesswork by using a gray card.
And so the gray world is-think
of it as a parallel universe
of AWB values that are
computed all the time
as if you had a gray card
in front of the camera.
So, you can get the
regular device RGB gains
or you can get the alternate
gray world gains at any time.
And what it does is try to
make white look neutral white.
So, if you have a gray card
and you have a UI for this,
you can really take
the guesswork out.
It does assume that that neutral
subject or gray card fills
at least 50 percent of
the center of the frame.
Now, how might you
do this in a UI?
Well, you could prompt the user
to put a gray card in front
of the camera then you could
wait for the gains to settle
down for a minute and then
sample the gray world device
white balance gains property
and then lock white balance mode
with those gray world gains.
Now, you know you have a neutral
white and you've taken all
of the guesswork out of AWB.
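In code, the lock step of that flow might look like this (a sketch reusing the hypothetical `normalized` clamping helper from the earlier white balance example):

```swift
// After the user frames a gray card and the gains have settled,
// sample the gray world gains and lock with them.
let grayWorld = device.grayWorldDeviceWhiteBalanceGains
device.setWhiteBalanceModeLocked(with: normalized(grayWorld, for: device)) { syncTime in
    // White balance is now locked to a neutral white for this light source.
}
```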
That's it for the
manual controls.
Let's talk about where all
three of these are supported.
Manual focus, it's
supported everywhere,
everywhere where you
can focus our camera,
you can use the manual
controls all the way back
to an iPhone 4s.
Manual exposure and manual
white balance: no restrictions.
You can use them on all iOS
devices supported in iOS 8.
Also, the talk has generally
been geared towards digital
photography, still photography,
but you can use these
manual controls
with any AVCaptureSession
preset or any active format.
So, they're equally applicable
to video recording,
barcode scanning.
You can set manual
controls for any use case.
Whew, are you stuffed yet?
Is there any room
for dessert at all?
I hope so, because you're
going to want to hear
about bracketed capture.
Think of this as a twist
on the manual controls
that we've spent the
bulk of the talk on.
All of the AVCaptureDevice
manual controls happen
in real time.
You set an exposure, it
executes your command.
That's great, but sometimes
you need to capture a moment
in time a variety
of different ways.
You need to capture one
picture but you want
to set different settings.
So, wouldn't it be great if
you could preprogram the camera
to give you several images
in a row of the same scene,
but with different exposure
values for each one.
And then issue that command and
have it execute that command
as a group and then give you
those three or four images back.
That's exactly what
bracketed capture is.
It's a burst of still images
taken with varied settings
from picture to picture.
Some common examples would
be an exposure bracket,
two different flavors.
The first is an auto
exposure bracket
where you are varying the
bias from image to image:
minus two, zero, plus two.
Some reasons for doing
that might be if you want
to do some highlight recovery
such as an HDR fusion algorithm.
You could take an
underexposed image,
an overexposed image,
fuse them together.
The other case is manual
where you have full control
over shutter speed and ISO and
you set those independently
for each image in the bracket.
Why might you want to do that?
Well, creative exposure effects,
different combinations of long
and short exposure
duration, for instance.
And the simplest of all
brackets is the bracket
where you don't vary anything.
It's just a simple burst bracket
and that might be good
for a finish line.
So, without further
ado let's have a demo
of bracketed capture with John.
Come on up.
[ Applause ]
>> Thanks, Brad.
My name is John Papandriopoulos.
It's great to be here.
I'm an engineer on the
Camera Software Team.
So, this is a two-part demo.
The first part we're going to be
looking at exposure compensation
and performing a bracketed
capture of three frames
where we have an underexposed
frame or image, a well-lit image
and an overexposed
image using EV values
of minus two, zero and plus two.
So, let's go ahead and take a
capture and what we've done is,
as these frames have been
captured, we process them
in real-time and stripe them.
So, we actually take a
strip from the first image
and then put that
into an output buffer
that you see on the screen here.
The second one beneath
that comes
from the second captured frame,
in this case a well-lit frame
and then an underexposed
frame following that.
So, now I'm going to go into the
fully manual bracketed capture
mode that we support
where we have full control
over ISO and duration.
What we're going to do in
this case is we're going
to perform a three
frame bracketed capture
where we have control over
the duration or shutter speed
and we're going to be setting
that for a slow shutter speed,
a medium shutter speed and
then a fast shutter speed.
So Brad, if you were to walk
across there and we capture
that what we can see here is
that we have a blurry image
at the top of Brad's head.
That's where we had
a very slow shutter.
We have a little bit more
crisp for his chin there
and then you can see his
torso is quite crisp.
And if we look really
closely there you might see
that there's a lot
of noise there.
What we've had to do is adjust
the ISO and increase the gain
to compensate for the small
amount of light that was going
in at that fast shutter speed.
Thanks very much, can't wait
to see what you do with it.
[ Applause ]
>> Thanks John for
slicing and dicing me.
Bracketed capture is all
implemented in a single class
and that's the
AVCaptureStillImageOutput.
That's the object that you use
to take pictures in our API.
So, how does it work?
Well, if you've used
AVCaptureStillImageOutput before
you'll be familiar with its
single image capture interface,
which is this.
You call
captureStillImageAsynchronouslyFromConnection.
That takes a picture
and at some point
when it's done it calls you back
with that single image buffer.
For bracketed capture it
looks largely the same.
The bracketed capture interface
is captureStillImageBracketAsynchronouslyFromConnection.
The only difference is that
it has a second parameter,
an additional parameter, which
is the settings array and that's
where you're giving it the array
of things that you want to vary
from picture to picture.
We have two new objects,
two new classes
to represent the settings for
a single image in the bracket.
The first is for the
exposure compensation
or auto exposure bracket.
So, one
AVCaptureAutoExposureBracketedStillImageSettings
object equals one of those
pictures in the bracket.
And here you get to set an
exposure target bias, minus one,
plus one, etc. For manual
exposure brackets you use an
object that lets
you set both ISO
and duration for
that one picture.
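Building the settings arrays might look like this (a sketch; the bias values, shutter durations, and use of the current-ISO special value are arbitrary example choices, and the Swift spellings are the modern renamings):

```swift
import AVFoundation

// One settings object per picture in the bracket.
// Auto exposure bracket: vary the bias, e.g. -2, 0, +2 EV.
let autoSettings = [-2.0, 0.0, 2.0].map {
    AVCaptureAutoExposureBracketedStillImageSettings
        .autoExposureSettings(exposureTargetBias: Float($0))
}

// Manual bracket: fix duration (and here, the current ISO) per picture instead.
let manualSettings = [1.0 / 250, 1.0 / 60, 1.0 / 15].map {
    AVCaptureManualExposureBracketedStillImageSettings.manualExposureSettings(
        exposureDuration: CMTime(seconds: $0, preferredTimescale: 1_000_000),
        iso: AVCaptureDevice.currentISO)
}
```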
Now, there are some
dos and don'ts.
Let's cover the don'ts first.
In bracketed capture
we don't allow you
to mix bracketed
settings classes.
That means you can't
have a half manual,
half auto exposure bracket.
You have to have an all auto
or all manual exposure bracket.
Also, there is a limit to the
number of images you can take
in a bracket and you
must not request more
than maxBracketedCaptureStillImageCount.
That will vary from platform
to platform and also depends
on resolution and the output
format that you are asking for.
Now, the dos, do prepare for
the worst case and the way
that you do that is
by calling prepare.
You're telling the AVCapture
still image output at some point
in the future I am
going to take a bracket.
And you tell it what kind of
bracket you're going to take
by passing the exact settings
that you're going to use.
And by telling it beforehand to
prepare itself for that bracket,
it can do all of the buffer
allocations that it needs
to up front so that when you ask
for the bracket there'll
be no shutter lag.
It will happen very quickly.
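A sketch of the prepare-then-capture flow (assuming `stillImageOutput` is an AVCaptureStillImageOutput on a running session, `connection` is its video AVCaptureConnection, and `autoSettings` is the array built above):

```swift
stillImageOutput.prepareToCaptureStillImageBracket(
    from: connection,
    withSettingsArray: autoSettings) { prepared, error in
    guard prepared else { return }
    stillImageOutput.captureStillImageBracketAsynchronously(
        from: connection,
        withSettingsArray: autoSettings) { sampleBuffer, stillSettings, error in
        // Called once per image in the bracket, with the settings that produced it.
    }
}
```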
You should always assume that
the sample buffers are coming
from a shared memory pool.
In other words, don't ask
for one bracketed capture
and then hold onto those buffers
and then ask for a second one,
because the second
one is going to fail.
You must CFrelease
the buffers given
from the first bracket before
asking for a second bracket.
If you want to reclaim the
memory that was prepared
from a previous prepare
call the way
to do it is just call
prepare again with an array
of one object and that
will reclaim the memory.
Now, some fine details
about bracketed capture.
What's the interaction
between the AVCaptureDevice
manual controls
and bracketed capture?
Well, the bracketed
capture wins.
So, when you're doing a
bracketed capture all the manual
controls you set
on the AVCaptureDevice
are temporarily overridden
and then they go back
to what they were
after the bracketed capture.
Also, flash and still image
stabilization are ignored during
a bracket.
And all images in a single
bracket must have the same
output format, be
it JPEG, 420f, or BGRA.
Also note that because you
might be doing long durations,
it might need to expose for a
long time, that it is typical
for you to see video preview
drop frames while an exposure
bracket is being taken.
And finally, bracketed
capture is supported
on all iOS devices in iOS 8.
Whew, that concludes our lovely
lunch, time for the check.
So, what do we got here?
We've got AVCaptureView
on Yosemite, a standard UI
for doing recording,
iOS screen recording
for app previews-for all
you people that make apps
on the App Store, you'll
want to check that out,
access to the hardware
video encoder in real-time,
special session about
that tomorrow,
powerful new camera controls.
We talked about focus,
exposure, exposure compensation
and white balance, and
finally, still image bracketing.
The two apps that we used today
for the demos, AVCamManual
and BracketStripes
are available now.
So, if you go and look
at the session info you
can go download those,
see what we did there,
try them out.
For more information here's
our evangelism contact.
There are a number of
related sessions to this one,
both for AVFoundation and
for the photos framework.
Some have already happened.
If so, go check them
out in the videos
and there are some
still to come.
Thank you and have a great show.
[ Applause ]