WWDC2015 Session 510

Transcript

[ Applause ]
>> DAVID HAYWARD: Good
morning, everyone,
my name is David Hayward, and
it's my privilege today to talk
about what's new in
Core Image on iOS 9
and Mac OS X El Capitan.
So to start off, we will be
covering several things today.
First off I will give a brief
introduction to Core Image
for those new to the subject.
I recommend that you go back
and see our presentations
from last year.
In particular, there was
a great discussion on how
to write kernels in Core Image.
Next, we'll be talking
about what's new
in Core Image this year.
We have a lot of stuff
to talk about here.
And the final third of our
discussion today will be
about using Core
Image and bridging it
with other graphics
frameworks on our platforms.
First, an introduction
to Core Image.
In concept, the idea of Core
Image is you can apply filters
to images.
In a simple example, you can
start with an input image
and apply a filter to do a
color effect such as sepia tone,
but if you don't like the color
of sepia tone you can
apply another color effect
to change the hue to make it
more of a blue-toned image.
You can also use Core
Image to apply effects
such as geometry-distorting
effects.
In this example, we are just
using a simple transform to zoom
in on a portion of the image.
You can think of there
being an intermediate image
between every filter.
However, the way we
implement filters,
they are actually very
lightweight objects
that take very little
time to create,
and there are not necessarily
intermediate buffers
in between them.
Another important
concept is that associated
with each filter is
one or more kernels.
CI kernels are small subroutines
that apply the effect
that that kernel is
trying to achieve.
One of the other key features
of Core Image is we concatenate
these kernels into a program
as much as possible to minimize
the use of intermediate buffers
and improve performance.
Another key feature in Core
Image is what we call region
of interest support.
The idea is that if you are only
rendering a portion of an image,
either because you are
zoomed in on a large image
or because it's being
rendered out in tiles,
we can ask each filter
that's being rendered how much
of the input image it needs, and
from that we can calculate back
to the source image the
exact region that's needed
of that image in order to
produce the desired output.
This is another great feature
of Core Image that allows us
to get good performance
especially
when working on large images.
There are four main
classes you need to be aware
of when you are using
Core Image.
The first is the CI kernel.
I mentioned this earlier.
This represents the
program or routine written
in Core Image's kernel language.
The second key class is
the filter, the CI filter.
This is a mutable object, and
it can have multiple inputs,
and these input parameters
can be either numbers
or vectors or other images.
The filter uses one or
more kernels in order
to create an output image
based on the current state
of the input parameters.
A CI image is an
immutable object
that represents the recipe
to produce that image based
on the previous kernels
that have been applied.
And lastly, there is
the CIContext object.
And this is a more
heavyweight object.
This is the object through
which Core Image will render.
You want to avoid creating these
too often in your application,
so if you are doing
fast animation you want
to do this just once.
And the great thing
about CIContext is
they can be implemented
on various different back-end
renderers in our system.
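
To make the relationship between these four classes concrete, here is a minimal sketch in current Swift spelling; the file path and the CISepiaTone parameters are purely illustrative.

    import CoreImage

    // A CIImage is only a recipe; nothing is rendered yet.
    let input = CIImage(contentsOf: URL(fileURLWithPath: "/tmp/photo.jpg"))!

    // A CIFilter is a lightweight, mutable object holding input parameters.
    let sepia = CIFilter(name: "CISepiaTone")!
    sepia.setValue(input, forKey: kCIInputImageKey)
    sepia.setValue(0.8, forKey: kCIInputIntensityKey)
    let recipe = sepia.outputImage!          // still just a recipe

    // The CIContext is the heavyweight renderer; create it once and reuse it.
    let context = CIContext()
    let rendered = context.createCGImage(recipe, from: recipe.extent)
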
So the next thing I would
like to talk about now
that the introduction
is behind us is to talk
about what's new this
year in Core Image.
We have several things
to talk about today.
We will be talking about Metal,
talking about new filters,
new detector, color management
support, and some improvements
to the kernel classes
and languages.
And the most important thing I want
to talk about is that we now
in Core Image have a unified
implementation across both
of our platforms, so for the
most part unless we mention
otherwise the behavior of Core
Image is completely consistent
and equivalent between
iOS and OS X.
And this is a great feature for
developers to be able to rely
on this consistent behavior.
This can be little
things such as the fact
that you can include
the Core Image header,
<CoreImage/CoreImage.h>,
regardless of what platform
you are on.
So it makes it a lot easier
if you're doing cross-platform
code.
We have now got API parity
between the two platforms.
So one of the major
things we want to talk
about today is Core
Image support for Metal.
And we will be talking about this
in much more detail later
on in the presentation,
but I want to give you
the highlights right now.
The key things to think of are
that Metal Textures can now be
given as an input to Core Image,
and also Metal Textures can
be the output of Core Image.
And internally, Core
Image context will be able
to use Metal as their
internal renderer.
What that means is if
you have written a kernel
in CI's kernel language, it
will be automatically translated
on the fly to Metal language.
Another thing to keep in mind
is some of our built-in filters,
notably Gaussian and convolution
filters, are now built on top
of Metal performance
shaders in order
to get the best possible
performance on the diversity
of platforms that are supported.
A little bit about filters.
Again, as I mentioned before,
we now have a unified
implementation of Core Image.
This means we have 200 built-in
filters on both platforms.
That means we have
brought a lot of filters
to the iOS implementation
of Core Image.
Over 40 have been
added in this release.
These fall into different
categories.
There are fun filters like
comic effect, and CMYK Halftone,
and Droste, and Page Curl.
There are also some convolution
filters that are useful,
such as median filters, edge
detection, and noise reduction.
Also we have reduction
filters, which are useful
for image analysis
such as Area Maximum
or Column Averaging an image.
In order to give you a
little taste of this,
I want to show you the
latest version of one
of our sample applications
called Core Image FunHouse,
and we try to update
this every year.
One of the things we have,
now that we have 200 filters,
is that when you bring up the Filters
pop-up in this application,
we have broken them
up into categories.
You can also see
highlighted in red all
of the new filters we have
added, and there is now an API
for you to determine
what release a filter was
included in.
And this is, of course, showing
the CMYK Halftone effect getting
good performance on an
iPad at Retina resolution.
There are two new filters
we have added to Core Image
on both platforms
by popular request.
These are filters for
generating bar codes.
So the input in this case
to a filter is not a number
or another image,
but a text string.
And we have added these two
for generating PDF417 bar codes
and code 128 bar codes.
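
A minimal sketch of driving one of these generator filters from Swift; the message string is illustrative, and note that the input is passed as data rather than as an image.

    // Illustrative payload; Code 128 takes an ASCII-encoded message.
    let message = "WWDC 2015".data(using: .ascii)!

    let generator = CIFilter(name: "CICode128BarcodeGenerator")!
    generator.setValue(message, forKey: "inputMessage")

    // outputImage is a small CIImage you can scale up and render like any other.
    let barcode = generator.outputImage
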
Another feature of Core
Image is what we call our CI
detector classes.
These are our classes we
have released in the past
for doing things like
detecting faces in an image,
detecting QR codes in an image,
or detecting rectangles
in an image.
And we have a new one this year,
which is for detecting
areas of text in an image.
The idea of this detector is to
locate areas that are likely
to contain upright text.
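
A sketch, in current Swift spelling, of asking the new detector for text regions; `image` is assumed to be a CIImage you already have.

    let detector = CIDetector(ofType: CIDetectorTypeText,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!

    // Each CITextFeature gives the bounds of a run of upright text;
    // its subFeatures cover the individual characters.
    for case let text as CITextFeature in detector.features(in: image) {
        print("text region:", text.bounds)
    }
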
So let me give a brief demo
of this running on an iPad.
We have hooked this up to the
Core Image FunHouse application.
So we have an old box that was
on my shelf, and if we turn
on the text detector it's
locating the upright text,
both the runs of text and
the individual characters.
And as we zoom in and
rotate the camera,
the text detector will
pick up some of the text
that was at an angle as well.
So that's our new text
detector so I'm excited
to see what developers
come up with to use that.
Another thing we
have brought
with our unified implementation
of Core Image is that now
on iOS we have the
great functionality
of automatic color management.
This has been available on
OS X ever since the beginning
of Core Image, but now we
have this on iOS as well.
What this means is that Core
Image now supports ICC-based
CGColorSpaceRefs fully.
And these can be used on input
images or output images or even
as a working space
in Core Image.
This is through great
work that's been done
to support ColorSync
on iOS as well.
What this means to users is
that automatically you will
get correct rendering of TIFFs
or JPGs that are tagged
with color spaces.
Many images are tagged with sRGB
and those have been rendered
correctly on previous versions
of iOS, but now if you have an
image tagged with a color space
that is not sRGB, you
get the correct behavior.
Here is an example
of an image tagged
with the Pro Photo color space.
Rendered without honoring its profile,
the red bench in the
background is desaturated,
and skin tones look poor.
When the embedded ICC profile
is honored,
the image is rendered correctly.
And this you get
automatically in Core Image.
We also have some new
support for CI kernel classes
that is now available on OS X
like it has been
on iOS in the past.
This is another benefit of
our unified implementation.
For example, we have two
classes called CI color kernel
and CI warp kernels.
The idea behind these classes is
to make it much easier for you
to write the most
common basic filters.
Traditionally on
OS X, if you wanted
to write a simple blend filter
to blend three images given
a mask, you would have
to write several lines
of code to sample
from the sampler correctly,
and then you would do the math
to combine the three images.
If you use CI color
kernel classes,
the code becomes much simpler.
So now the inputs to the kernel
are __sample parameters,
and the code
for the kernel is just the math
for mixing the three results.
So this is a great
thing for developers.
It makes it easier.
It also makes the job for
Core Image easier to simplify
and concatenate programs.
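
A sketch of what such a color kernel looks like in practice: the kernel body is just the math, and the inputs arrive as __sample values. The kernel and function names are illustrative, and the Swift spellings are the current ones.

    let source =
        "kernel vec4 maskedBlend(__sample image, __sample background, __sample mask)" +
        " { return mix(background, image, mask.r); }"
    let blendKernel = CIColorKernel(source: source)!

    func maskedBlend(_ image: CIImage, over background: CIImage, mask: CIImage) -> CIImage {
        // Core Image runs the kernel once per pixel over the given extent.
        return blendKernel.apply(extent: image.extent,
                                 arguments: [image, background, mask])!
    }
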
We also have a lot
of improvements
to the CI kernel language
that are now available on OS X.
In our unified implementation, when
we compile the CI kernel language
into the destination
context's language,
we do that using LLVM
technology at Apple.
And what this has given us
is new features in our language,
such as If, For, and While, that
were not previously available.
CI kernels in existing
apps should work fine.
However, with the new compiler
we have stricter warnings
so if your app is linked
against El Capitan or later,
keep a look out for
any compiler warnings.
As an example of this, here is
a simple example of a kernel
that wasn't possible before
on OS X using the kernel language,
and that's because this
particular filter has an input
parameter, which is a count.
And we want to have a For
loop inside this kernel
that loops based on
that count variable.
In this particular
example, we're trying
to do a motion blur along
a vector for n points.
And this is now a
trivial kernel to write.
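
A sketch of such a kernel; the parameter names are illustrative, but the loop over a runtime count is exactly the kind of control flow that is now allowed.

    let motionBlurSource =
        "kernel vec4 motionBlur(sampler image, vec2 velocity, float count) {" +
        "    vec4 sum = vec4(0.0);" +
        "    for (int i = 0; i < int(count); i++) {" +
        "        vec2 pos = destCoord() + velocity * float(i);" +
        "        sum += sample(image, samplerTransform(image, pos));" +
        "    }" +
        "    return sum / count;" +
        "}"
    let motionBlurKernel = CIKernel(source: motionBlurSource)!
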
You can get fancier,
and you can have a
For loop with an early exit.
In this case, we are sampling
from that image until we get
to an area of the image that is
not opaque, and then we break
out of the For loop and
return the average color
of only the colors
that are in the image.
So one thing to keep in mind
with our kernel language is
what our overall goals
for this language are.
What we want to do is enable
you to write kernels once
and have them run
regardless of the device
that your kernels
are running on.
So it will run independent of
what system you are running on,
whether iOS or OS X, what
size your input image is.
The CI kernel language has
support for destination-coordinate-
to-sampler transforms so we
can support automatic tiling
of images.
And the CI kernel
language works independent
of what our back-end
renderer is,
so whether we are using
Metal, or OpenCL, or OpenGL,
or OpenGL ES, you can
write your algorithms
in the CI kernel language once.
So that's the highlights
of what's new
in Core Image this year.
The next subject we would
like to talk about is how
to bridge Core Image
with other frameworks.
Specifically, some of the wealth
of other great graphics
frameworks available
on our platform.
We have great imaging
frameworks on our platform
such as Core Animation,
SceneKit, SpriteKit, Metal,
AV Foundation, IOSurfaces,
and various View classes.
We spent a lot of
time this year trying
to make these work right
together with Core Image.
So to start that discussion,
I would like to introduce
Tony Chu, who will be talking
about Core Image and
Metal in more detail.
[ Applause ]
>> TONY CHU: Thank you, David.
Good morning, my name
is Tony and I would
like to first tell you
a little bit more
about Core Image and Metal.
So as David mentioned earlier,
this year we have added support
for rendering with
Metal in Core Image,
and one of the reasons
we did that is to add
to our extensive suite
of supported image types
such as IOSurface and CGImage,
all of which can be used
as inputs or outputs to a CI
filter regardless of the type
of CIContext you have.
But if you have an OpenGL-based
CIContext, you can also render
to and from OpenGL textures.
And now this year if you
have a Metal-based CIContext,
you can also render to
and from Metal Textures.
Previously, without this
support you would have had
to convert a Metal Texture to
one of the existing image types,
which would have likely
incurred an expensive data copy
between the CPU and GPU.
With proper support
we can render to
and from these resources
very efficiently.
Let's take a look at some of
the new APIs we have available
for Metal support in Core Image.
The first is an API that allows
you to initialize a CI image
with an input Metal Texture as
well as an optional dictionary
where you can specify things
such as the color space
that the texture
is tagged with.
That's an example of
one of the advantages
of using a higher-level
framework such as Core Image:
it will take
care of details
such as color management
automatically for you.
Then, in order to do rendering
with these Metal-based
resources, you will want
to create a new CIContext that
is a Metal-based CIContext
by giving it the Metal device
your application is using.
And, again, you can specify an
options dictionary for things
such as the working color
space or working format
for intermediate buffers,
or you can even declare
that you want to use
a low-priority GPU.
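
In current Swift spelling, those two pieces look roughly like this; `inputTexture` is assumed to be an MTLTexture you already own, and the color-space choices are illustrative.

    import CoreImage
    import Metal

    let device = MTLCreateSystemDefaultDevice()!

    // Wrap an existing Metal texture as a CIImage, tagging it with a color space.
    let inputImage = CIImage(mtlTexture: inputTexture,
                             options: [.colorSpace: CGColorSpaceCreateDeviceRGB()])!

    // Create the Metal-backed CIContext once, optionally configuring its working space.
    let context = CIContext(mtlDevice: device,
                            options: [.workingColorSpace: CGColorSpaceCreateDeviceRGB()])
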
In any case, with this
new Metal-based CIContext,
we have the new render
API that allows you
to render any CI image to
an output Metal Texture.
And one of the nice
features I want to call
out about this API
is the ability
to specify an optional
command buffer.
If you want things nice and
simple, you can specify nil
and in that case Core Image
will create one internally
and encode all the
necessary commands to it
and then commit it
before returning,
which will then effectively
schedule
that render call on the GPU.
But you can also provide a
command buffer to that call,
and in that case Core Image will
merely encode commands to it
and return without
committing it.
What that gives you is full
control on how you want
to schedule your command buffer
for rendering on the GPU as well
as the flexibility to
insert CI filters anywhere
into a command buffer.
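
A sketch of both calling styles, assuming `context`, `filteredImage`, `outputTexture`, and `commandQueue` already exist.

    // Pass nil and Core Image creates, encodes, and commits a command buffer itself.
    context.render(filteredImage,
                   to: outputTexture,
                   commandBuffer: nil,
                   bounds: filteredImage.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())

    // Or pass your own: Core Image only encodes its commands and returns,
    // and you decide where the filter lands and when to commit.
    let commandBuffer = commandQueue.makeCommandBuffer()!
    context.render(filteredImage,
                   to: outputTexture,
                   commandBuffer: commandBuffer,
                   bounds: filteredImage.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    // ... encode more of your own Metal render commands here ...
    commandBuffer.commit()
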
So let me explain that in
a little bit more detail.
For those who are new to Metal,
rendering with Metal basically
involves encoding a series
of render commands
to a command buffer.
In this case, we have
two sets of commands.
And with that new API that we
just saw, you can now insert
that CI filter anywhere into
this command buffer such as
at the very beginning or at
the very end, or even right
in the very middle between
these two render commands.
So you can imagine this might
be a situation where you want
to do some draw calls
and render to some texture
and then feed the texture
into a series of CI filters,
generate some output
texture from that,
and do more rendering with it.
And internally, Core Image will
then encode all the commands
for each filter you may
have in your image graph.
And in fact, as David
mentioned earlier,
some of our built-in filters
will use Metal performance
shaders to take advantage
of these highly optimized
shaders specifically tuned
for Metal-capable devices.
And lastly, I want to
mention that this type
of calling convention lends
itself perfectly to be able
to use CI to render
directly to a MetalKit view.
So to explain that further,
I would like to show
you sample code.
So this is sample code that you
would have to write if you were
to create a new application
based
on the new MetalKit framework.
The first thing you need to
do is a couple of things
when you want
to set up the view.
The first key thing is
to set the framebufferOnly
property on that view to False,
which will allow Core Image
to use Metal compute shaders
to render to that
view's output texture.
The next thing you want to do
is initialize the CIContext
with a Metal device.
You want to do that here because
initializing a CIContext is
something you only want to
do once in an application.
Then, in the Draw and
View Delegate function,
this is the code you would
need to write in order
to render some CI
filters through that view.
So let me step you
through this line by line.
First thing is, you
create a command buffer
that will eventually be used
to present the drawable with.
Then we are going to
initialize a CI image
with some given input
Metal Texture.
Now, this CI image could come
by other means, for example,
some of the other image types
we have, like a CGImage.
But in this case we will just
show you how to use our new API.
But then once you
have a CI image,
you can chain together a
series of CI filters to it.
In this case, we will apply
a CI Gaussian Blur filter.
Then, once you have your CI
image that you want to render,
you want to grab the
texture currently bound
to that view's current
drawable and render the CI image
to that texture with the command
buffer we want to use here.
Then finally, once we have
encoded the render commands
there is one more Metal
command that you need to insert
into the command buffer,
and that's to present the
view's current drawable.
And then you just call
Commit on that buffer.
That is how easy it is
to integrate some
Core Image filters
into a MetalKit application.
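
Pulling those steps together, here is a hedged sketch of the whole thing in current Swift spelling; the Gaussian blur, the `sourceTexture` property, and the class name are all illustrative.

    import MetalKit
    import CoreImage

    final class CIRenderer: NSObject, MTKViewDelegate {
        let device = MTLCreateSystemDefaultDevice()!
        lazy var commandQueue = device.makeCommandQueue()!
        lazy var ciContext = CIContext(mtlDevice: device)   // create once, reuse every frame
        var sourceTexture: MTLTexture?                       // assumed to be filled in elsewhere

        func configure(_ view: MTKView) {
            view.device = device
            view.framebufferOnly = false   // lets Core Image write to the drawable's texture
            view.delegate = self
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let drawable = view.currentDrawable,
                  let texture = sourceTexture,
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }

            // Wrap the input texture, chain a filter, render into the drawable's texture.
            let input = CIImage(mtlTexture: texture, options: nil)!
            let blurred = input.applyingFilter("CIGaussianBlur",
                                               parameters: [kCIInputRadiusKey: 10])

            ciContext.render(blurred,
                             to: drawable.texture,
                             commandBuffer: commandBuffer,
                             bounds: CGRect(origin: .zero, size: view.drawableSize),
                             colorSpace: CGColorSpaceCreateDeviceRGB())

            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }
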
So next I would like to talk
about bridging Core
Image and AV Foundation.
So with the latest changes
we have this year in both
of these frameworks, it is now
easy to add Core Image filters
to your AV Foundation app,
and that's because Core Image
is now conveniently integrated
with the AVVideoComposition
class.
And by default you would get
automatic color management,
but if you don't need
it, you can disable it.
So we will take a look at
a couple of examples on how
to apply CI filters to videos.
First in the context
of exporting the video
and next during live
playback of a video.
So for the purpose of
demonstrating these examples,
we are going to use a filter
that we showed you a
couple of years ago at WWDC.
And this is a filter where for
every frame of the video image,
we are going to first apply a
sepia tone filter to it along
with some random noise, and then
finally some vertical scratches
overlaid on top of it.
For those who may recall,
this is the old film filter
that we showed you from a
couple of years ago at WWDC.
This filter is very
straightforward,
all it takes is a single
input image as well
as an input time parameter,
which will allow you
to apply the effect to the
video in a repeatable way
with deterministic results.
So let's get back to how we
would apply this filter during
exporting of that video.
What you need to do first is
create a filtered composition,
giving it the AV asset that
you want to export as well
as a callback block in which
you can specify a filter recipe
to be applied as each frame of
the video is being rendered.
And from this callback block,
we will get a request object
as an input parameter from which
you can get the composition time
as well as the source image
for chaining together
your CI filters.
And then once you have
your filtered CI image,
you then call Finish With
Image on the request object.
You can pass in a nil
context to that call,
and the AVVideoComposition would
create a CIContext by default,
which, as I mentioned earlier,
will get automatic
color management.
If you want to disable
that, all you need
to do is create a
CIContext on your own
and specify a null color
working space and pass that into
that Finish With Image call.
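
A sketch of that flow, assuming `asset` is the AVAsset being exported and `oldFilmFilter(_:time:)` is your own function implementing the filter recipe.

    import AVFoundation
    import CoreImage

    let composition = AVVideoComposition(asset: asset) { request in
        // The request hands us the frame's source image and its composition time.
        let time = CMTimeGetSeconds(request.compositionTime)
        let filtered = oldFilmFilter(request.sourceImage, time: time)

        // Passing a nil context lets AVFoundation create a color-managed one for us;
        // pass a CIContext created with a null working color space to opt out.
        request.finish(with: filtered, context: nil)
    }
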
Now, that filter we just showed
you is a pretty simple filter
that has no convolution
filters involved.
But in the case where you
do have convolution filters,
one thing you want to watch out
for is an undesirable result
where you get clear
pixels bleeding
into the edges of that image.
To fix that, we have a pretty
simple recipe that we use
in a lot of cases
such as that one.
The first thing you want to
do, with the source image
that you want
to apply the convolution filter
to, is call
image by clamping to extent.
It will edge replicate all
of the pixels along the edge
of that image to infinity.
And by doing that, you will
no longer have the problem
of clear pixels bleeding
into the image
as you are applying the filter.
Because by doing that you end
up with an infinite image,
at the very end of applying the
filter you want to call image
by cropping to rect in
order to crop that image back
to the source image's extent.
By applying that simple recipe,
you will get a much cleaner look
with nice, crisp, sharp
borders on the edge.
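
In current Swift spelling, that recipe is just two extra calls around the convolution; `sourceImage` and the blur radius are illustrative.

    // Clamp the source to an infinite extent so the convolution doesn't pull in
    // clear pixels at the edges, then crop back to the original extent afterwards.
    let clamped = sourceImage.clampedToExtent()
    let blurred = clamped.applyingFilter("CIGaussianBlur",
                                         parameters: [kCIInputRadiusKey: 12])
    let result = blurred.cropped(to: sourceImage.extent)
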
So once we have that
AVVideoComposition,
if we want to export the video
we create an export session.
You do that by creating an
AV asset export session
and specifying an output URL
location to which you want
to export, as well as
the video composition
that we just created.
And one thing to keep in
mind is you want to call
Remove Item at URL
to remove any item
that might already exist
at that output location.
Once you have done that,
then you can call
Export Asynchronously
on that export session, which
will then kick off a process
to export that video and
apply all of the CI filters
to every single frame
of your video.
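
A sketch of that export path; the output URL and preset are illustrative, and `composition` is the AVVideoComposition built above.

    let exportURL = URL(fileURLWithPath: NSTemporaryDirectory() + "filtered.mov")
    try? FileManager.default.removeItem(at: exportURL)   // remove any stale file first

    let export = AVAssetExportSession(asset: asset,
                                      presetName: AVAssetExportPresetHighestQuality)!
    export.outputURL = exportURL
    export.outputFileType = .mov
    export.videoComposition = composition

    export.exportAsynchronously {
        // Every frame has now been pulled through the CI filter chain.
        print("export finished with status:", export.status.rawValue)
    }
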
If you wanted to update
some progress on your UI
to show the progress
of that export,
you can use the Composition Time
parameter in your callback block
to update such a UI element.
So now that was exporting.
For playing back an AV asset,
the code that you would need
to write is actually
very similar.
Creating the video composition
is exactly the same code we
saw earlier.
The only difference
now is instead
of creating an export session,
you need to create
an AVPlayerItem
with that AV asset along
with the video composition
we just created
and then create an AVPlayer
with that player item
and then call Play
on your player.
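
A sketch of the playback path with the same composition.

    // Same asset and AVVideoComposition as before, but played live instead of exported.
    let playerItem = AVPlayerItem(asset: asset)
    playerItem.videoComposition = composition

    let player = AVPlayer(playerItem: playerItem)
    player.play()
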
So I'm going to now show you
a video of how we are applying
that old film filter to an
AV asset during playback.
So one thing to notice here is
as you are scrubbing
back this video,
you can see the same
effect being applied
in a repeatable way with
deterministic results.
So that is Core Image
and AV Foundation interoperating
together very efficiently.
Next, I would like
to call up Alex
to tell you a little bit more
about Core Image providers.
Thank you.
[ Applause ]
>> ALEXANDRE NAAMAN:
Thank you, Tony.
My name is Alexandre
Naaman and I will talk
about Core Image providers
and then we will talk
about more APIs and SDKs
we have on the system
and how they can
work with Core Image
to create interesting
applications.
Let's start with
CIImageProvider,
which is a category
we had on CI image
that existed previously just on
OS X but now exists also on iOS
as part of our unified
implementation.
It's a great way for you
to bring input images
into the system that you
wouldn't be able to bring in otherwise.
So, for example, if
you had a file format
that wasn't supported
and you wanted
to somehow create a CI image
that was based on that,
or if you had data that was
streaming from some site
and you wanted to
create a CI image,
you could use a CIImageProvider.
They are implemented
via callbacks.
It's all done lazily, and we
will call you and tell you
when we need to fill in the data
and you get automatic tiling
and we handle the purgeability
and caching for you.
Let's take a look
at how this is done.
First things first, you
will create your own class.
In this case, we will create
one called tile provider.
And then we create a CI image
with that tile provider,
and in addition to that,
we give it the size
of the image we are trying to
create, whatever format we would
like to use to create for this
image, a color space optionally,
and in this case we
will give the tile size
in the options dictionary.
Now, in order to use this,
all we have to do is implement
a method called Provide Image
Data, and Core Image
will call you and ask you
to fill in this information.
And you have to fill
in that data pointer
with a given row bytes value,
a certain location in X and Y,
width and height, and you
can tag some user info
if you would like as well.
That's all you need
to do in order
to implement your
own image providers.
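
A sketch of such a provider; the exact Swift spellings of the informal provideImageData selector and of the CIImage initializer have shifted across releases, so treat the signatures, sizes, and tile size below as illustrative.

    final class TileProvider: NSObject {
        // Core Image calls this lazily, once per tile it actually needs.
        @objc func provideImageData(_ data: UnsafeMutableRawPointer,
                                    bytesPerRow rowbytes: Int,
                                    origin x: Int, _ y: Int,
                                    size width: Int, _ height: Int,
                                    userInfo info: Any?) {
            // Fill `data` with width x height pixels for the requested tile,
            // honoring `rowbytes`, e.g. by decoding your custom format here.
        }
    }

    let provider = TileProvider()
    let image = CIImage(imageProvider: provider,
                        size: 8192, 8192,
                        format: .BGRA8,
                        colorSpace: nil,
                        options: [.providerTileSize: 512])
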
Now, let's talk about
the various view classes
that we have that you can use
with Core Image on
both iOS and OS X.
So we have a broad spectrum
of support for rendering
with Core Image on
a system ranging
from the very high level,
such as UIImageView,
which makes it very easy
to render an image
that's had a Core Image
effect applied.
And going to the
much more low-level
and potentially higher
performance APIs such as GLKView
or MTKView, which give
you more fine-grained control
over what you are doing.
So let's take a look
at UIImageView.
UIImageView is probably
the simplest way
to display a CI image on iOS,
and all you have to do is
on your UIImageView, set the
Image property to a UI image --
in this case, one
based on a CI image.
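
A sketch of that path, assuming `photo` is a UIImage and `imageView` is your UIImageView.

    // Simple but not the fastest path: the CIImage is rendered on the CPU
    // and the result is uploaded back to the GPU for display.
    let filtered = CIImage(image: photo)!
        .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 0.9])
    imageView.image = UIImage(ciImage: filtered)
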
The problem is, although
this is very easy to use,
it's not the most
high-performance method
of doing so.
So what ends up happening is
we have to render that back
onto the CPU and send
it back up to the GPU,
so it's not as efficient
as possible.
And if we take a look
at a simple example,
in this case we will run
a pixelate filter using
a UIImageView.
We see that we get about
20 frames per second
on a Retina-sized image with
this effect being applied.
Now, if we switch to
an OpenGL ES-based view
and apply the same filter,
we can see that now we
get 48 frames per second.
And if we go one step further
and do a Metal-based view,
we get slight improvement here.
We get 52 frames per second.
And although this isn't a huge gain,
we are only applying one filter,
so the advantages that
we get aren't as noticeable
as they would be if we had
many filters being applied
or if we had a bunch
of smaller renders.
But the basic idea is there.
So now, let's take a look at
Core Image and Core Animation
and how we can make
those work well together.
This is one of the few instances
where we do have differences
between iOS and OS X.
On OS X, we just have to
do two things
in order to use Core Image
and Core Animation together.
First things first, on
your NSView, all you need
to do is set
layerUsesCoreImageFilters
on the view to True and then
optionally specify an array
of filters you would
like to be applied
to whatever layer you have.
And that's really
all you need to do.
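
A sketch of those two steps on OS X, assuming `view` is your NSView; the pixellate filter is just an example.

    // OS X only: let the layer tree evaluate Core Image filters directly.
    view.wantsLayer = true
    view.layerUsesCoreImageFilters = true

    let pixellate = CIFilter(name: "CIPixellate")!
    pixellate.setValue(8, forKey: kCIInputScaleKey)
    view.layer?.filters = [pixellate]
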
On iOS, we don't
have that support,
so instead what you could
do is use OpenGL directly.
You can do that either
by deriving from GLKView
or by creating a
UIView and ensuring
that you override the
layerClass method
and return
CAEAGLLayer.self.
And when you do that, then
you get a GL ES-based object
that you can use to
create your CIContext.
And that will ensure that
you get optimal performance.
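
A sketch of the UIView route in current Swift spelling; the class name is illustrative, and the CIContext is created once and kept.

    import UIKit
    import OpenGLES
    import CoreImage

    class FilteredGLView: UIView {
        // Back the view with a CAEAGLLayer so Core Image can render with GL ES.
        override class var layerClass: AnyClass { return CAEAGLLayer.self }

        let eaglContext = EAGLContext(api: .openGLES2)!
        lazy var ciContext = CIContext(eaglContext: eaglContext)   // create once and reuse
    }
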
All of that is great, but
one thing you need to keep
in mind is that if you want
to get great performance,
it's not just a question
of using the best API,
but using it efficiently.
And in this case, the number one
thing you have to remember is
to only create your
CIContext once because that's
where the caching takes place
and a bunch of state is held.
So keep that in mind when you're
using the lower-level APIs.
Now, I would like to talk
about Core Image on IOSurface.
Internally, within the
Core Image implementation,
we use IOSurface extensively.
We love it as an API because
it provides us with a bunch
of functionality
that doesn't exist
with any other API
on the system.
So mainly we have
great purgeability,
some locking semantics so
we can get data in and out
of IOSurfaces, and it's a great
way to move data between CPU
and GPU and vice versa.
We have an incredibly
broad spectrum of support
for different formats,
we think probably some
of the best on the
entire system.
For example, we have 420, 444,
RGBA half float,
and many others.
Now, on iOS, as a
developer it's difficult
to use IOSurface directly,
but you can inform Core Image
that you would like
to use IOSurface
by creating Pixel Buffers
that have the
kCVPixelBufferIOSurfacePropertiesKey
specified.
When you do that, so if
you create a CIImage
from a CVPixelBuffer
that has this key,
what ends up happening
internally is
that Core Image knows that it's
an IOSurface-backed CV Pixel
Buffer and we can render
as efficiently as possible.
So, this is something to
keep in mind if you want
to get all the benefits
of IOSurface on iOS.
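
A sketch of creating such a buffer and handing it to Core Image; the dimensions and pixel format are illustrative.

    import CoreVideo
    import CoreImage

    // Ask for an IOSurface-backed pixel buffer so Core Image can move the data
    // between CPU and GPU without extra copies.
    var pixelBuffer: CVPixelBuffer?
    let attributes = [kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary]
    CVPixelBufferCreate(kCFAllocatorDefault,
                        1920, 1080,
                        kCVPixelFormatType_32BGRA,
                        attributes as CFDictionary,
                        &pixelBuffer)

    // Core Image recognizes the IOSurface backing and renders as efficiently as possible.
    let image = CIImage(cvPixelBuffer: pixelBuffer!)
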
Next I'd like to talk
about a few other APIs.
We will go over examples of how
we can actually use Core Image
and STKs together to create
sample applications very simply.
So we are going to start
off with SpriteKit.
If we start in Xcode and
create a new application,
we choose Game, and
then we will choose
as a game technology SpriteKit.
And if we just build and run,
we will get this application,
where as you tap on the screen
you get new ships showing up,
and you can see here we are
getting 60 frames a second.
We can now with a
very small amount
of code add Core Image
to this application.
So in this case, we will go
and modify the Touches
Began method inside
of GameScene.swift, and
initially what was happening was
for every tap it was adding
that sprite to the root node.
We will modify that a bit, and
we will use an SK effect node.
An SK effect node renders its
entire contents into a buffer,
which you can then apply
a series of filters to.
So we put in an SK effect node.
Instead of adding our sprite
that we had earlier to the root,
we will add it to the effect.
We will say we want to enable
some effects, we are going
to create a filter, in
this case we are going
to use a pixelate filter,
which is the same one
we were viewing earlier.
And then we will add
that effect to the root.
That is all we need to do.
This is the exact code you
would write if you wanted
to add Core Image to a
SpriteKit application.
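
A sketch of the change described above, assuming `sprite` is the node that used to be added to the scene's root and `scene` is the GameScene.

    import SpriteKit

    // Route new sprites through an SKEffectNode so a Core Image filter applies
    // to everything rendered beneath it.
    let effectNode = SKEffectNode()
    effectNode.shouldEnableEffects = true

    let pixellate = CIFilter(name: "CIPixellate")!
    pixellate.setValue(16, forKey: kCIInputScaleKey)
    effectNode.filter = pixellate

    effectNode.addChild(sprite)    // the sprite that used to go on the root node
    scene.addChild(effectNode)
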
And if we now run that exact
same sample that we had
and start tapping away, we get
beautifully pixelated sprites
in our application and running
at pretty much the
same frame rate.
Now, let's talk a little
bit about SceneKit.
Same idea.
We will create an
application from scratch,
and we will choose SceneKit
as a game technology.
If we just build and run this
app, we get this spaceship
that just rotates around
at interactive rates.
Now, if we want to add Core
Image to this application,
all we have to do is go to
the View Did Load method
in GameViewController.swift,
find the ship,
which is a line
in the sample code.
We then create, once
again, the pixelate filter,
and we set an
array of filters on the ship.
If we do that and run, we get
a beautifully pixelated ship.
So you can apply this to any
node in your scene, and, again,
we get great frame rate.
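
A sketch of that change in GameViewController; the node name and filter parameters follow the template described above, but treat them as illustrative.

    import SceneKit

    let pixellate = CIFilter(name: "CIPixellate")!
    pixellate.name = "pixellate"                 // the name becomes the animation key path
    pixellate.setValue(16, forKey: kCIInputScaleKey)

    // Attach the filter to any node; SceneKit renders the node into a buffer
    // and runs the filter over it.
    let ship = scene.rootNode.childNode(withName: "ship", recursively: true)!
    ship.filters = [pixellate]
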
One of the big advantages
of using SceneKit alongside
Core Image is you can animate
properties with Core Animation.
So in this case,
we are just going
to create a CA basic
animation, and we are going
to animate the input
scale, so we are going
to get a varying scale pixelate
effect that will be applied
over time going from
a value of 0 to 50.
It will ease in and ease out
over the course of two seconds.
If we add this code,
we then get our ship
with a beautifully
animated pixelate effect.
And, again, great frame rates.
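
A sketch of that animation, keyed off the filter name we set above.

    // Animate the filter's inputScale from 0 to 50 over two seconds, easing in and out.
    let animation = CABasicAnimation(keyPath: "filters.pixellate.inputScale")
    animation.fromValue = 0
    animation.toValue = 50
    animation.duration = 2
    animation.autoreverses = true
    animation.repeatCount = .infinity
    animation.timingFunction = CAMediaTimingFunction(name: .easeInEaseOut)
    ship.addAnimation(animation, forKey: "pixellateScale")
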
Now, this doesn't necessarily
have to be applied to one node.
You can apply it to
your entire scene.
Here we have a sample
that we shipped
that you can download
called Bananas,
where we have applied the same
effect along with animation
and we are changing the pixelate
scale in real time here.
I was surprised that I
could play this game better
when it was pixelated
than when it was full-res.
But you could use this
not just to create games
but to also apply an effect
at the end of the game,
for example, or if you wanted
to have different versions
of your assets rendered
with different image
processing effects,
you could use Core Image
with these APIs together
and it works great.
So, so far today we have seen
a bunch of stuff about how
to use Core Image with
Metal, AV Foundation,
why IOSurface is
so important to us.
We covered the easy way
to use UIImageView
if you are only creating
an image once
and don't need to
constantly update.
It's a great way to
apply an effect once.
We showed you how to use Core
Animation as well, how to bring
in custom data with
CIImageProvider,
and how to use it in the context
of games or other applications
that you can create very simply
with SceneKit or SpriteKit.
For additional information,
we have a bunch
of resources available online
at developer.apple.com,
and for any additional inquiries
you can contact Stephen Chick
at chick@apple.com.
There are other sessions you
may be interested in going to,
including Editing
Movies in AV Foundation,
which took place a few days
ago but which you can watch online,
and What's New in Metal, Part 2,
which took place yesterday.
And on that note, I would like
to thank you all for coming
and I hope you enjoy using Core
Image in your applications,
and I hope you enjoy the
rest of the conference.
Thank you very much!
[ Applause ]