WWDC2014 Session 511

Transcript

>> All right.
Good morning, everyone.
[ Applause ]
Welcome to Session 511.
This is the "Introduction
to the Photos Frameworks",
collectively known as PhotoKit.
And my name is Adam Swift.
No jokes, please.
I'm a software engineer on the
iOS Photos Frameworks Team.
All right.
So, we're really excited
about introducing you
to PhotoKit today.
PhotoKit is comprised
of two frameworks.
The first framework is
the Photos Framework,
which gives you access
to the photos and videos
from the user's photo library.
And you can use it to
build a full-featured app
like the built-in
Photos application.
The second framework is
the Photos UI framework,
and this is a framework
that will allow you
to build a photo editing
extension, app extension,
that can be used right within
the built-in Photos app.
So, in this session we're
going to focus first
on how you can use the
Photos Framework to fetch
and manipulate photo
library model data,
handle external changes, and
both retrieve and edit photo
and video content in
the iOS 8 photo library.
Then we're going to
turn our attention
to the PhotosUI Framework and
walk you through the steps
and the concepts you need
to understand in order
to build your own app extension
for editing photo
and video content.
All right.
So, let's get started
with the Photos Framework.
The Photos Framework is a new
Objective-C framework in iOS 8.
And it allows you to make
your application a first-class
citizen of the system photo
library for everything,
from something simple
like a custom image picker
in your application to going
and building a full-featured
custom application
for browsing the photo
library and editing.
And we really believe in this
because we're using the Photos
Framework in our own Photos
and Camera applications
that are built into iOS 8.
For those of you who have
worked with photos in the past,
this Photos Framework
is designed
to supersede the
ALAssetsLibrary,
which doesn't provide,
[ Laughter and Applause ]
which doesn't provide much of
the functionality and features
that you'll get with
the Photos Framework.
So what do you get with
the Photos Framework?
You get access to
photo and video assets,
albums and moments,
something you didn't get
with the assets library.
And then you can also
add, remove, modify assets
and albums right in the
system-wide photo library.
You can then also edit photo
and video content and metadata
that is written directly into
the system photo library.
We're excited about it, too.
That's okay.
So, we're going to walk through
this API in a couple of steps.
The first section is going
to focus on the model data
and this is going to
walk you through how
to fetch the model objects,
how to change model data,
how to handle external changes.
And then we'll turn our
attention to working
with the image and video
content of those assets,
how to retrieve it efficiently,
and tips for working
with different types of image
and video content, and then how
to go ahead and make changes
and apply those back
to the photo library.
So I'm going to start off
looking at model data.
So, model data in the Photos
Framework is represented
with what we call model objects.
And these represent the
structure of the photo library.
These are the records
that represent photo
and video assets, moments,
albums and folders.
And the key concept I want you
to take away from this section,
one of the key concepts is that
model objects that are presented
in the Photo Framework
are read-only,
and that is a nice benefit
because it means you can work
with those objects in
a thread-safe manner.
You can pass them between
threads without worrying
about their data changing
out from underneath you.
There's no worries about locking
and handling concurrent changes.
So I'd like to walk through
the different model objects
that we provide with an
illustration using the built-in
iOS 8 Photos application.
So we'll start with the assets.
These are the photos and videos
that you see here
in the Photos App.
And then containing assets,
we have at the next level up,
what we call Asset Collections in the Photos Framework.
And in this case we're looking at a view that shows moments.
And a moment is comprised
of a collection of assets.
At the next level above that,
we have what we call
Collection Lists,
and these are lists
of Asset Collections.
And in the case you're
looking at here,
we're seeing the year moment
view, which is comprised
of a series of moments,
which then in turn are
comprised of assets.
Now let's walk through
these Model Objects in terms
of the APIs that
are exposed to you
through the Photos Framework.
The first one we'll
look at is the Assets,
the photos and videos.
And photos and videos
assets are represented
by the PHAsset class.
And as you look through this
UI in the Photos Application,
you can see it's a photo.
That's the media type.
But then at the top
of the screen you can see we're
displaying the creation date
and the location, which are
both properties of the asset.
Down at the bottom of the screen
you can see an icon representing
a heart, that is, whether
that asset has been
marked as a favorite.
And these are all
properties available to you
in the model object, PHAsset.
At the next level
up we're talking
about the Asset Collection
and, again,
we're illustrating
that with the moment.
An Asset Collection is a general
ordered collection of assets,
so there are a couple
of different types.
We've got moments,
like you can see here.
But we also have
albums and smart albums.
The Asset Collection is, unsurprisingly, represented by the PHAssetCollection class,
and it has the properties
of the type, title,
and start and end date.
And you can see these
properties reflected in the UI.
Going one level higher, we
have the Collection List.
And a Collection List
is an ordered collection
of collections.
And in the case we're looking at
here, we've got a moment year.
But you can also have
a Collection List
that represents a folder.
And something that's
unique about a folder
versus a moment year type
of Collection List is
that a folder can
actually contain subfolders
as well as albums.
So that's why we say an ordered
collection of collections
where those collections
might be an Asset Collection
or a Collection List.
The Collection List class
is the PHCollectionList and,
similar to the Asset
Collection, has a type,
title and a start and end date.
So when you want to work
with these model objects,
you need to get them out
of the photo library.
So, in order to get
these model objects
out of the photo library, the
key concept here that I want you
to focus on in this
section for fetching,
is that you use the class
methods on the model object
that you want to fetch out.
So when you want to
fetch out assets - say,
you wanted to fetch all
of the video assets -
you look for the class
method on the PHAsset class,
fetchAssetsWithMediaType,
specifying the media type video.
Similarly, as I said before,
a moment is a type
of Asset Collection.
So to fetch moments,
we use a class method
on the PHAssetCollection class, fetchMomentsWithOptions:.
Now in both of these examples
I've omitted the options,
but they give you an opportunity
to do things like filter
or sort the results
that you get back.
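In code, those two fetches look roughly like this (a sketch; the creationDate sort descriptor stands in for whatever options you might pass, and the sketches throughout assume the Photos framework is imported):

```objc
#import <Photos/Photos.h>

// Fetch all video assets, sorted by creation date via the options.
PHFetchOptions *options = [[PHFetchOptions alloc] init];
options.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"creationDate"
                                                          ascending:YES]];
PHFetchResult *videos = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeVideo
                                                  options:options];

// Fetch all moments.
PHFetchResult *moments = [PHAssetCollection fetchMomentsWithOptions:nil];
```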
So the next key concept
I want to cover as far
as collections are
concerned is that collections
in the Photos Framework
runtime don't actually cache
their contents.
[ Cough ]
Excuse me.
So what that means is that
in order to find the assets
that live in an album you need
to perform a fetch similar
to the ones that we just saw
for general purpose fetching.
So here's an illustration
of that.
To fetch the assets in an
Asset Collection represented
by an album, we're getting back
assets, so we use a class method
on the PHAsset class, fetchAssetsInAssetCollection:, passing myAlbum.
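Put together with a fetch for the album itself, a minimal sketch looks like this:

```objc
// Fetch the user's albums, then the assets inside the first one.
PHFetchResult *albums =
    [PHAssetCollection fetchAssetCollectionsWithType:PHAssetCollectionTypeAlbum
                                             subtype:PHAssetCollectionSubtypeAny
                                             options:nil];
PHAssetCollection *myAlbum = albums.firstObject;
PHFetchResult *assetsInAlbum = [PHAsset fetchAssetsInAssetCollection:myAlbum
                                                             options:nil];
```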
Now the next construct
that I want
to cover here is something we
call a Transient Collection.
And this is a really useful
construct that we've made a lot
of use of in the photos
application where we wanted
to represent a collection
that isn't represented
in the actual photo library.
An example of that is a search
result or a user selection
as they're picking
off items in the UI.
And by taking that selection
of items or model objects
and putting it into a
Transient Collection,
the benefit is the Transient
Collection is interchangeable
with the regular collections
that you might fetch
out of the photo library.
So you can do things like reuse
your existing view controllers
or fetch the contents of
a Transient Collection
without worrying about whether
it's represented by something
that really exists
on the photo library
or if it's just something
that your user is working
with in the runtime.
To create a Transient Collection
there's a class method
on the Asset Collection class
to create a Transient
Asset Collection.
You could also create a
Transient Collection list.
But to create a Transient
Asset Collection we call
transientAssetCollectionWithAssets:title: and simply pass
in an array of assets as
well as an optional title.
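As a sketch, assuming selectedAssets is an NSArray of PHAsset objects your user picked:

```objc
// Wrap an arbitrary selection in a transient collection...
PHAssetCollection *selection =
    [PHAssetCollection transientAssetCollectionWithAssets:selectedAssets
                                                    title:@"Selection"];

// ...and fetch from it exactly as if it were a fetched album.
PHFetchResult *result = [PHAsset fetchAssetsInAssetCollection:selection
                                                      options:nil];
```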
So what are you getting back
when you perform these fetches?
Most of the time when you're
writing an application to work
with photos, you're going
to be reading those objects,
looking through them,
working with them.
So we wanted to make it
as simple as possible
to get those objects out and
to work with them directly.
So you want to get synchronous results
and you want them quickly.
But, especially when you think
about iCloud Photo Library,
the results of the fetch could
be potentially very large,
in the tens of thousands
of assets or even more.
So it's really important
that you don't fetch all those
objects into memory at once.
And you really want to
work with them in batches
as you're working with them
or displaying them on screen.
But we didn't want to put
that complexity on you.
So what we did is we've
introduced a class
to represent the
objects that you get back
from a fetch called
a PHFetchResult.
And the PHFetchResult takes care
of tracking the full results set
by keeping track
of the lightweight
IDs representing each
of the contents of that fetch.
But when you access the contents
of that fetch result to pull one
of the objects out, we'll vend
you a fully realized object
that you can work with.
It's got an API that's familiar
and similar to the NSArray
so it should be easy
to work with.
And I'd like to show you
an example of what it looks
like as you're working
with a fetch result.
So on the slide here I've
shown a fetch result that's
representing, say, a
fetch of 10,000 assets
and as we're iterating
through that result set,
pulling out assets, as we
access one of the indexes,
we'll behind the scenes pull
in a full batch to represent
that object as well
as many of the others.
And as you iterate through over
that batch to the next batch,
we'll pull in the
next batch of objects
but release the previous ones
so you don't wind up holding
onto all that in memory.
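In practice that means you can treat a fetch result much like an array (a sketch):

```objc
PHFetchResult *allPhotos = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeImage
                                                     options:nil];

// NSArray-like access; objects are realized in batches behind the scenes.
NSUInteger count = allPhotos.count;
PHAsset *firstAsset = allPhotos.firstObject;
[allPhotos enumerateObjectsUsingBlock:^(PHAsset *asset, NSUInteger idx, BOOL *stop) {
    NSLog(@"Asset %lu: %@", (unsigned long)idx, asset.localIdentifier);
}];
```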
So now that you've got the
[applause] - ah, thanks.
Now that you've got model
objects that you're working
with in your application, your user will oftentimes want to make changes to those objects.
So I want to take a look at
how you can make model changes
with those model
objects to support things
like the user wanting to
favorite a photo or add a photo
to an album because,
as I said earlier,
the model objects
themselves are read-only
so you can't mutate
them directly.
So the way we express changes is
through a Change
Request API that's based
on change request
classes that you create
in a change request block and
then hand to the photo library
so that it can apply those
changes asynchronously
out of process.
And the reasons those changes
are applied out of process
and asynchronously is twofold.
The first reason is,
for some operations,
it may be very expensive
or processor-intensive
or just take a long time.
But even more importantly than
that, because you're working
with the user's actual
photo library,
some of those actions may
require user authentication.
For example, if you
wanted to delete 10 photos,
the photo library's going
to prompt the user to say,
"Is this application allowed
to delete these 10 photos?"
to allow the user to have
a chance to confirm that.
So let's take a look at
the change request classes.
It's actually pretty simple.
For each model object class,
there is a change request class.
We've got the PHAssetChangeRequest, the PHAssetCollectionChangeRequest, and the PHCollectionListChangeRequest.
And each of these change
request classes provide model
object-specific APIs to allow
you to perform the types
of changes that they allow.
So the example I'm showing here
is for the asset change request
where you can set the
creation date for an asset
or set it as a favorite.
Some other things to understand
about the change
request classes is
that they are not subclasses
of the model classes.
And there's a good
reason for that.
We wanted to provide a
really clear separation
between the thread-safe,
immutable model objects
and the objects that
express mutations.
That's the change
request classes.
These change request classes
are also only valid to work
within a change request block.
So I'd like to illustrate
that with an example now.
So here's an example where I've
provided a sample implementation
of a method to toggle whether
an asset has been marked
as a favorite.
So the first thing we'll do
in this method is ask
the shared photo library
to perform some changes,
and we'll pass
in a change request block here.
First step in that block is
to create a change request
for the asset that we were
provided in the method.
And we're using the PHAssetChangeRequest class here, as you can see.
And then the last
step is simply to set
on the change request the value
of Favorite to the inverse value
of what the asset currently has.
That's enough for the block to
get passed to the photo library
and those changes to be
performed on your behalf.
After those changes are
performed we'll call back the
completion handler and let
you know if it was successful
in performing those changes.
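Reassembled, that whole method looks roughly like this:

```objc
- (void)toggleFavoriteForAsset:(PHAsset *)asset {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        // Change requests are only valid inside this block.
        PHAssetChangeRequest *request =
            [PHAssetChangeRequest changeRequestForAsset:asset];
        request.favorite = !asset.favorite;
    } completionHandler:^(BOOL success, NSError *error) {
        NSLog(@"Finished updating favorite. Success: %d", success);
    }];
}
```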
So one other aspect of
working with change requests
that you'll want to do is
to create new model objects.
And you can create
new model objects
with these same change
request classes
but creating a different kind
of change request using
a creation request.
So in this case I've
shown as an example
where you can create a new asset
from an image using the
creationRequestForAssetFromImage
class method.
And if all you want to
do is create a new asset
from an image, you're done.
You just make this call
within a change request block
and the library will take
care of adding the asset.
But if you want to
work with that asset
within that change request block
to do some additional work, say,
to add it to an album, well,
remember that the asset
won't actually be created
until the work is
performed out of process.
So within the same change
request block you can access a
placeholder object from the
change request representing
that new unsaved object.
And then you can use that
placeholder to add the new asset
to a collection or even
potentially a new placeholder
for a collection.
One other thing you can get
from a placeholder object is you
can access the localIdentifier
which is a unique string
that you can use later
in your application to
fetch that object back even
on another invocation.
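A sketch pulling those pieces together, assuming image is a UIImage and myAlbum is a fetched PHAssetCollection:

```objc
__block NSString *localIdentifier = nil;
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    // Request creation of a new asset from the image...
    PHAssetChangeRequest *createRequest =
        [PHAssetChangeRequest creationRequestForAssetFromImage:image];

    // ...and use its placeholder to add the unsaved asset to an album.
    PHObjectPlaceholder *placeholder = createRequest.placeholderForCreatedAsset;
    PHAssetCollectionChangeRequest *albumRequest =
        [PHAssetCollectionChangeRequest changeRequestForAssetCollection:myAlbum];
    [albumRequest addAssets:@[placeholder]];

    // The localIdentifier can be used later, even on another invocation.
    localIdentifier = placeholder.localIdentifier;
} completionHandler:^(BOOL success, NSError *error) {
    if (success) {
        PHAsset *saved =
            [PHAsset fetchAssetsWithLocalIdentifiers:@[localIdentifier]
                                             options:nil].firstObject;
        NSLog(@"Created asset: %@", saved);
    }
}];
```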
So what happens to
these changes?
Well, the changes are done
when the completion
handler is invoked.
But the model objects
are read-only
and they aren't automatically
refreshed or modified
out from underneath you.
But more important than these
two things when we're talking
about changes, there's a really
important thing to understand.
The changes that you requested
may incur side effects
or may be influenced by
external changes coming
in from elsewhere.
In fact, there are a lot
of sources for change
into the photo library.
There's your application.
There's the built-in
application.
There's iCloud Photo Library,
Cloud Photo Sharing
and My Photo Stream.
And all of these things can
have an impact on the data
that you're working with.
So what we recommend is
that, instead of trying
to take the result of a change
in your completion handler,
you work with a single
change notification system
to handle changes
there and reflect them
to your user that way.
So what happens is,
anytime there is a change
to the photo library,
we'll publish a PHChange
to registered observers.
And that change is delivered
on a background queue
and it provides details on
updated and deleted objects.
But where it begins to
get really powerful is
when you're working
with fetch results,
where it can provide you
details on what indexes
and objects were inserted,
updated, deleted or moved
that apply to that fetch result.
And I want to dig
a little deeper
into the fetch result
change details.
When you create a fetch
result by performing a fetch
for a collection or any
other general-purpose fetch,
the fetch result will
implicitly register itself
with the photo library
for calculating changes.
And all those changed details
are calculated in the background
so they don't run while
your user is interacting
with your application.
If you're fetching a fetch
result and you're not interested
in change details, you can opt
out of the difference
calculation via one
of those fetch options.
But if you do want the details,
if you want those incremental
differences, then it's important
that you get the updated fetch
result for the fetch result
that you started with from
the PHFetchResultChangeDetails
object.
And I'm going to walk
you through an example
to demonstrate why
that's so important.
When the photo library
receives a change,
if you've registered
yourself as an observer,
we'll call the
PhotoLibraryDidChange method
and pass you the
PHChange instance.
In this case, in this
example, which by the way,
this example is taken from the
sample code that's provided
with this session, so I've
omitted some of the details
to make it a little
clearer on stage,
but all of this is
available in the sample code
that you can download
from the WWDC website.
Anyway, getting back to that.
The first thing we're
going to want to do
in our change handler here is
to dispatch to the main queue.
And the reason for that is
because we're going to work
with our user interface
objects, our view controllers,
and you know it's only safe to
access those on the main queue.
So once we've dispatched
to the main queue we'll
ask the change instance,
"Were there any changed
details for the fetch result
that we are working with
in our view controller?"
And in this case, we've got -
I've represented self.assets.
That's a fetch result that we're
presenting in a collection view.
So if there's a change that
affects the collection view,
the contents of that
collection view,
we want to know what
those details are.
So we'll ask the change instance
for those change details.
And if there were any change
details then we know we need
to update our fetch result based
on what those changes were.
And the way to get
those details,
or to get that updated fetch
result as I was saying,
is to get it directly from
the change details object.
The change details have
already done that work for us,
and that way we're sure that the
other details we're going to ask
for about inserts, updates
and deletes are going
to exactly match up to the new
result that we're working with.
So on the second slide here,
I'm going to show you how
nicely this works with an object
like the collectionView.
So now we can tell
the collectionView
to perform a batch of updates and walk through the change details to see whether there were removed indexes.
If there were, we'll
translate them into index paths
that the collectionView
can understand and tell it
to delete the items
at those paths.
Similarly, we'll ask
the change details,
were there inserted indexes
in that fetch result?
If there were, we'll tell
the collectionView again
to insert items at those paths.
And same with changed indexes
that affect the fetch results.
We'll ask the collectionView to reload those items.
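Condensed from the sample code, the whole handler looks roughly like this (assuming you registered with registerChangeObserver:, self.assets is the PHFetchResult backing the collection view, and indexPathsFromIndexSet: is a small hypothetical helper mapping indexes in section 0 to index paths):

```objc
- (void)photoLibraryDidChange:(PHChange *)changeInstance {
    dispatch_async(dispatch_get_main_queue(), ^{
        PHFetchResultChangeDetails *details =
            [changeInstance changeDetailsForFetchResult:self.assets];
        if (details == nil) {
            return; // Nothing relevant to this view changed.
        }
        // Always adopt the updated fetch result first.
        self.assets = details.fetchResultAfterChanges;

        UICollectionView *collectionView = self.collectionView;
        if (!details.hasIncrementalChanges || details.hasMoves) {
            [collectionView reloadData];
            return;
        }
        [collectionView performBatchUpdates:^{
            if (details.removedIndexes.count > 0) {
                [collectionView deleteItemsAtIndexPaths:
                    [self indexPathsFromIndexSet:details.removedIndexes]];
            }
            if (details.insertedIndexes.count > 0) {
                [collectionView insertItemsAtIndexPaths:
                    [self indexPathsFromIndexSet:details.insertedIndexes]];
            }
            if (details.changedIndexes.count > 0) {
                [collectionView reloadItemsAtIndexPaths:
                    [self indexPathsFromIndexSet:details.changedIndexes]];
            }
        } completion:nil];
    });
}
```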
So now I'd like to take a moment
to show you a demonstration
of the sample code that we've
provided with this session.
So the first thing I'm going
to do before I actually run the
sample app is just give you a
view of the photo library
that we're working with here.
And so I'm running the
iOS 8 built-in Photos app.
And you can see I've got
a series of Smart Albums
in this album list here and, in
particular, I've got nine photos
in my Favorites Library.
Now I'm going to switch over
to the sample application
and we've got a listing here
with a special entry on top,
all the photos in
the Photo Library.
We can take a quick look.
You can see we've got a pretty
good-sized library here.
But we have also got
those Smart Albums.
So I'm going to tap
on the Favorites Album
and you can see it's
the same contents.
But it's pretty obvious
that one of the photos
in this group doesn't belong.
So I'm going to tap on my moose
and you can see when you look
at the sample code, we're going
to use - I'll bring up the menu
with this Edit button.
When I press Unfavorite,
the code is going
to create a change
request to set that asset
as not being a Favorite anymore.
Okay? When I back out to the
Favorites smart album you'll
notice the moose is gone.
Nothing in the code actually
asked to remove that moose
from that Smart Album.
This is one of those side
effects I was talking
about where a change
that I requested has
triggered a secondary change
that was reflected in
the user interface.
Now if we switch back to the
built-in photos application,
you can see that change is
immediately reflected there
as well.
The photo's been taken out of
that Favorites Smart Album, too.
Now there's one other
thing I want to quickly go
through to show you in
terms of the sample code,
and that is if I go into the
photo list and I find a photo
that I, as the user, want to delete. If you look at the sample code, you'll see I create a Delete change request.
But there's no code
in the application
to provide this prompt.
This is the photo library
performing some work
out-of-process to
let the user know
that an application has
requested a change that's
destructive to their library
and asking for confirmation.
So you can see, "Allow Sample
Photos App to delete 1 photo?"
And in this case I'm
not going to delete it.
But you can see how some of
the work that is performed
out of process can have such
an impact on your changes.
All right.
Well, that covers the work with
model objects that I wanted
to introduce from
the Photos Framework.
But there's a lot more
that you could see
in the sample application when
you look at all the images
and image data that
was presented.
So I'd like to invite Karl
up to talk about image
and video data, thanks.
[ Applause ]
>> So, good morning.
My name is Karl Hsu.
I also work on the
iOS Photos Frameworks.
So we've just spent a
little while talking
about how you can discover
and work with the structure
of the user's photo library.
And that's great.
But we're missing
a key ingredient
because it is the
user's photo library.
How do we actually get a hold of
and display image
and video data?
Before we begin,
it's useful to know
that the user's photo library
actually caches a variety
of representations
for each asset.
For images, we might
have representations
that vary all the way from the
full size original all the way
down to small thumbnails.
Videos might be cached in a
variety of bitrates and sizes.
They might even be
streaming, okay?
With iCloud Photo Library we
might not actually have all
of those representations cached on disk.
Some or all of them,
or in some cases,
even none of them
might be available.
That makes it kind of
a pain when you want
to display image data.
So what we have is
we provide you
with a class, the
PHImageManager.
The PHImageManager's job is to abstract that decision away, so you don't have to decide, you know, do I want a thumbnail, do I want a medium size, a large?
So basically it's
a usage-based API.
For images, you tell us how
big do you want the image?
What are you trying
to do with it?
Are you trying to display it
in a small grid on the screen?
Are you trying to -
you want the full size?
You want something
that's screen size?
And we'll try to find the right,
the best available
representation we have for you.
Similarly, videos.
You tell us what
quality you'd like
and what you plan
on doing with it.
Is it just for user playback?
Are you planning
on exporting it?
Or are you going to work
with it in some other way?
And we'll try to pick
the right representation.
One key thing here is that
unlike the model object API,
which is largely synchronous,
the image request APIs
are largely asynchronous.
And that's because even if
the data is available locally,
we may have to do work before
we can hand you the data.
JPEGs have to be
read off a disk.
They have to be decoded.
But the most important case
is in the case of something
like iCloud Photo Library,
we may not have the data
locally available at all, right?
We may have to go
out to the network,
get it and then bring
it back to you.
So let's take a quick look
at what that looks like.
This is a very straightforward
fetch request.
I'm sorry, image request.
In this case we're
trying to fill sort
of a straightforward
four-across grid on an iPhone.
It turns out to be
about 160 by 160 pixels.
So we set the target size,
we tell it that the image
should fill that target size,
and then we issue the request.
And when it comes back,
if there's any image data
available, we'll hand it to you.
The info dictionary is there to tell you a little bit more about what we handed you back. Specifically, is it a degraded version? Is it the right size?
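That simple request looks roughly like this (imageView stands in for whatever view you're filling):

```objc
[[PHImageManager defaultManager] requestImageForAsset:asset
                                           targetSize:CGSizeMake(160.0, 160.0)
                                          contentMode:PHImageContentModeAspectFill
                                              options:nil
                                        resultHandler:^(UIImage *result, NSDictionary *info) {
    // info[PHImageResultIsDegradedKey] reports whether this is a
    // lower-quality preliminary result.
    if (result) {
        imageView.image = result;
    }
}];
```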
So let's take a look
at a little bit more
of an advanced image request.
So when you want to make
a more advanced request,
you create an
ImageRequestOption.
In this case, after
I create the option,
I really want this photo, right?
So I'm going to tell the
option that, yes, it's allowed
to go out to the network.
It can go out to iCloud
and fetch the data down.
And because that might
take a little bit of time,
I'd like a progress
handler as well.
Again, this is a good idea.
You can always show
the user that we're,
in fact, doing work, okay?
And then, finally, just
set it as an option
when you actually
make the request.
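Sketched out, the advanced request looks like this:

```objc
PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
options.networkAccessAllowed = YES; // Allowed to download from iCloud.
options.progressHandler = ^(double progress, NSError *error,
                            BOOL *stop, NSDictionary *info) {
    // A good place to drive a progress indicator.
    NSLog(@"Download progress: %.2f", progress);
};

[[PHImageManager defaultManager] requestImageForAsset:asset
                                           targetSize:PHImageManagerMaximumSize
                                          contentMode:PHImageContentModeDefault
                                              options:options
                                        resultHandler:^(UIImage *result, NSDictionary *info) {
    imageView.image = result;
}];
```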
I want to spend a second
and talk about the callback
that actually produces the data.
There's two things
to keep in mind.
Again, this block will be
called back on the main thread.
And the reason is that, if it's an asynchronous callback, you're typically using it for display on the main thread.
There's an optional way to
make a synchronous request
if you're working in
background threads.
Okay. The other thing
is actually,
this block can be called
multiple times for each request
because it turns out that a really common UI design is that the user wants to see their image data as soon as possible.
So if we have any image
data available we'll return
that to you right away.
And then, if that
isn't sufficient
to fulfill your request,
we go out,
possibly to the disk,
possibly to iCloud.
We'll go get the data
and, when it arrives,
then we'll call it
a second time.
[ Applause ]
Thank you.
Of course, if the
data's already available,
then you only get
the first callout.
Let's take a quick look
at requesting videos.
So in this case, the user
has scrolled to some video
and they just want playback.
So we're requesting a
PlayerItem for the video.
When we get back a player item
you can create an AVPlayer
out of it.
It looks very much
like the Image Request.
It's pretty straightforward.
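Roughly, assuming videoAsset is a PHAsset of media type video (AVFoundation is needed for playback):

```objc
[[PHImageManager defaultManager] requestPlayerItemForVideo:videoAsset
                                                   options:nil
                                             resultHandler:^(AVPlayerItem *playerItem,
                                                             NSDictionary *info) {
    // Hand the item to an AVPlayer (and an AVPlayerLayer) for playback.
    AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
    [player play];
}];
```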
And, again, if we want a more
advanced request - let's say,
the user now wants
to push this video
up to their own sharing service,
a different sharing service.
So now we're going to
create a VideoRequestOption.
And, of course, because
we're sharing it,
we want it to be high quality.
Maybe you're on an iPhone and
you wanted the full 1080p, okay?
So, we want it to
be high quality
and if it's not available - because we were playing it before, and that might have been streaming - we say, okay, it's all right.
You can go out to the
network and download it.
And it's particularly
important for videos, of course,
to have progress because
videos can be very large.
And then, finally,
we set the option.
Note here that we're actually
using a slightly different API.
We're specifically requesting
an export session rather
than a playback item.
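A sketch of that export request:

```objc
PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
options.deliveryMode = PHVideoRequestOptionsDeliveryModeHighQualityFormat;
options.networkAccessAllowed = YES;
options.progressHandler = ^(double progress, NSError *error,
                            BOOL *stop, NSDictionary *info) {
    NSLog(@"Video download progress: %.2f", progress);
};

[[PHImageManager defaultManager]
    requestExportSessionForVideo:videoAsset
                         options:options
                    exportPreset:AVAssetExportPresetHighestQuality
                   resultHandler:^(AVAssetExportSession *exportSession,
                                   NSDictionary *info) {
    // Set outputURL and outputFileType, then export asynchronously.
}];
```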
So this is good.
We can fetch individual images,
and that works pretty well.
But, it's really common
for a photo app to want
to show lots of images.
The user doesn't want to scroll
through their images
one at a time.
They want to see
a grid of images
in either a regular square grid
or a more interesting, you know,
tile and brick mortar layout.
That poses some performance
problems.
Even if the individual
images are relatively cheap,
there's a lot of them.
So what do we do?
The general thing that people
want to do is you create a cache
around what the user's
looking at.
You figure out which
direction they're going
and you start caching
ahead of them
and you stop caching behind.
So this is a pain, right?
You kind of have to
track all of this data.
You have to know to retrieve
stuff from that cache.
We've already got an API
for doing a lot of this.
And that API is the
PHCachingImageManager, okay?
The caching image manager's job
is exactly what it sounds like.
Its job is to cache images
on your behalf, okay?
As you make requests against
the caching image manager,
if it's in its cache, it
will return that directly.
And if not, it will
automatically fall back
to the default image
request behavior.
Now the general suggestion is
that you create a caching
image manager for each sort
of distinct view controller
that you have in your app.
That's because each view
controller typically has its own
display of image data, right?
You know, a grid view
controller is going to behave -
it's going to need
different caching behaviors
than like a one-up view where
you're scrolling through.
So let's take a quick look at
what I'm talking about here.
So we have the user's phone.
They're looking at
some nice pictures
and they're scrolling
through it.
Now this is what the user sees.
But we know that underneath what
they actually have is this sort
of long scroll view of
photos above and behind.
And all they see is the visible range.
So as they're scrolling down,
we want to start caching ahead
of where they are so that the
data's available immediately
when you need to
display it to the user.
And, of course, for
memory reasons,
we want to stop caching behind.
You just sort of calculate
this range and maintain it,
and as you scroll
just keep updating.
Okay? The API for the caching
image manager is actually
pretty straightforward.
You calculate what's
going to be visible soon.
You tell us to start caching it.
And as stuff is scrolled off,
you tell us to stop caching it.
The only key thing here is when
you start and stop caching,
use the same target
size, content mode,
and options that you will use
for the actual request image.
Otherwise, we don't
know how to look it up.
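A sketch of that flow, assuming you've computed which PHAssets are about to scroll on and off screen:

```objc
PHCachingImageManager *cachingManager = [[PHCachingImageManager alloc] init];
CGSize thumbnailSize = CGSizeMake(160.0, 160.0);

// As the user scrolls, warm the cache ahead and cool it behind.
[cachingManager startCachingImagesForAssets:assetsScrollingIntoView
                                 targetSize:thumbnailSize
                                contentMode:PHImageContentModeAspectFill
                                    options:nil];
[cachingManager stopCachingImagesForAssets:assetsScrollingOutOfView
                                targetSize:thumbnailSize
                               contentMode:PHImageContentModeAspectFill
                                   options:nil];

// Request with the SAME parameters so the cached result is found.
[cachingManager requestImageForAsset:asset
                          targetSize:thumbnailSize
                         contentMode:PHImageContentModeAspectFill
                             options:nil
                       resultHandler:^(UIImage *result, NSDictionary *info) {
    cell.imageView.image = result; // cell: your collection view cell.
}];
```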
So now we know how to get hold
of image data and we know how
to get a hold of it fast.
We know how to get a
hold of video data.
But, of course, just looking
at data's only half the story.
Users love to touch
their photos.
They want to adjust them, crop
it so it looks just right,
apply a filter so
their kids look,
you know, extra special pretty.
So let's talk about editing.
If the asset is editable,
the edits are now in place.
Okay? You no longer need to
save your edits as a new asset.
You can edit any
asset that's editable.
The edits, just like in the built-in Photos app, are nondestructive.
The user can always revert.
You can also programmatically
revert.
Changes that you make
are visible everywhere.
They'll show up in your app.
They show up in the Photos app.
They show up in Mail, Messages, other third-party apps.
And with iCloud Photo Library
they're actually visible
across all of your
devices as well.
Okay. So how does editing work?
Well, at a very simple,
basic level,
you ask us for an input image.
We hand it to you.
And you do whatever
transformations you want
to do on it.
You can crop it.
You can edit it.
You can go and do, you know,
pixel operations on it.
And when you're done,
you generate a new image,
a new output image,
and you hand it to us,
and then we save it
on top of the asset.
That's it.
So let's take a look at
how that works, okay?
First you want to ask
us for the input image.
You ask the asset for its
content editing input, okay?
And what that comes with is,
the content editing input
carries a bunch of information
that you'll need in order
to actually do your work.
It gives you a URL
with a reference
to the full size asset.
It gives you some
orientation information.
So, in this case, I'm
creating a Core Image image
and I'm sure I'm
going to be using some
of the new filter stuff that
we have available for you now.
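Getting the input and creating that Core Image image looks roughly like this (a sketch; asset is the PHAsset being edited):

```objc
PHContentEditingInputRequestOptions *options =
    [[PHContentEditingInputRequestOptions alloc] init];
[asset requestContentEditingInputWithOptions:options
                           completionHandler:^(PHContentEditingInput *input,
                                               NSDictionary *info) {
    // The input carries a URL to the full-size image and its orientation.
    CIImage *image = [CIImage imageWithContentsOfURL:input.fullSizeImageURL];
    image = [image imageByApplyingOrientation:input.fullSizeImageOrientation];
    // ...apply Core Image filters to `image` here...
}];
```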
And I'm going to do some work.
So, I've done some work.
You just cropped it.
Maybe you applied a filter.
It looks great.
How do we save it?
Well, you create, ta-da,
a Content Editing Output.
You take your data, the
fully-rendered output,
save it as a JPEG,
write it to the URL,
and then you set
the adjustmentData.
I'll talk about the
adjustmentData in just a second.
And then how do you save it?
It's the same as saving
any other model change.
You create a PHAssetChangeRequest. You set the content editing output on it,
put the whole thing in a
change block, and that's it.
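Stitched together, saving an edit looks roughly like this (input is the content editing input from before; renderedImage, settingsData, and the format identifier are placeholders for your own):

```objc
PHContentEditingOutput *output =
    [[PHContentEditingOutput alloc] initWithContentEditingInput:input];

// Write the fully rendered result as JPEG to the URL we provide.
NSData *jpegData = UIImageJPEGRepresentation(renderedImage, 0.9);
[jpegData writeToURL:output.renderedContentURL atomically:YES];

// Describe the edit so it can be resumed later.
output.adjustmentData =
    [[PHAdjustmentData alloc] initWithFormatIdentifier:@"com.example.my-filter"
                                         formatVersion:@"1.0"
                                                  data:settingsData];

// Committing it is just another model change.
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    PHAssetChangeRequest *request = [PHAssetChangeRequest changeRequestForAsset:asset];
    request.contentEditingOutput = output;
} completionHandler:^(BOOL success, NSError *error) {
    NSLog(@"Saved edit: %d", success);
}];
```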
So, it was a little bit misleading, actually, when I said it was that simple.
So there's another
wrinkle, which is,
what if someone has
already edited that image?
Well, the key point
here is that actually
when you edit an image you
actually have the base image
and some adjustment data.
Your output image, the thing
that you give us at the end,
is actually the result
of applying this
adjustment to the base image.
So, really, what we hand you
is the base image plus whatever
adjustment data is
already on that image.
And that way the user
can continue editing
as if they had never left
their editing session, right?
You do a crop, and then what
we hand you is the original plus
the crop information so that
you can show the user the crop,
that they can extend
the crop in and out.
They can remove it entirely.
They can change it.
And when you're done, you
hand us your output image
and the new adjustment data.
So that cycle will
just continue.
So the next person that asks
will get back the base image
plus the new adjustment data.
So how do you save
the adjustment data?
Well, the adjustment
data actually can be -
you use a PHAdjustmentData
object,
but really the data can be
whatever you want, all right?
For instance, it could be
the name and parameters
for a Core Image filter.
It could be anything that you
want that describes your edits.
Notice here that actually beyond
just that data we also ask you
to include a format
identifier and a format version.
They basically are a way for you
to identify what
edit it actually is.
Who did this edit?
What's the format of it?
Okay? And I'll talk a little
bit more about that in a second.
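One possible encoding, as a sketch (the identifier and the keys are just examples):

```objc
// Archive the filter name and parameters as the opaque data payload.
NSDictionary *settings = @{ @"filterName": @"CISepiaTone",
                            @"inputIntensity": @0.8 };
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:settings];
PHAdjustmentData *adjustmentData =
    [[PHAdjustmentData alloc] initWithFormatIdentifier:@"com.example.my-filter"
                                         formatVersion:@"1.0"
                                                  data:data];
```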
So, I misled you again.
There is one additional
case, of course.
What if someone else
had edited the image?
The scenario that I
described before is great
if you were the one who
previously edited it.
But if somebody went to App
A and they applied a crop?
And then they go to App B and
they want to do more editing?
Well, you have to tell us
whether you understand the
adjustment data.
We'll ask you.
We'll give you the
PHAdjustmentData.
And you tell us.
Do you understand what this is?
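From your own app, you answer through a block on the request options (a sketch; the identifier matches whatever your app writes):

```objc
PHContentEditingInputRequestOptions *options =
    [[PHContentEditingInputRequestOptions alloc] init];
options.canHandleAdjustmentData = ^BOOL(PHAdjustmentData *adjustmentData) {
    // Say yes only to edits we know how to reinterpret.
    return [adjustmentData.formatIdentifier isEqualToString:@"com.example.my-filter"] &&
           [adjustmentData.formatVersion isEqualToString:@"1.0"];
};
```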
So, looking at this,
this actually seems
relatively straightforward
but we're not putting
any restrictions
on what the format identifier
and the format version can be.
Several developers from
different companies could,
for instance, get together
and define a common identifier
and version and, you know,
adjustment data format.
And that way the user could move
seamlessly between those apps.
Okay. But let's see
what happens,
depending on what you return
when we ask you this question,
can you understand this?
Do you understand it?
Yes. You say, yes.
That's great.
We'll hand you the base
image and the adjustment data
and you can behave as if
the user had just sort
of turned around, waited
a second, and come back
and started editing again.
That's sort of the ideal case.
But what if you tell us no, you don't?
Well, in this case,
unfortunately,
we don't have any choice.
We have to hand you the
fully rendered output
from the previous edit.
So the user can continue
editing but they can't back out.
Like, in this case, the previous
app applied a noir filter
and it's effectively
blocking my filter.
We've thrown away the
color information.
We don't have it.
If the user wants to go
back, they basically have
to revert back to original.
Okay. So, let's take
a quick demo.
So, here we are.
So, this is what we had before.
And I want to take a look.
Let's see.
Let's look at our
Favorites again.
And here we go.
She looks very nice.
But I want to play
with it a little bit.
I think that this
photo could be better.
So, actually, I'm going
to - let's try Posterize.
You know what?
I didn't - I don't know.
That doesn't look
quite right to me.
Okay? So at this point we
started with the original image
and there are no adjustments.
So we didn't even ask,
"Do you understand?"
because it doesn't matter.
We have the base image.
This time, when I go
into edit, what's going
to happen is we're going to
say, do you understand the edit?
And this app happens
to be the same app.
So it says yes, I understand.
So this time we hand you the base image,
which is effectively the
original, plus the adjustment
that did the Posterize
which allows us
to switch it completely.
And now it's in Sepia, right?
But it's not Sepia applied on top of the Posterize.
It's Sepia applied
instead of the Posterize.
Actually, I kind of like this.
I think I'm going to leave it.
I like that old-time look.
And just to prove it, look.
The Photos app sees it as well.
All right.
So we've talked about
doing this in the context
of your own application,
but, of course, as we showed
in the Keynote, we can
do this from inside
of the Photos app as well.
And to tell you about how
to do that, Simon Bovet.
[ Applause ]
>> Good morning.
My name is Simon Bovet.
And in this last
part of the session,
let's talk about Photo
Editing Extensions.
So what are Photo
Editing Extensions?
How can they be useful
to you developers?
And what do they
bring to our users?
So it's a new feature in iOS 8
that allows the users to access your Image or Video Editing tool right
from within the built-in
camera or photo applications.
No need for the user to
switch between applications.
No need for the user to
grant specific access rights
to your application.
It's a very simple way for your
tool to reach its audience.
And we think it's a great way
to put your creativity inside
the hands of our users.
So this feature has been shown
in the Keynote on Monday,
but let me refresh your
memories with a few slides.
So here we are on iPhone editing
an image using the iOS 8's
built-in Photos application.
The user can tap on
the top left button
and access any available editing extension, and pick one.
And this could be your Editing
tool extending the editing capabilities of the built-in apps.
So now the user can interact
with whatever interface
you want to provide.
And when the user is done,
the changes are saved right
in place inside the
user's photo library.
No need to create duplicated assets.
And if the user has turned
on iCloud Photo Library,
those changes will be
applied to all of the devices.
So the user could go on his iPad
and then draw your beautiful
effects on a larger screen.
So, what do you need to
create your own photo
editing extension?
And it turns out that
it's really simple.
It's basically three steps.
The first one is to create
an app extension target.
The second one is to
provide a view controller
which will manage the
UI of your extension.
And then this view
controller needs
to adopt a specific protocol
which is a set of methods
that Photos will call in order
to communicate with
your extension.
The good news is the first two
steps are really easy thanks
to the new Xcode.
Basically all that
you need to do is,
from your application's Xcode
project, add a new target
and select the Photo
Editing Extension template.
Xcode will create whatever is
necessary for you to get started and have something working.
It will create a view controller
which you can start using
or which you can replace
with your own if you happen
to have your existing
tool or view controller
that you want to reuse.
A note about the user interface.
You've seen photo extensions
are shown fullscreen.
Now, the top bar, with the Cancel and Done buttons as well as the title of your application, is shown automatically for you by the Photos app. So you'll want to design your extension not to have its own navigation bar.
Now onto the protocol adoption, the protocol which is specific to photo extensions. This protocol is defined in the PhotosUI framework and is the PHContentEditingController.
It consists of four methods.
And the first one,
quite obviously,
is called when the user
selects your extension.
And this is where you get the input data before your extension is
presented onscreen.
A simple implementation
could look like this.
And the first thing you might
notice is that the input object
which is given to you,
the PHContentEditingInput,
is exactly the same object
as the one Karl described
just a few minutes ago.
So, basically, we have
the exact same classes,
the exact same concepts, as
when editing a photo using the
Photos framework, just here
wrapped slightly differently
for the specific
needs of an extension.
So, you would typically
read out the input image.
In this particular case
we're going to work
with a display-size
representation.
We don't need the full-size
image right away here.
Then we're going to decode
any input.adjustmentData
or fall back to some default
settings if there's none.
Then you set up your user
interface the way you need it.
And you typically find
it's useful to hold
on to the input object
that was given to you.
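A sample implementation of that first method might look like this (self.input, self.imageView, and self.filterSettings are hypothetical properties of the extension's view controller):

```objc
- (void)startContentEditingWithInput:(PHContentEditingInput *)contentEditingInput
                    placeholderImage:(UIImage *)placeholderImage {
    // Hold on to the input; we need it again to produce the output.
    self.input = contentEditingInput;

    // Drive the interactive UI from the display-size image.
    self.imageView.image = contentEditingInput.displaySizeImage;

    // Decode prior settings from the adjustment data, or fall back to defaults.
    PHAdjustmentData *adjustmentData = contentEditingInput.adjustmentData;
    if (adjustmentData != nil) {
        self.filterSettings =
            [NSKeyedUnarchiver unarchiveObjectWithData:adjustmentData.data];
    } else {
        self.filterSettings = @{ @"filterName": @"CISepiaTone" }; // defaults
    }
}
```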
Now the user will interact
with your extension.
And when the user is done, the
second method will be called.
And this is when your extension needs to provide the final output data and hand it back.
A sample implementation can look like this. For images, you generate a JPEG representation
of the full-size image with
your effects applied to it.
You also create this adjustment data object, which describes whatever you've applied to the input image.
And then you create
an output object
that wraps all this information.
And when you're done, you
call the completionHandler.
You notice in this example
that the flow is synchronous,
but it doesn't have to be.
If you prefer to have an
asynchronous workflow,
it's totally fine.
You just have to call
the completionHandler
when your output is ready.
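A sketch of that second method, synchronous for simplicity (renderedFullSizeImage is a hypothetical helper that applies self.filterSettings to the full-size image):

```objc
- (void)finishContentEditingWithCompletionHandler:
        (void (^)(PHContentEditingOutput *))completionHandler {
    PHContentEditingOutput *output =
        [[PHContentEditingOutput alloc] initWithContentEditingInput:self.input];

    // Render and write the full-size result.
    UIImage *rendered = [self renderedFullSizeImage];
    [UIImageJPEGRepresentation(rendered, 0.9) writeToURL:output.renderedContentURL
                                              atomically:YES];

    // Record what was applied so editing can be resumed later.
    NSData *data = [NSKeyedArchiver archivedDataWithRootObject:self.filterSettings];
    output.adjustmentData =
        [[PHAdjustmentData alloc] initWithFormatIdentifier:@"com.example.my-filter"
                                             formatVersion:@"1.0"
                                                      data:data];

    // Call this whenever the output is ready; it may happen asynchronously.
    completionHandler(output);
}
```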
Now you remember the story
about resuming editing.
So we imagine that the user
selects your extension using an
image or a video that
already has been edited
and that has some adjustment data.
You remember, the first question that we ask you is, do you understand the existing adjustment data on it, and that way we can provide you with the appropriate input.
And this is when this method, canHandleAdjustmentData:, is going to get called on your extension. And, typically, the implementation could be as simple
as checking whether you support
the formatIdentifier and version
of this adjustment data.
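In code, that check can be as short as this (the identifier matches whatever your extension writes):

```objc
- (BOOL)canHandleAdjustmentData:(PHAdjustmentData *)adjustmentData {
    return [adjustmentData.formatIdentifier isEqualToString:@"com.example.my-filter"] &&
           [adjustmentData.formatVersion isEqualToString:@"1.0"];
}
```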
And, finally, the last method, pretty obviously, is called when the user cancels your extension.
One thing to notice,
which isn't obvious,
is that this method can
be called at any time.
So, for example, if your
extension requires some time
to produce the output
- for example,
you're editing a video
- the user could decide
to interrupt you and cancel
right away at any time.
So just keep that in mind.
And this is basically all that you need
to create your own
photo editing extension.
So we are really excited to see
all the tools that you are going
to create and all the great
ideas that you are going
to provide to our
users using this API.
And let me conclude with a
short demo of that in action.
So let's switch to
our demo device.
And I'm going to go to
the Photos application.
And here is the image that has just been edited by Karl with this Sepia filter.
Now it turns out that this sample app is actually vending a photo editing extension.
So what I can do is right from
within the Photos app, tap Edit,
select the top left
extension button,
and here you see our
sample extension.
So let me select it.
And what I can do is not
only apply our filters
but I can also resume what
was previously edited.
So, for example, I could decide
here to bring back the colors
and even enhance them.
So I'm going to replace
the Sepia filter
that was previously
applied and I'm going
to choose a Chrome filter.
And that's it.
You see how easy it is for
the user in just a few taps
to use your photo
editing extension.
So time to wrap up this session.
What have we learned?
We've introduced the new Photos
framework which allows you
to access the user's
photo library
and which allows your application to gain all the features like the ones we provide with the Photos application.
So your app can be a
first-class citizen
of the iOS Photos ecosystem.
And then we've seen
how easy it is for you
to provide your Editing tool using Photo Editing Extensions.
If you want more information
you can contact our Evangelist,
Allan Schaffer.
You can read our documentation.
And if you want to ask questions
or find answers, you can check
out our Developer Forums.
We have a couple of
related sessions.
Some of them have
already taken place.
So check out the
videos, like the one
about what extensions you
can create on iOS and OS X
or how you can capture content
from the camera using even
more control on iOS 8.
And we have two sessions
taking place this afternoon
if you're interested in knowing
more about how to edit images
and apply your custom
filters using Core Image.
And with that, I thank you
very much for your attention.
Enjoy the rest of the show.
[ Applause ]