WWDC2016 Session 225

Transcript

[ Music ]
[ Applause ]
>> Hello everyone.
My name is Vineet Khosla.
And today, I'll be joined by
Diana Huang and Scott Andrus,
as we walk through the process
of extending your
applications with SiriKit.
In the introduction session,
we learned there are
three main components
to a SiriKit-enabled app.
You have the Intents extension.
You have the Intents
UI extension.
And you have, of
course, your Application.
With the Intents extension,
we have three main methods.
You have the Resolve,
Confirm and Handle.
In this session, we are going
to talk about three things.
We're going to talk about how
to prepare your application
to adopt SiriKit.
And then we will talk about how
to add your first
Intents extension,
and finally we will talk about
how to bring your applications,
user interface and style
into the Siri experience,
by writing your Intents
UI extension.
For this session, we are going
to use our favorite chat
app, the UnicornChat.
Some of you guys
know about this app.
It's a favorite app that is
used by the Siri team internally
to chat with our unicorns.
It's a standard messaging app
that does sending and receiving
of messages, but it
has a certain amount
of unicorniness, which
makes it a special app for us.
And we will add SiriKit
support to it today.
So, SiriKit is built
on extensions.
But before your application's
code can move into extensions,
there are a number of
things you can do to it
to help it adopt SiriKit.
I'm going to cover a few
of those areas first.
We will talk about preparing
your application by moving some
of your code to embedded
frameworks.
Embedded frameworks are
a great way to reuse code
between your application
and your extension.
Having moved some of the code
to embedded frameworks
will provide us
with a great opportunity to
write some unit tests around it.
And finally we will
talk a little bit
about an appropriate way
of architecting
your extensions based
on the intents your
application subscribes to.
So let's dig a little bit
deeper into embedded frameworks.
Your extensions will need
to do everything your
application does.
It will need to handle the
intent, and it will also need
to render UI when
it is required.
And using embedded frameworks
is a great way to reuse code
between your application,
as well as your extension,
because you want to ensure
that your users get a uniform
experience, whether they come
in from your application
or whether they are
being invoked by SiriKit.
In the case of UnicornChat, we
found there were a few areas
that made a lot of sense to
move to embedded frameworks.
We took a look at our
networking layer of UnicornChat.
This is everything that
is related to sending
and receiving of messages.
And we realized that this is
a really good piece of code
to move entirely to embedded
frameworks, so it can be reused
by my application as
well as the extension.
Having done that, we took
a look at our data model.
Your application as well as your
extension should be accessing
the same data model.
In the case of our chat app
that meant it was the database
as well as the data accessor
methods written for it
that could be moved to
an embedded framework,
so it could be reused by
application and extension.
After moving that,
we took a look
at our decision-making logic.
This is the business
logic of your app.
In the earlier sessions
we had covered
that we have the Resolve,
Confirm and Handle.
These three methods
will correspond
to the real business
logic of your app.
And you would always want to
ensure that your application,
as well as your Intents
extension,
gives your users the same
experience when they're trying
to complete that one
task, irrespective
of where they come from.
So we moved our decision-making
logic also
to an embedded framework.
And finally, if your application
is signing up for intents
that require it to render
a user interface, a UI,
into the Siri experience, that
code should also be moved
into an embedded framework.
So you can reuse,
and once again,
provide consistent experience
across the board for your users.
Whether they come in
from your application
or whether they are coming
in from an Intents extension.
I also recommend everyone
to watch this 2015 talk,
App Extension Best Practices.
I watched it.
I found it really useful.
So after having moved all of our
code, or some of it,
into embedded frameworks,
we have a great opportunity
to write some quality
unit tests.
Now I know I'm preaching
to the choir in this room.
And all the engineers
in this room,
we write our unit
tests really well.
We all follow test-driven
development.
It happens every
time, I know it.
But having moved
some of this code
to an embedded framework
will provide you
with a new opportunity to
write some quality tests.
More specifically when we
are dealing with SiriKit.
What you can do is
create some mock intents
and then write tests to
ensure that your app,
as well as your extension,
is responding properly to it.
You don't need a real
live Siri interaction.
You can just mock
the Intent Object
that you expect to
receive from Siri.
And you can write
offline tests around it.
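As a sketch of what such an offline test might look like: the handler class UCSendMessageIntentHandler is this sample's, and the INSendMessageIntent and INPerson initializers shown are the iOS 10 Swift forms, which you should verify against your SDK headers.

```swift
import XCTest
import Intents

class SendMessageIntentTests: XCTestCase {
    func testResolvesSingleRecipient() {
        // Build a mock intent instead of driving a live Siri interaction.
        let handle = INPersonHandle(value: "sparkle@example.com", type: .emailAddress)
        let recipient = INPerson(personHandle: handle,
                                 nameComponents: nil,
                                 displayName: "Sparkle Sparkly",
                                 image: nil,
                                 contactIdentifier: nil,
                                 customIdentifier: nil)
        let intent = INSendMessageIntent(recipients: [recipient],
                                         content: "Are you ready?",
                                         groupName: nil,
                                         serviceName: nil,
                                         sender: nil)

        // The handler under test comes from our embedded framework.
        let handler = UCSendMessageIntentHandler()
        let resolved = expectation(description: "recipients resolved")
        handler.resolveRecipients(forSendMessage: intent) { results in
            // One recipient in, one resolution result out.
            XCTAssertEqual(results.count, 1)
            resolved.fulfill()
        }
        waitForExpectations(timeout: 1)
    }
}
```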
Finally, let's think a little
bit about architecting your app
for the appropriate
number of extensions.
Typically an app will sign
up for multiple intents.
It will want to do
more than one thing.
In our case, our UnicornChat
was signed up to work
with SendMessageIntent, but
let's assume we also add
to its capability, and
we can do audio calls
and video calls with it.
At this point the question
is how do we architect
our extensions?
Should we put intent handling
of all these intents
in one extension?
But that might make our code
really bulky and unmanageable.
We could do an alternate
architecture
where you can say
it's really clean
to put the handling of each
intent in a separate
extension by itself.
That is great, but you
might end up redoing a lot
of boilerplate code and also
creating more extensions
than necessary and causing
memory pressure that's
not needed.
So, in the case of UnicornChat,
and this would be our
general guidance,
think about which intents
fall naturally together.
In our case, we found
the audio call
and the video call intent could
fall naturally in one extension
because doing so let us maximize
the code we reuse, while the
send-message intent could live in
a separate extension by itself.
In other words, there is
no magic bullet over here.
You know your application best.
You will know which intents your
application is signing up for.
And you will need to choose
an architecture which ensures
that you have manageable
code, but at the same time,
you don't create too many
extensions causing undue
memory pressure.
And having taken care
of these considerations,
your application is now
ready to adopt SiriKit.
And to help us write our
first Intents extension,
I invite Diana onstage.
[ Applause ]
>> Thank you, Vineet.
Hello, everyone.
My name is Diana Huang, and I'm
here to actually talk to you
about how to create your first
Intents extension to work
with Siri, now that your app is
fully prepared to adopt SiriKit.
I will also show it to you
in Xcode using the
UnicornChat as an example.
So to get started,
there are three steps
that you want to follow.
First, you want to add an
Intents extension target
to your app.
And next up, you want
to configure the
extension's Info.plist.
And lastly you need to look
at the principal class
of the extension.
Let's talk a little bit more
about these three steps.
To add an Intents
extension target,
you will go to Xcode,
File, New, Target.
And then pick Intents
extension from the list.
For those of you who have
worked with extensions before,
it's just like how you create
other extension targets.
And then, let's take a look
at the Info.plist
of your extension.
So we have the existing key of
NSExtension and inside that,
we have NSExtensionAttributes.
And inside that dictionary,
we're introducing two
new keys in iOS 10:
IntentsSupported and
IntentsRestrictedWhileLocked.
So IntentsSupported is
a required key for you
to specify your extension's
capabilities.
In other words, you want to put
the class names of the intents
that your extension supports
into this array, for
IntentsSupported.
IntentsRestrictedWhileLocked
is an optional key for you
to specify your locked
screen behavior.
So by default, Siri already
restricts a few domains
to not be easily invoked
when the device is locked.
For example, the payments domain
or the photo search domain.
But if your app has a tighter
security requirement than Siri,
then you can put the intent
class that you want to restrict
into this array, for
IntentsRestrictedWhileLocked.
And this is to tell
Siri, please prompt users
to unlock the device before
invoking your extension
to handle the intent.
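Put together, the relevant portion of the extension's Info.plist might look like this in XML form. This is a sketch: the principal class name UCIntentHandler is this sample's, and key spellings should be checked against the current SDK.

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>IntentsSupported</key>
        <array>
            <string>INSendMessageIntent</string>
        </array>
        <key>IntentsRestrictedWhileLocked</key>
        <array>
            <string>INSendMessageIntent</string>
        </array>
    </dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.intents-service</string>
    <key>NSExtensionPrincipalClass</key>
    <string>$(PRODUCT_MODULE_NAME).UCIntentHandler</string>
</dict>
```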
So now our Info.plist for the
extension is also configured.
Let's now talk about
the principal class.
So the principal class of your
Intents extension must be a
subclass of INExtension,
which conforms
to the INIntentHandlerProviding
protocol.
This protocol has one and only
one method, handler(for intent:).
So the method name is
pretty self-explanatory.
You're given an Intent
Object and you need
to return a Handler Object.
And do note the Handler Object
that you return must conform
to the specific intent
handling protocol.
So for UnicornChat, we're going
to support INSendMessageIntent.
So whenever we're passed an
instance of INSendMessageIntent,
we will return a
handler that conforms
to the INSendMessageIntentHandling
protocol.
So now we have covered
these three steps.
Now let's actually go follow
them in Xcode for UnicornChat.
So here we have the
UnicornChat Xcode project opened
and in the interest of time,
I have already created
an Intents extension.
So let's go take a look
at the second step,
which is to configure
the Info.plist.
So first thing, let's
actually zoom
in a little bit to see better.
All right.
So here we have the
NSExtension dictionary.
If we expand it, you can see
the NSExtensionAttributes
dictionary.
And if we further
expand this dictionary,
you will see the two new
keys that we're introducing.
IntentsSupported and
IntentsRestrictedWhileLocked.
First, in order to support
INSendMessageIntent,
let's add an item inside
the IntentsSupported array.
And we will put in
the intent class name,
which is INSendMessageIntent
in here.
And next, let's take a look
at the locked screen behavior.
So because UnicornChat is
a chat application used
for private communications
among unicorns who would really
like to enjoy some privacy,
we decided to lock it down so
that users will have to unlock
their device first before they
can send a UnicornChat
message through Siri.
So to do that, we will add an
item inside the
IntentsRestrictedWhileLocked array.
And again, put in the
intent class name,
INSendMessageIntent here.
And now we're done configuring
the extension's Info.plist.
So the third step is
we want to take a look
at the extensions
principal class.
So when you create the
Intents extension target,
a principal class will be
automatically created for you.
Here, I have renamed
my principal class
to UCIntentHandler.
It is a subclass of INExtension.
And we also have the
handler(for intent:) method here.
So you can see that the default
implementation returns self,
which is returning an instance
of the principal class itself.
But just to make our
code a little bit clearer
and more extensible,
in case we support more
intents in this extension later,
we're going to create a
separate intent handler class.
So we're going to do that by
creating a new file called
UCSendMessageIntentHandler.
And we will also import a few
frameworks, as well as putting
in the class declaration.
So here we are importing
the Intents framework,
which hosts a lot
of the SiriKit API.
We're also importing the
UnicornCore framework,
which is the embedded
framework that we have created
to share code
among the main application
and the extensions.
And here we have the
UCSendMessageIntentHandler
class that conforms
to the INSendMessageIntentHandling
protocol.
All right.
Now let's go back to
the principal class
and replace the implementation
here to return an instance
of UCSendMessageIntentHandler
for any intent passed in
that is of type
INSendMessageIntent.
And for all the other intents,
we're going to return nil.
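A minimal sketch of that principal class: UCIntentHandler and UCSendMessageIntentHandler are the sample app's names, and the handler(for:) signature is the iOS 10 Swift form, so check it against your SDK.

```swift
import Intents

class UCIntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        // Route send-message intents to a dedicated handler object.
        if intent is INSendMessageIntent {
            return UCSendMessageIntentHandler()
        }
        // For all other intents, return nil.
        return nil
    }
}
```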
And that's it.
The three steps that
you want to do to add
and configure your
first Intents extension.
Now let's talk about
the app logic.
So hopefully, from the
Introducing SiriKit session,
as well as what Vineet
has reiterated,
your Intents extension's
interaction
with Siri can be divided
into three stages.
Resolve, Confirm and Handle.
Let's talk about Resolve first.
So Resolve is the stage
where you want to validate
and clarify the intent
parameter values one at a time.
So for each parameter
on an intent,
we have provided
a resolve method
in the intent handling protocol.
So you may ask, which ones
of them should I implement?
Well, think about this.
Would you need Siri's help in
the process of trying to settle
on the final value of
an intent parameter?
If the answer is yes,
then you probably do want
to implement the Resolve
method for this parameter.
Let's take a look at this in
the context of UnicornChat.
So to send a message,
we need recipients.
And in order to decide on the
final values for recipients,
we need to perform a contact
search among UnicornChat's own
address book records.
There could be a few potential
outcomes of this search.
In the most simple and
straightforward path,
we will find exactly
one matching contact.
Then we're good to go.
However, it's also
possible that we find two
or more matching contacts.
In that case, it
would be really great
if Siri can help ask
the user to pick one
from the candidate list.
It's also possible that we find
no matching contacts at all.
And in that case, we would also
like Siri to tell user about it
so that the user may pick a
different recipient value.
So after having a recipient,
we also need content.
So in this case, we simply need
a value in order to proceed.
If the user simply hasn't
provided content, then we would
really like Siri to help us
prompt the user for it.
So considering all these
cases, it does sound
like we should implement Resolve
methods for both recipients
and content, as we
do need Siri's help
to take further user
inputs in order to come
up with the final values
for these parameters.
So now the parameters have
been successfully resolved,
we get to the Confirm stage.
So this is the stage where
you want to do a dry run.
Think of it as if you were to
handle this intent right now.
Do you have everything
that you need?
Or are you able to
successfully handle it?
So you want to tell Siri
that answer, along with some
extra information
that you can gather while
preflighting the intent.
So that then Siri,
when appropriate,
can communicate all this
information to the user.
And finally user can make the
decision about whether they want
to proceed with the
action or not.
So in UnicornChat, because
of the security requirement
that we have, we need users
to reauthenticate themselves
every once in a while.
So Confirm is the
perfect stage for us
to check the authentication
status of the user.
And either way, we want to tell
the result of the status check
to Siri, so that Siri
can either offer users
to proceed inside Siri
or to go forward
to the application in order
to finish this transaction.
All right.
So now the intent is also
confirmed, we finally come
to the stage of handling it.
Hopefully this is the
most straightforward stage
for you to understand.
You simply need to
perform the action here
and tell Siri how it went.
So in the case of UnicornChat,
we just need to send the message
and then report back
if the message has
been successfully sent.
So now we have covered
the Resolve, Confirm
and Handle methods and concepts.
Let's actually go
implement them in Xcode.
So this time we're
going to dive right
into the IntentHandler class.
So before I start,
just a quick reminder.
All this sample code will be
posted online, so if I skip
through some of the details,
it probably means it's not
as important for you to read
through every single
line of code right now.
But if you're interested,
you can always go back online
and check out the sample
project and read it yourself.
All right.
So as we have mentioned, we're
going to cover the Resolve,
Confirm and Handle methods.
For Resolve, we are going to
implement the Resolve method
for recipients and content.
So let's start with
resolveRecipients.
So in this method
we need to focus
on the recipients
parameter that is represented
by an array of INPerson.
You can also tell from the
method signature that you need
to callback with the array
of resolution results.
So there is a one-to-one
mapping relationship
between the recipients array and
the resolution results array,
meaning that for each
recipient you need
to create a
PersonResolutionResult for it.
The only exceptions
here is when you want
to create
ResolutionResultNeedsValue
or ResolutionResultNotRequired.
Where these two types of
resolutionResult are more
for a parameter level resolution
versus the other
resolutionResults are more
targeting towards the
individual parameter values.
So the first thing we want
to do, in this method,
is to try unwrapping
the Recipients Object.
And then we're going to loop
through every single recipient
and then call our API
inside UnicornCore framework
to find the matching
contacts given the name.
And next, we're going
to do a switch statement
on the matching contacts count.
And as earlier, we're going
to cover the different
results of the search.
The case where we have two
or more matching contacts.
The case where we have
exactly one matching contact.
And the case where we
have no matching contact.
So in the case where we find
two or more matching contacts,
we're going to create a
PersonResolutionResult.disambiguation
with the options
that we have found.
In the case where we find
exactly one matching contact,
we're good to go.
So we're going to
tell Siri about it
by creating a
PersonResolutionResult.success,
with that one person
that we found.
And in the case where we
find no matching contacts,
we're going to create a
PersonResolutionResult.unsupported.
So that is the end of
our switch statement.
You might have noticed
that I went
through the code pretty fast, so
you might not have time to read
through every single
line of code.
That's perfectly fine because
the key takeaway for you here is
to know that we do have
different resolutionResults
that are appropriate to
use in different scenarios.
So when it's time for you
to implement the logic
for your resolve methods,
you can go online and check
out the documentation
for the complete list
of resolutionResults,
and also their usage.
All right.
So now we have all
the resolutionResults
that we have created
for recipients.
Let's call the completion with
the array of resolutionResults.
And that marks the last line
for the case where we are able
to get some recipients
from the intent.
But in the case where the
user simply hasn't specified a
recipient, then we're going
to create a
PersonResolutionResult.needsValue
and call completion with that
to tell Siri: please prompt
the user for a recipient.
And that's it for our
resolveRecipients method.
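Assembled, the resolveRecipients method might look roughly like this. UCAddressBookManager, contacts(matchingName:), and inPerson() are stand-ins for the sample's own contact-search helpers, and the protocol method signature follows the iOS 10 SDK.

```swift
func resolveRecipients(forSendMessage intent: INSendMessageIntent,
                       with completion: @escaping ([INPersonResolutionResult]) -> Void) {
    guard let recipients = intent.recipients, !recipients.isEmpty else {
        // No recipient at all: ask Siri to prompt the user for one.
        completion([INPersonResolutionResult.needsValue()])
        return
    }
    var resolutionResults = [INPersonResolutionResult]()
    for recipient in recipients {
        // Search UnicornChat's own address book records by name.
        let matches = UCAddressBookManager().contacts(matchingName: recipient.displayName)
        switch matches.count {
        case 0:
            // No match: let Siri tell the user so they can pick someone else.
            resolutionResults.append(.unsupported())
        case 1:
            // Exactly one match: we're good to go.
            resolutionResults.append(.success(with: matches[0].inPerson()))
        default:
            // Two or more matches: have Siri ask the user to pick one.
            resolutionResults.append(.disambiguation(with: matches.map { $0.inPerson() }))
        }
    }
    completion(resolutionResults)
}
```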
So next, we're going to cover
the resolveContent method,
where we are simply going
to check if there's a value.
And if there isn't,
we're going to ask Siri
to kindly help us
to prompt users.
So the first thing we
do, in resolveContent,
is again try unwrapping the
content property and then check
if it's truly not empty.
If a content is indeed given,
we're going to create
ResolutionResult.success
with the given content.
Otherwise, we're going to create
ResolutionResult.needsValue,
just like we did in the
previous Resolve method.
And then call completion
with this.
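A sketch of resolveContent under the same assumptions:

```swift
func resolveContent(forSendMessage intent: INSendMessageIntent,
                    with completion: @escaping (INStringResolutionResult) -> Void) {
    if let content = intent.content, !content.isEmpty {
        // Content was given: report success with that value.
        completion(INStringResolutionResult.success(with: content))
    } else {
        // Otherwise, ask Siri to prompt the user for the message text.
        completion(INStringResolutionResult.needsValue())
    }
}
```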
So now we have gone through
both of the Resolve methods.
Next up is the Confirm
method, where we're going
to check the authentication
status of the user.
So in the Confirm method, we're
going to call the shared API
in the UnicornCore
framework to check
if the user still has a
valid authentication status.
If he or she does,
then we're going
to create an
INSendMessageIntentResponse
with the code success
and a nil userActivity.
I will talk about the
userActivity in just a moment.
But now let's move onto the case
where the user is no
longer authenticated.
Well, in this case, we're going
to create an IntentResponse
with a code
.failureRequiringAppLaunch.
So this is to tell Siri,
Siri should provide an option
for users to maybe proceed to
our main application in order
to log in and finish
this sending action.
All right.
So that's it for
our Confirm method.
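A sketch of that Confirm step, where UCAccount and hasValidAuthentication stand in for whatever authentication check your app performs:

```swift
func confirm(sendMessage intent: INSendMessageIntent,
             completion: @escaping (INSendMessageIntentResponse) -> Void) {
    if UCAccount.shared().hasValidAuthentication {
        // Dry run looks good: we could send this message right now.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    } else {
        // Ask Siri to offer launching the app so the user can log in.
        completion(INSendMessageIntentResponse(code: .failureRequiringAppLaunch,
                                               userActivity: nil))
    }
}
```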
Lastly we're going to implement
the Handle method together.
So in Handle, we're simply going
to call the shared API
inside UnicornCore framework
to send the message with the
given content and recipients.
We're also going to get the
status of the sending action.
So if the message is
successfully sent,
we're going to create
an IntentResponse
with the code success.
Otherwise, we're going
to create the response
with the code failure.
And then we're going
to call completion
with the IntentResponse.
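And a sketch of the Handle step; the sendMessage helper on the embedded framework is illustrative, not a real UnicornCore API:

```swift
func handle(sendMessage intent: INSendMessageIntent,
            completion: @escaping (INSendMessageIntentResponse) -> Void) {
    // Perform the action and report back how it went.
    let sent = UnicornCore.sendMessage(intent.content, to: intent.recipients)
    completion(INSendMessageIntentResponse(code: sent ? .success : .failure,
                                           userActivity: nil))
}
```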
So we have just gone
through the Resolve, Confirm
and Handle methods together.
Now, as promised
earlier, I'm going to talk
about the NSUserActivity
that those IntentResponse
initializers take.
So let's step out of
Xcode for a moment.
So NSUserActivity.
In the context of SiriKit,
NSUserActivity is used
to help your application
to resume state
when it gets launched by
either Siri or the user.
By default, Siri creates
an NSUserActivity for you,
if you decide to pass in nil
into the IntentResponse
initializer.
And Siri will create it with the
ActivityType being the intent
class name.
You can also choose to
provide your own UserActivity,
if you want to pass
in some custom data.
But either way, Siri will help
populate the INInteraction
property on the NSUserActivity
Object.
This property is newly
introduced in iOS 10.
And this object has all of
the intent, the IntentResponse
as well as the intent
handling status.
And Scott will talk a little bit
more about this object later.
So now let's take
a look at the usage
of NSUserActivity
in our code again.
So if you have paid close
attention to the code,
you might have noticed
that in Confirm
and Handle methods we
have been passing in nil
for the userActivity into our
IntentResponse initializers.
This is perfectly fine,
if our main application will
just handle the UserActivity
that Siri creates for
us and take advantage
of the INInteraction object.
But in some cases,
it is indeed helpful
to give our application
some custom strings
from the extension process.
So, for example, in the
Confirm method, when we find
out the user is no longer
logged in or authenticated.
Then we do want to
pass some error strings
to our main application.
We're going to do that by
creating our own userActivity
and populate the
.userInfo dictionary
with the custom error
strings that we want to give
to our main application.
And then we're going to replace
nil with the userActivity
that we have just created.
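In code, the failure branch of Confirm might build that custom activity roughly like this; the "UCError" key and its value are hypothetical, something our main app would agree to check at launch:

```swift
// Inside the confirm method, when authentication has lapsed:
let activity = NSUserActivity(activityType: String(describing: INSendMessageIntent.self))
activity.userInfo = ["UCError": "NotAuthenticated"]  // hypothetical key/value
completion(INSendMessageIntentResponse(code: .failureRequiringAppLaunch,
                                       userActivity: activity))
```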
All right.
Great. So now my UnicornChat
main application can get
these custom error strings and
know to prompt users to log-in,
if the user or Siri chooses to
launch the app at this point.
So now we have finished all the
coding for Intents extension.
Let's actually go see
it run on a device.
Send a message to Scott using
UnicornChat saying are you ready
for your presentation?
Yes.
[ Applause ]
All right.
Thank you.
Yeah. It's very exciting.
We've just sent our
first UnicornChat message
through Siri.
That's absolutely awesome.
[ Applause ]
Thank you.
[ Applause ]
However, inside the
UnicornChat main application,
when I sent messages
to my fellow unicorns,
I actually often refer to
them by their unicorn names.
So really I want to say
to Siri, send a message
to Sparkle Sparkly saying
are you ready for your talk?
Where Sparkle Sparkly is
obviously Scott's unicorn name.
So in order to do that, let's
move onto our next topic,
which is user-specific
vocabulary.
All right.
User-specific vocabulary.
So these are custom words or
phrases that are quite unique
to your application and that
can vary from user to user.
In the example that I just
gave, Sparkle Sparkly as well
as other unicorn names can
be considered user-specific
vocabulary here.
So in order to help Siri to
understand what the users meant
when they speak about
these custom phrases,
you need to provide
them to Siri.
And you will do so by
calling the INVocabulary API
from your main app.
Let me repeat this.
You need to call
the INVocabulary API
from your main application,
not your extension.
All right.
Let's take a look at how
we do it in UnicornChat.
So in UnicornChat, we have
this UCAddressBookManager,
which manages UnicornChat's
own contact records.
And we have created this method
to update Siri's knowledge
about Unicorn names.
And it will be called whenever
a contact record gets added,
deleted or updated.
The first thing we want
to do in this method is
to get a sorted list
of Unicorn names.
And we put the more important
Unicorn names at the front
and leave the less
important ones towards the end
of the array.
So we prioritize like
this to help Siri
to better prioritize
learning and matching
for these Unicorn names.
After gathering this sorted
list of Unicorn names,
we're going to provide them
by calling the INVocabulary
API here.
We will also give it
the vocabulary type
of these strings.
In this case, the Unicorn
names are of type .contactName.
One last thing that I
want you to pay attention
to about this block of code
is that we actually want
to send all these operations
to a different queue.
This is because operations
like fetching your entire list
of contacts can be quite
expensive and you don't want
to block your main
thread for it.
So please do take
advantage of GCD
and dispatch those
expensive operations
onto a different queue.
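Putting the vocabulary update together, with the expensive work dispatched off the main thread, might look like this; contacts(), priority, and unicornName are invented sample-app properties:

```swift
import Intents

extension UCAddressBookManager {
    // Called from the MAIN APP whenever a contact record changes.
    func updateSiriVocabulary() {
        // Fetching every contact can be expensive; keep it off the main thread.
        DispatchQueue.global(qos: .background).async {
            // Most important names first: ordering tells Siri how to
            // prioritize learning and matching.
            let names = self.contacts()
                .sorted { $0.priority > $1.priority }
                .map { $0.unicornName }
            INVocabulary.shared().setVocabularyStrings(NSOrderedSet(array: names),
                                                       of: .contactName)
        }
    }
}
```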
All right.
So now after adopting the
user-specific vocabulary API,
I can now send messages to
Sparkle Sparkly, Celestra,
Buttercup and all
my fellow unicorns.
That's absolutely great.
So now I have yet
another feature request.
Inside UnicornChat application,
the visual and the style
of the application is actually
far more rainbowy and colorful
than what you see here in Siri.
So can I make my UnicornChat
experience inside Siri
as colorful as that in the main
application of UnicornChat?
To tell you all about it, I'm
going to invite up my teammate,
Scott a.k.a. Sparkle
Sparkly to the stage.
[ Applause ]
>> Good afternoon.
I'm Scott Andrus and I'm
an engineer on SiriKit.
And now we're going to talk
about how to make this feel more
like an interaction
with UnicornChat.
And to do that we're going
to build a UI extension
with SiriKit.
In iOS 10, we've introduced
the Intents UI extension point,
which can allow you to create
wonderful UI extensions
that provide custom
user interfaces
within the Siri experience.
And so, let's get started.
The reason why you
might want to do this is
because UI extensions increase
your application's impact
on the user.
By providing a UI extension,
you're showing your view
alongside the Siri experience.
And then you can show custom
experiences that are unique
to your application alongside
what Siri might normally show.
This gives you a lot of great
opportunities to do things
with your app that are
unique and let your app stand
out from the rest of the pack.
You can also offer
user-specific customization.
So you can engage with
users on a one-by-one basis.
And finally you can
show information
that Siri might not
otherwise show,
which is a really great tool
to have in your tool belt.
And this is what it looks like.
So to get started all you need
to do is add an Intents
UI extension.
Add that to your project.
And embed it inside of
your application's bundle.
And you'll see the
great Info.plist
that Xcode generates for you.
And inside, you're
going to want to look
for the new IntentsSupported
key, which is analogous
to the one you've seen
in the Intents extension.
And inside, you'll register
for any intents that you'd
like to show custom
user interfaces
for in the Siri experience.
The anatomy of the UI extension
in SiriKit is actually
really straightforward.
SiriKit calls into your UI
extension with the
configure(with interaction:) method,
and this is the key method
in SiriKit UI extensions.
Your UI extension has
a principal class,
which is the UIViewController
conforming
to the INUIHostedViewControlling
protocol.
And it will be passing
an INInteraction object
to your UI extension for
this configuration step.
Now, as Diana mentioned,
the INInteraction
class defines an object
that encapsulates three
important properties.
The first is the Intent
Object that's being confirmed
or handled by your
Intent extension.
Next, the Intent response
object that's being sent
from your Intents extension
to Siri via the completions
of the Confirm and
Handle methods.
And finally, there's an intent
handling status enum
value that describes the
state of the interaction
between your application
and Siri.
These are all really
useful properties to use
as you build your user
interface for Siri.
Your view controller is the
gateway into your UI extension
as the principal class
that you're going
to start building your
user interface with.
Because it's a subclass
of UIViewController,
you've got access to all the
great UIKit functionality you
may be used to when
building user interfaces
for Cocoa Touch applications.
And you'll configure it
with the interaction object
that Siri sends you
in the configure
with interaction method.
There are a couple of other
parameters that you might want
to take note of in this method.
One of which is the provided
view context parameter.
And in iOS 10, this
is an enum value
which is one of two values:
Siri snippet or maps card.
And so you can configure
your interface differently
for these different kinds
of modal interactions
with the user.
And this can be really
useful to you
if you're making a
[inaudible] extension.
Finally, you'll have a
completion, which you can call
to let Siri know that you've
completed configuration
of your user interface and
you'll pass back a desired size,
which tells Siri how to size
your view within a Siri snippet.
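Put together, the entry point looks roughly like this in Swift 3. This is a minimal sketch: the empty case bodies and the size choice are placeholders, not the demo's actual implementation.

```swift
import UIKit
import IntentsUI

class IntentViewController: UIViewController, INUIHostedViewControlling {

    // Siri calls this so we can configure our view for the interaction.
    func configure(with interaction: INInteraction,
                   context: INUIHostedViewContext,
                   completion: @escaping (CGSize) -> Void) {
        switch context {
        case .siriSnippet:
            // Configure for display inside a Siri snippet.
            break
        case .mapsCard:
            // Configure for display inside a Maps card.
            break
        }
        // Tell Siri configuration is done and how big we'd like to be.
        completion(extensionContext?.hostedViewMaximumAllowedSize ?? .zero)
    }
}
```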
So now I think we know
everything we need to know
to get started with a demo
of building a SiriKit UI
extension for UnicornChat.
Okay. So we're back in the great
project that Diana was setting
up for us, with our Siri
extension, which allowed us
to plug our app into
the Siri experience.
And we're going to
take it a step further
with an Intents UI extension.
Now when Diana created her Siri
extension target, we were able
to create an Intents UI
extension target to go with it.
And Xcode created this
group here on the left
in our project navigator
for our Siri UI extension.
So if we open that up,
we can see a few great files
that let us get started
with our Intents UI extension.
The first is the
IntentViewController class,
which is the principal
class of our extension.
And then we also have a
storyboard for that class
and then an Info.plist, and
we'll dig into this first
to register for our
supported intents.
So inside we've got a great
IntentsSupported array inside
the NSExtension dictionary.
I'm going to go ahead
and add an entry here.
Now what we'd like to do
with our Intents UI extension is
show a user interface to users
of Siri during Siri results
for sending a message
to other unicorns.
And when we show this
interface, we'd like it
to be a chat transcript
interface
that really displays the
unicorniness of our application.
So inside I'm going
to add support
for the INSendMessageIntent,
declaring that we should
in fact show a user interface
when Siri handles this
intent with our application.
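In plist source form, the registration just made looks something like this. The point-identifier and principal-class keys come from the template Xcode generates for an Intents UI extension; only the IntentsSupported entry is what we added here.

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>IntentsSupported</key>
        <array>
            <string>INSendMessageIntent</string>
        </array>
    </dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.intents-ui-service</string>
    <key>NSExtensionPrincipalClass</key>
    <string>$(PRODUCT_MODULE_NAME).IntentViewController</string>
</dict>
```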
Great. And we're all
done with our Info.plist,
so we can start implementing
our IntentViewController.
So I'll zoom back out here.
And here we've got our
IntentViewController class.
Now you'll notice this
is a subclass
of UIViewController conforming
to the INUIHostedViewControlling
protocol.
As part of that conformance,
it has the configure
with interaction method, which
is provided to us here.
Now the very first thing
I'm going to do is I'm going
to import the UnicornCore
framework
as a module into my Swift file.
Now again, this UnicornCore
framework is a framework
that we implemented for our
application and we use it in all
of our Unicorn apps, like
Unicorn Pay or Unicorn Rides.
It's a great way to share
code for our application
and for all of our extensions.
We made great use of it in
Diana's demo as a way to be able
to share our business
logic for Resolve, Confirm
and Handle in our extension.
And now we're going to use
it to share user interfaces
for our UI extension
and our application,
so we can have the
great familiar feeling
of sending a UnicornChat
message no matter where we are.
So let's start implementing
the configure
with interaction method.
So inside I'm going to go ahead
and set up a size variable,
which I'm going to send back
to Siri once I've
completed configuration.
And now I'm going to check
if my interaction.representsSend
MessageIntent.
This is a convenience that I've
implemented as a class extension
on INInteraction in my
UnicornCore framework.
Then I'll instantiate a
chatViewController class.
And this is what we use
to represent a message
sending interface,
and we use it in
our UnicornChat app.
And we'll use it here in
our UI extension as well.
And we'll start configuring
that chatViewController
with .messageContent from
the interaction object,
which I'm again using
my class extension
to get from the interaction.
I'm creating a UCContact
model object, which is based
on the interaction's properties,
and then I'm assigning
that model object to
my chatViewController
to show the recipient
of the message.
And then finally
I'm going to switch
on the intentHandlingStatus
of the interaction.
And we can use this to configure
our user interface differently,
based on whether or not
the message has been sent.
And so in this case, if the
message has not been sent, a.k.a.
the intentHandlingStatus
is unspecified,
inProgress or ready,
I can set the isSent property of
my chatViewController to false,
indicating that I should
set a draft-type interface
and show that to the user.
Otherwise, if it's done, I
can set the isSent property
of my chatViewController
to true,
indicating that I've
sent the message
and letting the user
know the same.
Finally I can present the
chatViewController as a child
of this principal class
IntentViewController,
which is a really useful way
to implement different view
controllers for different kinds
of intents in my UI extension.
Finally I can use the
NSExtensionContext
of my Intents UI extension in
iOS 10 to get a maximum size.
And I'll use this by default
in my UnicornChat integration.
If for some reason I couldn't
get the extension context,
I'll make use of the desired
size of the chatViewController,
which is good enough for me.
Now that was the happy path.
Let's say that something went
wrong and we got an interaction
that we didn't expect.
We can set a size of
zero, telling Siri not
to draw our UI extension's
view within the Siri snippet.
Okay. The last thing I'm going
to do is tell Siri
that I've completed implementing
and configuring my
user interface
and that it should go ahead and
show us in the Siri snippet.
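Pieced together, the configuration flow just described looks roughly like this. ChatViewController, UCContact, and the INInteraction conveniences (representsSendMessageIntent, messageContent, isSent, desiredSize) are UnicornCore helpers specific to the demo, so treat this as a reconstruction rather than exact sample code:

```swift
func configure(with interaction: INInteraction,
               context: INUIHostedViewContext,
               completion: @escaping (CGSize) -> Void) {
    var size = CGSize.zero

    if interaction.representsSendMessageIntent {
        let chatViewController = ChatViewController()
        chatViewController.messageContent = interaction.messageContent
        chatViewController.contact = UCContact(interaction: interaction)

        // Show a draft-style interface until the message is actually sent.
        switch interaction.intentHandlingStatus {
        case .unspecified, .ready, .inProgress:
            chatViewController.isSent = false
        default:
            chatViewController.isSent = true
        }

        // Present as a child view controller, so different intents
        // could get different view controllers in this extension.
        addChildViewController(chatViewController)
        view.addSubview(chatViewController.view)
        chatViewController.didMove(toParentViewController: self)

        // Prefer the maximum size from the extension context,
        // falling back to the chat view controller's own preference.
        size = extensionContext?.hostedViewMaximumAllowedSize
            ?? chatViewController.desiredSize
    }

    // A zero size tells Siri not to draw our view at all.
    completion(size)
}
```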
Okay. So I have a version of
this running on my device.
Let's go ahead and see it now.
So as you can see here, I
have my UnicornChat app.
I'm going to send
the message to Diana.
Send the message to Diana using
UnicornChat that says "Great job
on your presentation."
[ Applause ]
And so we've got a great
custom user interface here,
but also you noticed that
something is a little bit off.
And so we'll take
a look at that now.
So we've just shown you how
to bootstrap your UI
extension with SiriKit.
And that's really great.
But you'll notice
here that again,
there's something not quite
right about the user interface
that we're showing
to users in Siri.
That is, we have a
duplicate chat transcript
interface being shown
within the Siri snippet.
By default, Siri shows the user
interface for various kinds
of intents, and this
includes the INSendMessageIntent
that we've just used to send
Diana a message on UnicornChat.
As such, what we'd like to do
for our UnicornChat users is
really show our custom user
interface to let the users have
a great feeling of unicorniness
when they send messages
in UnicornChat.
And so we can do this in iOS 10,
with an optional new protocol.
By implementing the
INUIHostedViewSiriProviding
protocol, you can let Siri know
that you're drawing
either messages or maps
within your UI extension's view.
And thus, you can opt-in to
displaying different kinds
And thus, you can opt-in to
displaying different kinds
of particular content
within your user interface
and then taking over that
interface on behalf of Siri.
Ultimately when you do this,
Siri will accommodate
your view's content
and so you should make sure
that you do accurately
draw these properties
on behalf of the user.
So let's take the
IntentViewController
that we were just working
within our UI extension.
Here you can see that
if we implement the
INUIHostedViewSiriProviding
protocol,
we can implement displaysMessage
property and return true,
indicating to Siri that we are
in fact displaying message
content within UnicornChat.
And this is all it
takes to be able
to implement your own user
interface within Siri.
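The opt-in itself is a one-property conformance, sketched here on the same IntentViewController from the demo:

```swift
import IntentsUI

extension IntentViewController: INUIHostedViewSiriProviding {
    // Returning true tells Siri we draw the message content ourselves,
    // so it suppresses its default transcript in the snippet.
    var displaysMessage: Bool {
        return true
    }
}
```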
So let's see a demo
of this on my device,
where I have a version
of this application
that does exactly this.
So we're back on my device.
Now let's send another
message to Diana.
Send the message to Diana
using UnicornChat that says
"It's pretty tough to type
demo code with unicorn hands."
And now we see exactly the
interface that we want to see
and what we want
to show our users.
[ Applause ]
Our interface is unimpeded by
what Siri might show by default.
And this gives us a great outlet
to show a custom user interface
that really reflects
the unicorny style
of our application.
Now some final thoughts
on implementing UI extensions
before we part today.
And the first thing I'd
like to leave you with is
that you should consider
being memory conscious in your
UI extensions.
Because extensions, by default,
are temporary and only shown
to the user for short
periods of time,
the system enforces
a lower memory limit
than you might be used to
with your applications.
And so usage of views,
like MKMapView,
can be especially
memory intensive,
and you should use
them judiciously
when building your UI extension.
As we saw, we have access to
minimum and maximum view sizes
within our UI extension
via the NSExtensionContext.
And this is also
incredibly useful to you,
if you're designing
your application
and your UI extension
to be shown
in various different
size configurations.
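Reading those bounds is straightforward; IntentsUI adds the two size properties to NSExtensionContext, so from your hosted view controller you might do something like:

```swift
if let context = extensionContext {
    let minimumSize = context.hostedViewMinimumAllowedSize
    let maximumSize = context.hostedViewMaximumAllowedSize
    // Choose a desired size between these bounds and pass it
    // to the configure completion.
}
```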
But the desired size that you
then send back to Siri is
just that: a desired size.
And so, if you're making use
of different kinds of layout,
you want to make sure that
you're being adaptive with it
so that it can look good
at either the minimum
or the maximum size, no
matter how Siri draws it.
So we've seen a few
key things with respect
to extending our
applications to adopt SiriKit.
And the first is preparing
our application appropriately,
and that's by making use of
shared code in great ways
like embedded frameworks,
implementing unit tests
to be able to properly test
for different kinds of intents
that Siri might send us.
And then architecting
our application
to use the right
number of extensions.
We showed how to add our
first Intents extension
and implement
the Resolve, Confirm
and Handle business logic
that lets our applications
speak Siri's language.
And finally, we showed how to
provide a user interface in Siri
to bring the custom, unique
experiences of our application
into the Siri experience.
The sample code from this
session, as well as the slides
and some great documentation
about SiriKit,
are available on our website.
And then we had an excellent
session yesterday called
Introducing SiriKit,
where we talked
about what we want SiriKit to be
and how it integrates into iOS.
And we had a great session about
app extension best practices
from WWDC 2015 that I'd
highly encourage you to watch,
if you intend to implement
SiriKit extensions.
And I hope that you find
implementing your SiriKit
extensions and your
applications as easy and fun
as we did with UnicornChat.
Thank you.
[ Applause ]