WWDC2014 Session 716

Transcript

>> Welcome to Power,
Performance and Diagnostics:
What's new in GCD and XPC.
I'm Daniel Steffen, I'm one
of the Engineers responsible
for GCD and XPC in Core OS,
and today we'll go
over some background.
Some -- a new concept called
Quality of Service Classes
that we're introducing this
year, the new APIs associated
with that, and the concept of
propagation of this quality
of service and execution context
across threads and processes,
and finally, some pointers
to great new features
around diagnostics and
queue debugging this year.
So GCD, for those
who might be new
to the topic even though
given the number of people,
maybe everybody knows
about it [laughter],
GCD is a low-level framework
for asynchronous execution,
concurrent execution,
and synchronization.
Today, we are mostly
going to be focusing
on the asynchronous execution
aspect of it, and you can think
of asynchronous execution
with GCD as a way to run code
in a separate environment
in your process.
The reasons you might
want to do that are things
like avoid interfering
with the current thread,
a typical example would be the
main thread of your application,
or execute at a different
priority level,
which is something we'll talk a
lot more about in this session,
or coordination between
multiple clients in the process.
This leads us to XPC, which
is our low level IPC framework
on the system, and that
can be thought of as a way
to asynchronously execute
code in a separate process,
which you might do
for very similar reasons:
avoid interfering with the
current process as a whole,
say if you're operating
on untrusted data
that you might not want to
crash the main application
for if it goes wrong,
or you might need to run
at a different privilege level
say in a different sandbox,
and maybe you need to
coordinate with multiple clients
if you're writing
a Daemon on OS X.
And that's really all I'm
going to go over in terms
of background for
these two topics.
This is sort of a "what's new
session" this year, so I --
here are a number of sessions
from past years if you're new
to this technology
or to the platforms.
That will get you up to speed,
you should be able to see all
of those in your WWDC app
or on the developer website.
So let's take a step
back and think
about what our goal should
be as application developers.
We'll see one of
the primary goals is
to provide the best
user experience
for the person using the device.
What do they care about?
The frontmost app and its
user interface: that it be
as responsive as possible.
What do you need to
provide on the system
as application developers
to make this possible?
This responsive user
interface well,
there must be enough
resources available
so that the main thread
of the frontmost app,
which is where all the UI event
handling and UI drawing occurs,
can proceed unimpeded, as well
as all the associated
infrastructure that is involved
in pushing pixels to the
screen or getting events
from the input devices.
Other work that is
not directly related
to this task should
execute off the main thread
of the application independently,
and ideally at lower priority.
Let's talk about priorities.
Very generically
priorities are a mechanism
to resolve resource
contention on the system.
The idea is that
under contention,
the high priorities win, but
if there's no contention,
the low priorities aren't
really a restriction; they will
proceed normally.
So an example of that that
you are probably familiar
with is scheduling priority.
This is something you
can set on your threads
that tells the Kernel scheduler
how you would like access
to the CPU prioritized, and the
idea is that under contention,
high priorities get
to the CPU first.
But even if you set low priority,
there's no restriction
to your execution if
there's no contention,
but then if something
high-priority comes along
like a UI action, then you might
not run for a period of time
if you're at low-priority.
Similar concept for I/O that
we've had for a long time
that you might be familiar with
on the GCD background queue:
the I/O that you perform
on that queue is tagged
as low-priority.
Again, this is no restriction
if there's no high-priority
I/O present,
it will just proceed
normally in that case.
But if there is say,
the main thread
of an application loading an
image for display in the UI,
if such high-priority
I/O is present,
the low-priority I/O
will be deprioritized.
But it turns out that our system
actually has many other resource
controls of this type,
and to configure all
of this correctly
is very complex,
with a lot of knobs involved.
There isn't really any
unified approach for you
to know what settings
that should be used in all
of these cases, and
no good way for you
to tell the system your
intent behind setting specific
configuration values, and
this is what we wanted
to address this year
with the introduction
of Quality of Service Classes.
Quality of Service Classes are
a new concept whose goal is
to allow you the developer to
communicate intent to the system
by giving an explicit
classification of work
that your application performs
with a single abstract
parameter, and move away
from this situation of having
to dictate very specific
configuration values
for all the possible things
that you could configure.
Among the effects of
setting Quality of Service,
the two we talked about,
CPU scheduling priority
and I/O priority, but
also configuration
of timer coalescing
and hints to the CPU
that it should prefer throughput
versus more energy
efficient modes of execution,
and potentially more parameters
today or in the future
that you don't need to know
about as an application
developer or can't even know
about yet because
they don't exist yet.
In particular, we might be
tuning these configuration
values that are actually
used underneath the covers
differently for different
platforms or different devices,
but you don't have
to know about that.
You can just specify
this abstract parameter.
So the core Quality of Service
Classes we're introducing are
user-interactive,
user-initiated,
utility, and background.
I'll go through each
of those in turn.
User-Interactive is the quality
of service of the main thread
of the application, and we set
that up for you automatically.
It's -- should be used
for anything that's directly
involved in event handling,
UI drawing, and anything
of that nature,
but overall for an
application we expect
that this should be a small
fraction of the total work
that an application does,
especially in the case
where the user isn't
directly interacting
with the application.
The User-Initiated Quality
of Service class is intended
for work that's asynchronous
to the UI
but directly UI-initiated, or
anything for which the user is
waiting for immediate results.
This could be things that are
required for the user to be able
to continue his interaction
with the current action
that he's doing in the UI.
Anything that is not of
that nature should run
at a lower Quality of
Service class like Utility,
which is intended for
long-running tasks
with user-visible progress such
as a long-running computation,
intensive I/O, or
networking: anything
that really feeds data
to the user interface
on a long-running basis.
You might also put
things like getting ready
for the next UI request if
you can confidently predict
that that will be needed very
soon, that Quality of Service.
But this is already one of
the energy efficient Quality
of Service Classes, so it's
important to put as much work
as feasible in your application
at this Quality of Service Class
or lower in order to maximize
your user's battery life.
The next level is Background.
That is intended for work
that the user is unaware
is currently occurring.
He may have opted into the
performance of that work
in the past saying like, "I want
to have an hourly backup
occurring," but he doesn't see
that occurring currently
when it's going on.
Anything that might
be prefetching,
opportunistic prefetching of
data, or that might be work
that could be deferrable
for long periods of time
or just generally
maintenance or cleanup work
internal to the application.
So how do you go
about choosing one
of these Quality
of Service classes?
There's a couple of
questions you can ask yourself
that will help with that.
Going through the list
for User-Interactive,
you should ask yourself,
"Is this work actively
involved in updating the UI?"
If that's not the case
it probably shouldn't run
at this Quality of Service.
For User Initiated,
similarly, is this work required
to continue the user
interaction?
Like we said, if
that's not the case,
this is not the right level.
Utility, the question
is, is the user aware
of the progress of the work?
So, there's exceptions to this
but that's typically
the criteria for being
at this level, and
for Background,
the question is: can this work
be deferred to start
at a better time?
If the answer to that
question is, yes,
then you probably shouldn't
actually be scheduling
that work right now.
You should be using an
alternative mechanism
to start the work
in the first place
like this background
activity scheduler.
For more on that, please see the
Writing Energy Efficient Code,
Part one session from yesterday.
So once you've picked
one of these Quality
of Service classes,
say User-Initiated,
another way to think about
your choices are to compare
with the other classes
above and below you.
So, you could ask
questions like,
Is it okay for user-interactive
work to happen before my work
at User-Initiated, or,
is it okay for my work
at User-Initiated to
compete with other work
at User-Initiated
Quality of Service?
If the answer to
that question is, no,
then you should probably
move on below User-Initiated,
and similarly, is it okay for
my work to take precedence
over work at Utility
Quality of Service?
So to recap this section,
we talked about the facilities
we have and need for being able
to provide a responsive
user interface,
particularly asynchronous
execution,
at the correct priority, but
it wasn't really very easy
until now to express your intent
as far as priority is concerned,
and that we were addressing
that with the Quality
of Service Classes which provide
an explicit classification
of work for you and we talked
about the questions
you can ask yourself
to choose the right QoS Class.
So let's look at the
Quality of Service Class API
that you'll be writing
code with.
You can provide Quality of
Service Classes at a number
of levels in the system,
starting with threads: if
you use manually-created
NSThreads or pthreads,
you can provide Quality
of Service Class on
those at creation.
We won't talk about
this in detail here,
but it's pretty simple.
You can look that up
in the documentation.
We'll talk about how to
provide Quality of Service
on dispatch queues and dispatch
blocks, and yesterday's session
on writing energy efficient
code talked about how
to provide Quality of Service
on NSOperation queue
and NSOperation.
In rare cases, it's also useful
to provide Quality of Service
on processes as a whole.
Again, that's something
that you can look
up in the documentation.
So here are the Quality of
Service Class constants
that we've provided in
the headers that you'll pass
to the APIs; these are the
four classes we talked about.
The sys/qos.h header has
constants that you typically use
with the lower-level APIs, and
Foundation has equivalent,
and in fact interchangeable,
constants
for use with the NS APIs.
But the qos.h header has
two additional values
that we'll talk about right now.
The QoS Class Default is a
class that sits in the middle
between the UI classes
and the non-UI classes,
and this is what we use
when we have no more
specific QoS information.
For instance, a
thread that was created
without any specific QoS
will run at the
default Quality of Service.
Similarly, the GCD
global default queue
runs at Quality
of Service Default.
It's not in itself intended as
a work classification for you
to use to specify intent,
so you shouldn't
typically set it, except maybe
when you're resetting
to a previous state
or maybe propagating a state
from one place to another.
The other special value
is QOS_CLASS_UNSPECIFIED.
This isn't actually a
class; it's the absence
of QoS information, the
nil value if you will,
and this indicates to us
that maybe we should be
inferring the Quality of Service
from a different place
like the work origin.
It is also something that
you might see returned
from the QoS getter APIs if a
thread was opted out of Quality
of Service by use
of a legacy API.
These are APIs that might
manipulate the underlying
knobs we talked about directly,
and that then become
incompatible with the Quality
of Service unified
concept, so this would be
things like pthread_setschedparam,
in which case we will opt
the thread out of Quality
of Service and you
will see this value
as the one returned
for the current QoS.
In addition to the classes,
we also provide you
an additional parameter
that indicates relative position
within a QoS class band,
or relative priority.
So rather than having five
discrete classes, we can think
of QoS really as a set
of five priority bands
where you can position yourself
inside one of these bands,
and you can only lower
yourself from the default.
So, you can provide a value
between minus 15 and zero
to position yourself lower than
most other people in that band,
and it's really only intended
for unusual situations.
We expect that in most
cases, the zero value --
default zero value will
be perfectly sufficient,
but if you have special
situations
like interdependent work at the
same Quality of Service class
that needs slightly
differing priority
or a producer-consumer scenario
where one or the other side
might need to be slightly
higher priority to get good flow,
this is the tool for that.
Now let's talk about the
API you'll use with threads.
As mentioned, QoS is kind of
a thread specific concept,
and you can get the QoS class
of the currently running thread
with the qos_class_self()
function.
This will return what the
thread is currently running at.
This works not only for
manually-created threads:
if work that you gave a QoS
via the GCD or NSOperation APIs
starts running, the
thread will have a QoS value
once it is executing,
and this is how you get it.
The other thread concept we
have is the Initial QoS Class
of the main thread.
This is something the system
chooses for you when it brings
up the main thread, and
that's selected depending
on what kind of process you are.
If you're an App, that will be
the User-Interactive Quality
of Service.
If you were an XPC
service or Daemon,
it will be the default
Quality of Service,
and because that
can change later
on if the main thread changes
itself, you can get back
to that original value
with the qos_class_main() API.
For the QoS APIs in GCD,
let's look at the existing
global queues that we've had
since the beginning, and you'll
see we'll be mapping those
to Quality of Service Classes.
So the main queue, which
in an application maps
to the main thread of
the application, obviously runs
at the User-Interactive
quality of service.
We're mapping the
high, default, and low queues
to User-Initiated, Default,
and Utility respectively,
and the background-priority
concurrent queue is mapped
to the Background QoS class.
That one is pretty much
a one-to-one mapping.
The others, it's worth noting,
are a slightly larger spread
of behavior than what you've
had before with high, default,
and low, which were
very similar.
So, this might be something
to watch out for when you move
up to current releases.
Getting a Global Queue with
QoS directly is also easy.
Just use the existing
dispatch_get_global_queue API
with the Utility QoS constant,
that's the first argument
in this example, rather than the
existing priority constants.
And this is really what we
recommend you start doing
from now on to be able to
express that intent directly
of what you want rather
than take advantage
of the compatibility mapping.
Once you have a queue, you
can also ask for its QoS class
with the dispatch_queue_get_qos_class
getter.
Note that QoS class is an
immutable property of the queue
that is specified when
the queue is created.
For a queue that you create
yourself, how do you do that?
With the
dispatch_queue_attr_make_with_qos_class
API.
This will return an
attribute for the QoS Class
that you have requested,
like Utility in this example,
and you then pass that attribute
to the dispatch_queue_create API
and get a Utility serial
queue in this example.
Now let's move onto
a new concept
that we're introducing
this year for QoS
and other reasons called
Dispatch Block Objects.
We've always had
blocks in GCD as sort
of a fundamental unit
of work, of course.
We are enhancing that
concept slightly this year
with Dispatch Block
Objects to allow you
to configure properties
of individual units
of work on a queue directly.
And it will also allow you to
address individual work units
for the purposes of
waiting for their completion
or getting notified
about their completion
or being able to cancel them.
So, this is something that
lots of people have requested
over the years: being able to
cancel blocks on a GCD queue.
Hopefully this helps
out with that.
Otherwise, we are -- the goal
was to integrate transparently
with the existing API that
we already had without having
to introduce a lot of
additional functionality.
So, the way we achieve that is
by using the concept
of a wrapper block.
You start with an existing GCD
block of type dispatch_block_t,
which is a block
that takes no arguments and
returns no value,
and we wrap that in another
block of the same type
which contains these additional
configuration parameters
of QoS Class and Flags.
That operation creates
a heap object of course,
so this is really
similar to calling Block_copy
on the nested block, so in
a C program you will have
to call Block_release on the
returned object to get rid of it,
or in Objective-C programs,
send a release message
or let ARC do that for you.
Quick example of
that API in action,
we create a local variable
of type dispatch_block_t
and set it to the result of the
dispatch_block_create function,
here passing no flags
and just a block literal,
and this is very similar to
Block_copy at this point,
and then we can just
pass that block object
to the existing dispatch_async
API and do some work while
that block is executing
asynchronously,
and finally maybe we need
to wait on that result,
so we call the
dispatch_block_wait API,
passing in that block
object directly,
and now we don't need any
additional setup to wait
for the result [inaudible],
like we might have
in the past with dispatch
groups or dispatch semaphores.
And finally as mentioned,
in a C program you have
to Block_release the
reference created
by dispatch_block_create.
Second example: here we use
the dispatch_block_create_with_qos_class
API to
create a block object
that has a specific assigned
Quality of Service Class
that we want for
just that block.
So here we've chosen Utility
with relative priority minus 8 just
as an example, and we
pass, again, that block
to dispatch_async, and maybe we
do some work and then decide,
"Oh, we really didn't need this
Utility Quality of Service work
at all," so we then pass
the block to dispatch_block_cancel,
which will mark that
block as cancelled,
and if it hasn't
started executing
yet when it gets dequeued, it
will just return straight away.
So, this allows you to sort
of take back the enqueue,
which we thought in the
past was not possible.
It's important to note this
cancellation is not preemptive.
It's very similar
to the dispatch source
cancellation that we've had.
If the block is started,
cancellation will not stop
it from doing anything.
The block can check for
cancellation on itself
with the dispatch_block_testcancel
API, of course.
Finally, last example here we'll
be showing the use of a flag
when we call the
dispatch_block_create API.
We're using the detached
flag here which is something
that you might have heard about
in the Activity Tracing Session
if you went to that
this morning.
It's a concept of being able
to disassociate that block
that you're going to schedule
from what is currently going
on in the thread that caused the
dispatch block create for work
that should not be correlated
such as internal work
to the application like
clean caches in this example.
Of course, we pass
that again to dispatch_async,
and in this case we will
use the dispatch_block_notify API
to schedule a notification block
on the main queue to tell us
when that clean-caches
block is completed.
This is very similar
to the dispatch_group_notify
API that we've had.
Now that we've talked
about the interaction --
at least talked about
the various levels
where you can specify Quality
of Service, we have to talk
about how they interact when you
specify them at multiple levels
at once, and for Asynchronous
Blocks, the default behavior is
that we will always prefer
the Quality of Service Class
of the queue if it has a
Quality of Service Class.
Or if it doesn't, we will look
at the immediate target queue
if that's one of the global
queues with a Quality
of Service (like high, low,
or Background,
but not the default),
or one of the ones that
you specifically requested
with a Quality of Service.
In that case, we
really use that as sort
of a backwards compatibility
method with the existing way
to specify priority in
GCD, or the target queue.
If you don't have any of these
two pieces of information,
we will use the Block
Quality of Service class
if you've specified it
with the creation API,
or otherwise we will use
Quality of Service inferred
from the submitting thread.
What do we mean by
that inferred QoS?
This is the Quality of
Service that we captured
at the time the block was
submitted to the queue,
so this is the Quality of
Service that was active
on the thread that
called dispatch async
at the time the block
was submitted.
We will -- because this
is an automatic mechanism,
we will translate User-Interactive
to User-Initiated
for you to make sure that you
don't propagate the main thread
priority inadvertently to lots
of places in the application.
But otherwise, if there's no
Quality of Service specified
on the queue, we will
use this mechanism.
This is intended for queues
that might not have
a specific identity
that you can assign a
Quality of Service to,
or that don't really
serve a single purpose,
where it is appropriate for
the Quality of Service Class
of the client
enqueuing work on the queue
to actually determine
what you run at.
So things that mediate between
many different clients would be
a good candidate for that.
For synchronous blocks, the
rules are slightly different,
but you will default
to the Quality
of Service Class off the
block if there's such a thing,
or otherwise use the one
off the current thread.
This is very similar to
what has always happened
with dispatch_sync.
It actually executes
the block that you pass
on the calling thread itself.
Note that this will only ever
raise the Quality of Service,
so as not to prevent any
work later on in the thread
after the dispatch sync
returns from making progress.
These are just defaults.
We also provide you explicit
control over these options.
You can use the
DISPATCH_BLOCK_INHERIT_QOS_CLASS
flag when you create
a block object to tell us
to always prefer the
QoS of the queue or the thread,
or conversely to pass the
DISPATCH_BLOCK_ENFORCE_QOS_CLASS
flag so that we will always
prefer the block's Quality
of Service even if we go to
a queue that has a Quality
of Service itself, but again,
in these cases we only
ever raise the Quality
of Service to something higher.
Now that we've talked about
all these different ways
of introducing different
priorities into your process,
we have to talk about
the priority inversions.
What is a priority inversion?
In general, it's just some
situation where the progress
of high-priority work
depends on either the results
of some low-priority work
or a resource held
by low-priority work.
And in the debugging
scenario you would see this
as high-priority threads
that are either blocked
or maybe even spinning
or polling results
from a low-priority thread that
you might also see present.
So in a synchronous situation
like that, it would
be a high Quality
of Service thread waiting on
lower Quality of Service work.
We will actually try
to resolve inversions
in very specific cases for you,
namely when you call dispatch_sync
or dispatch_block_wait
on a serial queue, or when
you call pthread_mutex_lock
or any facilities built
on top of it like NSLock.
In those cases, the system
will try to raise the Quality
of Service of the work that is
being waited on to the Quality
of Service of the waiter.
The asynchronous case is
obviously also possible.
Say you have submitted a
high Quality of Service block
to a serial queue that was
created with lower Quality
of Service or that contains
some blocks with lower Quality
of Service earlier on.
Now this block is -- some
high-priority work is backed
up behind lower-priority
work asynchronously.
In the case of a serial
queue specifically again,
the system will attempt
to automatically resolve
that for you by raising
the Quality of Service
of the queue temporarily
until you have reached
that high Quality
of Service work.
But of course, rather than
relying on the system to try
and resolve these situations
for you, it's much better
if you can avoid these
inversions in the first place.
So if that's possible,
you should attempt to do
that if you see that
type of problem.
One technique is to
decouple shared data
between multiple priority
levels as much as you can
by using finer-grained
synchronization mechanisms, and
move work outside of blocks
or serial queues if
that is possible.
Another technique is to
prefer asynchronous execution
over synchronous waiting,
because synchronous
waiting typically leads
to chains of waiters
in situations like this,
where one party is waiting
on the next, which is waiting
on something else, etcetera,
whereas with asynchronous
execution that is much
easier to resolve.
And also something worth looking
at here is spinning or polling
for completion where
my high quality
of service thread might
actually be, by doing that,
holding off the low-priority
work that it's waiting for,
and particularly look out for
timer-based "synchronization"
in quotes, which is some
kind of checking for a result
after an amount of time, that
might not immediately appear
to be a polling loop,
but in fact is,
especially if it's in some
priority inversion situation.
So to recap this section,
we talked about the QoS Class
constants that you can use,
the concept of relative
Quality of Service priority,
the APIs for queues and
blocks, and the interaction
of multiple Quality of
Service specifications
if you've given them to blocks
and queues at the same time,
along with what we do
for priority inversions
and what you can
do to avoid them.
Our next section is
about the Propagation
of Execution Context.
What is Execution Context
here? It's a set
of thread-local attributes that
the system maintains for you.
This includes the Activity ID
that we heard about this morning
in the Activity Tracing session
that underlies the correlation
aspect of Activity Tracing.
It also includes properties
of the current IPC request
if you are, say, in an XPC Event
Handler, such as the originator
of a chain of IPC across
multiple processes,
or the importance of that
originator, and we'll talk more
about that in a while.
The interesting thing about
this Execution Context is
that we automatically
propagate it for you.
We propagate it across
threads and processes with GCD,
NSOperationQueue, and
other Foundation APIs,
and we propagate
it across processes
like XPC and other IPC APIs.
So an example of that
graphically would be we have two
queues here that are running
with two different
execution contexts,
two different Activity IDs,
1 and 2, as a proxy for that.
If you do a dispatch_async from
Q1 to Q3, that will transport
Activity ID 1 transparently
for you to that other queue,
or if Q1 talks to
a different process
over XPC, we will transport
that execution context
across processes, and then of
course inside that process,
we can continue to
propagate, as well.
Now because this is
automatic propagation,
sometimes you may
need to prevent
that because it might
be inappropriate
in some situations, and that is
where the DISPATCH_BLOCK_DETACHED
flag
to the dispatch_block_create
API comes in.
This would be used for any type
of work that's asynchronous
and should be disassociated from
the principal activity that's
currently ongoing, so anything
that is not directly related
to, say, the UI action that an
application is undertaking right
now; or, in a Daemon handling
an IPC request, some related
work that may not be directly
attributable
to that IPC request.
A typical example of that
is an asynchronous,
long-running cleanup or
something of that nature.
We have a couple of things
that are detached by default:
the blocks that you specify
as dispatch source handlers,
and the blocks that you pass
to dispatch_after.
So, same animation as before.
In the upper half, we have
automatic asynchronous propagation
of the activities,
but say Q3 now discovers
that it has to do some
maintenance operation
that really shouldn't be
associated with this activity.
It uses the Detached Block
API to create a separate unit
of work that is not
related to this activity,
which then maybe later on can
create its own Activity ID
that is separate from the
one that it originated from.
We also provide facilities
for you
to manually propagate
this execution context
with the DISPATCH_BLOCK_ASSIGN_CURRENT flag.
This assigns to the block the current
Quality of Service Class
and Execution Context
at the time you call the
dispatch_block_create API.
And this is particularly
useful in cases where you want
to store the block yourself
in some data structure.
Say you have your
own thread pool
or your own threading model, and
you then later on want to call
that block on one of those
threads; we can't really make
the connection that you
transported work across threads
in that case, because we don't
understand that relationship.
Similarly, you might decide
to later on submit a block
to a dispatch queue, but you
want to capture the state
that was current when
you stored the work.
For XPC, as mentioned, the
propagation is automatic.
XPC connections propagate
both Quality of Service Class
and Execution Context
automatically
to the remote XPC Event Handler.
Worth noting that the capture
of the current state happens
when the send operation occurs,
when you call the XPC connection
message send API; that's the point
at which we capture that state,
and note that XPC handlers
prefer the propagated Quality
of Service over that of the queues
that they run on by default.
For XPC Services that you
might be writing on OS X,
we have talked in past years'
sessions about the concept
of importance boosting.
This is still present this
year, but it's slightly changed.
This is a mechanism that's used
to initially clamp the
XPC Service process
to Background Quality of Service
and only unclamp it during
IPC with the UI process.
So this allows XPC services
to have as little impact
on the system as a whole when
they're not directly in use,
and XPC manages the lifetime of
that boost automatically for you
until either the reply is sent
or you've released
the last reference
of the XPC message
that you've received.
Additionally this
year, the lifetime
of that boost will also be
maintained while asynchronous
work that was submitted
from the context of the
XPC handler is ongoing.
This is done via the propagation
of execution context
that contains this
importance state.
So this is typically what
you want in a process
if you're creating
asynchronous work
that is related to
the IPC request.
The process shouldn't
become clamped again
X-TIMESTAMP-MAP=MPEGTS:900000,LOCAL:00:00:00.000
The process shouldn't
become clamped again
until that work is done, but
of course, it is also possible
that you might have
unrelated work generated there
and in those cases you
should make sure to use
that Detached Block flag
to submit that work,
because otherwise you might
be keeping the XPC service
boosted, unclamped, for
a longer period of time.
Now to recap, we talked
about execution context
and the attributes
that we track therein.
Automatic propagation of
this context along with QoS
and how you can control
that propagation manually,
as well as the aspects
pertaining to XPC propagation
and importance boosting.
So finally, I want to give
a shout out to a couple
of very exciting new features
that we're introducing this year
around Diagnostics
of Asynchronous Code
and Debugging of
Asynchronous Code.
First off, the Xcode
6 CPU Report --
this is what you would
use to diagnose or confirm
that you have done your
adoption of Quality
of Service classes correctly.
If you stop in the debugger
at a breakpoint, say,
you can click on the CPU gauge
tab to get the CPU report,
and here you can see the
total amount of CPU used
by the process in a graph.
And new this year, for each
of the threads involved, we
will also show you the Quality
of Service that the thread
is currently running at.
So in this example,
if you want to confirm
that you have correctly
adopted Utility Quality
of Service class, this would
provide that confirmation
since we see most
of the CPU time
in the overall graph is actually
in that thread that was running
at Utility Quality of Service.
Next up, Xcode 6
CPU debugging --
you may have seen
this on Monday.
It's a really exciting feature
that hopefully will help
out a lot with debugging
asynchronous code.
Not only does the Xcode
debugger now show back traces
from the currently running
code, which in the case
of an asynchronously-executed
block isn't always
as helpful as it could be.
It will also stitch in the
back trace that was captured
when the block was enqueued.
So this shows you a past
historical back trace of when
that block was submitted
to the queue,
and you can distinguish the two
halves by seeing that the icons
for the currently live
back trace are colored
and the historically-captured
back traces
from the enqueue event
are colored gray.
In addition to showing you
currently running blocks,
the queue view of the debugger
can also show you the set
of enqueued blocks on a queue.
So these are things that are not
running yet but will be running
on that queue in the future,
which is sometimes the source
of something not occurring, and
that can be really difficult
to track down if you don't
have something like this.
So here, it will also show you
how many pending blocks exist
on this queue, and
for each of the blocks,
if you open the disclosure
triangle, we can see
where that block was enqueued.
Finally, a really exciting
feature we're introducing this
year called Activity Tracing
that I've mentioned already.
It was covered this morning.
Just a quick reference,
this will allow you
to have additional
information about asynchronous
or past events in
your crash reports.
We're adding the concept
of breadcrumb trails,
which are past high-level
events that occurred leading
up to the point of the crash,
directly into your
crash report,
as well as the Activity ID.
This is the Activity ID that
I was talking about before
as being tracked in
the execution context.
That is also available directly
in your crash report along
with the metadata
associated with it,
and in particular the most
interesting part, trace messages
for that activity scoped
to that specific activity
from both the crashing
process and any processes
that the activity has propagated
to by this propagation
of execution context
that we talked about.
And this is also available
directly in the debugger when,
say in this case, you have
crashed with an abort.
You can type the thread
info command and see
that there was an activity
present, with its name
and 5 messages, and it will
show you the metadata,
the breadcrumbs, and those
trace messages directly
in the debugger.
So this can be a
really powerful way
of debugging asynchronous
code as well,
by inserting trace messages
that the system keeps for you
and displays to you when you
crash in the same activity
that we've propagated for you.
So in summary, we went
over some background,
then talked about Quality of
Service Classes, the new concept
that we're introducing
this year,
and then the APIs surrounding
that, as well as the propagation
of Quality of Service and
Execution Context across threads
and processes, and finally some
exciting news about diagnostics
and queue debugging that
we're introducing this year.
For more information,
please see Paul Danbold,
our Core OS Technologies
Evangelist,
and the documentation on
GCD on the developers site,
as well as all the
related sessions
that have already occurred
this week, in particular,
the Writing Energy Efficient
Code Part 1 session went
into more detail on
Quality of Service
with a different set of examples.
If you would like
more information
on that please see that,
as well as the Debugging
in Xcode session,
which showed a live demo
of the queue debugging feature
and provides some more
information on that as well.
The Fix Bugs Faster using
Activity Tracing session
of this morning goes into lots
of detail about that Activity ID
and Activity Tracing mechanism
that you saw, and that is it.
Thank you.
[ Applause ]