Transcript
[ Music ]
[ Applause ]
>> Alright, good morning.
My name is Chad Woolf.
I am a performance tools
engineer here at Apple and
today's session 410.
We're going to talk about
creating custom instruments in
Instruments 10.
Today's session looks like this.
We're going to talk a little bit
about why you might want to
create custom instruments.
We're going to go over the
architecture of Instruments.
And we have a lot of content
today, so we have three
sections: Getting Started,
Intermediate, and Advanced.
And then on the way out, we'll
talk about some best practices,
some of the things we've learned
along the way, writing
instruments on our own.
So the first one, why would you
want to create custom
instruments?
Instruments already ships with a
lot of really powerful tools,
for example here we have System
Trace where you can see how your
application is interacting with
the scheduler and the virtual
memory.
We have a new game performance
template this year that combines
System Trace and Metal System
Trace to help you spot glitches
and missed frames in your
application.
And if you're on the network
portion of your application, we
also have the Network
Connections Instrument, which
can show you TCP/IP traffic
coming in and out of your app.
And then, of course, a lot of
you are familiar with the Time
Profiler.
Time Profiler is a great way to
see where your application is
spending its time, whether that
be the networking layer or the
game engine or some other
portion.
Now the common thing here is
that these are all very useful
if you know the code that you're
profiling, right.
So if you know those IP
addresses, you know what those
mean, and you know what the
different functions mean and the
call stack of the Time Profiler,
it makes it a lot easier.
But what if someone is profiling
your application and they're not
familiar with your code, right?
What if they just want to see is
the application spending a lot
of time in the networking layer,
and, if so, what is it doing?
Well, a good use for a custom
instrument would be to try to
tell the story of what your
layer or what your application
is doing in a way that someone
who doesn't understand the code
can understand and appreciate
it.
Now in the advanced section,
we're going to show you how to
take advantage of the Expert
System Technology that's built
inside of Instruments so that
you can create an instrument
that's actually able to look for
bad patterns and spot
anti-patterns in your code even
if you're not there.
Alright, so let's take a look at
the architecture that makes this
possible.
And to do that, we're going to
have to start here, back at the
beginning.
So in the beginning, Instruments
worked about the same as it does
today.
There's still a library.
You still drag instruments out
and drop them into your trace
document and then you press
Record and it's like running a
bunch of performance tools at
once.
Now the major difference between
then and now is that back then
the infrastructure of
Instruments didn't really do a
lot to help us write instruments
quickly.
And at the time, that was okay
because we had already inherited
quite a few assets and
performance tools that we
already had.
They all had their own recording
technology and their own
analysis logic and all we had to
do was build a custom storage
mechanism to get the data in the
trace and a custom UI to help
integrate it with the rest of
the app.
Now over time, the maintenance
costs of Instruments and
maintaining this model shot up.
And the reason for that was
every time we wanted to add a
new feature we had to modify
seven custom UIs and seven
custom storage mechanisms and
that's not the model we wanted
you guys to inherit.
We didn't want you to inherit
this kind of maintenance cost.
So before we even talked about
doing a custom instruments
feature, we needed to solve that
first and I think we did.
So in the new version of
Instruments, instead of having
custom UIs and custom storage
mechanisms, we have two
standardized components and
that's the Standard UI and the
Analysis Core.
Now the Standard UI is what
implements the entire user
interface of a modern Instrument
and it's tightly coupled with
the Analysis Core.
The Analysis Core you can think
of as a bit of a combination
between a database and an expert
system.
And these two are optimized
to work on time series data,
which makes them a great
foundation for building
instruments.
Now when you build an instrument
with the modern architecture,
really what you're doing is
essentially creating a custom
configuration of both the
Standard UI and the Analysis
Core.
Now if you look at some of the
screenshots of the powerful
instruments that I showed in the
beginning, we have the System
Trace and we have the Game
Performance template and the
Network Connections template and
Time Profiler.
All of the instruments in all of
those documents were built
completely out of the Standard
UI and the Analysis Core.
So you can do the exact same
things that they can do.
And in Xcode 10 and in
Instruments 10, we're giving you
the exact same tools to build
your instruments.
So the only difference between
an instrument that ships with
Xcode and one that you build is
just simply who built it.
Now your instruments will show
up here in our library and you
can see like Activity Monitor at
the top.
Just like that, you can drag and
drop your instrument into a
trace document and take a
recording.
And what happens here is that
Instruments fills in the
Analysis Core with data and the
Standard UI reacts to create the
graphs and the table views.
Now an instrument has two ways of
showing data.
It's got the graph view at the
top here, which we call a track
view, and an instrument can
define more than one graph, if
it would like to.
And the way that you choose
between the graphs that will
define your instrument is
there's a small control here
attached to the Instrument icon
and we can change this from say
CPU to Networking.
Now each graph is allowed to
define a certain number of
lanes.
So here we've defined three
lanes, graphing three different
types of CPU utilization.
And each one of these lanes is
bound to a different table in
the Analysis Core or it can be
bound to the same table but
you're looking at a different
column in the table.
Now the other portion of the
instrument is the lower portion,
which is equally as important.
It's called the Detail View.
And that's where you can see the
event-by-event lists and also
any sort of aggregations and
summaries of your data.
Now just like the graphs, you
can define a number of details
for your instrument and you can
select which detail is active by
clicking this portion of the
jump bar and then selecting the
title of detail that you define.
Now just like the lanes in the
graph, all of the details are
bound to again a table in the
Analysis Core and that's where
they receive the data.
The recording happens.
The tables fill in.
And the UI reacts and there's no
special code needed on your
behalf.
Now from the perspective of the
Standard UI, everything in the
Analysis Core appears to be a
table.
So let's talk a little bit about
tables and what they are.
Tables are collections of rows
and they have a structure that's
defined by a table schema.
Right, so it's very similar to a
database application.
The schema defines the columns
and the names of the columns and
also the types.
Now the Analysis Core uses a
very rich typing system called
engineering types, and that
both tells us how to store the
data and also how to visualize
it and analyze it in the
Standard UI.
Now in addition to or while the
schema describes the structure
of a table, you can use
attributes which are key/value
pairs to describe the content.
So that kind of helps us
describe what goes into the
table.
You can think of schemas as like
a class in Objective-C or Swift
whereas the rows are like the
instances.
And so it's important that your
schema names are singular, just
like we have class names in
Objective-C that are singular,
like NSString instead of
strings.
So this will be more important
when we get to the advanced
section but I wanted to call it
out now so we can know what
we're looking at.
Okay, an example of the schema
here is tick.
This is one of the schemas that
comes inside of Instruments and
it's used to hold a table of
synthetic clock ticks that we'll
use later for statistical
computations in our modelers.
Now it is very simple.
It has one column that's defined
and that's time and it's using
the engineering type
sample-time.
And it also defines an optional
attribute that can be attached
to that table instance called
frequency.
So if you create a table with a
frequency equals 10 attribute
here for our tick schema, then
the provider of that data knows
that it needs to fill that table
with ten timestamps per second,
right.
So that's a way to communicate
what you want filled into the
table.
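As a rough sketch, declaring a table
with that attribute in a package
definition might look like this (the
attribute element's exact shape is an
assumption; identifiers are
illustrative):

```xml
<!-- Hypothetical sketch: a tick table filled at 10 timestamps per second. -->
<create-table>
    <id>tick-table</id>
    <schema-ref>tick</schema-ref>
    <attribute>
        <name>frequency</name>
        <value>10</value>
    </attribute>
</create-table>
```

The provider that binds to this table
sees the frequency attribute and
knows to fill it with ten timestamps
per second.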
Now with that, I think we have
enough information to help us
get started.
So we're going to show you how
to create your own Instruments
package project in Xcode and
we're going to show you how to
create your very first
instrument that graphs these
ticks and shows these ticks in
the detail view.
And to do that, I would like to
call up my colleague Kacper to
give you guys a demonstration.
[ Applause ]
>> Thank you, everyone.
Now I will show you how to start
with creating and running your
first custom instrument.
You're going to be using the tick
schema presented by Chad to
craft an instrument that draws
ticks at a constant frequency.
You will learn how to describe
your package, iterate on it
using Xcode, and test it in
Instruments.
Let's get started.
You create your new Instruments
Package project just like you're
used to in Xcode.
You go to a New Xcode Project,
select macOS platform, and
Instruments Package.
You need to fill out your
product name, which will become
the default name for your
Instruments Package.
Let's call it ticks.
Hit Next and Create.
Xcode has created a project with
a package target and one file,
the package definition.
Let's look into it.
Packages are described in
XML-based syntax.
At the beginning, each package
contains an identifier, title, and
owner.
These fields will be visible
when someone attempts to install
your package.
Usually, you would start by
defining your own schema and
optionally a modeler, but because
here we are going to be using
the predefined tick schema, let's
remove these guides.
To import the tick schema from
the base package, all you need to
do is specify the import-schema
element and then the name of the
schema, tick.
Now it's ready to be used by our
Instrument.
To make defining more complex
elements easier for you, we've
deployed a number of snippets in
Xcode.
To use them, just start writing
your element name, like
instrument, and hit Return.
You need to fill out a unique
identifier for the instrument
and a few properties that will
later appear in the Instruments
Library.
It will be an instrument drawing
ticks every 10 milliseconds.
Now it's time to create a table
that will be instantiated when
this instrument is dropped from
library to a trace document.
The table identifier has to be
unique within this instrument
definition.
Let's call it tick table.
In schema-ref, we need to
reference the schema that we
previously imported, tick.
Now we need to define what will
appear in the track view and
detail view for our instrument.
I will use graph element.
We need to fill out a title for
our graph.
I will call it ticks.
And a title for our lane.
I need to reference the table by
the identifier that was previously
created here, so I will
reference tick table.
And now we will specify plotting
for our graph.
I will use plot element.
And in its most basic form, it
requires you only to pass
the mnemonic of the column that
contains value to be graphed.
We will be graphing time.
I would like all of my
timestamps to be visible in a
table.
To do this, I will use list
element.
We pass a title for the list that
will appear in the jump bar
of the instrument, a table ref,
like for lane element before,
and columns that we would like
to see.
Now our package is ready to be
built, and run in Instruments.
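Assembled, the package body Kacper
just walked through might look
roughly like this (identifiers and
titles are illustrative, not the
exact demo source):

```xml
<!-- Sketch of the ticks instrument; identifiers are illustrative. -->
<import-schema>tick</import-schema>

<instrument>
    <id>com.example.ticks</id>
    <title>Ticks</title>
    <category>Behavior</category>
    <purpose>Draws ticks every 10 milliseconds</purpose>
    <icon>Generic</icon>

    <!-- Instantiated when the instrument is dropped into a trace document. -->
    <create-table>
        <id>tick-table</id>
        <schema-ref>tick</schema-ref>
    </create-table>

    <!-- Track view: one graph with one lane, plotting the time column. -->
    <graph>
        <title>Ticks</title>
        <lane>
            <title>Tick Lane</title>
            <table-ref>tick-table</table-ref>
            <plot>
                <value-from>time</value-from>
            </plot>
        </lane>
    </graph>

    <!-- Detail view: an event-by-event list of timestamps. -->
    <list>
        <title>Ticks</title>
        <table-ref>tick-table</table-ref>
        <column>time</column>
    </list>
</instrument>
```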
To do this, you will use Xcode
scheme run action.
Let's do it.
You can see that a build error
appears.
You have full IDE support when
building instruments packages.
Here, the error appears inline and
says that column timestamp could
not be found in schema tick.
Oh, that's right, because it's
not timestamp.
It's supposed to be time.
I will fix it and run it again.
You can see it's running because
a new copy of Instruments
appears.
You can recognize this special
copy by its different icon.
It loads your package temporarily
only for this run session.
It allows you to iterate on your
package more easily.
To be sure that your package is
already loaded, we can check it
out in the new Package Management
UI.
You can find it in Instruments
Preferences, in the Packages
tab.
You can see our newly created
package here along with a debug
badge, which means that it's
loaded only temporarily.
You see also all of the system
packages here.
You can use and link against
them using the subtitle visible
here.
Our ticks package contains ticks
instrument.
So let's test it now using a
blank template.
I will switch my target to my
MacBook
and we'll search for my
instrument in the Instruments
Library.
I will filter for ticks and it
appears here with all of the
properties being filled out from
the package definition.
Let's drag and drop it into a
trace
and record for just a second.
You can see the bottom pane was
populated with data generated
every 10 milliseconds.
Detail and graph are coordinated
with each other.
When I click on rows, you can
see the inspection head moving here.
I can also zoom into a graph
using Option and Click and Drag.
Here you can see the ticks are
indeed being drawn.
That's how you create your first
Instruments Package.
Now back to Chad who will tell
you more about Standard UI.
[ Applause ]
>> Alright.
Thank you, Kacper.
Okay, so we've seen how to
create a very basic instrument.
We see how to get started with
creating your first project in
Xcode.
Now let's talk about the
different kind of graphs that we
have and the different kind of
details we have and how we can
potentially do this with real
data.
Starting with graph lanes.
So you saw how Kacper was able
to define a graph and a lane
using what we call the plot
element.
Now the plot element is a way to
tell the Standard UI that we
should be taking the entire
contents of the table and trying
to plot it in that particular
lane.
Now the way that the plot
element determines how to graph
this, what the graphing
treatment should be, is by
looking at both the schema and
the column that was targeted to
take the value from.
If the schema is an interval
schema, meaning that it has a
time and a duration, or if it's
a point schema, which means it
just has a timestamp, they're
treated differently.
And if the column being targeted
has a magnitude, meaning that it
can draw a bar graph out of it,
it will draw a bar graph like
this.
An alternative here is our Life
Cycle lane where it's still an
interval schema but we're
targeting a column that's a
state and the state does not
inherently have a magnitude.
So it doesn't make sense to draw
a bar graph there.
So the Standard UI will
automatically pick a state style
treatment which involves drawing
these intervals with a label in
a rounded rectangle style so you
can tell it apart from just a
flat bar graph.
Now it's really important that
the Standard UI be able to pick
these treatments for you because
that's what keeps Instruments UI
consistent.
So if you define a state graph
and we define a state graph, the
Standard UI will enforce that
they look the same way, which
makes it a lot easier for users
of Instruments to move from
instrument to instrument.
Now if you want to create graphs
where the number of lanes is
chosen dynamically, based on the
contents of the data, you can
define what's called a plot
template.
Now a plot template is defined
very similarly to a plot except
there's an extra element in
there that allows you to choose
a column in the table and it
will create a separate row for
each unique value in that
column.
Now if you're looking for just
spikes or periods of activity,
we have what's called a
histogram and what you can do is
break the timeline over certain
size buckets, let's say 100
milliseconds, and then use
functions like count or sum or
min or max to sort of drive up
the magnitude of those buckets
as the different points or
intervals are intersecting.
So it's a great way to look for
spikes in activity such as here
in the System Trace where we're
looking for spikes of activity
in context switches or virtual
memory.
Now let's talk about details.
Details are on the lower half of
the UI.
And you've already seen the
first one, which is the List.
That's a very simple mapping
between a table in the Analysis
Core and a table view in the UI.
We also have aggregations.
And aggregations are nice when
you want to try to subtract out
the time component and you want
to look at your data in
aggregate.
You want to apply some
statistics to everything that's
in that table.
And so when we define an
aggregation, the columns this
time are functions.
So you can use functions like
sum, average, count, and several
other statistical functions to
help you create the aggregation
view that you want to create.
Now the nice part about
aggregations is that you can
define a hierarchy as well.
So here we've defined a process,
thread, and virtual memory
operation hierarchy, so we can
see these totals broken down by
the process and then by each
thread that's in that process
and then by each type of
operation that's in that thread,
in that process.
So aggregation is a really nice,
powerful way to look at a lot of
data in summary.
Now another type of aggregation
is called Call Tree.
Now the Call Tree is useful when
you have a column that is a
backtrace and another column
that's a weight.
You can create a weighted
backtrace or a weighted Call
Tree view using the Call Tree
like you see in Time Profiler.
Now another style is called a
narrative.
And the narrative is used when
you want to convey information
that's just best left to
technical language, such as the
output of an expert system and
that works hand-in-hand with the
narrative engineering type.
Now the last type of detail here
is called a time slice.
The time slice looks very much
like a list except the contents
are filtered to include only the
intervals that intersect with
that blue line you see on the
graph.
That's called the inspection
head.
So as you move the inspection
head over the graph, the
contents of the list will be
filtered to match what
intersects with that inspection
head.
Now all of these UIs are bound
to tables in the Analysis Core.
And when you hit Record, the
data comes in through the
Instruments app and fills in the
data in the Analysis Core.
So let's talk a little bit more
about how that process works.
The first step before you can
press Record is the Analysis
Core will be taking the tables
that are created within it and
it'll be mapping it and
allocating storage for it in the
core.
Now if a table has the exact
same schema and the exact same
attributes, then by definition
it's the exact same data, so
it's going to map to the exact
same store.
Now for each store, the second
step is to try to find a
provider for the data.
Now sometimes we can record that
directly from the target through
the data stream and sometimes we
have to synthesize the data
using a modeler.
Now modelers can require their
own inputs and those inputs can
be the outputs of other modelers
or they can be recorded directly
from the data stream and that's
how we synthesize the rest of
the data that we don't know how
to directly record.
Now once we've got data sources
for all of the stores in the
Analysis Core, that's what's
called the binding solution.
And so the third step is to
optimize the binding solution.
And here you see Instruments is
visualizing its own binding
solution for what we call the
thread narrative.
Now the nice part about the
binding solution is that it's
trace-wide and so as you're
dragging and dropping
instruments into the trace,
Instruments is computing the
best possible recording solution
to try to minimize the recording
impact on the target.
Now when you create your own
tables or when you create table
instances, you have to give them
a schema.
And Instruments already has over
100 schemas defined.
And all of these schemas are
available to you and are
contained in the packages that
you saw in the Package
Management UI.
You can just simply import the
schema into your own package.
Now if that schema is contained
in a package that's not the base
package, you also have to link
that package.
There's a build setting in Xcode,
Linked Instruments Packages, that
you can set so that we can find
the extra package you're
referring to at build time and do
some type checking.
Now because all of these schemas
are defined in other packages,
when you hit Record, all the
tables with those schemas will
fill in because they either have
modelers defined or we know how
to record them from the data
stream.
So these make excellent building
blocks for your own instruments
but even better they make
excellent inputs for writing
your own modelers.
Now you write a modeler or you
define a modeler in your
Instruments Package with the
modeler element and you can also
create a custom output schema
for that modeler.
You can use the point-schema for
just a single point in time or
you can use the interval schema
if you have a point and a
duration.
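As a sketch, a custom point-schema
output might be declared like this
(element and type names follow the
conventions shown elsewhere in the
session and should be treated as
assumptions):

```xml
<!-- Hypothetical point schema: one timestamp plus one payload column. -->
<point-schema>
    <id>my-event</id>
    <title>My Event</title>

    <column>
        <mnemonic>time</mnemonic>
        <title>Time</title>
        <type><sample-time/></type>
    </column>
    <column>
        <mnemonic>label</mnemonic>
        <title>Label</title>
        <type><string/></type>
    </column>
</point-schema>
```

An interval-schema variant would add a
duration column alongside the
timestamp.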
Now the modeler is able to
define what inputs it needs and
this is what tells the binding
solution how to fill out the
rest of that data flow graph.
And so your modeler will snap
right into the binding solution.
Now modelers are actually
miniature expert systems and
they are written in the Clips
language, which means that
they're very powerful but
they're also pretty advanced.
So we're going to save the
details on how to create
modelers for the advanced
section; however, it is really
important that you be able to
define your own schemas and we
have a new os signpost API this
year, which is a great way to
get data into Instruments.
So we've created a little bit of
a shortcut.
Inside your package, you can
define what's called an os
signpost interval schema and
what that does is both define
schema and also give us enough
instructions to be able to
generate a modeler on your
behalf.
Now inside there, you can
capture the data that you
recorded in the metadata of your
os signpost calls and you can
use that captured metadata and
expressions to define how we
should fill out the columns of
your schema.
So we'll look at a really simple
example.
Let's say we're going to do JSON
decoding and we have signposts
that mark the beginning of
that decoding activity and the
end of that decoding activity.
And in the beginning, we'll also
capture some metadata to
indicate the size of the JSON
object that we're about to try
to parse.
Now in your Instruments Package
definition, you can create an os
signpost interval schema and you
define the name of your schema
here.
You select which signpost you
would like to have recorded,
including the signpost name, and
then here you can use a syntax
to capture the different pieces
of metadata from your start
metadata message.
And here, we're going to take
that captured value and we're
going to use that as the
expression to teach us how to
fill in the column for data size
that we just defined in line
here.
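Put together, the JSON-decoding
example might look roughly like this
(subsystem, category, signpost name,
and mnemonics are invented for
illustration):

```xml
<!-- Hypothetical os-signpost interval schema for the JSON-decoding example. -->
<os-signpost-interval-schema>
    <id>json-decode</id>
    <title>JSON Decode</title>

    <!-- Which signposts to record. -->
    <subsystem>"com.example.myapp"</subsystem>
    <category>"decoding"</category>
    <name>"Decode JSON"</name>

    <!-- Capture the size from the begin signpost's metadata message. -->
    <start-pattern>
        <message>"Size:" ?data-size</message>
    </start-pattern>

    <!-- Fill a column from the captured variable. -->
    <column>
        <mnemonic>data-size</mnemonic>
        <title>Data Size</title>
        <type><size-in-bytes/></type>
        <expression>?data-size</expression>
    </column>
</os-signpost-interval-schema>
```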
Now in Session 405, which is
Measuring Performance Using
Logging, I demonstrated the
Trailblazer application and also
showed you an instrument that
you guys could write based on
the signpost inside that.
And now that we know a lot more
about how to write custom
instruments, I'd like to invite
Kacper back on stage to give you
a demonstration and walk-through
of how we created that package.
[ Applause ]
>> Thank you, Chad.
So the Trailblazer app is an iOS
app that displays lists of
popular hiking trails near you.
As a UI component, it uses a
UITableView.
Each cell loads the image for a
trail asynchronously.
To prevent glitches and as an
optimization, when a cell is
reused, we cancel the download.
To visualize my flow of
downloads, I wrap every download
in an os signpost call.
Let's take a look at it.
When my table view cell is
displayed, the start image
download method is called.
We create a downloader signpost
ID, which takes an os log handle
and the downloader object.
We then grab the address of the
UI table view cell and call os
signpost begin with the os log
which is coming from
signpost.networking.
Let's take a look at it.
This log takes our app
identifier as the subsystem and
networking as the category.
We pass the background image
name, the previously created
signpost ID, and a message
format, which includes the image
name.
Here, we wrap it in the public
specifier because it's a string,
and the caller, which is the
address of a cell.
Our download can complete in
two ways.
First, successfully.
Let's take a look at that now.
When it completes like that, a
delegate method is called.
We create signpost ID just like
before and call os signpost end.
This time we pass status and
size.
Value for status is completed.
And size is set to the image
size.
Next let's take a look at our
prepareForReuse override.
When there's a download in
progress, we cancel it.
We create the signpost ID and
call os signpost end with the
same format string, but now the
value is canceled and size is
zero because the download didn't
succeed.
Let's take a look at our os
signpost interval schema
definition and how we capture
those signposts in the package.
We define our os signpost
interval schema with a unique
identifier and title.
Then we define our subsystem and
category, which corresponds to
the one that we passed when
creating os log handle.
We create name element, which
corresponds to the one that we
passed in os signpost call and
start pattern and end pattern.
These both correspond to the one
that we passed in os signpost
begin and end calls.
Message element is the same as
the format string you passed but
instead of format arguments, you
pass variables here to capture
the values that you passed when
calling os signpost.
Let's take a look at how we fill
out those values in our columns.
Here, you can see status column.
It's of type string because it's
either completed or canceled,
and we fill it out with the
value of status variable.
Because the expression element
can take an arbitrary CLIPS
expression, we could also do
more sophisticated things in it.
Here, we could compute event
impact by looking at size.
If it's greater than 3 and 1/2
megabytes, we say that impact is
high, else impact of operation
is low.
That's our definition for os
signpost interval schema.
Now let's take a look at table
creation.
For the schema ref, we pass the
identifier of our os signpost
interval schema and create a
unique identifier for this
specific table.
Then, we can reference it in our
UI definitions.
For graph, we create a single
lane.
It takes our table and this time
it graphs by using the plot
template.
A plot template is a dynamic way
of creating graphs.
It looks at the table, at the
column that was passed in the
instance-by element, and for each
unique value of this column, it
creates a plot.
The label format element allows
us to create a formatted title
for this plot.
Here it's 'img' and the value
from the image name column.
We pass image name as the value
of our plot.
Each of our plots will be
colored by the impact column and
the label on our plot will be
taken from the image size.
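In package-definition form, the plot
template just described might be
sketched like this (identifiers and
mnemonics are illustrative):

```xml
<!-- Sketch: one plot per unique image name, colored by impact, labeled by size. -->
<graph>
    <title>Image Downloads</title>
    <lane>
        <title>Downloads</title>
        <table-ref>image-download</table-ref>
        <plot-template>
            <instance-by>image-name</instance-by>
            <label-format>img: %s</label-format>
            <value-from>image-name</value-from>
            <color-from>impact</color-from>
            <label-from>size</label-from>
        </plot-template>
    </lane>
</graph>
```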
Next, we have a list.
You already saw this one in
ticks example.
Here, we pass all of the columns
that you would like to see.
Next, aggregation.
This aggregation will track all
of the completed downloads.
Because our table contains both
completed and canceled
downloads, we need to apply
slice element to filter some of
the data.
In the slice element we specify
the column that the slice will be
applied on and the predicate
value that has to be matched.
Here, we want to take only
completed rows from this table.
We define a hierarchy, which is
only a one-level hierarchy with
image name, and the columns that
will be visible.
For each image name, we will
show a count and the image size.
So we will be summing the sizes
of an image.
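That aggregation might be sketched
like this (the exact shapes of the
slice, hierarchy, and function
elements are assumptions based on the
description above):

```xml
<!-- Sketch: aggregate only completed downloads, grouped by image name. -->
<aggregation>
    <title>Completed Downloads</title>
    <table-ref>image-download</table-ref>

    <!-- Keep only rows whose status column matches "completed". -->
    <slice>
        <column>status</column>
        <equals><string>completed</string></equals>
    </slice>

    <!-- One-level hierarchy: group by image name. -->
    <hierarchy>
        <level><column>image-name</column></level>
    </hierarchy>

    <!-- Aggregation columns are functions: a count and a sum of sizes. -->
    <column><count/></column>
    <column><sum>size</sum></column>
</aggregation>
```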
Next, we have a time slice.
We specify all of the columns
that will be visible.
And to use our instrument more
easily, we can specify our
custom template.
Now let's try to build and run
our package.
You can see the template appears
here.
I can choose it,
and target my iPhone and
Trailblazer app.
I will record for just a while.
You can see that the track view
was populated with data.
A plot was created for each
image name.
You can see that label format
matches the one that we passed
in package definition.
And if a download is larger
than 3 and 1/2 megabytes, our
plot is colored in red.
The size appears on the plot.
Next, we can take a look at all
of the details.
First, we have the list of
downloads.
This is just a list of all the
downloads that happened.
We can choose our aggregation,
which divides all of the
downloads by image name.
You can see at the top that we
downloaded 12 images.
And the image for location seven
was downloaded two times.
Next, we can take a look at
active requests.
Here, you can see that when I
drag my inspection head, the
data in the detail view changes.
We can track multiple active
requests and see what was the
duration at the time of current
inspection head.
If you would like to take a look
at your data from a different
perspective, at your stores and
modelers, you can do this by
using the Instrument inspector.
This is a way to debug your
custom instruments.
Here, you can see that I
selected the Stores step and I
see the store for os signpost
being created.
It looks at the networking
category and the
com.apple.trailblazer subsystem,
and we gathered 24 rows here.
Then, we can see our created
table image download, which has
12 rows.
In the bottom area, you see the
whole content of this table.
Next, we can jump to modelers
and we can see that we have an
auto-generated os log modeler
here.
It took 24 rows and outputted 12
rows.
On the right, you can see the
binding solution.
So our generated os log modeler
took data from os signpost table
and put it into image download
table.
Then it was consumed by our
instrument.
So that's how you capture your
os signpost invocations, create
UI, and look at your data using
Instrument inspector.
Now let's go back to Chad who
will tell you more about
advanced modeling.
[ Applause ]
>> Alright.
Thank you, Kacper.
Okay, so now we've seen how you
can combine os signpost data
with custom instruments.
And I think you'll be able to
take this combination pretty
far.
Now, we can talk about some of
the advanced topics,
specifically how you create and
define modelers.
Now a modeler conceptually is a
very simple machine.
It takes a series of inputs.
It does some reasoning over
those inputs and produces
outputs.
The inputs of the modelers are
always perfectly time ordered
and so if you request several
different input tables, those
tables are first time ordered
and then merged into a
time-ordered queue, which feeds
the working memory.
So as we pull these events off
one by one, they're entered into
what's called the working memory
of the modeler.
And as the modeler sees the
evolution of this working
memory, it can draw inference.
And when it sees a pattern that
it wants to produce an output
for, it simply writes it to its
output tables.
Let's walk over like a really
kind of playful example of how
you might use a modeler.
So let's say you define a schema
called playing with matches,
right.
This is an os signpost interval
schema and it's for an os
signpost that you've defined
where you're going to do some
sort of dangerous operation in
your code.
And we define another schema
called app on fire, right.
It's also a signpost schema but
these signposts mean that the
application has entered into a
bad state and we really want to
know why.
So you create an output schema,
which is a point schema, that's
going to hold the object that
was playing with matches and the
time at which the fire started.
We are going to call that the
started a fire schema.
Now the modeler's world looks
like this.
So we have all of our inputs set
up in time order ready to go and
this dashed line on the left is
what's called the modeler's
clock.
Now when we grab the first input
and we enter that into the
working memory, the modeler's
clock moves to the start of that
interval and then we grab the
next input, the modeler's clock
again moves to the beginning of
that interval and we enter that
into the working memory.
Now the modeler sees both of these in the working memory.
If playing with matches had started after the app was already on fire, it wouldn't mean much; but since playing with matches started before the app caught fire, we can draw a logical conclusion here, called the cause of fire, and we can enter that into the working memory.
Now as we grab this third input,
you'll notice that the modeler
clock has moved and it no longer
intersects with our first two
inputs.
And so those are removed from
the working memory.
Now if the cause of fire had
what's called logical support,
it would also be removed from
memory.
Now to recap, the clock is
always set to the current input
timestamp.
And for an input to remain in
the working memory, it must
intersect with the current clock
in the modeler.
This is what helps us establish
coincidence.
It allows us to prune out the
old data and it also allows us
to see if there are inputs that
are possibly correlated in time.
Now the way that a modeler
reasons about its working memory
is defined by you through what's
called a production system.
Production systems work on facts
in the working memory and
they're defined by rules that
have a left-hand side, a
production operator, and the
right-hand side.
The left-hand side is a pattern
in working memory that has to
occur to activate the rule and
the right-hand side are the
actions that will happen when
that rule fires.
Now the actions could include adding a row to an output table or asserting a new fact into the working memory as the modeling process progresses.
So facts come from two sources.
First, they come from the table inputs that you saw; the modeler will automatically assert these as facts using the modeler-clock rules that I showed you.
Second, they can be produced by assertions on the right-hand side of a production.
Now if you're going to create your own facts, Clips allows you to define what's called a fact template, which allows you to provide structure to your fact and do some basic type checking.
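As a minimal sketch, a fact template in Clips is declared with deftemplate; the template and slot names here are illustrative assumptions based on the example, not the session's exact code:

```clips
;; Hypothetical fact template for the derived "cause of fire" fact.
;; Template and slot names are illustrative, not from the session.
(deftemplate cause-of-fire
    (slot who)    ; the object that was playing with matches
    (slot time))  ; the time at which the fire started
```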
So let's take a look at some
rules in Clips.
Our first rule that we're going
to look at is called found
cause.
And what that says is if there
is an object who's playing with
matches at t1, and the app is on
fire at t2, and t1 happened
before t2, then on the
right-hand side of this
production, we can assert a new
fact called cause of fire with
the object that started the
fire.
Now that will be entered into
the working memory.
Now we come down to our second
rule, which is called record a
cause, if we have an app on fire
at some start time and we know
the cause of the fire and we
have a table that's bound to our
append side, that's the output
side of the modeler, and that
table happens to be the schema
that we define called started a
fire, then we can create a row
in that table and then set the
time and who started the fire to
the values that we captured up
here in the pattern.
Now with that, we basically
created our very first expert
system to look for bad patterns
in our application with these
two rules.
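Sketched in Clips, those two rules might look roughly like this. The template names, slot names, and the table-binding pattern are reconstructions from the narration, offered as a sketch rather than the session's exact code:

```clips
;; Sketch of the "found cause" and "record a cause" rules.
;; Slot and schema names are assumptions based on the narration.
(defrule MODELER::found-cause
    ;; An object was playing with matches at ?t1, and the app
    ;; caught fire at a later time ?t2.
    (playing-with-matches (who ?object) (start ?t1))
    (app-on-fire (start ?t2&:(< ?t1 ?t2)))
=>
    ;; Conclude that this object caused the fire.
    (assert (cause-of-fire (who ?object))))

(defrule RECORDER::record-a-cause
    (app-on-fire (start ?start))
    (cause-of-fire (who ?who))
    ;; A table bound to the append side with our output schema.
    (table (table-id ?t) (side append))
    (table-attribute (table-id ?t) (has schema started-a-fire))
=>
    ;; Write one output row with the captured time and culprit.
    (create-new-row ?t)
    (set-column time ?start)
    (set-column who ?who))
```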
Now you may have noticed that the rules were prefixed with either modeler or recorder.
Those are what are called modules in Clips, and they allow you both to group rules and to control the execution order of the rules.
So for example, if you keep all of the rules that produce output to the output tables in the recorder module, then you can be sure that you won't write an output while you're in the middle of the reasoning process in modeler, because all of the rules in modeler have to execute before any of the rules in recorder can execute.
Now, I mentioned the term logical support before.
Logical support is usually tied to what are called pure inference rules, rules that say: if A and B, then C.
By adding logical support to your production, what you're saying is that if A and B are no longer present in working memory, then C should automatically be retracted.
In other words, C is logically supported by the existence of A and B.
Now this is important because it
limits working memory bloat
which helps with resource
consumption but it's also
important to remove facts from
working memory that are no
longer valid.
And if A and B are no longer
valid, then you should really
remove C.
So to add logical support to your production, you just wrap the pattern with the keyword logical, and then anything you assert on the right-hand side of the rule will be automatically retracted when those supporting facts go away.
And you'll notice that these two facts here have names that come from our schemas.
Those are inputs, so when the modeler's clock moves forward, they will automatically be retracted.
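To make that concrete, wrapping the patterns of the earlier rule in the logical conditional element might look like this; as before, the template and slot names are illustrative:

```clips
;; When either supporting input fact is retracted -- for example,
;; because the modeler's clock has moved past its interval -- the
;; asserted cause-of-fire fact is automatically retracted too.
(defrule MODELER::found-cause
    (logical
        (playing-with-matches (who ?object) (start ?t1))
        (app-on-fire (start ?t2&:(< ?t1 ?t2))))
=>
    (assert (cause-of-fire (who ?object))))
```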
Okay, so now we know the basics
of how to create a modeler in
our package and we've seen some
of the Clips language and rules.
So let's take a look and see if
we can add an expert system to
our networking instrument to
find bad patterns and potential
misuses in our networking layer.
And to do that, I'd like to
invite Kacper up for one last
demo.
[ Applause ]
>> So now with the existing logging, I will try to write a modeler to detect some anti-patterns in our app's networking behavior.
I was playing with my Trailblazer app, and it seemed that if I scroll pretty fast, there are some glitches visible here.
The image is replaced multiple times, so I suspect that our cancellation doesn't really work.
I would like to write a modeler that detects that.
So let's take a look in our
package definition.
We will start by writing the modeler element.
The modeler has an identifier, a title, and a purpose.
These fields will be extracted into your documentation.
We specify the production system path, which contains all of the logic for our modeler.
Then, we define the output of our modeler.
It will be the downloader narrative schema.
The required input for our modeler will be the os signpost table.
This table contains begin and
end events.
Now let's take a look at the definition of the downloader narrative schema.
This is a point schema that defines two columns: a timestamp, which tracks the time that the diagnostic message was logged, and a description, which carries information about what went wrong.
Then, we can create this table in our instrument definition.
We pass the downloader narrative schema ref and a unique identifier.
Then, we can use it in our narrative element definition.
Here, we define the narrative.
We pass a table ref for the table we previously created, and define the time column and the narrative column.
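Taken together, the package elements described in this demo might look roughly like the sketch below. The element names and identifiers here are approximations from memory of the demo, not verified against the Instruments package format; the authoritative shape is the package XML schema that Xcode validates against:

```xml
<!-- Rough sketch of the package definition described above;
     element names and identifiers are approximate. -->
<modeler>
    <id>com.example.trailblazer.downloader-diagnostics</id>
    <title>Downloader Diagnostics</title>
    <purpose>Detect overlapping image downloads issued by one cell</purpose>
    <production-system>
        <rule-path>DownloaderDiagnostics.clp</rule-path>
    </production-system>
    <output>
        <schema-ref>downloader-narrative</schema-ref>
    </output>
    <required-input>
        <schema-ref>os-signpost</schema-ref>
    </required-input>
</modeler>

<point-schema>
    <id>downloader-narrative</id>
    <title>Downloader Narrative</title>
    <column>
        <mnemonic>time</mnemonic>
        <title>Time</title>
        <type>sample-time</type>
    </column>
    <column>
        <mnemonic>description</mnemonic>
        <title>Description</title>
        <type>string</type>
    </column>
</point-schema>

<instrument>
    <create-table>
        <id>downloader-narrative-table</id>
        <schema-ref>downloader-narrative</schema-ref>
    </create-table>
    <narrative>
        <table-ref>downloader-narrative-table</table-ref>
        <time-column>time</time-column>
        <narrative-column>description</narrative-column>
    </narrative>
</instrument>
```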
Now we are ready to define the logic for our modeler.
To do this, I will create the file that I previously referenced in the modeler definition.
To create a Clips file, you go to File > New, select the macOS platform, the Other section, and Clips File.
I will fill out the name and create it.
So the algorithm for detecting whether one cell is doing more than one request at a time will be as follows.
We'll track every request as a fact in working memory.
First, we need to create a template for this fact.
Every fact will store the time, the caller address, which is the cell address, the signpost id that we captured, and the image name that we are requesting.
We will call this fact started download.
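As a sketch, the fact template just described might look like this in Clips; the slot names are illustrative reconstructions of what the demo describes:

```clips
;; Template for tracking an in-flight download request.
;; Slot names are illustrative, not the demo's exact code.
(deftemplate started-download
    (slot time)         ; when the request began
    (slot caller)       ; the requesting cell's address
    (slot identifier)   ; the os_signpost interval identifier
    (slot image-name))  ; the image being requested
```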
Then, we write a modeler rule that creates this fact in working memory.
This rule looks at the os signpost table.
We specify the subsystem, the name, and the event type begin, and we capture all of the information that we want to have: the image name, caller address, time, and signpost identifier.
Then, we assert the new fact into working memory.
To clean up after a download finishes, we need to retract this fact from working memory.
Here, we are looking at the same table, but only at events of type end.
We capture the identifier of the signpost.
Here, we are relying on the fact that a signpost begin and end have to carry the same identifier.
We look in working memory for a fact that has the signpost identifier we captured, and we retract it.
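A sketch of these two modeler rules in Clips follows. The exact slot names on the os-signpost input fact, and the subsystem and signpost name strings, are assumptions here, not the demo's verbatim code:

```clips
;; Track a download start: assert a started-download fact.
;; os-signpost slot names and string literals are assumptions.
(defrule MODELER::track-start
    (os-signpost (time ?t) (identifier ?id) (event-type "Begin")
                 (subsystem "com.example.trailblazer")
                 (name "image-download")
                 (caller ?caller) (image-name ?name))
=>
    (assert (started-download (time ?t) (caller ?caller)
                              (identifier ?id) (image-name ?name))))

;; Track a download end: a signpost's begin and end share one
;; identifier, so retract the matching started-download fact.
(defrule MODELER::track-end
    (os-signpost (identifier ?id) (event-type "End")
                 (subsystem "com.example.trailblazer")
                 (name "image-download"))
    ?f <- (started-download (identifier ?id))
=>
    (retract ?f))
```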
Then, we can write our recorder rule that will generate all of the narrative data.
This recorder rule looks at all of the started download facts and captures them.
We capture the time, caller address, and image name.
If there is another started download fact with the same caller address, and you can notice that the variables referenced here are the same, and it happened before the first fact, then we know there is an anti-pattern: our requests overlap.
We can then check whether we have access to the downloader narrative schema, create a new row in it, set the time column to the time of the first fact, and set the narrative description.
We output some information about the problem so that someone could debug it later.
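A sketch of this recorder rule in Clips, with the same caveat that the template, slot, and schema names are illustrative reconstructions:

```clips
;; Two in-flight downloads from the same caller means the earlier
;; request was never cancelled. Names here are illustrative.
(defrule RECORDER::report-overlap
    (started-download (time ?t1) (caller ?caller) (image-name ?name))
    ;; A second fact with the same caller that started earlier.
    (started-download (time ?t2&:(< ?t2 ?t1)) (caller ?caller))
    ;; A table bound to the append side with our narrative schema.
    (table (table-id ?t) (side append))
    (table-attribute (table-id ?t) (has schema downloader-narrative))
=>
    (create-new-row ?t)
    (set-column time ?t1)
    (set-column description
        (str-cat "Overlapping download of " ?name
                 " while an earlier request was still in flight")))
```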
Now I can run Instruments against our app.
Let's run it again: choose Trailblazer Networking in the template chooser again, and record.
I will try to perform some fast scrolling here and take a look at my narrative table.
You can see that the narrative table contains lots of diagnostic messages.
So we can see that there are some problems, and we can investigate them later.
You can see that the narrative is an interactive detail.
You could, for example, check all of the arguments being passed, and you can filter.
So we can add this caller address as a detail filter.
Now, let's go back to Chad, who will tell you more about some best practices for developing instruments.
[ Applause ]
>> Alright.
Thank you, Kacper.
So we've seen how we can create
some basic expert systems in
Instruments.
Alright, so let's talk about
some best practices that we've
learned along the way.
And the first one is to write
more than one instrument.
Now I don't mean get practice
writing instruments.
What I mean is that if you own
an instrument already and you
want to add some features to it,
sometimes it's really tempting
to just add them, add extra
graphs or details to your
instrument, but you should
really be thinking, you know,
can this be its own instrument.
And the reason for that is if
you create finer-grained
instruments, you give the users
of Instruments a lot more
choices.
They can drag just the
instruments that they want out
of the library and that will
minimize the recording impact on
the target.
If you focus on one instrument
with lots of features, it's kind
of like an all-or-nothing
proposal there.
Now, if you want to create a combination of instruments targeted at a certain problem, where you want to see all of these instruments used together at the same time, then what you can do is create your own custom template, like we did for the networking.
And to get started on that, you create a document, drag the instruments in the way that you want to see them, configure them, then go to File and choose "Save As Template."
And then you can use that template inside your package using the element that Kacper added to our Networking Template.
So writing more than one instrument is a much better way to use the tool.
The second one is that immediate mode is hard.
Immediate mode refers to the recording mode of Instruments where we're visualizing the data as it's coming in, in near real time.
And there are really two reasons it's hard.
The first one is it requires
some additional support that as
much as we wanted to cover
today, we just couldn't.
We just didn't have the time.
And so we're going to be working
on the documentation for that.
But the second reason, and this
is the more important reason, is
that, well it's interval data,
right.
So intervals can't be entered
into the tables in the Analysis
Core until they're closed,
meaning that we've seen both the
begin and the end.
And so when you're looking at a
recording live, you have a bunch
of what are called open
intervals.
Now if your modelers require
these as inputs, which is
totally feasible, then what
you'll notice is that if there's
an open interval upstream, well
all of the modeler clocks
downstream have to stop until
that interval is closed because
remember, the modeler's vision
is all in time order.
So it can't move that clock
forward until all those
intervals upstream have closed.
So if you have some long-running intervals, what you'll notice is that the output of your modeler appears to stop.
And when the user hits the stop
recording button, well then all
open intervals close and
everything processes as normal
and the data fills in.
But that's not a great user
experience.
So if you hit that, you have one
of two options.
The first one is to opt your
instruments out of immediate
mode support and you can do that
by adding a limitation element
to your instrument and the
second is to move off the
interval data as input to your
modeler, just like we did in our
demonstration here for our
expert system.
We were actually using the os
signpost point events rather
than using the intervals.
So I know we make it look easy
but immediate mode is a little
tricky to implement.
And then third, one of the
things that's really important
if you're creating instruments
that are going to be targeting
high volumes of input data is
that the last five-second
recording mode is by far the
most efficient.
Now, the way you switch that is in the recording options of your trace document, where you'll see that you have a choice between immediate, deferred, and this last-n-seconds mode.
That is going to be a lot more
efficient because what it allows
the recording technology to do
is use buffering to improve
performance so that it's not
trying to feed the data to
Instruments in real time.
Now this can have a profound effect, especially on signpost data, where recording can be up to ten times faster in the last-five-seconds mode.
Now of course the trade-off is
that you're only seeing the last
five seconds of data but for
instruments that produce high
volumes of data, that's usually
a good thing.
So this is the common mode for System Trace, Metal System Trace, and the Game Performance template.
And if you're targeting one of those kinds of applications, I would also opt your instrument out of supporting immediate mode, just so that your user experience doesn't suffer, Instruments doesn't fall way behind trying to ingest the data, and you don't run into that problem with open intervals.
That is the end of the session.
So we did a lot of work here to
create the Instruments feature
and we're really, really excited
that we were able to get it out
to you guys this year.
And so we can't wait to see what
you guys are able to accomplish
with it.
So if you'd like to see us and talk to us about custom instruments, we have a lab in lab eight today at 3:00, and Session 405 goes over in detail how to use the os signpost API, which is a great way to get data into Instruments.
So enjoy the rest of the
conference.
[ Applause ]