WWDC2019 Session 417

Transcript

[ Music ]
[ Applause ]
>> Thanks, everyone.
Hi, how are you all doing?
Welcome to WWDC.
My name is Phillip Azar and I'm
proud and happy to be able to
share with you our talk on
Improving Battery Life and
Performance today.
Your apps are the backbone of
our software experience.
They enable our users to do so
many things and engage in so
many experiences that they could
never have without your apps.
And so you can imagine when your
apps don't give good battery
life and don't give good
performance, this really
disappoints your users.
And so today we want to talk
about a few things.
We want to go over a set of
tools that you can use that are
both new and old to be able to
optimize your application for
battery life and performance.
We want to go over a set of
metrics that you can collect
using those tools to be able to
understand and quantify your
application's impact on battery
life and performance.
And then I'm going to hand it
over to my colleagues, who are
going to go into these tools in
more depth and go through some
demos with you and show you how
you can use these tools in your
application to improve battery
life and performance.
And then I'll come back and
we'll wrap it up.
So, let's get started by talking
about some tools.
And in order to get started
talking about tools, we need to
talk about the development
process and get all familiar
with it.
Development comes in phases.
We've set it up into three
different phases.
Development and testing is the
first phase.
And at this stage, we're
ideating and creating, and it's
a very iterative process.
We're either at our desks or
maybe in the park, and we're
really trying to understand
what's going to make our app
great.
The next phase is called beta.
After we've solidified the
features in our application and
we're ready for a test run, we
give it to a few people, our
beta users, and we try to get
feedback.
And then finally, when we're
really proud of our application,
when we've collected all the
feedback, we're ready for
primetime.
It's public release.
And this is when it's on the
App Store and available to
millions of customers around the
world.
For battery life and
performance, every step is
critically important.
We need to optimize at every
step of the way to make sure
that we're fixing bugs and
making our experience as good as
possible for our customers.
And so with that in mind, before
iOS 13 and Xcode 11, we offered
a great set of tools for you to
understand the impact of your
application's battery life and
performance.
During development and testing,
we have Xcode and all the tools
it contains within it.
Things like Instruments, the
Energy Gauges, and Profiling
tools.
And then in beta, you can
collect traces directly on beta
devices using the developer
settings and open those up in
Instruments.
And these are great for
understanding problems that may
not occur in your development
environment but occur in the
field such as when you don't
have good cell reception.
And then finally, the Xcode
organizer gives you access to a
powerful set of logs that you
can see from your public release
such as crashes and CPU energy
reports.
And these help you understand
problems that are maybe specific
to certain regions or specific
to certain customers that may
not have the same conditions as
you.
And so with these tools in hand,
we thought long and hard over
the last few years and wondered
whether there were some gaps we
could fill.
We spoke to many of you, both
directly and indirectly, and the
feedback we got was that most of
the gaps are in metrics.
How do we quantify our
application's battery life and
performance?
How do we make a decision about
feature A versus feature B?
Well today, as part of Xcode 11
and iOS 13, I'm super happy to
announce that we think we've
bridged that gap with three new
tools that you can start using
today, starting with XCTest
Metrics.
This is the first new tool that
we've shipped with Xcode 11, and
it's going to allow you to
collect performance and battery
life metrics for measure blocks
directly within your XCTests.
This is going to give you some
critical advantage when you're
running XCTest and trying to
understand at a very early stage
what is the impact of a certain
feature.
The next is MetricKit, and this
is a powerful new framework that
we've built specifically for
battery and performance metric
collection.
And this is going to give you
metrics directly in your
application and help you
understand from all different
users how your application is
doing in the field.
And then finally, we've beefed
up the Xcode Organizer with
Xcode Metrics Organizer.
And this is going to be a
high-level aggregated set of
metrics that you're going to be
able to look at directly in the
Xcode Organizer without changing
any code.
And this is going to give you a
great high-level overview of how
your application is doing across
the world for all your
customers.
And so looking back at this
graph we just talked about, you
can imagine all these tools fit
really well in the development
process, starting with XCTest
Metrics during development and
testing.
Then, MetricKit, as you might
imagine, falls really well into
beta and public release when you
may not have access to the
devices that you're interested
in collecting metrics about.
And then finally, the Xcode
Metrics Organizer beefs up your
knowledge about the public
release and helps you understand
problems from primetime users.
And so when you put these all
together, we see that we have
more metrics to quantify battery
life and performance at every
stage in development.
And we think this is awesome.
So, we've talked about these
tools a little bit, and my
colleagues will go into depth
later about each of them.
But right now I want to talk
about the metrics that they
provide.
Because without these metrics,
we wouldn't be able to quantify
our impact.
So, let's go through them now.
This year we're providing two
sets of metrics - battery and
performance.
Who would've guessed?
Starting with the battery
metrics, we're providing a
really great set this year that
are going to help you quantify
your impact on battery life.
Processing, location, display,
networking, Bluetooth and
accessory metrics, multimedia
metrics, and camera metrics.
This is a huge set of metrics
that are really powerful but I
want to go through a certain
subset of these that we think
are really important for
everyone.
Starting with the processing
metrics.
Processing metrics, as you might
imagine, are things like CPU and
GPU time, and we want you to use
these metrics to quantify and
understand the workload of your
application.
So for example, you can do
things like find CPU spinners in
areas where you might not expect
them to be.
Additionally, you can find
unexpected rendering in your
application using these metrics.
And critically, we want you to
use the processing metrics to
compare the algorithmic
efficiency of your features,
just like we discussed before.
If you have feature A and
feature B, you can use these
metrics to determine which one
is better for battery life.
Next up we have the location
metrics.
And these are going to be
metrics that are going to help
you understand and quantify your
location usage, such as your
cumulative usage, different
accuracy buckets, and your
background location usage.
Use these metrics to understand
your location usage, because
it's a common pitfall when it
comes to battery life.
For example, you might find
cases where you leave location
running when you don't expect it
to be running.
Or you may use an accuracy
bucket that's too powerful for
the use case that you've
implemented location for.
So, these measures are going to
help you understand and optimize
those scenarios.
Next, we're providing display
metrics.
And this year we're giving you a
variety of display metrics but I
want to talk about one in
particular called average pixel
luminance.
On your OLED devices such as the
iPhone X and XS, the color of
your UI in your application has
a direct impact on the amount of
energy that you consume on the
display.
And we represent this through a
metric called average pixel
luminance or APL.
And in a nutshell, the lighter
the colors that you use in your
UI, the more energy you'll
consume on OLED devices, and
this is what we call a higher
APL.
And the darker the colors you
use in your UI, which we call a
lower APL, the less energy
you'll consume, so keep an eye
on average pixel luminance this
year.
Last but not least, we have the
networking metrics.
And as you might already
imagine, these are going to be
metrics such as upload and
download bytes over cellular and
wifi and connectivity metrics.
We want you to optimize
networking usage whenever
possible because it is a
high-energy subsystem.
So do things with these metrics
such as validating your expected
upload and download counts.
Maybe you have an upload that
should've occurred at a later
time.
You can use these metrics to
figure that out.
And more importantly, we want
you to understand the impact of
connectivity on your network
transfers.
It plays a huge role in the
amount of energy that you
consume when doing networking.
So, these metrics will help you
understand if you're staying in
poor connectivity conditions for
long periods of time.
And those are the battery
metrics.
We think these are a great set
of metrics that are going to
help you quantify the impact of
your application on battery
life.
Let's move onto the performance
metrics and this year we're
providing hangs, disk metrics,
application launch metrics,
memory metrics, and custom
interval metrics.
And as with the battery metrics,
I want to focus on a subset of
these that are going to be
really important for all of us
here, starting with the hang
metrics.
Hang metrics this year are going
to be a histogram of the amount
of time your application spends
unresponsive to user input.
And this is a huge user impact.
You can imagine if your user is
using your application and
suddenly it stops working,
that's not good.
So, use these metrics to
understand where you can move
work off the main thread where
possible, and utilize things like
dispatch and asynchronous queues
to reduce your hang rates.
Next we have the disk metrics.
And this year we're going to be
focusing on disk logical writes.
And we want you to quantify disk
usage whenever possible because
disk usage, as with all
subsystems, is a resource that
you should only use when you
really need to.
So, use these metrics to verify
if you have instances of
unexpected disk writes and if
you're employing any coalescing
strategies for your disk writes,
you can use these disk metrics
to verify those as well.
Next are the application launch
metrics.
And this is super great, because
this year we're providing launch
and resume time histograms to
help you understand the launch
and resume times of your
application.
We want you to quantify the
impact on performance that
launch and resume have using
these metrics.
And we want you to understand
the impact of the launch
activities, so when you do
things before your application
launches such as a database
update, this can directly impact
your launch and resume times.
These metrics will help you see
that in real time.
And we also want you to see the
difference between launch and
resume, because they're two very
different paths.
And for more information on
that, and how to optimize App
Launch, I recommend that you go
see the talk tomorrow on
Optimizing App Launch at 4:20
pm.
Last but not least, we have the
memory metrics.
And for memory this year, we're
going to be providing things
like average suspended memory
and peak memory.
Memory management can really
impact launch times, and so we
want you to use these metrics to
keep an eye and keep tabs on
your memory usage, which is a
critical metric for performance.
Use these metrics to understand
your memory usage and if you
have spikes of peak memory that
are way higher than your
expectation, this could be
indicative of a problem such as a
hard-to-reproduce memory leak.
If you focus on reducing your
average memory on suspend, which
you can quantify with these
metrics, you'll also be able to
reduce your launch times and
your susceptibility to
background termination.
And so those are the performance
metrics, or a subset of them, and
we think these are going to be
really key in helping you
understand your performance both
on and off device.
And so to recap, we talked about
the tools that you had before
Xcode 11 and iOS 13 to
understand and quantify power
and performance.
Then we talked about the new set
of tools that we're offering you
this year to be able to take
that quantification one step
further and help you optimize
your application.
And they provide a great set of
metrics that are really going to
help you take your app to the
next level.
And so without further ado, I
want to hand it off to my
colleagues who are going to go
into some deep dives with each
of the tools that we discussed,
starting with Sastry, who's
going to talk to you about
measuring app impact during
development and testing with
XCTest Metrics.
Sastry?
[ Applause ]
>> Thank you, Phil.
Hello, everyone.
My name is Sastry Vadlamari.
I'm a software engineer here at
Apple.
Let's recap some of the tools at
your disposal to measure
application impact during
development and testing.
Right inside the debug navigator
of Xcode, you can get a
high-level overview of the CPU,
memory, and energy subsystems.
And when you want to dig into
the details or diagnose some
issues, Instruments is a really
useful tool.
It comes with templates that
help you diagnose memory issues,
system unresponsiveness,
excessive disk usage, and energy
issues.
You can also use XCTest to
measure performance.
XCTest is a framework that lets
you write UI and unit tests that
are seamlessly integrated into
the Xcode testing workflow.
You can not only measure
performance, but you can also
catch regressions with baselines.
But until last year, the only
metric you could measure was
wall clock time.
Performance has more dimensions.
So this year we've added new
performance metrics into XCTest.
So, let's take a look into the
details.
This is what a sample performance
XCTest looks like.
You pass a block of code, where
you specify the actions you want
to perform, into the measure
method, and it measures the time
it takes to execute that block
of code.
Now, in order to convert this
into the new style of
performance test and get more
details, we just need to create
time, memory, and CPU metric
objects, pack them into a list,
and pass them in as a parameter
to the measure method.
And with such minor changes,
your existing performance test
can measure multiple dimensions.
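Here is a minimal sketch of what that conversion might look like; the test class and the applyEffect() helper are illustrative placeholders, not code from the session's project.

```swift
import XCTest

final class EffectsPerformanceTests: XCTestCase {

    // Hypothetical stand-in for the work being measured.
    func applyEffect() {
        // ... expensive image-processing work ...
    }

    func testApplyEffectPerformance() {
        // Passing metric objects turns a wall-clock-only test into a
        // multi-dimensional one: time, memory, and CPU are all reported.
        measure(metrics: [XCTClockMetric(), XCTMemoryMetric(), XCTCPUMetric()]) {
            applyEffect()
        }
    }
}
```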
We've gone a little further.
With every new UI testing target
that you create using XCTest,
we're going to give you an
application launch test for
free.
So, without writing any code,
you will have a test that'll
measure your application's
launch time.
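The generated test looks roughly like the sketch below; the exact template wording may differ, but the shape is a measure block wrapped around XCUIApplication().launch().

```swift
import XCTest

final class AwesomePhotoAppUITests: XCTestCase {

    func testLaunchPerformance() {
        if #available(iOS 13.0, *) {
            // Measures how long the application takes to launch,
            // averaged over several iterations.
            measure(metrics: [XCTOSSignpostMetric.applicationLaunch]) {
                XCUIApplication().launch()
            }
        }
    }
}
```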
Let's jump into a demo.
So, for the purpose of this
session, we created an
application that we call an
Awesome Photo App.
Now, it has a few features.
Let me walk you through that.
So, it lets you take images.
And when you take an image, it
geotags it so you can see the
location below.
And then you can apply some
fancy effects to the picture.
We'll be referencing this
application for the rest of
the demo.
You also have standard features
like being able to load a
picture from your photo screen,
save the picture, and upload the
picture to the server.
So, what does this mean?
How do you test this application
from Xcode?
Well, as I mentioned earlier,
every new XCTest UI target that
you create comes with an
application launch test for
free.
So, I've already taken the
liberty of running this test
prior to this presentation and
here's what the results look
like.
As you can see, it takes about
.2 seconds for launching my
application.
That's acceptable.
You can see the results of the
multiple iterations and all of
them are around the same
numbers.
And here's an interesting
feature: you can set your
baselines.
The baselines are a mechanism
wherein you set guidelines for
what you expect your performance
numbers to be.
So, whenever your performance
numbers go off them, your tests
would fail and that's how you
catch regressions.
So, you can set the average, you
can set the standard deviation,
and whenever you run your test
next time, if any of the numbers
exceed these conditions, your
tests would fail.
So, I've made a change in the
code and I want to ensure that
my application launch time
hasn't regressed.
So, let me just run the test
again.
There's a couple of points you
need to keep in mind when
running performance tests.
It's a good idea to not have the
debugger attached to your
process as it adds some overhead
and it's also a good idea to
turn off all diagnostic options
like sanitizers.
You can do this easily by either
creating a separate scheme or
you could use the test plan
feature that was recently
introduced to turn it off
easily.
So, now you can see that the
test has run and it has failed.
If you were to dig into the
details, you will see that's
because the average has exceeded
the baseline.
I mean, what was 0.2 seconds has
now become about 1.2 seconds.
So, in order to debug this, you
can attach this to Instruments
and use the Time Profiler
template and find out why your
app launch time has gone so bad.
I'll spare you the mystery and
I'll tell you what I did.
So, I'm actually checking for
database updates on my main
thread.
And as Phil mentioned earlier,
it's a real bad thing to do.
So, the right solution for this
is to dispatch this into a
background queue and hopefully
this fixes our problem.
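A minimal sketch of that fix, assuming a hypothetical checkForDatabaseUpdates() helper that was previously called on the main thread during launch:

```swift
import Foundation

// Hypothetical stand-in for the expensive database-update check.
func checkForDatabaseUpdates() { /* ... */ }

func scheduleDatabaseUpdateCheck() {
    // Dispatching to a background queue keeps the launch path on the
    // main thread free of this blocking work.
    DispatchQueue.global(qos: .utility).async {
        checkForDatabaseUpdates()
    }
}
```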
Let's run the test again and
ensure that the numbers are well
within what we expect them to be.
The point here is that XCTest
not only helps you measure, but
it also helps you ensure things
don't regress.
So, you write your test once,
and you can forget about it and
keep running it in your CI
system and ensure that, you
know, your performance doesn't
degrade.
As you can see, the test passed
and yay, we fixed our bug.
[ Applause ]
So, it's pretty easy to convert
your existing XCTest performance
tests to have more dimensions,
as I mentioned earlier.
All you need to do is pass in a
list of the objects for what you
intend to measure.
Like for instance, I had a
performance test that was
measuring the time it took to
take a picture and, you know,
use the photo and apply an
effect.
And earlier, this would've just
measured the amount of time.
But now, by just passing the
additional memory metric object,
I can also measure the memory
impact of performing these
actions.
XCTest doesn't have to be
restricted to only UI tests.
You can also use it for unit
tests.
And I'll give you an example
here.
So I have the Apply Effects
feature, and I have an option of
selecting whether I want to use
one photo or multiple photos.
It adds very little value from a
feature perspective, but if the
overhead isn't much, I would
still like to do it.
So, I took the liberty of
running this test and measuring
the impact of applying the
effect with one photo.
And it's about 1000 KB.
But now it's really easy to
measure the impact of a
different scenario.
All you need to do is change the
code.
I've changed the code to use
multiple photos, and you run the
test again.
And the test runs and you'll get
back your numbers immediately.
And when that happens, you can
check your impact.
So, as you can see, the impact
is, the test failed because this
one is about 100% worse.
It's about 2000KB.
So probably I'm going to stick
with one photo.
So, to summarize, I gave you a
demo of using a few metrics like
memory.
But we added a whole bunch of
them.
We added memory.
We added storage.
We added CPU.
We added OS Signpost.
But we didn't stop there.
We built a generic underlying
system, so you can actually
implement your own custom
metrics and use the underlying
reporting system to catch
regressions.
Please look into the
documentation for more details.
And then as I demonstrated in
the last example, you can also
use XCTest to do some sort of AB
testing.
It's really a low-cost, easy way
to check if algorithm A is
better than algorithm B.
Just wrap them in simple
unit tests and you can have your
numbers.
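As a rough sketch of that idea, assuming a hypothetical applyEffect(photoCount:) helper, one test per variant with the same metrics lets you compare the two reports side by side:

```swift
import XCTest

final class ApplyEffectABTests: XCTestCase {

    // Hypothetical helper wrapping the algorithm under comparison.
    func applyEffect(photoCount: Int) { /* ... */ }

    // Variant A: apply the effect using a single photo.
    func testApplyEffectWithOnePhoto() {
        measure(metrics: [XCTClockMetric(), XCTMemoryMetric()]) {
            applyEffect(photoCount: 1)
        }
    }

    // Variant B: the same effect using multiple photos.
    func testApplyEffectWithMultiplePhotos() {
        measure(metrics: [XCTClockMetric(), XCTMemoryMetric()]) {
            applyEffect(photoCount: 5)
        }
    }
}
```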
And because XCTest works so well
with both Xcode and Xcode
server, you could use this
performance test, both in your
development and testing phase
and also as part of your
continuous integration system
and ensure that your app doesn't
regress on the performance
front.
So, that's what's new with
performance testing with XCTest.
Next, I'd like to call upon
Ashish, who's going to talk to
you about measuring your
application's impact out in the
field.
[ Applause ]
>> Thanks, Sastry.
So, after your initial testing
and development phase, there are
many benefits of collecting
field metrics to further
optimize your battery life and
performance of your application.
This includes leveraging your
beta population of a few users
as well as your broader customer
population.
In the field, your application
goes through a wide range of
user scenarios such as different
cellular networks, signal
conditions, different types of
devices, as well as different
locations.
These help identify issues that
may not have been caught during
on-desk testing.
You can also use these field
metrics to compare the battery
life and performance with
previous versions.
You can use this data to figure
out whether there are any new
regressions or egregious issues.
These field metrics also help
you understand the impact of new
features and do A/B testing in
the field with a broader set of
users.
To solve this problem, I am very
happy to announce we are
releasing MetricKit, which is an
on-device framework to collect
battery life and performance
metrics for your application.
We also added a capability in
MetricKit to collect metrics
around the critical sections in
your application.
We have built all these features
into MetricKit while protecting
your users' privacy in our data
collection and aggregation
mechanisms.
It's very easy to adopt
MetricKit and get started, as
I'll show you next.
So, the code here is all you
need to get started.
First, you import the MetricKit
framework and create a class in
your application which conforms
to the metric manager subscriber
protocol.
And inside the class, you
subscribe for metrics.
This lets the device know that
your application is interested
in receiving metrics from
MetricKit, so that it can start
collecting them on the device.
Finally, as a developer, you
have to implement a delegate
method called didReceive.
This method is invoked whenever
there is a metric payload to be
delivered to your application on
the device.
And it's up to you to take any
actions once you receive this
payload on the device.
For example, you can choose to
save it to a file, or you can
also upload to your server so
you can collect this from the
field for multiple users.
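Here is a minimal sketch of that adoption; the class name and the save and upload helpers are illustrative placeholders, and the persistence and upload details are up to your app.

```swift
import Foundation
import MetricKit

final class AppMetricsSubscriber: NSObject, MXMetricManagerSubscriber {

    // Register with MetricKit so the system starts aggregating metrics
    // for this app and delivers a payload roughly once every 24 hours.
    func startCollecting() {
        MXMetricManager.shared.add(self)
    }

    // Called by the system whenever one or more metric payloads are ready.
    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            let json = payload.jsonRepresentation()
            saveToFile(json)       // keep a local copy for on-device processing
            uploadToServer(json)   // or ship it to your own backend
        }
    }

    // Hypothetical helpers; how you store or transmit the data is up to you.
    private func saveToFile(_ data: Data) { /* ... */ }
    private func uploadToServer(_ data: Data) { /* ... */ }
}
```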
After adopting MetricKit, as
your application gets used
during the day, we automatically
collect and aggregate metrics
for your application.
And at the end of a 24-hour
period, we generate a metric
summary for the entire day, the
last 24 hours, and deliver this
payload back on the device.
Now, let's understand how we can
measure the impact of critical
code sections in our
applications.
Going back to the awesome
PhotoApp example which Sastry
described earlier, users can
perform many activities in there.
For example, they can choose to
take a photo and on that photo,
they can apply many cool
effects.
If they like the effect they
applied, they can choose to
save the photo on the device.
MetricKit provides the ability
to capture the precise battery
life and performance impact of
each of these features of your
application.
Now, let's see how we can do
that.
We are happy to introduce a new
API inside MetricKit called
mxSignpost, which is implemented
as a wrapper around os_signpost.
By bookending the critical
sections in your application
with mxSignpost, you can capture
the precise impact.
Let's take an example.
So, to use mxSignpost, all you
need to do is use MetricKit's
makeLogHandle API to create a
log handle, and use that log
handle to drop mxSignposts
around critical code sections.
In this example, I want to
measure the impact of the save
photo feature that we have in
our awesome photo app.
So, I've dropped in mxSignpost
just before and after this
application code.
MetricKit will automatically
collect metrics and process them
for you on the device.
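A minimal sketch of that bookending; the PhotoSaver type and the save work are illustrative placeholders:

```swift
import Foundation
import MetricKit
import os

final class PhotoSaver {

    // One log handle per category of critical code you want MetricKit to track.
    private let saveLog = MXMetricManager.makeLogHandle(category: "SavePhoto")

    func savePhoto(_ photoData: Data) {
        // Bookend the critical section; MetricKit aggregates the battery
        // and performance impact between the .begin and .end signposts.
        mxSignpost(.begin, log: saveLog, name: "SavePhoto")
        // ... the actual (hypothetical) save work goes here ...
        mxSignpost(.end, log: saveLog, name: "SavePhoto")
    }
}
```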
Now, let's jump into a demo
where I'll show you how to adopt
MetricKit in your own
application.
So, now I'm back in the Xcode
project of my application and
I'm in the ViewController.swift
file of my application.
As you can see, I have already
adopted the MetricKit framework
here and created a class which
conforms to the metric manager
subscriber protocol.
And inside that class, I've
added some code already for the
didReceive method.
So, this method is invoked
whenever there's a metric
payload available.
For my application, I've decided
to save the data to a file so
that I can do some on-device
processing later.
For the purposes of today's
demo, I've also written a
function to print this data so I
can walk you through the
contents of the payload.
Finally, I have written this
function to upload this data to
my own server so that I can
collect this data from multiple
users during beta testing as
well as customers.
Now, as this method is only
invoked at most once per day,
whenever there's a payload
available for our application,
we built a new feature in Xcode
to help you test this out.
So, let me show you how that
works.
We're first going to run this
application on my test device
here.
So the application is running
now.
Now, I'm going to go into the
Debug menu and click on Simulate
MetricKit Payloads.
What this does is it sends a
dummy payload to your
application so that you can test
the code inside the didReceive
method.
Now, let's walk through some of
the contents in MetricKit
payload that is available today.
So, the first example here shows
metadata related to the
application, such as the build
version, device type, and the OS
version.
Then we get a few histograms
related to very useful
performance metrics such as
application launch, resume, and
hangs.
Then we get metrics related to
the application usage such as
foreground and background times
and a few background metrics
such as CPU time and GPU time.
Then there are metrics around
location usage as well as
networking, disk I/O, memory,
and display.
And finally we have a section
around the mxSignpost summary
for your application.
As you can see, there are a lot
of metrics that are available
right now in MetricKit.
So, I would highly recommend you
to check out our documentation
to learn more about the details
here.
So, let's go back to the slides
now.
So, as we saw during the demo,
it's very easy to get started
using MetricKit and start
getting metrics on your devices
right now.
For the application, we decided
to take a road trip and collect
some field metrics as well as
some fun photos.
The next day, after using
MetricKit in the field with our
awesome photo application, we
got a payload on the device,
which was uploaded to my own
server using the application
code that I showed you earlier.
Now, let's use this data that we
received from the field to
identify some hotspots in our
awesome photo application.
For the first example, this data
shows the overall foreground
time and background time of the
application as well as the
overall location usage by
different accuracy buckets.
It shows that the location usage
by the application, which is
around 720 seconds, is very
close to the foreground time,
which is very unexpected.
All I'm using the location for
is to geotag a photo whenever I
capture it on the device.
Going back to the application, I
found that I forgot to stop
location updates after I started
them.
So, this is a very good example
of how you can use MetricKit to
identify unexpected application
behavior.
Another thing we can optimize
using this data is to reduce the
location accuracy, if that works
for our use case, because the
higher the location accuracy,
the more the battery drain.
The next example shows the
histogram of application hang
durations.
So, the data shows there are
many instances of hang durations
of more than 5 seconds, which is
very bad for user experience.
So, one of the ways you can
solve this, as Phil described
earlier, is to avoid long
blocking calls on the main
thread, so we can reduce any
hang instances.
The final example shows how you
can use the mxSignpost data in
MetricKit to identify hotspots
from a specific application code
region.
In my application, I decided to
put mxSignpost around all the
main features.
For example, load photo, apply
effect, take photo, save photo,
and upload photo.
Using mxSignpost, MetricKit was
able to figure out how many
times each of these features ran
in the field, as well as a few
battery life and performance
metrics such as the CPU time,
and the overall CPU time for the
entire application.
This data shows that the CPU
usage by the ApplyEffect feature
was more than 50%.
So, now I know where I can go
and further optimize my
application so that I can reduce
the battery usage overall.
So, following are the key
takeaways from this section.
You can use MetricKit to collect
field battery life and
performance metrics for your
application starting from iOS
13.
You can use MetricKit to
identify hotspots early from
your application, such as the
example I showed today with the
beta population, and you can do
so in the customer population,
too.
Another example we saw today is
how I use MetricKit data from a
single user to identify
hotspots.
Aggregating the same data from
multiple users can provide you
much deeper insights about
improving your application.
So, now I hand over the stage to
Anshul, who's going to talk
about an out-of-the-box
telemetry solution in Xcode.
Anshul?
[ Applause ]
>> Thank you very much, Ashish.
Hi, I'm Anshul Davra.
I'm here to talk about Xcode
Metrics Organizer.
That's a cool new tool that we
are introducing this year with
Xcode 11.
Xcode Metrics Organizer is an
out-of-the-box solution that we
are
providing with Xcode 11 to view
your power and performance app
analytics.
You can see how your app is
doing on the customer devices in
terms of battery life and
performance.
There is no change required to
your app.
It is available as-is in Xcode
11 for you.
We have built in privacy into
this whole process right from
collecting the data on the
device to aggregating the data
on the server.
So, you can start using this
data as of today.
The way it works is, when a user
uses your app, we collect
metrics around your app.
These metrics are aggregated on
the device and then sent over to
our server.
On the server side, we run
analytics on this data and
extract insights.
These insights are what show up
in metrics organizer.
Please note that insights only
show up in metrics organizer if
there is enough usage of your
app that meet our threshold.
And all of this is available
out-of-the-box with no changes
to your app or your development
process, and is available for
you today.
Now, without further ado, let's
jump into the demo.
[ Applause ]
So, to open Metrics Organizer,
go to Window, Organizer.
That brings up the familiar
Organizer window with the
Archives, Crashes, and Energy
tabs, and a brand-new tab called
Metrics.
If you click on metrics, you see
all your apps on the left-hand
side that you have published to
the iOS App Store.
Let's say our awesome photo app
that we published shows up here.
When we click that awesome photo
app, metrics around that app
show up in the middle pane.
So, metrics like battery life,
launch time, hang rate, memory,
and disk writes.
Metrics that we think you should
consider for an awesome app
experience show up here.
When you click on a metric,
details about the metric show up
on the right-hand side.
You can look at the metrics for
a given version of an app or you
can compare it with a previous
version.
The X axis here represents the
app version, and the Y axis
represents the metric value.
Let's start by looking at the
battery metrics.
Two kinds of metrics show up for
a battery.
The first one is onscreen
battery usage.
That is the amount of energy
that is drained when a user is
interacting with your app
onscreen.
And then background battery
usage is the amount of battery
drained when the app is running
in the background.
Now, each of these metrics is
further subdivided by system
components like processing,
networking, display, and
location, so that you get an
idea of which of these
components are consuming most of
the energy.
Let's start with the background
battery usage, because we know
our awesome photo app is
foreground only.
But looking at the background,
it seems like it's consuming
around 10% of the user battery
daily, which is pretty high.
Out of this, it seems like
processing is consuming 5% and
networking is consuming 3.66%,
which is quite high and we need
to debug it further to figure
out why the app is consuming
battery in the background.
Let's look at the latest version
of the app when it is onscreen.
It seems like there is a 10%
degradation in the latest
version as compared to the prior
version.
Of that, display seems to be
static, there is a slight
increase in networking and a
decrease in other, but the main
culprit here is processing.
We can look at this data for the
90th percentile user
population or look at the 50th
percentile user population to
see if the user population is
playing any role in battery
drain.
We can also look at this data
for all iPhone categories or all
iPad categories, or we can jump
to individual devices.
Let's jump to, let's say,
individual device called iPhone
6.
For iPhone 6, it seems like the
battery drain is pretty static.
Slight decrease in 1.0.8 as
compared to 1.0.7.
Let's look at a newer version of
the device like iPhone X.
Ah, iPhone X has a big jump.
If you see here, there's a 14.4%
jump from the prior version and
the main culprit here is
processing.
There are a couple of ways we
can debug this further.
We can directly jump to our code
or we can jump to our familiar
energy tab that we saw last
year.
Energy tab shows you energy
exception reports from the
field.
You can look at the stack frame
to figure out where it is
consuming the most energy.
I know Ashish was talking about
a new feature, applyFilter, that
he added only for the newer
devices in the 1.0.8 version of
our app, which likely caused it,
so we can go there and start
debugging it.
So, this is one way we have
shown you can use the Metrics
and Energy tabs to figure out
the problem and fix it.
Now, let's look at the other
metrics that are available.
Launch time is pretty important
to our users because apps that
are slow to launch can frustrate
our users.
Ideally, a launch time should be
in the low seconds.
Our awesome photo app for all
iPhones is taking around 6
seconds to launch.
You can use the tools that
Sastry talked about in his
earlier demo to debug it
further.
Hang time is unresponsive time
of your app, in seconds per
hour.
Ideally, the hang time should be
0 to avoid user frustration.
We show two kinds of memory -
peak memory and average memory.
Memory is a resource, and we
should only use what is
absolutely needed.
Disk writes are the logical
writes your app is doing.
We need to be mindful of how
much we are writing, and we can
use Instruments to debug it
further.
Now, that was Xcode Metrics
Organizer, an out-of-the-box
tool to view your battery and
performance analytics and start
debugging problems like high
battery drain and performance
issues like high launch time.
You can look at the data for the
latest version of the app,
compare it with the prior
version, and create a baseline.
And this tool is available today
to you without any changes to
your app or your development
lifecycle.
So, please try it out and let us
know your feedback.
Thank you very much for
listening, and back to Phil.
[ Applause ]
>> Thank you, Anshul.
We saw those great tools, and I
want to quickly summarize
everything we spoke about today.
We talked about the tools that
were available before Xcode 11
to debug and understand your
power and performance impact on
device.
And then we showed you some new
tools that we think are really
powerful and are going to help
take your quantification of your
performance and battery life
impact to the next level.
And a set of metrics that you'll
be able to collect from those
tools to be able to understand
and debug your code further.
If you take anything away from
this demo, I want you to leave
knowing that we've built three
great tools for you and we'd
love to hear your feedback about
them.
And these tools are going to
help you quantify the impact
that your application has on
battery life and performance.
And that quantification is going
to be able to help you make
decisions about your application
to better your experience for
all your users.
For more information, go online
and check out our session
documentation.
Or, visit us at the Power and
Performance Lab tomorrow, and
we'd love to see you there and
walk you through how to
implement these tools in your
application and use them to make
your experiences great for your
users.
Don't forget about the
Optimizing App Launch talk
tomorrow, where they'll go in
depth about app launch
performance.
Thanks again for coming and
enjoy the rest of your WWDC.
[ Applause ]