Transcript
[ Music ]
[ Applause ]
>> Thank you.
Hello everyone.
A very good afternoon
and welcome.
My name is Bharath Rao.
I'm an engineer with
the Core Motion Team,
and today I'm really
excited to be talking to you
about health and fitness.
In this talk, I'll be showing
you how you can use Core Motion
framework to add some
interesting and engaging health
and fitness features
into your apps.
Before we get started, for
those of you who are new
to Core Motion, I'd like
to remind you to go check
out some of our past sessions.
They have some excellent
information
about how sensors work, and
how we use those sensors
in Core Motion to provide APIs
that help your apps
observe steps, activity,
and Device Motion,
and a whole lot more.
Go check them out.
So with that, let's get started.
Here's what we have in
store for you today.
I have an update to the
Historical Accelerometer
that I would like
to share with you.
We have a new pedometer
events API,
both for the iPhone
and the Apple Watch.
And finally, we are also
bringing the Device Motion APIs
to Apple Watch, with watchOS 3.
Thank you.
First up is Historical
Accelerometer.
Last year, we introduced
the CMSensorRecorder API.
With this API, now
your apps have access
to raw accelerometer samples
over long durations of time.
And you can also get this
access with very low power.
After your apps have
requested the framework
to start recording these
samples, they can get suspended.
And the OS will buffer
those samples,
even across device sleeps.
And when your app is ready to
consume them, it can launch
and make a query,
and get access to all
of those buffered samples.
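In Swift, that record-then-query
flow looks roughly like this;
a minimal sketch, assuming a
six-hour window (the duration is
illustrative) and the common
Sequence extension for iterating
CMSensorDataList:

```swift
import CoreMotion

// CMSensorDataList only conforms to NSFastEnumeration, so bridge it
// to Sequence to iterate it from Swift.
extension CMSensorDataList: Sequence {
    public func makeIterator() -> NSFastEnumerationIterator {
        return NSFastEnumerationIterator(self)
    }
}

let recorder = CMSensorRecorder()

// Ask the framework to record; the app can then be suspended while
// the OS buffers samples, even across device sleeps.
if CMSensorRecorder.isAccelerometerRecordingAvailable() {
    recorder.recordAccelerometer(forDuration: 6 * 60 * 60)  // illustrative
}

// Later, when the app relaunches, query the buffered samples.
if let list = recorder.accelerometerData(from: Date(timeIntervalSinceNow: -6 * 60 * 60),
                                         to: Date()) {
    for sample in list {
        if let data = sample as? CMRecordedAccelerometerData {
            // Each sample carries a timestamp and raw acceleration in g's.
            print(data.startDate, data.acceleration.x)
        }
    }
}
```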
In watchOS 3, we have a
couple of important updates.
First of which is
the greater duration.
Now, your apps have a 36-hour
window in which they can launch
and get access to all
of those Historical
Accelerometer samples.
The second update should not
come as a surprise to you,
this is one of the most
requested updates for this API.
So I'm happy to announce
that in watchOS 3,
now the sample delay is
as little as 3 seconds.
With these updates, you can
use Sensor Recorder not only
to observe user activity
over long durations,
but also for some
real-time applications:
tracking sports activities,
recording workouts at the gym,
or even health diagnostics.
Imagine an app on
the Apple Watch
that can detect hand tremors.
So now, whenever the user
experiences some hand tremors,
they can launch the app.
Your app will be
able to pull all
of the historical accelerometer
samples, do some analysis
on them, and get a report that
says how severe that tremor was
and what kind of tremor it was,
and share it with the user,
and maybe even, with consent,
with their physician,
so that it can speed
up the diagnosis
and the treatment
of such conditions.
So that was a great update
to Historical Accelerometer
in watchOS 3.
Next up is pedometer.
CMPedometer is a versatile API.
It records steps, distance,
and flights of stairs
that users climb
throughout the day.
So you could use it to build
an all-day activity tracker
where you set some interesting
goals for your users to achieve.
And that way you
can motivate them
to lead a healthier lifestyle.
But where CMPedometer
is really powerful is
in the context of workouts.
Take for example,
the pace metric.
We have stride estimation
algorithms that are running both
on the iPhone and Apple
Watch and using those,
we are able to provide
really accurate pace
and distance metrics, even
when the phone doesn't
have a GPS signal.
Or when users go running
with their Apple Watch
and leave the phone
behind, even then,
they get very
accurate metrics.
The pedometer events API that
we are adding today is actually
going to help you make those
workout experiences even more
engaging and accurate.
Let's consider a typical
running workout: an urban run.
One of the challenges
that you'll face in this
scenario is how to detect
all those starts and stops
that users experience
at intersections while
they're running in a city.
So the user comes to a
stop at the stoplight,
and your app will continue to
accumulate all of that time
that they're just
standing around.
So by the time they get
to the end of their run,
you have accumulated
so much idle time that
if you try to compute
their average pace
over their entire run, you'll
probably end up with something
that resembles
their granddad's.
Or maybe granddad's faster
than them, so never mind.
So what I mean to say is,
you'll end up with
really inaccurate pace,
which is probably much lower
than their running pace.
So one possible solution is you
could provide a manual pause
and resume option.
But now, once they have paused,
they will also have to remember
to resume the workout
when they start running.
And if they don't, all
of the running they do
while the app is paused
is not going to be recorded
towards their workout.
So if they forget to pause,
then they get inaccurate pace.
If they forget to resume,
they lose out on distance.
So clearly, you need some auto
pause and resume detection
that is not only accurate, but
it also has to be responsive.
It has to feel like your
app is doing a good job
of detecting those
starts and stops.
At this point, you
might be wondering,
why not just use GPS and steps?
After all, you have access
to those in your apps.
If you have ever used GPS
before, you know that you have
to do a considerable
amount of filtering on it,
so that you can remove
all the noise.
Which means that it is,
again, going to be slow to respond.
And the step counts that you
receive from CMPedometer
have a built-in delay.
And we do that because we
want to avoid false positives.
We use steps to estimate stride.
And from that we compute
distance and pace.
So it's very important for us
to have accurate step
counts in the first place.
So in this release, we are
giving you pedometer events,
which are going to help you
detect those starts and stops,
not only with good accuracy,
but with low latency.
Our pedometer events
solution, or the auto-pause
and resume solution, uses
a predictive algorithm.
This predictive algorithm
was trained on user data
so that we can improve
the likelihood estimate
of whether the user is moving
or has come
to a complete stop.
Of course, all in a
pedestrian context.
By doing this, now we are able
to recover most of the delay
that is associated with the
step counting algorithm.
But we are able to do so
with pretty good accuracy.
I would also like to point out,
because the predictive
algorithm keeps track
of whether the user is moving
or has come to a stop,
we can also support walks.
So when the user comes to a stop,
whether from a walk or a run,
you'll get a pause event.
And as soon as the user
starts to run or walk,
you'll get a resume
event within your app.
The Pedometer Events APIs
look identical both
on iOS 10 and watchOS 3.
Let's take a look
at the API itself.
You have the pause
and resume events.
Each event is timestamped
with the exact time
when the algorithm detected
the transition from moving
to not moving state,
and vice versa.
And you have start and stop APIs
to help your app register
and deregister for these events.
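In code, the shape of the API
is roughly this; a minimal sketch:

```swift
import CoreMotion

let pedometer = CMPedometer()

// Register for pause/resume events; each event is timestamped with
// the exact time the transition was detected.
if CMPedometer.isPedometerEventTrackingAvailable() {
    pedometer.startEventUpdates { event, error in
        guard let event = event, error == nil else { return }
        switch event.type {
        case .pause:
            print("Paused at \(event.date)")
        case .resume:
            print("Resumed at \(event.date)")
        }
    }
}

// Deregister when the workout is over.
// pedometer.stopEventUpdates()
```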
I've been talking about how
pedometer events can be used
to clearly demarcate just
the running segments during a
complete running workout
and how you can, using that,
derive accurate metrics.
But you can also
use pedometer events
in some other interesting ways.
Let's take a look at an example.
This is a trail-running
app on the iPhone.
So here we are going to use
pedometer events to figure out
when we can engage
with the user, and when we do,
how we can present them
with some interesting
information that they're
likely to respond
to more actively.
And because this is
a trail-running app,
there is going to be an
elevation change throughout the
run so we are going to see
if there is some
meaningful information there.
We are going to use the
CMPedometer class to register
for the pause and resume events.
And we are going to get the
relative altitude updates
or the elevation changes
using the CMAltimeter class.
If you want to continue to
receive pedometer events
within your app, even
when the screen turns off,
your app has to stay running.
And one way of doing that on
the iPhone is to subscribe
to continuous background
location.
If you are interested in
knowing more about this,
I suggest that you go check
out the Core Location
Best Practices session
that is happening at WWDC today.
Next, we are going to register
for the relative
altitude updates. First,
an availability check.
And then we provide
an operation queue
and a callback handler
to start receiving
those updates.
In this example, I'm
just going to make a note
of the most recent update.
But potentially in your app, you
can cache all of those updates.
And at the end of the workout
you can potentially provide a
nice elevation profile
for the entire run.
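A sketch of that registration
might look like this (the queue
and the latestRelativeAltitude
variable are illustrative names;
the same queue is reused for the
pedometer handler below):

```swift
import CoreMotion

let altimeter = CMAltimeter()
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1   // serial, shared by all handlers

var latestRelativeAltitude = 0.0        // most recent update, in meters

if CMAltimeter.isRelativeAltitudeAvailable() {
    altimeter.startRelativeAltitudeUpdates(to: queue) { data, error in
        guard let data = data, error == nil else { return }
        // relativeAltitude is measured from the first update received.
        latestRelativeAltitude = data.relativeAltitude.doubleValue
    }
}
```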
Next, we'll register for the
pedometer events themselves.
So first an availability check.
And then register by
providing a callback handler.
Just a quick tip: to
make sure that I don't run
into any concurrency issues,
I'm doing work from all
of the handlers on the
same operation queue.
So now that the app is
set up to figure out exactly
when the user
has started running
and when they have
stopped running,
we are ready to see
how we can use
that in a very contextual
manner.
As soon as they start
running, we are going
to get the resume
event in the app.
At this time, you could make
a note of the exact time
when we got the resume
event, so that at the end
of the workout, you can
add up all
of those running times
to compute a very accurate
average pace for the entire run.
In this example, I'm
just going to make a note
of the most recent
elevation update,
so that I know exactly
what elevation they were at
when they started running.
And when the user
comes to a stop,
because we have elevation,
this is a very good opportunity
for us to figure out
if they have run up a hill.
And if they have done so,
this might be their
first ever hill run.
So why not give them
a hill-bagging achievement?
Or if they have been doing that
same hill run multiple times,
because we have exact, accurate
timings for each of those runs,
now we can compare those
and give them a stat
of how well they are doing
on that particular hill run.
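Pulling that together, the event
handler might look something like
this sketch, which builds on the
earlier sketches; the state
variables and climb threshold are
illustrative, not part of the API:

```swift
var runStartDate: Date?
var runStartAltitude = 0.0
var accumulatedRunningTime: TimeInterval = 0
let hillClimbThreshold = 30.0   // meters of gain; illustrative value

pedometer.startEventUpdates { event, error in
    guard let event = event, error == nil else { return }
    queue.addOperation {        // same serial queue as the altimeter
        switch event.type {
        case .resume:
            // Note exactly when, and at what elevation, the run began.
            runStartDate = event.date
            runStartAltitude = latestRelativeAltitude
        case .pause:
            // Accumulate only the time spent actually running.
            if let start = runStartDate {
                accumulatedRunningTime += event.date.timeIntervalSince(start)
            }
            // If they climbed enough, this may deserve an achievement.
            if latestRelativeAltitude - runStartAltitude > hillClimbThreshold {
                // award a hill-bagging achievement
            }
        }
    }
}
```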
So that was an example where we
use pedometer events not only
to arrive at very
accurate metrics
and demarcate those
running segments,
but also to
do something interesting
with those events.
At the end of the run, of
course, you go ahead and tear
down the registrations, and this
will also release any block
handlers that have been
captured by the framework.
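In code, the teardown is just
the two stop calls:

```swift
// Deregistering releases any block handlers captured by the framework.
pedometer.stopEventUpdates()
altimeter.stopRelativeAltitudeUpdates()
```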
Pedometer events are available
on iPhone 6s and later iPhones,
and of course the Apple Watch.
So that's pedometer in
iOS 10 and watchOS 3.
Next, let's talk
about Device Motion.
As you're all aware,
Device Motion APIs have been
on the iPhone since iOS 5.
With watchOS 3, now we are
bringing the exact same APIs
to the Apple Watch.
With the Apple Watch, we have
a very capable sensor package
that is at a fixed
location on our wrist
and we use it throughout
the day.
And because we use our
hands for almost everything
that we do throughout the day,
it's a really powerful tool
to observe all of
that user activity.
And what Device Motion does
is it takes the samples
from the accelerometer and
the gyroscope, and it fuses them
to give you a very clear picture
of all of that user activity,
and all of this right
on your wrist.
All of that motion at wrist
is described by Device Motion,
using four distinct
properties: attitude, gravity,
rotation rate, and
user acceleration.
If you want to know in depth how
these properties are derived,
and how they behave, I
encourage you to go check
out these sessions
from 2011 and 2012.
In this talk though, I'll be
giving you a very brief overview
of these properties so that we
can build some intuition and go
look at some examples of how you
can apply them in health
and fitness apps in
interesting ways.
The first property is attitude.
Using attitude in your apps,
you can get an observation
of the orientation
of the device in space.
And when you use CMDeviceMotion,
you get three
representations of attitude:
as a quaternion, as a rotation
matrix, and as Euler angles.
Whenever you are using
attitude within your app,
one thing to note is every
single attitude observation is
going to be relative to
a fixed reference frame.
What this means is every single
orientation observation is going
to be observed from a point
that is not fixed to
the device itself.
So the reference frame remains
static while the device can move
around, and that's
how you observe
orientation using attitude.
And furthermore, when
your app registers
to start receiving the updates,
that is when the
reference frame is set.
And so every subsequent
sample that you receive
within your app is going
to be computed using
that relative reference
frame that was set
at the start of updates.
So this is something that
you need to be aware
of when you use attitude
in your apps,
so that you don't
make assumptions
about where the device is
oriented in absolute space.
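One common way to handle this is
to capture the attitude of the
first sample and express every
later sample relative to it;
a sketch, with the reference
handling being the point
of interest:

```swift
import CoreMotion

let motionManager = CMMotionManager()
var referenceAttitude: CMAttitude?

motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
    guard let motion = motion, error == nil else { return }

    // Capture the frame that was established when updates began.
    if referenceAttitude == nil {
        referenceAttitude = motion.attitude
        return
    }

    // Re-express the current attitude relative to the captured frame,
    // instead of assuming anything about absolute orientation.
    let attitude = motion.attitude
    if let reference = referenceAttitude {
        attitude.multiply(byInverseOf: reference)
    }
    print(attitude.roll, attitude.pitch, attitude.yaw)
}
```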
The next property is gravity.
Gravity is -- well,
it's gravity.
It is the force that
is pulling us all
down to the center of the Earth.
And within Device
Motion, it is represented
as a unit vector in
the device frame.
Using gravity, you
can observe the tip
and the tilt of the device.
But you might be wondering,
isn't tip and tilt also
orientation, just like attitude?
But one key difference is,
now you're observing all
of this orientation
from the device frame.
So take for example, if I were
to hold my hand perfectly
parallel to the ground
to my side, and then I move it
to my front, you won't be able
to observe this using gravity.
Because the x, y, z
components of gravity are going
to remain exactly the same
between these two orientations.
To observe something like this,
you need a point of observation
that is external or fixed
while the device is moving.
Which is basically attitude.
So you could use gravity
to observe orientation,
but only in a limited form.
But it might work just
fine for the kind of app
that you are thinking of.
One other thing about gravity is
when you hold the
device perfectly still,
you can observe it
using the accelerometer.
It's a constant acceleration
that the accelerometer
will pick up,
and you can read it right
out of the accelerometer.
But as soon as you
start moving the device,
now the accelerometer is picking
up not just the gravity
component, but it's also picking
up all of the user-generated
motion.
So it becomes harder and harder
to just get the gravity
component.
What Device Motion does is,
by using sensor fusion,
it switches over to the gyroscope
to keep tracking the
gravity unit vector.
Next property is rotation rate.
As the name suggests,
it's the rate
of change of angular motion.
It's very good for observing
something like wrist rotation.
When you are rotating
your wrist,
there is a rotational motion
around the x-axis of the device,
and you can observe that
using rotation rate.
You can also observe any
rotation around the body.
Because most of us, we fix
our torso and we move our arms
so there is going to be some
kind of an arcing motion.
So there is rotation and you can
observe it using the rotation
rate property of Device Motion.
The last of the properties
is user acceleration.
And the user acceleration
that you get
through Device Motion is
compensated for gravity.
Recall how I mentioned
that accelerometer picks
up both the gravity component
and the user-generated
component.
So this is just the
user-generated component.
But it has been compensated
with gravity
that was derived using
device sensor fusion.
So that was a very quick
overview of these properties.
Now let's take a look
at a few examples
of how you can use
them in your apps.
The first property is attitude.
Attitude is very good
for observing any kind
of repetitive motion.
Like rep counting
in weightlifting.
So when you are lifting
weights, you are literally going
through a set of
orientation changes.
So just by looking at how
those orientation changes are
repeating, you can
count the reps.
And the best part about
using attitude for doing
that is they might
be using a machine
where they're pulling
horizontally or from the top,
and you can still observe all
of those reps using attitude.
Gravity, as I already
mentioned, it's very good
for measuring tip and tilt.
Which basically means you
can use it in a yoga app.
So if they're doing a downward
dog or holding a warrior pose,
you can figure out
how still they're holding it.
And when they're also going
from one pose to another,
you can figure out whether
they're really doing it
in a graceful way, or just
falling over themselves.
One of the most useful ways
of using rotation rate is
to observe the speed
of circular motion.
Take, for example,
a batting cage.
So when you are swinging
the bat,
you're not only rotating
your arms around your body,
but there is also the
rotation around the wrist.
So you could use both of those
components of rotation rate
to estimate the bat speed.
Of course you'll need to
know how long of a bat it is,
and at what point on the bat
you want to measure the speed.
User acceleration is best used
when there is some
abrupt motion.
Because accelerometer
picks up all of the motion
that the user is doing, if
you try to do some kind of app
where you're using
user acceleration
to measure some small movements,
it might get drowned
out by noise.
So wherever there is a very
abrupt motion is the best place
to use user acceleration.
So something like
a punch and recoil.
You can tell how much
pain they're inflicting
on that sandbag,
or that opponent
that is hopefully
made out of air.
So those were just a
few examples
of how you can use Device
Motion within your apps.
After going through this,
if you feel that
you have an app idea
that could use Device Motion
and you want to find out more
about how you can apply
Device Motion, please do stop
by our lab tomorrow,
and we would be more
than happy to help you.
Now let's take a look
at the API itself.
You have the four properties.
Attitude and gravity
are unitless;
gravity is a unit vector.
User acceleration is in
g's, and rotation rate is
in radians per second.
Before you can start
receiving Device Motion updates
within your app, you have
to set the sample rate.
You can set sample
rates up to 100 hertz.
And once you have set that
sample rate, you can go ahead
and register for updates using
the startDeviceMotionUpdates method.
From this point onwards,
you can either choose
to poll the Device Motion
property periodically
to receive the most
recent sample.
Or you could provide
a callback handler
on which you can get every
single update that Device Motion
is generating for you.
And of course, once you
are done listening
to the Device Motion
updates, you can go ahead
and deregister using
stopDeviceMotionUpdates.
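Put together, both styles look
roughly like this; a minimal
sketch, with 50 Hz as an
example rate (in practice you'd
choose one of the two options):

```swift
import CoreMotion

let manager = CMMotionManager()

func startObservingDeviceMotion() {
    guard manager.isDeviceMotionAvailable else { return }

    // Set the sample rate before starting updates (up to 100 Hz).
    manager.deviceMotionUpdateInterval = 1.0 / 50.0

    // Option 1: start updates, then poll the most recent sample.
    manager.startDeviceMotionUpdates()
    if let motion = manager.deviceMotion {
        print(motion.userAcceleration, motion.rotationRate)
    }

    // Option 2: receive every sample via a handler on a queue.
    manager.startDeviceMotionUpdates(to: OperationQueue()) { motion, error in
        guard let motion = motion, error == nil else { return }
        print(motion.gravity.z)
    }
}

func stopObservingDeviceMotion() {
    // Deregister when you're done listening.
    manager.stopDeviceMotionUpdates()
}
```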
That brings me to the end
of my section of the talk.
Now I'll be handing it over
to Paul, who will be walking you
through an example
app on the Apple Watch
that uses Device Motion
in an interesting manner.
Thank you very much.
Over to you Paul.
[ Applause ]
>> Thank you Bharath.
Hello everyone.
My name is Paul Thompson,
and I'm an engineer
with Core Motion.
So Bharath just talked
about what's new
with Core Motion APIs.
What I'd like to do is show
you how to use one of them
in an Apple Watch fitness app.
So in this app, what we'd
like to do is create a
tennis workout session.
Then we'll subscribe
to sensor updates
with Device Motion.
Finally, we'll detect
swings and differentiate
between a forehand
and a backhand swing.
Now to do this, what we'll need
to do is leverage capabilities
from Core Motion, as well
as some new capabilities
from HealthKit.
Now as you may remember,
watchOS 2 apps strongly relied
on HealthKit and Core Motion
to do real-time analysis
of sensor output and
provide you with values
such as step count, flights,
distance, and calorie estimates.
Now, during a workout session,
your app can do limited work
and process Device Motion
samples while the user's not
directly engaged in your app.
But of course, this ability
comes with some caveats.
To begin with, you must have
enabled this new HealthKit
entitlement in your
Xcode project.
Further, this background work
can only be done during an
active workout session.
And in addition,
while backgrounded,
you must minimize
your CPU usage.
If you do not minimize
your CPU usage,
your app will be suspended
until the user explicitly
foregrounds it again.
Now since this background
capability is being provided
by HealthKit, I encourage you
to view the earlier session
to learn more in
detail about this.
With that in mind, let's think
about what our app
might look like.
Well, with this app, you would
expect you and a friend to head
out to the tennis courts
with your Apple Watches,
and practice volleying
for a bit.
There, you simply
start a quick workout,
and hit the ball back and forth.
Then, you'd expect that at any
time you can take a quick glance
at your watch and get some
immediate feedback on the play.
So with that in mind,
let's think about the
structure of our project.
Here, there'll be three layers
that we want to care about.
First is the UI, where
we'll ultimately present
to the user what we've done.
Then, we'll have
our workout manager.
Here we'll interact with
HealthKit to start
and stop our workout sessions
and enable our background
capability.
Then we'll have
our motion manager.
Here we'll interact with
Core Motion directly,
and implement our
detection algorithm.
There we'll respond to 50
hertz Device Motion updates,
and add the resulting
samples to a running buffer.
Afterwards, on every update,
we'll assess whether a
swing occurred, and if so,
we'll update the UI.
So how do we want
to model the swing
that we wish to detect?
Well, tennis is a
complicated sport.
So in this case, all
we'll do is we'll look
at the essence of two gestures.
A full forehand and
backhand swing.
We'll do this using the
gravity and rotation vectors
as provided by Device Motion.
So if you expect the watch
to be on the dominant arm,
then you would expect
a full forehand swing
to include a simple
rotation about the user.
So if we take the dot
product of the rotation rate
from a potential swing with
the gravity unit vector,
we can isolate this movement
while ignoring the attitude
of the device.
And also ignoring some
extraneous movement.
Then, once we've got
enough samples of this,
we'll see if we've
rotated far enough
and fast enough to
count as a swing.
So now that we know what to do,
let's take a look
at our sample app.
To begin, we'll envision
our simple UI, where we'll
display our information
to the user.
Here we'll have three WatchKit
interface labels that we'll want
to update during runtime.
These will include
the workout label.
The forehand count label.
And the backhand count label.
And the workout label will
simply tell the user
when we've started and
stopped the workout session.
And the forehand and backhand
count labels will simply show
how many times we've
detected the right movement.
Here we'll also have a
Force Touch menu to start
and stop the workout session.
So now that we know what the
UI will show, let's take a look
at our workout manager.
Here, we'll handle our
interactions with HealthKit,
as well as create
our workout session.
We'll also direct our motion
manager below to start
and stop sensor updates.
So here, in our workout manager,
we'll have to start a workout.
So to begin, we'll create
our workout configuration
which we'll use to initiate
the workout session.
Since we're creating
a tennis app,
let's use tennis as
the activity type.
And outdoors as the location.
Then, after initialization,
we'll have HealthKit start the
workout session and subscribe
to Device Motion updates.
At this point, we'll now be able
to do work while
the screen is off.
In addition, we'll also
need to stop our workout.
Here, we'll simply do the
reverse motion and unsubscribe
from sensor updates,
and then tell HealthKit
to end the workout session.
At this point, normal
backgrounding rules will apply.
So now, let's take a look
at our motion manager.
Here we'll interface with
Core Motion directly,
and implement our
detection algorithm.
So to begin, what we'll do
here is create a reference
to the CMMotionManager,
as well as create an
NSOperationQueue for
our samples to do work on.
At this point, we'll
also ask if the watch is
on the left or the right wrist.
The difference
between the forehand
and the backhand swing
will depend entirely
on this perspective.
We'll also keep local counts of
forehand and backhand swings,
as well as a mark of whether
we've recently seen a swing.
We'll also choose 50
hertz as our sample rate.
And create a running buffer
that should hold no more
than a second's data.
Now as Bharath mentioned
earlier,
Device Motion samples
can be provided
at a rate up to 100 hertz.
You generally want to pick
the sample rate that's as small
as you can get away with,
while still providing
the fidelity that you need.
In addition, we'll set three
constants which we'll use
in our detection algorithm.
These will include the minimum
angle subtended by the swing,
the lower bound on the peak
speed through the swing,
and an upper bound on the
settling speed of the swing.
Now we chose these values
based on experimentation
and sample data that
we collected.
But generally, you'll find
that the process of collecting
and picking these values will
be half the battle of your app.
Finally, we'll create this
delegate reference here,
which we'll use to
communicate back to the UI.
So now, after we set
all of our properties,
we'll configure the operation
queue as a serial queue
that we'll use to handle all
of our Device Motion samples.
I'd like to emphasize
here that we created
this operation
queue to ensure that all
of our data processing
happens off of the main queue.
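That setup is a one-liner on
OperationQueue; a sketch (the
queue name is illustrative):

```swift
// A max concurrency of one makes the queue serial, so Device Motion
// samples are processed in order, off the main queue.
let sampleQueue = OperationQueue()
sampleQueue.maxConcurrentOperationCount = 1
sampleQueue.name = "MotionManagerQueue"   // illustrative name
```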
So now we'll also create
this function, which we'll use
to reset all the
state in the class,
as well as zero out the UI.
And then, as a final set
of convenience methods,
we'll create some complementary
update delegate functions.
Here, we'll simply increment
our count of the swing,
mark that we've recently
seen a swing,
and then let the UI know about it.
So let's start interfacing
with Core Motion.
So as always, the first thing
we should do is ask whether the
API's supported on the
device we're running on.
Then, we'll tell Core Motion to
update us at a 50 hertz cadence.
And finally, we'll subscribe
to Device Motion updates
by passing our operation
queue, as well as a block
that we'll use to respond to all
incoming Device Motion samples.
All this block will do
is simply check to see
if there are any errors with
the update, then pass it along
to our detection function.
So let's take a look at what
our detection function's going
to do.
So as Bharath mentioned earlier,
Device Motion gives
us quite a few things.
But in this example, we're only
going to look at the gravity
and rotation rate vectors.
Now, as you may remember,
the gravity vector is simply
Core Motion's estimation
of the gravity unit vector,
regardless of how much
the device has moved.
And the rotation rate is
simply a rotation vector
for the device, giving
us radians per second.
So now, what our detection
function will do is take the dot
product of the rotation vector
from a potential swing
with the gravity unit vector,
so that we only analyze
the proportion
of motion about gravity.
Then, we'll add the
resulting scalar
to a running buffer holding
no more than a second's data.
Once we have enough
content, we begin to analyze
the data within.
So the two metrics we'll use
to analyze the swing are the
accumulated angle of rotation
and the peak speed of the swing.
Here, to get the accumulated
rotation, we simply integrate
all the rotation samples that
we've collected
over the second that
we've collected them.
Then, to get the peak rate, you
simply take a min or max,
depending on the
direction of rotation.
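Here's a sketch of that analysis;
the thresholds are placeholders
for the experimentally chosen
constants mentioned earlier, and
the function name is illustrative:

```swift
import CoreMotion

let sampleRate = 50.0                  // Hz
var rotationBuffer = [Double]()        // holds about one second of scalars

// Placeholder thresholds; real values come from experimentation.
let minSwingAngle = 1.5                // radians subtended
let minPeakRate = 5.0                  // radians per second

func process(deviceMotion motion: CMDeviceMotion) {
    let g = motion.gravity
    let r = motion.rotationRate

    // The dot product of rotation rate with gravity isolates rotation
    // about the vertical, regardless of the device's attitude.
    let scalar = r.x * g.x + r.y * g.y + r.z * g.z

    rotationBuffer.append(scalar)
    if rotationBuffer.count > Int(sampleRate) {
        rotationBuffer.removeFirst()
    }
    guard rotationBuffer.count == Int(sampleRate) else { return }

    // Integrate the buffered rates to get the angle subtended over
    // the second; take the extreme value as the peak speed.
    let accumulatedAngle = rotationBuffer.reduce(0, +) / sampleRate
    let peakRate = rotationBuffer.map(abs).max() ?? 0

    if abs(accumulatedAngle) > minSwingAngle && peakRate > minPeakRate {
        // Forehand or backhand is decided by the sign of the scalars
        // and which wrist the watch is on.
    }
}
```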
Further down our
function, we'll check to see
if we subtended far enough and
fast enough to count as a swing.
If so, we'll choose forehand or
backhand depending on the position
of the Apple Watch and
the sign of the scalars.
And finally, to end the
function, we'll add a check
to see that the swing
is settled.
This way we can reject
some of the recoil movement
as the user's arm moves back in
position for the next volley.
And finally, to finish
off the class,
we'll have the stopUpdates
function, which we'll use
to unsubscribe
from Device Motion updates
when the workout has ended.
And that concludes the
basics of our sample app.
We described a
simple user interface.
We then created a
workout manager to
handle interfacing
with HealthKit.
And then we created our motion
manager to handle Core Motion,
as well as implement
our detection algorithm.
So I hope you all have
gotten a good feel for how
to use Device Motion as newly
brought to the Apple Watch.
So before I wrap up, I'd like
to emphasize a few details
on the use of this API.
So now while you may expect
the watch to be in a fixed
and predictable location,
always remember to check
which wrist the device
is on as this difference
of position can have significant
impact on your analysis.
Further, when using
inertial sensors,
always try to understand
the reference frame you're
operating in.
And further, as we said earlier,
Device Motion provides you
with samples at a
rate up to 100 hertz.
We always want to
strike a balance
between the resources required
to service your sample rate,
and the fidelity demanded
by your application.
This is especially important,
given the restrictions
placed during backgrounding
of a workout session.
And so to summarize the talk
as a whole, we earlier talked
about the performance
improvements
to historical accelerometer.
Then we demonstrated how
you can use pedometer events
to precisely tag running
segments and provide important,
contextual feedback to users,
and then we introduced you
to Device Motion
on the Apple Watch.
And walked you through
an app to detect forehand
and backhand swings
during a tennis workout.
So now if you'd like to
find out more information,
such as documentation
and sample code,
please check out this link.
And further, as you saw, this
app interacts with new features
from HealthKit so
I encourage you all
to view this HealthKit
session to learn more
in detail what's
new for watchOS 3.
In addition, Core Location has
some best practices for you
to review, and I encourage you
to check them out as well.
And finally, I also recommend
checking out what else we have
in store in watchOS 3.
Thank you.
[ Applause ]