WWDC2015 Session 213

Transcript

[Applause]
>> JOHN EARL: Hello.
My name is John Earl,
and I'm an engineer
on the ResearchKit team.
Today I will be talking about
ResearchKit and about building
and contributing
to research apps.
So our agenda today
has four parts.
First, we will cover
what ResearchKit is.
Then, I will cover a few issues
that may affect the design
of your app-based study.
The meat of the talk though will
be when I cover building apps
with ResearchKit, how
ResearchKit can help you
to build research apps.
And finally, since it's
an open source project,
I will talk about how you can
contribute to ResearchKit.
So let's get started.
What is ResearchKit?
Well, the short answer is that
it's an open source project
and it's available on GitHub.
But for a slightly
longer answer,
we'll need to start
with some motivation.
Even if you've never participated
in a medical research study,
you probably have seen
something like this
at a university or
at a hospital.
And if you have participated
in one,
you probably rang the
number, met the investigator,
and had the study and its risks
and benefits explained to you.
Then you might have
come in a few more times
to answer questions and
perhaps have samples taken.
Now, this is a pretty
heavyweight model
and researchers have told us
that there are three
problems with it.
The first problem is
limited participation.
Posting flyers around university
campuses limits participation
to those that live
near that institution,
and to make a large study,
you need collaboration
across multiple research groups
which means these studies
happen infrequently
if they happen at all.
The second problem is
subjective and infrequent data.
Data is often collected
using surveys at monthly
or even longer intervals
and this affects both
the questions you can ask
and limits data quality.
The third problem is that
communication is one way.
You probably never heard
about the results
of the study you participated
in unless you knew the
investigators personally
and that's where we
think apps can help.
The wide reach of the App Store
distribution model can help
researchers to reach a
broader subject population.
So the first five research
apps using ResearchKit have
over 70,000 participants
enrolled which makes them some
of the largest studies
ever conducted
in their respective fields.
Secondly, apps can stream
data continuously in contrast
to subjective and infrequent
manual data collection.
And finally, apps can
keep a local record
for each participant
to give them feedback
on how they are doing in
managing their symptoms.
And this helps to
keep them engaged
without raising the
burden on the investigator.
Now, at Apple, we wanted
to help make this a reality
for more studies and that's
why we built ResearchKit.
ResearchKit is an
open source framework,
and it's available on GitHub.
You can use it to more easily
create research apps, whether
they're commercial apps
or part of
an academic study.
Now, iOS already has great APIs
for collecting passive data,
like HealthKit and CoreMotion.
But there are quite a few other
things that you need in order
to conduct a successful research
study from an app and we hope
that ResearchKit can help you
with some of those things.
Right now, ResearchKit
has three modules.
The first module is surveys.
ResearchKit provides standard
UI templates that you can use
for doing surveys on
an iPhone or on an iPad
and we've tested it with some
of the most common
survey instruments
from health research,
like SF-12 or EQ-5D.
The second module
is informed consent.
It's a common requirement in
human observational research
to obtain consent from
participants while making sure
that they are fully informed
about the details of the study.
The details will
differ for every study,
and so again ResearchKit
provides templates
that you can use to
show this in your apps.
ResearchKit's third
module is active tasks.
An active task is a
semicontrolled test
where the participant is given
step-by-step instructions
to perform the task while data
is collected using sensors
on the device.
For example, in this gait and
balance task, the phone is
in your pocket while
you walk back and forth,
and the accelerometer and
gyroscope are used to collect data
that can assess your gait.
So again, we've got three
things in ResearchKit.
We've got surveys, informed
consent and active tasks.
Now, when we announced
ResearchKit,
the investigators we worked
with simultaneously released
their apps to the App Store.
And even now, these apps are
being used to collect study data
on conditions as diverse
as Parkinson's, diabetes,
cardiovascular disease,
asthma and breast cancer.
And with these partner
institutions we've open sourced
the code for these apps,
as well as the AppCore common
library that they all share,
so that can help you
get started on your app
if you need a jumpstart.
Now what do the apps
actually do?
Well, they all used the
informed consent module
from ResearchKit during
the onboarding process,
but then after that, once the
participants are enrolled,
they collect data
in a couple of ways.
First they use scheduled
activities,
using ResearchKit's surveys
and active task modules
to collect more subjective
measures.
In addition, they get
objective measures
by doing passive data collection
using HealthKit and CoreMotion.
And in addition, they address
that one-way communication
problem
by including a dashboard tab,
which allows participants to see
and track both subjective
measures like their mood,
perhaps derived from surveys
and objective measures
like their weight which might
be derived from HealthKit.
So that's ResearchKit and
the apps that are using it.
But as we developed ResearchKit,
we learned a few things
about what else is involved
in building an app-based study
that we thought were
important to share with you.
So during this section
of the talk,
I'll share some of
those with you.
So if you're an engineer
building a research app,
you're probably not the
only person on the project
and the rest of your team
will have a variety of things
that they'll need to do, and
I'll cover some of those now.
Probably the most
important will be
to approach an ethics committee
or institutional
review board associated
with your institution,
in order to obtain some
sort of ethics review
for the study protocol.
As part of this, you'll
need to decide what it means
for the participant to be
informed about the study.
You'll take this
paper-based form
and hopefully you will
be able to compress it
down into something that's
appropriate for a mobile app,
and we will cover that in a
little more detail when we get
to the informed consent
module of ResearchKit.
Next, since ResearchKit doesn't
provide a back end service,
you need to plan for how
you'll store your study data.
So that might mean you stand up
the server yourself or contract
with a third-party
service provider.
Either way, you'll
need to account
for both data security
and privacy.
And lastly, you'll need to plan
for sharing your
study data whether
with participants perhaps in
the form of a dashboard tab
or some other method, or
with other investigators
which might require you
to generate a very broad
informed consent.
So as you can see,
there are a variety
of issues outside
the actual app build
that will affect the design
of your study-based app
and for more resources on these,
I would point you
to ResearchKit.org, our website,
and also to our ResearchKit
Users mailing list
where you can reach others
who have also been
through this before.
So without further ado,
let's get on to building
apps with ResearchKit.
How can ResearchKit help
you in your studies?
So as I mentioned before,
there are three modules
in ResearchKit, surveys,
informed consent
and active tasks and
all of those modules
in ResearchKit behave
more or less the same way.
Each activity that
the user is asked
to do is modeled as a task.
And each task can
contain one or more steps.
Now, in order to use a task,
you will want to present it
to the user, and to do that, you
will use a task view controller.
Now, a task view controller is
a container view controller,
a bit like a navigation
controller
or a tab bar controller that
you are probably familiar
with from UIKit.
Now, when you present
the task view controller,
it will get the first step
from the task and then display
within the task view controller
a step view controller
that displays the
data for that step.
Then when the step completes
the task view controller will
collect the result from
it and collate the results
from all the steps in that task.
Finally, the task view
controller will notify its
delegate when the
task is complete
and you will get a task result.
This task result will have
a corresponding step result
for the step you
have been through,
and if you have more steps,
you will have correspondingly
more step results.
So that's the object
model in ResearchKit
and now let's take a little
bit deeper dive into one
of the core objects
which is the step.
So we will look at steps next.
In ResearchKit, a step
really corresponds
to the basic template that
you use for each screen
in a ResearchKit task.
ORKStep itself is just an
abstract base class, and its
subclasses can be used for each
of the different types of steps
that you might need
for the different modules
in ResearchKit, like surveys.
So you might have instruction
steps, survey questions,
and multiquestion forms, or, for
other things like active tasks,
countdown timers and perhaps
a memory game
for a cognitive task.
So this basic template generally
presents the step content
in the middle of the
screen, and it has some predefined
elements which we'll see next.
In addition, it generally
includes the forward navigation
controls which are displayed
within the step view controller.
So what does that
look like in code?
Well, you've got ORKStep
as the abstract base class,
and each of these step types
is a subclass of this object.
Now, I should mention that the
framework itself is written
in Objective-C but it's
perfectly usable from Swift
and we added both
nullability and generics to it
so you can use it with Swift 2.0.
So what are some
of the important
properties of the base class?
The first and probably the most
key property is the identifier
and this is a string that
you the developer provide.
It could be a human readable
string, or it could be a UUID,
or an identifier that
corresponds to a record
of this step and the
corresponding task
in your database.
The importance of
this identifier is
that it links the step with
the corresponding step result
and it needs to be unique
within the context of your task.
Next, all steps have
a title and text.
And these generally fit into
the same corresponding place
in each step template.
If you are writing a question,
for example, for a question step
in a survey, you will
typically put a short title
and the actual question itself
will go into the text property.
One more property
worth mentioning is the
optional property.
So each of the steps in a
ResearchKit survey, and in fact many
of the other steps, can
be optional and, in fact,
they are optional by default.
If you need to turn that off,
for example because a
particular answer is required,
then you can use this property.
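To make that concrete, here's a minimal Swift sketch of those base-class properties; the identifier and strings are hypothetical examples, and the `optional` property is imported into Swift as `isOptional`.

```swift
import ResearchKit

// A sketch of the common ORKStep properties just covered.
let question = ORKQuestionStep(identifier: "stairsQuestion") // unique in the task
question.title = NSLocalizedString("Daily activities", comment: "")
question.text = NSLocalizedString(
    "Does your health limit you when climbing stairs?", comment: "")
// Steps are optional by default; turn that off when an answer is required.
question.isOptional = false
// (An answer format would also be attached; answer formats come up
// in the surveys module.)
```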
So that's steps, but to use
a step you need to put it
in the context of a task.
Now tasks in ResearchKit don't
have an abstract base class;
instead there's just a protocol,
ORKTask, which defines how the task
view controller will interact
with each task.
Again, there's a key property
which is the identifier
which uniquely identifies
this task's results compared
to other task results that
you might collect in this app.
Next, though, the task
view controller needs
to know what is the next step
in this task, and for that,
we define the stepAfterStep
protocol method,
to which the task view controller
passes the current step.
Often, you'll just
return the next step.
But sometimes you will want
to decide what step to show
based on the results so far.
So, for instance, if I answered
A, you want to go to step A,
and if I answered B, you want
to go to step B, and for that,
the task view controller will
pass in the task result so far,
which you can use to make that decision.
When looking for the
first step in the task,
the task view controller will
pass nil as the current step
and when the task is complete
and there are no more
steps you can return nil
to tell the task view controller
that there are no
more steps to go to.
Similarly, the task
view controller may want
to ask your task what the
previous steps should be.
So the stepBeforeStep method
allows you to do things
like prevent backward
navigation.
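As a rough sketch, a custom task that branches on an earlier answer might look like this; the step identifiers are hypothetical, and the method names follow the current Swift import of stepAfterStep:/stepBeforeStep:.

```swift
import ResearchKit

// A custom ORKTask that picks the next step from the results so far.
class BranchingTask: NSObject, ORKTask {
    let identifier = "branching"
    private let choice = ORKQuestionStep(identifier: "choice")
    private let stepA = ORKInstructionStep(identifier: "stepA")
    private let stepB = ORKInstructionStep(identifier: "stepB")

    override init() {
        super.init()
        choice.answerFormat = ORKBooleanAnswerFormat()
    }

    func step(after step: ORKStep?, with result: ORKTaskResult) -> ORKStep? {
        guard let step = step else {
            return choice       // nil in means "give me the first step"
        }
        if step.identifier == "choice" {
            // Inspect the result collected so far to decide where to go.
            let answer = (result.stepResult(forStepIdentifier: "choice")?
                .firstResult as? ORKBooleanQuestionResult)?.booleanAnswer
            return answer?.boolValue == true ? stepA : stepB
        }
        return nil              // nil out means "the task is complete"
    }

    func step(before step: ORKStep?, with result: ORKTaskResult) -> ORKStep? {
        return nil              // always nil here prevents backward navigation
    }
}
```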
So there are a variety
of other properties
and methods on the
task protocol.
And implementing a task can
be a complicated endeavor
and so ResearchKit includes
an implementation of ORKTask,
ORKOrderedTask, for
the simple case where you want
to present your steps
in consecutive order.
So here you pass an
identifier and an array of steps
and you can get those steps
back from a read only property.
But the task view
controller only interacts
with an ordered task
through the ORKTask protocol,
so it calls
stepAfterStep instead
of accessing the
steps property directly.
Now if you need conditional
logic,
there is now another version of
ordered task, a subclass of it,
called ORKNavigableOrderedTask,
which allows you
to specify predicates
on the results
and corresponding
destination steps.
This is a recent
addition to the framework
from an external contributor
and we don't have time
to cover it today but you can
find the details on GitHub.
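Here's a minimal sketch of both, with hypothetical identifiers; the exact Swift signatures vary a little between ResearchKit versions, so check the headers on GitHub.

```swift
import ResearchKit

let boolQ = ORKQuestionStep(identifier: "boolQ")
boolQ.answerFormat = ORKBooleanAnswerFormat()
let stepA = ORKInstructionStep(identifier: "stepA")
let stepB = ORKInstructionStep(identifier: "stepB")

// The simple case: steps presented in consecutive order.
let ordered = ORKOrderedTask(identifier: "survey",
                             steps: [boolQ, stepA, stepB])

// The contributed subclass adds per-step navigation rules; a direct
// rule unconditionally jumps from "boolQ" to "stepB", and predicate
// rules can branch on the results collected so far.
let navigable = ORKNavigableOrderedTask(identifier: "branchingSurvey",
                                        steps: [boolQ, stepA, stepB])
let rule = ORKDirectStepNavigationRule(destinationStepIdentifier: "stepB")
navigable.setNavigationRule(rule, forTriggerStepIdentifier: "boolQ")
```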
So that's tasks.
But to use a task, you
will need to present it
with a task view controller.
So let's look at the task
view controller next.
So you'll start with a task.
And you'll create a task view
controller, passing it the task.
But notice this second
parameter, the task run UUID,
and that is a UUID
which uniquely identifies this
particular run of the task.
So here we pass nil,
which indicates this
is a new run of the task.
We're starting from scratch.
But if I were to save my
work, say I had a long survey
and I wanted to pause in the
middle, then I might save
and when we restarted
this task by scheduling it
in a new task view
controller, we would want
to restore the old task run UUID
because it's really
the same run of a task.
Then you need to
set the delegate.
If you want to find out
when the task is finished,
you use the delegate callback.
But similarly, there are
also delegate callbacks
that give you opportunities
to customize the task
view controller's behavior
for example, substituting a new
step view controller in place
of the default one
for a particular step.
In addition, some tasks
can produce output,
a file-based output.
So for example if you are
using a microphone in your task
to collect audio data, we'd
want to write that data
to an audio file during the task.
In ResearchKit, to support that,
you'll specify an
output directory
to your task view controller
where the file-based output
from that task should go.
When the task completes, you
will need to process those files
and then be responsible
for cleaning them up.
Finally you'll present the
task view controller modally
and get something like this.
So, this task view
controller as you can see,
has a fairly standard behavior
and the task view controller
itself only controls a very
small amount of the
screen real estate.
So it controls this
navigation bar,
setting the progress indication,
and possibly some backward
navigation controls.
In addition, it gives the step
view controller just enough
information in order to
show the forward navigation
controls correctly.
So that's the task
view controller.
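Putting those pieces together, here's a rough sketch of the setup; the presenting view controller is assumed to adopt ORKTaskViewControllerDelegate (the callback is sketched in the next section), and the parameter labels follow the current Swift import.

```swift
import ResearchKit
import UIKit

// A sketch of creating, configuring, and presenting a task.
func present(task: ORKTask,
             from presenter: UIViewController & ORKTaskViewControllerDelegate) {
    // A nil task-run UUID means a brand-new run; pass a saved UUID back
    // in when resuming the same run of a task.
    let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
    taskViewController.delegate = presenter
    // File-based output (audio recordings, recorder files) lands here;
    // you process these files afterward and clean them up yourself.
    taskViewController.outputDirectory = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
    presenter.present(taskViewController, animated: true, completion: nil)
}
```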
But what about getting results?
Well, let's look at how
you get results next.
So here's the didFinishWithReason
delegate callback
and your delegate will receive
this when the task is complete.
At that point you will get
a reason why the task view
controller finished and one
reason it might finish is
that the user chose to save
their work in the middle of task
and in that case you'll want
to extract the restoration data
which is an opaque NSData
property, and then save it
for later. When you want
to resume the task you will
instantiate a new task view
controller and pass it
that restoration data
in a special initializer.
But in the usual case, your task
will have completed successfully
and in that case you
want to grab the result
from the task view controller's
result property and do something
with it, like serialize
it, send it to a server
or perhaps analyze it
in order to display some
of the information to the user.
Finally, you'll need to dismiss
the task view controller.
Because you presented the
task view controller,
you are also responsible
for dismissing it.
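Here's a sketch of that callback, with the case names as imported into Swift; the two helpers at the bottom are hypothetical stand-ins for your own code.

```swift
import ResearchKit
import UIKit

class TaskResultHandler: NSObject, ORKTaskViewControllerDelegate {
    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        switch reason {
        case .saved:
            // The user paused mid-task: stash the opaque restoration data
            // and later recreate the controller with the special
            // restoring initializer.
            stash(taskViewController.restorationData)
        case .completed:
            // The usual case: grab the collated task result and do
            // something with it (serialize, upload, or analyze it).
            process(taskViewController.result)
        default:
            break  // .discarded, .failed
        }
        // You presented the task view controller, so you dismiss it.
        taskViewController.presentingViewController?
            .dismiss(animated: true, completion: nil)
    }

    private func stash(_ data: Data?) { /* persist for a later resume */ }
    private func process(_ result: ORKTaskResult) { /* your own handling */ }
}
```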
Now you have seen a basic
overview of ResearchKit and how
to use its task-based model.
Now let's see how that
fits in with the first
of ResearchKit's modules,
the surveys module.
So surveys in ResearchKit
are made up of three things:
instructions, survey questions
and multiquestion forms.
And in turn, each of those
corresponds to a step.
So we have a survey question --
I'm sorry, we have
an instruction step,
and a question step,
and a form step.
Now, instruction steps
add very little on top
of what we already have
in the ORKStep base class
but question steps and form
steps have a little more
to them.
So we will look at that next.
Here's a question step,
and like any other step,
it has an identifier which needs
to be unique within the task
and that question step
also has an answer format.
That answer format is a
subclass of ORKAnswerFormat
that corresponds to
the particular type
of question you want to present.
The subclasses cover a wide
gamut: we might have a
Boolean answer format
for a yes/no question, a
text choice answer format,
an image choice answer format
to give an image scale,
or date-based and
time interval formats,
and there's a wide variety.
You can see them on GitHub.
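A minimal sketch of a question step, with hypothetical identifiers and wording; factory names follow the current Swift import.

```swift
import ResearchKit

// A yes/no question with a Boolean answer format.
let boolQ = ORKQuestionStep(identifier: "stairsQ")
boolQ.title = NSLocalizedString("Climbing stairs", comment: "")
boolQ.text = NSLocalizedString(
    "Does your health now limit you when climbing several flights of stairs?",
    comment: "")
boolQ.answerFormat = ORKBooleanAnswerFormat()

// Other subclasses cover the rest of the gamut; for example, a
// single-selection text choice format:
let choices = [ORKTextChoice(text: "Not at all", value: 0 as NSNumber),
               ORKTextChoice(text: "Several days", value: 1 as NSNumber)]
let moodQ = ORKQuestionStep(identifier: "moodQ")
moodQ.answerFormat = ORKAnswerFormat.choiceAnswerFormat(with: .singleChoice,
                                                        textChoices: choices)
```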
So that's question steps.
What about form steps?
So here's a form step, and the
form step has an identifier,
and it also has an
array of form items.
So here's the array
of form items
and each form item has
an identifier which needs
to be unique within the
context of that form step.
Like a question step, a form
item has an answer format
and ResearchKit supports
all the same answer formats
in form items that are
supported in questions.
So, for example, we might mix
a text choice answer format
with a numeric answer
format in the same form.
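And a sketch of a form step mixing two answer formats on one screen, again with hypothetical identifiers and wording.

```swift
import ResearchKit

let form = ORKFormStep(identifier: "feelingForm",
                       title: NSLocalizedString("Feelings", comment: ""),
                       text: NSLocalizedString("Over the last two weeks...",
                                               comment: ""))
let frequency = [ORKTextChoice(text: "Not at all", value: 0 as NSNumber),
                 ORKTextChoice(text: "Nearly every day", value: 3 as NSNumber)]
form.formItems = [
    // A single-selection text choice item...
    ORKFormItem(identifier: "interest",
                text: "Little interest or pleasure in doing things",
                answerFormat: ORKAnswerFormat.choiceAnswerFormat(
                    with: .singleChoice, textChoices: frequency)),
    // ...mixed with a numeric item in the same form.
    ORKFormItem(identifier: "weight",
                text: "Current weight",
                answerFormat: ORKNumericAnswerFormat(style: .decimal,
                                                     unit: "kg"))
]
```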
So those are the model objects
for ResearchKit surveys.
Now how do you get the results?
So here's an ordered
task with an identifier
and an array of steps.
And when the task completes
you will get a task result.
Again, with an identifier which
matches and an array of results.
If this ordered task started
with an instruction step,
then that instruction step
would have an identifier
and when the task completes
you have a step result
with a corresponding identifier.
The results property of this
step result will be empty,
however because there's no
data being collected during
that instruction step.
All we did is show
them the instruction.
This object does have some
useful properties, though,
like the start date and the
end date to show you how long
that instruction
was on the screen.
In addition, you might
have a question step.
And that question step
will have an identifier
that matches its
corresponding step result.
And then unlike the step result
for the instruction step,
this will actually
have a child result
which carries the actual
answer corresponding
to that answer format.
Results from forms
work very similarly,
so here's an ordered task
containing a single form step,
with two form items and again
these identifiers are unique
within the form step.
And when you get
the results back,
you will have a task result
with the identifier that matches
and you will have a step
result with identifier
that matches that form step.
And then you will have
corresponding child question
results, this time you will have
an array of question results one
for each form item
with identifiers matching
the corresponding form items.
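As a sketch, digging those answers back out might look like this, reusing the hypothetical identifiers from the sketches above.

```swift
import ResearchKit

func readAnswers(from taskResult: ORKTaskResult) {
    // Question step result -> boolean question result -> answer.
    if let boolResult = taskResult.stepResult(forStepIdentifier: "stairsQ")?
        .firstResult as? ORKBooleanQuestionResult {
        print("stairs:", boolResult.booleanAnswer as Any)  // nil if skipped
    }
    // Form step result -> one child question result per answered item.
    if let formResults = taskResult
        .stepResult(forStepIdentifier: "feelingForm")?.results {
        for case let item as ORKQuestionResult in formResults {
            print(item.identifier, item.answer as Any)
        }
    }
}
```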
So that's the object
model in ResearchKit
and how you can
use it to present surveys.
Now I will show you a
brief demo in Xcode.
So here we have an example app,
which displays the table view
with a list of the available
tasks and here I'm going
to be putting together a
survey task to show you some
of the features that I
have just been through.
Now, when you select a row,
the table view controller here
will instantiate a task
view controller and display
that particular task,
setting the delegate
and the output directory,
and when the task completes,
we will dismiss the task view controller
in the didFinish callback.
Now, switching over to
the task enumeration
which holds the actual
tasks you'll see
that I defined all cases to
include just the survey case
and the represented task
is currently an empty task
but I will now define
it to be a survey task,
which I will create next.
So to create my survey task, I
will define a computed property,
survey task in which I
create an ordered task
with an identifier survey
and attach an array of steps.
Right now this array is empty.
So I will add an
instruction step.
Now this instruction
step has an identifier.
Here I'm just generating
the identifiers
because I know they will
be unique within this task.
I set a title and text,
which will be displayed
on that screen.
And you'll notice
that I've marked these
as localized strings.
And that's because the content
of these model objects is
really localized content.
So if you're going to use
your app in multiple locales,
then you'll need to
localize this content as well
as other things you might
localize in your app.
Then I might add a question step
and this question step
is a yes/no question.
So it's got a boolean
answer format,
it's got an identifier set and
a title, and the question is:
During a typical day, does
your health now limit you
when climbing several
flights of stairs?
This is a fairly typical
question that you might see
in a general health survey.
Since we've been talking about
forms, I will add a form step.
So here's a form with its
identifier, and a title
and a general question: Over the
last two weeks how often have
you been bothered by any
of the following problems?
Now we are listing
problems and each one
of those problems
will be a form item.
So each of these problems
will be a question
that is a text choice
answer format.
So it's a multiple
choice question
where you can select
only one answer.
And you can see the answers I
have given here are not at all,
several days, more than half
the days and nearly every day
and they have corresponding
values as well
and these are the values
that will be encoded
into the result object so that
you can analyze the result later
and these could equally be any
value that's a property list
type; it could be an integer
-- an NSNumber, rather --
or it could be a
string like this.
So we have got this
interest item,
one problem I might have is
little interest or pleasure
in doing things and we will
add a couple more items
to round out our form.
So having set that up,
one more thing I want
to do is show you the
results and to do that,
I could have built you
some UI so we could look
through some results in the
app itself, and we actually do
that in the ORKCatalog
sample app on GitHub
but today I will just
serialize to JSON
and show you what the JSON
output might look like.
So to do that, we'll
go back here
to the task view controller
delegate didFinishWithReason --
oops, it wanted one more step.
I'll just move that over.
I'll just add the
conclusion step to the task.
I forgot to do that.
So there's our conclusion step.
So switching back here.
I will be switching
on the reason.
So in the case that the task is
actually completed successfully,
I will extract the result
from the task view controller.
Then I will want to
serialize that to JSON
and the ResearchKit framework
itself does not include any JSON
serialization, but we did
include something in one
of our test apps
in order to prove
that the JSON serialization
would work for a real app
and so I've included that here
now so I can demo it to you.
So I will run this and
show you what surveys look
like in ResearchKit.
So when I start my task,
I'm presented immediately
with this instruction step.
It has some indication
of progress.
I can cancel out of the task.
I've got my title and text and
I've got my navigation controls.
And you can see that the button has
been prepopulated with Get Started,
and that's automatic
from the framework, set
up by the task
view controller.
When I come in I get
my boolean question, and
I can choose my answer. If I
choose to skip this question,
the answer I entered
gets cleared.
So if I come back,
the answer is gone
and I can answer something
different when I come back.
So I'll just go through,
and give some answers,
and maybe I don't answer all the
questions and I can continue.
And finally, I'm done.
So let's take a look
at the results.
So as I described before,
we've got a task result here,
and it's got a start date
and an end date showing how
long we were actually looking
at that survey.
We've got an output
directory, a task run UUID
that comes straight through
from the task view controller
and we've got the identifier
that came from the task.
The child results array contains
an array of step results.
So we have an instruction
step result
for that initial instruction
step that has no children
because we collected no
additional information
from the user.
In addition, we have
a question step result
which contains a Boolean
question result that contains
the actual answer
to my question.
So in this case, I answered yes.
And then from the form step
here, we have a form step result
with the feeling form
identifier and then each
of the answers I entered.
I answered the first
two questions
and the final form item in
that form I didn't answer
and you can see there's no
actual answer property here
and finally we have
another step result
for that closing
instruction step.
Now I will show you how easy it
is to modify the answer formats
if you need to change
things around in your survey
as you develop your app.
One thing I might want to
change is my boolean
answer format.
Maybe yes isn't enough
information.
Maybe my health is limited a lot
or maybe it's limited
only a little and so to do
that I can switch to a
text choice answer format
and provide some detailed text
on each choice which allows me
to qualify the overall answers,
so I can have yes, limited a lot
or yes, limited a little.
One other thing you'll notice
here is the exclusive setting,
which if this were a multiple
choice question would allow you
to set one or more choices
as being an exclusive choice.
If you selected that choice
all the other choices would
be deselected.
I might also want to
change my form step.
So as you saw, that was a very
long, vertically scrolling form
and maybe that's not
what I want in my app.
Perhaps I'd like to have
some horizontal sliders
that I could use to display
more or less the same content,
get the same answer, but in
a different presentation.
I can do that very easily in
ResearchKit just by switching
to the ORKScaleAnswerFormat and
specifying descriptions
for the minimum and
the maximum values.
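As a sketch, the swap is just a different answer format; this uses the scale-format initializer variant that takes minimum and maximum value descriptions (check the header for the exact signature in your ResearchKit version).

```swift
import ResearchKit

// A horizontal slider from 0 to 3 with labeled endpoints.
let scale = ORKScaleAnswerFormat(maximumValue: 3,
                                 minimumValue: 0,
                                 defaultValue: 0,
                                 step: 1,
                                 vertical: false,
                                 maximumValueDescription: "Nearly every day",
                                 minimumValueDescription: "Not at all")
```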
So I will run that again and
show you what that looks like.
Here's our survey again.
Very similar.
And now we have got some
different options here
in this step.
And what used to be multiple
choice questions are now sliders
that I can use to
adjust the value.
When I come through and
look at the results,
the results are very similar
in structure but the types
of results are different,
because the corresponding
answer formats are different.
So your interpretation of those
results would need to change.
So that's surveys
in ResearchKit.
[Applause]
So the second module in
ResearchKit is informed consent.
So we will look at that now.
So what is informed consent?
Informed consent is
the process of ensuring
that participants fully
understand the why
and the how of the study:
what the study
entails, and also the risks
and benefits of participation.
Now, this will often
be conducted in person.
And the detailed
requirements of what needs to go
into your consent
process will come both
from your study protocol and
from your ethics review process.
Now, as the participant and the
investigator review the consent
together, they will
often initial each page
and perhaps sign at the end.
And so you'll probably need
to support signing during your
informed consent process.
And finally, this informed
consent is usually a
legal document.
App Store submissions
for human subject research
must now include evidence
of some form of ethical review.
That doesn't necessarily
mean that you need
to include an informed
consent process
like the one we will
be discussing as part
of this ResearchKit
informed consent module.
Because low-risk
studies might be exempted
from informed consent
and certain high-risk
studies may actually need
to be done in person.
But in many cases, the
informed consent module
in ResearchKit will
be appropriate.
And you will be able to
determine that during the course
of your ethical review process.
So assuming ResearchKit
can help,
let's look at how the
informed consent module works.
So there are two steps in
ResearchKit for informed consent
which need to present content
from the informed
consent document.
And so both of those
steps get their content
from the ORKConsentDocument.
And this consent document
is made up of two arrays.
The first array is
an array of sections
and these sections might
be of predefined types
like data gathering,
privacy, and data use,
which are the types of
sections you might expect to see
in an informed consent document.
But they also might
be custom sections.
So ResearchKit doesn't intend
to provide a full solution,
we provide an 80% solution.
And for your app, if you
need additional sections,
you should add them.
In addition the consent
document will have an array
of signatures.
So we might have an
investigator signature
that contains a prepopulated
name and image
and a participant signature
where we collect the name
and image during the course
of the consent review process.
So let's see how these
look in ResearchKit.
We have got the visual consent
step, the consent sharing step
and the consent review step
and I will dive into each of those
in a little bit more detail.
The visual consent flow
typically has one screen per
section in the document.
It has neat animated transitions
that I'll show you in a demo
in a minute for the
predefined sections
and it's fully customizable,
so you can replace the imagery,
you can replace the
animations and you can fill
in the exact content from
your consent document.
So let's look at how
you do that in code.
You'll create a consent
section of a particular type,
so in this case, data gathering.
Then, you'll set
some properties.
In this case I didn't
need to set a title
because that would have
already been localized
to all the languages in iOS
because I chose a
predefined section,
but if I didn't use a
predefined section or if I needed
to specify my own, then I
can override them like this.
I can specify a summary which is
shown during the visual consent
process and I can specify
some content which is shown
if the user taps learn more as
they go through visual consent
or if you are going
through the whole document
which is displayed
in consent review.
In addition, you can set
a couple of other things,
so you can set a custom
image or custom animation.
This custom animation
is just a video file
that you might include
with your app
which would override
whatever the default is,
or provide something new,
if it's a custom step type.
Once you have the
consent section,
you attach your document
to a visual consent step
to present your visual
consent sequence.
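Here's a rough sketch of a section and the visual consent step; the summary text and the asset names are hypothetical.

```swift
import ResearchKit
import UIKit

// A predefined section type gets a prelocalized title for free.
let section = ORKConsentSection(type: .dataGathering)
section.summary = NSLocalizedString("We will collect your activity data.",
                                    comment: "")
section.content = NSLocalizedString(
    "Shown under Learn More and during consent review.", comment: "")
// Optional overrides for the predefined imagery and animation:
section.customImage = UIImage(named: "sensors")
section.customAnimationURL = Bundle.main.url(forResource: "sensors",
                                             withExtension: "m4v")

let document = ORKConsentDocument()
document.title = NSLocalizedString("Example Study Consent", comment: "")
document.sections = [section]

// The visual consent step draws one screen per section in the document.
let visualStep = ORKVisualConsentStep(identifier: "visualConsent",
                                      document: document)
```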
The next step in the informed
consent module is the consent
sharing step.
Research data collection
is hard work.
So it makes sense to reuse it
across multiple studies
when that's possible.
So it often makes sense
to obtain a broad consent
that will allow you to share
the data that you collect
with other researchers.
But that can pose a
problem for participants.
If the data is really sensitive
they might want to contribute
to your study but not to others.
This issue came up while we
were developing the initial apps
using ResearchKit and so
as a result we created
the consent sharing step
which has been prelocalized
to all the languages
that iOS supports
where we can substitute
in just your institution name
and a couple other details
to allow you to ask
this question
of participants whether
they would
like to share their
data more widely.
Over 80% of participants
in these initial studies
have actually said yes
to this question but we still
think it's an important thing
to include if you are looking
at such a broad consent.
So that's consent sharing step.
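A minimal sketch of that step; the institution strings here are hypothetical and get substituted into the prelocalized template.

```swift
import ResearchKit

let sharingStep = ORKConsentSharingStep(
    identifier: "consentSharing",
    investigatorShortDescription: "Example University",
    investigatorLongDescription: "Example University and its research partners",
    localizedLearnMoreHTMLContent: "<p>How your shared data is handled.</p>")
```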
The final step
in the ResearchKit informed
consent module is the consent
review step and here the
participant reviews the actual
document and possibly
enters their name
and maybe signs with
their finger.
And to show you how
that works in code,
we will look at that next.
You might start with your
consent document here
and then you'll need
to add a signature
which is the signature
you want to collect.
So this is the participant
signature.
You set the title
for the participant
which is what goes beneath the
signature line if you were
to generate a PDF
of the document.
And we've got an
identifier which identifies
which signature this
is if we are trying
to find a particular signature
in the consent document.
You can turn off either
the name collection
or the signature
image collection,
in this case we're
turning off the signature image
by setting
requiresSignatureImage to false.
Then you'll add the signature
to your consent document
and you will attach
your consent document
to a consent review step while
also specifying what signature
it is that you are
trying to collect.
You can use more than
one consent review step
if you have more than one person
reviewing the same document
on the same device,
which might happen
if you are doing this in person.
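Here's a sketch of the signature plus review step, with a stand-in document and hypothetical identifiers.

```swift
import ResearchKit

// Stands in for the document built for visual consent.
let document = ORKConsentDocument()

let participantSig = ORKConsentSignature(forPersonWithTitle: "Participant",
                                         dateFormatString: nil,
                                         identifier: "participantSignature")
participantSig.requiresSignatureImage = false  // collect the name only
document.addSignature(participantSig)

let reviewStep = ORKConsentReviewStep(identifier: "consentReview",
                                      signature: participantSig,
                                      in: document)
reviewStep.text = NSLocalizedString("Review the consent form.", comment: "")
reviewStep.reasonForConsent = NSLocalizedString(
    "Tapping Agree confirms you understand the study.", comment: "")
```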
So those are the steps that make
up the informed consent module
in ResearchKit but to really
understand what this means
when you are running through the
app, I need to show you a demo.
So we will do that next.
So to begin doing the
informed consent in code,
we'll need to start by
creating a consent document.
So I will do that first.
Here's my consent document
and the first thing I'll
need is some consent sections
to display during
the visual consent.
So here I've created an
array of consent sections
and created a consent
section of type overview
on which I set a summary.
I don't need to set the title
because that's already been
prepopulated for my language.
I'll also want to add some more
sections so you can see some
of these animations, so I'll
add a data gathering section
and a privacy section, each
with some lorem ipsum text.
In addition, I'll want to show
you how you add actual content
to your consent document.
This is the content that would
go within the section
in the consent review document,
or what you would find
if you tapped Learn More
on a particular
visual consent section.
You can specify the
content property directly,
which is just text, or you
could specify some HTML
if that's what you wanted,
so I'll leave the HTML in.
So those are my consent
sections.
Additionally, I will
need some signatures
for the consent review step.
So I'll add a participant
signature,
just like the code I showed
you before on the slides
and the investigator signature
with a different title
and a different identifier,
with a name
and an image prepopulated.
And I've added those to
the consent document.
Then, once I have
my consent document,
I'll need to create a task to
display this consent process.
So here's my consent task,
with the identifier Consent,
I've just chosen that because
it's different from Survey
and it's something that I can
understand when I read it.
So I've got an array of steps
which is currently empty
and the first thing I'll want
is a visual consent step.
So I will add that here and this
has the identifier VisualConsent
and I pass the document
from my computed property.
Next, I'll want to show you
the consent sharing step,
with just a few properties
that I have to set in order
to fully populate the text.
And last, we will want
a consent review step
where the participant
has an opportunity
to enter their signature and
review their overall document.
So here I grab the
first signature
from the consent
document, which I happen
to know is the participant
signature,
and the consent document
itself and attach them
to this consent review step
with another unique
identifier within this task.
There are a couple of things
I can additionally customize
on the consent review step
like the text that is displayed
as the user enters their name
and the text that's displayed
in the confirmation
dialog when they agree.
And I've added all these
steps to my step array.
Then I'll just need to extend my
table view to display that task.
And specify that that's the
actual task I want to display
and I'll show you
the consent process.
So when I start my informed
consent task now we jump
straight into the
visual consent step.
And in contrast to some of
the other steps we've seen,
this visual consent step
has multiple screens,
one for each section.
So this corresponds to the first
section in my consent document.
When I navigate to the second,
I get this beautiful animation
as I transition from
one screen to the next.
These animations are the
predefined animations
that I was talking about.
Now, you can further customize
this screen, for instance,
in order to make it
fit in with your app.
If you set the tint
color using UIAppearance
that would override both
these controls at the top
of the screen and the
next button down here
and actually change the
tint color of this image
and the corresponding
animation so it can fit nicely
into whatever app
you're building.
As I proceed to my next section,
you'll see I've got this
"Learn more" button.
And when I tap that "Learn more"
I can see the actual content
from the particular
consent document.
As I proceed, I'll come on
to the consent sharing step
and this is really
just a question step
where I'm asked whether
I want to share data just
with your institution or
with researchers worldwide.
And I want to share my
data with all researchers
that you think are qualified.
And as you continue, I see
this consent review step
which summarizes the document.
I have got the titles for each
of the sections and the content
for each section, and I only
set the content for one section,
but if you set them for all
of the sections in
your consent document, this
would be a full legal document
that the user might
be agreeing to.
Now, sometimes the
sections you want to show
in this document will be
different from what you want
to show in the visual
consent and you can accomplish
that easily in ResearchKit
either
by using "only in document"
sections, which will only appear
in this consent review step
or by using a completely
different consent document
object to represent
this or a third option,
you could set the
htmlReviewContent property,
which completely supplies
your own HTML to display
in this consent review if you
need complete customization.
Once I agree, you will see
whatever custom text I provided
and I will enter my name.
Continue. I might sign,
which I won't do very well.
And finally when I
complete the task,
you'll get back a result just
like for any other
task in ResearchKit.
And this has a step
result corresponding
to the visual consent
showing how long it was
that I was looking at the
visual consent process.
A step result for that sharing
question step with an answer
of true because I answered yes.
And you will see a result for
that final consent review step
that includes the data for the
actual signature that I entered.
So my name, and if I were
looking at the actual object
in Swift then I'd be able to
pull out the actual UIImage
of the signature that I drew.
So that's informed
consent in ResearchKit.
Now there are a couple
of other things we need
to cover before we move
on that are worth noting
about this informed consent
process that we learned
in developing these
initial apps.
I will divide them
into two categories.
The first category
is the informed part.
The first part of
this is form factor.
We have tried to make the
visual consent work really well
adapting your consent document
to this smaller iPhone
form factor.
However, for your
app, you may need
to add additional custom
content, and when you do that,
just try to make
that content fit well
on these devices.
Next, we really encourage
you to use custom sections.
What we put in ResearchKit
is only there
in an advisory fashion.
We want you to actually
represent what you need,
what comes out of
your ethics review process.
Next, you should plan
for accepting questions
from participants, whether
during the consent process,
that is, before the
user has fully consented
and afterward once they have
actually joined your study.
Maybe they will have
more questions once they
start participating.
And finally many of the apps
that are already using
ResearchKit have incorporated a
comprehension quiz.
Now, this can be a
bit of an extra load,
but it can also give you
a lot more peace of mind
that participants really
understand what it is
that you are trying to convey to
them during this visual consent.
And to do that, you can
just use the same steps
from the surveys module and
mix them into your consent task
in order to accomplish
that kind of behavior.
On the other side,
we have got consent.
And there are a couple
of points here.
One is verifying identity.
ResearchKit itself
doesn't do anything
to actually verify the
identity of your participants,
but the initial apps using
ResearchKit actually did some
form of email verification
to make sure
that they were actually talking
to a person, but for your study,
possibly coming out of
your ethics review process,
you may need either
less or more than that.
For instance, you may need
to use a third party service
to verify identity
more robustly.
And once you have a concept
of identity, you probably want
to tie that identity with the
actual record of that consent,
in which case it might
make sense to use some form
of cryptographic signature.
So that's informed consent.
The third module in
ResearchKit is active tasks.
An active task is a
semicontrolled test
in which the participant is
given step-by-step instructions
to perform the task while data
is collected using sensors
on the device.
And the key properties
of such tasks are
that they're interactive
and very short in duration.
So these are session-based
tasks.
The longest task in one of
the ResearchKit apps so far is
about six minutes and most
of these tasks are about one
or two minutes in duration.
Let's look at the
structure of some
of the predefined
tasks in ResearchKit.
These tasks typically have a
couple of instruction steps
to introduce you to the task
basically tell you what it's
about, and then what
you will need to do.
Then some kind of introduction
to get you ready to act,
and then an active step
in which data is actually
being collected.
And finally when the task is
complete, we will thank you
for your participation.
What makes this an active
task is the existence
of the active step.
And the active step here is
really a base class; each
of the individual active
tasks that we predefined
subclasses it to produce the
special behaviors that we need
for each of these tasks.
Now, when we released
ResearchKit,
there were five active tasks.
The first three collect data
using sensors on the device.
So we have the gait and
balance task, where you're asked
to walk back and forth while we
collect accelerometer
and gyroscope data.
And we have the fitness task
where you are asked to walk
for six minutes as we collect
heart rate and pedometry data
and the voice task, where
we use the microphone
to collect information
about your voice.
In addition to these,
we have two more tasks
which are more cognitive
measurement tasks.
So these use somewhat more
interactive touch behavior:
we have the spatial memory task,
in which you are shown a sequence
and asked to repeat it,
and the tapping speed task
where you are asked to
rapidly alternate tapping
between two buttons.
In addition since we
released ResearchKit,
there have been two
more active tasks added:
we've got a hearing test,
and now a reaction time test
where you are shown
a stimulus and have
to shake the device in response.
To give you a flavor of how
these active tasks work
in practice, I will
need to show you a demo.
So that's next.
So this is a really short
demo because all I need
to do is instantiate one of
these simple predefined tasks.
So I will create this active
task computed property
which returns a two finger
tapping interval task,
and I just have to
specify a few parameters.
This is basically
the same for each
of the different active
tasks that we provide.
So you specify an identifier
which should be unique
within your study.
You specify an intended
use description
which is a localized string
that will be substituted
into the prelocalized
content that we provide for each
of these active tasks and
you specify the duration
which is how long you
want the user to tap for.
You can also specify
some additional options
which control whether we include
the actual instruction steps
at the beginning and end of
the task because you might want
to provide your own instructions
if the ones we provide
don't work for you.
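As a sketch, that instantiation looks like this; the factory method's exact signature varies a little between ResearchKit versions, and the strings here are hypothetical.

```swift
import ResearchKit

let tappingTask = ORKOrderedTask.twoFingerTappingIntervalTask(
    withIdentifier: "tapping",
    intendedUseDescription: NSLocalizedString(
        "This activity measures your tapping speed.", comment: ""),
    duration: 20,          // seconds of tapping
    options: [])           // e.g. .excludeInstructions to supply your own
```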
So now we have got our
active task, and we'll want
to add it to our table view.
So I just have to make a couple
of changes to support that.
I will add it to
my list of tasks
and return my computed
property as the represented task
and when I run this, I
should have a third task now,
which will be this two
finger tapping task.
So the intended use description
gets populated in here
and the rest of this content is
localized to all the languages
that iOS supports and
we will be maintaining
that for all the new
active tasks we add.
So here's our next instruction
which tells me what
to actually do.
So I will be tapping on each
of the buttons alternately.
And when I come into the task,
the timer doesn't actually start
until I start tapping,
so as I start tapping,
we start to see the timer
going faster and faster
and the task completes.
When the task is finished, I
will get some results back.
We'll take a look
at how that works.
So because this is one
of the more cognitive tasks,
the data aren't written
out to files.
Instead they are returned
as objects in memory
which I've serialized to
JSON so we can have a look.
And so here we have our
task result as usual
with our start date
and the end date
and various other properties.
We have a couple of step
results which correspond
to the introductory
instructions,
and then we've got another
step result that corresponds
to the active step and it
contains a child result
which is the tapping
interval result
for the two finger
tapping interval task.
That in turn contains an array
of samples, which in turn
contain timestamps
for each of the taps
that I have made, locations,
which are just coordinates
on the screen, and a
button identifier showing
which button I tapped;
taps outside the buttons
are recorded here too.
In addition, there's some other
properties in this result,
which detail where
things were on screen,
so I can actually tell what
those locations corresponded to.
For other types of
active tasks you will tend
to see file-based results, and
those would be an ORKFileResult
with a file URL that points to
a particular location on disk
that would be inside the output
directory that you specified
when you set up your
task view controller.
So that's my demo for
active tasks in ResearchKit.
[Applause]
Now today we've covered
three modules in ResearchKit.
We've covered surveys, informed
consent and active tasks
but we really don't
think it will stop there.
We think that ResearchKit is
going to continue to expand both
as third parties continue to
contribute, so that's you,
and as we continue to add
additional features and keep it
up to date with the latest
versions of our software.
So it will only get
better if you contribute.
So let's talk about
how you can do that.
Now ResearchKit is just an
open source project on GitHub.
So that means you will
interact with it in the same way
that you would interact with any
other such open source project.
First, you'll need to pick an
existing issue or open a new one
and ideally comment on it, so we
know that you're working on it.
Then, when you've got something
that you want to share with us,
you will submit a pull request.
And at that point,
reviewers both from Apple
and our other active external
contributors will review your
contribution both for the
quality of the submission
and also for how it matches
up with ResearchKit
and how it fits in
with the project.
Now, so far, about 90%
of the changes we received
have eventually been merged
into the code base.
I think that's a really
good starting point
for how we want things
to be going forward.
Once your change is merged,
though, that's not
the end of the story.
At some point after that,
we will start the
convergence process
to bring ResearchKit
towards a new release.
And in fact we actually
concluded our first release done
using this process
yesterday with ResearchKit 1.1.
During that process, we
will review your change again,
both for things like
accessibility and also
to localize it to all the
languages that iOS supports
and we may ask you
to help out again.
Hopefully this doesn't
sound like too much work
and you would like
to help us out.
Let's look at some of the areas
where you could contribute.
One area where we've already
mentioned contributions is
active tasks and we've already
had two new active tasks
contributed in the
month and a half
this project has been public.
But other areas where we've seen
contributions have been answer
formats, where someone's added
a vertical slider answer format,
and new steps where someone
added an image capture step.
In addition, going forward we
expect to see more contributions
in areas like device support.
So if you have a hardware device
that you think would make sense
to be used by people in
medical research studies,
then you can add some support
for it into ResearchKit
to help more researchers
use it in their apps.
Also, we expect to add
some back-end integrations.
So if you have a
back end service
and you think it would
integrate well with ResearchKit
and be a great data
storage solution
for researchers then it would
make some sense to add some code
to ResearchKit to support
your particular back end.
We know there are several
contributors out there
who are already interested
in doing this.
So I don't have time to
talk about all of these
in much detail but
what I do want to look
at a little more
is active tasks.
So you have already seen this
structure for active tasks
where we have some instruction
steps, a countdown step,
an active step and
a completion step.
And this active step base
class actually has some other
behaviors that will be
useful for you when you go
to implement your
own active step.
So, active steps support
recorder configurations,
where you can configure
the active step
to automatically collect
data from various sensors
on the device during the
duration of that step
without writing very much code.
So let's take a closer look
at how those work.
Right now, we have
five recorders built
into ResearchKit.
We have an accelerometer
and device motion recorder
and a pedometer recorder that
collect data from CoreMotion
and we have a health
quantity type recorder
for pulling data from HealthKit.
So for instance, that could
be used to collect heart rate.
Finally we have a
location recorder
that can pull some information
from CoreLocation during the
duration of your active step.
Now, when you use these on
iOS, you would normally need
to obtain user permission
to get access to that data
and ResearchKit isn't
a system framework
and doesn't let you bypass
any of those controls.
But we do try to smooth out
the process so if you use these
in one or more of your steps
then the task view controller
will notice that and try to ask
for those permissions upfront
just after the instructions
but before beginning the
actual steps themselves.
So to give you a bit of flavor
of how recorders are used,
I thought I'd use the
fitness step example
from the six-minute walk
task in ResearchKit.
So here's our fitness step,
which is a subclass
of ORKActiveStep.
And when you subclass
ORKActiveStep you also subclass
the active step view controller.
You have an active
step view controller
which has a pointer
back to the step.
Now this fitness step is
configured with a couple
of recorder configurations.
It will have a health
quantity recorder configuration
with a particular identifier
which needs to be unique
within this step because
this is going to correspond
to the result as you
will see in a minute.
This recorder configuration
has a couple of parameters.
So it's got a quantity type
which in this case is the heart
rate quantity type and the unit,
which in this case
is beats per minute
from your heart rate monitor.
You can have more than
one recorder configuration
so in this case we will
have the pedometer recorder
configuration, again with
a different identifier,
so that you can identify the
results from this recorder.
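A sketch of that configuration on a plain ORKActiveStep; the step identifier and durations are hypothetical.

```swift
import ResearchKit
import HealthKit

let fitnessStep = ORKActiveStep(identifier: "fitness.walk")
fitnessStep.stepDuration = 360   // six minutes
let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate)!
fitnessStep.recorderConfigurations = [
    // Pulls heart rate samples from HealthKit during the step.
    ORKHealthQuantityTypeRecorderConfiguration(
        identifier: "heartRate",
        healthQuantityType: heartRate,
        unit: HKUnit(from: "count/min")),
    // Collects step counts from CoreMotion's pedometer.
    ORKPedometerRecorderConfiguration(identifier: "pedometer")
]
```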
When you run the fitness step,
you will get two
recorders instantiated
when the step begins: a
health quantity recorder
and a pedometer recorder,
and each
of these will configure
themselves based
on the configuration
model object attached
to the fitness step.
When the task completes, you
will get back a step result,
as part of your task result
and that step result will
contain two child file results
one for each of those recorders,
with identifiers that correspond
to the corresponding
recorder configurations.
The file URL as I mentioned
already will point to a file
in the output directory
where that data got recorded.
The actual serialization format
in ResearchKit right
now is JSON,
but if you needed a different
format then it would be
straightforward to add
a different formatter.
So that's recorders.
Now, I have alluded
already to some
of the process you
will need to go
through to make a
custom active step
but let's delineate those
points now before we finish.
So when you go to create
your custom active step,
you will first subclass
ORKActiveStep
and subclass the active
step view controller.
Usually we have pairs
of these classes.
Then you will need
to build the UI.
And that could mean that you
completely override the UI
of your active step view
controller if you need
to control the whole
screen or it could mean
that you just set
the custom view,
which fits into the
built-in active step template
in ResearchKit.
You will need to configure
some recorders if you need
to actually collect sensor data
using the recorders we already
have and you may need to
add some new result classes.
So for the cognitive
game, for example,
you saw we had the tapping
interval task result there.
That was an example
of a result class
that was created
specifically for that step.
And when you create an active
step that's like that one,
you will need to correspondingly
create your own class structure
which needs to be serializable
and that will introduce a couple
of limitations on what you
will include in those results.
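Here's a skeletal sketch of such a subclass pair; everything in it (the names, the custom view, the behavior) is hypothetical.

```swift
import ResearchKit
import UIKit

class ShakeStep: ORKActiveStep {
    // Pair the custom step with its custom view controller.
    override class func stepViewControllerClass() -> AnyClass {
        return ShakeStepViewController.self
    }
}

class ShakeStepViewController: ORKActiveStepViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Either take over the whole screen, or slot a view into the
        // built-in active step template via customView.
        customView = UIView()
    }
    override func start() {
        super.start()
        // Begin whatever interaction or data collection this step needs,
        // and package the outcome in your own serializable result class.
    }
}
```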
That's it for active steps.
We talked about our three
modules and we talked about how
to contribute to ResearchKit.
And I just want to leave
you with a few thoughts.
First, ResearchKit is open
source and as a result,
if there's one thing
I want you to do
after this session, it's to go away
and clone it and have a look
and see how it will
fit into your apps.
And since it's open
source, you can contribute.
You can make a difference to
the future of medical research.
And that's really
the main point.
This is an open project and it
will become what you make it.
There are a few other areas
where you can get some
additional information
about ResearchKit.
We have ResearchKit.org,
which is our
primary landing page.
And that will have links to all
the different projects included
in ResearchKit, both the
apps using ResearchKit
and the framework
itself, which we've linked here.
For general inquiries,
for instance,
if you are a principal
investigator and you want to get
in touch with others who
might be able to help you
with your project, then you can
reach us at ResearchKit@apple.com.
And finally, for
technical support,
we've got two mailing lists.
We've got ResearchKit Users
if you need to reach others
who are using the framework
or ask questions about how
to use it, and ResearchKit
Dev if you have questions
about how to contribute.
There are some related sessions
that may help you as you try
to put together a research app.
So there was What's New
in HealthKit yesterday, where
we introduced some new data
types among other things.
So you can go and
visit that online
and also we had a HealthKit and
ResearchKit lab this morning
and there's another one
tomorrow morning at 11:00.
Finally there's a
health, fitness,
and research get together
in just a half hour
and I hope you'll
join us for that.
With that, thank you very
much and thanks for listening.
[Applause]