WWDC2016 Session 106

Transcript

[ Music ]
>> Hi. Good morning.
Good afternoon.
My name is Ajit, and I used
to live and work not very far
from here on the East Bay.
I was an electrical engineer in
the heart of the Silicon Valley.
And in 2007, I did
something rather unusual.
I moved back to India from
California because I wanted
to be an entrepreneur.
I'm an inventor at heart,
so I dabbled in a number
of different fields, a
number of different devices.
But eventually, I
chose an area to work
on that was rather
unconventional.
I decided I wanted to work
with kids with disabilities.
I wanted to make assistive
technology for children
who had difficulties with
various kinds of disabilities.
Now the way I got into
this particular field --
when I went back
to India in 2008,
there was a friend of mine.
And she runs a school in the
city of Chennai in the south
of India, and it's
called Vidya Sagar.
Now, Vidya Sagar is
a special school.
It's a school for kids with
various kinds of disabilities --
cerebral palsy, autism.
And when my friend found
out that I was looking
for interesting problems
to solve, she suggested
that I visit the school and
interact with the kids and see
if there was anything
that struck me.
So I went to Vidya
Sagar and I sat down
and I met some of the kids.
Now that visit was
an eye-opener for me.
You know, I saw that these
kids were intelligent,
they were smart, they
were eager to learn,
they were enthusiastic,
they were social.
But the reason that they weren't
in a regular school was
primarily one reason.
And that was that they had
difficulty with communication.
Now, if you look at kids with
cerebral palsy, the main reason
that they can't communicate is
because the part of the brain
that processes muscle
movements is impaired,
so they have difficulty
controlling all
of the different muscles that
are used to generate speech.
But with autism, it's actually
a very different story.
The first kid that I met
with autism was actually
at Vidya Sagar during
that visit.
His name was Santhosh.
Now I was sitting across
the table from Santhosh,
and Santhosh had a lot of
difficulty with a few things
that we take for granted.
For example, he found it really
hard to read my body language.
He would find it really hard
to see my face and to make
out what kind of emotion I
was feeling at that time.
The other thing that
Santhosh had a lot
of difficulty with was language.
Because language is
fundamentally very symbolic.
You know, it's very abstract.
It's not something that
we realize every day,
but it's really one of the most
abstract things that we use.
We have all of these
different sound sequences,
all of these different words,
and these words are something
that we assign meaning
to almost arbitrarily.
For example, if I
said the word "dog",
the word "dog" doesn't
have anything to do
with the animal the way
that I pronounce the word,
the way that I write the word.
You would never be
able to figure
out that it actually refers
to this particular animal.
It's very much an abstract sound
to which we've assigned,
you know, a convention
that we agree on.
And that's why someone
with autism has a lot
of difficulty with this.
Our brains have the ability to,
you know, encode and decode tens
of thousands of these
arbitrary sound sequences.
But kids with autism find
it a little too fuzzy
for their comfort.
So my friend who runs
this school, Vidya Sagar,
she wanted me to look at
this particular problem
and she wanted me to see if I
could come up with an invention
that could give a voice
to these children.
I thought that was a fascinating
problem to work on just
from a technology perspective.
I also thought it was a huge
opportunity in this case
because there were kids all
over India, and really all
over the world, who
needed something like this.
My starting point to
provide these kinds
of assistive technologies
was something
that had actually been
discovered in the 70s
and the 80s and was being
used quite effectively.
What people working with kids
with autism had discovered was
that many of them have extremely
strong visual intelligence.
So they have a lot of
difficulty with the abstractness
of language, but they
feel very comfortable
with the concreteness
of pictures.
They're able to remember
pictures.
They're able to recall pictures
much easier than they're able
to do with sound
or with alphabets.
Even in India, it wasn't
very uncommon to see kids
with books like these.
Picture books, picture charts,
picture bracelets sometimes
that they would wear
around their wrists.
And if they wanted to ask for a
specific object, if they wanted
to request for water for
example, they would point
to a picture of water in their
book and that would communicate
to other people that
they wanted water.
At that time in [inaudible]
America, in 2007 and 2008,
this concept was being used
quite effectively as part
of these devices that
were being built.
And this is an example
of one of these devices.
This is a fairly -- you know,
these devices were pretty big.
They were pretty bulky.
They ran a special
software on them.
They were often custom-built,
and they were paid
for by insurance.
So they were pretty expensive.
This device, for example,
would have cost anywhere
between $5,000 and $10,000.
And that price point is
completely out of range
for even the most affluent
of parents in India.
But then something happened:
in 2007, the iPhone came out.
And that was the beginning
of the smartphone revolution.
Right? So it was possible
at that time, 2008,
to actually buy electronic
components off the shelf
which were being
used in smartphones,
in these touchscreen
devices all over the world.
And to assemble them together
to create a touchscreen tablet
for a fraction of the price just
for a few hundred dollars, instead
of having to create these things
from the ground up.
So that became my strategy
going into this particular area.
I thought I would
create a special software
that they could use on top of
a device that we would build
from these smartphone
components,
and we would then use that
to give these children a way
to communicate.
So I took this strategy and
then I plunged into the field
of creating assistive
technology.
And I quickly ran into
a very interesting challenge.
You know, those of you who've
worked on solving hard problems
for a specific class of people,
you know that the process
of coming up with these
solutions is actually
pretty standard.
You go to the field, you talk to
people, and you listen to them,
you see what they have to say.
You ideate to come up
with these prototypes
and you build these
things that you take
to the people whose
problems you want to solve.
And then you get their feedback,
and then you iterate a number
of different times
till you have something
that works beautifully.
You know, that process
is very standardized.
In my particular case though,
I had a very peculiar problem.
Because the very reason that we
were building this device was
because these children
couldn't communicate.
So it was a chicken-and-egg
problem.
You know, we were
very happy to go
and ask them what they thought
about the prototypes
we were creating.
But what would actually
happen is
that the kids would use the
prototype for maybe a minute,
two minutes, and then
they would walk away.
And we had no idea -- you
know, were they tired?
Were they bored?
You know, did they not like it?
Did they think it was
completely worthless?
Were we solving the
wrong problem?
So that was a frustration
that, you know,
we lived with for
several months.
You know, every developer --
I'm sure every one of you here
has a happy-customer story
that's very close to your
hearts and that you really love.
You know, in my particular case,
you know, there was this boy.
His name was Rohit.
Rohit was one of the
people that we were trying
out every single one
of our prototypes with.
And one day, he came
into the room
where we were trying
out the prototype.
And I could see --
you know, on that day,
I could see the determination
in his face.
He was going to communicate
with me.
He was completely -- he had
that look of grim determination
as he walked into the room.
And, you know, I gave
him the newest prototype.
And I sat there, waiting
with anticipation.
And Rohit actually took
the device that I built.
Very painstakingly,
over several minutes,
he tapped on various buttons.
And then the device
said, "This sucks."
[ Laughter ]
[ Applause ]
It's not exactly the kind
of testimonial that is put
up on your website, but
it was music to my ears.
Believe me.
Because from that point onwards,
the going was much faster.
You know, we could convince him
to tell us exactly
what he liked,
exactly what he didn't like.
And within a few months,
we'd come up with a device
that met Rohit's expectations.
And that was the device that
we launched to the market.
We called it Avaz.
Avaz is the Hindi
word for "voice".
It also means similar things
in many other languages.
Let me show you the device
that we actually built.
So this was the first prototype
that was ready for release,
the first product that
we built off Avaz.
It's a big, bulky,
black metal box,
and it's stuffed
with electronics.
You know, you could
mount it on a wheelchair.
You could put it on a desk.
Along with its wheelchair mount,
it weighed about 8 pounds,
so it was pretty heavy.
It had about an hour
of battery life.
You know, there was a really
embarrassing thing that happened
on the day that we were
supposed to launch this.
We kept it on a desk, and a
kid accidentally pushed it
down with his hand.
And it fell down on the
floor and it broke one
of the tiles on the floor.
[ Laughter ]
That was how heavy it was.
You know, as a concession to
safety, among other things,
we replaced the case
with plastic.
So that's the final version
of Avaz that we put out.
This device cost about $600
-- 30,000 Indian rupees.
And at that price, it was
about 1/10 of the price
of similar dedicated
devices that were
in the market at that time.
And it was a breakthrough
in India.
It was the very first time that
anyone had created a device
for speech-assistive
technology in India.
And almost immediately
after we had it out there,
we started getting orders
from all over the country.
You know, every special
school in India wrote to us
and they wanted to know how they
could get their hands on Avaz
and how they could use it.
But Avaz was still a
very niche product.
I mean, it's meant for the
very specific niche of people
that have these particular
disabilities.
So we had this -- you know,
making dedicated hardware
in these small volumes
is always a challenge.
So even though the
response was fantastic,
even though we got
a lot of interest,
and even though this was
really helping the kids
that we were intending
this to help, you know,
the more devices
we were selling,
the more money we were losing
just because we were spending
so much money in creating these
devices in the first place.
You know, we were
struggling to figure
out how we could make
the business sustainable.
And then in 2010, the
big news happened.
Steve Jobs announced the iPad.
And that was the start
of the tablet revolution.
But it also was a revolution
in the fortunes of my startup
because we saw this as a way
of getting ourselves out of the hardware
corner that we had painted ourselves into,
a way of making this application
reach people without having
to make the devices ourselves.
So, in 2010, my team
and I started learning
how to code for iOS.
And in 2011, we launched the
first version of the Avaz App.
Now the core functionality
of Avaz actually remains
the same today, you know,
compared to the first version
that we launched
way back in 2011.
And let me show you a
quick demo of how it works.
So this is Avaz.
And you can see that
it's essentially a bunch
of different pictures.
So there's pictures in here,
and there are pictures in each
of these different categories.
So there are literally thousands
of pictures in the app.
And a parent, or a teacher,
or a therapist could go in
and add words from the
child's life with photographs
from the child's life as well.
And the way that Avaz
works for a child
with a disability
is very simple.
A child would be able to recall
and remember these pictures.
And every time they
tap the picture.
>> I
>> It will speak that out.
And they would be able to
sequence pictures together.
>> Feel
>> This way, and they could go
into these different categories.
>> Describe feelings.
>> And they could pick
different pictures out.
>> Excited
>> To be able to
create sentences.
>> I feel excited.
>> And then the app
would speak that out.
So that's Avaz -- essentially,
a bunch of different pictures
with a picture vocabulary that
helps children communicate.
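To make that interaction concrete, here is a minimal Swift sketch of the tap-a-picture-and-speak idea. The types and names (PictureCard, MessageStrip) are illustrative assumptions, not Avaz's actual code; the spoken output uses Apple's AVSpeechSynthesizer.

```swift
import AVFoundation

// Illustrative sketch only -- not Avaz's actual implementation.
// Each picture card carries a word; tapping a card speaks the word and
// appends it to a message strip, which can then be spoken as a sentence.
struct PictureCard {
    let word: String       // the word the picture stands for, e.g. "excited"
    let imageName: String  // name of the picture asset (hypothetical)
}

final class MessageStrip {
    private let synthesizer = AVSpeechSynthesizer()
    private(set) var words: [String] = []

    // Called when the child taps a picture card.
    func tap(_ card: PictureCard) {
        words.append(card.word)
        speak(card.word)              // immediate feedback: say the word
    }

    // Speak the whole sequence the child has built so far.
    func speakMessage() {
        speak(words.joined(separator: " "))
    }

    private func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}

// "I" + "feel" + "excited" -> spoken as "I feel excited"
let strip = MessageStrip()
strip.tap(PictureCard(word: "I", imageName: "i"))
strip.tap(PictureCard(word: "feel", imageName: "feel"))
strip.tap(PictureCard(word: "excited", imageName: "excited"))
strip.speakMessage()
```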
So Avaz was literally
an artificial voice
for kids with disabilities.
You know, when we put Avaz
out in the field in India,
it was immediately
a game-changer
in many different ways.
For this kid, Rohit,
who had worked with us
to build Avaz, he
grew to love Avaz.
You know, he grew to be
able to customize and use it
so effectively that that same
year that we launched the app,
he went on to use Avaz to write
his exams and to finish school
and to go on to college.
You know, the media all over
India picked up the story
of this kid who'd
finished his exams
with an assistive-technology
device.
And not just this kid.
You know, they started
picking up stories from all
over the country about kids
who had used Avaz to do things.
And even to this day, I think
Avaz has shaped the narrative
around these disabilities
in Indian media.
People started concentrating
less
on what these children could
not do and more on the abilities
of these children and
what they could do.
So that's one of the things that
Avaz has accomplished in India.
My personal favorite --
[ Applause ]
My personal favorite anecdote
about Avaz users is actually
something that happened back
in the school where
this all started,
in the school of Vidya Sagar.
You know, the children at Vidya
Sagar, every year they put
up this play for the parents
and for the well-wishers
of the school.
And it just so happened
that, you know, every year,
it was almost a convention that
the lead roles in this play,
the most important roles,
would go to the kids
who could speak -- I
mean, the verbal children.
And the presence
of the nonverbal children was
very subdued in this event.
The year we put out Avaz,
the principal made this
incredibly bold decision.
He said, "This year, all of the
parts are going to go to kids
who can't speak,
and they're going
to deliver their
dialogues with Avaz."
You know, I was sitting in
the front row on the day
that that play was going on.
You know, I was ostensibly
a special guest.
But really, the principal
expected me to jump
up on the stage and fix things
if something didn't work.
[ Laughter ]
But I was sitting in the
front row right there.
And the atmosphere
was incredible.
It was electric.
You know, I could see how much
fun these kids were having
delivering their dialogues and
doing all of their, you know,
all of their actions
and everything.
And I turned back and I looked at
the parents of these children,
and they had tears
in their eyes.
You know, this was
the very first time
that these kids were
able to participate
in this quintessential
school experience
of being a part of
a school play.
And at that moment, I think it
sank into many of those parents
about how transformational
this device would be
in the lives of these kids.
[ Applause ]
Another really important
moment in the history
of Avaz happened a little
later, a few months later.
We got a call -- you know,
our customer support person --
she got a call from a
parent in central India.
You know, so this
parent called her up.
And this call was actually --
it played out very differently
from most of the
calls that we get,
you know, for customer support.
This lady had bought
Avaz for her son.
And, you know, she wanted
to know a bunch of
different things.
You know, she wanted to know
how she could assess her son
for whether this device
was suitable for the son,
and she wanted to know how
to set it up for her son.
She wanted to know how to
integrate it into his lessons.
She wanted to know how to
set up curriculum around it.
She wanted to know how
she could track progress,
you know, over the months.
She essentially wanted
to know how
to implement the
app for her son.
Now this is actually a pretty
atypical question that we get.
Because in most installations
of Avaz that we've seen,
a speech therapist would be
the person that would work
with the kid in order to
implement the app for them.
So the speech therapist
would meet
with the kid maybe once a week,
they would do assessments,
they would do all of
these different things.
And so we asked this lady,
"Why don't you ask your speech
therapist these questions?
You know, that person is
probably much more qualified
to give you an answer to
some of these things."
On that day, I found
out something very
interesting about India.
This lady's son didn't
have a speech therapist.
In fact, the closest
speech therapist
to that lady was 300 miles away.
If she had to find a speech
therapist for her son,
she'd have to get on a
train, travel for six hours,
and go to a speech
therapist and come back home.
In fact, I'll tell you this.
In the entire country of India,
the country of 1.3
billion people,
there are 1,800 speech
therapists.
You could take all of the
speech therapists in India
and you could put
them in this room,
and you could fill this room
three times over with the number
of speech therapists
that are in India.
So this was a mind-blowing
moment for us.
This was really an
eye-opening moment for us.
And what we had to do eventually
was that we had to take the app,
Avaz, and we had to build this
completely new app inside it.
So it's almost like a
sub-app within Avaz.
And what that sub-app did --
it was a pretty new
idea at that time.
What that sub-app did was
that it trained a parent
to be a therapist
for their child.
That was the only way that we
could get over this problem.
You know, as a startup,
we couldn't train the 5 million
therapists that India needs.
But this was actually more
transformational in my opinion
than even Avaz itself because
we were not just giving a device
for the child, we were actually
building up the resource base
for these children to get
high-quality intervention,
and from the people that these
children mattered the most
to -- to their parents.
So this happened, you
know, in 2011 shortly
after we brought Avaz out.
And shortly after
that, I won an award
from the President of India.
I had less facial
hair back then.
[ Laughter ]
But even more rewarding
than this --
you know, after five
years of lobbying --
this year, at long last, one of
the big states in India decided
to buy iPads, install Avaz on
them, and to give them away
to every school that they
supported in the state.
[ Applause ]
This is the very first time
that the Indian government has
invested in assistive technology
for kids with speech
disabilities,
the very first time that the
Indian government has invested
in this stuff.
And in fact, the Apple guy
that's handling this order,
the guy at Apple that was
coordinating the delivery
of these devices, he told me
that this was the
largest-ever government order
of iPads in India.
And it's
for an assistive-technology
application.
How cool is that?
For an assistive-technology
application.
[ Applause ]
Today, there are
about 30,000 kids
around the world whose lives
have been touched by Avaz.
Avaz is an artificial voice
for 30,000 children
around the world.
You know, at these
scales -- 30,000 users --
we could see some very
interesting patterns
in the way that Avaz was used.
And let me talk to
you a little bit
about how Avaz was being
used around the world.
What Avaz did very
successfully was
to help kids have this
alternate access to words.
So instead of having to speak
them out, instead of having
to remember how they sounded,
they could use pictures
to communicate.
So, for example, if a kid wanted
to say that they were hungry
and they wanted something to
eat, they could go into the app
and find the word
"eat" and tap on it.
If they wanted to say that they
wanted to go to the bathroom,
for example, they
could go into the app
and find the word "toilet".
If they were eating something
and they found the food
incredibly gross, they could go
into the app and they
could find the word "yucky"
and they could say that.
The problem happened when
kids had to communicate
with more than one word.
So, for example, if a kid
had to say, "I want water,"
it was quite common that
they would pick the word
"water" first and then they
would pick the word "want".
And they would do
it right sometimes.
They would do it
wrong sometimes.
But they were doing
it wrong as many times
as they were doing it right.
Or, for example, if they wanted
to say, "The boy is swimming,"
they would just pick
the word for "swim"
and they would pick
the word for "boy".
You know, they weren't really
creating these sentences.
You see, Avaz was very
successful at breaking
down this symbolism of words
and putting the concreteness
of pictures in its place.
But this is actually a different
level of symbolism at play here.
This is a symbolism of grammar.
And grammar is something
that's so abstract.
There are the rules of grammar,
and those rules are so abstract
that they're very hard even for us
as native speakers
to explain and to understand.
You know? For example, if
you take this sentence,
"I want to eat,"
what's the meaning
of the word "to"
in the sentence?
You know, if you asked me,
I wouldn't have any idea.
It's just a word that's there
because the sentence would be
ungrammatical without it.
[ Laughter ]
If you take these sentences,
"The window broke," and,
"He broke the window,"
it's interesting, you know,
these two sentences are
both grammatically correct.
But the object that's
breaking is actually the same.
Right? The window is the
object that's breaking.
Why is it that it appears on one
side of "broke" on one sentence
and on the other side of
"broke" in the other sentence?
Or if you take this sentence,
"I eat," the past tense
of this is obviously, "I ate."
But if you negate this sentence,
if you say, "I didn't eat,"
why is it wrong to
say, "I didn't ate"?
Right? So there are these
meta-rules at play here.
There are these things
that go beyond the rules
of which word presents
which meaning.
There are these meta-rules
of grammar.
And this was actually
very frustrating to us.
We had set out to build Avaz
with the intention
of giving communication,
full communication,
equal communication to
kids with disabilities.
And we knew that our mission
would fall short if we were
only giving them words
without giving them the
ability to also create sentences
and to put these words together.
The problem was that,
unlike in the case of Avaz,
there was actually
very little research,
very little theory behind
how you can teach children
with autism grammar,
how you can teach them
to put words together.
We trawled through all of
the scientific journals.
We went to all of the
scientific conferences.
But there were very few
insights to be found.
The insight that kind of
triggered a lot of ideas
in our heads quite consistently
came from the work of somebody
who lived 2,500 years ago.
He was an Indian philosopher.
I don't know if he
exactly looked like this.
[ Laughter ]
The Indian government put out
a stamp about him in 2004.
He has the unfortunate
distinction of sharing his name
with a kind of sandwich.
It's called Panini.
[ Laughter ]
But Panini lived in -- it didn't
mean "sandwich" back then.
[ Laughter ]
Panini lived around 500 BC.
And Panini did something
which was phenomenal,
which is unprecedented
even today.
He took a language, the
language of Sanskrit,
and he created a concise,
complete, consistent set
of rules -- 4,000 rules -- that
described the entire language.
OK, so he had this set of 4,000
rules, and those rules are
of course still relevant today.
But with these 4,000 rules, if
you apply these 4,000 rules,
any sentence that you
created would be guaranteed
to be grammatically-correct
Sanskrit.
And conversely, any
grammatically-correct sentence
in Sanskrit, any sentence, any
book, any verbal reference,
any grammatically-correct
Sanskrit could be explained
with these 4,000 rules.
It was almost like he had
written a computer program
to generate Sanskrit.
So that codification of a
language and the way that he did
that was absolutely beautiful.
It was phenomenal.
But that codification of
Sanskrit gave us the first clue
that maybe this is an
approach that we can take.
Maybe we can codify the
rules of grammar and put it
into a computer in some
sense and see if we can use
that to give kids with
autism an access to grammar.
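As a toy illustration of what "codifying a grammar" means in code, here is a minimal Swift sketch: a few rewrite rules that, applied mechanically, can only produce grammatical sentences. This is not Panini's rule set and not the FreeSpeech algorithm, just the general idea.

```swift
// Toy illustration only -- not Panini's 4,000 rules and not FreeSpeech code.
// A handful of rewrite rules; expanding them mechanically can only produce
// grammatical (if sometimes silly) English sentences.
let rules: [String: [[String]]] = [
    "S":  [["NP", "VP"]],
    "NP": [["the", "teacher"], ["the", "boy"], ["the", "book"]],
    "VP": [["takes", "NP"], ["swims"]]
]

// Expand a symbol by picking one of its rewrite options at random;
// symbols with no rule are plain words and are emitted as-is.
func expand(_ symbol: String) -> [String] {
    guard let options = rules[symbol], let choice = options.randomElement() else {
        return [symbol]
    }
    return choice.flatMap(expand)
}

// e.g. "the teacher takes the book" or "the boy swims"
print(expand("S").joined(separator: " "))
```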
But even though we
had that insight,
the other missing component
was really the user experience.
How could we get kids to
interact with grammar?
And the answer to that really
appropriately came when I was
in a school for kids with autism
and I happened to interact
with -- I happened to
be present, you know,
where there was this little
girl, and there was her mother,
and there were her teachers.
And this girl, you
know, she had autism
and she was almost
completely nonverbal.
But once in a while,
she could speak.
And that's what happened
that day.
She jumped up and she
said the word "eat".
And there was no
context to this.
She had just finished
eating her lunch,
so her teacher thought
perhaps she wanted
to talk about her lunch.
So they tried to get
her to communicate
about her lunch on Avaz.
But it was very clear
that, you know,
that was not what she
was trying to say.
So we were all standing
there and trying to figure
out what she meant
by the word "eat".
And then her mother started
asking her questions.
So her mother said,
"OK, eat what?
Eat lunch?
Eat breakfast?
Eat snacks?"
And then the girl took out her
iPad, and she took out Avaz
and she pointed to
the word "ice cream".
And then her mother
asked, "OK, who eats?
You eat? Someone else eats?"
You know? And then she
pointed to the word "I".
And then her mother
was like, "Eat when?
Eat now? Eat later?"
And it turned out that
this particular girl wanted
to eat ice cream on her
way back home from school.
Now it's not often that
I can pinpoint the moment
when the lightbulb
goes off in my head.
But in this particular case,
I remember this episode almost
like it happened yesterday.
Because what struck me was
that feeling of realization --
that realization that this
mother had gotten this girl
to communicate what she
wanted without using grammar.
I mean, this girl could not put
more than one word together.
She could not put multiple words
together to create sentences.
She could not select
different forms of words.
She did not know what
the meaning of "to" was.
She did not know the way in
which English grammar worked.
It's almost as if her mother had
peered into her head and figured
out what the meaning was
that she wanted to convey.
So this was the other
part of the puzzle for me.
You know, and I took this idea
that we could really represent
meaning without grammar
if we joined words together
using questions and answers.
And I combined that with
the algorithms that came
from my study of Panini and all
of the research in linguistics
and the research in
neuroscience that had been done
over the last few years.
And I put that together
to create an app.
It took me three years to
actually create this app.
I started in 2012.
It was in 2015 that the
pieces finally came together.
And in 2016, we put
this app out --
this year, just a
few months ago.
It's called FreeSpeech.
And let me show you a quick
demo of FreeSpeech to explain
to you how we solved
this problem.
OK, so this is the
FreeSpeech App.
And you can see that it
has pictures the same way
that Avaz has pictures.
In fact, if I tap on the
More Words button here
on the bottom-left, it has all
of the pictures that Avaz has.
But unlike Avaz, when
you construct a sentence
in FreeSpeech, you
don't have to start
at the beginning
of the sentence.
In fact, you can start
anywhere in the sentence.
So, for example, supposing
I pick the word "take"
and I drop it here
on the screen.
Here's the interesting part.
>> Take
>> The app actually gives you
this lattice, this scaffold
of questions around the word
that you've just dropped
so that you can add more words
to create a sentence out of it.
It's very similar to what
the girl's mother was doing.
For example, if I drag
the word "teacher"
and I put it into
the word "who".
>> The teacher takes.
>> It automatically
constructs a sentence
with that specific meaning.
It says, "The teacher takes."
I can take the word "book", and
if I put it into the "what",
it will say, "The
teacher takes the book."
And if I take the word "school"
and if I drop it on the word
"to", it will say, "The teacher
takes the book to the school."
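Here is a rough Swift sketch of what that question scaffold might look like underneath. The structure is an assumption for illustration, not FreeSpeech's real implementation: the answered questions live in a small meaning frame, and the app, not the child, decides word order, articles, and verb endings.

```swift
// Assumed, simplified structure -- not FreeSpeech's actual implementation.
// A verb plus answered questions ("who?", "what?", "to where?") is enough to
// realize an English sentence; the child never sequences words grammatically.
struct MeaningFrame {
    var verb: String                     // e.g. "take"
    var answers: [String: String] = [:]  // question -> answer, e.g. "who": "teacher"
}

// Realize the frame as a simple present-tense sentence. A real system needs
// agreement, irregular verbs, proper articles, and so on; this hardcodes just
// enough for the demo sentence.
func realize(_ frame: MeaningFrame) -> String {
    var parts: [String] = []
    if let who = frame.answers["who"] { parts.append("the \(who)") }
    parts.append(frame.verb + "s")                        // naive 3rd-person "-s"
    if let what = frame.answers["what"] { parts.append("the \(what)") }
    if let to = frame.answers["to"] { parts.append("to the \(to)") }
    let sentence = parts.joined(separator: " ")
    return sentence.prefix(1).uppercased() + sentence.dropFirst() + "."
}

// The questions can be answered in any order; the output is the same.
var frame = MeaningFrame(verb: "take")
frame.answers["to"] = "school"
frame.answers["who"] = "teacher"
frame.answers["what"] = "book"
print(realize(frame))   // "The teacher takes the book to the school."
```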
Let me show you a slightly
more complicated example.
So, supposing we pick the
word "want", and let's put
that in here, it pops up
all of these questions.
And I can take the word "parent".
If I put it into "want", for example,
it will say, "Want the parent."
But if I take it from that location
and put it into the "who",
it will say, "The parent wants."
And you see that every
time you add a word,
it actually gives you
even more questions.
So more questions pop up.
So you can see that
there's a "whose"
which is next to "parent".
So if I tap that, it will
ask you the question,
"Whose parent wants?"
And then I can drag the
word "she", for example,
and drop that in here.
It will say, "Her parent wants."
If I tap this little button on
the top-left corner of "parent",
I can convert that
into a plural.
So I can make it,
"Her parents want."
If I add the word "see",
for example, I can say,
"Her parents want to see."
And I can drag the word
"you" and I can make it,
"Her parents want to see you."
In fact, I can do even more
complicated things here.
I can say that all of
this happened in the past.
And I don't have to use the
rules of grammar to do this.
All I have to do is tap the button
at the bottom that says Past.
It will say, "Her parents
wanted to see you."
And I can negate that.
So if I press the Not button
at the bottom, it will say,
"Her parents didn't
want to see you."
I can make a question out of it.
>> Didn't her parents
want to see you?
>> Or I can make even more
complicated questions.
[ Applause ]
>> Why didn't her
parents want to see you?
[ Applause ]
Thank you.
So that's how FreeSpeech works.
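To make those Past, Not, and question toggles concrete, here is a small Swift sketch of how they could be applied as meta-rules over a clause. This is an assumed design, not the shipping FreeSpeech code; it only handles regular verbs, but it shows how do-support can be supplied automatically so the child never has to know the rule.

```swift
// Assumed design, not the shipping FreeSpeech code. Tense, negation, and
// question forms are applied as meta-rules over the clause, so the child
// never needs to know why it's "didn't want" and not "didn't wanted".
struct Clause {
    var subject: String      // "her parents"
    var verbBase: String     // "want" (a regular verb, for simplicity)
    var rest: String         // "to see you"
    var isPast = false
    var isNegated = false
    var isQuestion = false
}

func realizeClause(_ c: Clause) -> String {
    var words: [String]
    if c.isNegated || c.isQuestion {
        // Do-support: the auxiliary carries the tense; the main verb stays in base form.
        let aux = (c.isPast ? "did" : "do") + (c.isNegated ? "n't" : "")
        words = c.isQuestion
            ? [aux, c.subject, c.verbBase, c.rest]
            : [c.subject, aux, c.verbBase, c.rest]
    } else {
        // Plain clause: tense goes on the main verb (regular "-ed" only here).
        words = [c.subject, c.isPast ? c.verbBase + "ed" : c.verbBase, c.rest]
    }
    let s = words.joined(separator: " ")
    return s.prefix(1).uppercased() + s.dropFirst() + (c.isQuestion ? "?" : ".")
}

var clause = Clause(subject: "her parents", verbBase: "want", rest: "to see you")
print(realizeClause(clause))      // "Her parents want to see you."
clause.isPast = true
print(realizeClause(clause))      // "Her parents wanted to see you."
clause.isNegated = true
print(realizeClause(clause))      // "Her parents didn't want to see you."
clause.isQuestion = true
print(realizeClause(clause))      // "Didn't her parents want to see you?"
```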
And FreeSpeech is pretty amazing
because I think it's
the very first time
that someone has actually come
up with a grammar predictor.
That's what the app
is doing, right?
I mean, you have all of
these different pictures
and the app is putting
grammar around it.
Now why is that important
for kids with autism?
The best way to kind of explain
that is to draw an analogy
to a different disability.
I'm sure many of you here
have heard of dyslexia.
Some of you may have
even struggled
with it when you were kids.
Kids with dyslexia have a lot
of difficulty with spelling.
You know, English
spelling is horrible.
I have difficulty
with English spelling.
But if you give a kid
with dyslexia an iPad
and you ask them to type,
magically their disability
goes away.
Why? Because of spell check.
Because of autocorrect.
So a kid with dyslexia who
is typing on an iPad is able
to communicate, is able to type
perfectly correct English even
though they have
difficulties with spelling.
FreeSpeech does exactly that for kids
who have a language
disability like autism.
It plays to their strengths.
They're able to take
pictures, they're able to pick
out pictures from the tablet,
and they're able to put pictures
in different configurations.
But every time they do that,
the algorithm is automatically
creating a perfectly-grammatical
sentence and is speaking
that out.
Now, we did trials for about
an entire year before we put
FreeSpeech out on the market.
And the results with kids
with autism were dramatic
-- truly dramatic.
There were kids who were
communicating with no words
or they were communicating
with one word.
And suddenly, in minutes --
literally minutes after they
started using the app --
they were constructing these
four or five-word sentences.
In fact, there was this
one particular kid --
I want to show you
a video of this kid.
This kid was in -- you know,
he was using, you know,
single words to communicate.
And you can see that he's not
just using FreeSpeech very
fluently, but he's actually --
FreeSpeech is teaching
him to talk.
Let's look at the video.
>> Eat want.
>> So, the kid is
trying to speak.
>> Drag.
>> Eat.
>> He eats.
More words.
Food and drink.
Meal.
>> Meal.
[Inaudible]
More words.
Press meal.
Press drag.
>> He eats lunch.
>> He ate lunch.
>> Did he eat lunch?
>> Good job!
[ Applause ]
>> This is pretty incredible.
You know, this kid was in --
this is a kid in
Portland, in Oregon.
And this kid's speech
therapist, you know,
he was completely
blown away by this.
In fact, his therapist
-- his name was Lucas.
You know, before Lucas
became a speech therapist,
he was a linguist.
And so he sent this video to us.
And to say he was blown
away is an understatement
because the same day that I
got this video, I got an email
from Lucas asking me if he could
quit his job and join my team.
[ Laughter ]
And that's how Lucas
became our resident expert
in speech therapy for autism.
In fact, when we put
FreeSpeech out, you know,
Lucas also had a
very interesting idea
because Lucas had this
other kid on his caseload.
And this other kid had a --
you know, he was an Avaz user
but he was facing a very
different kind of problem.
This other kid that was
on Lucas' caseload,
he was bilingual.
He came from a bilingual
family --
bilingual, English and Spanish.
So, at school, he would
be learning in English.
All of his teachers would
talk to him in English.
But when he went back home, his
family, and all of his friends,
and all of his neighbors, they
would be speaking in Spanish.
Now this is a challenge that
speech therapists have battled
with for many, many years.
You know, devices like Avaz,
apps like Avaz do exist.
They do exist in
different languages.
But the problem of how do
you give a voice to somebody
who speaks two different
languages has always been a
challenge for kids with autism
and for the therapists
that work with them.
So, let me give you a short demo
of why exactly that's a problem.
Let me go back into
Avaz and explain
to you what the issue is.
So, this is Avaz, right?
So we looked at the
-- I'm just going
to create a very quick sentence.
I'm going to say, "I want
to," and I go to Actions.
And let's say, "I
want to go home."
Right? So I pick out these
different words here.
And so let's say that a kid
is able to make this sentence,
"I want to go home,"
and they can speak it.
Now there is a Spanish
version of Avaz available,
so let me switch into
that very quickly.
So -- yeah.
How many Spanish
speakers in the audience?
OK. I don't know Spanish at all.
I know zero Spanish.
But, you know, I'm going
to try to use the pictures
to create a Spanish sentence
of exactly the same thing.
So, let's see.
So I have the picture
for the word "I".
I can recognize that.
That's here.
And then I have "want".
And this looks like the
picture for the word "to".
And this is "go".
And so there's places in here.
And this is "home".
Is that the right
sentence in Spanish?
If I went to Mexico and if I
said this, a kid would laugh
at me because this is
completely grammatically wrong.
You see, I've actually
done a pretty good job
of picking the right words out --
the words "I", "want", "go".
All of these are correct.
But in Spanish, the way that the
words work is very different.
For example, the word "querer"
-- the word that means "want" --
you actually have to inflect
that so it becomes "quiero"
when you use it for
the word "I".
And similarly, you
don't use the word "to"
in Spanish the same way that
you use it in English.
You'd actually put that
word before the word
"casa" to say "to home".
So the problem here is
that when you switch
between two different languages,
the interface actually
looks very similar.
And you can see that, right?
The Avaz interface
looks almost identical
between the two languages.
But the user experience is
actually completely different
because kids have to learn
completely different patterns
to be able to construct
these sentences
and to be able to
put them together.
So this was a problem faced
not just by my friend Lucas,
but really by everyone
who was working
with kids with disabilities.
Essentially, the child had to
learn two different patterns
of using the same app for them
to be able to communicate.
So the idea that kind of
went off in our heads was,
"Can we use FreeSpeech
to be able to do this?"
FreeSpeech is essentially,
in some sense,
a representation of meaning.
Right? So the way that you
put these words together,
the way that you
construct these sentences,
they're essentially
conveying what the meaning is
that you're trying to say.
And the app is putting
the words together.
So can we use that to be able to
help kids construct sentences,
you know, even if they
have a different language?
So I'm going to go
into the Settings here,
and there's a Spanish
setting here.
And you can see that,
when you do that,
the interface completely
changes into Spanish.
And all the words
are in Spanish.
The labels are in
Spanish and everything.
And when I drag a word
and I put it in here,
all of the interface elements
here are in Spanish as well.
The questions are in Spanish,
the tenses, everything.
But here's the interesting part.
When I start putting words in,
>> I want.
>> It actually gives me an
English sentence as the output.
So I can say -- so if I want to
try to make the same sentence,
I can take the same words
>> I want to go.
>> And I can put that in here.
And I'm following exactly
the same user interface.
>> I want to go home.
>> But I'm creating a
sentence in another language.
You know, the --
[ Applause ]
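A rough Swift sketch of the idea, with assumed, illustrative types rather than the app's real code: the child manipulates one language-neutral meaning frame, and a per-language realizer turns it into a surface sentence, so swapping the realizer changes the output language without changing the interaction.

```swift
// Assumed, illustrative types -- not the app's real code. The same meaning
// frame is realized differently per language: English needs "to go", while
// Spanish inflects "querer" to "quiero", drops the subject pronoun, and
// renders the goal "home" as "a casa".
struct DesireFrame {
    var action: String   // "go"
    var goal: String     // "home"
}

struct EnglishRealizer {
    private let verbs  = ["go": "go"]
    private let places = ["home": "home"]
    func realize(_ f: DesireFrame) -> String {
        "I want to \(verbs[f.action, default: f.action]) \(places[f.goal, default: f.goal])."
    }
}

struct SpanishRealizer {
    private let verbs  = ["go": "ir"]
    private let places = ["home": "a casa"]
    func realize(_ f: DesireFrame) -> String {
        // First-person "quiero"; the subject pronoun is dropped in Spanish.
        "Quiero \(verbs[f.action, default: f.action]) \(places[f.goal, default: f.goal])."
    }
}

let frame = DesireFrame(action: "go", goal: "home")
print(EnglishRealizer().realize(frame))   // "I want to go home."
print(SpanishRealizer().realize(frame))   // "Quiero ir a casa."
```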
So we put this out, you
know, only a few weeks ago
and we're already getting
some reports of people
who found this quite useful, not
in a special-education classroom
for kids with a disability, but in
classrooms for ELL -- for English
Language Learners --
and kids who are in
schools in America
who are entering the
American education system
without knowing English.
There are teachers who
are experimenting to see
if they can do this with those
kids to teach them English.
So could you use FreeSpeech
to teach someone English?
Or, for that matter,
could you use FreeSpeech
to teach anyone a new language?
Let me show you something else.
You know, I'm going to go into
the app now, and I'm going
to go back into the Settings
and I'm going to pick Chinese.
How many Chinese-speakers
do we have in the audience?
Oh, OK. All right.
Well, typically when
I give these demos,
I try to pick a language that
nobody in the audience knows.
[ Laughter ]
Unfortunately, WWDC is a
little too international
for that trick to work for us.
I'll just keep my
fingers crossed.
You can see that the
interface has changed
to Chinese now the
way that it changed
to Spanish the last
time I did that.
But there's something very
interesting that's going
to happen now.
So I'm going to take words out.
So let's pick the word
"want" and put it in here.
Can you see what's happening?
It's actually creating
a sentence.
It's actually creating a
sentence not just in English,
but in English and in Chinese.
And I can do all of the things
that I did with English.
For example, I can say,
"That happened in the past."
I can say, "It didn't happen."
And that's how you can
actually construct sentences
in a completely different
language using
FreeSpeech interface.
[ Applause ]
Could you use this
to learn Chinese?
Or, vice versa, can
Chinese people use this
to learn English?
That's an experiment
that we did.
I was in China in May.
And I went to China and I tried
to see if we could get kids
in China to start
using FreeSpeech
to learn English instead.
And I want to show
you a short video
of an experience that we had.
So this is a school in Shanghai.
You can see the kids are
playing with FreeSpeech.
[ Foreign Language ]
They're having an
incredible amount of fun.
The kid loves the iPad.
[ Foreign Language ]
They're killing it, right?
[ Applause ]
So, that's the next big
project that we're working on.
And we're trying to create
a game that uses FreeSpeech
at its heart to teach language
to kids that are trying
to learn a new language,
particularly kids
that are trying to
learn English.
You know, it's very interesting.
There was a study that
was published, you know,
about five or six years ago.
Of all people, it was
published by the NSA.
So there was this spy
master probably in the NSA
who did this very
interesting study.
He took a bunch of the
world's languages and he tried
to sort them, he tried to
order them in the order
of which language is
the most difficult
for an English-speaker to speak.
Which one do you
think came out on top?
[ Laughter ]
Well, number 3 was Arabic.
Any Arabic speakers?
Number 2 was Korean.
And number 1 was Japanese.
Any Japanese speakers here?
OK, wow. All right.
So, the native Japanese
speakers here, congratulations.
The NSA thinks yours is the most
difficult language to learn.
And by the time you
were three years old,
you were speaking
Japanese fluently.
You know, why do kids
learn language so easily?
It always frustrates me.
It always makes me very jealous.
You know, I have a little son.
He's one and a half years old.
And I know that in another
one and a half years,
he'll be speaking
completely fluently.
And it's always been
a source of puzzlement
to me why they do that.
What the theory says
-- what the theory
of language acquisition says --
is that kids learn a
language much faster
and much more effectively
than adults
because they're immersed in it.
You know, they hear it,
they try to speak it,
and they know what the meaning
is of what's being said to them
because they understand
the context
in which it's being said.
What if we could
simulate that immersion
and provide the same capability
to adults and older children?
That's why I'm excited about
the prospect of FreeSpeech.
Because with FreeSpeech,
you're making these sentences,
and you know what the meaning is
because you're using
pictures as a bridge.
And every time you make
sentences in FreeSpeech,
the app is telling you
what the sentence means
in the language you're
trying to learn.
That's the interesting thing,
and that's what we're
working on.
You know, language is something
which is really interesting
for multiple reasons.
In this connected world
where anyone anywhere
in the world is two clicks away
on Facebook or on WhatsApp,
language is really
the last barrier
that we have as a species.
I mean, think about it.
Right? How many of
you have a friend
with whom you don't
share any language?
You know, communication
is the fundamental barrier
that children with
disabilities are born with.
But it's also the last barrier
that we have to cross as people.
And that's why the
work that I do is
so satisfying and
exciting to me.
You know, I build apps that
help kids with disabilities,
kids with disadvantaged
education backgrounds, refugees.
You know, I build apps
that use Apple's devices
to make the lives of
these people better.
But I'm hopeful that
some of the work
that I'm doing could
also help all of us look
with a different light and
with a different appreciation
at the most beautiful,
most expressive
of human inventions -- language.
Thank you.
[ Applause ]