Transcript
>> Welcome to Creating Secure Applications.
I'm Matt Murphy.
I'm an engineer on the Product Security Team.
Thank you for coming.
So why are you here?
You probably know if you're in the audience
but, you know, it helps to go over it.
So you're here because you want to avoid
the consequences of security issues.
Things like negative press, lost revenue, and so on.
You realize that security is a complicated business.
We're dealing with more and more connected devices, you
know, things like MacBooks, iPhones, iPads and so on.
And those environments demand a
lot of special care and attention.
You're here because you wanted to determine
the optimal ways to prevent security issues.
And if you're developing for our
platform, who better to ask?
Finally, you want to maximize the benefits in terms of
security with your available resources because we all know
that security is not really what you want to
do unless you're writing security software.
You want to develop cool applications
and cool features for your customers.
And finally, if you make a security
mistake, it's really expensive.
Believe me, we know.
[ Laughter ]
>> So in this part of the presentation, I'm going
to cover some design tips for secure software.
I'm going to give you some tools that you
can use to help find bugs in your code.
And some tips to avoid frequently seen security issues.
And then later on, I'm going to hand it over to David who
will discuss a little bit about Objective-C and Cocoa
and we'll also give you some practical examples of how
to apply some of the theoretical stuff that I'm going
to discuss in the first part of the presentation.
So the discussion will flow in
terms of the Security Lifecycle.
The four distinct components of a secure piece
of software--design, code, test, and maintain.
Now, we're not going to cover the maintenance part
here. That is, as we say, an exercise for the reader.
But that involves things like fixing bugs
and delivering fixes to your customers.
Now, this is what I'm going to cover in
the design portion of the discussion.
Now, as you'll note in the last bullet, there's quite
a bit of information about designing secure software
that we don't have time to cover in an hour.
So we encourage you to look at the Secure Coding
Guide which is available on the Apple Developer web site.
And contrary to its name, it actually
has some very useful design tips as well
as coding instructions that you can use.
First, we're going to talk a little
bit about privilege separation.
Now, privilege separation is the art,
shall we say, of separating your code
into privileged and non-privileged components.
So that an ordinary user like me
can walk up to your Mac and use it.
The first thing you want to know about
supporting privilege separation is what not to do.
And the first thing you want to not do is you want to
not use the AuthorizationExecuteWithPrivileges API,
because I've seen many, many developers
say, "Oh, but this is so easy."
And they use it and they introduce security bugs
or they introduce complexity into their software.
What you want to do instead is to factor
privileged code into a background service.
And we'll discuss that in a little more detail here.
So you want to use launchd and you want to use the
service management APIs to set up your background service,
things like SMJobBless, SMJobSubmit, those
are good reference points for you to look
at if you're looking at writing privileged code.
You can also see the SampleD example which gives you
a starting point for how to write a launchd service.
And it's available from the Apple Developer web site.
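To make the launchd side of this concrete, here is a minimal sketch of what a launchd job's property list might look like. The label and paths here are hypothetical placeholders, not taken from the SampleD example; consult the launchd.plist man page and the SMJobBless documentation for the authoritative keys.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label; use your own reverse-DNS identifier -->
    <key>Label</key>
    <string>com.example.myapp.helper</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/PrivilegedHelperTools/com.example.myapp.helper</string>
    </array>
    <!-- Launch on demand rather than keeping the privileged code resident -->
    <key>MachServices</key>
    <dict>
        <key>com.example.myapp.helper</key>
        <true/>
    </dict>
</dict>
</plist>
```

The design point is that the privileged work lives in this small on-demand job, while the application itself runs unprivileged.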
Next, we want to cover a little bit about reduced
privilege, which is sort of a practical consequence
of privilege separation but it's important enough
that it gets its own section and here's why.
The most important thing about
running with reduced privilege
on the Mac OS is you want to test as a standard user.
So many developers don't do this and they unwittingly
introduce dependencies that require the users
to run as administrators to run their software.
You don't want to do that.
It is as important for you to be a part of a
good security posture as it is for you
to avoid introducing bugs in your own code.
When you're testing as a standard user,
your application should just work.
I should be able to walk up to it and it should work the
same as a standard user as it would as an administrator
with perhaps a few exceptions if you have
really truly privileged functionality.
If that doesn't happen, you found
a bug and you should fix that bug
because your user should be able to
run your software as a standard user.
Now, a corollary to that is you don't want to rely
on the special capabilities of administrators.
These are usually things like places on the file system
that you can modify when you're running as an administrator.
And obviously, you can't if you're
a standard user as we just noted.
But also these kinds of things might
break in the future for administrators.
So we're constantly tightening the permissions of
the file system and what administrator users can do
without explicitly authorizing
and gaining elevated privilege.
So here's what I mean when I say places
on the file system you can write to:
/Applications, including your own application bundle.
If you have one user who installs your
application, for example, and another user
who runs it, it won't be writable by that other user.
Also, you know, the administrator of the system may
change the permissions on your bundle to be tighter.
For example, the ownership may be root.
And if your application attempts to write
into that bundle, that is going to break.
/Applications/Utilities, unless you are writing
and installing a system utility.
If you're writing a system utility,
you don't want to be modifying things
in that directory at runtime, only at install time.
/Library and its sub-directories.
Now, that's a huge list but there're a couple of common
culprits that we see people write to all the time.
One is /Library/Application Support
and the other is /Library/Preferences.
But as I mentioned there's quite a few others.
So there are several reasons why you might write to one
of those directories and they're not always obvious.
So three of the most common problems that we see
that prevent applications from running properly
in a reduced privilege environment are these.
First, registration: things like serial numbers
and license keys are installed in a
global directory at the first run.
And so if the person running your application
is a standard user-- oops, that just broke.
What you should do instead, if you're
going to ask for a serial number
or license key, is do it at install time because your
installer has privilege, and then you can save that file
and then every run of the application can
simply consult it and make sure that it's there.
Second, things like global preferences.
A lot of applications rely on /Library/Preferences,
for example, being writable as I mentioned previously.
And this will break-- you know, if
you're running as a non-administrator.
So if you have a preference that you truly want
to have in effect for all users of the system
and you want standard users to be able to use
your application, you should have a background job
and you should protect it with authorization as necessary.
Again, you can use launchd and the service
management APIs to facilitate that.
Finally, and this is the most complicated
one, is the case of a custom installer.
We'll see an application that you just double click and
run and it throws a bunch of files around somewhere.
And most of the time that works because the user is
running as an administrator but it doesn't always.
So here's what you want to do if you're
going to write an installer.
The first thing we recommend that you do, obviously,
is use PackageMaker and the Apple Installer
or one of the third party installers
that's already available.
But if you need to write a custom
installer and some people do,
you can also have your custom installer UI use the installer
command under the hood, basically wrapping your installer UI
around the package system that's already there.
And finally, if you can't do that, you can run
your-- you can run your installer as a launchd job
and then remove it when the install completes.
Now, that's a little bit hacky but it will work
for you and it will get around the privilege issue
and allow your application to run properly in a reduced
privilege environment which is really important.
Next, I'm going to talk a little bit about avoiding setuid.
And setuid is a security curse
because it's an attacker's dream.
So the attacker controls tons of things that are input to a
setuid program: file descriptors, the environment, and so on.
And because of the way setuid works, there can be
bugs in your code that can be exploited by an attacker
or there can be bugs in third party code or
even our code that you expose by being setuid.
A great many of the system frameworks for example,
don't expect to be loaded into a setuid binary.
You don't want to write self-repairing tools.
We've seen self-repairing tools written many, many times.
Hey, if I don't have the setuid bit, I'll call
AuthorizationExecuteWithPrivileges
and then I'll restore the setuid bit.
That's great.
Right? No.
Please don't do it.
A local user can alter your self-repairing
binary because usually
if the setuid bit is gone, the permissions are wrong.
And once the local user can do that, your self-repair
will happily set the setuid bit on that modified binary
and elevate the malicious code
to root the next time it runs.
It is great for an attacker.
Not so great for the user.
So what you want to do instead of a self-repairing tool is
you want to have an installer package and you want to have
that installer package set the setuid bit on your file
at install time, requesting RootAuthorization to do so.
So that it does not subvert the privilege model.
Now, to give you an idea of why we want you to
avoid setuid, we have a little diagram here.
Now, to a user, setuid is a very simple interaction.
Hey, I run a tool.
It does something.
I don't even really know what it does but it works.
So you might not even realize that
you're running such a tool.
Now to an attacker, it's a lot more interesting.
An attacker has control of I/O in your setuid
tool, command-line arguments, environment variables,
the working directory, file descriptors, the file mode
creation mask (the umask), interval timers, signal masks, mach ports,
and there's more that I couldn't fit in the diagram.
So, you know, it is a very complicated interaction.
You don't want to do it.
So finally, and perhaps the most important point of
the security design discussion for a network task
and particularly those of you developing
on iPhone is to protect data in transit.
You have to assume users of applications
are mobile, as I mentioned before.
You know, MacBooks, iPhones, iPod touches, iPads.
The days where users are sitting on a desktop, they're not
quite long gone yet but they're on the way out the door.
So you have to be suspicious of things like the
Domain Name System and the local network in general.
Now, you know, if I am sitting at
Starbucks and I'm running your application,
you know, I think, I think I am pretty safe.
You know, I am on a network that I know and, you know,
I am sipping my latte and having great, great time.
But what's bad about those networks is that
traffic on those networks can be monitored
and can be tampered with by a third party
on the network relatively easily, and there
are a lot of third parties on those networks.
So what you want to do is, any kind of sensitive data
that your application is transmitting
or receiving, protect it with SSL.
Now, there are two really easy ways to do that.
One is NSURLConnection with an https: URL and the
other is CFReadStream with the SSL extensions.
Now, here is something you don't want to
do if you're protecting data in transit.
Don't disable chain validation.
Now, that's really important because if you disable chain
validation, you've basically told the certificate system,
"Hey, any certificate is acceptable here.
I don't care where it came from, anything."
So I can get a certificate, present it to your
application and the system will say, "Sure, OK."
So here's what that looks like.
I wanted to give you an example
because I've seen it recommended a lot
and people don't call it disabling
chain validation but that's what it is.
And so if you see code that sets this constant,
kCFStreamSSLValidatesCertificateChain,
yes, I know it is a mouthful, to false: no.
You need to rethink the design angle there.
Now, the one case where you may want to do this is if you
are expecting to use a self-signed certificate, for example.
But you have to validate specifically that the
certificate you've been presented is the one you expected
and that is a very complicated process.
But you cannot simply disable chain
validation and plug and go.
You will have introduced a vulnerability
and you'll have effectively neutered SSL.
It's also important-- you know, this isn't necessarily
sensitive data per se, but it's integrity critical.
So if you have code or other content that your
application will download on demand such as updates,
it's really important that you sign that stuff
so that the application can verify, "Hey,
this hasn't been manipulated in transit."
It's very important when you're doing code
signing that you verify the signing certificate,
because this poses a similar problem
to chain validation in SSL.
Anyone can get a code signing certificate
and say, "Hey, this code is signed by me."
I can get a certificate that says,
"Hey, Matthew Murphy signed this code."
And your application will accept
it unless you explicitly validate
that you've received a signature
from a particular certificate.
You don't want me signing update
packages for your applications.
Trust me. So now we have talked a little bit about,
you know, designing your application securely.
So there are some coding tips that I wanted to cover next.
And we're going to talk about a few
different things in the coding section.
But again, highly recommend reading the Secure Coding Guide.
I really like it.
It's great stuff.
First, let's talk a little bit about safe file handling.
Now the biggest part of safe file handling is
using a safe temporary or cache file directory.
Now what do I mean by safe?
Well, you can get them from confstr or NSTemporaryDirectory.
Those are going to be good safe
directories for your application to use.
Now what's an unsafe directory?
It's a world-writable directory, something like, say,
/tmp or /Library/Caches, where other users can dump files
and can potentially interfere with
your application's expectations.
If you have to use these directories, these
world-writable directories--and be very, very careful--
it's important to note that the higher level APIs like
the writeToFile method on NSString and NSDictionary
and NSFileManager, those are not safe
to use in a world-writable directory.
You'll introduce race conditions.
So what you have to do is you have
to use a lower level open API.
It's also important to note that
you can't open existing files
in a world-writable directory in a security critical way.
You have to create those files yourself to ensure
that you're manipulating the file you expect.
And you have to use the exclusive flag
I've highlighted on the slide here
to ensure that you don't follow links for example.
Next let's cover a little bit about permissions.
Now on the Mac like most Unix-like platforms,
files are actually world-readable by default.
Now this stuns some people but the
most important thing to note here is
that the directory structure is largely
what controls access to a file.
If you can't execute through the parent directory of a
particular file, you can't see the file much less read it.
However, world-read permissions
are not appropriate for everything.
So if you have a file where world-read permissions
aren't appropriate or simply aren't necessary,
we encourage you to set tighter
permissions when you're creating a file.
Please, please, please, please, please,
avoid creating world-writable files.
Don't do it.
The reason why you don't want to do it is
because it's subject to race conditions.
Your file can be deleted and replaced for example.
It can be substituted with a link and your program
will follow the link and so on and so forth.
Also, an unprivileged user can simply
destroy the file if they want to,
which is not good for your application stability.
What you want to do instead, again, is have a
background service that your application can make calls
to when it runs as any user, and that service will edit the file
under the hood, so you don't have to make that file world-writable.
Next, I'm going to talk a little bit about bounds checking.
So bounds checking is a classic
defensive programming technique
and it's designed to stop buffer overflow vulnerabilities.
Buffer overflow is a bug where you have a piece of data
that's too large for the memory buffer you've allocated
into it and your application attempts to copy it anyway.
So you have to perform a lot of sanity checks
on any kind of untrusted or unvalidated input,
things that come off the network from files and so on.
And you have to use the safe string functions.
The safe string functions are listed here for you, things
like strlcat, strlcpy, snprintf and vsnprintf, and fgets.
Also the functions that we recommend you avoid using
are on the left under the X here, strcat, strcpy.
Now you may be surprised to see
strncat and strncpy on that list.
Gee, I thought those were bounded functions and they are
but they're bounded in such a way that makes it really easy
to introduce security bugs and we'll
show you that on the next slide.
But also on the list are obvious suspects like
sprintf and vsprintf, and of course the gets function,
which is the source of one of the oldest
compiler warnings in the book for security.
Now to give you an example of what I mean and
why we don't recommend you use the strncpy
and strncat functions, consider this example.
Here we have a 5 byte destination
buffer and a 6 byte string.
Obviously, if we copy the entire string into the
destination buffer, we're going to cause an overflow.
This is not good.
So here we have three prospective copies that we can
make of this source string into this
destination buffer, one using each function.
First let's look at strcpy.
Notice how it copies right on past the end of the buffer.
So you've just corrupted memory and potentially
allowed an attacker to execute arbitrary code.
Not good. Now let's look at strncpy and see what it does.
Notice how it copies 5 bytes but the last byte is the E.
There's no null terminator.
That's not good either because the
next time you manipulate that string,
you're going to get a read right off the end of it.
And you may even introduce an exploitable security hole.
Now let's look at what strlcpy does.
You can see it also copies only 5 bytes, but the
last byte is the null terminator so that you end
up with a properly terminated string
and this is why we recommend
that you use strlcpy instead of
strncpy, and strlcat instead of strncat.
Next let's cover a little bit about integer overflows.
Now, an integer overflow occurs when an
arithmetic operation produces a value that's larger
than an integer type can hold.
Now that's a big complicated dictionary definition.
So let's explain it with a little bit of code.
Here, we have a structure that has an N entries
value in it, that is potentially hostile.
We'll say it came from the network.
And then we have an NSData initializer
which says dataWithLength, N entries times,
you know, this size up here which is a constant.
Now if that N entries is really,
really large that integer will overflow
and the allocation will be unexpectedly
small then the memcpy down here which is done
in a loop will copy right on past
the end of the allocated buffer.
To show you exactly why that happens, here's an example
of what we get if we specify a particularly large value.
And you can see we have an integer
here that has 33 significant bits.
So the 1 that would be at the high end of that integer
simply gets truncated away and we end up with a
four-byte allocation into which we're
copying tens of millions of entries.
That's not good.
So what we recommend that you do to avoid
integer overflows is use our checkint API
which has been available since Leopard.
So you can see the code now includes
checkint.h but the meat of it is down here
where we have this check_uint32_mul function which
will set an error condition bit in the error value
if the multiplication results in an overflow.
Now you can see it says CHECKINT_NO_ERROR
here so we can succeed.
Now we know this multiplication was safe.
So that's integer overflows.
Now I can give you a little bit of background on
how to test your application for security.
Now, there are obviously two components of testing,
one of which is manual auditing, and
David will cover a little bit more
about manual auditing in his section of the talk.
But I want to give you some useful automated testing tools.
So we're going to cover two different types of tools.
We're going to cover static analysis
and we're going to cover fuzzing.
So the Developer Tools include a static analyzer for you.
You can run it with the Build and Analyze menu item in Xcode.
It checks your code for some really
common bugs: memory management issues,
things like reference counting, that sort of thing;
a small subset of buffer overflows, it
won't catch them all, which is important;
and some non-security bugs, things
like dead stores and so on.
There are really detailed warnings when it finds a bug that
help you document the data flow through your application.
The rules aren't very detailed yet, but we're
improving them with each update to the Developer Tools.
So you can see an example here of
where the analyzer has found a bug.
I have an NSString value that I don't initialize and I pass
to NSLog and then I built my code with Build and Analyze.
So Build and Analyze has flagged this and said,
"Hey, you declared this but didn't initialize it
and then you passed it to a by-value function."
Now this example is obviously pretty contrived
but the analyzer can catch some more detailed
and more sophisticated examples of this same type of bug.
So the important thing about the analyzer
is that we recommend you use it often.
If you run it all the time, you're going
to catch any new bugs that you introduced,
at least bugs that the analyzer, again, has rules to catch.
So that's important.
Also, we keep adding rules.
So even if your code isn't changing,
it's important that you run this periodically
to get the benefit of the new rules.
There's a project configuration option that you can use
that will let you run the analyzer
with every build of your project.
And you can see that here in this screen shot.
In the build options, there's a pretty
self-explanatory option called Run Static Analyzer
and you can see I have checked it in my project,
which has made it turn bold because it's non-default.
But when I build my project now,
the analyzer will run automatically.
The other important class of automated testing for
you to be doing with your software is fuzzing.
So fuzzing is where you subtly alter valid program inputs.
If you have data that comes from a file or from
the network, those are great candidates for fuzzing.
It doesn't have to be complicated: change a bit
here, a bit there, a couple of bytes there, and
you've got a fuzz file that could
potentially cause havoc in your application.
And as you might guess, the most likely thing that your
application is going to do when fuzzing finds a bug is crash.
So if it crashes, there you go, you've found a bug.
And our CrashWrangler tool can help you prioritize
to determine which of those bugs are important
for you or most important for you to fix.
So you can run CrashWrangler with either crash logs
or with a live target, that is, an application
that's running and being hit with data.
And it has a heuristic for separating exploitable
bugs, things like out-of-bounds writes, double frees,
and so on and so forth, from nonexploitable simple crashes,
things like null pointer dereferences and the like.
And you can download it from the Apple Developer website
at connect.apple.com; you just search for CrashWrangler.
And it's available for you to use today.
So we've covered three of the four
pieces of the security life cycle.
We've covered design, we've covered
code and we've covered testing.
And now I want to hand it over to David for a little
more detail on practical applications of these things
and what you can do with Cocoa and Objective-C specifically.
[ Applause ]
>> Hi I'm David Remahl and I'm a product security engineer.
I think it will be instructive to
try to apply some of the techniques
that Matt has talked about to a real world application.
And the application we'll be talking
about today is called Naivete.
It's a magical and revolutionary feed reader for Atom feeds.
It supports some of the Atom feeds on the
web, it's not perfect yet, it's a version 1.0.
It has a ground breaking feature that's an industry first,
it's document-based using the Cocoa document architecture.
It opens this new URL format that we've invented, naive:.
It has a gorgeous icon and if it
crashes that's a feature not a bug.
Now I don't anticipate that the App
Store reviewers will take that excuse.
And that's why we've put this app on the sample
code site for WWDC as an example of code not to write.
So now I want to demo some of the
features of this application.
Let's start it here from the Dock.
It opens up a blank document, and it
has these bookmarks pop-up where we can
for example load the hot news feed from apple.com.
Here we go, it's a simple RSS reader or Atom in this case.
You select the stories in the source list here.
It even supports sorting on the dates or the title
and if you hit a link it will open it up in Safari.
I also prepared a feature feed that demonstrates
some other unique features of this application.
It supports HTML5 and the proof of that is this.
[ Laughter ]
>> If you load an Atom feed that contains a
podcast file it supports downloading that so
if I hit download here you'll see the podcast file arrived
in my Downloads folder and that works for pictures as well.
Oh, I forgot to show you the document support.
If I save this--
[ Pause ]
>> Here's the file, if we close this
you'll be right back at this feed.
Now let's go back to the slides and think
about the security of this application.
So as we're going through this part
of the presentation we'll be filling
out this security scorecard for the application.
I'll be returning to this periodically
and will see how it fairs.
It's important when you're developing
an application as early as possible,
at the design phase to understand the attacks and the
environment that your application will be exposed to.
And we call this the attack surface.
And one way of understanding the attack surface is
to enumerate the entry points to the application.
In this case we have the naive: URLs.
If I click on a naive link in Safari, it
will open up that feed in the application.
Documents, users love to share documents among each other
so that's a way that an attacker might attack the user.
Feeds of course are downloaded from the web and could
potentially be used in attacks against the application.
And finally in this case, we support enclosures.
In order to understand the attacks that might be used
against us, one way is to enumerate the APIs that we use.
In this case, the stories are loaded in the WebView.
You need to know that the entire security
model of the WebView is based on the document origin.
Since this deals with HTML and other web formats,
we need to be concerned with cross-site scripting.
This is normally an issue that web application
developers or browser vendors are tasked to solve.
But when we make connected applications that embed WebViews,
either for internal use or for viewing web content,
then cross-site scripting becomes important to
desktop or iOS application developers.
External links, we, of course, need to be
careful when we're dealing with those.
And we could fill an entire session on WebKit security.
So this will be some of the things we focus on today.
URL handlers naturally have to be written to expect
some malicious input and do careful input validation.
And we also need-- for the documents, we
need to think about the serialization format
and make sure that that's aptly chosen to be secure.
And as Matt has often referred you to the Secure
Coding Guide, this is where you will find the source
of security considerations for the APIs that you are using.
So let's go back to the application and see
how it fares against some of these attacks.
I've prepared an attack feed here that tests
some of the attacks we were talking about.
So as you've seen, the application
handles links from feed contents.
But what if someone would try to link
to an application on the local disk?
I've prepared this item here, and when I
click this link, we will see what happens.
Uh-oh. That launched the application, and that's
probably not something we intended in this application
because we wanted it to open up in Safari.
So we've found a bug.
Let's go to the code and fix that.
So when the user clicks on a link in the
WebView, this Web Policy Delegate gets invoked.
And we see here that if the source of the
navigation was a user click, then we do this.
We use NSWorkspace openURL to open the URL of the request.
And if you read the NSWorkspace documentation,
you'll see that openURL has a dual purpose.
One is to open safe URLs like as HTTP or AFP.
And another is to open local files.
So it's one of the recommended APIs
for opening up a local document.
In this case, we only want the safe case, and
we want to handle the file URLs separately.
So I've written some code here
that addresses this vulnerability.
We check whether the scheme of the
URL is equal to the file scheme.
And if so, we select the file and the finder
instead of opening it up through NSWorkspace.
So we build and run this.
[ Pause ]
>> We try the attack again, and now, the file
gets selected instead, which is what we want.
There is a bug still in this code.
I don't know if you spotted it, but here's a hint.
Someone saw it?
So if the scheme starts with all capital
letters, then we still have a bug.
And we could solve this by doing
a case insensitive comparison.
But a better way is to use the NSURL API
isFileURL, which handles all these cases,
regardless of capitalization or future changes or whatever.
It will do the right thing.
[ Pause ]
>> Now we handle that case correctly too.
I mentioned JavaScript injection
and the importance of the origin.
In this case, I've written a script that
tries to first print the origin of the document,
and we see that it's served from an applewebdata URL.
And then it tries to get the /etc
password database or any other local file.
And since applewebdata, like file, is handled specially
by WebKit, it was allowed to get this database.
And it could, potentially upload it to the web.
In order to fix this we need to
do something about the origin.
[ Pause ]
>> So we're using this API loadHTMLString, passing in the
feed contents, and then we pass nil as the base URL.
What we want to do is to pass in the URL of the original
feed, and that's available as this property on the document.
So now, if we try this exploit again.
[ Pause ]
>> The origin is now an http: URL, and it was no longer
able to access the user database, which is what we want.
The fourth attack I want to talk
about now is against enclosures.
So I'm sure you've seen this feature in Safari
where it tags files with the download source.
So when I downloaded this file in Safari, this is a shell
script, and when I start it, the user gets this warning.
So if he wasn't expecting a shell
script-- maybe an image or something--
then he is alerted that something is strange.
Let's see if we get the same support from Naivete.
Downloading the enclosure
[ Pause ]
>> Running arbitrary code.
>> Yeah, so--
[ Laughter ]
>> This shell script just ran without
warning the user at all.
Fortunately, the fix is very simple.
You just go into the Info.plist
and check the file quarantine flag.
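For reference, if I recall the Launch Services key correctly, the Info.plist change is a single Boolean entry, LSFileQuarantineEnabled:

```xml
<key>LSFileQuarantineEnabled</key>
<true/>
```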
[ Pause ]
>> So, there is the warning.
If every bug was this easy to fix, we'd all be out of work.
[ Laughter ]
>> So let's go back to the slides
and look at the Score Card.
Well, we had a fail for the Cross-Site Scripting initially.
Local URLs, both of the applewebdata variety
and the file variety, were mishandled,
and it didn't provide adequate protections against Trojans.
File URLs are special, both to
the WebView and to NSWorkspace.
And the lesson of this is partially to be
aware of that fact, but to read the document--
API documentation of every API you are using and
understand the security consequences of the documentation.
And finally, if your application downloads
files, you should be using file quarantine.
And the simple info.plist fix provides most of the
security benefits, and it's very easy to turn on.
But if you want to provide the user with even
more information, such as the source of the file,
then you can use the Launch Services API on the slide.
Next, let's talk about file formats.
You can consider file formats and documents from two levels.
First, there's the semantic content, which is the
information that you want to encode in your document.
And then there's the serialization format, which describes
how this semantic content maps to the bytes on disk.
So what characterizes a secure serialization format?
First of all, we want it to be simple and predictable
because security issues tend to thrive in complexity.
We want the attack surface to be small,
and that means that we want as few lines
of code running as possible, because there
will be fewer lines to have bugs in them.
And we want to be sure that the format
parser was written to expect malicious input,
or potentially malicious input, and
do the appropriate input validation.
Now, I want to look at the formats that Naivete uses.
So we already saved this feeds file, and I
happen to know that this is a property list.
So we can view it in the Property List Editor.
This first line is a clue to
what format the application is using.
It says NSKeyedArchiver, and that means that
it's a Cocoa Archive using NSKeyedArchiving.
Keyed archiving is a relatively complicated format
that can encode an arbitrary graph of objects.
It's very easy to use.
You just pass it the root object that you want to encode.
But it has some curious properties that makes it
unsuitable for use as a general purpose document format.
For example, here, we see the NSURL string, and you might
wonder, "What happens if I change this to some other class?"
Let's use something that the application will not expect.
So this means that any object that implements
initWithCoder:, from the NSCoding protocol,
will be exposed to potentially
malicious data coming from your document.
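Cocoa archives aren't the only format with this property; Python's pickle makes the same design choice, which gives a compact, runnable illustration of why class names in untrusted input are dangerous. The `Evil` class here is a stand-in for an attacker-controlled payload, not anything from the demo:

```python
import pickle
import plistlib

# Like a keyed archive, a pickle stream names the classes to build,
# so untrusted input gets to choose which code runs on load.
class Evil:
    def __reduce__(self):
        # On unpickling, this makes the loader call eval() with our string.
        return (eval, ("'attacker code ran'",))

payload = pickle.dumps(Evil())
assert pickle.loads(payload) == "attacker code ran"   # loader ran our code

# A plain property list can only yield dicts, arrays, strings, numbers,
# dates, and data -- there is no class name for an attacker to replace.
doc = plistlib.dumps({"feedURL": "http://example.com/feed.xml"})
assert plistlib.loads(doc) == {"feedURL": "http://example.com/feed.xml"}
```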
And we can see what happens if we open up this document
that I prepared that replaces the URL with a button.
In this case, we just got some warnings
about an unrecognized selector being sent
to the object, because the code was expecting an NSURL.
It found an NSButton.
This can get worse than just log
messages and a document that doesn't load.
And I won't show you the details on how
this works, but this document managed
to start an application that could have run any code.
So the fix we want to apply in this case,
is to replace the serialization format.
And it started out looking pretty good.
This was a Property List.
And Property Lists are fine to use for untrusted input.
The problem was that it was a keyed archive.
So we will replace this document
serialization code and loading code with--
[ Pause ]
>> With some code that uses just
a plain property list instead.
[ Pause ]
>> So if we open this up in Property List
Editor, we'll see a much simpler hierarchy here,
encoding just the information that we wanted to contain.
And there are no class names to replace, for example.
So for the document serialization, it was another X because
it allowed arbitrary code execution from a document.
So, some of the formats that are safe to use, if
you use them correctly, are XML Property Lists and
Binary Property Lists-- property list serialization, basically.
There is NSXML which is made to deal with untrusted XML,
and Core Data is also appropriate as
a general purpose document format.
So you should use these for your
document formats, your network protocols,
and other cases when you share
data across privilege boundaries.
There are some things you shouldn't use, and we've
already talked about the NSArchiver and NSKeyedArchiver.
And if you're using this class, it's way overdue
to be replaced because it was deprecated in 10.2.
These APIs, with the exception of NSSerialization, might
still be appropriate to use in your internal storage,
like preference files, or if you're doing something
similar to Interface Builder with frozen code,
or if you have an IPC connection where you're passing--
serializing data to pass across a connection
where there's no real trust boundary.
Now, let's move on to the code
part of the development cycle
and do some cleanups in here.
So first, we'll use the Build and Analyze menu item to see
if the static analyzer will catch any bugs for free for us.
So we got one compiler warning and one static analyzer warning.
Let's start with the warning.
In this case, we're calling NSLog in
the handler for the naivete: URL scheme.
And we're taking the URL and passing
it as the first argument to NSLog.
NSLog takes a format string.
And format strings are some of the oldest
and most obnoxious security bugs out there.
In C, they are usually exploited using the %n specifier.
Cocoa does not support %n.
But what it does support is the
%@ specifier, which sends a message.
So first, I'll show an example of how this bug can trigger.
[ Pause ]
>> Actually, I will show you that later on a different bug.
The problem is with sending messages
to an object off of the stack.
We are not passing any object
beyond the first parameter here,
so it will just take some random value off the stack
and send it a message.
And when Objective-C message send picks up that pointer,
it does a series of pointer dereferences, and finally
arrives at the IMP, which is a function pointer.
And if the attacker is able to control any of these pointers
along that path that Objective-C message send traverses,
then he might be able to point execution
to his code, which would obviously be bad.
The fix is very simple.
We just use a static string as the format specifier
and pass the object that we want to print
as the second argument.
Or in this case, it is just debug codes so we can remove it.
Remove your debug code before shipping.
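Python has a close cousin of this bug: passing attacker-controlled text to str.format lets format fields read attributes out of any object you hand in. `Config`, `SECRET`, and both log functions below are made up for the illustration; the point is the static-format-string fix, same as in the Cocoa case:

```python
class Config:
    SECRET = "hunter2"   # stand-in for data the attacker wants

def log_unsafe(user_input, config):
    # BUG: the user input becomes the format string itself.
    return ("Request: " + user_input).format(config=config)

def log_safe(user_input):
    # Fix: a static format string; user input is only ever an argument.
    return "Request: {0}".format(user_input)

evil = "{config.SECRET}"
assert log_unsafe(evil, Config()) == "Request: hunter2"    # data leaked
assert log_safe(evil) == "Request: {config.SECRET}"        # inert text
```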
For the other warning, we need to run the analyzer again.
Here's the analyzer warning.
We get an error on the return line here.
It says that an object had autorelease sent too
many times, and of course, this is a stupid bug.
I called autorelease on an object
that was returned already autoreleased.
Fix is very simple.
As Matt mentioned, the static analyzer will catch a
lot, but it definitely would not catch everything.
Actually, there is another format
string vulnerability on this line,
and that is what I wanted to show you a minute ago.
When I load this format string URL, it contains some %x
specifiers that translate into random data off the stack
in this very dialog, where we are using NSAlert.
And as an example of when that causes a potential security
vulnerability, if we are using the %@ specifier-- yep,
we got a crash, and the debugger has broken in.
That's bad.
The fix, though, is just as simple.
Just like that.
The autorelease case was a reference counting mistake.
And reference counting mistakes can become pretty hairy.
And one example of that is when I'm using NSURLConnection
in this document, I set the document as the delegate.
And as you will know, if you have read NSURLConnections
documentation, the connection retains its delegate.
So even if the user closes the document,
when the URLConnection finishes or fails,
the object will still be around to get that message.
So we should be OK.
So let's see what happens if we try to connect
to a fake web server here that will just hang.
We have the spinner here showing
that something is happening.
The user gets impatient, closes the
window, and then this connection closes.
Ah, we crashed.
Why did we crash?
The document is still around.
And indeed, the document did get the
message about the connection failing.
The problem is that when the window was closed,
the-- any IBOutlets to that window got invalid,
because the IBOutlets are typically-- unless you have
retained them specifically-- they're weak references.
So when we're sending a message to this
released progress indicator, we crash.
And again, if the attacker is able to fill
the address space with malicious data,
he might be able to highjack execution here.
The solution in this case is too complicated
to show in a demo, but it involves making sure
that the connection gets canceled before
any referenced objects get released.
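Python's weakref module shows the same hazard in miniature. Being memory-safe, you get None instead of a dangling pointer, but the lesson -- check, or cancel the connection, before the target goes away -- is the same:

```python
import weakref

class ProgressIndicator:
    def stop(self):
        return "stopped"

indicator = ProgressIndicator()
outlet = weakref.ref(indicator)      # like a non-retained IBOutlet

assert outlet().stop() == "stopped"  # target alive: fine

del indicator                        # "window closed": target deallocated
assert outlet() is None              # the weak reference now yields nothing

# Guard before use; an unguarded outlet().stop() would blow up here.
target = outlet()
if target is not None:
    target.stop()
```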
And as I mentioned, this code is all up
on the [inaudible] site as sample code.
So, there are some vulnerabilities
that I won't be able to show you today
but if you're curious, you can
look at them in the sample code.
So, format strings and reference counting, another X there.
Static analyzing your code is a great aid.
It will find some bugs but it definitely
won't catch everything.
So you need to stay vigilant and
make sure that you code correctly.
Be careful with format strings.
It's very easy to get wrong-- whenever you don't
pass a static string as a format string,
it's likely that there's a vulnerability there.
And the compiler will give you warnings in most cases, but
not all format string methods and functions are annotated.
And reference counting and weak references are hard.
It's easy to get mixed up, especially
around the edge cases, like in this case,
when the download hangs and the user closes the document.
Using Garbage Collection on the Mac might help.
On the other hand, it might introduce other
edge cases and it's not available on the phone.
Next, let's apply some of the fuzzing
techniques that Matt talked about.
It's a very simple and effective technique,
and I wrote a generic fuzzer for binary and XML
property lists in less than an hour.
It's about a hundred lines of really simple Python.
It enumerates a plist, and at each element, it
applies a number of permutations to try
to confuse the application that
will later be reading the file.
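A condensed sketch of that kind of plist fuzzer in Python -- the function names are mine, and the mutation list is far shorter than a real fuzzer's:

```python
import copy
import plistlib

def leaves(node, path=()):
    """Yield (path, value) for every scalar leaf in a parsed plist."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaves(value, path + (key,))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from leaves(value, path + (index,))
    else:
        yield path, node

def replace_at(root, path, value):
    """Return a deep copy of root with the leaf at path replaced."""
    root = copy.deepcopy(root)
    node = root
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] = value
    return root

MUTANTS = ["", "A" * 4096, -1, 2**31]   # a few boundary-ish values

def fuzz_plist(data):
    """Yield serialized plist variants, one mutated element per output."""
    doc = plistlib.loads(data)
    for path, _ in leaves(doc):
        for mutant in MUTANTS:
            yield plistlib.dumps(replace_at(doc, path, mutant))
```

Feeding each variant back to the application under a crash-catching harness is the part that actually finds the bugs.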
And I won't be doing that now but you can run it
with CrashWrangler in order to deduplicate crashes
and to determine whether they are
likely to be exploitable or not.
So, I'll just quickly run this fuzzer against
one of our documents that we've created.
This is also available as sample code for this session.
And it's easy to run.
So now you'll see that it created 20 files here that
are all subtle variations of the original document.
And when I open them up, you'll see some perhaps
unexpected behavior-- and we even got a crash.
[ Pause ]
[ Laughter ]
>> Yeah, so some of these were not expected,
and I think there are some log messages
emitted about unknown selectors being invoked.
So there are other vulnerabilities in this
code and they're all listed in the sample code.
Spending an hour fuzzing this application would
have eliminated a number of security vulnerabilities.
So, fuzzing is important and should
be part of your testing strategy
and use multiple fuzzers because
they excel at different things.
This one just generated a lot of valid property
list XML, but you will want to try some binary fuzzers
and you want to try a fuzzer that focuses on boundary
values and maybe one that just does random permutations.
And if you are developing a protocol,
then you might have one fuzzer that
has a deep understanding of the protocol and another
one that just does dumb manipulation of the traffic.
You shouldn't only rely on fuzzing, of course. You should
also be writing unit tests and focus on the edge cases.
Not the ones that the user will most often hit
but the ones that an attacker will focus on.
Then there's penetration testing, which is pretty much
what we started this part of the talk doing:
Trying some different attacks and seeing
how the application responds to them.
Even if you think you've mitigated the
vulnerability, it doesn't hurt to verify it.
So, in summary you should think about security
throughout the entire development process from design
to code, to testing, and then post release.
Be aware of the security properties of the APIs you use.
Read the Secure Coding Guide and the API docs, talk
to others, and have a deep understanding of the APIs.
Understand the attacks that affect your application space.
So, in this case we were dealing with a feed reader
and the bugs we were looking at had a lot to do
with cross-site scripting, URLs, input validation.
Your application will be different.
It might be running on different devices.
It might be in a completely different problem space and
it's up to you to have a deep understanding of that space.
And take advantage of the hardening
techniques and security APIs like the keychain
and content protection on the phone, because why not?
It might save your users from being exploited.
What are some simple steps that you can go out and do today?
First of all, you can visit the Dev Forum Security sections.
There are a lot of helpful people there
who will answer your questions.
Read the Secure Coding Guide if you haven't already
and run the static analyzer against your application.
You might find one bug, you might find 20.
It's definitely worth the time investment.
And finally fuzz your application.
Use one of the commercially available fuzzers or
write your own, it doesn't take a lot of work.
And that will put you on the path to giving
your application a clean security score card.
Here are some other sessions that
might be interesting to you for launchd
or if you're writing network applications
specifically for iOS.
And there's the iPhone securing your application
data session which might be interesting to you.
Thank you.
[ Applause ]