I'm designing an iPhone app that communicates with a server over HTTP.
I only want the app, not arbitrary HTTP clients, to be able to POST to certain URLs on the server. So I'll set up the server to validate only POSTs that include a secret token, and set up the app to include that secret token. All requests that include this token will be sent only over an HTTPS connection, so that the token cannot be sniffed.
Do you see any flaws with this reasoning? For example, would it be possible to read the token out of the compiled app using "strings", a hex editor, etc? I wouldn't be storing this token in a .plist or other plain-text format, of course.
Suggestions for an alternate design are welcome.
In general, assuming that a determined attacker can't discover a key embedded in an application on a device under his physical control (and that he probably owns anyway) is unwarranted. Look at all the broken DRM schemes that relied on this assumption.
What really matters is who's trying to get the key and what their incentive is. Sell a product aimed at a demographic that isn't eager to steal. Price your product so that it's cheaper to buy it than it is to discover the key. Provide good service to your customers. These are all marketing and legal issues rather than technological ones.
If you do embed a key, use a method that requires each client to discover the key themselves, like requiring a different key for each client. You don't want a situation where one attacker can discover the key and publish it, granting everyone access.
The iPhone does provide the Keychain API, which can help the application hide secrets from the device owner, for better or worse. But anything is breakable.
The way I understand it, yes, the key could be retrieved from the app one way or another. It's almost impossible to hide something in the Objective-C runtime due to the very nature of it. To the best of my knowledge, only Omni have managed it with their serial numbers, apparently by keeping the critical code in C (Cocoa Insecurity).
It might be a lot of work (I've no idea how complex it would be to implement), but you might want to consider using push notifications to send the app an authentication key, valid for one hour, every hour. This would largely offload the problem of verifying that it's your app to Apple.
I suggest adding a checksum (MD5/SHA-1) computed from the sent data and a secret key that both your app and the server know.
Keep in mind that applications can be disassembled, so the key could still be found.
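As a rough sketch of that idea (using HMAC-SHA256 rather than a plain MD5/SHA-1 over data plus key, which is the same concept done a little more safely; kSharedSecret and the header name are hypothetical), the app could sign the POST body and the server would recompute the value with its copy of the secret:

```objc
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonHMAC.h>

// Hypothetical shared secret known to both the app and the server.
static NSString * const kSharedSecret = @"replace-with-your-secret";

// Hex-encoded HMAC-SHA256 of the request body, keyed with the shared secret.
static NSString *SignatureForBody(NSData *body) {
    NSData *key = [kSharedSecret dataUsingEncoding:NSUTF8StringEncoding];
    unsigned char mac[CC_SHA256_DIGEST_LENGTH];
    CCHmac(kCCHmacAlgSHA256, key.bytes, key.length, body.bytes, body.length, mac);
    NSMutableString *hex = [NSMutableString stringWithCapacity:2 * CC_SHA256_DIGEST_LENGTH];
    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", mac[i]];
    }
    return hex;
}

// Attach the signature as a header; the server recomputes it and compares.
NSMutableURLRequest *SignedPOSTRequest(NSURL *url, NSData *body) {
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"POST";
    request.HTTPBody = body;
    [request setValue:SignatureForBody(body) forHTTPHeaderField:@"X-Request-Signature"];
    return request;
}
```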
More information is needed to determine whether the approach is sound. It may be sound for one asset being protected and unsound for another, all based on the value of the asset and the cost if the asset is revealed.
Several earlier posters have alluded to the fact that anything on the device can be revealed by a determined attacker. So the best you can do is determine how valuable the asset is and put enough hurdles in the way of the attacker that the cost of the attack exceeds the value of the asset.
One could add client-side certificates for SSL to your scheme. One could bury that certificate and the key for the token deep in some obfuscated code. One could probably craft a scheme using public/private key cryptography to further obscure the token. One could implement a challenge/response protocol with a time-boxed response, wherein the server challenges the app and the app has X milliseconds to respond before it's disconnected.
The number and complexity of the hurdles all depend on the value of the asset.
Jack
You should look into the Entrust Technologies (www.entrust.com) product line for two-factor authentication tied to all sorts of specifics (e.g., device, IMEI, application serial number, user ID, etc.)
When thinking about iPhone/iPad application security, I notice that there are:
Widely available hacking tools that allow filesystem access
Network interception and man-in-the-middle attacks
==> data theft threat
and also:
Availability of hacking tools that allow a paid app to be freely shared with friends or the community (seen in Cydia)
Availability of hacking tools that allow in-app purchases to be obtained without paying (seen in Cydia; I've heard it doesn't work with every app)
==> revenue loss threat
So I am wondering: #1, what are the best practices for better security in an iOS application?
Also, #2, what are the best ways to reduce revenue loss and minimise hacking exposure?
for #1
I've seen some WWDC slides about security
plus the Apple docs, and I can say that among these best practices are:
Using APIs that offer Data Protection (like NSFileManager with the NSFileProtectionKey attribute)
Using Keychain
Protecting sensitive data with SSL and using certificates
for #2
I think that a business model based on a free application plus in-app purchases with store receipt verification is probably the model with the least revenue loss.
What are your best practices for security, and the best ways to minimise the chances of the app being hacked?
#1: what are the best practices for better security in an iOS application?
Appropriate data security is highly dependent on the nature of the information. Is it long-lived or short-lived? Is it a general credential that can be used to open other things, or a single piece of data? Is the potential loss privacy, financial, or safety? Determining the appropriate protections requires a specific case and has no general answer. But you ask for best practices and there are several. None of them are perfect or unbreakable. But they are best practice. Here are a few:
Store sensitive information in Keychain
Set Data Protection to NSFileProtectionComplete wherever possible.
Do not store sensitive data you don't actually need, or for longer than you need.
Store application-specific authentication tokens rather than passwords.
Use HTTPS to verify the server you are contacting. Never accept an invalid or untrusted certificate.
When connecting to your own server, validate that the service presents a certificate that you have signed, not just "a trusted certificate" (a minimal pinning sketch follows below).
This is just a smattering of approaches, but they set the basic tone:
Use the built-in APIs to store things. As Apple improves security, you get the benefits for free.
Avoid storing sensitive information at all and minimize the sensitivity of what you do store.
Verify the services you communicate with.
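As a minimal sketch of that last point, assuming you bundle a copy of your server's DER-encoded certificate as server.der (the file name and the compare-exact-bytes approach are just one way to pin), an NSURLSession delegate could compare the presented leaf certificate against the bundled copy:

```objc
#import <Foundation/Foundation.h>
#import <Security/Security.h>

@interface PinningDelegate : NSObject <NSURLSessionDelegate>
@end

@implementation PinningDelegate

- (void)URLSession:(NSURLSession *)session
didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge
 completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition,
                             NSURLCredential *))completionHandler {
    if (![challenge.protectionSpace.authenticationMethod
            isEqualToString:NSURLAuthenticationMethodServerTrust]) {
        completionHandler(NSURLSessionAuthChallengePerformDefaultHandling, nil);
        return;
    }
    SecTrustRef trust = challenge.protectionSpace.serverTrust;
    SecCertificateRef leaf = SecTrustGetCertificateAtIndex(trust, 0);
    NSData *presented = (__bridge_transfer NSData *)SecCertificateCopyData(leaf);

    // Certificate bundled with the app (hypothetical file name).
    NSString *path = [[NSBundle mainBundle] pathForResource:@"server" ofType:@"der"];
    NSData *pinned = [NSData dataWithContentsOfFile:path];

    if (pinned && [presented isEqualToData:pinned]) {
        completionHandler(NSURLSessionAuthChallengeUseCredential,
                          [NSURLCredential credentialForTrust:trust]);
    } else {
        // Anything other than our exact certificate is rejected.
        completionHandler(NSURLSessionAuthChallengeCancelAuthenticationChallenge, nil);
    }
}

@end
```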
#2: what are the best ways to reduce revenue loss and minimise hacking exposure?
This has been discussed many times on SO. This answer includes links to several of the other discussions:
Secure https encryption for iPhone app to webpage
The short answer is: worry about your customers, not your non-customers. Many pirates will never, ever pay you money, so your time and money are better spent helping your actual customers want to pay you, and making it easy for them to do so. Focus on making more money rather than protecting yourself from money that you could never have. Never, ever, tick off a paying customer in your efforts to chastise a non-paying customer. Revenge is a sucker's game and a waste of resources.
There are two great ways to avoid piracy:
Don't publish.
Publish junk no one wants.
There are some basic things you can do that are worth it just, as they say, to keep honest people honest (some are discussed in the various linked discussions). But don't lie awake nights worrying about how to thwart pirates. Lie awake worrying about how to amaze your customers.
And always remember: Apple spends more money than most of us will ever see trying to secure the iPhone. It still gets jailbroken. Think about what your budget is going to achieve.
When the attacker gains physical access to the device (e.g. theft), he can do almost anything.
Note that it is very easy to read application files.
A stolen device can be jailbroken easily, and the attacker then gains access even to protected files.
My advice for storing sensitive data on the device:
don't do it if the data can be stored on a secure server
use your own encryption; decrypt when the user is logged in, and delete the decrypted file when they log out or after the app has been in the background for some time
every password and encryption key must be stored in the keychain (as sketched below)
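For the keychain point, here is a minimal sketch using the Security framework; the service and account names are hypothetical:

```objc
#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Store an encryption key (or password) as a generic password item.
BOOL StoreKeyInKeychain(NSData *keyData) {
    NSDictionary *item = @{
        (__bridge id)kSecClass:          (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService:    @"com.example.myapp",     // hypothetical
        (__bridge id)kSecAttrAccount:    @"dataEncryptionKey",     // hypothetical
        (__bridge id)kSecValueData:      keyData,
        // Only readable while the device is unlocked.
        (__bridge id)kSecAttrAccessible: (__bridge id)kSecAttrAccessibleWhenUnlocked,
    };
    OSStatus status = SecItemAdd((__bridge CFDictionaryRef)item, NULL);
    return status == errSecSuccess || status == errSecDuplicateItem;
}
```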
Rob Napier made good points. But to make things more secure:
1. What are the best practices for better security in an iOS application?
Store sensitive information in encrypted form in the Keychain.
(With physical access to the device, keychain data can be dumped easily.)
Set an appropriate Data Protection class (NSFileProtectionComplete where possible).
Always use your own encryption in addition to the built-in APIs when storing data.
(Even if hackers find loopholes in the built-in APIs, your app stays secure.)
Overwrite temporarily stored data before deleting it.
(Forensic techniques can be used to recover deleted data.)
Use HTTPS and certificate pinning. Never accept untrusted certificates.
Store important plist, sqlite, etc. files in the Library/Caches folder.
(Files stored in the Caches folder are not backed up with iTunes.)
Always build the app with the latest Xcode.
(It adds support for only the latest SSL ciphers.)
2. What are the best ways to reduce revenue loss and minimise hacking exposure?
It may not be possible to stop piracy, but we can make it tough.
Prevent the app from running on jailbroken devices (think twice; you may lose valid customers).
Add code that detects whether the device is jailbroken (see the sketch after this list).
Prevent debuggers from attaching to the app.
(Apps downloaded from the App Store are encrypted; debuggers are used to decrypt and analyze them, so add code that detects debuggers.)
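A minimal, easily defeated sketch of the jailbreak check mentioned above; the path list is a common heuristic and nothing more, and a determined attacker can patch or hook this, so treat it as a speed bump:

```objc
#import <Foundation/Foundation.h>

// Heuristic only: the presence of any of these files suggests a jailbreak.
BOOL DeviceLooksJailbroken(void) {
    NSArray *suspectPaths = @[@"/Applications/Cydia.app",
                              @"/bin/bash",
                              @"/usr/sbin/sshd",
                              @"/private/var/lib/apt"];
    for (NSString *path in suspectPaths) {
        if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
            return YES;
        }
    }
    return NO;
}
```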
It really varies depending on what you do. For accessing an API, all you really need to do is hash and/or salt the user information and then save it (if necessary) in the keychain (you could add extra security by encrypting passwords before pushing them into the keychain). It's best not to use NSUserDefaults, as the data it stores ends up in a plain-text plist file on the iPhone filesystem, which, as you said, can be accessed by hackers.
Adding a few more points to improve the security of the application:
Do not send parameters using HTTP GET; use HTTP POST instead.
You can use SSL pinning to avoid man-in-the-middle attacks.
Remove all logging from the source before moving to production.
Do not hardcode encryption keys in the app itself; it is better to keep them somewhere remote.
When making a request, always use the latest TLS version (TLSv1.2); see the sketch after this list.
If your app has web views, beware of link injection. If you do not expect any URLs other than HTTP(S) in the web view, check that all redirect URLs have the "http" prefix so that the loaded website cannot open other kinds of links.
You can choose to allow or block keyboard extensions, since they can listen to all of your keystrokes.
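A rough sketch of the POST-over-TLS-1.2 advice using NSURLSession; the endpoint and body are placeholders:

```objc
#import <Foundation/Foundation.h>
#import <Security/SecureTransport.h>   // for kTLSProtocol12

// Refuse anything older than TLS 1.2 and send parameters in the POST body, not the URL.
NSURLSessionDataTask *PostUsageLog(NSData *body,
                                   void (^completion)(NSData *, NSURLResponse *, NSError *)) {
    NSURLSessionConfiguration *config =
        [NSURLSessionConfiguration defaultSessionConfiguration];
    config.TLSMinimumSupportedProtocol = kTLSProtocol12;
    NSURLSession *session = [NSURLSession sessionWithConfiguration:config];

    // Hypothetical endpoint; always https.
    NSURL *url = [NSURL URLWithString:@"https://api.example.com/log"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"POST";
    request.HTTPBody = body;

    NSURLSessionDataTask *task = [session dataTaskWithRequest:request
                                            completionHandler:completion];
    [task resume];
    return task;
}
```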
I'm storing some healthcare data on a mobile phone and I'd like to know what the best system of encryption is, to keep the data secure. It's basically a bunch of model objects, that I'm serializing and storing using NSKeyedArchiver / the equivalent on Blackberry (the name eludes me for now)
Any tips? I don't want to make up security protocols as I go along, but one of the other threads suggested the following approach.
Generate a public / private key pair
Store the public key
Encrypt the private key with a hash of the user's password.
Use the public key to encrypt the byte stream.
Decrypt the private key and keep it in memory whenever the user logs in, and decrypt the stored data as needed.
Is there a more standard way of doing this?
Thanks,
Teja.
Edit: I appreciate that you're trying to help me, but the things currently being discussed are business-level decisions over which I have no control. So, rephrasing my question: ignore that it's healthcare data and treat it as just some confidential data, say a password. How would you go about storing it?
There might be an easier way to store the data securely. With iOS 4.0, Apple introduced system-provided encryption of application documents. This means the OS is responsible for doing all the encryption and decryption in a fairly transparent way.
Applications that work with sensitive user data can now take advantage of the built-in encryption available on some devices to protect that data. When your application designates a particular file as protected, the system stores that file on-disk in an encrypted format. While the device is locked, the contents of the file are inaccessible to both your application and to any potential intruders. However, when the device is unlocked by the user, a decryption key is created to allow your application to access the file.
So only when your app is active, the files can be read back in unencrypted format. But the nice thing is that they are always encrypted on disk. So even if someone jailbreaks the device, or backs it up, the retrieved files are worthless.
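A minimal sketch of opting a file into that protection when writing it; the path and data are whatever your app produces:

```objc
#import <Foundation/Foundation.h>

// Write the serialized models so the OS keeps them encrypted while the device is locked.
BOOL SaveProtectedData(NSData *archivedModels, NSString *path) {
    NSError *error = nil;
    BOOL ok = [archivedModels writeToFile:path
                                  options:NSDataWritingFileProtectionComplete
                                    error:&error];
    if (!ok) {
        NSLog(@"Failed to write protected file: %@", error);
    }
    return ok;
}
```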
This was probably introduced to conform to some specific data security standard that is required. I can't find that anywhere though.
For more info see the iOS 4.0 release notes.
http://en.wikipedia.org/wiki/HIPAA
Make sure you read and understand this!
edit: Sorry, didn't even bother to check to see where the OP is from, but even if they aren't from the USA there are still some good practices to follow in HIPAA.
HIPAA is a business-practice and total-system-level privacy/security regulation. As such, an app can't comply by itself on random hardware for a random user. You need to determine how your app fits into a client health care provider's total regulatory compliance process before you can determine what approach might comply with that process.
My best advice would be, don't store sensitive data in the user's mobile phone.
If that is not an option for you, then some kind of public/private key encryption, such as one you described, would be the next best option.
I am developing an iPhone app together with web services. The iPhone app will use GET or POST to retrieve data from the web services such as http://www.myserver.com/api/top10songs.json to get data for top ten songs for example.
There is no user account or password for the iPhone app. What is the best practice to ensure that only my iPhone app has access to the web API http://www.myserver.com/api/top10songs.json? The iPhone SDK's UIDevice uniqueIdentifier is not sufficient, as anyone can fake the device ID as a parameter when making the API call with wget, curl, or a web browser.
The web services API will not be published. The data of the web services is not secret and private, I just want to prevent abuse as there are also API to write some data to the server such as usage log.
What you can do is have a secret key that only you know and include it in an MD5-hashed signature. Typically you structure the signature as a string of your parameters and values with the secret appended at the end, then take the MD5 hash of that. Do this on both the client and the server side and compare the signature strings; only if the signatures match do you grant access. Since the secret is only present in the signature, it will be hard to reverse engineer and crack.
Here's an expansion on Daniel's suggestion.
Have some shared secret that the server and client know. Say some long random string.
Then, when the client connects, have the client generate another random string, append that to the end of the shared string, then calculate the MD5 hash.
Send both the randomly generated string and the hash as parameters in the request. The server knows the secret string, so it can generate a hash of its own and make sure it matches the one it received from the client.
It's not completely secure, as someone could decompile your app to determine the secret string, but it's probably the best you'll get without a lot of extra work.
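A sketch of that flow with CommonCrypto; kSharedSecret is the hypothetical string baked into both sides, and MD5 is used only because that's what this answer describes (an HMAC with a stronger hash would be preferable):

```objc
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>

// Hypothetical long random string known to both the app and the server.
static NSString * const kSharedSecret = @"replace-with-a-long-random-string";

static NSString *MD5Hex(NSData *data) {
    unsigned char digest[CC_MD5_DIGEST_LENGTH];
    CC_MD5(data.bytes, (CC_LONG)data.length, digest);
    NSMutableString *hex = [NSMutableString stringWithCapacity:2 * CC_MD5_DIGEST_LENGTH];
    for (int i = 0; i < CC_MD5_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", digest[i]];
    }
    return hex;
}

// Returns the two parameters the client sends; the server repeats the same
// concatenation with its own copy of the secret and compares the hashes.
NSDictionary *SignedParameters(void) {
    NSString *nonce = [[NSUUID UUID] UUIDString];
    NSString *material = [nonce stringByAppendingString:kSharedSecret];
    NSString *hash = MD5Hex([material dataUsingEncoding:NSUTF8StringEncoding]);
    return @{@"nonce": nonce, @"sig": hash};
}
```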
Use some form of digital signature in your requests. While it's rather hard to make this completely tamper-proof (as with anything security-related), it's not that hard to get it 'good enough' to prevent most abuse.
Of course this depends heavily on the sensitivity of the data. If your transactions involve millions of dollars, you'll want it a lot more secure than simple usage-statistics logging (if tampering is hard enough and gains the attacker little to nothing beyond annoying you, it's safe to assume people won't bother...).
I asked an Apple security engineer about this at WWDC and he said that there is no unassailable way to accomplish this. The best you can do is to make it not worth the effort involved.
I also asked him about possibly using push notifications as a means of doing this and he thought it was a very good idea. The basic idea is that the first access would trigger a push notification in your server that would be sent to the user's iPhone. Since your application is open, it would call into the application:didReceiveRemoteNotification: method and deliver a payload of your own choosing. If you make that payload a nonce, then your application can send the nonce on the next request and you've completed the circle.
You can store the UDID after that and discard any requests bearing unverified UDIDs. As for brute-force guessing of the necessary parameters, you should be implementing a rate-limiting algorithm no matter what.
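A rough sketch of the client half of that idea; the "nonce" payload key is made up, and the server side that sends the push and later verifies the nonce is not shown:

```objc
#import <UIKit/UIKit.h>

@interface AppDelegate : UIResponder <UIApplicationDelegate>
// Held in memory until the next request to the server includes it.
@property (nonatomic, copy) NSString *pendingNonce;
@end

@implementation AppDelegate

- (void)application:(UIApplication *)application
didReceiveRemoteNotification:(NSDictionary *)userInfo {
    // Hypothetical payload shape: {"aps": {...}, "nonce": "..."}
    NSString *nonce = userInfo[@"nonce"];
    if ([nonce isKindOfClass:[NSString class]]) {
        self.pendingNonce = nonce;   // attach to the next API request
    }
}

@end
```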
A very cheap way to do this could be to have the iPhone software send extra data with the query, such as a long password string, so that not just anyone can access the feed.
Someone could reverse engineer what you have done, or listen to data sent over the network, to discover the password; but if bandwidth limitations are the reason for doing this, then a simple password should be good enough.
Of course this method has its problems, and certificate-based authentication will actually be secure, although it will be harder to code.
The most secure solution is probably a digital signature on the request. You can keep a secret key inside the iPhone app, and use it to sign the requests, which you can then verify on the server side. This avoids sending the key/password to the server, which would allow someone to capture it with a network sniffer.
A simple solution might be just to use HTTPS - keeping the contents of your messages secure despite the presence of potential eavesdroppers is the whole point of HTTPS. I'm not sure if you can do self-signed certificates with the standard NSURLConnection stuff, but if you have a server-side certificate, you're at least protected from eavesdropping. And it's a lot less code for you to write (actually, none).
I suppose if you use HTTPS as your only security, then you're potentially open to someone guessing the URL. If that's a concern, adding just about any kind of parameter validation to the web service will take care of that.
The problem with most if not all solutions here is that they are rather prone to breaking once you add proxies in the mix. If a proxy connects to your webservice, is that OK? After all, it is probably doing so on behalf of an iPhone somewhere - perhaps in China? And if it's OK for a proxy to impersonate an iPhone, then how do you determine which impersonations are OK?
Have some kind of key that changes every 5 minutes based on an algorithm which uses the current time (GMT). Always allow the last two keys in. This isn't perfect, of course, but it keeps the target moving, and you can combine it with other strategies and tactics.
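One way to sketch such a rolling key is an HMAC of the current five-minute window number, keyed with a baked-in secret; the server computes the same value for the current and previous windows and accepts either. The secret name below is hypothetical:

```objc
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonHMAC.h>

// Hypothetical secret baked into both the app and the server.
static NSString * const kRollingSecret = @"shared-secret-baked-into-app-and-server";

// Key for the five-minute window containing `time`; pass offset -1 for the previous window.
NSString *RollingKey(NSTimeInterval time, NSInteger offset) {
    uint64_t window = (uint64_t)((int64_t)(time / 300.0) + offset);   // 300 s = 5 minutes
    uint64_t bigEndian = CFSwapInt64HostToBig(window);

    NSData *secret = [kRollingSecret dataUsingEncoding:NSUTF8StringEncoding];
    unsigned char mac[CC_SHA1_DIGEST_LENGTH];
    CCHmac(kCCHmacAlgSHA1, secret.bytes, secret.length,
           &bigEndian, sizeof(bigEndian), mac);

    NSMutableString *hex = [NSMutableString stringWithCapacity:2 * CC_SHA1_DIGEST_LENGTH];
    for (int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", mac[i]];
    }
    return hex;
}
```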
I assume you just want to dissuade casual use of your service; clearly this isn't about making the app truly secure.
I need my application to use the client's phone number to generate a unique ID for my web service. Of course a phone number is unique, but it must be kept secure. So it can be implemented with symmetric encryption (asymmetric will come later, because of limited resources), but I do not know where to store the encryption key.
1. I do not know why, but it seems bad to store the key as a static field in code, maybe because it's too easy to read it from there without even running the application.
2. It seems better to store the key in the Keychain and fetch it from there on request. But to avoid #1, the key would have to be installed into the Keychain during the installation process. Is that possible? How would I do that?
3. I do not know what certificates do. Are they helpful for this problem?
4. Transferring the key from the server is also a bad idea, because it would be very easy to sniff.
The way you solve the sniffing problem is that you communicate over HTTPS for your web service. NSURLConnection will do this easily, and all web service engines I know of handle HTTPS without trouble. This will get rid of many of your problems right away.
On which machine is the 100-1000x decrypt the bottleneck? Is your server so busy that it can't do an asym decryption? You should be doing this so infrequently on the phone that it should be irrelevant. I'm not saying asym is the answer here; only that its performance overhead shouldn't be the issue for securing a single string, decrypted once.
Your service requires SMS such that all users must provide their phone number? Are you trying to automate grabbing the phone number, or do you let the user enter it themselves? Automatically grabbing the phone number through the private APIs (or the non-private but undocumented configuration data) and sending that to a server is likely to run afoul of terms of service. This is a specific use-case Apple wants to protect the user from. You definitely need to be very clear in your UI that you are doing this and get explicit user permission.
Personally I'd authenticate as follows:
Server sends challenge byte
Client sends UUID, date, and hash(UUID+challenge+userPassword+obfuscationKey+date) (sketched below).
Server calculates same, makes sure date is in legal range (30-60s is good) and validates.
At this point I generally have the server generate a long, sparse, random session id which the client may use for the remainder of this "session" (anywhere from the next few minutes to the next year) rather than re-authenticating in every message.
ObfuscationKey is a secret key you hardcode into your program and server to make it harder for third parties to create bogus clients. It is not possible, period, not possible, to securely ensure that only your client can talk to your server. The obfuscationKey helps, however, especially on iPhone where reverse engineering is more difficult. Using UUID also helps because it is much less known to third-parties than phone number.
Note "userPassword" in there. The user should authenticate using something only the user knows. Neither the UUID nor the phone number is such a thing.
The system above, plus HTTPS, should be straightforward to implement (I've done it many times in many languages), have good performance, and be secure to an appropriate level for a broad range of "appropriate."
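A sketch of the client-side digest from step 2, using SHA-256; kObfuscationKey stands in for the hardcoded value described above, and the concatenation order just has to match whatever the server does:

```objc
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>

// Hypothetical hardcoded obfuscation key shared with the server.
static NSString * const kObfuscationKey = @"hardcoded-obfuscation-key";

static NSString *SHA256Hex(NSData *data) {
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(data.bytes, (CC_LONG)data.length, digest);
    NSMutableString *hex = [NSMutableString stringWithCapacity:2 * CC_SHA256_DIGEST_LENGTH];
    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", digest[i]];
    }
    return hex;
}

// hash(UUID + challenge + userPassword + obfuscationKey + date)
NSString *AuthDigest(NSString *uuid, NSString *challenge,
                     NSString *userPassword, NSString *dateString) {
    NSString *material = [NSString stringWithFormat:@"%@%@%@%@%@",
                          uuid, challenge, userPassword, kObfuscationKey, dateString];
    return SHA256Hex([material dataUsingEncoding:NSUTF8StringEncoding]);
}
```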
I don't think you're going to be able to do what you want securely with symmetric encryption. With asym you can send the public key without worrying about it too much (only threat is someone substituting your key with their own) and validate the encrypted unique id on your server with the private key.
Let's say I need to access a web service from an iPhone app. This web service requires clients to digitally sign HTTP requests in order to prove that the app "knows" a shared secret; a client key. The request signature is stored in a HTTP header and the request is simply sent over HTTP (not HTTPS).
This key must stay secret at all times yet needs to be used by the iPhone app.
So, how would you securely store this key given that you've always been told to never store anything sensitive on the client side?
The average user (99% of users) will happily just use the application. There will be somebody (an enemy?) who wants that secret client key so as to do the service or client key owner harm by way of impersonation. Such a person might jailbreak their phone, get access to the binary, run 'strings' or a hex editor and poke around. Thus, just storing the key in the source code is a terrible idea.
Another idea is storing the key in code, not as a string literal but in an NSMutableArray created from byte literals.
One can use the Keychain but since an iPhone app never has to supply a password to store things in the Keychain, I'm wary that someone with access to the app's sandbox can and will be able to simply look at or trivially decode items therein.
EDIT - so I read this about the Keychain: "In iPhone OS, an application always has access to its own keychain items and does not have access to any other application’s items. The system generates its own password for the keychain, and stores the key on the device in such a way that it is not accessible to any application."
So perhaps this is the best place to store the key.... If so, how do I ship with the key pre-entered into the app's keychain? Is that possible? Else, how could you add the key on first launch without the key being in the source code? Hmm..
EDIT - Filed bug report # 6584858 at http://bugreport.apple.com
Thanks.
The goal is, ultimately, restrict access of the web service to authorized users, right? Very easy if you control the web service (if you don't -- wrap it in a web service which you do control).
1) Create a public/private key pair. The private key goes on the web service server, which is put in a dungeon and guarded by a dragon. The public key goes on the phone. If someone is able to read the public key, this is not a problem.
2) Have each copy of the application generate a unique identifier. How you do this is up to you. For example, you could build it into the executable on download (is this possible for iPhone apps?). You could use the phone's GUID, assuming there is a way of calculating one. You could also redo this per session if you really wanted.
3) Use the public key to encrypt "My unique identifier is $FOO and I approved this message". Submit that with every request to the web service (see the sketch after this list).
4) The web service decrypts each request, bouncing any which don't contain a valid identifier. You can do as much or as little work as you want here: keep a whitelist/blacklist, monitor usage on a per-identifier basis and investigate suspicious behavior, etc.
5) Since the unique identifier now never gets sent over the wire, the only way to compromise it is to have physical access to the phone. If they have physical access to the phone, you lose control of any data anywhere on the phone. Always. Can't be helped. That is why we built the system such that compromising one phone never compromises more than one account.
6) Build business processes to accommodate the need to a) remove access from a user who is abusing it and b) restore access to a user whose phone has been physically compromised (this is going to be very, very infrequent unless the user is the adversary).
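A sketch of step 3 using the newer Security framework calls (SecKeyCreateWithData and SecKeyCreateEncryptedData are iOS 10+); the bundled key file name is hypothetical and the key data is assumed to be a PKCS#1 DER-encoded RSA public key:

```objc
#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Encrypt "My unique identifier is $FOO and I approved this message"
// with the RSA public key shipped in the app bundle.
NSData *EncryptedIdentifierMessage(NSString *identifier) {
    // Hypothetical bundled key file (PKCS#1 DER-encoded RSA public key).
    NSString *path = [[NSBundle mainBundle] pathForResource:@"service_public" ofType:@"der"];
    NSData *keyData = [NSData dataWithContentsOfFile:path];

    NSDictionary *attributes = @{
        (__bridge id)kSecAttrKeyType:  (__bridge id)kSecAttrKeyTypeRSA,
        (__bridge id)kSecAttrKeyClass: (__bridge id)kSecAttrKeyClassPublic,
    };
    CFErrorRef error = NULL;
    SecKeyRef publicKey = SecKeyCreateWithData((__bridge CFDataRef)keyData,
                                               (__bridge CFDictionaryRef)attributes, &error);
    if (!publicKey) { return nil; }

    NSString *message = [NSString stringWithFormat:
        @"My unique identifier is %@ and I approved this message", identifier];
    NSData *plaintext = [message dataUsingEncoding:NSUTF8StringEncoding];
    NSData *ciphertext = (__bridge_transfer NSData *)SecKeyCreateEncryptedData(
        publicKey, kSecKeyAlgorithmRSAEncryptionOAEPSHA256,
        (__bridge CFDataRef)plaintext, &error);
    CFRelease(publicKey);
    return ciphertext;
}
```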
The simple answer is that as things stand today it's just not possible to keep secrets on the iPhone. A jailbroken iPhone is just a general-purpose computer that fits in your hand. There's no trusted platform hardware that you can access. The user can spoof anything you can imagine using to uniquely identify a given device. The user can inject code into your process to do things like inspect the keychain. (Search for MobileSubstrate to see what I mean.) Sorry, you're screwed.
One ray of light in this situation is in app purchase receipts. If you sell an item in your app using in app purchase you get a receipt that's crypto signed and can be verified with Apple on demand. Even though you can't keep the receipt secret it can be traced (by Apple, not you) to a specific purchase, which might discourage pirates from sharing them. You can also throttle access to your server on a per-receipt basis to prevent your server resources from being drained by pirates.
UAObfuscatedString could be a solution to your problem. From the docs:
When you write code that has a string constant in it, this string is saved in the binary in clear text. A hacker could potentially discover exploits or change the string to affect your app's behavior. UAObfuscatedString only ever stores single characters in the binary, then combines them at runtime to produce your string. It is highly unlikely that these single letters will be discoverable in the binary as they will be interjected at random places in the compiled code. Thus, they appear to be randomized code to anyone trying to extract strings.
If you can bear to be iPhone OS 3.0-only, you may want to look at push notifications. I can't go into the specifics, but you can deliver a payload to Apple's servers along with the notification itself. When they accept the alert (or if your app is running), then some part of your code is called and the keychain item is stored. At this point, that is the only route to securely storing a secret on an iPhone that I can think of.
I had the same question and spent a lot of time poking around for an answer. The issue is a chicken-and-egg one: how to pre-populate the keychain with data needed by your app.
In any case, I found a technique that at least will make it harder for a jailbreaker to uncover the information - they'll at least have to disassemble your code to find out what you did to mask the info:
String Obfuscation (if the link breaks search for "Obfuscate / Encrypt a String (NSString)")
Essentially, the string is obfuscated before being placed in the app, and then you un-obfuscate it in code.
It's better than doing nothing.
David
EDIT: I actually used this in an app. I put a base coding string into the Info.plist, then did several operations on it in code - rot13, rotate/invert bytes, etc. The final processed string was used to decode the obfuscated string (a small sketch of this style of masking follows below). Now, the three-letter agencies could certainly break this - but at the cost of many hours spent decoding the binary.
I was going to say that this is the best technique I've come across, but I just read Kiran's post on UAObfuscatedString (different answer), which is a completely different way to obfuscate. It has the benefit of no strings saved anywhere in the app - each letter is turned into a method call. The selectors will show up as strings, so a hacker can quickly tell that your class used that technique though.
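For illustration, a tiny sketch of that byte-masking style; the bytes below decode to the literal string "secret" under a 0x6B XOR mask, a real key and mask would obviously differ, and this only raises the bar slightly:

```objc
#import <Foundation/Foundation.h>

// The key never appears as a contiguous string in the binary; it is
// reassembled at runtime from XOR-masked byte literals.
static NSString *ObfuscatedKey(void) {
    static const unsigned char masked[] = {0x18, 0x0E, 0x08, 0x19, 0x0E, 0x1F}; // "secret" ^ 0x6B
    char plain[sizeof(masked) + 1];
    for (size_t i = 0; i < sizeof(masked); i++) {
        plain[i] = (char)(masked[i] ^ 0x6B);
    }
    plain[sizeof(masked)] = '\0';
    return [NSString stringWithUTF8String:plain];
}
```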
I think that this similar question, and my answer, may be relevant to your case too. In a nutshell, there was some talk of a trusted platform module being present in an iPhone. This would allow your service to trust an iPhone, even in the hands of an attacker. However, it looks like using the keychain is your best bet.
Did you consider/try the Push Notification suggestion, for initially transmitting the secret to the app & keychain? Or end up finding some other method to achieve this?
I'm going to have my iPhone app upload images to Amazon S3. Instead of putting the AWS credentials in the app, I am going to have the app phone home to my server for the URI and headers to use in the S3 upload request. My server will generate the S3 URI, the proper signatures, etc. I can then implement a tighter, more specific security model on my app's web service than AWS offers by itself, and not give away my AWS keys to anyone with a jailbroken iPhone (a sketch of this flow appears after the list below).
But there still has to be some trust (credentials or otherwise) given to the app, and that trust can be stolen. All you can ever do is limit the damage done if someone jailbreaks an iPhone and steals whatever credentials are in the app. The more powerful those credentials are, the worse things are. Ways to limit the power of credentials include:
avoid global credentials. make them per-user/application
avoid permanent credentials. make them temporary if possible
avoid global permissions. give them only the permissions they need. for instance, write permissions might be broken down into insert, overwrite, delete, write against resource group A or B, etc, and read could be broken into read named resources, read a list of all existing resources, read resource groups A or B, etc.
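A sketch of the phone-home flow described above; both URLs and the JSON shape returned by the server are hypothetical, and the server is what actually holds the AWS credentials and builds the pre-signed S3 URL:

```objc
#import <Foundation/Foundation.h>

// 1. Ask our own server for a short-lived, pre-signed S3 upload URL.
// 2. PUT the image bytes directly to that URL.
void UploadImage(NSData *imageData) {
    NSURL *tokenURL = [NSURL URLWithString:@"https://api.example.com/s3-upload-url"];
    NSURLSession *session = [NSURLSession sharedSession];

    [[session dataTaskWithURL:tokenURL
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (!data) { return; }
        // Expected response (hypothetical): {"uploadURL": "https://bucket.s3.amazonaws.com/...signed..."}
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        NSString *uploadURLString = json[@"uploadURL"];
        if (![uploadURLString isKindOfClass:[NSString class]]) { return; }

        NSMutableURLRequest *put =
            [NSMutableURLRequest requestWithURL:[NSURL URLWithString:uploadURLString]];
        put.HTTPMethod = @"PUT";
        [[session uploadTaskWithRequest:put
                               fromData:imageData
                      completionHandler:^(NSData *d, NSURLResponse *r, NSError *e) {
            // Handle success/failure, retries, etc.
        }] resume];
    }] resume];
}
```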
I would recommend creating a key at run time if possible. That way, if the key were captured during a particular session, it would become worthless once the session ends. An attacker could still pull the key from memory if they were smart enough, but it wouldn't matter much, since the key would become invalid after a period of time.
Sounds wonky. Would use HTTPS and maybe an encryption package to handle the key.
I think CommonCrypto is available for iPhone.
EDIT: Still sounds wonky. Why would anyone pass a secret key in an HTTP header? Anyone who traces your network traffic (via a logging wifi router, for instance) would see it.
There are well-established security methods for encrypting message traffic...why not use them rather than invent what is basically a trivially flawed system?
EDIT II: Ah, I see. I would go ahead and use the Keychain...I think it is intended for just these kinds of cases. I missed that you were generating the request signature using the key. I would still use HTTPS if I could, though, since that way you don't risk people deducing your key-generation scheme by inspecting enough signatures.