OAuth secrets in mobile apps - iPhone

When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app for that matter)?
Storing the string in the app is obviously not good, as someone could easily find it and abuse it.
Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.
The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).
Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?

Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.
If you think about it and ask why we have secrets at all, it's mostly for provisioning and disabling apps. If our secret is compromised, the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.
The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have the user go and create a secret on their own and enter the key into your desktop app themselves (some Facebook apps did something similar for a long time, having the user go and create a Facebook app of their own to set up their custom quizzes and crap). It's not a great experience for the user.
I'm working on a proposal for a delegation system for OAuth. The concept is that, using our own secret key we get from our provider, we could issue our own delegated secret to each of our desktop clients (one for each desktop app, basically), and then, during the auth process, send that key over to the top-level provider, which calls back to us and re-validates it with us. That way we can revoke the secrets we issue to each desktop client. (This borrows a lot from how SSL works.) This entire system would also be perfect for value-add web services that pass calls on to a third-party web service.
The process could also be done without delegation verification callbacks if the top-level provider provides an API to generate and revoke new delegated secrets. Facebook does something similar by allowing Facebook apps to let users create sub-apps.
There are some talks about the issue online:
http://blog.atebits.com/2009/02/fixing-oauth/
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14
Twitter's and Yammer's solution is an authentication PIN solution:
https://dev.twitter.com/oauth/pin-based
https://www.yammer.com/api_oauth_security_addendum.html

With OAuth 2.0, you can store the secret on the server. Use the server to acquire an access token that you then move to the app, and you can make calls from the app to the resource directly.
With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.
Both require some mechanism by which your server component knows it is your client calling it. This tends to be done on installation, using a platform-specific mechanism to get an app ID of some kind into the call to your server.
(I am the editor of the OAuth 2.0 spec)

One solution could be to hard code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way - split it into segments, shift characters by an offset, rotate it - do any or all of these things. A cracker can analyse your byte code and find strings, but the obfuscation code might be hard to figure out.
It's not a foolproof solution, but a cheap one.
Depending on the value of the exploit, some genius crackers can go to greater lengths to find your secret code. You need to weigh the factors: the cost of the previously mentioned server-side solution, the incentive for crackers to spend more effort on finding your secret code, and the complexity of the obfuscation you can implement.
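For illustration, here is a minimal Swift sketch of the split-and-shift idea described above. The segments and offset are made-up placeholders, not a real secret; a real app would likely combine several such tricks.

```swift
// Hypothetical example: the secret "k7Qzp2Xwm9Lt" stored as byte-shifted segments.
// At build time every byte was shifted up by 3; at run time we undo the shift.
let obfuscatedSegments = ["n:T}", "s5[z", "p<Ow"]   // "k7Qz", "p2Xw", "m9Lt" shifted by +3
let shiftOffset: UInt8 = 3

func recoverSecret(from segments: [String], shiftedBy offset: UInt8) -> String {
    let joined = segments.joined(separator: "")       // reassemble the pieces
    let bytes = joined.utf8.map { $0 &- offset }       // reverse the byte shift
    return String(decoding: bytes, as: UTF8.self)
}

let consumerSecret = recoverSecret(from: obfuscatedSegments, shiftedBy: shiftOffset)
```

Anyone who disassembles the binary can still reconstruct this, so it only raises the cost of the attack rather than preventing it.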

Do not store the secret inside the application.
You need to have a server that can be accessed by the application over https (obviously) and you store the secret on it.
When someone wants to log in via your mobile/desktop application, your application simply forwards the request to the server, which then appends the secret and sends it to the service provider. Your server can then tell your application whether it was successful or not.
Then, if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
There is not really any option except storing it on a server. Nothing on the client side is secure.
Note
That said, this will only protect you against a malicious client, but not the client against a malicious you, and not the client against other malicious clients (phishing)...
OAuth is a much better protocol in the browser than on desktop/mobile.

There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.
PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret.
It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers.
from https://oauth.net/2/pkce/
For more information, you can read the full RFC 7636 or this short introduction.
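As a rough illustration of what PKCE involves on the client, here is a minimal Swift sketch (assuming CryptoKit and the Security framework) that derives a code_challenge from a random code_verifier; only the challenge goes in the authorization request, and the plain verifier goes in the later token request, so no client secret is ever needed.

```swift
import Foundation
import Security
import CryptoKit

// Base64url encoding without padding, as RFC 7636 requires.
func base64URL(_ data: Data) -> String {
    data.base64EncodedString()
        .replacingOccurrences(of: "+", with: "-")
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "=", with: "")
}

// 1. Generate a high-entropy code_verifier and keep it in memory for this flow.
var randomBytes = [UInt8](repeating: 0, count: 32)
_ = SecRandomCopyBytes(kSecRandomDefault, randomBytes.count, &randomBytes)
let codeVerifier = base64URL(Data(randomBytes))

// 2. code_challenge = BASE64URL(SHA256(ASCII(code_verifier))),
//    sent in the authorization request together with code_challenge_method=S256.
let codeChallenge = base64URL(Data(SHA256.hash(data: Data(codeVerifier.utf8))))

// 3. The later token request includes the plain code_verifier; the authorization
//    server hashes it and compares it against the challenge it saw earlier.
```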

Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".
Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.

With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.
A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
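To make the point concrete, here is a minimal Swift sketch of using such an access token from the app; the host and token value are placeholders.

```swift
import Foundation

// The access token obtained from the client-side (implicit) flow is all you send;
// no client secret ever touches the device.
var request = URLRequest(url: URL(string: "https://graph.example.com/me")!)
request.setValue("Bearer ACCESS_TOKEN_FROM_CLIENT_SIDE_FLOW", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, response, error in
    // Handle the protected resource response here.
}.resume()
```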

I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
I always thought the intention behind OAuth was so that every Tom, Dick, and Harry with a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.

I agree with Felixyz. OAuth, whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means that the default is to use 'anonymous' access.
The Google App Engine OAuth implementation's browser authorization step takes you to a page that contains text like:
"The site <some-site> is requesting access to your Google Account for the product(s) listed below"
YourApp(yourapp.appspot.com) - not affiliated with Google
etc
It takes <some-site> from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback.
So if you use 'anonymous' access or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your GAE app.
The Google OAuth authorization page also contains lots of warnings, which have three levels of severity depending on whether you're using 'anonymous', a consumer secret, or public keys.
Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.
This blog post clarifies how consumer secrets don't really work with installed apps.
http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/

Here I have answered how to securely store your OAuth information in a mobile application:
https://stackoverflow.com/a/17359809/998483
https://sites.google.com/site/greateindiaclub/mobil-apps/ios/securelystoringoauthkeysiniosapplication

Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy
As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.

None of these solutions prevent a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the http headers.
One solution could be to have a dynamic secret made up of a timestamp encrypted with a private two-way encryption key and algorithm. The service then decrypts the secret and checks whether the timestamp is within +/- 5 minutes.
In this way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
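A minimal Swift sketch of that idea, assuming CryptoKit and a key shared out of band between the app and the service; the key material here is purely illustrative, and (as the answer above notes) the key itself still has to live somewhere in the app.

```swift
import Foundation
import CryptoKit

// Illustrative shared key; in the scheme above the same key lives in the app and the service.
let sharedKey = SymmetricKey(data: SHA256.hash(data: Data("illustrative-shared-key".utf8)))

// Client side: encrypt the current Unix timestamp and send it as the per-request secret.
func makeDynamicSecret(now: Date = Date()) throws -> String {
    let plaintext = Data(String(Int(now.timeIntervalSince1970)).utf8)
    // Default AES-GCM nonce is 12 bytes, so `combined` is always non-nil here.
    return try AES.GCM.seal(plaintext, using: sharedKey).combined!.base64EncodedString()
}

// Server side: decrypt and accept only if the timestamp is within +/- 5 minutes.
func isSecretFresh(_ secret: String, now: Date = Date()) -> Bool {
    guard let data = Data(base64Encoded: secret),
          let box = try? AES.GCM.SealedBox(combined: data),
          let plaintext = try? AES.GCM.open(box, using: sharedKey),
          let timestamp = TimeInterval(String(decoding: plaintext, as: UTF8.self))
    else { return false }
    return abs(now.timeIntervalSince1970 - timestamp) <= 300
}
```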

I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.
And a crazy idea just hit me: The simplest idea is to store the secret inside the binary, but obfuscated somehow, or, in other words, you store an encrypted secret. So, that means you've got to store a key to decrypt your secret, which seems to have taken us full circle. However, why not just use a key which is already in the OS, i.e. it's defined by the OS not by your application.
So, to clarify, my idea is that you pick a string defined by the OS; it doesn't matter which one. Then encrypt your secret using this string as the key, and store that in your app. Then, at runtime, decrypt the value using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
Will that work?
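If it helps to picture it, here is a minimal sketch of that idea on iOS, assuming CryptoKit and using the bundle identifier as the OS-defined string. The ciphertext would be produced at build time with the same derived key; the blob passed in here is just a placeholder.

```swift
import Foundation
import CryptoKit

// Key derived from a string the OS/app environment already defines, not stored by us.
let osDefinedString = Bundle.main.bundleIdentifier ?? "com.example.app"
let wrappingKey = SymmetricKey(data: SHA256.hash(data: Data(osDefinedString.utf8)))

// The binary only contains the encrypted blob, never the plain secret.
func decryptEmbeddedSecret(_ base64Ciphertext: String) -> String? {
    guard let combined = Data(base64Encoded: base64Ciphertext),
          let box = try? AES.GCM.SealedBox(combined: combined),
          let plaintext = try? AES.GCM.open(box, using: wrappingKey) else { return nil }
    return String(decoding: plaintext, as: UTF8.self)
}
```

Of course, anyone disassembling the binary can also see which OS string is used as the key, so, like the other obfuscation approaches discussed here, this raises the bar rather than removing it.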

As others have mentioned, there should be no real issue with storing the secret locally on the device.
On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
In order to obtain the secret, one would have to obtain root access to the Android phone.

Related

Which is better between basic auth and token auth from a security perspective?

I am currently developing a RESTful API server, and I am choosing between using ID and password or using a token to authenticate a user.
Let me explain my situation first. I need to include static authentication information in my library to communicate between a client and my server, or provide it to a partner company to communicate between their server and my server. When I researched other services in a similar situation to ours, they are using tokens now (for example, Bugfender uses a token to identify a user).
However, I think that using an ID and password and using a token are equivalent, or that using an ID and password is even better, because there are two pieces of information to verify rather than one.
Is there any reason why other services are using a token?
Which one is better from a security perspective, or is there a better way to do this?
I think that if you are going to use a fixed username/password or a fixed token on your client, then the level of security is the same.
Username and password are not considered multi-factor authentication. Multi-factor means that you are authenticating someone by more than one of these factors:
What you know. This can be the combination of username and password, or some special token.
What you have. This might be some hardware that generates an additional one-time password - the Google Authenticator app on your phone, or an SMS with an OTP that expires after some time.
What you are. This is, for example, your fingerprint or the retina of your eye.
Where you are. This can be the IP address of the origin, if it is applicable to your setup.
How you behave. What is your normal way of using the service.
etc.
It probably goes without saying that both the token and the username/password combination have to be carried in encrypted requests (I believe you are using HTTPS); otherwise the client's identity can be stolen.
How are you going to provide the credentials to your client library? I think this is the trickiest part. If those credentials are saved as configuration (or, worse, hard-coded) on their server, is that storage secure enough? Who is going to have access to it? Can you avoid it?
What would happen if your partner company realizes that the username/password is compromised? Can they change it easily themselves? How fast can you revoke the permissions of stolen credentials?
My advice is also to keep audit logs on your server, recording the activity of the client requests. Remember also the GDPR if you work with European servers, and check for similar regulations in your country based on what you are going to log.
If the credentials (ID and password) and the token are being transferred the same way (say, in a header of a REST request) over a TLS-secured channel, the only difference lies in the entropy of the password vs. the entropy of the token. Since that is something for you to decide in both cases, there is no real difference from a security perspective.
NOTE: I don't count the ID as a secret, as it usually is something far easier to guess than a secret should be.
I'd go for a solution that is easier to implement and manage.
IMHO this would be HTTP basic authentication, as you usually get full support from your framework/web server with little danger of making security mistakes in authentication logic. You know, friends don't let friends write their own auth. ;)
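For reference, a minimal sketch of what HTTP Basic authentication looks like from the client library's side; the credentials and host are placeholders, and in practice your HTTP framework will usually build this header for you.

```swift
import Foundation

// Basic auth is just base64("id:password") in the Authorization header, always sent over HTTPS.
let credentials = Data("client-id:client-password".utf8).base64EncodedString()
var request = URLRequest(url: URL(string: "https://api.example.com/v1/resource")!)
request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
```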

Mobile app/API security: will a hardcoded access key suffice?

I'm building a mobile app which allows users to find stores and discounts near their location. The mobile app gets that information from the server via a REST API, and I obviously want to protect that API.
Would it be enough to hardcode an access key (128 bit) into the mobile app, send it on every request (enforcing HTTPS), and then check whether it matches the server key? I am aware of JWT, but I believe using it or another token-based approach would give me more flexibility but not necessarily more security.
As far as I can see the only problem with this approach is that I become vulnerable to a malicious developer on our team. Would there be a way to solve this?
First, as discussed many times here, what you want to achieve is not technically possible in a way that could be called secure. You can't authenticate the client, you can only authenticate people. The reason is that anything you put into a client will be fully available to people using it, so they can reproduce any request it makes.
Anything below is just a thought experiment.
Your approach of adding a static key, and sending it with every request is a very naive attempt. As an attacker, it would be enough for me to inspect one single request created by your app and then I would have the key to make any other request.
A very slightly better approach would be to store a key in your app and sign requests with it (instead of sending the actual key), for example by computing an HMAC of the request with the key and adding that as an additional request header. That's better, because I would have to disassemble the app to get the key - more work, which could deter me, but still very much possible.
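A minimal Swift sketch of that signing approach, assuming CryptoKit; the key material and header name are illustrative only.

```swift
import Foundation
import CryptoKit

// Illustrative embedded key; the point is that the key itself never travels on the wire.
let requestKey = SymmetricKey(data: SHA256.hash(data: Data("illustrative-embedded-key".utf8)))

func signedRequest(url: URL, body: Data) -> URLRequest {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.httpBody = body
    // Sign method + path + body so the server can recompute the HMAC and compare.
    let message = Data("POST \(url.path)".utf8) + body
    let mac = HMAC<SHA256>.authenticationCode(for: message, using: requestKey)
    let signature = Data(mac).map { String(format: "%02x", $0) }.joined()
    request.setValue(signature, forHTTPHeaderField: "X-Request-Signature")
    return request
}
```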
As a next step, that secret key could be generated upon the first use of your app, and there we are getting close to as good as this can get. Your user installs the app, you generate a key, store it in the appropriate keystore of your platform, and register it with the user. From then on, your backend requires every request to be signed (e.g. with the HMAC method above) with the key associated with that user. The obvious problem is how you would assign keys - and such a key would still authenticate a user, not his device or client. But at least with a proper keystore on a mobile platform, it would not be straightforward to get the key without rooting/jailbreaking. And nothing would keep an attacker from installing the app on a rooted device or an emulator.
So the bottom line is: you can't prevent people from accessing your API with a different client. However, in the vast majority of cases they wouldn't want to anyway. If you really want to protect it, it should not be public, and a mobile app is not an appropriate platform.
YOUR PROBLEM
Would it be enough to hardcode an access key (128 bit) into the mobile app, and send it on every request (enforcing https), and then check if it matches the server key?
Well, no, because any secret you hide in a mobile app is no longer a secret: it will be accessible to anyone willing to spend some time reverse engineering your mobile app or performing a MitM attack against it on a device they control.
In the article How to Extract an API key from a Mobile App by Static Binary Analysis I show how easy it is to extract a secret from the binary, but I also show a better approach to hiding it that makes it harder to reverse engineer:
It's time to look for a more advanced technique to hide the API key in a way that will be very hard to reverse engineer from the APK, and for this we will make use of native C++ code to store the API key, by leveraging the JNI interface which uses NDK under the hood.
While the secret may be hard to reverse engineer, it will be easy to extract with a MitM attack, and that is what I talk about in the article Steal that API key with a MitM Attack:
So, in this article you will learn how to setup and run a MitM attack to intercept https traffic in a mobile device under your control, so that you can steal the API key. Finally, you will see at a high level how MitM attacks can be mitigated.
If you read the article you will learn how an attacker will be able to extract any secret you transmit over https to your API server, therefore a malicious developer in your team will not be your only concern.
You can go and learn how to implement certificate pinning to protect your HTTPS connection to the API server, and I have also written an article on it, entitled Securing Https with Certificate Pinning on Android:
In this article you have learned that certificate pinning is the act of associating a domain name with their expected X.509 certificate, and that this is necessary to protect trust based assumptions in the certificate chain. Mistakenly issued or compromised certificates are a threat, and it is also necessary to protect the mobile app against their use in hostile environments like public wifis, or against DNS Hijacking attacks.
Finally you learned how to prevent MitM attacks with the implementation of certificate pinning in an Android app that makes use of a network security config file for modern Android devices, and later by using TrustKit package which supports certificate pinning for both modern and old devices.
Sorry, but I need to inform you that certificate pinning can be bypassed on a device the attacker controls, and I show how it can be done in the article entitled Bypass Certificate Pinning:
In this article you will learn how to repackage a mobile app in order to make it trust custom ssl certificates. This will allow us to bypass certificate pinning.
Even though certificate pinning can be bypassed, it is still important to always use it to secure the connection between your mobile app and the API server.
So, what now? Am I doomed to fail in defending my API server? Hope still exists, keep reading!
DEFENDING THE API SERVER
The mobile app gets that information from the server via a REST API, and I obviously want to protect that API.
Protecting an API server for a mobile app is possible, and it can be done by using the Mobile App Attestation concept.
Before I explain the Mobile App Attestation concept in detail, it is important that we first clarify a common misconception among developers: the difference between WHO and WHAT is accessing your API server.
The Difference Between WHO and WHAT is Accessing the API Server
To better understand the differences between the WHO and the WHAT are accessing an API server, let’s use this picture:
The Intended Communication Channel represents the mobile app being used as you expected: by a legit user without any malicious intentions, using an untampered version of the mobile app, and communicating directly with the API server without being man-in-the-middle attacked.
The actual channel may represent several different scenarios, such as a legit user with malicious intentions using a repackaged version of the mobile app, or a hacker using the genuine version of the mobile app while man-in-the-middle attacking it, in order to understand how the communication between the mobile app and the API server is done and to automate attacks against your API. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the mobile app that we can authenticate, authorize and identify in several ways, such as using OpenID Connect or OAuth2 flows.
OAUTH
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests have originated from WHAT you expect, the original version of the mobile app.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the mobile app, or is it a bot, an automated script, or an attacker manually poking around your API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users using a repackaged version of the mobile app, or an automated script trying to game and take advantage of the service provided by the application.
Well, to identify the WHAT, developers tend to resort to an API key that they usually hard-code in the code of their mobile app. Some developers go the extra mile and compute the key at run time in the mobile app, so it becomes a runtime secret, as opposed to the former approach where a static secret is embedded in the code.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?, which you can read in full here; it is the first article in a series about API keys.
Mobile App Attestation
The role of a Mobile App Attestation solution is to guarantee at run time that your mobile app has not been tampered with, is not running on a rooted device, is not being instrumented by a framework like Xposed or Frida, and is not being MitM attacked. This is achieved by running an SDK in the background. A service running in the cloud will challenge the app and, based on the responses, attest to the integrity of the mobile app and the device it is running on; thus the SDK is never responsible for any decisions.
Frida
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
xPosed
Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo.
MiTM Proxy
An interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers.
On successful attestation of the mobile app's integrity, a short-lived JWT token is issued, signed with a secret that only the API server and the Mobile App Attestation service in the cloud know. If the mobile app attestation fails, the JWT token is signed with a secret that the API server does not know.
Now the app must send the JWT token in the headers of every API call. This allows the API server to serve only requests for which it can verify the signature and expiration time of the JWT token, and to refuse any that fail verification.
Since the secret used by the Mobile App Attestation service is not known by the mobile app, it is not possible to reverse engineer it at run time, even when the app has been tampered with, is running on a rooted device, or is communicating over a connection that is the target of a man-in-the-middle attack.
The Mobile App Attestation service already exists as a SaaS solution at Approov (I work here), which provides SDKs for several platforms, including iOS, Android, React Native and others. The integration also needs a small check in the API server code to verify the JWT token issued by the cloud service. This check is necessary for the API server to be able to decide which requests to serve and which to deny.
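For concreteness, here is a minimal sketch (not Approov's actual implementation) of the kind of check the API server would do on such a token, assuming an HS256-signed JWT verified with CryptoKit; the shared secret is a placeholder.

```swift
import Foundation
import CryptoKit

// Placeholder for the secret shared only between the attestation service and the API server.
let attestationSecret = SymmetricKey(data: SHA256.hash(data: Data("shared-attestation-secret".utf8)))

func base64URLDecode(_ value: String) -> Data? {
    var s = value.replacingOccurrences(of: "-", with: "+")
                 .replacingOccurrences(of: "_", with: "/")
    while s.count % 4 != 0 { s += "=" }
    return Data(base64Encoded: s)
}

// Accept the request only if the HS256 signature verifies and the token has not expired.
func isValidAttestationToken(_ jwt: String, now: Date = Date()) -> Bool {
    let parts = jwt.split(separator: ".").map(String.init)
    guard parts.count == 3,
          let signature = base64URLDecode(parts[2]),
          let payload = base64URLDecode(parts[1]) else { return false }
    let signingInput = Data("\(parts[0]).\(parts[1])".utf8)
    guard HMAC<SHA256>.isValidAuthenticationCode(signature,
                                                 authenticating: signingInput,
                                                 using: attestationSecret) else { return false }
    guard let claims = try? JSONSerialization.jsonObject(with: payload) as? [String: Any],
          let exp = claims["exp"] as? TimeInterval else { return false }
    return now.timeIntervalSince1970 < exp
}
```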
SUMMARY
In the end, the solution to use in order to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, like the GDPR regulations in Europe.
DO YOU WANT TO GO THE EXTRA MILE?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

Advice on implementing web server security in iPhone app

I have a relatively successful app in the App Store that allows people to view metrics on their iDevice using a JSON file hosted on their server. The app has a simple settings screen in which you simply type the URL of your JSON file, and the app takes care of visualising the data in the file. I use AFNetworking for this.
For example the URL might be: http://www.mylargecompany.com/factorykpi.json
Customers are now coming back to me and asking for the ability to connect to their servers more securely. The problem is that there are a myriad of ways you can secure your server.
I need some advice. What kind of standard security features would I need to build into my app? I am confused by OAuth, HTTPS, etc.
I believe OAuth would mean the customer's server would have to support it? Can you make a generic OAuth login screen in an app, or do you need to know which web server you are connecting to in order to authenticate?
Any advice on even the most basic of security measure would be very welcome!
Regards,
MonkeyBusiness
Security is really a very broad topic. There is no short answer. In any case, both the web service and the client app need to implement security mechanisms. I would recommend you provide both the web service and the client app.
You likely need some user login with a password, with the server verifying the user's identity and the client verifying the server's identity using certificates. Then you use HTTPS, which ensures confidential data is transported securely. The web service should be implemented with one of the well-known web application frameworks, since security is a scary and tricky business; implementing everything yourself might end in a suboptimal, insecure application.
You should now read more about the complex topic and come back when you have specific questions.
... most basic would be to use HTTPS, which would secure the transaction, but anybody accessing the same link would be able to access the same data. Thus you will need some kind of authentication, starting with a simple secret key passed in a POST request, a username and password, and/or certificates.

Possible approach to secure a Rest API endpoints using Facebook OAuth

I've been reading a lot about the topic but all I find are obsolete or partial answers, which don't really help me that much and actually just confused me more.
I'm writing a Rest API (Node+Express+MongoDB) that is accessed by a web app (hosted on the same domain than the API) and an Android app.
I want the API to be accessed only by my applications and only by authorized users.
I also want the users to be able to signup and login only using their Facebook account, and I need to be able to access some basic info like name, profile pic and email.
A possible scenario that I have in mind is:
1. The user logs in on the web app using Facebook; the app is granted permission to access the user's Facebook information and receives an access token.
2. The web app asks the API to confirm that this user is actually registered on our system, sending the email and the token received from Facebook.
3. The API verifies that the user exists, stores the username, the token and a timestamp into the DB (or Redis), and then responds to the client app.
4. Each time the client app hits one of the API endpoints, it has to provide the username and the token, in addition to other info.
5. The API verifies each time that the provided username/token pair matches the most recent pair stored in the DB (using the timestamp to order), and that no more than 1 hour has passed since that info was stored (again using the timestamp). If that's the case, the API processes the request; otherwise it issues a 401 Unauthorized response.
Does this make sense?
Does this approach have any macroscopic security hole that I'm missing?
One problem I see with using MongoDB to store this info is that the collection will quickly become bloated with old tokens.
In this sense I think it would be best to use Redis with an expire policy of 1 hour so that old info will be automatically removed by Redis.
I think the better solution would be this:
1. Log in via Facebook.
2. Pass the Facebook AccessToken to the server (over SSL for the Android app; for the web app, just have it redirect to an API endpoint after FB login).
3. Check the given fb_access_token and make sure it's valid. Get the user_id and email and cross-reference them with existing users to see whether it's a new or an existing one.
4. Now create a random, separate api_access_token that you give back to the web app and the Android app. If you need Facebook for anything other than login, store that fb_access_token and, in your DB, associate it with the new api_access_token and your user_id.
5. For every call hereafter, send the api_access_token to authenticate it. If you need the fb_access_token for getting more info, you can retrieve it from the DB.
In summary: whenever you can, avoid passing the fb_access_token. If the api_access_token is compromised, you have more control to see who the attacker is, what they're doing, etc. than if they were to get hold of the fb_access_token. You also have more control over setting an expiration date, extending fb_access_tokens, etc.
Just make sure that whenever you pass an access_token of any sort via HTTP, you use SSL.
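To illustrate steps 2 and 5 from the app's side, here is a minimal sketch (shown in Swift for consistency with the rest of this page; the same flow applies on Android or the web). The endpoint, field names and response shape are assumptions, not a real API.

```swift
import Foundation

// Hypothetical response shape from your own API's Facebook-login endpoint.
struct TokenResponse: Decodable { let api_access_token: String }

// Step 2: trade the Facebook token for your own api_access_token.
func exchangeFacebookToken(_ fbAccessToken: String,
                           completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: URL(string: "https://api.example.com/auth/facebook")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(["fb_access_token": fbAccessToken])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        let token = data.flatMap { try? JSONDecoder().decode(TokenResponse.self, from: $0) }
        completion(token?.api_access_token)  // store this; the Facebook token stays server-side
    }.resume()
}

// Step 5: every later call carries only the api_access_token.
func authorizedRequest(path: String, apiToken: String) -> URLRequest {
    var request = URLRequest(url: URL(string: "https://api.example.com\(path)")!)
    request.setValue("Bearer \(apiToken)", forHTTPHeaderField: "Authorization")
    return request
}
```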
I know I'm late to the party, but I'd like to add a visual representation of this process as I'm dealing with this problem right now (specifically in dealing with the communication between the mobile app and the web api by securing it with a 3rd party provider like facebook).
For simplicity, I haven't included error checks, this is mostly just to outline a reasonable approach. Also for simplicity, I haven't included Tommy's suggestion to only pass your own custom api token once the authorization flow is over, although I agree that this is probably a good approach.
Please feel free to criticize this approach though, and I'll update as necessary.
Also, in this scenario, "My App" refers to a mobile application.

Best practices for securing API credentials as part of an iPhone app

The apps that I build frequently have 'social media service' requirements; e.g.
Twitter
bit.ly
Facebook
For most of these services, I need to have an API key of some sort. I'm trying to work out the best way of including these API keys in the application binary. The desired level of security depends on the possible attacks that can be conducted by malicious attackers.
Twitter
I have an xAuth-enabled key and secret. Both need to be used by the iPhone app.
Fallout from attack
Malicious users could post twitter status updates masquerading as coming from my app. There is no twitter account to hijack and start posting status updates on.
bit.ly
I have a username, password and API key.
To login to the website and access analytics, the username and password are required.
To create links via the API, only the username and API key are required by my iPhone apps. The password will not be in the app in any form.
Fallout from attack
Malicious users could create links on my bit.ly account. They would need to do a separate attack to brute-force or otherwise gain the password to login to the account.
For both of those services, the potential for harm doesn't seem too great. But for other services, it could be much worse.
I can just define the API credentials as strings in a header or inline in the code, but then it's vulnerable to someone running strings on the application binary to see what's in it.
I could then start doing silly concatenation / xor-ing in the code to recreate the API key in memory, and the attacker would have to do a bit more work to recover any keys in the binary. My concern with that is that I'm not a cryptographer and would create an embarrassingly weak form of obfuscation there.
What better suggestions do people have?
The attacker can just sniff your traffic and extract the secret from there. So any obfuscation is easily circumvented.
Even SSL won't help much, since you can intercept the networking API which receives the unencrypted data.
The secure way to solve this is to create your own server, keep the secret stuff server-side, and use your own server from your app; the server then relays requests to the other web service. This way the attacker never has access to the secret.
A good suggestion is not to worry about it. There are plenty of apps that store their API keys in plain text. The point is you need a lot of different bits of information to construct an access token.
As long as you're not storing username+password combos in plain text on the file system or transmitting them over the network without SSL/HTTPS etc then you're fine.