Why is the application secret required for extending a token lifetime? (Facebook)

My understanding of the application secret is that it is meant to be kept "secret".
I am wondering how I could extend the lifetime of a token using the client-side flow without embedding the application secret in the client?

You can't; you'll need to do it server-side (perhaps by proxying the call).
There should be few reasons to have a long-lived token in client code if it's JavaScript, for example, and I believe the iOS and Android SDKs already receive extended tokens when they use the SSO flow.
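If you do proxy it, the server-side exchange looks roughly like the sketch below (Node-style TypeScript). The fb_exchange_token parameters follow Facebook's documented long-lived token grant, but the helper name, env variables, and unversioned Graph API path are my own assumptions; check the current Graph API version.

```typescript
// Server-side only: the app secret never leaves this process.
// Assumes Node 18+ (global fetch) and FB_APP_ID / FB_APP_SECRET env vars.
async function extendAccessToken(shortLivedToken: string): Promise<string> {
  const params = new URLSearchParams({
    grant_type: "fb_exchange_token",
    client_id: process.env.FB_APP_ID!,          // your app id
    client_secret: process.env.FB_APP_SECRET!,  // kept on the server
    fb_exchange_token: shortLivedToken,         // short-lived token sent up by the client
  });

  const res = await fetch(`https://graph.facebook.com/oauth/access_token?${params}`);
  if (!res.ok) throw new Error(`Token exchange failed: ${res.status}`);

  const body = await res.json();                // { access_token, token_type, expires_in }
  return body.access_token;                     // hand only this back to the client
}
```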

Related

How to protect bearer tokens in a web app

I am trying to implement the Authorization Code flow described in RFC 6749 (OAuth 2.0) for a JavaScript-based application. I understand that I should use a web server back-end as a confidential client so that it can protect the access token and refresh token returned by the authorization server and not pass them on to the JavaScript front-end. Then all requests from the front-end to any protected resources go via the web server back-end, which attaches the access token to the request and proxies it on.
My question is how do I let the JavaScript front-end make use of these tokens in a secure way? I assume that I have to do something like set up a session on the web server and pass back a cookie that identifies the session. But this means that the JavaScript application then has a cookie that gives them the same privileges as if they just had direct access to the bearer tokens stored in the web server. How does having a web server to hold the tokens give extra security?
I understand that I should use a web server back-end as a confidential client so that it can protect the access token and refresh token returned by the authorization server and not pass them on to the JavaScript front-end.
No, that is a misunderstanding of the OAuth2 flows and goals.
Here is OAuth2's main goal: your application (which can for instance be a JavaScript program running in the browser, a web server, both, etc.) MUST NOT need to know the user's credentials (most of the time a login/password pair) to access the service on behalf of the user.
Here is the way OAuth2 must be used to achieve this goal:
Given your needs, that is, a JavaScript-based application running in the browser (i.e. not a Node.js application), you would use the OAuth2 implicit flow, not the authorization code flow. But of course, because your application runs in the browser, it cannot persist the credentials needed to access the resource offered by the service provider. The user will have to authenticate to the service provider for each new session of your application.
When your application needs to access the service provider while the user is not logged in, or when your application is able to persist credentials (because it has its own credential system to identify its users), it cannot rely only on a JavaScript program running in the browser. It may rely on a web server alone, or on both a web server and a JavaScript program that talks to that server. In those cases, you must use the authorization code flow.
So, in conclusion: you decided to add a web server to your application because you thought you had to use the authorization code flow. But in your case you probably do not, so you should select the appropriate flow for your application: the implicit flow. That way you do not have to add a web server to run your application.
How does having a web server to hold the tokens give extra security?
This does not give extra security. Having a web server to hold the tokens is simply a way to let your application access the service on behalf of the user, in the background, when the user is not logged in to your application.
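For completeness, if you do keep the tokens on a web server, "holding the tokens" looks roughly like the sketch below (Express and express-session are assumptions, and the resource URL is a placeholder): the browser only ever receives an httpOnly session cookie, and the server attaches the bearer token before proxying the call on.

```typescript
import express from "express";
import session from "express-session";

const app = express();
// The browser gets only this httpOnly session cookie, never the tokens themselves.
app.use(session({
  secret: "replace-me",
  resave: false,
  saveUninitialized: false,
  cookie: { httpOnly: true, secure: true },
}));

// Assumes an earlier authorization-code exchange stored tokens in the session.
app.get("/api/*", async (req, res) => {
  const tokens = (req.session as any).tokens;   // { access_token, refresh_token }
  if (!tokens) return res.status(401).send("Not logged in");

  // The access token is attached here, server-side, and the request is proxied on.
  const upstream = await fetch(`https://resource.example.com${req.path}`, {
    headers: { Authorization: `Bearer ${tokens.access_token}` },
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);
```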
While I agree with Alexandre Fenyo's comments, I just want to add the 2021 version: you should no longer be using the implicit flow, as it is no longer considered secure.
For scenarios such as this where a JavaScript application has to handle tokens I would suggest using the Authorization Code flow with PKCE instead: https://auth0.com/docs/flows/authorization-code-flow-with-proof-key-for-code-exchange-pkce

Why is a signing certificate needed for Implicit Flow in IdentityServer3, but not for ClientCredential and ResourceOwner flows?

I'm trying to get up to speed on using IdentityServer3 for authorization in a .NET web application.
I'm trying to figure out why the Implicit Flow requires that IdentityServer have a signing certificate while the Authorization Flows do not require that.
So far, this is what I understand: the ClientCredential and ResourceOwner flows are "authorization flows" - i.e. they use a client secret, all tokens are returned from the token endpoint, long-lived authentication via refresh tokens is possible, and tokens are not revealed to the user agent/browser (rather, they are stored on the server, and a cookie or similar is sent to the user agent/browser), per https://www.scottbrady91.com/OpenID-Connect/OpenID-Connect-Flows. For a SPA / JS application that directly accesses a Web API, the Implicit Flow should be used instead, because the JS client calls IdentityServer directly from the browser, so there is no way to keep a client secret confidential (it would have to be transmitted to the browser, compromising its secrecy).
All of that makes sense, but in all cases, the IdentityServer generates tokens that the application uses for authorization, so I'm puzzled why they need to be signed in one case but not in the other?
Many thanks for your assistance!
A signing certificate is needed anytime a JWT is issued (id_token or access_token). The implicit flow always returns an id_token, which is a JWT and therefore must be signed, whereas the client credentials and resource owner flows can be configured to issue opaque reference tokens that need no signature.
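To make that concrete: whoever consumes the JWT verifies its signature with the issuer's public key, which is why the issuer needs the private key from a signing certificate. A rough sketch of the consuming side, assuming the jsonwebtoken npm package and placeholder issuer/audience values:

```typescript
import jwt from "jsonwebtoken";

// `publicKeyPem` would come from the identity server's discovery/JWKS endpoint.
// Issuer and audience values are placeholders for this sketch.
function validateToken(token: string, publicKeyPem: string) {
  return jwt.verify(token, publicKeyPem, {
    algorithms: ["RS256"],                         // asymmetric: signed with the certificate's
    issuer: "https://identityserver.example.com",  // private key, verified with its public key
    audience: "my-api",
  });
}
```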

Accessing REST APIs secured using OAUTH

I have a set of REST APIs that are secured by OAuth 2. I need to access them from an Android app and a web app.
Accessing the APIs from the Android app seems pretty straightforward for me to implement. What I am unable to understand here is: what is the correct and secure way to access the same APIs from a web app?
I am thinking, I shouldn't be making any direct calls to the APIs from the browser, using some JS library, for it seems to me that it would be pretty insecure. Instead, I should keep it all traditional, by submitting requests to the web server and then letting it make the REST API call. This would be similar to the method of making REST calls from Android.
Am I correct in my thinking/approach?
Accessing your API should be the same no matter where the request is coming from. You just use an Authorization header with the bearer scheme and put the JWT in there.
The way you get the JWT token is different though, as I explain in this answer. It all depends on how much you trust the client application.
If your client is a web application that queries your API from the server side, you can use the code authorization grant and get an access and refresh token for your API.
If you want to access your API from a JavaScript application, you have no way to hide app-keys or refresh tokens, so you should use the implicit grant.
If you know how to store secrets securely on your Android client, you could use the resource owner password grant.
The code authorization grant is definitely the most secure, as it's much harder to compromise a server application than an application that runs on your machine.
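Whichever grant you use to obtain the token, the API call itself looks the same; a minimal sketch with a placeholder endpoint:

```typescript
// Same request shape from Android, a server, or a browser: only the way the
// token was obtained differs. The endpoint below is a placeholder.
async function getOrders(accessToken: string) {
  const res = await fetch("https://api.example.com/v1/orders", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`API call failed: ${res.status}`);
  return res.json();
}
```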

REST Api authentication - exchange private key

I'm adding a REST API for a mobile application to my existing Grails web app. Since I'm having a hard time integrating an OAuth2 provider into my application, I'm going to implement my own HMAC mechanism.
HMAC uses a secret key, and what I want is for each user of the application to have their own secret key. The question is how to initially transfer the secret between the API and the mobile device in a safe manner.
Of course all communication will be over SSL. But is it safe to send the client secret from the server to the mobile client over the wire when it connects for the first time?
Or should I use one secret and store it with the mobile client, where it could easily be reverse-engineered?
Or maybe there are other and better ways to do it?
You may want to look into shared-key authentication schemes and implement a custom mechanism.
Here is an example of how Amazon uses it for REST requests:
http://docs.aws.amazon.com/AWSECommerceService/latest/DG/Query_QueryAuth.html
and sample Java code:
http://docs.aws.amazon.com/AWSECommerceService/latest/DG/AuthJavaSampleSig2.html
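To give a rough idea of what such shared-key signing looks like, here is a sketch loosely patterned on AWS Signature Version 2 (not a drop-in replacement for it; the string-to-sign format is illustrative):

```typescript
import { createHmac } from "crypto";

// Client and server share `secretKey`; the request is signed, the secret is never sent.
function signRequest(method: string, host: string, path: string,
                     params: Record<string, string>, secretKey: string): string {
  params["Timestamp"] = new Date().toISOString();   // lets the server reject stale/replayed requests

  // Canonicalize the query string so both sides sign exactly the same bytes.
  const canonicalQuery = Object.keys(params).sort()
    .map(k => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`)
    .join("&");

  const stringToSign = [method, host, path, canonicalQuery].join("\n");
  const signature = createHmac("sha256", secretKey).update(stringToSign).digest("base64");

  // The server recomputes the HMAC from the same inputs and compares signatures.
  return `${canonicalQuery}&Signature=${encodeURIComponent(signature)}`;
}
```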

OAuth secrets in mobile apps

When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your data base or on the file system, but what is the best way to handle it in a mobile app (or a desktop app for that matter)?
Storing the string in the app is obviously not good, as someone could easily find it and abuse it.
Another approach would be to store it on your server, and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app.
The only workable solution I can come up with is to first obtain the Access Token as normal (preferably using a web view inside the app), and then route all further communication through our server, which would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable peoples' opinions on this. It doesn't seem to me that most apps are going to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app).
Another thing: I don't believe the secret is involved in initially requesting the Access Token, so that could be done without involving our own server. Am I correct?
Yes, this is an issue with the OAuth design that we are facing ourselves. We opted to proxy all calls through our own server. OAuth wasn't entirely fleshed out with respect to desktop apps. There is no perfect solution to the issue that I've found without changing OAuth.
If you think about it and ask why we have secrets at all, it is mostly for provisioning and disabling apps. If our secret is compromised, the provider can only really revoke the entire app. Since we have to embed our secret in the desktop app, we are sorta screwed.
The solution is to have a different secret for each desktop app. OAuth doesn't make this concept easy. One way is to have the user go and create a secret on their own and enter the key into your desktop app themselves (some Facebook apps did something similar for a long time, having the user go and create their own Facebook keys to set up their custom quizzes and such). It's not a great experience for the user.
I'm working on a proposal for a delegation system for OAuth. The concept is that, using the secret key we get from our provider, we could issue our own delegated secret to each of our desktop clients (one per desktop app, basically), and then during the auth process we send that key over to the top-level provider, which calls back to us and re-validates it with us. That way we can revoke the secrets we issue to each desktop client. (This borrows a lot from how SSL works.) This entire system would also be perfect for value-add web services that pass on calls to a third-party web service.
The process could also be done without delegation verification callbacks if the top-level provider offers an API to generate and revoke new delegated secrets. Facebook is doing something similar by allowing Facebook apps to let users create sub-apps.
There is some discussion of the issue online:
http://blog.atebits.com/2009/02/fixing-oauth/
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/629b03475a3d78a1/de1071bf4b820c14#de1071bf4b820c14
Twitter and Yammer's solution is an authentication PIN solution:
https://dev.twitter.com/oauth/pin-based
https://www.yammer.com/api_oauth_security_addendum.html
With OAuth 2.0, you can store the secret on the server. Use the server to acquire an access token, move that token to the app, and the app can then make calls to the resource directly.
With OAuth 1.0 (Twitter), the secret is required to make API calls. Proxying calls through the server is the only way to ensure the secret is not compromised.
Both require some mechanism by which your server component knows it is your client calling it. This tends to be done at installation time, using a platform-specific mechanism to get an app id of some kind into the call to your server.
(I am the editor of the OAuth 2.0 spec)
One solution could be to hard-code the OAuth secret into the code, but not as a plain string. Obfuscate it in some way - split it into segments, shift characters by an offset, rotate it - do any or all of these things. A cracker can analyse your bytecode and find strings, but the obfuscation code might be hard to figure out.
It's not a foolproof solution, but a cheap one.
Depending on the value of the exploit, some genius cracker may go to great lengths to find your secret code. You need to weigh the factors: the cost of the previously mentioned server-side solution, the incentive for crackers to spend more effort finding your secret, and the complexity of the obfuscation you can implement.
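For illustration, a rough sketch of this kind of cheap obfuscation; the segments, ordering, and XOR key below are made up, and this only raises the bar for casual inspection:

```typescript
// The secret never appears as a contiguous string literal in the binary.
const SEGMENTS = ["Gx12", "q98z", "LmB7", "t0aa"];   // shuffled pieces of the encoded secret
const ORDER = [2, 0, 3, 1];                          // how to reassemble them
const XOR_KEY = 0x5a;                                // trivial per-character transform

function recoverSecret(): string {
  const encoded = ORDER.map(i => SEGMENTS[i]).join("");
  // Undo the XOR pass character by character to rebuild the original secret.
  return Array.from(encoded)
    .map(ch => String.fromCharCode(ch.charCodeAt(0) ^ XOR_KEY))
    .join("");
}
```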
Do not store the secret inside the application.
You need to have a server that can be accessed by the application over https (obviously) and you store the secret on it.
When someone wants to log in via your mobile/desktop application, your application simply forwards the request to the server, which then appends the secret and sends it to the service provider. Your server can then tell your application whether it was successful or not.
Then, if you need to get any sensitive information from the service (Facebook, Google, Twitter, etc.), the application asks your server, and your server will give it to the application only if it is correctly connected.
There is not really any option except storing it on a server. Nothing on the client side is secure.
Note
That said, this only protects you against a malicious client; it does not protect the client against a malicious you, nor the client against other malicious clients (phishing)...
OAuth is a much better protocol in the browser than on desktop/mobile.
There is a new extension to the Authorization Code Grant Type called Proof Key for Code Exchange (PKCE). With it, you don't need a client secret.
"PKCE (RFC 7636) is a technique to secure public clients that don't use a client secret. It is primarily used by native and mobile apps, but the technique can be applied to any public client as well. It requires additional support by the authorization server, so it is only supported on certain providers."
from https://oauth.net/2/pkce/
For more information, you can read the full RFC 7636 or this short introduction.
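A minimal sketch of the client side of PKCE (the code_verifier/code_challenge pair from RFC 7636); the client id, URLs, and redirect URI below are placeholders:

```typescript
import { randomBytes, createHash } from "crypto";

// PKCE: the client invents a one-off secret per authorization request
// instead of holding a long-lived client secret.
function base64url(buf: Buffer): string {
  return buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

const codeVerifier = base64url(randomBytes(32));   // kept by the client until the token request
const codeChallenge = base64url(createHash("sha256").update(codeVerifier).digest());

// The challenge goes on the authorization request...
const authorizeUrl = `https://auth.example.com/authorize?response_type=code` +
  `&client_id=my-spa&code_challenge=${codeChallenge}&code_challenge_method=S256` +
  `&redirect_uri=${encodeURIComponent("https://app.example.com/callback")}`;

// ...and the verifier is later sent with the token request, proving that the same
// client that started the flow is the one redeeming the authorization code.
```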
Here's something to think about. Google offers two methods of OAuth... for web apps, where you register the domain and generate a unique key, and for installed apps where you use the key "anonymous".
Maybe I glossed over something in the reading, but it seems that sharing your webapp's unique key with an installed app is probably more secure than using "anonymous" in the official installed apps method.
With OAuth 2.0 you can simply use the client-side flow to obtain an access token and then use this access token to authenticate all further requests. Then you don't need a secret at all.
A nice description of how to implement this can be found here: https://aaronparecki.com/articles/2012/07/29/1/oauth2-simplified#mobile-apps
I don't have a ton of experience with OAuth - but doesn't every request require not only the user's access token, but an application consumer key and secret as well? So, even if somebody steals a mobile device and tries to pull data off of it, they would need an application key and secret as well to be able to actually do anything.
I always thought the intention behind OAuth was so that every Tom, Dick, and Harry that had a mashup didn't have to store your Twitter credentials in the clear. I think it solves that problem pretty well despite its limitations. Also, it wasn't really designed with the iPhone in mind.
I agree with Felixyz. OAuth, whilst better than Basic Auth, still has a long way to go to be a good solution for mobile apps. I've been playing with using OAuth to authenticate a mobile phone app to a Google App Engine app. The fact that you can't reliably manage the consumer secret on the mobile device means the default is to use 'anonymous' access.
The Google App Engine OAuth implementation's browser authorization step takes you to a page where it contains text like:
"The site <some-site> is requesting access to your Google Account for the product(s) listed below"
YourApp(yourapp.appspot.com) - not affiliated with Google
etc
It takes <some-site> from the domain/host name used in the callback URL that you supply, which can be anything on Android if you use a custom scheme to intercept the callback.
So if you use 'anonymous' access or your consumer secret is compromised, then anyone could write a consumer that fools the user into giving access to your GAE app.
The Google OAuth authorization page also contains lots of warnings, which have three levels of severity depending on whether you're using 'anonymous' access, a consumer secret, or public keys.
Pretty scary stuff for the average user who isn't technically savvy. I don't expect to have a high signup completion percentage with that kind of stuff in the way.
This blog post clarifies how consumer secrets don't really work with installed apps:
http://hueniverse.com/2009/02/should-twitter-discontinue-their-basic-auth-api/
Here I have answered how to securely store your OAuth information in a mobile application:
https://stackoverflow.com/a/17359809/998483
https://sites.google.com/site/greateindiaclub/mobil-apps/ios/securelystoringoauthkeysiniosapplication
Facebook doesn't implement OAuth strictly speaking (yet), but they have implemented a way for you not to embed your secret in your iPhone app: https://web.archive.org/web/20091223092924/http://wiki.developers.facebook.com/index.php/Session_Proxy
As for OAuth, yeah, the more I think about it, we are a bit stuffed. Maybe this will fix it.
None of these solutions prevents a determined hacker from sniffing packets sent from their mobile device (or emulator) to view the client secret in the HTTP headers.
One solution could be to have a dynamic secret made up of a timestamp encrypted with a private two-way encryption key and algorithm. The service then decrypts the secret and checks that the timestamp is within +/- 5 minutes.
That way, even if the secret is compromised, the hacker will only be able to use it for a maximum of 5 minutes.
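A rough sketch of that idea; AES-CBC is my choice here, and key handling is hand-waved (the two-way key still has to live somewhere):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// The "secret" sent with each request is an encrypted timestamp; the server
// decrypts it and rejects anything outside a five-minute window.
const KEY = Buffer.alloc(32, 0x01);   // placeholder 256-bit key shared by client and server

function makeDynamicSecret(): string {
  const iv = randomBytes(16);
  const cipher = createCipheriv("aes-256-cbc", KEY, iv);
  const ct = Buffer.concat([cipher.update(String(Date.now()), "utf8"), cipher.final()]);
  return `${iv.toString("hex")}.${ct.toString("hex")}`;
}

function isSecretFresh(secret: string, windowMs = 5 * 60 * 1000): boolean {
  const [ivHex, ctHex] = secret.split(".");
  const decipher = createDecipheriv("aes-256-cbc", KEY, Buffer.from(ivHex, "hex"));
  const ts = Number(Buffer.concat([decipher.update(Buffer.from(ctHex, "hex")),
                                   decipher.final()]).toString("utf8"));
  return Math.abs(Date.now() - ts) <= windowMs;   // accept only a +/- 5 minute skew
}
```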
I'm also trying to come up with a solution for mobile OAuth authentication, and storing secrets within the application bundle in general.
And a crazy idea just hit me: the simplest approach is to store the secret inside the binary, but obfuscated somehow; in other words, you store an encrypted secret. That means you have to store a key to decrypt the secret, which seems to have taken us full circle. However, why not just use a key that is already in the OS, i.e. one defined by the OS, not by your application?
So, to clarify, my idea is that you pick a string defined by the OS; it doesn't matter which one. Then encrypt your secret using this string as the key and store that in your app. At runtime, decrypt the variable using the key, which is just an OS constant. Any hacker peeking into your binary will see an encrypted string, but no key.
Will that work?
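Concretely, something like this, sketched in Node-style TypeScript rather than real mobile code; os.type() here is just a stand-in for whatever OS-defined string you pick:

```typescript
import { createCipheriv, createDecipheriv, createHash } from "crypto";
import * as os from "os";

// The AES key is derived from a value the OS provides, so no key literal
// sits next to the ciphertext inside the app.
const key = createHash("sha256").update(os.type()).digest();
const iv = Buffer.alloc(16, 0);   // fixed IV; tolerable only because we encrypt one constant

// Done once, offline; only the hex ciphertext ships in the binary.
function encryptSecret(plain: string): string {
  const c = createCipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([c.update(plain, "utf8"), c.final()]).toString("hex");
}

// Done at runtime to recover the secret.
function decryptSecret(cipherHex: string): string {
  const d = createDecipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([d.update(Buffer.from(cipherHex, "hex")), d.final()]).toString("utf8");
}
```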
As others have mentioned, there should be no real issue with storing the secret locally on the device.
On top of that, you can always rely on the UNIX-based security model of Android: only your application can access what you write to the file system. Just write the info to your app's default SharedPreferences object.
In order to obtain the secret, one would have to obtain root access to the Android phone.