Application Cache manifest file errors with Windows/NTLM authentication

How do browsers request Application Cache manifest files, and does that differ from how other files are requested?
I ask because I'm seeing behavior I wouldn't expect when using Windows/NTLM authentication in IIS 7. The situation is that I have a site with a manifest file defined. With anonymous authentication, everything works as expected -- the site loads and is available offline.
When I disable anonymous and enable Windows authentication, the site will load fine after authenticating, but I will see an error in the console (in Chrome or on an iPad 2) that says the manifest file could not be fetched.
On the iPad, the error is that the Application Cache file could not be fetched. In Chrome, the specific error is "Application Cache Error event: Manifest fetch failed (401)." I can see the 401 response code in the web server logs in both instances.
The reason this behavior seems unexpected is that requests for all other resources (CSS, JavaScript, images) work as expected. Also, I can browse directly to my .appcache file and it loads.
Can anyone explain what's going on?
Has anyone else run into this and found a solution?
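For reference, the failure can be observed by listening for the application cache lifecycle events. A small TypeScript sketch (the page is assumed to declare the manifest on its html element; the 401 shows up as an "error" event):

    // Diagnostic sketch: log AppCache lifecycle events so a failing
    // manifest fetch (e.g. the 401 described above) is visible in the
    // console. The cast is needed because modern TypeScript lib
    // definitions have dropped the deprecated AppCache API.
    const cache = (window as any).applicationCache;
    ["checking", "downloading", "cached", "noupdate", "updateready", "error"]
      .forEach((name) =>
        cache.addEventListener(name, () => console.log("appcache event:", name))
      );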

Not sure if this is still relevant, but I'm also having this problem.
Because my site makes AJAX requests, I am asked for credentials once the page has loaded so the request can proceed. Once this has happened, running applicationCache.update() causes the application cache to update correctly.
Therefore, as a workaround, perhaps try making an AJAX request to something so that the user is prompted for credentials, then call applicationCache.update() -- something like the sketch below.
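A minimal TypeScript sketch of that workaround, assuming some authenticated endpoint exists ("/api/ping" is a placeholder, not from the original post):

    // Hypothetical workaround: make an ordinary request first so the
    // browser acquires credentials, then ask the application cache to
    // re-fetch the manifest. "/api/ping" is a placeholder endpoint.
    window.addEventListener("load", () => {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/ping"); // any resource behind the same auth
      xhr.onload = () => {
        // The browser now holds valid credentials for the site, so the
        // manifest re-fetch should no longer come back as a 401.
        // (Modern TS lib definitions no longer include AppCache, hence the cast.)
        (window as any).applicationCache.update();
      };
      xhr.send();
    });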

I have also run into this problem, exactly as you described, but I am using basic auth on Apache. I am going to try making the manifest file public.

I know it's an old question, but I had the exact same problem, which led me here.
My setup is:
server - IIS8
authentication - Windows
anonymous authentication - enabled (I did this so my dynamic manifest could be fetched regardless of authentication; I then had to decorate all other controllers with [Authorize])
With the above setup the application would cache properly. However, when loading from the cache, if there was an update to the manifest, certain sections (such as authorized content) were not being fetched because the user was not yet "logged in", which made the whole update event fail.
My solution was to add an AJAX call to an authorized resource; this way, when the user was online they would be prompted to log in, meaning that the next time the cache was updated they were authorized again. A sketch of the idea follows.
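A hedged TypeScript sketch of that approach ("/api/secure-ping" is a placeholder): listen for the cache's error event, ping an authorized resource to force a login, then retry the update:

    // Sketch: when a cache update fails (authorized sections come back
    // 401), ping a protected resource so the user logs in again, then
    // retry the update. "/api/secure-ping" is a placeholder endpoint.
    const appCache = (window as any).applicationCache;
    appCache.addEventListener("error", () => {
      fetch("/api/secure-ping", { credentials: "include" }).then((res) => {
        if (res.ok) {
          appCache.update(); // the session is authorized again; retry
        }
      });
    });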


Can I whitelist all domains for Keycloak in the development environment?

Let's say we have a lot of projects. Project1, Project2, etc. and let's say their local development domains are example1.local and example2.local, etc.
Now we have set up a Keycloak instance on our development machine, with a Development realm inside it, with an AdminPanel client in that realm, and we want to use it for all of our projects.
We can manually add https://example1.local/* and https://example2.local/* etc. to valid redirect URLs and web origins.
But this means that we need to add each and every project we have and we do many many projects per year.
We tried https://* but it did not let us log in, complaining about an invalid redirect_uri.
Is it possible to whitelist every domain for Keycloak?
You should be able to do that. I suggest checking your configuration again. Something like this works perfectly for my scenario, which is the same as yours. The only difference is that I created a dedicated client for my applications, but it's still a single client for many dev environments:
Valid Redirect URIs: https://* or https://*.local
Web Origin: *
Don't put anything extra for Web Origin -- just the *. This is only needed if, for example, you want to use a swagger-ui hosted somewhere else; it allows Swagger from any domain to ask Keycloak for a token. If you don't put the *, the swagger-ui (or any similar tool) would not be able to fetch a token due to a CORS error.
It's a minor thing, but worth mentioning that you put https:// in the config, so the client app should also be accessed over https. If someone types http by mistake, the same error is returned.
We tried https://* but it did not let us log in, complaining about an invalid redirect_uri.
Unless you are working in a testing environment, or you want to get hacked, DO NOT DO THIS in a production environment. The OAuth 2.0 Security Best Current Practice gives an explanation of an exploit based on this misconfiguration.
Therefore, you should make your registered redirect URIs as specific as feasible; simply using a wildcard is a big no-no.
But this means that we need to add each and every project we have, and we do many many projects per year.
Wouldn't it be possible to automate this via scripts or so? Get the project names and then call the Keycloak Admin API to add those redirect URIs to the client -- something like the sketch below.
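A hedged TypeScript sketch of that idea, using only documented Keycloak Admin REST endpoints (older Keycloak versions prefix the paths with /auth). The base URL, realm, clientId and admin credentials are placeholders for your setup:

    // Add a redirect URI to an existing client via the Keycloak Admin
    // REST API. BASE, realm name, clientId and admin credentials are
    // placeholders -- adjust to your installation.
    const BASE = "https://keycloak.dev.local";

    async function addRedirectUri(newUri: string): Promise<void> {
      // 1. Obtain an admin access token via the admin-cli client.
      const tokenRes = await fetch(
        `${BASE}/realms/master/protocol/openid-connect/token`,
        {
          method: "POST",
          headers: { "Content-Type": "application/x-www-form-urlencoded" },
          body: new URLSearchParams({
            grant_type: "password",
            client_id: "admin-cli",
            username: "admin",
            password: "admin-password", // placeholder
          }),
        }
      );
      const { access_token } = await tokenRes.json();
      const auth = { Authorization: `Bearer ${access_token}` };

      // 2. Resolve the client's internal id from its clientId.
      const clients = await (
        await fetch(
          `${BASE}/admin/realms/Development/clients?clientId=AdminPanel`,
          { headers: auth }
        )
      ).json();
      const client = clients[0];

      // 3. Append the new URI and update the client.
      client.redirectUris = [...(client.redirectUris ?? []), newUri];
      await fetch(`${BASE}/admin/realms/Development/clients/${client.id}`, {
        method: "PUT",
        headers: { ...auth, "Content-Type": "application/json" },
        body: JSON.stringify(client),
      });
    }

    // e.g. addRedirectUri("https://example3.local/*");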

Play Framework authentication: request headers are not being added in production

I have implemented an authorized action as explained in this question, as well as in the answer by @vdebergue.
This was working great, and the requests made by the front-end application were automatically adding an X-XSRF-TOKEN request header, with the token obtained from the login response.
However upon deploying both front-end and back-end, the requests issued from the browser are no longer adding the X-XSRF-TOKEN request header, thus causing an Unauthorized response from the server (rightfully so).
What I am failing to understand is, what is it that changed between development and deployment?
I do have the request header specified in cors.allowedHttpHeaders:
play.filters.cors.allowedHttpHeaders = ["Accept", "Origin", "Content-Type", "X-XSRF-TOKEN"]
I doubt I have to add this header manually from React (in fact the issue probably has nothing to do with the front-end).
Thanks!
Edit 1:
[Screenshots omitted: the list of XHR requests; the login POST request showing the X-XSRF cookie and the token being passed; the unauthorized GET that is not setting X-XSRF-TOKEN as a request header; and the same request on localhost, which is authorized with the header added.]
Assuming you implemented it correctly and the cookie is not attached during deployment, the issue might be related to the domain of your cookie. The way I did it is to define an env variable and use it to hold the domain value, so it does not break the implementation during development and tests.
You can look at the Play Framework API documentation for more information on how to use the cookie.
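On the front-end side, a hedged sketch (names illustrative, not from the original post) of the usual pattern: read the XSRF cookie set at login and attach it as a request header. If the production backend lives on a different domain than the front end, document.cookie won't contain the token, which would explain the missing header:

    // Read the XSRF token cookie and attach it to API requests. If the
    // backend is deployed on a different domain, document.cookie will
    // not expose the cookie and the header silently goes missing.
    function getCookie(name: string): string | undefined {
      return document.cookie
        .split("; ")
        .find((c) => c.startsWith(name + "="))
        ?.split("=")[1];
    }

    async function apiGet(path: string): Promise<Response> {
      const token = getCookie("XSRF-TOKEN"); // cookie name is illustrative
      return fetch(path, {
        credentials: "include", // send session cookies cross-origin
        headers: token ? { "X-XSRF-TOKEN": token } : {},
      });
    }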
Solved in an unconventional manner: the front end was made with React, which offers a way to build a static production version.
I simply integrated those static files with Play Framework's index.scala.html, instead of trying to run the front end as a separate app on a different port.
It works; however, I will not mark it as the best answer yet, because I don't know whether a mobile app connecting to the same Play Framework backend will play along nicely when it comes to authorisation and cookies. Mobile apps are not browsers (and may not abide by their limitations), and Postman had no issues with cookies.
To be checked.

Possible to reverse a permanent redirect in Azure?

I have an Azure web site that I don't update anymore.
So I edited the web.config and added a rule to redirect to a new URL.
I made a typo when typing the new URL and set the redirect mode to permanent.
No matter what I do, I now cannot correct it because it seems to be permanently stuck this way.
The old URL now redirects to some random, incorrect typo location.
Is there a way to reverse this?
This sounds like it may be a local issue. Your browser may have cached the 301 (permanent redirect) response. Have you tried using a different browser, or clearing your browser's cache?
Otherwise, have you restarted the web site through the Azure portal?

Integrated Exchange login with GWT on Tomcat

I have a GWT app to deploy to Tomcat on a Windows server, with the following requirements:
1- The app should work fully, whether the user is in the Windows domain or not;
2- If the user happens to be in the domain, the app should be able to identify the user in some manner. Presumably, this should be via getThreadLocalRequest().getRemoteUser(), but any other alternative is fine...
3- If the user happens to be in the domain, the app should be able to access the MS Exchange server in that domain, without requiring the user to enter their password.
I've scoured the web high and low for this, but unfortunately, it seems there's no way to get authentication without forcing authentication. There are many examples of exclusions for, say, a login form or other "public" resources, but that won't work for us, since all the resources in a GWT app are packed into the same "page".
Maybe it's my limited understanding that's making me fail in some basic way, but I've tried looking at JCIFS, Jespa, Waffle and SPNEGO, and I just can't seem to get it working the way I want to...
Any help would be greatly appreciated.
Cheers,
J.
How about putting some JavaScript on your front page and having a Kerberos/SPNEGO-protected page? The JavaScript attempts to request the protected page; if the user is on the domain you will get the correct result from the page, otherwise you will get 401 Access Denied. In the former case you can redirect the browser to the Exchange page, or make another AJAX call to retrieve things from the Exchange server; in the latter case you either show a login form or a generic anonymous page. A sketch of this probe follows.
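A hedged TypeScript sketch of the probe idea ("/auth/probe" is a placeholder for a page protected by Kerberos/SPNEGO on the server):

    // Probe a SPNEGO-protected URL: domain users are negotiated
    // transparently by the browser and get a 200; everyone else gets 401.
    async function detectDomainUser(): Promise<boolean> {
      try {
        const res = await fetch("/auth/probe", { credentials: "include" });
        return res.ok; // 200 => domain user; 401 => not on the domain
      } catch {
        return false;
      }
    }

    detectDomainUser().then((onDomain) => {
      if (onDomain) {
        // proceed with integrated Exchange access
      } else {
        // fall back to a login form or a generic anonymous page
      }
    });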
What about using JNI to call the Win32 API function LogonUser?
By doing impersonation at the thread level, you will have the NTLM token added to the current thread and you will be able to call Exchange with no issues.

I'm unable to de-authorize callback

I want to delete the records of those people who have removed the app from their applications list. To do this, I entered the URL where my code deletes the active user's record from my database as the de-authorize callback. But I'm still unable to de-authorize users in my DB.
Edit: See Facebook Deauthorize Callback over HTTPS for what my original problem really was. Summary: Improper web server configuration on my part.
Original answer was:
One potential problem has to do with https-based deauthorize callbacks. At least some SSL certificates are not compatible with the Facebook back-end servers that send the ping to the deauthorize callback. I was only able to process the data once I implemented the callback on an http-based handler.
Some things to check...
That the URL of your server is visible from Facebook's servers (i.e., not 192.168.x.x or 10.x.x.x unless you've got a proper firewall and DNS config).
Try using an anonymous surfing service and browsing to the URL you gave Facebook -- do you see a PHP error?
Increase the log level for PHP and Apache/IIS to maximum and see if you get any more information.
We can't do much more unless you give us your code...
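For reference, a hedged sketch of what the callback itself generally has to do (shown in TypeScript/Node for illustration; the original setup is PHP). Facebook POSTs a signed_request parameter -- a base64url signature plus payload, signed with HMAC-SHA256 using the app secret -- and the handler verifies it and extracts the user id. deleteUserRecord and the secret are placeholders:

    // Verify Facebook's signed_request and extract the user id.
    // APP_SECRET and deleteUserRecord are placeholders. Requires
    // Node 15.7+ for the "base64url" Buffer encoding.
    import { createHmac } from "crypto";

    const APP_SECRET = process.env.FB_APP_SECRET ?? "";

    function parseSignedRequest(signedRequest: string): { user_id?: string } | null {
      const [encodedSig, payload] = signedRequest.split(".");
      const sig = Buffer.from(encodedSig, "base64url");
      const expected = createHmac("sha256", APP_SECRET).update(payload).digest();
      if (!sig.equals(expected)) return null; // bad signature: reject
      return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
    }

    // In the HTTP handler:
    //   const data = parseSignedRequest(req.body.signed_request);
    //   if (data?.user_id) deleteUserRecord(data.user_id);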