Circumventing web security limitations between two sites on the same server

I'm using Eclipse to develop an app that consists of an Angular 2 front end and a Java REST back end.
For the front end, I'm using the Angular CLI plugin, which starts the app by issuing an ng serve command to the CLI. This command sets up an HTTP server on port 4200.
For the back end, I'm using an in-company framework that launches in Jetty within Eclipse on port 8088.
While both these ports are configurable, by nature of the frameworks and plugins in use, they'll always be distinct.
Authentication works via an OAuth2 service that is also deployed to port 8088, as part of the framework. This service sets a cookie which certifies the browser session as authenticated. I have verified that this service works correctly by testing it against a Swagger instance of the REST API (also running on 8088 as part of the same framework).
The problem is that when the browser is aimed at the Angular 2 app on :4200, its internal REST API requests to :8088 aren't carrying the authentication cookie. Presumably, this is because of cross-site protection.
Is there any way for the app or the framework to tell the browser that these two "sites" are actually part of the same system?
Alternatively, if I have to configure the dev browser (Chrome) to make this work, I can live with that too. However, I've tried the --disable-web-security --user-data-dir recommendation, but the cookie still doesn't show up on the requests.
Lastly, I have Apache installed on the dev machine. If I can set up appropriate vhosts and use it as a proxy so that the browser thinks it's all the same site, that would probably work too. It would just be a matter of intercepting all /swagger and /api requests and sending them to :8088, and forwarding all other requests to :4200. However, I've been banging my head against mod_rewrite and mod_proxy and haven't been able to come up with anything that works.
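In outline, what I'm after is something like this (a minimal sketch, assuming mod_proxy and mod_proxy_http are enabled; devbox.local is just a placeholder hostname):

<VirtualHost *:80>
    ServerName devbox.local

    # More specific paths first: REST and Swagger traffic goes to Jetty
    ProxyPass        /api     http://localhost:8088/api
    ProxyPassReverse /api     http://localhost:8088/api
    ProxyPass        /swagger http://localhost:8088/swagger
    ProxyPassReverse /swagger http://localhost:8088/swagger

    # Everything else goes to the Angular CLI dev server
    ProxyPass        /        http://localhost:4200/
    ProxyPassReverse /        http://localhost:4200/
</VirtualHost>

That way the browser would only ever see one origin, so the cookie should be sent on every request.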

I think what you're looking for is
withCredentials = true
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
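With Angular 2's Http service, that looks like the sketch below (the URL and endpoint path are placeholders for your framework's real ones). Note that the back end must also cooperate: the responses need an Access-Control-Allow-Credentials: true header and an explicit Access-Control-Allow-Origin (not *), or the browser will still drop the cookie.

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';

@Injectable()
export class ApiService {
  constructor(private http: Http) {}

  // withCredentials: true tells the browser to attach cookies to this
  // request, even though :4200 and :8088 are different origins.
  getCurrentUser() {
    return this.http.get('http://localhost:8088/api/user', { withCredentials: true });
  }
}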

Related

Fiddler doesn't show traffic from Cypress

Background: I'm trying to send a request through cy.request and I get a different response from what I receive when I send a presumably similar request through Postman. Looking at the debug information that Cypress writes to the console, I couldn't spot the difference, so I wanted to compare the raw requests side by side in Fiddler.
However, when I opened Fiddler I realized that I don't see any traffic from Cypress, not even the navigation to the home page using cy.visit().
Any ideas why I can't see the traffic in Fiddler, and if there's some way to capture it?
Fiddler is a proxy: an application has to be configured to use it, otherwise its traffic will not show up in Fiddler.
There are three common reasons why traffic is not visible in Fiddler:
1. The Windows application explicitly ignores the Windows/IE proxy settings. Usually such apps have their own proxy configuration; configure it manually to use Fiddler. A common example of such an application is Firefox.
2. If you have activated "Act as system proxy at startup", Fiddler changes the proxy settings while running. Any application that is already running when Fiddler starts may have cached the old proxy configuration and therefore will not use Fiddler. So start Fiddler before any program you want to capture.
3. The "Act as system proxy at startup" setting is AFAIK user-specific, so apps running under a different user or service account are not affected. You have to configure them manually to use Fiddler.
Cypress does not actually make an XHR request from the browser: it makes the HTTP request from the Cypress Test Runner (in Node). So you won't see the request inside your Developer Tools or Fiddler.
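If you still want Fiddler to see that Node-side traffic, Cypress honors the standard proxy environment variables, so you can try pointing it at Fiddler (which listens on port 8888 by default). Whether cy.request calls actually show up may depend on your Cypress version, so treat this as an experiment:

# macOS/Linux shell; on Windows, use set/setx to define the variables first
HTTP_PROXY=http://localhost:8888 HTTPS_PROXY=http://localhost:8888 npx cypress open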

REST API with Single Page Application over HTTPS on Firefox only

I am developing a web service using a REST API. The REST API is running on port 6443 for HTTPS. The client is going to be a single-page application running on port 443 for HTTPS on the same machine. The problem I am facing is:
When I hit the URL, say https://mymachine.com/new_ui, I get a certificate exception for an invalid certificate because I use a self-signed one, so mymachine.com:443 gets added to the server exceptions. But requests still don't go through to the REST API, because it runs on https://mymachine.com:6443/restservice. If I manually add mymachine.com:6443 to the server exceptions in Firefox, it works, but that won't be the case for customers in production.
Some options I have thought of:
1. Show another pop-up and ask the user to add an exception for the REST server on port 6443 too. But this doesn't look proper: why should an end user accept the certificate for the same domain twice? Also, the REST API server port can change.
2. Programmatically add an exception for the domain and both ports in one shot, with the consent of the user of course. Is that possible?
3. Use a reverse proxy. But then it's going to have a memory footprint on our system, and it will also be time-consuming.
Please suggest some options. How do I deal with this? Thank you.

Bypassing `blocked: mixed-content` restrictions in browsers

I have an internal web application I use, with a local printer attached.
To control the local printer (it's a ticketing printer), I use a small local program that manages it. In order for my web application to "use" the printer, I make it POST AJAX requests to the small local program.
My web application is served over HTTPS, while the local program exposes its simple API over plain HTTP (non-secure).
The problem is that I am facing blocked: mixed-content restrictions when accessing the application through HTTPS (in development mode I wasn't seeing this, of course).
I have several fixes (don't like any of them):
Make the local program expose its simple HTTP API through HTTPS.
It's doable, but I will face problems with self-signed certificates (I will have to install them on the target machine), or will have to use DNS tricks to expose it under a "name".
Tell browsers not to block mixed content.
Doable, but I will have to configure each browser that accesses my application, and it will make them less secure.
====
So my question is: is there another way of circumventing/bypassing the blocked: mixed-content restriction? Ideally supported on new Firefox and Chrome versions.
You shouldn't, but you can upgrade all non-secure requests by allowing it in your header:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
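Note that this directive only rewrites the scheme of each request (http:// becomes https:// before the request is sent), so it only helps if the other endpoint actually answers over HTTPS at the upgraded URL; an HTTP-only endpoint will still fail, just with a connection error instead of a mixed-content block.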

Angular 2 app: "XMLHttpRequest cannot load localhost endpoint"

I have my Angular front-end set up to try and hit a RESTful endpoint. The Angular front-end is being served on localhost:3000, and the RESTful back-end is being hosted on localhost:8080.
In my Angular rest client service, I make the call (which I subscribe to elsewhere in my application):
getCurrentSlides(): Observable<Slide[]> {
  return this.http.get("localhost:8080/app/slides")
    .map(extractData)
    .catch(handleError);
}
But when Angular tries to hit that URL, I get the following error:
XMLHttpRequest cannot load localhost:8080/app/slides. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
And yes, CORS is enabled on my server.
this.http.get("localhost:8080/app/slides")
You're missing the http:// in the URL. Even with it added, most browsers will still require CORS because of the different ports, but IE does not, so once you add http:// you should at least be able to test using IE:
IE Exceptions
Internet Explorer has two major exceptions when it comes to same origin policy
Trust Zones: if both domains are in highly trusted zone e.g, corporate domains, then the same origin limitations are not applied
Port: IE doesn't include port into Same Origin components, therefore http://company.com:81/index.html and http://company.com/index.html are considered from same origin and no restrictions are applied.
These exceptions are non-standard and not supported in any other browser, but they can be helpful when developing an app for Windows RT or an IE-based web application.
That said, you should enable CORS. (But it seems you did, so then it's just the missing http:// prefix.)
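With the prefix added, the call from the question becomes:

getCurrentSlides(): Observable<Slide[]> {
  return this.http.get("http://localhost:8080/app/slides")
    .map(extractData)
    .catch(handleError);
}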

SiteMinder Single-Sign-On without installing agent on web application server

Our IT staff refuses to install the SiteMinder agent on our application's IIS 6.0 web server, citing security concerns since it is third-party software, as well as the possibility of high resource utilization impacting application performance.
They suggest that we set up an independent, segregated web server containing only a bare-bones IIS, the SiteMinder Agent, and a "shim" to authenticate login attempts.
This shim would be a single ASPX page marked to be protected by the agent. It would use the SiteMinder agent to authenticate the user ID, look up the user ID in the application's database, and return the user ID and password to the user's browser. A JavaScript function would then POST the user ID and password to the application's existing login page as if they typed it in themselves.
Are their concerns warranted? Why or why not?
Have you ever heard of anyone implementing a similar architecture?
Is their proposed solution good, bad, or ugly?
It does not look like it would work, because the agent manages not only the initial login but also subsequent calls to the application, i.e. the authenticated session. The agent examines the cookie, validates it, etc. Your scenario does not describe how that would happen.
In our environment, all internet traffic goes through an Apache reverse proxy before hitting IIS. IIS is behind the firewall. The Apache reverse proxy has the SM agent, and all it does is forward the traffic to IIS. I suppose it would be feasible to do a similar setup with IIS acting as the reverse proxy.
BTW, tell your IT guy that his proposed shoestring and bubblegum login solution is a much bigger security concern than installing SiteMinder on IIS.
The apache reverse proxy solution will definitely work, but with SiteMinder r12.51, Secure Proxy Server is included, which is basically SiteMinder's version of a reverse proxy (plus a lot more).
SPS will let you configure a single server as a "gateway" for all of your applications that can't or won't install a SiteMinder agent. The web agent is embedded in SPS and a proprietary Java app does the heavy lifting. SPS also has a GUI which follows the look and feel of the r12 WAMUI, which makes configuring it very simple.
Secure Proxy Server also has a Federation Gateway feature, so you don't need to install the web agent option pack if you are doing SAML Federation. All of your FCC pages can also be served by the SPS, so you can reduce the number of web servers needed to support your SSO environment.