Does a RESTful API always need a CORS implementation? - rest

I'm struggling with the CORS implementation in my client-server project. As I understand it, CORS means that calls from other origins (e.g. another domain) are only allowed if the response contains the appropriate CORS headers.
So, for example, if I host a website on www.domain.com and call a RESTful API on the same domain, everything is fine.
But if I develop an API for a mobile application, for example, the mobile app does not share the API's domain. How can that work? Do I always need a CORS implementation in my service?
The question comes up because I'm developing an Angular 2 application that runs in dev on localhost:4200 while my API runs on localhost:8080 (built with Spring Boot). The client throws an exception because it's not the same origin (different port).
The goal is to host my API on a root server somewhere on the internet and the client on a different webspace provider (because it's just a single-page application). The API runs at http://1.2.3.4:8080/api/v1 and the client at http://www.example.com:80/myPage.
So, do I need to implement cross-origin support every time? Or is there another way to achieve this?

Due to security concerns, browsers enforce the same-origin policy, i.e., a script (typically making AJAX calls) running in a web page cannot access data from a page served from a different origin. In some cases this is too restrictive. CORS (Cross-Origin Resource Sharing) is a W3C specification, supported by most modern browsers, that specifies when it is safe to allow cross-origin requests.
In Spring Boot, enabling CORS is as easy as adding the @CrossOrigin annotation. The annotation can be added at method level to enable CORS just for that particular request mapping, or at class level to enable it for the whole controller.
You could list the domain and port to be allowed by adding an "origins" attribute to the annotation. If it is not specified, all origins are allowed by default (better to avoid this for security reasons).
Below is an example that enables CORS for the example.com domain and port 80 at controller level:
@CrossOrigin(origins = "http://www.example.com:80")
@RestController
@RequestMapping("/yourmapping")
public class YourController {
}
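As an alternative to annotating individual controllers, CORS can also be configured globally. A minimal sketch for Spring Web MVC is shown below; the mapping path, the allowed origin, and the class name are placeholders, not part of the original answer:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Global CORS configuration; Spring Boot picks this up automatically.
// "/api/**" and the origin below are illustrative placeholders.
@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")                        // which endpoints to expose
                .allowedOrigins("http://www.example.com:80")  // who may call them
                .allowedMethods("GET", "POST", "PUT", "DELETE");
    }
}
```

This keeps the CORS policy in one place instead of spreading @CrossOrigin annotations over every controller.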

Yes, if you are developing an API that you want to make public so that mobile users or other sites can consume it, you should always allow CORS for any origin (*). You can read more here:
https://spring.io/understanding/CORS (no longer functioning)
https://auth0.com/blog/cors-tutorial-a-guide-to-cross-origin-resource-sharing/

Related

Is it possible to set up an API to serve html from another domain?

I'm curious whether it's possible to set up a server that responds with HTML fetched from another domain, rather than simply redirecting the requester to that domain.
For example, I set up a simple Node/Express server with a GET route /google that fetches google.com and then responds with the response from that fetch. However, it does not respond with the Google web page as I would expect.
It is not only possible but quite common, especially in larger server environments. The term you are looking for is reverse proxy.
Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.
Source: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
Most major web servers support it.
More than likely, the response you're getting from Google (and passing on) is some kind of redirect. Try it with a static web page of your own to rule out any redirection shenanigans.
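Instead of fetching and re-serving the page in application code, the same effect can be achieved with an nginx reverse proxy. A minimal sketch (the path and upstream are placeholders; and, as noted above, Google itself may still redirect or refuse such requests, so a static page of your own is a better first test):

```nginx
# Requests to /google/ are forwarded upstream; the client only ever
# talks to this server and never sees the origin server directly.
location /google/ {
    proxy_pass https://www.google.com/;
    proxy_set_header Host www.google.com;  # many sites require a matching Host header
}
```

The trailing slash on proxy_pass strips the /google/ prefix before the request is forwarded upstream.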

Secure communication between Web site and backend

I am currently implementing a Facebook Chat Extension which basically is just a web page displayed in a browser provided by the Facebook Messenger app. This web page communicates with a corporate backend over a REST API (implemented with Python/Flask). Communication is done via HTTPS.
My question: how can I secure the communication between the web page and the backend, in the sense that the backend cannot be accessed by any clients that we do not control?
I am new to the topic, and would like to avoid making beginners' mistakes or add too complicated protocols to our tech stack.
Short answer: you can't. Everything can be faked, e.g. with curl and some scripting.
Slightly longer:
You can make it harder, though. Non-browser clients have to implement everything you do to authenticate your app (like client-side certificates and signed requests), forcing them to reverse engineer every obfuscation you add.
The low-hanging fruit is to use CORS and set the Access-Control-Allow-Origin header to your domain. Browsers will respect that setting and won't allow cross-origin requests to your API (they send a preflight OPTIONS request to determine this).
But then again, a non-official client could just use a proxy.
You can't be 100% sure that the header data a client sends is true. It's more about honesty and less about security. ("It's a feature - not a bug.")
Rather, think about what could happen if someone used your API in a malicious way (DDoS, data leaks), and how they would go about it. There are probably patterns that let you recognize an attacker (like an unusual number of requests).
After you analyzed this situation, you can find more information here about the right approach to secure your API: https://www.incapsula.com/blog/best-practices-for-securing-your-api.html

How to protect REST API with CORS?

I am developing a web application where data will be accessible both to frontend and to various clients (curl & co.) through REST API. Both frontend and backend will be on the same domain. I would like to protect my frontend with CORS, which presents a dilemma for me. If I set Access-Control-Allow-Origin to * then all other clients will be able to access API, but my own frontend will be more exposed. On the other hand setting it to my domain forces clients to supply (fake) Origin headers and effectively disallows using browsers as clients (via frontend on different domains).
How is this usually solved? Should I use two different endpoints for the API, one for public access and the other for use with my frontend? I would appreciate some advice.
I would like to protect my frontend with CORS
CORS doesn't protect anything in the frontend. CORS is a way to prevent unauthorized web sites from making cross-origin requests to your API from a browser. The CORS headers are effective only for the browser's XHR calls; they will not prevent direct loading of resources.
If I set Access-Control-Allow-Origin to * then all other clients will be able to access API, but my own frontend will be more exposed.
IMHO your frontend will be just as accessible as before. The CORS headers are effective only for the browser's XHR calls.
On the other hand setting it to my domain forces clients to supply (fake) Origin headers and effectively disallows using browsers as clients (via frontend on different domains).
Not really.
There are several options:
You can keep a list of allowed hosts for each API client (effectively, you set the allowed origin based on the client's authentication). This is what many API providers do (FB, Google, Amazon, ...).
The browser sends the Origin header in its XHR calls, so you can check the hostname in the Origin header and allow or deny the request accordingly.
And non-browser clients are not restricted by the CORS headers anyway.
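The allowlist-plus-Origin-check idea above can also be sketched at the web-server level, for example with an nginx map (this fragment belongs in the http context; all hostnames and the upstream address are placeholders):

```nginx
# Echo the Origin header back only when it is on the allowlist;
# for any other origin, no Access-Control-Allow-Origin header is sent at all.
map $http_origin $cors_origin {
    default                      "";
    "https://app.example.com"    $http_origin;
    "https://admin.example.com"  $http_origin;
}

server {
    listen 80;
    location /api/ {
        add_header Access-Control-Allow-Origin $cors_origin;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Browsers of unlisted origins will then refuse the XHR response, while listed origins get their own origin echoed back.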
Should I use two different endpoints for the API, one for public access and the other for use with my frontend? I would appreciate some advice.
As written in the comments - assuming the functionality is the same and users are authenticated, then IMHO there is no point in having separate services for internal / public use.
These are all answers to your particular questions; however, I am still not convinced it is clear what you want to achieve, why, and how.
CORS is relevant only for browsers & HTML. curl doesn't care about it. So if you restrict your service to be accessed only from your domain, then other sites won't be able to access it.
To make your service available to them, those sites could set up nginx or Apache to forward some of their traffic to your service. Third-party sites would then access their own host, with their own CORS configuration, and that host would communicate with your service.
Another (similar) solution would be for you to set up two host names (subdomains?) that lead to the same service, and expose one to your own site (with strict CORS) and the other to external clients.
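The two-hostname idea could be sketched in nginx like this: two server blocks proxy to the same backend but send different CORS headers. All domain names and the upstream address are placeholders, not values from the question:

```nginx
# Internal name: strict CORS, only your own frontend's origin is allowed.
server {
    listen 80;
    server_name api-internal.example.com;
    location / {
        add_header Access-Control-Allow-Origin "https://www.example.com";
        proxy_pass http://127.0.0.1:8080;
    }
}

# Public name: same backend, open CORS for third-party browser clients.
server {
    listen 80;
    server_name api.example.com;
    location / {
        add_header Access-Control-Allow-Origin "*";
        proxy_pass http://127.0.0.1:8080;
    }
}
```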

Calling an insecure endpoint from a website running under HTTPS - nginx

My application is running under HTTPS with a valid certificate from one of the known authorities. Unfortunately I am using a third party API which doesn't support HTTPS.
The result is the well-known message: "Mixed content: mydomain.com requested an insecure XMLHttpRequest endpoint."
Is it possible to add an exception to the web server to allow calling this API insecurely? I am using nginx, BTW.
If not, what other possibilities are there to solve this problem?
I have a solution, but I don't like it because it would be a performance drawback:
implement an API which acts as a proxy, receiving the requests from the application through HTTPS and making the requests to the third-party API through HTTP.
I too had this issue. Everything on a page should be requested over HTTPS if you are serving it over HTTPS and don't want warnings/errors. You don't need to implement an API to proxy if you are using nginx; whatever you implement yourself will be a performance hit, as you correctly surmise. Just use proxy_pass in nginx.
In our configuration, we have:
location /thirdparty/ {
    proxy_pass http://thirdpartyserver/;
}
Notice the trailing slash in the proxy_pass URL. I keep all HTTP-only third-party APIs under https://myserver/thirdparty/requesturl; the trailing slash strips the /thirdparty/ prefix when making the request, so it becomes http://thirdpartyserver/request.
Official reference: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
In order to allow mixed content, the individual users must allow it in their browsers. Allowing HTTP content from one source is enough to compromise the security of HTTPS, so browsers forbid mixed content by default. The solutions I see are:
Getting rid of HTTPS (which I would NOT recommend)
Doing what you suggested and proxying requests through (this still isn't great security-wise)
Getting rid of the HTTP content
Google has some recommendations for developers under step 1 (but they are basically echoed above): https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content#step-1

web application in different sso federations

Is it possible to include one web application in several SSO federations?
Yes, at least in the SAML-P and WS-Federation protocols there is nothing that forbids this. A web application can inspect the incoming HTTP request (the URL and/or cookies), and use that to choose the STS to redirect to.
However, a specific SSO library/framework might have restrictions in this area.
For example, if your web application is in .NET based on WIF, then the WSFederationAuthenticationModule has exactly one Issuer, which is used for all sign-in requests. (This is usually set in the web.config file in the <wsFederation issuer="..."> attribute). It may be possible to override the CreateSignInRequest() method of this module, temporarily setting Issuer to a different value while the request is created (and applying the proper locking). But WIF was apparently not designed to support this multi-SSO-federation scenario.