Our application is built using Blazor Server and .razor pages, which communicate with Azure APIs. I would like to know whether we should implement anti-forgery (CSRF/XSRF) protection for the Blazor UI, and if so, how to implement it.
I have seen posts implementing anti-forgery tokens for ASP.NET Core MVC applications (https://exceptionnotfound.net/using-anti-forgery-tokens-in-asp-net-core-razor-pages/), and blogs mention that Razor Pages validate anti-forgery tokens by default.
But for Blazor, what is the pattern to follow?
From the Microsoft documentation:
Blazor Server apps can be accessed cross-origin unless additional
measures are taken to prevent it. To disable cross-origin access,
either disable CORS in the endpoint by adding the CORS middleware to
the pipeline and adding the DisableCorsAttribute to the Blazor
endpoint metadata or limit the set of allowed origins by configuring
SignalR for cross-origin resource sharing.
If CORS is enabled, extra steps might be required to protect the app
depending on the CORS configuration. If CORS is globally enabled, CORS
can be disabled for the Blazor Server hub by adding the
DisableCorsAttribute metadata to the endpoint metadata after calling
MapBlazorHub on the endpoint route builder.
I've been trying to get XSRF working with my front-end angular site and my back-end .NET site. They're two different websites. For example, client.example.com and api.client.example.com
When I call IAntiforgery.GetAndStoreTokens on the .NET side, it's going to generate the .AspNetCore.Antiforgery cookie with a SameSite=Strict policy, meaning the client can't use it.
That, plus the fact that .NET only provides the AutoValidateAntiforgeryTokenAttribute if you're using Razor leads me to wonder if XSRF is even needed in this case.
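The pattern under discussion here is the "double-submit" anti-forgery token that ASP.NET Core's IAntiforgery implements: the server issues a random token in a cookie, the SPA echoes it back in a request header, and the server accepts the request only when the two match (which a cross-site attacker cannot arrange, since they cannot read the cookie). A minimal language-agnostic sketch of that check, written here in plain Java with illustrative names:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch of the double-submit cookie anti-forgery pattern.
// The framework would set the issued token as a cookie readable by the SPA
// and expect it back in a header such as X-XSRF-TOKEN on unsafe requests.
public class AntiForgery {
    private static final SecureRandom RNG = new SecureRandom();

    // Issue a fresh, unguessable token.
    public static String issueToken() {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Validate: the header value must be present and equal to the cookie
    // value. MessageDigest.isEqual gives a constant-time comparison, which
    // avoids leaking information through timing.
    public static boolean isValid(String cookieToken, String headerToken) {
        if (cookieToken == null || headerToken == null) return false;
        return MessageDigest.isEqual(
                cookieToken.getBytes(), headerToken.getBytes());
    }
}
```

Note that the pattern only works when the SPA can read the cookie, which is exactly what a SameSite=Strict cookie on a different subdomain prevents in the cross-site setup described above.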
I have generated JAX-RS stubs for a REST service using Swagger and want to set up the security.
The security side is very new to me and I would like to use standards as far as possible. (In the past, for other J2EE applications, I have used Filters to handle Authentication which put User objects into a Session. As I understand it, Sessions should be avoided for REST.)
There are 4 types of user who will access the services:
Customers and business partners (authentication via OAuth or similar)
Employees (authentication via NTLM & LDAP)
Developers (mock authentication/authorisation of some kind)
Integration tests (JUnit with pre-defined users and roles)
Is it possible to define a security mechanism which would handle all of these users?
How would I use the Swagger security directives?
Am I making this more complicated than it needs to be?
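One common way to cover all four user types with a single mechanism is a pluggable chain of authenticators: one JAX-RS ContainerRequestFilter delegates to a list of strategies (OAuth, NTLM/LDAP, mock, test), each of which either resolves a principal or passes. A sketch of the dispatch logic in plain Java, kept free of JAX-RS types so it is runnable on its own; all names here are illustrative, not from any framework:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch: route an Authorization header to one of several
// registered authentication strategies based on its scheme prefix
// (e.g. "Bearer <token>", "Mock <user>"). In a real service this dispatch
// would live inside a ContainerRequestFilter, and the mock strategy would
// only be registered in dev/test profiles.
public class AuthChain {
    // Maps a scheme name to a strategy that turns the credential part of
    // the header into a user name, or empty if the credential is invalid.
    private final Map<String, Function<String, Optional<String>>> strategies =
            new LinkedHashMap<>();

    public void register(String scheme, Function<String, Optional<String>> s) {
        strategies.put(scheme, s);
    }

    // Pick the strategy whose scheme prefixes the header, then apply it
    // to the remainder of the header value.
    public Optional<String> authenticate(String authHeader) {
        if (authHeader == null) return Optional.empty();
        for (Map.Entry<String, Function<String, Optional<String>>> e
                : strategies.entrySet()) {
            String prefix = e.getKey() + " ";
            if (authHeader.startsWith(prefix)) {
                return e.getValue().apply(authHeader.substring(prefix.length()));
            }
        }
        return Optional.empty();
    }
}
```

Because each strategy is just a function, the JUnit integration tests can register a strategy with pre-defined users, while production registers only the OAuth and NTLM/LDAP ones.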
You could use an open source API gateway like Tyk. There is some handy info on API security in the Tyk docs.
And there is a blog post that describes taking a layered approach to API security that goes beyond the gateway.
Disclosure: I work for Tyk!
I am developing a web application where data will be accessible both to frontend and to various clients (curl & co.) through REST API. Both frontend and backend will be on the same domain. I would like to protect my frontend with CORS, which presents a dilemma for me. If I set Access-Control-Allow-Origin to * then all other clients will be able to access API, but my own frontend will be more exposed. On the other hand setting it to my domain forces clients to supply (fake) Origin headers and effectively disallows using browsers as clients (via frontend on different domains).
How is this usually solved? Should I use two different endpoints for the API, one for public access and the other for use with my frontend? I would appreciate some advice.
I would like to protect my frontend with CORS
CORS doesn't protect anything in the frontend. CORS is a way to prevent cross-origin requests from web sites that are not authorized. The CORS headers are effective only for browsers' XHR calls; they will not prevent direct loading of resources.
If I set Access-Control-Allow-Origin to * then all other clients will be able to access API, but my own frontend will be more exposed.
IMHO your frontend will be as accessible as before; the CORS headers are effective only for browsers' XHR calls.
On the other hand setting it to my domain forces clients to supply (fake) Origin headers and effectively disallows using browsers as clients (via frontend on different domains).
Not really.
There are several options:
You can keep a list of allowed hosts for each API client (effectively, you set the Access-Control-Allow-Origin header based on the client's authentication). This is what many API providers do (FB, Google, Amazon, ...).
The browser sends the Origin header in XHR calls, and you can check it and either accept or deny the request based on the hostname in the Origin header.
And non-browsers clients are not restricted by the CORS headers.
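The allow-list option above can be sketched in a few lines: reflect the request's Origin header back in Access-Control-Allow-Origin only when it is on a configured list, and send no CORS header otherwise. The hostnames below are made up for illustration:

```java
import java.util.Set;

// Minimal sketch of an Origin allow-list check. When the origin is not
// allowed, the method returns null and the server sends no CORS header,
// so the browser blocks the cross-origin XHR. Non-browser clients such as
// curl are unaffected either way.
public class OriginCheck {
    private static final Set<String> ALLOWED = Set.of(
            "https://app.example.com",
            "https://partner.example.org");

    // Value to put in Access-Control-Allow-Origin, or null if none.
    public static String allowOriginHeader(String originHeader) {
        return (originHeader != null && ALLOWED.contains(originHeader))
                ? originHeader : null;
    }
}
```

Reflecting the specific allowed origin back, rather than answering `*`, is what lets you combine an allow-list with credentialed requests, since browsers reject `Access-Control-Allow-Origin: *` when cookies are sent.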
Should I use two different endpoint for API, one for public access and the other for use with my frontend? I would appreciate some advice
As written in the comments - assuming the functionality is the same and users are authenticated, then IMHO there is no point in having separate services for internal / public use.
These are all answers to your particular questions; however, I am still not convinced it is clear what you want to achieve, or why and how.
CORS is relevant only for browsers & HTML; curl doesn't care about it. So if you restrict your service to be accessed only from your domain, then scripts on other sites won't be able to read it from a browser.
To make your service available to them, those sites could set up nginx or Apache to forward some of their traffic to your service. So third-party sites will access their own host, with their own CORS configuration, and their host will communicate with your service.
Another (similar) solution would be for you to set up two host names (subdomains?) that lead to the same service, and expose one to your own site (with strict CORS) and the other to external clients.
I am struggling with the CORS implementation in my client-server project. CORS means that calls from other origins (e.g. another domain) are only allowed if the response headers explicitly permit them.
So, for example, if I host a website on www.domain.com and call a RESTful API on the same domain, everything is fine.
But if I develop an API for a mobile application, for example, the mobile app does not share the API's domain. How can that work? Do I always need a CORS implementation in my service?
The question comes up, since I develop an Angular 2 application, that is running in dev on localhost:4200 and my API runs on localhost:8080 (build with Spring Boot). So the client throws an exception, because it's not the same origin (different port).
The Goal is to host my API on an root server somewhere in the internet, and the client on different webspace provider (because it's just a single Page Application). The api runs with http://1.2.3.4:8080/api/v1 and the client with http://www.example.com:80/myPage
So, do I need to implement cross-origin support every time? Or is there another way to achieve this?
Due to security concerns, browsers enforce same-origin policy i.e., a script (typically AJAX calls) running in a web page cannot access data from another page residing in a different domain. In some cases, this can be restrictive. CORS (Cross Origin resource sharing) is a W3C specification supported by most modern browsers to specify when it is safe to allow cross origin requests.
In Spring Boot, enabling CORS is as easy as adding the @CrossOrigin annotation. This annotation can be added at method level to enable it just for that particular request mapping, or at class level to enable it for the whole controller.
You can list the domains and ports to be allowed by adding an "origins" attribute to the annotation. If it is not specified, all origins are allowed by default (better to avoid this for security reasons).
Below is an example that enables CORS for the example.com domain and port 80 at controller level:
@CrossOrigin(origins = "http://www.example.com:80")
@RestController
@RequestMapping("/yourmapping")
public class YourController {
}
Yes, if you are developing an API and want to make it public so that mobile users or other sites can consume it, you should allow CORS for any origin (*). You can read more info here:
https://spring.io/understanding/CORS (no longer functioning)
https://auth0.com/blog/cors-tutorial-a-guide-to-cross-origin-resource-sharing/
Is it possible to include one web application in several SSO federations?
Yes, at least in the SAML-P and WS-Federation protocols there is nothing that forbids this. A web application can inspect the incoming HTTP request (the URL and/or cookies), and use that to choose the STS to redirect to.
However, a specific SSO library/framework might have restrictions in this area.
For example, if your web application is in .NET based on WIF, then the WSFederationAuthenticationModule has exactly one Issuer, which is used for all sign-in requests. (This is usually set in the web.config file in the <wsFederation issuer="..."> attribute). It may be possible to override the CreateSignInRequest() method of this module, temporarily setting Issuer to a different value while the request is created (and applying the proper locking). But WIF was apparently not designed to support this multi-SSO-federation scenario.