I'm trying to determine whether the following can be classified as a CSRF (Cross-Site Request Forgery) vulnerability on a website:
If website-1.com contains the following code: <img src="http://website-2.com/img.png"> and "http://website-2.com/img.png" performs a 302 redirect to some sensitive functionality on website-1.com, such as http://website-1.com/delete.php?file=test.jpg, and "test.jpg" is deleted successfully, is that a CSRF attack, even though the malicious content was embedded on website-1.com and not on a 3rd-party site?
Thank you for your help
Deleting with a simple GET request is a pretty bad practice and makes CSRF attacks easy.
Does a plain link from website-2 to http://website-1.com/delete.php?file=test.jpg cause the file to be deleted? If not, then there must be some sort of CSRF protection in place. But there are many other things to watch for to make sure CSRF is not possible (for example, whether and how a CSRF token is implemented, what kind of user content the sites allow, how much the admins of the two sites trust each other, etc.). From the limited information you've given, I'd consider website-1 vulnerable.
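To make that concrete, here is a minimal sketch of how a delete endpoint is typically protected, written as Express-style JavaScript purely for illustration (the routes, field names, and session setup are hypothetical, not the asker's PHP code): make the delete a POST and tie it to a per-session token that an <img> tag or a 302 redirect cannot supply.

const express = require('express');
const session = require('express-session');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'replace-with-a-real-secret', resave: false, saveUninitialized: true }));

// Page that offers the delete action: embed a per-session CSRF token in the form.
app.get('/files', (req, res) => {
  if (!req.session.csrfToken) {
    req.session.csrfToken = crypto.randomBytes(32).toString('hex');
  }
  res.send(`<form method="POST" action="/delete">
              <input type="hidden" name="csrf" value="${req.session.csrfToken}">
              <input type="hidden" name="file" value="test.jpg">
              <button>Delete</button>
            </form>`);
});

// State-changing action: POST only, and the submitted token must match the session token.
app.post('/delete', (req, res) => {
  if (!req.body.csrf || req.body.csrf !== req.session.csrfToken) {
    return res.status(403).send('Invalid CSRF token');
  }
  // ... actually delete req.body.file here ...
  res.send('Deleted');
});

A cross-site <img> or redirect can only trigger a cookie-carrying GET; it cannot produce a POST body containing the victim's session-bound token, which is what breaks the attack described in the question.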
We have a corporate website that receives external email, processes them, and shows them in the browser to the user. We will be showing the emails in HTML format if they are available in this format. However, this basically means that we will be showing user-generated HTML code (you could send any HTML in an email, as far as I know).
What are the security risks here? What steps to take in order to minimize these risks?
I can currently think of:
Removing all javascript
Perhaps removing external CSS? Not sure if this is a security risk
Not loading images (to limit tracking... not sure if this poses a security risk or just a privacy risk)
Would that be all? Removing HTML tags is always error-prone, so I am wondering if there is a better way to somehow disable external scripts when displaying email.
The security risks are, as far as I know, the same as with Cross-Site-Scripting (XSS).
OWASP describes the risks as follows:
XSS can cause a variety of problems for the end user that range in severity from an annoyance to complete account compromise. The most severe XSS attacks involve disclosure of the user’s session cookie, allowing an attacker to hijack the user’s session and take over the account. Other damaging attacks include the disclosure of end user files, installation of Trojan horse programs, redirect the user to some other page or site, or modify presentation of content. An XSS vulnerability allowing an attacker to modify a press release or news item could affect a company’s stock price or lessen consumer confidence.
Source
Defending against it requires layers of defense, such as (but not limited to):
Sanitizing the HTML with something like DOMPurify.
Making use of HTTP only cookies for security sensitive cookies so they can't be read from JavaScript. Source
Adding a Content Security Policy so the browser only trusts scripts from domains you tell it to trust. Source
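For the CSP point, here is a minimal sketch of what such a header could look like, assuming an Express-style server purely for illustration (your stack will have its own way of setting response headers, and the exact directives depend on what your app legitimately needs):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Only allow scripts, styles and images from our own origin; block plugins and framing by other sites.
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; object-src 'none'; frame-ancestors 'self'"
  );
  next();
});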
Depending on your requirements, it might also be possible to load the email content into a sandboxed iframe as an additional security measure. This can be done like this:
// DOMPurify is called via its sanitize() method, not as a function.
var sanitizedHTML = DOMPurify.sanitize('<div>...</div>');
var iframe = document.getElementById('iframeId');
// An empty sandbox attribute disables scripts, forms, and same-origin access inside the frame.
iframe.setAttribute('sandbox', '');
iframe.srcdoc = sanitizedHTML;
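The empty sandbox attribute is deliberate: without allow-scripts or allow-same-origin, anything that slips past the sanitizer still cannot execute scripts or reach the parent page's cookies and DOM.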
I have a basic HTML page which contains a form. The form submission is handled by a RESTful backend API service (written in Spring Boot). The HTML page is unprotected for business reasons: no sort of authentication / login mechanism can be applied to it. How can I make sure that only the HTML page is allowed to hit the backend APIs, and not other sources? Both the HTML and the backend APIs are under the same domain. Example: example.com/index.html; example.com/getStudentList
How can I make sure that only the HTML page is allowed to hit the backend APIs, and not other sources?
If I'm understanding things correctly, you don't want consumers of your API to authenticate with the API, because reasons? But what you want is that any client that loads the index page can access the API.
The closest implementation I can think of that would work at all like that would be to treat the API URLs like a one-time pad: you dynamically generate the HTML page with URLs that include some difficult-to-guess token. When the API receives any request, it checks the token: if there is no token, it rejects the request (403 Forbidden). If there is a token, it checks whether that token is still active. If the token has expired, the request is rejected; if it has expired but is still within some grace period, you might instead redirect the API request to a URL with a newer token (301 Moved Permanently). If the token is active, then you serve the request.
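A minimal sketch of that scheme, written as Express-style JavaScript rather than Spring Boot (the endpoint names, token lifetime, in-memory token store, and placeholder data are hypothetical, for illustration only):

const express = require('express');
const crypto = require('crypto');

const app = express();
const tokens = new Map(); // token -> expiry timestamp (in-memory for illustration only)
const TOKEN_TTL_MS = 5 * 60 * 1000;

// Serve the page with a freshly minted, hard-to-guess token embedded in the API URL.
app.get('/index.html', (req, res) => {
  const token = crypto.randomBytes(32).toString('hex');
  tokens.set(token, Date.now() + TOKEN_TTL_MS);
  res.send(`<script>
              fetch('/api/${token}/getStudentList')
                .then(r => r.json())
                .then(students => console.log(students));
            </script>`);
});

// The API only answers requests whose embedded token is known and unexpired.
app.get('/api/:token/getStudentList', (req, res) => {
  const expiry = tokens.get(req.params.token);
  if (!expiry || Date.now() > expiry) {
    return res.status(403).send('Forbidden'); // missing, unknown, or expired token
  }
  res.json([{ name: 'Alice' }, { name: 'Bob' }]); // placeholder data
});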
Mark Seemann, while trying to solve a different problem, wrote a nice little introduction: Avoiding Hackable URLs.
If that sounds to you like a session cookie -- well, you aren't far wrong. To be completely honest, I suspect that the differences are subtle, and I wouldn't be surprised to discover that I exaggerate them. The primary differences are that we are communicating things like cache invalidation and the resource lifecycle explicitly to intermediary components. The Cookie header, on the other hand, is effectively opaque.
This answer is certainly imperfect -- anybody who happens to guess the currently active URL is going to be able to access the API whether they hit the index page or not. Obscurity, rather than security.
But it might be enough to tide you over until you have reasonable requirements.
Could someone please help me understand why OWASP had to make this change to their reference implementation:
https://github.com/aramrami/OWASP-CSRFGuard/commit/a494d4d7d7e9814fa0feaabf81f8264d10165ffb
The only hint in that commit is "The token is now removed and fetched using another POST request to solve,the token hijacking problem."
I would very much appreciate if someone could explain how this change prevents token hijacking.
There are several changes in this commit. Mostly, if you look, it now honors the configuration to protect or not protect forms that use method="get" (which is the default if the method is not specified as post). There also appear to be some XHR (Ajax) related changes that populate the HTTP headers in the request differently from the config values.
The header comment for Rack::Protection::FormToken says:
# This middleware is not used when using the Rack::Protection collection,
# since it might be a security issue, depending on your application
Can anyone describe an example of when this middleware becomes a security issue?
According to https://github.com/rkh/rack-protection/issues/38, "FormToken lets through xhr requests without token."
So if you were relying on form tokens and had not taken extra steps to protect against XHR requests, then this might be considered a security risk. You might assume a request was genuine (since it's protected by FormToken - right!?) when in fact it was a forgery. By forcing you to install FormToken explicitly, the developers are hoping that you will examine what it does and take the necessary steps.
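Rack::Protection::FormToken is Ruby, but the gist of the issue can be sketched as a JavaScript analogue (illustrative only, not the actual middleware): a request that merely claims to be an XHR via the X-Requested-With header is accepted without any token, so the token check only guards plain form posts.

// Illustrative analogue of the FormToken behaviour (not the real Rack middleware):
// requests flagged as XHR skip the token comparison entirely.
function formTokenAccepts(req, sessionToken) {
  const isXhr = req.headers['x-requested-with'] === 'XMLHttpRequest';
  if (isXhr) {
    return true; // accepted with no token at all -- this is the gap the header comment warns about
  }
  return Boolean(req.body) && req.body.authenticity_token === sessionToken;
}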
I'm not sure whether this is necessary, but I don't see any csrf tokens in the login form. Usually when you create a form, you add form_rest(form) at the end, and that adds the csrf token. But the login form is handled differently, it's not really a form object, it's kind of automagic. You can see that in the docs.
So what's up with that? Why is there no CSRF protection for the login form? I know CSRF attacks target authenticated users, but anonymous users in Sf2 are technically authenticated (see the session cookie), and I might also want some kind of gradual engagement, like on stackoverflow, where you can perform some actions without being a confirmed member.
Any thoughts?
CSRF protection is not necessary on login forms.
CSRF definition: an attacker can force a victim to send an HTTP request to a server.
Typical school-book example: to initiate a money transfer.
The attacker can force a request like this: http://bank.example.com/withdraw?account=Alice&amount=1000000&for=Eve
As you see, the attacker must bake a URL beforehand.
In the case of a login request, it does not make sense, because the attacker must bake a URL like this: http://example.com/login?user=pierre.ernst&pwd=secret.
If the attacker has this information (credentials) already, chances are he will not try a CSRF :-)
Hope it helps.
Actually, form_rest(form) does a lot more than throw in the CSRF token: it prints out any form rows that have yet to be rendered, and it is good to include so that any fields you may have neglected are still rendered.
The reason you're not seeing a CSRF token is because the FOSUserBundle login form is not a Symfony form, it's just a regular HTML form that is then processed by the FOSUser form handler.
I'm not entirely sure why this decision was made, but I am sure there was a technical reason behind it. If it is a problem, you could add a token in manually and extend the form handler to process and validate it; I believe the service is parameterised, so it should be relatively easy to swap out.
My big question, though, would be why you would bother doing this. Is it a massive deal? CSRF protection is a useful step but not the be-all-and-end-all of security, and personally I'd have bigger priorities than this; if it's a big deal, it'll get fixed in FOS at some point.
As for your latter point, I'm not sure of the relevance: how does this stop you from achieving gradual engagement? A quick tip regardless: while I didn't architect this part of the system myself, on an ecommerce project I've been working on recently the lead found Symfony's default functionality insufficient for our use case, so to achieve gradual engagement (letting people check out anonymously while still persisting a lot of their actions), new users are persisted very early on with an autogenerated username and a custom role such as ROLE_GUEST.