When is this Rack::Protection::FormToken a security issue? - rack

The header comment for Rack::Protection::FormToken says:
# This middleware is not used when using the Rack::Protection collection,
# since it might be a security issue, depending on your application
Can anyone describe an example of when this middleware becomes a security issue?

According to https://github.com/rkh/rack-protection/issues/38 "FormToken lets through xhr requests without token."
So if you were relying on form tokens and had not taken extra steps to protect against XHR requests, this might be considered a security risk. You might assume a request was genuine (since it's protected by FormToken, right?) when in fact it was a forgery. By forcing you to install FormToken explicitly, the developers are hoping that you will examine what it does and take the necessary steps.
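To make the loophole concrete, here is a hedged Python sketch of a FormToken-style check. This is not the actual rack-protection source; the function and parameter names are invented for illustration:

```python
import hmac

def form_token_accepts(session_token, params, headers):
    """Sketch of a FormToken-style CSRF check (names invented for illustration)."""
    # The loophole: any request marked as XHR skips token verification entirely,
    # so a forged XHR request needs no valid token at all.
    if headers.get("X-Requested-With") == "XMLHttpRequest":
        return True
    # Otherwise the submitted form token must match the one stored in the session.
    submitted = params.get("authenticity_token", "")
    return hmac.compare_digest(submitted, session_token)

# A forged request without any token is accepted as long as it claims to be XHR:
forged = form_token_accepts("secret", {}, {"X-Requested-With": "XMLHttpRequest"})
```

This is why FormToken alone is only safe when cross-origin XHR cannot reach your endpoints, e.g. when the same-origin policy has not been weakened by a permissive CORS configuration.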

Related

How to pass Account ID to a REST API

My HTTP API requires the frontend to pass an accountId on API calls because we support admin users who are not tied to a single account and could be querying any account.
Initially I implemented this as a header; the issue there is that caching would not work.
The current implementation looks like api.com/endpoint?accountId=123. While this works well, I would like to understand whether this is the correct approach when implementing a RESTful HTTP API.
UPDATE:
Based on a comment - this is for GET
IMO, you did things "the right way" (TM) by putting this information in an HTTP header -- that seems to be the correct place for it to live. However, your caching system currently doesn't care about HTTP headers, so that leaves you with a practical problem (which doesn't really care about "the right way"). So...
Based on that, it sounds like you have two options:
Fix your caching to include specific headers as part of the cache key, or
Add relevant information to the URL for caching purposes
I'd suggest (1), where you update the cache key to be a hash of the URL plus any other information that would produce a different response (in this case, the HTTP header carrying the account ID or session information). This allows you to keep putting information in the right place while also not causing issues for cached pages.
Of course this might not be possible, so the only practical solution I can offer you is (2) where you pull stuff into the URL in order to support caching. I think this is an anti-pattern, but it will get the job done.
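Option (1) can be sketched as follows; the header name X-Account-Id and the hashing scheme are assumptions for illustration, not from the question:

```python
import hashlib

def cache_key(url: str, headers: dict, vary_on=("X-Account-Id",)) -> str:
    # Combine the URL with the selected request headers so that responses
    # for different accounts occupy different cache entries.
    parts = [url] + [f"{name}={headers.get(name, '')}" for name in vary_on]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

k1 = cache_key("https://api.example.com/endpoint", {"X-Account-Id": "123"})
k2 = cache_key("https://api.example.com/endpoint", {"X-Account-Id": "456"})
# k1 != k2: the two accounts no longer collide in the cache
```

This is essentially what the standard Vary response header asks well-behaved HTTP caches to do for you.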

Having a body on a delete request as an additional layer of security

I'm developing a web API and trying to follow the principles of REST. I'm now at the stage of creating the endpoint where a user can delete their own account. I have also implemented JWT functionality, where the JWT is valid for 1 day.
I want to add an extra layer of security for when the user is deleting their own account. My idea is that the user has to provide their current password in the body of the DELETE request. I did some googling, which suggested that having a body in a DELETE request is a bad idea because some clients do not support it (e.g. the Angular HttpClient), some web services might strip the body, etc.
I know GitHub has similar functionality when deleting a repository: you have to provide your password. I like this feature because it prevents unauthorized persons from abusing a stolen JWT for critical operations, right?
What are your recommendations?
Proceed with using DELETE and deal with the potential problems that might come along with this approach?
Instead use POST, PUT or PATCH even though it would look semantically wrong?
Other solution?
I would not recommend using other HTTP methods like PUT or PATCH if you really want to delete the account and not just disable it. That would not be intuitive for the API user and could lead to misunderstandings.
One solution for your use case is to introduce an additional resource (e.g. deletionRequest) that requests (but does not immediately execute) the deletion of the profile via a POST call. You could then perform the actual deletion after a delay (preferably longer than the token lifespan) and inform the user via email about the pending deletion, so the real user has a chance to revoke it. If the user does not react in time, the deletion is executed.
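A minimal sketch of that deletionRequest flow follows; the class and method names are invented, and persistence, authentication, and email delivery are out of scope:

```python
import datetime

class DeletionRequests:
    """Tracks pending account deletions with a revocation grace period."""

    def __init__(self, delay=datetime.timedelta(days=2)):
        self.pending = {}   # user_id -> execute_at
        self.delay = delay  # chosen longer than the 1-day JWT lifespan

    def request(self, user_id, now):
        # POST /deletionRequest: schedule the deletion, don't execute it yet.
        self.pending[user_id] = now + self.delay
        return self.pending[user_id]

    def revoke(self, user_id):
        # The real user clicked the "keep my account" link from the email.
        self.pending.pop(user_id, None)

    def due(self, now):
        # A background job would actually delete these accounts.
        return [u for u, t in self.pending.items() if t <= now]

reqs = DeletionRequests()
start = datetime.datetime(2024, 1, 1)
reqs.request("alice", start)
reqs.request("bob", start)
reqs.revoke("bob")  # bob changed his mind within the grace period
```

The point of the delay exceeding the token lifespan is that even an attacker holding a valid JWT cannot complete the deletion before the legitimate user is notified.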

Can one cache and secure a REST API with Cloudflare?

I am designing a RESTful API that is intended to be consumed by a single-page application and a native mobile app. Some calls of this API return public results that can be cached for a certain time. Moreover, there is a need for rate limiting to protect the API against unauthorized users (spiders).
Can I use Cloudflare to implement caching and rate-limiting / DDOS protection for my RESTful API?
Caching: Cloudflare supports HTTP cache-control headers, so the API can decide for each entity requested via GET whether it is public and how long it can be cached.
However, it is not clear whether the cache-control header is also passed downstream to the client, in which case it would also trigger the browser to cache the response. This may not be desirable, as it could make troubleshooting more difficult.
Akamai has an Edge-Control header to ensure content is cached in the CDN but not the browser. Can one do something similar with Cloudflare?
DDoS protection: Cloudflare support has an article recommending that DDoS protection be disabled for backend APIs, but this does not apply to my use case, where each client is supposed to make few requests to the API. The native DDoS protection actually fits my requirements for protecting the API against bots.
I need to know how I can programmatically detect when Cloudflare serves a Captcha / "I'm under attack" page, etc. This would then allow the SPA / mobile app to react intelligently and redirect the user to a web view where she can demonstrate her "humanness".
From the Cloudflare documentation, it is not obvious what HTTP status code is sent when a DDoS challenge is presented. An open-source cloudscraper to bypass Cloudflare DDoS protection seems to indicate that Captcha and challenge pages are delivered with HTTP status 200. Is there a better way than parsing the response body to find out whether DDoS protection kicked in?
Cloudflare apparently uses cookies to record who solved the Captcha successfully. This obviously creates some extra complexity with native apps. Is there a good way to transfer the Cloudflare session cookies back to a native app after the challenge has been solved?
Probably this is something of an advanced Cloudflare use case - but I think it's promising and would be happy to hear if anyone has experience with something like this (on Cloudflare or another CDN).
Cloudflare has published a list of best practices for using it with APIs.
TL;DR, they recommend setting a page rule that matches all API requests and applying the following settings to it:
Cache Level: Bypass
Always Online: OFF
Web Application Firewall: OFF
Security Level: Anything but "I'm under attack"
Browser Integrity Check: OFF
Yes, Cloudflare can help with DDoS protection, but no, it does not implement caching and rate-limiting for your API out of the box. You have to implement those yourself, or use a framework that does.
You can use CloudFlare to protect your API endpoint by using it as a proxy.
Cloudflare protects the entire domain, but you can use page rules to tweak the settings for your API endpoint.
Example: https://api.example.com/*
Reduce the security level for this rule to low or medium so as not to show a captcha.
APIs are not meant to show captchas; you protect them with authorization and access codes.
You can implement HTTP Strict Transport Security and Access-Control headers in your responses.
Cloud hosting providers (e.g. DigitalOcean, Vultr, etc.) have free or paid DDoS protection. You can subscribe to it for just that public-facing VM. This is a big plus because now you have double DDoS protection.
For caching APIs:
Create a page rule like https://api.example.com/*.json
Set the caching level for that rule so that Cloudflare caches responses on its servers for a specific duration.
There are many other ways you can protect APIs. Hope this answer has been of help!
This is a 5-year-old question from #flexresponsive, with the most recent answer written 3 years ago and commented upon 2 years ago. While I'm sure the OP has by now found a solution, be it within Cloudflare or elsewhere, I will update the solutions in a contemporary (2020) fashion, staying within Cloudflare. Detailed page rules are always a good idea for anyone; however, for the OP's specific needs, this specific set, in combination with a Cloudflare Workers script, will be of benefit:
Edge Cache TTL: set to the time you need Cloudflare to cache your API content at its edge. (Which edge nodes / server-farm locations serve your content depends on your account plan, with "Free" having the lowest priority and thus being more likely to serve content from a location with higher latency to your consumers.)
However, with Edge Cache TTL > 0 (basically, using it at all), you cannot set the following, which may or may not be of importance to your API:
Cache Deception Armor: ON
Origin Cache Control: ON if #3 is being used and you want to do the following:
Use Cache Level: Cache Everything in combination with a worker that runs during calls to your API. Staying on topic, I'll show two headers to use, specific to your API's route/address.
addEventListener("fetch", event => {
  event.respondWith(fetchAndReplace(event.request));
});

async function fetchAndReplace(request) {
  const response = await fetch(request);
  let type = response.headers.get("Content-Type") || "";
  // Only rewrite headers on API payloads (application/json etc.);
  // pass everything else through untouched.
  if (!type.startsWith("application/")) {
    return response;
  }
  let newHeaders = new Headers(response.headers);
  newHeaders.set("Cache-Control", "s-maxage=86400");
  newHeaders.set("Clear-Site-Data", '"cache"');
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers: newHeaders
  });
}
In setting the two cache-specific headers, you are saying "only shared proxies can cache this". It's impossible to fully control how any shared proxy actually behaves, though, so depending on the API payload, the no-transform value may be worth adding. If only JSON is in play, you'd be fine without it, unless a misbehaving cache decides to mangle it along the way; but if you'll be serving anything requiring an integrity hash or a nonce, then no-transform is a must, to ensure that the payload isn't altered at all (an altered payload cannot be verified as coming from your API). The Clear-Site-Data header with the "cache" value instructs the consumer's browser to clear its cache as it receives the payload. "cache" needs to be within double quotes in the HTTP header for it to function.
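This also answers the Akamai Edge-Control question from the original post: the standard split between s-maxage (shared caches only) and max-age (everyone, including the browser) already gives you CDN-but-not-browser caching. A sketch of how the directives divide, assuming well-behaved caches:

```python
def cache_ttls(cache_control: str):
    """Return (shared_cache_ttl, browser_ttl) from a Cache-Control value.
    s-maxage overrides max-age for shared caches (CDNs); browsers use max-age."""
    directives = {}
    for part in cache_control.split(","):
        part = part.strip()
        if "=" in part:
            name, value = part.split("=", 1)
            directives[name] = int(value)
    browser_ttl = directives.get("max-age", 0)
    shared_ttl = directives.get("s-maxage", browser_ttl)
    return shared_ttl, browser_ttl

# "s-maxage=86400, max-age=0": the edge caches for a day, the browser not at all.
```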
As for running checks to ensure that your consumers aren't hitting a blocking situation, where the API payload cannot be transmitted directly to them and an hCaptcha kicks in: inspect the final destination for a query string containing a cf string (I don't recall the exact layout, but it would definitely have the Cloudflare cf in it, and it is definitely not where you want your consumers landing). Beyond that, the normal DDoS protection that Cloudflare uses would not be triggered by normal interaction with the API. I'd also recommend not following Cloudflare's specific advice to use a security level of anything but "I'm Under Attack": even though the 5-second redirect won't occur on each request, hCaptchas will be triggered on security levels Low, Medium and High. Setting the security level to "Essentially Off" does not mean a security level of null; additionally, the WAF will catch standard violations, and that of course may be adjusted according to what is being served from your API.
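For client-side detection, one pragmatic heuristic (an assumption on my part, not documented Cloudflare behaviour) is that challenge pages are HTML, while your API only ever returns JSON; status codes are unreliable here, since challenges have reportedly been served with status 200:

```python
def looks_like_challenge(headers, expected_type="application/json"):
    # If a JSON API suddenly answers with an HTML page, assume an
    # interstitial (Captcha / "I'm under attack") page was served instead.
    ctype = headers.get("Content-Type", "")
    return not ctype.startswith(expected_type) and "text/html" in ctype
```

On a positive match, the SPA or mobile app could open a web view so the user can solve the challenge there.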
Hopefully this is of use, if not to the OP at least to other would-be visitors.

CSRF Clarification

I'm trying to determine whether the following can be classified as a CSRF (Cross-Site Request Forgery) vulnerability on a website:
If website-1.com contains the following code: <img src = "http://website-2.com/img.png"></img> and "http://website-2.com/img.png" performs a 302 redirect to some sensitive content on website-1.com, such as http://website-1.com/delete.php?file=test.jpg, and "test.jpg" is deleted successfully, is that a CSRF attack, even though the malicious content was embedded on website-1.com and not on a 3rd-party site?
Thank you for your help
Deleting with a simple GET request is a pretty bad practice and makes CSRF attacks easy.
Does a plain link from website-2 to http://website-1.com/delete.php?file=test.jpg cause the file to be deleted? If not, then there must be some sort of CSRF protection. But there are a lot of other things to watch for to ensure CSRF is not possible (like if/how the CSRF token is implemented, what sort of user content the sites allow, how much the admins of both sites trust each other, etc.). From your limited info, I'd consider website-1 vulnerable.
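For reference, one common shape for a CSRF token implementation is an HMAC tied to the session. This is an illustrative sketch, not a complete defence (no token expiry, no per-form nonce):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # per-deployment server-side secret

def issue_token(session_id: str) -> str:
    # Embed this in the form; it is useless for any other session.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def valid_token(session_id: str, token: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(issue_token(session_id), token)
```

Crucially, delete.php would require this token in a POST body, which the <img> redirect described above cannot supply.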

Why is form based authentication NOT considered RESTful?

Although I "think" I understand it, I need some clarity. With PURE RESTful authentication, things do get a bit unwieldy, and using forms helps a lot with the UI of the application (i.e., you get to have a separate login page, forgot-password links, easier logout, etc.).
Now Forms come along and some folks say "not restful" - what is "not restful" about them? Is it that there is no corresponding login resource so to speak? Or does it force something else that I'm missing?
Note: if one creates sessions with them, that's a different matter altogether. I'm more keen on knowing "why" they are branded as not RESTful. Just googling for "form-based authentication vs RESTful authentication" throws up quite a few hits.
One could use these forms to authenticate and pass on tokens for the application to store in cookies, etc., which I feel is entirely RESTful (assuming cryptographic security, etc.)...
There is nothing wrong with sending your credentials, perhaps through a form, for authentication. The problem is that most form-based systems rely on sessions, thus requiring you to only log in "once".
Sessions are server state, thus violating the stateless constraint of a REST architecture.
If you have to send the credentials each time, you can either include them in the payload (i.e. using a form), or you can use the HTTP Authorization header.
If you include them in the payload, you can include them in the body, but only for a POST or PUT, and not a GET or DELETE (which conventionally don't have bodies).
If you include them in the URL as part of the query parameters, then the URL is no longer necessarily representing the actual resource. One of the other tenets is that the URL matches the resource. Adding out of band information (such as credentials) within the query parameters muddies that constraint up a bit.
So, for a REST system over HTTP, you're better to use the existing HTTP Authorization mechanism than working out something else. You could also use client specific SSL certs as well, that works fine also.
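For example, sending credentials on every request via the standard Authorization header keeps the server free of session state. HTTP Basic is shown here purely for illustration; it must of course travel over TLS:

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build the stateless per-request Authorization header for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("alice", "s3cret")
# Attach `headers` to every request; no login endpoint or session is needed.
```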
Excellent question. I think that RESTful purists will likely say that having a URI associated with an action rather than a resource is what makes form-based auth not RESTful, which is something you pointed out yourself.
Honestly I think that the idea of a 100% pure RESTful application is somewhat of a Utopian ideal when it comes to web applications. I believe it is achievable, however, for RESTful web services, since the calling applications can pass credentials with the request header.
At the end of the day, I think that as long as your application works it is fine, and you should not worry about whether or not it is purely RESTful.
That's my $0.02.
From this answer:
To be RESTful, each HTTP request should carry enough information by itself for its recipient to process it to be in complete harmony with the stateless nature of HTTP.
It's not that form-based auth is not RESTful — it's not RESTful to have a session at all. The RESTful way is to send credentials with every request. This could easily be eavesdropped upon; HTTPS/SSL/TLS closes that hole.
Form-based authentication does not use the authentication techniques that are built into HTTP (e.g. basic authentication, digest authentication).
REST purists will ask you to use the functionality built into HTTP wherever possible. Even though form-based authentication is extremely common, it is not a part of the official protocol. So the REST purist sees form-based authentication as an instance of "building functionality on top of HTTP when that functionality already exists within HTTP itself."
Now Forms come along and some folks say "not restful" - what is "not restful" about them?
The authentication credentials are not in the standard place.
REST doesn’t eliminate the need for a clue. What REST does is concentrate that need for prior knowledge into readily standardizable forms. -- Fielding, 2008
RFC 7235 describes the Authorization header; that gives us a standard way to distinguish authorized requests (which have the header) from anonymous requests (which don't have the header).
Because we can distinguish authorized and anonymous requests, general purpose components can do interesting things.
For example, if the origin server is restricting access to a resource, we probably don't want a shared cache to be re-using copies of the HTTP response to satisfy other requests. Because the authorization has a standardized form, we can define caching semantics that restrict what a shared cache is allowed to do.
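That caching rule can be sketched as follows, per RFC 7234 §3.2 (a shared cache must not store a response to a request carrying Authorization unless the response explicitly allows it):

```python
def shared_cache_may_store(request_headers, response_cache_control):
    # Anonymous requests: normal caching rules apply.
    if "Authorization" not in request_headers:
        return True
    # Authorized requests: only explicit directives re-enable shared caching.
    directives = {part.split("=")[0].strip()
                  for part in response_cache_control.split(",") if part.strip()}
    return bool(directives & {"public", "s-maxage", "must-revalidate"})
```

This is exactly the kind of intelligent behaviour a general-purpose cache can only offer because the credentials sit in the standard place.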
Putting the same information into a cookie header, in effect, hides the authorization semantics from general purpose components.
(This is somewhat analogous to using POST requests for safe requests -- general-purpose components can't distinguish semantically safe POST requests from semantically unsafe ones, and therefore can't do anything intelligent to take advantage of the safe handling.)
If we had smarter form processing -- form input controls that copied information into the Authorization header instead of the query string/request-body -- then forms would be fine. But (a) we don't have that, and (b) we are unlikely to get it, because of backwards compatibility.