I've been noticing a very quirky trend lately and I'm baffled by it. In the past month or two, I've started seeing sites break without a referer header.
As background: you'll of course remember the archaic days when referer headers were misused for everything from feature detection to some misguided appearance of security. There are still some legacy sites that depend on it, but for the most part referer headers have been relegated to shitty device detection.
Imagine my surprise when not one but three modern websites suddenly started breaking without a referer.
Codepen: pen previews and full page views just break (i.imgur.com/3abXqsC.png). But editor view works perfectly.
Twitter: basically every interactive function breaks. If you try to tweet, retweet, favourite, etc. you get a generic, non-descriptive error (i.imgur.com/E6tIKFo.png). If you try to update a setting, it just flat out refuses (403) (i.imgur.com/51e2d0M.png).
Imgur: It just can't upload anything (i.imgur.com/xCWpkGX.png) and eventually gives up (i.imgur.com/iO2UlR6.png).
All three are modern websites. Codepen has been broken since I started using it, so I'm not sure if it was always like that, but Twitter and Imgur used to work perfectly fine with no referer. In fact, I had only just noticed Imgur breaking.
Furthermore, all of them generate only non-descriptive error messages, if any at all, which don't identify the problem. It took a lot of trial and error for me to figure it out the first two times; now trying a referer header is one of the first things I do. But wait! There's more! All it takes to un-bork them is to send a generic referer that's the root of the host (i.e. twitter.com, codepen.io, imgur.com). You don't even need to use actual URLs with directory paths!
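For anyone who wants to reproduce this outside a browser, here's a minimal sketch of the workaround (plain HttpURLConnection; the URL is just a placeholder for any of the affected endpoints):

import java.net.HttpURLConnection;
import java.net.URL;

public class RefererProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; any of the affected sites behaves the same way.
        URL url = new URL("https://imgur.com/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // The root of the host is enough - no real path needed.
        conn.setRequestProperty("Referer", "https://imgur.com/");
        // Without the header this comes back as an error; with it, the request goes through.
        System.out.println("Status: " + conn.getResponseCode());
    }
}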
One website, I can chalk up to shitty code. But three major, modern websites - especially when they used to work - is a huge head-scratcher.
Has anybody else noticed this trend or know wtf is going on?
While Referer headers don't "add security", they can be used to filter out attempts from browsers (which play by the referer rules) that are induced to make the request. It's not making the site "secure" against arbitrary HTTP attempts, but it is a fair filter for browsers (running on behalf of possibly unsuspecting users) being used as proxies.
Here are some possibilities:
They might prevent hijacked (or phished) users, and/or other injection attacks on form POSTs (non-idempotent requests), which are not constrained by the Same-Origin Policy.
Some requests can leak a little bit of information, even with the Same-Origin Policy.
They can limit third-party use of embedded content such as iframes, videos/images, and other hotlinking.
That is, while it definitely should not be considered a last line of defence (e.g. it should not replace proper authentication and CSRF tokens), it does help reduce exposure to undesired access from browsers.
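As a rough illustration, here's a minimal sketch of that kind of filter (a javax.servlet Filter; the trusted origin and class name are made up, and this is a complement to CSRF tokens, not a replacement):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Rejects state-changing requests whose Referer is missing or points at a foreign origin.
public class RefererFilter implements Filter {
    private static final String TRUSTED_ORIGIN = "https://example.com/"; // assumption: your own site

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String method = request.getMethod();
        boolean stateChanging = !("GET".equals(method) || "HEAD".equals(method) || "OPTIONS".equals(method));
        String referer = request.getHeader("Referer");
        if (stateChanging && (referer == null || !referer.startsWith(TRUSTED_ORIGIN))) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN); // a browser playing by the rules gets stopped here
            return;
        }
        chain.doFilter(req, res);
    }

    public void init(FilterConfig cfg) {}
    public void destroy() {}
}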
Related
Over the past few months, random email addresses - some of which are on known spam lists - have been added to my website at a rate of 2 or 3 a day.
I know they aren't real humans: for a start, the website targets a very narrow geographical area, and many of these emails are clearly from a different country; others are info@ addresses that appear to have been harvested from a website, rather than something a human would use to sign up to a site.
What I can't work out is why somebody would do this. I can't see any benefit to an external party beyond being vaguely destructive. (I don't want to link to the site here; it's just a textbox where you enter an email and press join.)
These emails are never verified. My question isn't about how to prevent this, but about what valid reasons somebody might have for doing it - I think it's important to understand why malicious users do what they do.
This is probably a list bombing attack, which is definitely not valid. The only valid use I can think of is for security research, and that's a corner case.
List bomb
I suspect this is part of a list bombing attack, which is when somebody uses a tool or service to maliciously sign up a victim for as much junk email as possible. I work in anti-spam and have seen victims' perspectives on this: it's nearly all opt-in verification messages, meaning the damage is only one email per service. It sounds like you're in the Confirmed Opt-In (COI) camp, so congratulations, it could be worse.
We don't have good solutions for list bombing. There are too many problems to entertain a global database of hashed emails that have recently opted into lists (so list maintainers could look up an address, conclude it's being bombed, and refuse to invite). A global database of hashed emails opting out of bulk mail (like the US Do Not Call list or the now-defunct Blue Frog's Do Not Intrude registry but without the controversial DDoS-the-spammers portion) could theoretically work in this capacity, though there'd still be a lot of hurdles to clear.
At the moment, the best thing you can do is to rate-limit (which this attacker is savvy enough to avoid) and use captchas. You can measure your success based on the click rate of the links in your COI emails; if it's still low, you still have a problem.
In your particular case, asking the user to identify a region via drop-down, with no default, may give you an easy way to reject subscriptions or trigger more complex captchas.
If you're interested in a more research-driven approach, you could try to fingerprint the subscription requests and see if you can identify the tool (if it's client-run, and I believe most are) or the service (if it's cloud-run, in which case you can hopefully just blacklist a few CIDR ranges instead). Pay attention to requesters' HTTP headers, especially the referer. Browser fingerprinting is its own arms race; take a gander at the EFF's Panopticlick or Brian Krebs's piece on AntiDetect.
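To make that concrete, here's a rough sketch of the kind of logging plus naive rate limiting I have in mind (servlet-flavoured Java; the class name, threshold and in-memory map are placeholders - a real deployment would use a shared store and a proper time window):

import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServletRequest;

public class SubscribeAudit {
    private final ConcurrentHashMap<String, Integer> perIpCount = new ConcurrentHashMap<>();

    // Returns false once an IP goes over an (arbitrary) threshold, and logs the headers
    // most useful for fingerprinting, since bots tend to omit or reuse the same values.
    boolean allowAndLog(HttpServletRequest req) {
        String ip = req.getRemoteAddr();
        int count = perIpCount.merge(ip, 1, Integer::sum);
        System.out.printf("subscribe ip=%s ua=%s referer=%s accept-language=%s%n",
                ip, req.getHeader("User-Agent"), req.getHeader("Referer"),
                req.getHeader("Accept-Language"));
        return count <= 5; // reset/windowing logic omitted for brevity
    }
}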
Security research
The only case I can consider valid - and even its validity is debatable - is that of security research (which is my field). When I'm given a possible phishing link, I'm going to anonymize my visit: I'll enter fake data rather than reveal my source. I'd never intentionally go after a subscription mechanism (at least not with an email I don't control), but I suppose automation could accidentally stumble into such a thing.
You can avoid that by requiring POST requests to subscribe. No (well-designed) subscription mechanism should accept GET requests or action links without parameters (though there are plenty that do). No (well-designed) web crawler - for search, archiving or security - should generate POST requests, at least not without several controls to ensure it's acceptable (such as having already concluded that it's a bad actor's site). I'm going to be generous and not call out any security vendors that I know do this.
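A minimal sketch of that POST-only rule (plain servlet, hypothetical names; HttpServlet's default doGet() already answers with an error - 405 Method Not Allowed on HTTP/1.1 - so a crawler following a link can't subscribe anyone):

import java.io.IOException;
import javax.servlet.http.*;

public class SubscribeServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String email = req.getParameter("email"); // validate and send the COI mail here
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
    }
    // Deliberately no doGet() override: GET /subscribe?email=... is rejected by default.
}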
In the context of a RESTful web service, is it acceptable for GET requests to have side effects?
Single-use download links, for example:
GET /downloads/664d92b3-b373-4dac-a4fb-7a41d015109a
will return 200 and "the thing" on the first request, and 404 on the next.
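Conceptually, something like this minimal servlet sketch (in-memory map, names made up, assuming a /downloads/* mapping):

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.*;

public class OneTimeDownloadServlet extends HttpServlet {
    private final ConcurrentHashMap<String, byte[]> pending = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String token = req.getPathInfo().substring(1);  // e.g. "664d92b3-..."
        byte[] payload = pending.remove(token);         // the GET itself claims (and destroys) the resource
        if (payload == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentLength(payload.length);
        resp.getOutputStream().write(payload);
    }
}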
The HTTP spec says GET methods should be safe, and according to https://www.rfc-editor.org/rfc/rfc7231#section-4.2.1:
Request methods are considered "safe" if their defined semantics are
essentially read-only; i.e., the client does not request, and does
not expect, any state change on the origin server as a result of
applying a safe method to a target resource.
and
This definition of safe methods does not prevent an implementation
from including behavior that is potentially harmful, that is not
entirely read-only, or that causes side effects while invoking a safe
method. What is important, however, is that the client did not
request that additional behavior and cannot be held accountable for
it.
Several clarifying examples are provided which make me think safe methods are not allowed to purposefully remove the resource.
For example, most servers append request information to access
log files at the completion of every response, regardless of the
method, and that is considered safe even though the log storage might
become full and crash the server.
And
Likewise, a safe request initiated
by selecting an advertisement on the Web will often have the side
effect of charging an advertising account.
And
For example, it is
common for Web-based content editing software to use actions within
query parameters, such as "page?do=delete". If the purpose of such a
resource is to perform an unsafe action, then the resource owner MUST
disable or disallow that action when it is accessed using a safe
request method.
Single-use links are obviously a reality. I just wonder whether they're abusing the spec, or whether I just don't get it.
Having an opinion is fine, but answers from people who have worked on these specs and understand their subtleties would be most convincing.
What you're suggesting is acceptable in some situations, and not necessarily an abuse of the spec.
Firstly, RFC 2616 says of safe methods that they:
SHOULD NOT have the significance of taking an action other than
retrieval
And the phrase "SHOULD NOT" is defined as follows (emphasis added):
This phrase, or the phrase "NOT RECOMMENDED" mean that there may
exist valid reasons in particular circumstances when the particular
behavior is acceptable or even useful, but the full implications
should be understood and the case carefully weighed before
implementing any behavior described with this label.
The new version you linked to (which I think supersedes 2616) doesn't use the term "SHOULD NOT" - but they haven't replaced it with "MUST NOT" either. They also acknowledge that side effects are not ruled out, as long as the client is not held responsible. So I think the idea of safe methods is the same.
So, since the spec acknowledges that there are situations where it's OK, how do we know whether yours is such a situation - and, more importantly, how do we stay generally within the "spirit" of the spec, i.e. make sure we're not abusing it?
I'd refer to this quote from RFC 7231:
The purpose of distinguishing between safe and unsafe methods is to
allow automated retrieval processes (spiders) and cache performance
optimization (pre-fetching) to work without fear of causing harm.
If your app is a private intranet app and you're not concerned with the issues mentioned here, your approach is ok. Put another way: taking into consideration all the possible ways that a GET could happen, are you ok with this side effect?
Working outside RESTful guidelines is not always bad. It's just important to make sure you understand the effect it has.
With all that said, if you are looking for a way to implement reliable, consistent one-time delivery of a resource over HTTP, it's well worth reading Bill de hÓra's HTTPLR spec (http://www.dehora.net/doc/httplr/draft-httplr-01.html). This approach relies on the client acknowledging receipt of the message. You might be able to use something like it to allow user agents that are unaware of the one-use policy (spiders etc.) to GET the resource without causing side effects, while still allowing participating clients to cause the resource to become unavailable after one retrieval.
A transactional approach like this has the added benefit of allowing the client to re-try the download as often as they need to. This is important because otherwise the server cannot know whether the client successfully received the message or not.
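Here's a rough sketch of that acknowledgement idea (not the actual HTTPLR protocol - names, paths and the in-memory store are placeholders): GET stays safe and repeatable, and only an explicit POST from a participating client retires the link.

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.*;

public class AckDownloadServlet extends HttpServlet {
    private final ConcurrentHashMap<String, byte[]> downloads = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] payload = downloads.get(token(req));  // read-only: the client can retry as often as needed
        if (payload == null) {
            resp.sendError(HttpServletResponse.SC_GONE);
            return;
        }
        resp.getOutputStream().write(payload);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        downloads.remove(token(req));                // explicit acknowledgement retires the token
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }

    private String token(HttpServletRequest req) {
        return req.getPathInfo().substring(1);
    }
}

Unaware agents (spiders, prefetchers) never send the POST, so they never trigger the side effect.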
If you really need to enforce the once-only policy from the server side for any possible user agent, then your original approach might be best, but bear in mind it's really an "at most once" policy.
Sometimes breaking the spec is the only way. An example is web-page visit counters that use a hidden image: the image is requested with GET, but the request updates a counter.
However, some things can go wrong. Applications that follow the spec are allowed to presume that making a GET request won't have any side effects. So it is perfectly valid, for example, for some kind of antivirus-enabled email server to follow the links found in an email to make sure everything is safe. If you send this "download-once" link in an email, the recipient might never get to see it. For the same reason, a yes/no answer with two different links in an email is hard to deploy. The same goes for web pages: I recall Google crawling the links of a page that was unique per user and known to Google only because an analytics script was embedded in it; because the page contained these infamous links with side effects, Google was actually changing the answers of the people who visited it...
Fake hits are not really a problem for the hidden-image counter (such counters aren't considered very reliable anyway), but for the "download-once" link they could be.
I'm writing a client that grabs a page from a web server. On one particular server, it would work fine from my web browser, but my code was consistently getting the response:
HTTP/1.1 503 Service Unavailable
Content-Length:62
Connection: close
Cache-Control: no-cache,no-store
Pragma: no-cache
<html><body><b>Http/1.1 Service Unavailable</b></body> </html>
I eventually narrowed this down to the User-Agent header I was sending: if it contains Mozilla, everything is fine (I tried many variations of this). If not, I get 503. As soon as I realized it was User-Agent, I remembered having this same issue in the past (different project, different servers), but I've never figured out why.
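For reference, the workaround on my side is just a header; a minimal sketch with HttpURLConnection (the URL is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class UserAgentProbe {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/page").openConnection();
        // Any value containing "Mozilla" gets a normal response; without it I get the 503 above.
        conn.setRequestProperty("User-Agent", "Mozilla/5.0 (compatible; MyClient/1.0)");
        System.out.println(conn.getResponseCode());
    }
}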
In this particular case, the web server I'm connecting to is running IIS 7.5, but I'm not sure if there are any proxies/firewalls/etc in front of it (I suspect there probably is something because of this behaviour).
There's an interesting history to User-Agents which you can read about on this question: Why do all browsers' user agents start with "Mozilla/"?
It's clearly no issue for me to include Mozilla in my User-Agent, but my question is simply: what configuration or server software causes this to happen, and why would anyone want this behaviour?
Here is an interesting history of this phenomenon: User Agent String History
The main reason this exists is that the internet, the web, and browsers were not designed but evolved, with high amounts of backwards compatibility and then a lot of vendor-exclusive extensions. In particular, frames (which are widely considered a bad idea these days) were not well supported by Mosaic, but were by Netscape (which had Mozilla as its user agent).
Server administrators then had a choice: did they use the hip new frames and only support Netscape, or did they serve old boring pages that everyone could use? Their choice was a hack: if someone tells me they are Mozilla, send them frames; if not, send them no frames.
This ruined everything. IE had to call itself Mozilla-compatible, everyone impersonated everyone else - it's all well detailed in the link at the top. But the problem more or less went away in the modern era, as everyone impersonated everyone and everyone supported more and more of a common subset of features.
And then mobile and smartphone browsers became widespread. Suddenly there weren't just a few main browsers with basically the same features, plus a few outliers you could easily ignore. Now there were dozens of small browsers, with less power, less ability, and a disjoint, odd set of capabilities! So many servers took the easy road and simply did not send the proper data, or any data at all, to any browser they did not recognize.
Now rather than a poorly rendered or inoperable website, you had...no website on certain platforms, and a perfect one on others. This worked, but wasn't tolerable for many businesses; they wanted to work right on ALL platforms, because that's how the web was supposed to work.
Mobile versions, mobile first, responsive design, media queries, all these were designed to fill in those gaps. But for the most part, a lot of websites still just ignore less than modern browsers. And media queries were quickly subverted: no one wants to declare their browser is handheld, oh no. We're a real display browser, even if our screen is only 3 inches, yes sir!
In summary, some servers are configured to drop any browser which is not Mozilla compatible because they think it's better to serve no page than a poorly rendered one.
I've also seen some arguments that this improves security because then the server doesn't have to deal with rogue programs that aren't browsers (much like your own) connecting to them. As the user agent is easy to change, this holds no water for me; it's simply security through obscurity.
Many firewalls are configured to drop any request that does not have a "proper" user agent, as many DDoS attacks do not bother to send one - it's an easy, reliable filter.
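For illustration, the rule usually amounts to something like this sketch (written as a servlet filter here, though in practice it typically lives in a firewall/WAF or IIS request-filtering configuration; the 503 mirrors the response in the question):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class BrowserOnlyFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String ua = ((HttpServletRequest) req).getHeader("User-Agent");
        if (ua == null || !ua.contains("Mozilla")) {
            ((HttpServletResponse) res).sendError(503); // anything that doesn't look like a browser is turned away
            return;
        }
        chain.doFilter(req, res);
    }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}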
I'm building a REST service where I want to implement a way to deprecate certain URIs when they shouldn't be supported anymore for one reason or another. As functions are deprecated, they will be replaced by new ones that work in similar (but not identical) ways. This means that at some point, I will have to start responding with 410 Gone.
The idea is that all client software should be updated, and after say six months all users should have had the chance to upgrade. At this time, the deprecated URIs will start to inform the client that it's out of date, so that the client can display a message to the user. This time is not known in advance, though, and can't explicitly be written in the documentation.
The problem I want to solve is:
Is there an HTTP header field I should use to indicate that a certain URI will cease to work at a certain time and, if so, which?
This can't be the first time someone wants to solve this problem. Is there an unofficial header field already in use, or should I design my own? Note that I don't want to add this information to the content itself, as that would mean that every resource was changed and needs to be refreshed by the client, which is of course not what happened.
Strictly speaking, no. The resources should be driving your application's state, so if there is a change, the URI links between resources would convey the necessary changes to your application.
As for an HTTP header: you are free to add custom headers, normally starting with X-, but it's important to realise that changes to URIs are only interesting to developers, not users.
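If you do go the custom-header route, a minimal sketch might look like this (the header names and the replacement link are made up, not any standard):

import java.io.IOException;
import javax.servlet.http.*;

public class DeprecatedResourceServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Announce the retirement date on every response without touching the representation itself.
        resp.setHeader("X-API-Deprecated", "true");
        resp.setHeader("X-API-Removal-Date", "Sat, 01 Jun 2013 00:00:00 GMT"); // example date
        resp.setHeader("Link", "</v2/new-resource>; rel=\"successor-version\""); // hypothetical replacement URI
        // ... serve the existing representation unchanged ...
    }
}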
I've spent the last 5 months developing a gwt app, and it's now become time for third party people to start using it. In preparation for this one of them has set up my app behind a reverse proxy, and this immediately resulted in problems with the browser's same origin policy. I guess there's a problem in the response headers, but I can't seem to rewrite them in any way to make the problem go away. I've tried this
response.setHeader("Server", request.getRemoteAddr());
in some sort of naive attempt to mimic the behaviour I want. Didn't work (to the surprise of no-one).
Anyone knowing anything about this will most likely snicker and shake their heads when reading this, and I do not blame them. I would snicker too, if it was me... I know nothing at all about this, and that naturally makes this problem awfully hard to solve. Any help at all will be greatly appreciated.
How can I get the header rewrite to work and get away from the SOP issues I'm dealing with?
Edit: The exact problem I'm getting is a pop-up saying:
"SmartClient can't directly contact
URL
'https://localhost/app/resource?action='doStuffs'"
due to browser same-origin policy.
Remove the host and port number (even
if localhost) to avoid this problem,
or use XJSONDataSource protocol (which
allows cross-site calls), or use the
server-side HttpProxy included with
SmartClient Server."
But I shouldn't need the smartclient HttpProxy, since I have a proxy on top of the server, should I? I've gotten no indications that this could be a serialisation problem, but maybe this message is hiding the real issue...
Solution
chris_l and saret both helped find the solution, but since I can only mark one, I marked the answer from chris_l. Readers are encouraged to bump them both up; they really came through for me here. The solution was quite simple: remove any absolute paths to your server and use only relative ones. That did the trick for me. Thanks guys!
The SOP (for AJAX requests) applies when the URL of the HTML page and the URL of the AJAX request differ in their "origin". The origin includes host, port and protocol.
So if the page is http://www.example.com/index.html, your AJAX request must also point to something under http://www.example.com. For the SOP it doesn't matter if there is a reverse proxy - just make sure that the URL, as it appears to the browser (including port and protocol), isn't different. The URL you use internally is irrelevant - but don't use that internal URL in your GWT app!
Note: The solution in the special case of SmartClient turned out to be using relative URLs (instead of absolute URLs to the same origin). Since relative URLs aren't an SOP requirement in browsers, I'd say that's a bug in SmartClient.
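To illustrate the difference (sketched with GWT's RequestBuilder rather than the SmartClient DataSource from the question; the path is a placeholder): a relative URL is resolved against whatever host, port and protocol the browser loaded the page from, so the same code stays same-origin both behind the proxy and when hitting the server directly.

import com.google.gwt.http.client.RequestBuilder;

public class UrlExamples {
    // Same-origin by construction - resolved against the page's own origin.
    RequestBuilder ok = new RequestBuilder(RequestBuilder.GET, "app/resource?action=doStuffs");
    // The absolute form that trips the check as soon as the page is served via the proxy.
    RequestBuilder bad = new RequestBuilder(RequestBuilder.GET, "https://localhost/app/resource?action=doStuffs");
}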
What issue are you having exactly?
Having previously had to write a reverse proxy for a GWT app, I can't remember hitting any SOP issues. One thing you do need to do, though, is make sure response headers and URIs are rewritten to the reverse proxy's URL - this includes AJAX callback URLs.
One issue I did hit (which you might also be experiencing) when running behind a reverse proxy was with the GWT server's serialization policy.
Fixing it required writing a custom implementation of RemoteServiceServlet. While that was in early/mid 2009, it seems the issue still exists.
It seems others have hit this as well - see this question for further details (the answer by Michele Renda in particular).
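If it helps, that kind of override usually looks something like this rough sketch (class name and URLs are placeholders; the idea is to map the proxy's module base URL back to the one the servlet actually serves before GWT looks up the serialization policy file):

import javax.servlet.http.HttpServletRequest;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import com.google.gwt.user.server.rpc.SerializationPolicy;

public class ProxyAwareServiceServlet extends RemoteServiceServlet {
    @Override
    protected SerializationPolicy doGetSerializationPolicy(
            HttpServletRequest request, String moduleBaseURL, String strongName) {
        // The client reports the proxy's URL; rewrite it to the internal one so the
        // .gwt.rpc policy file can be located.
        String rewritten = moduleBaseURL == null ? null
                : moduleBaseURL.replace("https://proxy.example.com/", "http://localhost:8080/");
        return super.doGetSerializationPolicy(request, rewritten, strongName);
    }
}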