Facebook strict mode mandatory or not after March?

So, I've just received a warning regarding my usage of the Facebook-login solution for my app. The warning says:
In March, we're making a security update to your app settings that will invalidate calls from URIs not listed in the Valid OAuth redirect URIs field below. This update comes in response to malicious activity we saw on our platform, and we want to protect your app or website by requiring a new strict mode for redirect URIs.
This makes me think that strict mode will be mandatory for everyone. But I did some further research and found a blog post about it saying:
In March, we'll be turning on Strict Mode for everyone by default.
This, on the other hand, makes me believe that strict mode won't be mandatory, but will be selected by default when creating a new app and can be turned off if you don't want to use it.
Well, how should I interpret this? Will it be mandatory or not?
Anyone got any further information about it?
Thanks.

Yes, it will be mandatory.
In March, we'll be turning on Strict Mode for everyone by default.
The phrasing might be a little unfortunate, but it means that for now you can turn this setting on yourself for your apps; come March, Facebook will turn it on for you (and everyone else), and it will no longer be optional.
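In practical terms, Strict Mode means the redirect_uri you send to the login dialog has to match one of the entries in the Valid OAuth redirect URIs field exactly, rather than just sharing a prefix. As a rough illustration of the difference (the whitelist, class name and "lenient" behaviour are made up for this example):

import java.util.Arrays;
import java.util.List;

/** Illustration only: exact matching (strict) vs. prefix matching (lenient). */
public class RedirectUriCheck {

    private static final List<String> VALID_REDIRECT_URIS =
            Arrays.asList("https://example.com/fb/callback");

    static boolean allowedStrict(String redirectUri) {
        // Strict mode: the URI must match a whitelisted entry exactly.
        return VALID_REDIRECT_URIS.contains(redirectUri);
    }

    static boolean allowedLenient(String redirectUri) {
        // Looser matching (roughly the pre-strict behaviour): sharing a
        // prefix with a whitelisted entry is enough.
        return VALID_REDIRECT_URIS.stream().anyMatch(redirectUri::startsWith);
    }

    public static void main(String[] args) {
        String crafted = "https://example.com/fb/callback/../steal-token";
        System.out.println(allowedLenient(crafted)); // true  - accepted under loose matching
        System.out.println(allowedStrict(crafted));  // false - rejected under Strict Mode
    }
}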

Related

Why are websites requiring referer headers (and failing silently)?

I've been noticing a very quirky trend lately and I'm baffled by it. In the past month or two, I've begun to notice sites breaking without a referer header.
As background: you'll of course remember the archaic days when referer headers were misused to do a whole bunch of things, from feature detection to some misguided appearance of security. There are still some legacy sites that depend on it, but for the most part referer headers have been relegated to shitty device detection.
Imagine my surprise when not one, but three modern websites are suddenly breaking without a referer.
Codepen: pen previews and full page views just break (i.imgur.com/3abXqsC.png). But editor view works perfectly.
Twitter: basically every interactive function breaks. If you try to tweet, retweet, favourite, etc. you get a generic, non-descriptive error (i.imgur.com/E6tIKFo.png). If you try to update a setting, it just flat out refuses (403) (i.imgur.com/51e2d0M.png).
Imgur: It just can't upload anything (i.imgur.com/xCWpkGX.png) and eventually gives up (i.imgur.com/iO2UlR6.png).
All three are modern websites. Codepen has been broken since I started using it, so I'm not sure if it was always like that, but Twitter and Imgur used to work perfectly fine with no referer. In fact, I had only just noticed Imgur breaking.
Furthermore, all of them generate only non-descriptive error messages, if any, which do not identify the problem. It took a lot of trial and error for me to figure it out the first two times; now trying referer headers is one of the first things I do. But wait! There's more! All it takes to un-bork them is to send a generic referer that's the root of the host (i.e. twitter.com, codepen.io, imgur.com). You don't even need to use actual URLs with directory paths!
One website, I can chalk it up to shitty code. But three major, modern websites, especially when they used to work, is a huge head-scratcher.
Has anybody else noticed this trend or know wtf is going on?
While Referer headers don't "add security", they can be used to trim out attempts from browsers (that play by referer rules) invoking the request from elsewhere. It's not making the site "secure" against any HTTP attempt, but it is a fair filter for browsers (running on behalf of possibly unsuspecting users) acting as proxies.
Here are some possibilities:
It might prevent hijacked (or phished) users, and/or other injection attacks on form POSTs (non-idempotent requests), which are not constrained by the Same-Origin Policy.
Some requests can leak a little bit of information, even with the Same-Origin Policy.
It can limit third-party use of embedded content such as iframes, videos/images, and other hotlinking.
That is, while it definitely should not be considered a last line of defence (e.g. it should not replace proper authentication and CSRF tokens), it does help reduce some exposure to undesired access from browsers.
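As a rough sketch of the kind of filter described above, here is a coarse referer check on state-changing requests (the class name and allowed prefix are placeholders; it assumes the javax.servlet API):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Rejects state-changing requests whose Referer does not point back at this
 * site. A coarse filter, not a substitute for authentication or CSRF tokens.
 */
public class RefererFilter implements Filter {

    private static final String ALLOWED_PREFIX = "https://www.example.com/"; // placeholder

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Only non-idempotent requests are checked; GET and HEAD pass through.
        boolean stateChanging = !"GET".equals(request.getMethod())
                && !"HEAD".equals(request.getMethod());

        String referer = request.getHeader("Referer");
        if (stateChanging && (referer == null || !referer.startsWith(ALLOWED_PREFIX))) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // no configuration needed for this sketch
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}

Note that this is exactly the behaviour the question complains about: legitimate users who strip their referer get opaque 403s, which is why it should only ever be one small layer among several.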

HTTP header field for URI deprecation/expiration

I'm building a REST service where I want to implement a way to deprecate certain URIs when they shouldn't be supported anymore for one reason or another. As functions are deprecated, they will be replaced by new ones that work in similar (but not identical) ways. This means that at some point, I will have to start responding with 410 Gone.
The idea is that all client software should be updated, and after, say, six months all users should have had the chance to upgrade. At that point, the deprecated URIs will start informing the client that it's out of date, so that the client can display a message to the user. This time is not known in advance, though, and can't be written explicitly in the documentation.
The problem I want to solve is:
Is there an HTTP header field I should use to indicate that a certain URI will cease to work at a certain time and, if so, which?
This can't be the first time someone wants to solve this problem. Is there an unofficial header field already in use, or should I design my own? Note that I don't want to add this information to the content itself, as that would mean that every resource was changed and needs to be refreshed by the client, which is of course not what happened.
Strictly speaking, no. The resources should be driving your application's state, so if there is a change, the URI linking would provide the necessary changes to your application.
As for an HTTP header, you are free to add custom headers, conventionally starting with X-. But it's important to note that changes to URIs are only interesting to developers, not users.
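To make that concrete, here is a minimal sketch of a deprecated endpoint that advertises its end-of-life date in a custom header and switches to 410 Gone once that date has passed (the servlet, the X-Sunset header name and the date are invented for illustration):

import java.io.IOException;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * A deprecated endpoint: it keeps serving the old representation, but
 * announces its end-of-life date in a custom header until that date, and
 * answers 410 Gone afterwards.
 */
public class DeprecatedResourceServlet extends HttpServlet {

    private static final ZonedDateTime SUNSET =
            ZonedDateTime.parse("2013-06-01T00:00:00Z"); // placeholder date

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        if (ZonedDateTime.now().isAfter(SUNSET)) {
            resp.sendError(HttpServletResponse.SC_GONE); // 410 after the cut-off
            return;
        }
        // Old clients still get the data, plus a hint about when it disappears.
        resp.setHeader("X-Sunset", SUNSET.format(DateTimeFormatter.RFC_1123_DATE_TIME));
        resp.setContentType("application/json");
        resp.getWriter().write("{\"status\":\"still here, but not for long\"}");
    }
}

Clients that care can watch for the header; clients that don't will simply start seeing 410 once the date passes.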

Rest: Right or Wrong to Choose URLs From Usecases

I was at a developer conference where the speaker argued that the following set of URLs are not RESTful:
/users/username/changepassword
/users/username/resetpassword
The main reason given was that the same URLs might be used in different contexts and that this didn't facilitate HATEOAS in a meaningful way.
He then continued to argue that a more viable approach is to use the following URLs:
/account/changepassword
/administration/server/users/username/resetpassword
According to the speaker, this latter approach allowed each use case to have a specifically tailored (HTML) form, which could then be posted to the same URL. No more problems with the same URL used in different contexts.
I would spontaneously say that neither of these URL sets is RESTful, simply because they are both centered around actions (verbs), which in my eyes do not really qualify as resources except in exceptional cases (like search). This setup feels very RPC-like to me.
I would have suggested something more noun-like and granular like
//Change password
PUT /users/username/account/password
//Register reset
POST /users/username/account/password/resets
//Verify reset
PUT /users/username/account/password/resets/0/verification_code
What is your opinion? Is the speaker's approach RESTful or not, or is there simply not enough information here?
I agree; the whole idea of a RESTful interface (as I understand it) is to allow access to "resources". So neither of those URL schemes seems very nice to me.
Having said that, REST isn't set in stone; it is more of a guide than a set of rules. Some things don't sit that well with it, so you have to get as close as you can using just the HTTP verbs.
A password reset isn't a resource; a password, however, is. So I would say something along these lines for a password reset operation ...
GET /users/antonyscott/password
PUT /users/antonyscott/password
with the second call requiring authentication of some sort derived from the first call and passing in the new password. Actually, that's more of a straight password change than a reset. If you're after a reset (i.e. following a link in an email to confirm the reset), then what you had seems okay.
Obviously, designing an API is an iterative process, so I would say have a go and see how it works, then refine it.
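For what it's worth, a minimal JAX-RS sketch of that noun-oriented design might look like this (the paths, class name and commented-out service calls are invented; authentication is left out entirely):

import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

/**
 * Treats the password itself as the resource: PUT replaces it, while a reset
 * is modelled as a subordinate resource created with POST and later confirmed.
 */
@Path("/users/{username}/password")
public class PasswordResource {

    // Change password: the new value replaces the current state of the resource.
    @PUT
    public Response changePassword(@PathParam("username") String username,
                                   String newPassword) {
        // passwordService.change(username, newPassword); // hypothetical service
        return Response.noContent().build();
    }

    // Register a reset: creates a "reset" subordinate resource; a real system
    // would email the user a verification link.
    @POST
    @Path("/resets")
    public Response registerReset(@PathParam("username") String username) {
        // String resetId = passwordService.createReset(username);
        return Response.status(Response.Status.CREATED).build();
    }

    // Verify a reset: the verification code identifies the pending reset.
    @PUT
    @Path("/resets/{code}")
    public Response verifyReset(@PathParam("username") String username,
                                @PathParam("code") String code,
                                String newPassword) {
        // passwordService.completeReset(username, code, newPassword);
        return Response.ok().build();
    }
}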

Apple iCal's "Delegation" tab -- disabled checkboxes?

I am trying to access a CalDAV account in iCal and everything works fine except for the Delegation tab. I can see the account(s) I have access to (including the correct read/write properties), but the checkboxes are disabled and the calendars cannot be selected. Has anyone seen this before and does anyone know what the cause might be?
This is a custom CalDAV implementation, so it is likely due to a disconnect between what iCal expects and what our server is sending -- but there are no error/warning messages in the console to indicate what the problem might be.
Any advice would be appreciated.
iCal queries the permissions and methods available on the server. To query the permissions on a collection resource you will need to have the DAV::read-current-user-privilege-set permission. Assuming iCal can read the permissions it will be looking for the DAV::read permission for reading and the DAV::bind, DAV::unbind and DAV::write permissions to indicate the ability to write.
The best way to debug this is probably to read RFC 3744 about half a dozen times, interspersed with using iCal against a working server and sniffing the TCP communication as it does so. A good way is to use some kind of man-in-the-middle proxy so you can sniff the communication with (e.g.) MobileMe or iCloud.
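As a starting point for that kind of debugging, here is a sketch of a PROPFIND asking for the DAV:current-user-privilege-set property defined in RFC 3744 (the calendar URL is a placeholder, authentication is omitted, and it assumes Java 11's java.net.http client):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PrivilegeProbe {
    public static void main(String[] args) throws Exception {
        // PROPFIND body asking only for the current user's privilege set.
        String body =
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
          + "<D:propfind xmlns:D=\"DAV:\">"
          + "  <D:prop><D:current-user-privilege-set/></D:prop>"
          + "</D:propfind>";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://caldav.example.com/calendars/shared-account/")) // placeholder
            .header("Depth", "0")
            .header("Content-Type", "application/xml; charset=utf-8")
            .method("PROPFIND", HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // Look for DAV:read, DAV:write, DAV:bind and DAV:unbind in the multistatus body.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

Comparing what your server returns here against a known-good server should show fairly quickly whether the disabled checkboxes come from a missing privilege.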
In my limited experience, this happens when the account used for sharing is a functional (not personal) account in Microsoft Exchange Server 2010, for example a setup where two of three accounts are functional.
I do use various CalDAV implementations but have never encountered the same limitation, so this may not be a good answer. Also, Exchange Web Services (EWS) for calendaring and delegation are probably not comparable to CalDAV. Still, it's food for thought.
The Debug menu of iCal 5.x offers CalDAV logging options.
To enable that menu, you could use the Secrets preference pane.

SOP issue behind reverse proxy

I've spent the last 5 months developing a GWT app, and it's now become time for third-party people to start using it. In preparation for this, one of them has set up my app behind a reverse proxy, and this immediately resulted in problems with the browser's same-origin policy. I guess there's a problem in the response headers, but I can't seem to rewrite them in any way that makes the problem go away. I've tried this
response.setHeader("Server", request.getRemoteAddr());
in some sort of naive attempt to mimic the behaviour I want. Didn't work (to the surprise of no one).
Anyone knowing anything about this will most likely snicker and shake their heads when reading this, and I do not blame them. I would snicker too, if it was me... I know nothing at all about this, and that naturally makes this problem awfully hard to solve. Any help at all will be greatly appreciated.
How can I get the header rewrite to work and get away from the SOP issues I'm dealing with?
Edit: The exact problem I'm getting is a pop-up saying:
"SmartClient can't directly contact
URL
'https://localhost/app/resource?action='doStuffs'"
due to browser same-origin policy.
Remove the host and port number (even
if localhost) to avoid this problem,
or use XJSONDataSource protocol (which
allows cross-site calls), or use the
server-side HttpProxy included with
SmartClient Server."
But I shouldn't need the smartclient HttpProxy, since I have a proxy on top of the server, should I? I've gotten no indications that this could be a serialisation problem, but maybe this message is hiding the real issue...
Solution
chris_l and saret both helped to find the solution, but since I can only mark one I marked the answer from chris_l. Readers are encouraged to bump them both up, they really came through for me here. The solution was quite simple, just remove any absolute paths to your server and use only relative ones, that did the trick for me. Thanks guys!
The SOP (for AJAX requests) applies when the URL of the HTML page and the URL of the AJAX requests differ in their "origin". The origin includes host, port and protocol.
So if the page is http://www.example.com/index.html, your AJAX requests must also point to something under http://www.example.com. For the SOP, it doesn't matter if there is a reverse proxy; just make sure that the URL, as it appears to the browser (including port and protocol), isn't different. The URL you use internally is irrelevant, but don't use that internal URL in your GWT app!
Note: The solution in the special case of SmartClient turned out to be using relative URLs (instead of absolute URLs to the same origin). Since relative URLs aren't an SOP requirement in browsers, I'd say that's a bug in SmartClient.
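To illustrate the relative-URL point in GWT terms, a sketch along these lines (the resource path is a placeholder and the handlers are stubbed out):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;

public class ResourceClient {

    // Relative URL: the browser resolves it against whatever origin served the
    // host page, so the same code works both directly and behind the reverse
    // proxy. An absolute URL such as "https://localhost/app/resource" breaks
    // as soon as the page is served from a different origin.
    private static final String RESOURCE_URL = "app/resource"; // placeholder path

    public void fetch() {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, RESOURCE_URL);
        try {
            builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    // handle response.getText()
                }

                @Override
                public void onError(Request request, Throwable exception) {
                    // handle failure
                }
            });
        } catch (RequestException e) {
            // handle request construction/dispatch failure
        }
    }
}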
What issue are you having exactly?
Having previously had to write a reverse proxy for a GWT app, I can't remember hitting any SOP issues. One thing you need to do, though, is make sure response headers and URIs are rewritten to the reverse proxy's URL; this includes AJAX callback URLs.
One issue I hit (which you might also experience) when running behind a reverse proxy was with the serialization policy of the GWT server.
Fixing this required writing an implementation of RemoteServiceServlet. While this was in early/mid 2009, it seems the issue still exists.
It seems others have hit this as well; see this for further details (the answer by Michele Renda in particular).