Tornado redirect to a different domain

I have a Tornado server and I want to redirect people coming from a certain country to a totally different domain. The decision depends on their IP, and it needs to work for every URI. So, for example, if someone goes to www.mysite.com/about from a British IP, I want to redirect her to www.mysite.uk/about.
I tried adding an initialize() method to my BaseHandler, but from what I've seen it's impossible to finish the request from there.
I also looked at RedirectHandler, but it only seems to change the URI, not the whole domain as I need.
Do you know of any solution within Tornado? (I also use nginx, but I don't think it can handle checking the IP, looking up the location, and covering all of my URIs.)
Thank you!

RedirectHandler works with both absolute and relative URLs; why do you think you can't change the domain with it?
You cannot redirect (or send any response) from initialize(), but you can from prepare(). It sounds like this is the right place for what you want to do:
def prepare(self):
    if should_redirect(self.request):
        # redirect() takes a single URL, so join the new domain and the
        # original path, then raise Finish() to stop normal handling.
        self.redirect(new_domain + self.request.uri)
        raise tornado.web.Finish()
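For a fuller picture, here is a minimal sketch of how that prepare() hook could sit in a base handler. The country_for_ip() helper and the target domain are assumptions made for the example, not part of the question:

import tornado.web

TARGET_DOMAIN = "https://www.mysite.uk"  # assumed target domain for the example

def country_for_ip(ip):
    # Hypothetical GeoIP lookup; replace with a real database lookup
    # (for instance the geoip2 package). Returns an ISO country code.
    return None

class BaseHandler(tornado.web.RequestHandler):
    def prepare(self):
        # request.remote_ip reflects X-Real-Ip / X-Forwarded-For when the
        # HTTPServer is created with xheaders=True, which matters behind nginx.
        if country_for_ip(self.request.remote_ip) == "GB":
            self.redirect(TARGET_DOMAIN + self.request.uri)
            raise tornado.web.Finish()

Every handler that inherits from BaseHandler then gets the redirect for free, regardless of which URI was requested.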

Related

How to configure Big Blue Button for Xirsys TURN server?

I run a self-hosted instance of BigBlueButton and signed up for the Xirsys TURN server service because we need to serve clients behind (pretty restrictive) firewalls. Before that I ran my own instance of coturn, but as this led to problems recently, I thought I'd give someone who does this for a living a try.
Now the configuration in BBB is explained here:
https://docs.bigbluebutton.org/2.2/setup-turn-server.html
Yet so far I have completely failed to match the parameters I receive from Xirsys with what I have to put into the /usr/share/bbb-web/WEB-INF/classes/spring/turn-stun-servers.xml file in place of <turn.example.com> and <secret_value>.
Did anyone ever make this work? I tried to find a tutorial, but failed at that as well.
bbb-web returns the TURN URIs and passwords to the HTML5 client, which then uses them in sip.js.
So you can either get bbb-web to send valid usernames/passwords (if the same shared-secret method is used), or modify the HTML5 client to make a Xirsys API call to get access to the TURN candidates.
You would need to look at the API docs; Twilio has a similar service.
regards,
Stephen
Not the most elegant solution, but the easiest one for me:
Modify the final BBB JS bundle to load the STUN/TURN info from a fixed URL, e.g. in
/usr/share/meteor/bundle/programs/web.browser/f30716b2b57e2862c4db2325b7aac63f4622842b.js
The minified part should then look somewhat like:
const r=Meteor.settings.public.media,i='https://<yourbbburl>/html5client/stunturn.json',a=r.cacheStunTurnServers,s=r.fallbackStunServer;
and put either static credentials or generated ones in a stunturn.json file next to the JS bundle.
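If you want the "generated" variant and your TURN server uses the standard shared-secret scheme (what coturn calls use-auth-secret, and what the <secret_value> in turn-stun-servers.xml is normally paired with), the short-lived credentials can be produced as sketched below. This is only a sketch under that assumption: Xirsys normally hands out credentials through its own API, and the exact JSON shape the patched bundle expects should be checked against what bbb-web actually serves. HOST and SECRET are placeholders.

import base64
import hashlib
import hmac
import json
import time

SECRET = "replace-with-your-shared-secret"   # placeholder
HOST = "turn.example.com"                    # placeholder

def turn_credentials(ttl=86400):
    # TURN REST API style: username is the expiry timestamp, password is
    # base64(HMAC-SHA1(secret, username)).
    username = str(int(time.time()) + ttl)
    digest = hmac.new(SECRET.encode(), username.encode(), hashlib.sha1).digest()
    password = base64.b64encode(digest).decode()
    return {
        "stunServers": [{"url": "stun:%s" % HOST}],
        "turnServers": [{
            "username": username,
            "password": password,
            "url": "turn:%s:3478" % HOST,
            "ttl": ttl,
        }],
    }

if __name__ == "__main__":
    # Dump something you could serve as stunturn.json; verify the shape
    # against the structure your BBB version expects before relying on it.
    print(json.dumps(turn_credentials(), indent=2))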

HTTP header field for URI deprecation/expiration

I'm building a REST service where I want to implement a way to deprecate certain URIs when they shouldn't be supported anymore for one reason or another. As functions are deprecated, they will be replaced by new ones that work in similar (but not identical) ways. This means that at some point, I will have to start responding with 410 Gone.
The idea is that all client software should be updated, and after say six months all users should have had the chance to upgrade. At this time, the deprecated URIs will start to inform the client that it's out of date, so that the client can display a message to the user. This time is not known in advance, though, and can't explicitly be written in the documentation.
The problem I want to solve is:
Is there an HTTP header field I should use to indicate that a certain URI will cease to work at a certain time and, if so, which?
This can't be the first time someone wants to solve this problem. Is there an unofficial header field already in use, or should I design my own? Note that I don't want to add this information to the content itself, as that would imply every resource had changed and needed to be refreshed by the client, which of course is not the case.
Strictly speaking, no. The resources should be driving your application's state, so if there is a change, the URI linking would provide the necessary changes to your application.
As for an HTTP header, you are free to add custom headers; by convention they have usually started with X-. But it's important to keep in mind that changes to URIs are only interesting to developers, not users.
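Purely as an illustration of the custom-header route, here is a small Tornado-flavoured sketch (Tornado only because it already appears on this page; the question itself is framework-agnostic). The X-Deprecated-At name and the timestamp are made up for the example:

import email.utils
import tornado.web

class OldStyleHandler(tornado.web.RequestHandler):
    # Unix timestamp of the planned cut-off; placeholder value.
    DEPRECATED_AT = 1750000000

    def prepare(self):
        # Announce the planned removal date on every response from this URI,
        # formatted as an HTTP-date, without touching the response body.
        self.set_header("X-Deprecated-At",
                        email.utils.formatdate(self.DEPRECATED_AT, usegmt=True))

    def get(self):
        self.write({"status": "still here, but not for long"})

Clients that know about the header can warn their users; clients that don't simply ignore it, which keeps the resource representations themselves unchanged.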

Rest: Right or Wrong to Choose URLs From Usecases

I was at a developer conference where the speaker argued that the following set of URLs are not RESTful:
/users/username/changepassword
/users/username/resetpassword
The main reason given was that the same URLs might be used in different context and that this didn't facilitate HATEOAS in a meaningful way.
He then continued to argue that a more viable approach is to use the following URLs:
/account/changepassword
/administration/server/users/username/resetpassword
According to the speaker this latter approach allowed for each use-case to have a specifically tailored (html-)form for each URL, which could then be posted to the same URL. No more problems with the same URL used in different contexts.
I would spontaneously say that neither of these URL sets is RESTful, simply because they are both centered around actions (verbs), which in my eyes do not really qualify as resources except in exceptional cases (like search). This setup feels very RPC-like to me.
I would have suggested something more noun-like and granular like
//Change password
PUT /users/username/account/password
//Register reset
POST /users/username/account/password/resets
//Verify reset
PUT /users/username/account/password/resets/0/verification_code
What is your opinion? Is the speaker's approach RESTful or not, or is there simply not enough information here?
I agree, the whole idea of a RESTful interface (as I understand it) is to allow access to "resources". So neither of those URL schemes seem very nice to me.
Having said that, REST isn't set in stone; it's more of a guide than a set of rules. Some things don't sit that well with it, so you have to get as close as you can using just the HTTP verbs.
A password reset isn't a resource; a password, however, is. So I would say something along these lines for a password reset operation ...
GET /users/antonyscott/password
PUT /users/antonyscott/password
With the second call requiring authentication of some sort derived from the first call, and passing in the new password. Actually, that's more of a straight password change than a reset. If you're after a reset (i.e. following a link in an email to confirm the reset), then what you had seems okay.
Obviously designing an API is an iterative process, so I would say have a go and see how it works, then refine it.
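To make the noun-based variant concrete, here is a rough sketch of how such routes could be wired up. The handlers are stubs, Tornado is used only because it appears earlier on this page, and the URL shapes mirror the ones proposed above:

import tornado.web

class PasswordHandler(tornado.web.RequestHandler):
    def put(self, username):
        # Change the password for an authenticated user.
        self.set_status(204)

class PasswordResetsHandler(tornado.web.RequestHandler):
    def post(self, username):
        # Register a reset request (e.g. trigger the confirmation e-mail).
        self.set_status(202)

class PasswordResetVerificationHandler(tornado.web.RequestHandler):
    def put(self, username, reset_id, code):
        # Verify a pending reset with the code from the e-mail.
        self.set_status(204)

def make_app():
    return tornado.web.Application([
        (r"/users/([^/]+)/account/password", PasswordHandler),
        (r"/users/([^/]+)/account/password/resets", PasswordResetsHandler),
        (r"/users/([^/]+)/account/password/resets/(\d+)/([^/]+)",
         PasswordResetVerificationHandler),
    ])

Nothing here settles the HATEOAS argument, but it shows that the noun-oriented URLs map onto plain PUT/POST handlers without inventing verb endpoints.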

SOP issue behind reverse proxy

I've spent the last 5 months developing a GWT app, and it's now time for third-party people to start using it. In preparation for this, one of them has set up my app behind a reverse proxy, and this immediately resulted in problems with the browser's same-origin policy. I guess there's a problem in the response headers, but I can't seem to rewrite them in any way that makes the problem go away. I've tried this
response.setHeader("Server", request.getRemoteAddress());
in some sort of naive attempt to mimic the behaviour I want. Didn't work (to the surprise of no-one).
Anyone knowing anything about this will most likely snicker and shake their heads when reading this, and I do not blame them. I would snicker too, if it was me... I know nothing at all about this, and that naturally makes this problem awfully hard to solve. Any help at all will be greatly appreciated.
How can I get the header rewrite to work and get away from the SOP issues I'm dealing with?
Edit: The exact problem I'm getting is a pop-up saying:
"SmartClient can't directly contact URL 'https://localhost/app/resource?action='doStuffs'" due to browser same-origin policy. Remove the host and port number (even if localhost) to avoid this problem, or use XJSONDataSource protocol (which allows cross-site calls), or use the server-side HttpProxy included with SmartClient Server."
But I shouldn't need the smartclient HttpProxy, since I have a proxy on top of the server, should I? I've gotten no indications that this could be a serialisation problem, but maybe this message is hiding the real issue...
Solution
chris_l and saret both helped to find the solution, but since I can only mark one I marked the answer from chris_l. Readers are encouraged to bump them both up, they really came through for me here. The solution was quite simple, just remove any absolute paths to your server and use only relative ones, that did the trick for me. Thanks guys!
The SOP (for AJAX requests) applies, when the URL of the HTML page, and the URL of the AJAX requests differ in their "origin". The origin includes host, port and protocol.
So if the page is http://www.example.com/index.html, your AJAX request must also point to something under http://www.example.com. For the SOP, it doesn't matter, if there is a reverse proxy - just make sure, that the URL - as it appears to the browser (including port and protocol) - isn't different. The URL you use internally is irrelevant - but don't use that internal URL in your GWT app!
Note: The solution in the special case of SmartClient turned out to be using relative URLs (instead of absolute URLs to the same origin). Since relative URLs aren't an SOP requirement in browsers, I'd say that's a bug in SmartClient.
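As a small illustration of the origin comparison the browser performs (scheme, host and port must all match), here is a throwaway Python check; the URLs are placeholders standing in for the page and the AJAX targets:

from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    # Fill in the default port when none is given explicitly.
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

page = "https://www.example.com/index.html"
ajax_ok = "https://www.example.com/app/resource"   # same origin
ajax_bad = "https://localhost/app/resource"        # different host, SOP applies

print(origin(page) == origin(ajax_ok))   # True
print(origin(page) == origin(ajax_bad))  # False

Relative URLs sidestep the comparison entirely, because the browser resolves them against the page's own origin, which is why they fixed the SmartClient case above.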
What issue are you having exactly?
Having previously had to write a reverse proxy for a GWT app, I can't remember hitting any SOP issues. One thing you do need to do, though, is make sure response headers and URIs are rewritten to the reverse proxy's URL - this includes AJAX callback URLs.
One issue I hit (which you might also experience) when running behind a reverse proxy was with the GWT server's serialization policy.
Fixing this required writing an implementation of RemoteServiceServlet. While this was in early/mid 2009, it seems the issue still exists.
It seems others have hit this as well - see this for further details (the answer by Michele Renda in particular).

Connectedness & HATEOAS

It is said that in a well-defined RESTful system, clients only need to know the root URI or a few well-known URIs, and should discover all other links through those initial URIs. I understand the benefit (decoupled clients) of this approach, but the downside for me is that the client needs to discover the links each time it tries to access something, i.e. given the following hierarchy of resources:
/collection1
|-sub1
|  |-sub1sub1
|  |  |-sub1sub1sub1
|  |  |  |-sub1sub1sub1sub1
|  |-sub1sub2
|-sub2
|  |-sub2sub1
|  |-sub2sub2
|-sub3
|  |-sub3sub1
|  |-sub3sub2
If we follow the "clients only need to know the root URI" approach, then a client should only be aware of the root URI, i.e. /collection1 above, and the rest of the URIs should be discovered through hypermedia links. I find this cumbersome because each time a client needs to do a GET on, say, sub1sub1sub1sub1, should it first do a GET on /collection1, follow the link in the returned representation, and then do several more GETs on sub-resources to reach the desired resource? Or is my understanding of connectedness completely wrong?
Best regards,
Suresh
You will run into this mismatch when you try to build a REST API that does not match the flow of the user agent that is consuming the API.
Consider when you run a client application, the user is always presented with some initial screen. If you match the content and options on this screen with the root representation then the available links and desired transitions will match nicely. As the user selects options on the screen, you can transition to other representations and the client UI should be updated to reflect the new representation.
If you try and model your REST API as some kind of linked data repository and your client UI as an independent set of transitions then you will find HATEOAS quite painful.
Yes, it's right that the client application should traverse the links, but once it has discovered a resource, there's nothing wrong with keeping a reference to that resource and using it for longer than a single request. If your client has the possibility of remembering things permanently, it can do so.
Consider how a web browser keeps its bookmarks. You probably have ten or a hundred bookmarks in the browser, and you probably found some of them deep in a hierarchy of pages, but the browser dutifully remembers them without needing to remember the path it took to find them.
A more rich client application could remember the URI of sub1sub1sub1sub1 and reuse it if it still works. It's likely that it still represents the same thing (it ought to). If it no longer exists or fails for any other client reason (4xx) you could retrace your steps to see if you can find a suitable replacement.
And of course what Darrel Miller said :-)
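A rough sketch of that "discover once, remember, re-traverse on failure" idea. The {"links": {rel: href}} document shape, the root URI and the requests-based client are assumptions made for the example, not a prescribed media type:

import requests

ROOT = "https://api.example.com/collection1"  # placeholder root URI
_bookmarks = {}  # rel-path tuple -> last known URI

def follow(path):
    # Walk rel names from the root, e.g. ("sub1", "sub1sub1", ...),
    # following the link advertised in each representation.
    url = ROOT
    for rel in path:
        doc = requests.get(url).json()
        url = doc["links"][rel]
    return url

def get_resource(path):
    url = _bookmarks.get(path)
    if url:
        resp = requests.get(url)
        if resp.status_code < 400:
            return resp.json()
        # Bookmark went stale (4xx): fall through and re-discover it.
    _bookmarks[path] = follow(path)
    return requests.get(_bookmarks[path]).json()

So the deep traversal happens once per resource (and again only when a remembered URI stops working), not on every request.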
I don't think that's a strict requirement. As I understand it, it is legal for a client to access resources directly and start from there. The important thing is that you do not do this for state transitions, i.e. do not automatically proceed to /foo2 after operating on /foo1, and so forth. Retrieving /products/1234 initially in order to edit it seems perfectly fine. The server could always return, say, a redirect to /shop/products/1234 to remain backwards compatible (which is desirable for search engines, bookmarks and external links as well).