How to redirect to another port in Play Framework

Can anyone tell me how to redirect to a module on another port? For example: redirect from http://localhost:9000 to https://localhost:9443/login.
Without changing ports, I'd just use #{Secure.login()} in the controller, but I couldn't find any way to redirect to another port.

The way you are meant to do this, for your example, is to use the .secure() method. It was added in Play 1.0.3.2.
So it would look like
#{Secure.login().secure()}
This is a special method on the router object that changes the URL from HTTP to HTTPS. The last time I checked, though, it didn't change the port. I raised a bug, but I'm not sure whether it has been fixed in the 1.2 master branch yet (https://github.com/playframework/play/blob/master/framework/src/play/mvc/Router.java).
The reason for this is that Play expects an HTTP server to sit in front of it in a production environment, handle HTTPS for you, and proxy requests through to Play as plain HTTP. The purpose of .secure() is to tell the URL to switch to HTTPS but still go through the same domain.
I don't think there are many alternatives (and none that are nice and simple).
You could take the Play source and alter the Router.java file so that it also changes the port number (in the secure method).
Or, you could write a FastTag that mimics Router.reverse (effectively what the # symbol does) but replaces the port number with a secure one.

As codemwnci explained, in production Play generally sits behind a front-end proxy that handles all secure-channel concerns and can also be used for load balancing.
The #{Secure.login().secure()} approach should work, but it only changes http to https.
In addition, here is a crude kludge that can be used in a controller:
redirect("http://www.zenexity.fr:9876");
;)

Related

Using TCP_QUICKACK with nginx

I've been bitten recently by the combination of a delayed ACK on the server side and Nagle's algorithm on the client side, producing the recognizable 40ms delay that is documented here: http://www.boundary.com/blog/2012/05/know-a-delay-nagles-algorithm-and-you/
The easiest way to fix this is to use TCP_NODELAY on the client side (TCP_CORK should also work in our case). However, I don't have direct control over the client, and would like to try a server-side fix. It seems that the TCP_QUICKACK option would do the trick here, since the server would immediately ACK, causing Nagle's algorithm on the client side to send the next packet without delay.
Surprisingly, I couldn't find any reference to people trying this before. Is it a bad idea (besides the fact that we'd be sending more, possibly gratuitous, ACKs)? Since this option doesn't seem to be available via any nginx config, is the best bet to just patch nginx directly (perhaps around http://hg.nginx.org/nginx/file/dcae651b2a0c/src/http/ngx_http_request.c#l3025)?
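For reference, the patch I have in mind would amount to something like this at the socket level (a hypothetical sketch; fd would be the accepted client socket inside nginx):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Arm TCP_QUICKACK on a connection. Note the flag is not sticky: the
 * kernel clears it again, so it must be re-armed after each read. */
static void arm_quickack(int fd)
{
    int one = 1;
    /* Best-effort; TCP_QUICKACK is Linux-specific. */
    (void)setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
}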
Thanks!
I know this question is old but let me answer anyway.
Since it doesn't look like this option is available via any nginx config
There is an nginx tcp_nodelay directive to take care of it. It is usually combined with tcp_nopush and sendfile.
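As a minimal sketch, that combination usually appears in nginx.conf like this (the directive names are real; the placement and values are just illustrative):

http {
    sendfile    on;    # zero-copy file sends from the kernel
    tcp_nopush  on;    # fill packets before sending (used with sendfile)
    tcp_nodelay on;    # disable Nagle's algorithm on keep-alive connections
}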
For more nginx optimizations, please read this article.

C Programming - Sending HTTP Request

My recent assignment is to build a proxy in C using socket programming. The proxy only needs to support HTTP/1.0. After several hours of work, I have made a proxy that can be used with Chromium. Various websites such as Google and several .edu sites load fine; however, many websites give me a 404 (page not found) error (these links work fine when not going through my proxy). These 404 errors even occur on the root address "/" of a site... which doesn't make sense.
Could this be a problem with my HTTP request? The HTTP request sent from the browser is parsed for the HTTP request method, hostname, and port. For example, if a GET request is parsed from the browser, a TCP connection is established to the hostname and port provided, and the HTTP GET request is sent in the following format:
GET /path/name/item.html HTTP/1.0\r\n\r\n
This format works for a small amount of websites, but a 404 error message is created for the rest. Could this be the problem? If not, what else could possibly be giving me this problem?
Any help would be greatly appreciated.
One likely explanation is that you've designed an HTTP/1.0 proxy, whereas just about any website on a shared host will only work with HTTP/1.1 these days (well, not quite, but I'll get to that in a second).
This isn't the only possible problem by a long way, but you'll have to give an example of a website which is failing like this to get some more ideas.
You seem to understand the basics of HTTP: the client makes a TCP connection to the server and sends an HTTP request over it, which consists of a request line (such as GET /path/name/item.html HTTP/1.0) and then a set of optional header lines, all separated by CRLF (i.e. \r\n). The whole lot is ended with two consecutive CRLF sequences, at which point the server at the other end matches up the request with a resource and sends back an appropriate response. Resources are all identified by a path (e.g. /path/name/item.html), which could be a real file or a dynamic page.
That much of HTTP has stayed pretty much unchanged since it was first invented. However, think about how the client finds the server to connect to. What you give it is a URL, like this:
http://www.example.com/path/name/item.html
From this it looks at the scheme, which is http, so it knows it's making an HTTP connection. The next part is the hostname. Under the original HTTP, the assumption was that each hostname resolved to its own IP address, and the client then connected to that IP address and made the request. Since every server hosted only one website in those days, this worked fine.
As the number of websites increased, however, it became difficult to give every website a different IP address, particularly as many websites were so simple that they could easily be shared on the same physical machine. It was easy to point multiple domains at the same IP address (the DNS system makes this really simple), but when the server received the TCP request it would just know it had a request to its IP address - it wouldn't know which website to send back. So, a new Host header was added so that the client could indicate in the request itself which hostname it was requesting. This meant that one server could host lots of websites, and the webserver could use the Host header to tell which one to serve in the response.
These days this is very common - if you don't use the Host header then a number of websites won't know which site you're asking for. What usually happens is that they assume some default website from the list they've got, and the chances are this won't have the file you're asking for. Even if you're asking for /, if you don't provide the Host header then the webserver may give you a 404 anyway, if it's configured that way - this isn't unreasonable if there isn't a sensible default website to give you.
You can find the description of the Host header in the HTTP RFC if you want more technical details.
Also, it's possible that websites just plain refuse HTTP/1.0 - I would be slightly surprised if that happened on so many websites, but you never know. Still, try the Host header first.
Contrary to what some people believe, there's nothing to stop you using the Host header with HTTP/1.0, although you might still find some servers which don't like that. It's a little easier than supporting full HTTP/1.1, which requires that you understand chunked encoding and other complexities; for simple example code you could probably get away with just adding the Host header and calling it HTTP/1.1 (I wouldn't suggest that's adequate for production code, however).
Anyway, you can try adding the Host header to make your request like this:
GET /path/name/item.html HTTP/1.0\r\n
Host: www.example.com\r\n
\r\n
I've split it across lines just for easy reading - you can see there's still the blank line at the end.
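If it helps, here's a rough C sketch of writing that request to an already-connected socket (the function name and buffer size are just illustrative, and a robust proxy should loop on partial sends):

#include <stdio.h>
#include <sys/socket.h>

/* Sketch: send an HTTP/1.0 GET with a Host header over a connected
 * socket. Minimal error handling for brevity. */
static int send_request(int sockfd, const char *host, const char *path)
{
    char req[2048];
    int len = snprintf(req, sizeof(req),
                       "GET %s HTTP/1.0\r\n"
                       "Host: %s\r\n"
                       "\r\n", path, host);
    if (len < 0 || len >= (int)sizeof(req))
        return -1;                 /* path/host too long for the buffer */
    return send(sockfd, req, (size_t)len, 0) == len ? 0 : -1;
}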
Even if this isn't causing the problem you're seeing, the Host header is a really good idea these days, as there are definitely sites that won't work without it. If you're still having problems, then give me an example of a site which doesn't work for you and we can try to work out why.
If anything I've said is unclear or needs more detail, just ask.

Peer 2 Peer call using PJSIP and PJSUA

I am still learning about SIP and all its protocols, specifically trying to integrate PJSIP into an iPhone application to make p2p calls.
I have a question about a peer-to-peer connection using PJSUA. I am able to make calls perfectly to other clients on my local network by calling directly using the URI:
sip:192.*.*.*:5060
I am curious whether this will work for making direct calls to other SIP URIs that are not on the local network, without using server configuration. If not this way, is there another way of making p2p calls without server configuration?
Thanks in advance.
You can make calls without server configuration, as a general principle, but something needs configuring. As mattjgalloway points out in the comments below your question, the most robust solution is a can of worms involving ICE which provides a kind of "umbrella" protocol for things like STUN.
Last time I touched this issue, I had the requirement that I couldn't use internet-based SIP servers to help. I came up with the idea of a registry of sorts: your client can define a bunch of "address spaces" with particular routing requirements. For SIP URIs in your LAN, you define no routing; for URIs in your company's VPN-accessed network, you define a route passing through your VPN connection; for everything else you define a route through your internet router.
By "define a route", I mean that when you place a call to a URI in some particular address space, you store what IP will go into a Contact header, what Route headers you might need, and so on.
Thus, the process of making a call becomes the following (see the PJSUA sketch after this list):
Look up in the set of address spaces for a match.
Ask that address space for the suitable bits needed to make a workable INVITE (appropriate Contact header details, Route headers, etc.)
Construct a normal INVITE, mutating as necessary for the previous step.
Send the INVITE as normal.
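With PJSUA, steps 2-4 might look roughly like this. This is a sketch only: the account id, destination URI and Route value are placeholders standing in for the hypothetical address-space lookup, and the exact pjsua_call_make_call signature varies between PJSIP versions:

#include <pjsua-lib/pjsua.h>

/* Sketch: place a call, injecting a Route header chosen from the
 * matching address space. All values are placeholders. */
static pj_status_t call_via_route(pjsua_acc_id acc_id)
{
    pjsua_msg_data msg_data;
    pjsip_generic_string_hdr route;
    pj_str_t hname  = pj_str("Route");
    pj_str_t hvalue = pj_str("<sip:10.0.0.1;lr>");  /* from the address space */
    pj_str_t dst    = pj_str("sip:alice@192.168.1.20:5060");
    pjsua_call_id call_id;

    pjsua_msg_data_init(&msg_data);
    pjsip_generic_string_hdr_init2(&route, &hname, &hvalue);
    pj_list_push_back(&msg_data.hdr_list, &route);

    /* Older (1.x) signature: (acc, dst, options, user_data, msg_data, &id) */
    return pjsua_call_make_call(acc_id, &dst, 0, NULL, &msg_data, &call_id);
}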
This essentially reproduces half of what ICE would give you, in a manually administered form. "Half", because this only ensures that one SIP agent can make calls such that the SIP routing all works. The missing half is that you still need some kind of registrar somewhere, and each agent in your contact list needs the necessary setup to receive incoming calls. (If an agent is behind a NATting internet router, the router would need to either run a SIP proxy, or forward ports 5060 and 5061 to a particular machine, which might be an agent or a proxy serving the LAN's agents.)
It is, indeed, a large can of worms.
The basic issue to solve is getting transport ports reachable from anywhere on the internet for multimedia traffic.
Many companies and experts have tried to solve this. A possible way out is to buy a domain, set up a basic registrar using YATE or Asterisk on an address accessible from the internet, and configure it to also use ICE as needed. Your iPhone application at both ends could register to it automatically upon start, and then make P2P calls.

Is Fiddler a Good Building Block for an Internet Filter?

So I want to make my own Internet filter. Don't worry, I'm not asking for a tutorial. I'm just wondering if Fiddler would make a good backbone for it. I'm a little worried because it seems there are a few things Fiddler can't always pick up, or that there are workarounds. So, my questions:
Would Fiddler grab all web data? e.g. chats, emails, websites, etc.
Are there any known workarounds?
Any other reasons not to use it?
Thanks!
I think you mean FiddlerCore rather than Fiddler. Fiddler(Core) is a web proxy, meaning it captures HTTP/HTTPS traffic; it won't capture traffic that uses other protocols (e.g. IRC). To capture traffic from other protocols, you'll need a lower-level interception point (e.g. a Windows Firewall filter), which will capture everything but will not be able to decrypt HTTPS traffic, and parsing/modifying the traffic will prove MUCH harder.

See what website the user is visiting in a browser independent way

I am trying to build an application that can show a user website-specific information whenever they visit a website that is present in my database. This must be done in a browser-independent way, so the user always sees the information (no matter what browser or other tool he or she is using to visit the website).
My first (partially successful) approach was to look at the data packets using the System.Net.Sockets.Socket class. Unfortunately, I discovered that this approach only works when the user has administrator rights, and of course that is not what I want. My goal is that the user can install one relatively simple program that can be used right away.
After this I went looking for alternatives and found a lot about WinPcap and some of its .NET wrappers (did I tell you I am programming in C# .NET already?). But WinPcap must be installed on the user's PC; there is no way to just reference some DLL files and code away. I already looked at including WinPcap as a prerequisite in my installer, but that is too cumbersome.
Well, long story short: I want my application to know what website my user is visiting at the moment it happens. I think it must be done by looking at the network's data packets, but I can't find a good solution. My application is built in C# .NET (4.0).
You could use Fiddler to monitor Internet traffic.
It is
a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
It's scriptable and can be readily used from .NET.
One simple idea: instead of monitoring the traffic directly, what about installing a browser extension that sends you the current URL of the page? Then you can check whether that URL is in your database and optionally show the user a message via the browser extension.
This is how extensions like Invisible Hand work... It scans the current page and sends relevant data back to the server for processing. If it finds anything, it uses the browser extension framework to communicate those results back to the user. (Using an alert, or a bar across the top of the window, etc.)
For a good start, Wireshark will do what you want.
You can specify a filter to isolate and view HTTP streams.
The best part is that Wireshark is open source, and built upon another open-source program's API: WinPcap.
I'm guessing this is what you want.
capture network data off the wire
view the TCP traffic of a computer, and isolate and save (in part or in whole) the HTTP data
store information about the HTTP connections
Number 1 there is easy: you can google for a WinPcap tutorial, or just use some of their sample programs to capture the data.
I recommend you study the pcap file format; everything in WinPcap uses this basic format and its structures.
Then you have to learn how to take a TCP stream and turn it into a solid data stream without corruption or disorganized parts.
Again, a very good example can be found in the Wireshark source code.
Then, with your data stream, you can simply read the HTTP format and HTML data, or whatever you're dealing with.
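As a starting point, a minimal capture loop with the pcap API looks roughly like this (the device name and filter are illustrative, device naming differs on Windows/WinPcap, and stream reassembly is still up to you):

#include <pcap.h>
#include <stdio.h>

/* Print the size of each packet matching the filter. */
static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes\n", hdr->caplen);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program prog;
    pcap_t *h = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (!h) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }
    if (pcap_compile(h, &prog, "tcp port 80", 1, 0) == 0)
        pcap_setfilter(h, &prog);
    pcap_loop(h, -1, on_packet, NULL);  /* capture until error or break */
    pcap_close(h);
    return 0;
}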
Hope that helps
If the user is cooperating, you could have them set their browser(s) to use a proxy service you provide. This would intercept all web traffic, do whatever you want with it (look up in your database, notify the user, etc), and then pass it on to the original location. Run the proxy on the local system, or on a remote system if that fits your case better.
If the user is not cooperating, or you don't want to make them change their browser settings, you could use one of the packet-sniffing solutions, such as Fiddler.
A simple, straightforward way is to change the computer's DNS settings to point to your application.
This will cause all DNS traffic to pass through your app, where it can be sniffed and then redirected to the real DNS server.
It will also save you the hassle of filtering out eMule/torrent traffic, since that normally works with raw IP addresses (which might also be a problem, as your filter can be circumvented by browsing via IP address).
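The relay itself is only a few dozen lines in any language; here is a rough C sketch of its shape (single-threaded and naive: it assumes the next datagram received is the reply to the query just forwarded, the upstream address is illustrative, and binding port 53 usually needs elevated privileges):

#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = {0}, upstream = {0}, client = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_LOOPBACK);    /* clients point here */
    local.sin_port = htons(53);
    upstream.sin_family = AF_INET;
    inet_pton(AF_INET, "8.8.8.8", &upstream.sin_addr); /* real DNS server */
    upstream.sin_port = htons(53);
    if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) return 1;

    unsigned char buf[512];                 /* classic UDP DNS size limit */
    for (;;) {
        socklen_t len = sizeof(client);
        ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                             (struct sockaddr *)&client, &len);
        if (n <= 0) continue;
        /* ... sniff the query here (the QNAME starts at offset 12) ... */
        sendto(s, buf, (size_t)n, 0,
               (struct sockaddr *)&upstream, sizeof(upstream));
        n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
        if (n > 0) sendto(s, buf, (size_t)n, 0,
                          (struct sockaddr *)&client, len);
    }
}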
- How to change Windows DNS servers
- DNS resolver
Another simple way is to programmatically configure the browser's proxy to pass through your server; this will make your life easier, but will be more obvious to users.
How to create a simple proxy in C#?