nginx load balancing with network redirect

I'm looking for a way to have nginx dispatch requests so as to preserve the main server's network bandwidth (i.e. it should hand download requests off to some other servers).
Here is an extract of an nginx sample that performs load balancing:
upstream mystream {
    server ip1:port1;
    server ip2:port2;
}

server {
    listen myport;
    location / {
        proxy_pass http://mystream;
    }
}
The problem with this sample is that the main server acts as a proxy for the backend servers instead of redirecting the client: it still serves the file itself, so no bandwidth is saved.
Is there a way to configure nginx to dispatch download requests to the backend servers without acting as a proxy? Keeping the URL unchanged would be nice, but I'm open to rewriting it if needed.
Thanks

I finally found that split_clients is the best solution for my case, as the goal was to redirect clients to various download sites without any specific rule.
Note that this changes the URL, so the client will see the backend server's URL (not important in my case).
With this solution, a client asking for server:myport/abcd is redirected to serverx:portx/abcd based on MurmurHash2; see http://nginx.org/en/docs/http/ngx_http_split_clients_module.html
split_clients "${remote_addr}" $destination {
    40%     server1:port1;
    30%     server2:port2;
    20%     server3:port3;
    10%     server4:port4;
}

server {
    listen myport;
    location / {
        return 302 http://$destination$request_uri;
    }
}
Update
If you want to keep a single URL and have the backend servers reply directly to the client without any URL dispatch, you can configure load balancing using Linux Virtual Server (LVS) in Direct Routing mode.
To configure it, you set up a director VM and several "real servers" to which requests are dispatched transparently. See http://www.linuxvirtualserver.org/VS-DRouting.html

That's just how reverse proxying works:
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the Web server itself.
One possible solution is to configure your upstream servers to serve traffic to the public and then redirect your clients there.
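To make "redirect your clients there" concrete outside of nginx configuration, here is a minimal Go sketch of a dispatcher that answers every request with a 302 pointing at a publicly reachable backend. The backend names and the 40/30/20/10 weights are copied from the split_clients answer above; the listen port and the use of FNV-1a hashing (the Go standard library has no MurmurHash2) are illustrative assumptions, not anything from the original answers.

// redirect_dispatch.go: sketch of a dispatcher that 302-redirects each client
// to one of several public download servers instead of proxying the download.
package main

import (
    "hash/fnv"
    "log"
    "net"
    "net/http"
)

// Weighted buckets over 0..99, mirroring the 40/30/20/10 split used above.
// The backend names are the same placeholders as in the nginx example.
var buckets = []struct {
    limit   uint32 // upper bound (exclusive) of this bucket
    backend string
}{
    {40, "server1:port1"},
    {70, "server2:port2"},
    {90, "server3:port3"},
    {100, "server4:port4"},
}

// pickBackend hashes the client IP so that a given client always lands on the
// same backend (FNV-1a here; nginx's split_clients uses MurmurHash2).
func pickBackend(remoteAddr string) string {
    host, _, err := net.SplitHostPort(remoteAddr)
    if err != nil {
        host = remoteAddr
    }
    h := fnv.New32a()
    h.Write([]byte(host))
    n := h.Sum32() % 100
    for _, b := range buckets {
        if n < b.limit {
            return b.backend
        }
    }
    return buckets[len(buckets)-1].backend
}

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Redirect instead of proxying, so this server never carries the file.
        http.Redirect(w, r, "http://"+pickBackend(r.RemoteAddr)+r.RequestURI, http.StatusFound)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}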

Is it possible to run a Golang REST web app on an internal (private) IIS server?

I would like to create a web service with GoLang that runs either on IIS (7, 8 or 10) or under Tomcat 7.0. We have multiple environments, each with multiple servers, all being Windows 2008 R2, 2012 or 2016. All servers are private (10.x). My goal is to add some REST services to a COTS product that uses both IIS and Tomcat. I'd prefer to avoid gluing nginx or something else onto either server at the risk of impairing the COTS product or having their tech support people not answer the phone. Again .. the servers are only accessible via corporate VPN and are not public internet-facing.
Which server would offer the easiest path to get something working -- Tomcat or IIS?
That's not really about Go, but there are at least two solutions I can think of:

1. Reverse proxying of HTTP requests.
Write a plain Go server serving requests via HTTP (and maybe turn it into a proper Windows service using golang.org/x/sys/windows/svc), then deploy it.
If it's to be hosted on the same machine that runs IIS, it's fine to make it listen on localhost only. Note that it will need a dedicated TCP port to listen on, and you'll need to make that port configurable.
Then set up reverse proxying in IIS so that it forwards requests coming to whatever (part of an) URL you want to the Go server. A minimal sketch of such a server follows this list.

2. Use FastCGI.
Go supports serving requests over the FastCGI protocol by means of its standard library, and IIS supports FastCGI workers. So it's possible to (re-)write your Go server to use FastCGI instead of HTTP and then hook it into IIS via this protocol. A sketch of this variant appears at the end of this answer.
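To make the first option concrete, here is a minimal sketch of such a plain Go HTTP server. It binds to 127.0.0.1 only and takes its port from a command-line flag, as suggested above; the default port 8090 and the /api/ping route are made-up placeholders, not anything mandated by IIS.

// apiserver.go: minimal sketch of a plain Go HTTP server meant to sit behind
// an IIS reverse proxy on the same machine. The listen port is configurable
// via a flag and the server binds to localhost only.
package main

import (
    "encoding/json"
    "flag"
    "log"
    "net/http"
)

func main() {
    port := flag.String("port", "8090", "TCP port to listen on (localhost only)")
    flag.Parse()

    mux := http.NewServeMux()
    mux.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
    })

    addr := "127.0.0.1:" + *port
    log.Printf("listening on http://%s", addr)
    log.Fatal(http.ListenAndServe(addr, mux))
}

IIS (via ARR/URL Rewrite) would then be configured to forward the chosen URL prefix, e.g. /api/, to http://127.0.0.1:8090/.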
The pros and cons of these solutions—as I view them—are:
Serving over plain HTTP is more familiar to developers and makes the server more "portable", in the sense that it will be easier to change its deployment scheme if and when you need to, right up to exposing it to the Internet directly.
Conversely, with FastCGI you will always need a FastCGI host as middleware.
There have been rumors that the HTTP code is more fine-tuned for performance than the FastCGI code, but this will only be a concern under reasonably heavy load.
One possible upside of FastCGI over HTTP is that it can be served over pipes rather than TCP. For instance, you could serve it over named pipes: IIS's FastCGI module supports them, and there exist third-party Go packages implementing them (including one from Microsoft).
The upsides are that pipes are believed to incur less data-transfer overhead (it's basically shoveling bytes between in-kernel buffers belonging to two processes instead of pushing them through the whole TCP/IP stack), and that using pipes frees you from having to dedicate a TCP port to the Go server.
Still, I have no personal experience with this kind of setup and its performance trade-offs.
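For completeness, here is a minimal sketch of the FastCGI variant using the standard library's net/http/fcgi package, serving over a TCP socket on localhost. The address 127.0.0.1:9000 and the handler are illustrative assumptions, and wiring it up to IIS (TCP vs. named pipe) is a separate piece of IIS FastCGI configuration not shown here.

// fcgiserver.go: minimal sketch of serving the same kind of handler over
// FastCGI instead of plain HTTP, using net/http/fcgi from the standard library.
package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "net/http/fcgi"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, `{"status":"ok"}`)
    })

    // Accept FastCGI connections from the web server on localhost only.
    l, err := net.Listen("tcp", "127.0.0.1:9000")
    if err != nil {
        log.Fatal(err)
    }
    defer l.Close()

    // fcgi.Serve speaks the FastCGI protocol on the listener and dispatches
    // each request to the same kind of http.Handler used for plain HTTP.
    log.Fatal(fcgi.Serve(l, mux))
}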

Can different hosts (not IPs) forward to the same port externally?

I'm just wondering: can 2 or more different external hostnames/DNS names redirect to multiple local servers on the same port?
Let's say I have 2 internet domains, for example myserver1.com and myserver2.com, and both have the same A record pointing to my forwarded server IP (e.g. 102.123.123.123). The server behind 102.123.123.123 hosts 2 application servers, and instead of trying to make them share a port, I use a different port for each application: serverApp1 listens on 0.0.0.0:2010 and serverApp2 listens on 0.0.0.0:2020.
My point is: is there any way to forward myserver1.com:2000 to serverApp1 (port 2010) and myserver2.com:2000 to serverApp2 (port 2020), even though myserver1.com and myserver2.com share the same A record?
I'm fairly sure this is an iptables, /etc/hosts, or BIND issue, but guide me if I missed something. By the way, the servers and DNS records are reachable from the internet, i.e. the firewalls are configured properly. Thanks.
I don't have much experience with this, but I think you will need a third server/firewall/proxy that listens on the shared port, inspects the incoming Host header, and routes the request accordingly; DNS and iptables only see the IP address and port, not the hostname, so the decision has to be made at the HTTP level. A sketch of such a proxy is shown below.
Again, I don't have much experience with this, so I'm not sure whether a firewall alone is able to do that.
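If you do put a small proxy in front, the following Go sketch shows what "listen on one port and route by hostname" can look like (shown in Go rather than any particular firewall or proxy product). The hostnames, the backend ports 2010/2020, and the external port 2000 come from the question; everything else is an illustrative assumption.

// hostrouter.go: sketch of a tiny reverse proxy that listens on a single
// external port and routes each request by its HTTP Host header.
package main

import (
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
    "strings"
)

// mustProxy builds a reverse proxy for one backend URL.
func mustProxy(raw string) *httputil.ReverseProxy {
    u, err := url.Parse(raw)
    if err != nil {
        log.Fatal(err)
    }
    return httputil.NewSingleHostReverseProxy(u)
}

func main() {
    // Map incoming hostnames to the local application servers.
    backends := map[string]*httputil.ReverseProxy{
        "myserver1.com": mustProxy("http://127.0.0.1:2010"), // serverApp1
        "myserver2.com": mustProxy("http://127.0.0.1:2020"), // serverApp2
    }

    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // r.Host may include the port (e.g. "myserver1.com:2000"); strip it.
        host := r.Host
        if i := strings.LastIndex(host, ":"); i != -1 {
            host = host[:i]
        }
        proxy, ok := backends[host]
        if !ok {
            http.Error(w, "unknown host", http.StatusNotFound)
            return
        }
        proxy.ServeHTTP(w, r)
    })

    // Both domains resolve to the same IP; the proxy listens once on :2000.
    log.Fatal(http.ListenAndServe(":2000", handler))
}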
I think you can use a redirection server such as Apache.
In my application we wanted to access a lot of intranet servers from the internet. So what we did was configure an Apache instance with all the mappings in httpd.conf; whenever a request reaches Apache, it is redirected appropriately.
For example, I have two servers/hostnames on the intranet:
1) abc.com:7300/context1
2) xyz.com:8900/context2
We configured Apache with the hostname abcxyz.com:9000. When a request like abcxyz.com:9000/context1 comes in, it is redirected to abc.com:7300/context1, and when a request like abcxyz.com:9000/context2 comes in, it is redirected to xyz.com:8900/context2.
In your case, since all requests go through a single server (102.123.123.123), you can use the same kind of redirection.
Hope it helps.

ARR: The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology

I have two Windows 2008 R2 Standard servers with IIS 7.5 installed (Server1 and Server2). On Server1 I have installed Web Farm Framework 2.2 and created a server farm, "myFarm.com". I have also installed ARR on Server1.
In the server farm I have added Server2 and Server1 as secondary servers. I have configured ARR with the default options. Load balancing is configured to "Round Robin" so that requests can go to both servers.
To test my setup I created a Test.aspx page and deployed it on both servers. It is a simple page that returns the name of the server it runs on, so I can tell whether load balancing is working.
Then I opened Internet Explorer and browsed to my Test.aspx page from Server1, which hosts the web farm and ARR. Every time I hit the page, the request goes to Server2 only. I then marked Server2 as unhealthy in the web farm to check whether Server1 would handle the request. When I tried to hit Test.aspx in the browser, I was surprised to get the following error:
The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology.
From the error message it appears that when Server2 is not available, ARR sends the request to Server1, which then sends it to itself again, causing a loop. I couldn't find a way to stop this loop.
One solution I found while searching is to not add Server1 to the web farm at all, since it hosts ARR, but I only have two servers and I don't want to dedicate one of them just to ARR.
As soon as I mark Server2 healthy again, requests start being executed by Server2.
Could someone suggest what should be configured to resolve this error?
Thanks
You can have ARR reference itself without hitting the Max-Forwards limit if you configure ARR on port 80 and your web farm on another port, e.g. 8080.
That way, when ARR routes a request to itself, it does so on the other port, which prevents the request from being forwarded again and again.
Enjoy :-)
I had the same problem recently, and this is the configuration that helped me (following what Cedric suggested in another post).
So, here is what you can do:
In your website configuration, add an additional binding for Server2, for example on port 88 (i.e. you should be able to get a response by navigating to http://Server2:88/Test.aspx).
In your server farm configuration, add a condition to your routing (Routing Rules -> URL Rewrite) so that requests arriving on port 88 are not processed by the rule (for example, a condition on {SERVER_PORT} that must not match 88).

Using Fiddler with IIS7 Express

I am using IIS 7 Express while developing my web application. I need to use Fiddler to investigate an issue and cannot figure out how to configure things so I can capture the HTTP stream. It seems that IIS 7 Express will only listen on localhost, which means I cannot access the stream.
This has nothing to do with IIS7 Express and everything to do with the fact that you're using loopback traffic.
Ref: https://www.fiddlerbook.com/fiddler/help/hookup.asp#Q-LocalTraffic
Click Rules > Customize Rules.
Update your Rules file like so:
static function OnBeforeRequest(oSession:Fiddler.Session)
{
    if (oSession.HostnameIs("MYAPP")) { oSession.host = "localhost:portnumber"; }
}
Then, just visit http://myapp in your browser.
Or use the address http://localhost.fiddler/ and Fiddler will use the hostname localhost instead of converting to an IP address.
One useful variation of Eric's answer (as edited by Brett) is to use oSession.port to build oSession.host. With this little change, if you need to capture IIS Express traffic on http://localhost:12345, you can use http://iisexpress:12345. That makes it easier to capture traffic for sites with random ports as created by WebMatrix and VS. I tried it out with IE and Firefox, and capturing IIS Express traffic was a breeze. Fiddler rocks!
static function OnBeforeRequest(oSession:Fiddler.Session)
{
    //...
    // workaround the iisexpress limitation
    // URL http://iisexpress:port can be used for capturing IIS Express traffic
    if (oSession.HostnameIs("iisexpress")) { oSession.host = "localhost:"+oSession.port; }
    //...
}
With the latest version of Fiddler, you only need to navigate to localhost.fiddler:port. However, doing that alone didn't help me and I was still getting "access denied" when using Windows Authentication. To fix this, I found this blog entry: http://www.parago.de/2013/01/fiddler-and-the-401-unauthorized-error-with-asp-net-web-api-using-integrated-windows-authentication-wia/
In short, create this registry value:
Key path:     HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
Value name:   BackConnectionHostNames
Value type:   REG_MULTI_SZ
String value: localhost.fiddler
You can use Fiddler as a proxy between your clients and the server. This means you start up Fiddler and then access the server using Fiddler's port rather than the usual port (the default for Fiddler2 is 8888, I think). If you need to debug the server "live" against real-world clients, you can change the IIS binding from :80 to something else and place Fiddler's proxy on port 80.
EDIT: By the way, by default Fiddler2 changes the proxy settings of your browsers so that they access everything through Fiddler anyway (only on the machine on which Fiddler is installed).

Is a server farm abstracted on both sides?

I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server's IP), making it possible to add a simple rule to the firewall. However, if the application is moved to a server farm, I do not know which machine in the farm will make a request to the FTP server.
Just as the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it invokes, so that regardless of which machine in the farm makes the request to the FTP server, the FTP server always sees the same IP?
Are all server farms implemented the same way, or does this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Cloud.
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies their maintenance. You can also send all traffic through a load balancer or proxy; that's essentially how any cloud/cluster/farm service works:
many client IPs <-> load balancer <-> many servers