ARR: The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology

I have two Windows 2008 R2 Standard servers with IIS 7.5 installed (Server1 and Server2). On Server1 I have installed Web Farm Framework 2.2 and created a server farm "myFarm.com". I have also installed ARR on Server1.
In the server farm I have added Server2 and Server1 as the secondary servers. I have configured ARR with the default options. Load balancing is set to "Round Robin" so that requests go to both servers.
To test my setup I created a Test.ASPX page and deployed it on both servers. It is a simple page that returns the name of the server it executed on, so I can tell whether load balancing is working.
Then I opened Internet Explorer and browsed to my Test.ASPX page on Server1, which hosts the web farm and ARR. Every time I hit the page, the request goes to Server2 only. I marked Server2 as unhealthy in the web farm to check whether Server1 would handle the request. When I then hit Test.aspx in the browser, I was surprised to see the following error:
The request cannot be routed because it has reached the Max-Forwards limit. The server may be self-referencing itself in request routing topology.
From the error message it appears that when Server2 is not available, ARR sends the request to Server1, which then sends it to itself again, causing a loop. I couldn't find a way to stop this loopback.
One solution I found while searching is not to add Server1 to the web farm, since it hosts ARR; but I have only two servers and I don't want to dedicate one of them just to ARR.
As soon as I mark Server2 healthy again, requests start being served by Server2.
Could someone suggest what should be configured to resolve this error?
Thanks

You can have ARR reference itself and avoid hitting the Max-Forwards limit if you configure ARR on port 80 and your web farm site on another port, e.g. 8080.
When ARR routes a request to itself, it then does so on the other port, so the request is not forwarded over and over again.
Enjoy :-)

I had the same problem recently and this is the configuration that helped me (following what Cedric suggested in another post).
So, here is what you can do:
In your web-site configuration, add an additional binding for Server2, for example on port 88 (i.e. you should be able to get a response by navigating to http://Server2:88/Test.ASPX).
In your server farm configuration, add a condition to your routing rule (Routing Rules -> URL Rewrite) so that requests arriving on port 88 are not processed again:
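The resulting rule ends up looking roughly like this (a sketch of the ARR-generated rewrite rule in applicationHost.config; the rule name and farm name are assumptions based on the question):

```xml
<globalRules>
  <rule name="ARR_myFarm.com_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
    <match url="*" />
    <conditions>
      <!-- Skip requests that already arrived on the farm port (88),
           so ARR does not forward them to itself again. -->
      <add input="{SERVER_PORT}" pattern="88" negate="true" />
    </conditions>
    <action type="Rewrite" url="http://myFarm.com/{R:0}" />
  </rule>
</globalRules>
```

With this condition in place, a request that ARR routes to Server1:88 matches the binding from step 1 and is served locally instead of being forwarded again.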

Related

Play Framework as reverse proxy with ScalaWS

I am trying to document a server and replicate a setup done by someone else. The server runs Play Framework, which also acts as a reverse proxy to MediaWiki running on Apache on the same machine, on a port that is not open externally.
Play routes requests to the MediaWiki server using ScalaWS. When I inspect the request, it builds the URL from the server's domain name, the Apache port, and the MediaWiki path.
On the real server this works fine, but in the test deployment it fails to reach MediaWiki. It works in the test deployment if I open the Apache port externally.
So somehow the request to the server running internally on the same machine needs to be made without routing it externally. How can this be done? Any quick tips, things to check, or an explanation of how this may be working would really help save me some time.
The /etc/hosts file had the wrong domain defined. Fixing that fixed the problem.
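For illustration (the domain name is a placeholder, not from the original setup), the fix amounts to making the machine resolve its own public domain to the loopback interface, so the ScalaWS request never leaves the host:

```
# /etc/hosts on the test server ("example.com" stands in for the real domain)
127.0.0.1   localhost
127.0.0.1   example.com   # Play's ScalaWS call to example.com:<apache-port>
                          # now stays on this machine
```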

nginx load balancing with network redirect

I'm looking for a way to dispatch requests with nginx in order to save network bandwidth on the main server (it should dispatch download requests to some other servers).
Here is an extract of an nginx sample that performs load balancing:
upstream mystream {
    server ip1:port1;
    server ip2:port2;
}

server {
    listen myport;

    location / {
        proxy_pass http://mystream;
    }
}
The problem with this sample is that the main server acts as a proxy for the backend servers rather than redirecting the client, so it still serves the file itself and saves no bandwidth.
Is there a way to configure nginx to dispatch download requests to the backend servers without acting as a proxy? (Keeping the URL unchanged would be nice, but I'm open to rewriting it if needed.)
Thanks
I finally found that split_clients is the best solution for my case, as the goal was to redirect clients to various download sites without any specific rule.
Note that this changes the URL, so the client will see the backend server's URL (not important in my case).
With this solution, a client asking for server:myport/abcd is redirected to serverx:portx/abcd based on MurmurHash2; see http://nginx.org/en/docs/http/ngx_http_split_clients_module.html
split_clients "${remote_addr}" $destination {
    40%    server1:port1;
    30%    server2:port2;
    20%    server3:port3;
    10%    server4:port4;
}
server {
    listen myport;

    location / {
        return 302 http://$destination$request_uri;
    }
}
Update
If you want to keep a single URL and have the backend servers reply directly to the client without any URL dispatch, you can configure load balancing with Linux Virtual Server in Direct Routing mode.
To set it up, you run a director VM and several "real servers" to which requests are dispatched transparently. See http://www.linuxvirtualserver.org/VS-DRouting.html
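A minimal LVS-DR sketch with ipvsadm (the VIP, real-server IPs, and port are assumptions; each real server must additionally configure the VIP on a loopback alias and suppress ARP replies for it):

```
# On the director: create a virtual service on the VIP, round-robin scheduling
ipvsadm -A -t 192.0.2.10:80 -s rr
# Add the real servers in direct-routing (gatewaying) mode: -g
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.11:80 -g
ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.12:80 -g
```

In DR mode the director only rewrites the destination MAC address of incoming packets, so the real servers answer the client directly and the reply traffic never passes through the director.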
That's just how reverse proxying works:
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as if they originated from the Web server itself.
One possible solution is to configure your upstream servers to serve traffic to the public and then redirect your clients there.

Can different hostnames (not IPs) forward to the same external port?

I'm just wondering: can 2 or more different external hostnames/DNS names redirect to multiple local servers on the same port?
Say I have 2 internet domains, for example myserver1.com and myserver2.com, and both have the same A record pointing to my forwarded server IP (e.g. 102.123.123.123). The machine behind 102.123.123.123 runs 2 application servers; instead of trying to make them share a port, I use a different port for each: serverApp1 listens on 0.0.0.0:2010 and serverApp2 listens on 0.0.0.0:2020.
My point is: is there any way to forward myserver1.com:2000 to serverApp1 (port 2010) and myserver2.com:2000 to serverApp2 (port 2020), even though both domains have the same A record?
I'm fairly sure the answer lies in iptables, /etc/hosts, or BIND, but guide me if I've missed something. By the way, the servers and DNS records are reachable from the internet, and the firewalls are configured properly. Thanks.
I don't have much experience with this, but I think you will need a third server/firewall/proxy that listens on the port, inspects the incoming hostname, and routes accordingly.
Again, I don't have much experience here, so I'm not sure a plain firewall can do that (iptables works below the HTTP layer and never sees the Host header).
I think you can use a redirection server such as Apache.
In my application we want to access a lot of intranet servers from the internet. So we configured an Apache instance with all the mappings in httpd.conf.
Whenever a request reaches Apache, it is redirected appropriately.
For example, I have two servers/hostnames on the intranet:
1) abc.com:7300/context1
2) xyz.com:8900/context2
We configured Apache with the hostname abcxyz.com:9000. When a request like abcxyz.com:9000/context1 comes in, it is redirected to abc.com:7300/context1, and when a request like abcxyz.com:9000/context2 comes in, it is redirected to xyz.com:8900/context2.
In your case, since the requests all go through a single server (102.123.123.123), you can use redirection in the same way.
Hope it helps.
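As a sketch, the same host-based routing can be done with nginx (the ports come from the question; this proxies rather than redirects, so the client-visible URL stays the same):

```
# Both virtual servers listen on the same external port 2000;
# nginx picks a backend by the Host header.
server {
    listen 2000;
    server_name myserver1.com;

    location / {
        proxy_pass http://127.0.0.1:2010;   # serverApp1
        proxy_set_header Host $host;
    }
}

server {
    listen 2000;
    server_name myserver2.com;

    location / {
        proxy_pass http://127.0.0.1:2020;   # serverApp2
        proxy_set_header Host $host;
    }
}
```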

Client request to a server using the server's IP address

I was told to do something I don't believe is possible; the challenge is as follows.
I have 2 web servers.
Web server 1 is where the pages live and clients access it; server 2 has a very restrictive firewall configured to accept access only from server 1.
So server 1 has a link to content inside server 2, and that content can only be accessed if the request comes from server 1's IP address.
But when the client clicks the link, his own IP address is what server 2 sees, and the request is denied by the firewall policy.
Am I understanding this correctly, or is there a way to do it?
I hope you can understand what I need.
Thanks in advance.
Perhaps what the person posing this problem to you is suggesting is that the client connects to server 1, then server 1 connects to server 2 to fetch the content (i.e. server-to-server, on the fly, perhaps by way of some API). So the client never connects to server 2 directly.
client --> server 1 --> server 2
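A minimal way to implement that flow is a reverse proxy on server 1; an nginx sketch (the path, hostnames, and addresses are assumptions for illustration):

```
# On server 1: anything under /protected/ is fetched from server 2
# by server 1 itself, so server 2 only ever sees server 1's IP.
server {
    listen 80;
    server_name server1.example.com;

    location /protected/ {
        proxy_pass http://192.0.2.2/;        # server 2's internal address
        proxy_set_header Host server2.internal;
    }
}
```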

How to scale the Punjab BOSH connection manager?

How can I scale the Punjab connection manager?
server1 and server2 are behind a load balancer. Say I first get connected to server1, which creates the session; on the second request I may get connected to server2. server2 will not recognize my previous session and will disconnect the request.
My Openfire and Punjab servers each run in an EC2 box behind the load balancer.
You could add info to the HTTP headers and use it for load balancing, if your balancer software supports this.
As Alex said, you can use HTTP headers to make sure you continue to go to the same node after a session is established (sticky sessions). There is also a blog post I wrote some time ago about this issue: http://thetofu.livejournal.com/71339.html
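As a sketch, the simplest form of stickiness in an nginx balancer is ip_hash (hosts, port, and the BOSH path below are assumptions); each client IP is pinned to one Punjab node, so an established BOSH session keeps hitting the server that knows it:

```
upstream punjab_nodes {
    ip_hash;                  # same client IP -> same backend
    server 10.0.0.1:5280;     # Punjab on EC2 box 1 (assumed port)
    server 10.0.0.2:5280;     # Punjab on EC2 box 2
}

server {
    listen 80;

    location /http-bind {     # typical BOSH endpoint
        proxy_pass http://punjab_nodes;
    }
}
```

A cookie-based sticky scheme at the balancer achieves the same effect and is more robust when many clients share a NAT address.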