We have a simple HAProxy (13.5) and an IIS server behind it. The IIS server itself hosts parallel services on the same box, all of which require Windows Authentication. But it appears that traffic sent from the server to the HAProxy and routed back to that same server does not pass authentication.
frontend VipTst-M-TCPMode
bind 10.5.30.128:80 name http
bind 10.5.30.128:443 name https
timeout client 180s
option tcplog
mode tcp
log global
default_backend M-TcpMode
####### TCP MODE
backend M-TcpMode
balance roundrobin
mode tcp
log global
timeout server 180s
timeout connect 3s
default-server inter 3s rise 2 fall 3
server ServerA 10.20.30.104 maxconn 1000 weight 10 check port 443 inter 5000
So ServerA -> HAProxy -> ServerA/someservice doesn't seem to work. Ironically, if I go from my desktop, Desktop -> HAProxy -> ServerA/someservice works fine.
And if I go straight to ServerA/someservice, the page also renders.
With ServerA -> HAProxy -> ServerA, I'm prompted for credentials.
So what did I miss?
Thanks,
Nick
I have 3 servers:
server (A) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (B) = nginx (port 80) as a reverse proxy to Kestrel (port 5000)
server (C) = HAProxy as a load balancer for port 80 of servers (A) and (B)
Servers A and B are quite similar.
Everything works very well and HAProxy forwards requests to servers (A) and (B). But if Kestrel on one of the servers (e.g. A) is killed, nginx responds with a 502 Bad Gateway error, yet HAProxy does not detect the problem and still sends requests to that server. That is wrong: it should send requests only to server (B) in that situation.
global
log 127.0.0.1 local2 info
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option redispatch
retries 3
timeout connect 5s
timeout client 50s
timeout server 50s
stats enable
stats hide-version
stats auth admin:admin
stats refresh 10s
stats uri /stat?stats
frontend http_front
bind *:80
mode http
option httpclose
option forwardfor
reqadd X-Forwarded-Proto:\ http
default_backend http_back
backend http_back
balance roundrobin
mode http
cookie SERVERID insert indirect nocache
server ServerA 192.168.1.2:80 check cookie ServerA
server ServerB 192.168.1.3:80 check cookie ServerB
How can I resolve this issue?
Thanks very much.
You are only checking whether nginx is running, not whether the application is healthy enough to use.
In the backend, add option httpchk.
option httpchk GET /some/path HTTP/1.1\r\nHost:\ example.com
Replace /some/path with a path that will prove the application is usable on that server if it returns 200 OK (or any 2xx or 3xx response), and replace example.com with the HTTP Host header the application expects.
option httpchk
By default, server health checks only consist in trying to establish a TCP connection. When option httpchk is specified, a complete HTTP request is sent once the TCP connection is established, and responses 2xx and 3xx are
considered valid, while all other ones indicate a server failure, including the lack of any response.
This will mark the server as unhealthy when the app is not healthy, so HAProxy will stop sending traffic to it. You will also want to configure a check interval using the inter, downinter, and fastinter options on each server entry to control how often HAProxy performs the check.
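For example, the backend from the question could be adjusted along these lines (a sketch only: the /health path and example.com host are assumptions, so use whatever URL actually proves your app works, and tune the intervals to taste):
backend http_back
    balance roundrobin
    mode http
    # send a real HTTP request instead of a bare TCP connect
    option httpchk GET /health HTTP/1.1\r\nHost:\ example.com
    cookie SERVERID insert indirect nocache
    # check every 2s; mark down after 3 failures, back up after 2 successes
    server ServerA 192.168.1.2:80 check inter 2s fall 3 rise 2 cookie ServerA
    server ServerB 192.168.1.3:80 check inter 2s fall 3 rise 2 cookie ServerB
With this in place, when Kestrel on server A dies and nginx starts answering 502, the health check fails and HAProxy removes ServerA from rotation until the check succeeds again.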
I have HAProxy 1.5.4. I would like to configure HAProxy to use a different backend server for each request, so that consecutive requests are guaranteed to go to different servers. I currently use the following config:
global
daemon
maxconn 500000
nbproc 2
log 127.0.0.1 local0 info
defaults
mode tcp
timeout connect 50000ms
timeout client 500000ms
timeout server 500000ms
timeout check 5s
timeout tunnel 50000ms
option redispatch
listen httptat *:3310
mode http
stats enable
stats refresh 5s
stats uri /httpstat
stats realm HTTPS proxy stats
stats auth https:xxxxxxxxxxx
listen HTTPS *:5008
mode tcp
#maxconn 50000
balance leastconn
server backend1 xxx.xxx.xxx.xxx:125 check
server backend1 xxx.xxx.xxx.xxx:126 check
server backend1 xxx.xxx.xxx.xxx:127 check
server backend1 xxx.xxx.xxx.xxx:128 check
server backend1 xxx.xxx.xxx.xxx:129 check
server backend1 xxx.xxx.xxx.xxx:130 check
......
Simply change the balance setting from leastconn to roundrobin.
From the HAProxy 1.5 manual:
roundrobin Each server is used in turns, according to their weights.
This is the smoothest and fairest algorithm when the server's
processing time remains equally distributed. This algorithm
is dynamic, which means that server weights may be adjusted
on the fly for slow starts for instance. It is limited by
design to 4095 active servers per backend. Note that in some
large farms, when a server becomes up after having been down
for a very short time, it may sometimes take a few hundreds
requests for it to be re-integrated into the farm and start
receiving traffic. This is normal, though very rare. It is
indicated here in case you would have the chance to observe
it, so that you don't worry.
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#4-balance
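Applied to the configuration in the question, that is a one-line change in the listen section; the server lines stay exactly as they were:
listen HTTPS *:5008
    mode tcp
    # was: balance leastconn
    balance roundrobin
    server backend1 xxx.xxx.xxx.xxx:125 check
    # ... remaining server lines unchanged
The difference is that leastconn is aimed at long-lived sessions and picks whichever server currently has the fewest open connections, while roundrobin deliberately rotates every new request across the servers according to their weights, which is what you are asking for.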
I am using HAProxy 1.4.24 on a SLES 11 SP3 server. I need to load balance (using least connections or round robin) between 3 servers which talk only SSL. A session from client to server starts with a client/server handshake, followed by a series of "chatty" messages, and then close of the session.
I do not want to use the stick on src directive, since it needs a time limit argument, which makes my load balancing ineffective.
Below is the configuration file I am using. Can someone let me know how to achieve per-session stickiness (one client sticks to one server until the SSL session ends)?
global
log /dev/log local0
log /dev/log local1 notice
#chroot /var/lib/haproxy
#stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
#user haproxy
#group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend localnodes
bind *:80
bind *:443
mode tcp
default_backend nodes
backend nodes
mode tcp
balance roundrobin
stick-table type ip size 200k expire 30m
stick on src
server s1 s1.mydomain.com:443 check
server s2 s2.mydomain.com:443 check
server s3 s3.mydomain.com:443 check
I set up HAProxy on a Mesosphere cluster and set up three web servers using Marathon. Now I am trying to load balance between them using this config:
global
daemon
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
defaults
log global
retries 3
maxconn 2000
timeout connect 5000
timeout client 50000
timeout server 50000
listen stats
bind 127.0.0.1:9090
balance
mode http
listen apiserver
bind 0.0.0.0:80
mode tcp
balance leastconn
server apiserver-3 10.132.62.240:31000 check
server apiserver-2 10.132.62.243:31000 check
server apiserver-1 10.132.62.242:31000 check
Now, if I am on the VPN I can connect to the server normally; however, externally I am unable to do so. Other services manage to use the ports without problems (both local and global), but HAProxy can't seem to work. If I put HAProxy in a Docker container it works, however I don't want to do that.
I am hosting two different application versions on the same server, on different ports. I expect the following configuration to send requests in round-robin fashion to the different ports. But what I am observing is that each request gets broadcast to ALL of my server endpoints. In the example below, my request to port 8080 gets forwarded to both www.myappdemo.com:5001 and www.myappdemo.com:5002, although the response sent by the proxy is ALWAYS from www.myappdemo.com:5001.
Can anyone tell what is wrong here?
global
debug
maxconn 256
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend http-in
bind *:8080
default_backend servers
backend servers
balance roundrobin
server svr_50301 www.myappdemo.com:5001 maxconn 32 check
server svr_50302 www.myappdemo.com:5002 maxconn 32 check
I can advise you to enable logging and the stats web interface; after that you can provide us more logs, and you can also check in the web interface whether HAProxy detects your second server (svr_50302) as alive.
References for the HAProxy 1.5 docs:
Web interface - http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stats%20admin
Good info on how to enable logging - http://webdevwonders.com/haproxy-load-balancer-setup-including-logging-on-debian/
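A minimal sketch of both, added to the configuration in the question (assumptions: a syslog daemon listens on 127.0.0.1, and port 9000 plus the admin:password credentials are placeholders you should change):
global
    # ship logs to a local syslog daemon (e.g. rsyslog with UDP input enabled)
    log 127.0.0.1 local0 info
defaults
    log global
    # log full HTTP request details, including which backend/server answered
    option httplog
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /haproxy?stats
    # placeholder credentials
    stats auth admin:password
    stats refresh 5s
In the httplog format, each log line names the backend and server that actually handled the request, and the stats page shows whether svr_50302 is UP and how its health checks are doing.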
Best Regards,
Dani