PHP application is too slow when using HAProxy

In my PHP application, a direct MySQL connection works perfectly. But when I connect through the HAProxy IP for HA, it takes too long to retrieve results.
My HAProxy configuration is as follows:
global
    maxconn 5000
    nbproc 5

defaults
    retries 3
    option redispatch
    timeout client 120s
    timeout connect 10s
    timeout server 120s

frontend galera_cluster_frontend
    bind 0.0.0.0:3307
    mode tcp
    log global
    option dontlognull
    option tcplog
    default_backend galera_cluster_backend

backend galera_cluster_backend
    mode tcp
    option tcpka
    option tcplog
    option log-health-checks
    retries 3
    balance leastconn
    server db-server-01 e2e-34-201:3306 check weight 1
    server db-server-02 e2e-34-202:3306 check weight 3
    server db-server-03 e2e-34-203:3306 check weight 2
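One thing worth checking first: with only a bare TCP check, HAProxy routes to any node that accepts a TCP connection, and the check connections themselves show up as aborted connects on the MySQL side. A MySQL-aware health check is usually a better fit for a Galera backend. A minimal sketch, assuming a dedicated check user has been created on every node (haproxy_check is a hypothetical name, e.g. CREATE USER 'haproxy_check'@'%' with no password on each node):

backend galera_cluster_backend
    mode tcp
    option tcpka
    balance leastconn
    # complete a real MySQL handshake instead of a bare TCP connect;
    # 'haproxy_check' (hypothetical) must exist on every Galera node
    option mysql-check user haproxy_check
    server db-server-01 e2e-34-201:3306 check weight 1
    server db-server-02 e2e-34-202:3306 check weight 3
    server db-server-03 e2e-34-203:3306 check weight 2

If the slowness is on connection setup rather than on the queries themselves, it is also worth verifying that the MySQL servers are not doing reverse-DNS lookups on the HAProxy address (skip-name-resolve on the MySQL side).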

Related

HAProxy not passing Windows authentication?

We have a simple HAProxy (13.5) and an IIS server behind it. The IIS server hosts parallel services on the same box, all of which require Windows Authentication. But it appears that traffic routed from the server through HAProxy back to that same server does not pass authentication.
frontend VipTst-M-TCPMode
    bind 10.5.30.128:80 name http
    bind 10.5.30.128:443 name https
    timeout client 180s
    option tcplog
    mode tcp
    log global
    default_backend M-TcpMode

####### TCP MODE
backend M-TcpMode
    balance roundrobin
    mode tcp
    log global
    timeout server 180s
    timeout connect 3s
    default-server inter 3s rise 2 fall 3
    server ServerA 10.20.30.104 maxconn 1000 weight 10 check port 443 inter 5000
So going ServerA -> HAProxy -> ServerA/someservice doesn't seem to work. Ironically, if I go from my desktop, Desktop -> HAProxy -> ServerA/someservice, it works fine.
And if I just go to ServerA/someservice directly, the page also renders.
With ServerA -> HAProxy -> ServerA, I'm prompted for credentials.
So what did I miss?
Thanks,
Nick
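For what it's worth, in pure TCP mode HAProxy passes the connection through untouched, so NTLM/Windows Authentication itself should survive the proxy. If the backend is ever switched to HTTP mode, the connection-oriented nature of NTLM becomes a concern; a hedged sketch of the directives that keep an authenticated client on the same server connection (the backend name is hypothetical, and the server line is simplified to plain HTTP):

backend M-HttpMode
    mode http
    balance roundrobin
    # NTLM authenticates the TCP connection, not each request:
    # never share server-side connections between clients...
    http-reuse never
    # ...and keep a client's follow-up requests on the same server
    option prefer-last-server
    server ServerA 10.20.30.104:80 maxconn 1000 weight 10 check

The ServerA -> HAProxy -> ServerA case failing while Desktop -> HAProxy -> ServerA works is also the classic symptom of the Windows loopback check on the IIS box itself, rather than anything HAProxy does.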

HAProxy balancing between multiple backends is not working

My HAProxy configuration is as follows:
defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout queue 50000ms
    timeout http-request 60000ms
    timeout http-keep-alive 5000ms
    max-keep-alive-queue 10
    option httplog
    option redispatch
    option forwardfor
    option http-server-close

frontend front
    bind *:80
    acl use_bar nbsrv(foo) -m int lt 1
    use_backend bar if use_bar
    default_backend foo

backend foo
    server foo1 10.0.0.1:80 check

backend bar
    server bar1 10.0.1.1:80 check
    server bar2 10.0.1.2:80 check
My issue is that if backend foo is down, the first request to the proxy fails with 503 Service Unavailable.
Subsequent calls work, as they get proxied to backend bar.
Under no circumstances do we want an API call to fail.
I fixed it by keeping a single backend and using servers as backup:
defaults
    log global
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout queue 50000ms
    timeout http-request 60000ms
    timeout http-keep-alive 5000ms
    max-keep-alive-queue 10
    option httplog
    option redispatch
    option forwardfor
    option http-server-close

frontend front
    bind *:80
    default_backend foo

backend foo
    server foo1 10.0.0.1:80 check
    server bar1 10.0.1.1:80 check backup
    server bar2 10.0.1.2:80 check backup
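On HAProxy 2.0 or newer (an assumption; the question does not state a version), the remaining failure window, between foo1 dying and the health check marking it down, can also be covered by retrying failed connections against another server:

defaults
    mode http
    retries 3
    option redispatch
    # HAProxy 2.0+: re-dispatch a request to another server when the
    # connection fails or times out before any response arrives
    # (response-timeout retries are only safe for idempotent calls)
    retry-on conn-failure empty-response response-timeout

Combined with the backup servers above, the first request after foo1 fails is then retried against bar1/bar2 instead of returning a 503.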

Chaining HAProxys with optimal connection reuse and keep-alive

I need to understand the optimal connection configuration for chaining two HAProxys to provide maximum connection reuse between the edge proxy, the app proxy, and the app containers.
Where should I be using http-server-close, http-reuse, or keep-alive settings?
I have haproxy instances sitting in edge regions which proxy over private networks to a central data center where another haproxy provides the application routing based on url paths to application containers. All application containers are completely stateless rest servers.
The setup is as follows:
haproxy-edge(s) -> haproxy-app(s) -> app-component(s)
Each haproxy-edge serves thousands of concurrent browser and API connections and does SSL offloading, etc.
haproxy-app can only be reached via connections from haproxy-edge; it does path routing, sets consistent response headers, etc.
haproxy-edge connection settings
defaults
    mode http
    option redispatch
    option httplog
    option dontlognull
    option log-health-checks
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s
...
backend ...
    # Routes to haproxy-app. No backend-specific connection settings at all.
haproxy-app connection settings
defaults
    mode http
    balance roundrobin
    option redispatch
    option httplog
    option dontlognull
    option http-ignore-probes
    option http-server-close
    timeout connect 5s
    timeout client 15s
    timeout server 300s
    #timeout http-keep-alive 4s
    timeout http-request 10s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s

frontend http-in
    ...
    tcp-request inspect-delay 5s
    option http-server-close
    ...

backend ...
    # Routes to app components. No backend-specific connection settings at all.
I see no connection reuse on the HAProxy stats page, and the number of sessions/connections is similar at both HAProxys, whereas I would expect many edge connections to funnel into fewer reused connections at haproxy-app.
After testing various combinations, the simple (and obvious) change of removing option http-server-close from both haproxy-edge and haproxy-app allowed connection reuse to work effectively. HAProxy 2.x has some nice new stats page values for reporting new vs. reused connections.
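To make the reuse explicit rather than relying on defaults, both tiers can state the keep-alive mode and a reuse policy directly. A sketch of the relevant defaults lines, assuming HAProxy 1.9/2.x, where server-side idle-connection pooling exists:

defaults
    mode http
    # explicit keep-alive on both sides; this is what
    # 'option http-server-close' was disabling on the server side
    option http-keep-alive
    # allow idle server-side connections to be reused by other requests;
    # 'safe' is the conservative policy, 'aggressive'/'always' reuse more
    http-reuse safe
    timeout http-keep-alive 4s

On the edge tier this keeps the connections toward haproxy-app pooled, so thousands of client connections can funnel into a much smaller number of long-lived inter-proxy connections.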

Can HAProxy be configured to understand SSL sessions without being sticky to time?

I am using HAProxy 1.4.24 on a SLES 11 SP3 server. I need to load balance (using least connections or round robin) between 3 servers which talk only SSL. A session from client to server starts with a client/server handshake, followed by a series of "chatty" messages, and then close of the session.
I do not want to use the stick on src directive, since it needs a time-limit argument, making my load balancing ineffective.
Below is the configuration file I am using. Can someone let me know how to achieve per-session stickiness (one client sticks to one server until the SSL session ends)?
global
    log /dev/log local0
    log /dev/log local1 notice
    #chroot /var/lib/haproxy
    #stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind *:80
    bind *:443
    mode tcp
    default_backend nodes

backend nodes
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 s1.mydomain.com:443 check
    server s2 s2.mydomain.com:443 check
    server s3 s3.mydomain.com:443 check
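The usual answer here (assuming an upgrade from 1.4, since it needs TCP content inspection with binary stick tables, available from HAProxy 1.5 on) is to stick on the TLS session ID carried in the ClientHello/ServerHello instead of on the source address. The expire argument then only controls table housekeeping; stickiness follows the SSL session itself:

frontend localnodes
    bind *:443
    mode tcp
    # wait (up to 5s) for the TLS ClientHello before picking a server
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend nodes

backend nodes
    mode tcp
    balance roundrobin
    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello
    # key the table on the TLS session ID: a length-prefixed field
    # at offset 43 in the hello message, up to 32 bytes long
    stick-table type binary len 32 size 30k expire 30m
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello
    server s1 s1.mydomain.com:443 check
    server s2 s2.mydomain.com:443 check
    server s3 s3.mydomain.com:443 check

One caveat: clients that do not resume sessions present a new (or empty) session ID on every connection, and those connections fall back to plain round robin.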

HAProxy not allowing external traffic through

I set up HAProxy on a Mesosphere cluster and set up three web servers using Marathon. Now I am trying to load balance between them using this config:
global
    daemon
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096

defaults
    log global
    retries 3
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen stats
    bind 127.0.0.1:9090
    balance
    mode http

listen apiserver
    bind 0.0.0.0:80
    mode tcp
    balance leastconn
    server apiserver-3 10.132.62.240:31000 check
    server apiserver-2 10.132.62.243:31000 check
    server apiserver-1 10.132.62.242:31000 check
Now, if I am on the VPN I can connect to the server normally; however, externally I am unable to do so. Other services manage to use the ports without problems (both local and global), but HAProxy can't seem to work. If I put HAProxy in a Docker container it works, however I don't want to do that.
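A quick way to narrow this down (a diagnostic sketch, not a fix) is to bind the stats page to all interfaces and see whether it is reachable from outside:

listen stats
    # bind externally instead of 127.0.0.1 so the page is a reachability probe
    bind 0.0.0.0:9090
    mode http
    stats enable
    stats uri /

The apiserver listen block already binds 0.0.0.0:80, so if the stats page is also unreachable externally while other services on the host work, that points at host-level filtering (e.g. iptables rules on the Mesosphere host) rather than at the HAProxy configuration; HAProxy working inside a Docker container while the host process does not is consistent with that.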