How does HAProxy forward traffic? - haproxy

I have installed HAProxy on my server, and I want HAProxy to distribute all traffic arriving on port 9092 across server worker0 and server worker1.
The following is my configuration:
frontend sample-traffic
    bind *:9092
    default_backend sample-traffic
    mode tcp
    option tcplog

backend sample-traffic
    balance source
    mode tcp
    server worker0 10.16.38.210:9092 check
    server worker1 10.16.38.211:9092 check
$ netstat -nap | grep 9092
tcp 0 0 0.0.0.0:9092 0.0.0.0:* LISTEN 8114/haproxy
$ ps aux | grep 8114
haproxy 8114 0.7 2.9 102784 55848 ? Ss Jan11 94:44 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Okay, HAProxy is running fine and it is listening on port 9092.
Here HAProxy acts as a load balancer and just forwards TCP traffic. My question is:
(1) Is my TCP connection terminated in HAProxy, with HAProxy then re-forwarding my traffic to the destination server?
The reason I ask is that I saw there is a process listening on port 9092; to receive traffic, HAProxy has to accept (terminate) the client's TCP connection.
Then, on its other side, HAProxy will establish another TCP connection to my destination server and use it to forward the traffic, so I think my TCP connection should be terminated in HAProxy.
+---------+        +-----------------------------+        +------------+
|         |  tcp1  |           HAProxy           |  tcp2  |            |
| Clients +--------> receiver             sender +--------> App Server |
|         |        |                             |        |            |
+---------+        +-----------------------------+        +------------+
However, if the above is true, there will be two TCP socket connections, which I feel would be less efficient.
So I need HAProxy experts to help me understand how HAProxy handles this scenario internally: is it one TCP socket connection or two socket connections inside HAProxy?

Is my TCP connection terminated in HAProxy, with HAProxy then re-forwarding my traffic to the destination server?
Your connection is not terminated in the sense of being ended: the session passes through HAProxy and only ends when the client or the server closes the connection for some reason.
In your example you have a frontend listening on port 9092; this frontend has two backend servers on different machines, also using the same port.
When your HAProxy server receives traffic on port 9092, it completes a TCP connection with the client and it also makes another TCP connection to one of your backend servers to pass the traffic along, so you will have two TCP connections: one with the client on the frontend side and another with the server on the backend side. It works like the drawing you made.
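You can actually see both sides on the HAProxy host itself. The output below is only a sketch (the proxy's own address 10.16.38.5, the client address 10.16.38.100 and the ephemeral ports are invented for illustration), but for each proxied session you would expect two established connections, both belonging to the haproxy process:
$ ss -tnp | grep 9092
ESTAB  0  0  10.16.38.5:9092     10.16.38.100:51234   users:(("haproxy",pid=8114,fd=12))
ESTAB  0  0  10.16.38.5:42318    10.16.38.210:9092    users:(("haproxy",pid=8114,fd=13))
The first line is the client-side connection (tcp1 in your drawing), the second is the connection HAProxy opened to worker0 (tcp2).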

Related

Openshift 4.2 on VMware Vsphere, Loadbalancer Configuration and Understanding

Recently I tried to install OpenShift 4.2 on VMware following this documentation: https://blog.openshift.com/openshift-4-2-vsphere-install-with-static-ips/. I was able to install it successfully and it's working fine, but this installation uses a single load balancer (HAProxy) for everything.
So in my case, the IP of the load balancer was 10.68.33.62, and I mapped the URLs like below:
10.68.33.62 api.openshift4.example.com
10.68.33.62 api-int.openshift4.example.com
10.68.33.62 *.apps.openshift4.example.com
That means all the URLs point to a single load balancer. I was able to access the console from the URL below:
https://console-openshift-console.apps.openshift4.example.com
Another app was also accessible from https://anotherapp.apps.openshift4.example.com.
HAProxy config file:
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog
backend openshift-api-server
    balance source
    mode tcp
    server bootstrap 10.68.33.66:6443 check
    server master1 10.68.33.63:6443 check
    server master2 10.68.33.67:6443 check
    server master3 10.68.33.68:6443 check

frontend machine-config-server68
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog
backend machine-config-server
    balance source
    mode tcp
    server bootstrap 10.68.33.66:22623 check
    server master1 10.68.33.63:22623 check
    server master2 10.68.33.67:22623 check
    server master3 10.68.33.68:22623 check

frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog
backend ingress-http
    balance source
    mode tcp
    server worker1 10.68.33.64:80 check
    server worker2 10.68.33.65:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog
backend ingress-https
    balance source
    mode tcp
    server worker1 10.68.33.64:443 check
    server worker2 10.68.33.65:443 check
But after reading the documentation https://docs.openshift.com/container-platform/4.2/installing/installing_vsphere/installing-vsphere.html#installation-network-user-infra_installing-vsphere I decided to use two load balancers: the API requires one load balancer, and the default Ingress Controller needs a second load balancer to provide ingress to applications.
Now in this case I mapped the URLs like below:
10.68.33.62 api.openshift4.example.com
10.68.33.62 api-int.openshift4.example.com
And assuming the IP of the second load balancer is 10.68.33.69:
10.68.33.69 *.apps.openshift4.example.com
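(For reference, if these mappings live in real DNS rather than a hosts file, the equivalent records would look roughly like the following; the zone layout is only an assumed illustration, since a wildcard entry such as *.apps cannot be expressed in /etc/hosts.)
api.openshift4.example.com.      IN  A  10.68.33.62
api-int.openshift4.example.com.  IN  A  10.68.33.62
*.apps.openshift4.example.com.   IN  A  10.68.33.69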
And the HAProxy config for the first load balancer only balances the master nodes:
frontend openshift-api-server
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog
backend openshift-api-server
    balance source
    mode tcp
    server bootstrap 10.68.33.66:6443 check
    server master1 10.68.33.63:6443 check
    server master2 10.68.33.67:6443 check
    server master3 10.68.33.68:6443 check

frontend machine-config-server68
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog
backend machine-config-server
    balance source
    mode tcp
    server bootstrap 10.68.33.66:22623 check
    server master1 10.68.33.63:22623 check
    server master2 10.68.33.67:22623 check
    server master3 10.68.33.68:22623 check
And the second load balancer balances only the worker nodes, because it will serve only applications:
frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog
backend ingress-http
    balance source
    mode tcp
    server worker1 10.68.33.64:80 check
    server worker2 10.68.33.65:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog
backend ingress-https
    balance source
    mode tcp
    server worker1 10.68.33.64:443 check
    server worker2 10.68.33.65:443 check
But unfortunately it's not working. Is my understanding correct? In a nutshell, I want to balance the master console and APIs via the first load balancer and the apps via the second load balancer. How can I achieve it?
Thanks

Load balance across Kubernetes master nodes

Is there any documentation on how to use an external load balancer to balance traffic to the Kubernetes API server?
Use case:
I don't want to use a single master node's IP/name in the kubeconfig file; I need a common name for all of the masters so that if one master is down, traffic is sent to another.
I already have a DNS name pointing to the load balancer IP, and the load balancer is configured with an SSL certificate and the Kubernetes master backend nodes, but it results in an error:
"plain HTTP request was sent to HTTPS server"
Somehow the load balancer is sending an HTTP request to the Kubernetes API server instead of HTTPS.
It turns out that it doesn't work as an L7 HTTP proxy, but works fine as an L4 TCP proxy.
The HAProxy configuration looks like:
frontend k8s-api
    bind 192.168.0.150:443
    bind 127.0.0.1:443
    mode tcp
    option tcplog
    default_backend k8s-api
backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 192.168.0.147:6443 check
    server k8s-api-2 192.168.0.148:6443 check
    server k8s-api-3 192.168.0.149:6443 check
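With that in place, the kubeconfig can point at a common DNS name instead of an individual master. A minimal sketch, assuming k8s-api.example.com resolves to 192.168.0.150 (the API server certificate must include that name in its SANs for TLS verification to succeed):
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA>
    server: https://k8s-api.example.com:443
  name: my-cluster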

HAProxy, PGSQL with SSL and multiple clusters under single port

In my use case I'm using SSL to connect to the PG nodes; since I do not want SSL termination on the proxy, I'm locked into TCP mode.
With TCP mode I have no access to the header information, especially the host, so I cannot use something like:
# Primary - RW
frontend PGSQL_primary
    bind *:5432
    acl host_pglab hdr(host) -i pglab-db.local
    acl host_stage hdr(host) -i stage-db.local
    use_backend cluster_pglab-primary if host_pglab
    use_backend cluster_stage-primary if host_stage
backend cluster_pglab-primary
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 2s fall 2 rise 2 on-marked-down shutdown-sessions
    server pglab-db-01 pglab-db-01.local:5432 maxconn 100 check check-ssl verify none port 8008
    server pglab-db-02 pglab-db-02.local:5432 maxconn 100 check check-ssl verify none port 8008
backend cluster_stage-primary
    option httpchk OPTIONS /master
    http-check expect status 200
    default-server inter 2s fall 2 rise 2 on-marked-down shutdown-sessions
    server pglab-db-01 stage-db-01.local:5432 maxconn 100 check check-ssl verify none port 8008
    server pglab-db-02 stage-db-02.local:5432 maxconn 100 check check-ssl verify none port 8008
The client would connect to port 5432 and the traffic would be routed to either the pglab or the stage cluster's primary node, depending on the hostname.
Is there some alternative to this, so that I can avoid using a new port for every cluster?
I think you'll probably need a protocol-aware proxy like pgbouncer or pgpool.
Of the two I should think that pgbouncer is closer to haproxy in intention and usage.
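As a rough sketch of the pgbouncer route (the aliases, hosts and file paths below are assumptions, not a drop-in config): a single listening port, with clients selecting a cluster by the database name they connect to instead of by hostname. Note that unlike TCP-mode HAProxy, pgbouncer speaks the PostgreSQL protocol, so TLS is handled at the pooler rather than passed through end-to-end.
[databases]
; clients connect to "pglab_app" or "stage_app" on port 5432
pglab_app = host=pglab-db-01.local port=5432 dbname=app
stage_app = host=stage-db-01.local port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt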

Layer4 "Connection refused" with haproxy

I need some advice on how to set up HAProxy. I have two web servers up and running. For testing they run a simple Node server on port 8080.
Now on my HAProxy server I start HAProxy, which gives me the following:
$> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
[WARNING] 325/202628 (16) : Server node-backend/server-a is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 325/202631 (16) : Server node-backend/server-b is DOWN, reason: Layer4 timeout, check duration: 2001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 325/202631 (16) : backend 'node-backend' has no server available!
Just one note: If I do:
haproxy$> wget server-a:8080
I get the response from the node server.
Here is my haproxy.cfg:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
log global
option tcplog
option dontlognull
option http-server-close
# option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend www
bind *:80
default_backend node-backend
#---------------------------------------------------------------------
# round robin balancing between the various backends
#--------------------------------------------------------------------
backend node-backend
balance roundrobin
mode tcp
server server-a 172.19.0.2:8080 check
server server-b 172.19.0.3:8080 check
If I remove the check option it seems to work. Any suggestions on how I can fix this health-check mechanism in HAProxy?
You need to get the exact IP addresses of your servers with the help of the command
ifconfig
and correct the addresses below in your haproxy.cfg file:
172.19.0.2:8080
172.19.0.3:8080
or modify the lines like below:
server server-a server-a:8080 check
server server-b server-b:8080 check
Remove "mode tcp" and change it to "mode http".
Im just guessing here but i suppose haproxy is doing a tcp check against your web server and the web server can not respond to it.
in "mode http" it checks the web server in http mode and expects a "response 200" for L4 check
and expects a string (whatever you defined) as a L7 check
e.g. L4:
backend node-backend
    balance roundrobin
    mode http  # (NOT NEEDED IF DEFINED IN DEFAULTS)
    option httpchk
    server server-a 172.19.0.2:8080 check
    server server-b 172.19.0.3:8080 check
e.g. L7:
backend node-backend
    balance roundrobin
    mode http  # (NOT NEEDED IF DEFINED IN DEFAULTS)
    option httpchk GET /SOME_URI
    http-check expect status 200
    server server-a 172.19.0.2:8080 check
    server server-b 172.19.0.3:8080 check
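For completeness, if you would rather keep mode tcp from the defaults, the explicit TCP-mode equivalent looks like the sketch below; it performs the same layer-4 connect probe, so it only helps once the backend addresses are actually reachable:
backend node-backend
    balance roundrobin
    mode tcp
    option tcp-check
    tcp-check connect port 8080
    server server-a 172.19.0.2:8080 check
    server server-b 172.19.0.3:8080 check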
Another note related to #basickarl's comment on Docker. If you are proxying into a Docker (docker-compose) setup (namely where you have multiple instances of a service running), you likely need to define the Docker resolver and use it for DNS resolution in your backend:
resolver:
resolvers docker_resolver
    nameserver dns 127.0.0.11:53
backend usage of the resolver:
backend main
    balance roundrobin
    option http-keep-alive
    server haproxyapp app:80 check inter 10s resolvers docker_resolver resolve-prefer ipv4
I tried all these answers and nothing worked for me. The only thing that worked was to use the gateway IP of the Docker network (for the default bridge it is 172.17.0.1).
In the server lines, put the gateway IP and the published port (ip:port), and with this HAProxy connects successfully.
My example of a custom network with fixed IPs and a gateway:
----- haproxy config
backend be_pe_8545
    mode http
    balance roundrobin
    server p1 172.20.0.254:18545 check inter 10s
    server p2 172.20.0.254:28545 check inter 10s
----- docker app / network
services:
  docker_app:
    ...
    networks:
      public_network:
        ipv4_address: 172.20.0.50

networks:
  public_network:
    name: public_network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/24
          gateway: 172.20.0.254

connect to postgres server on google compute engine

I have searched for this everywhere but after an hour and a half of searching I've not found anything relevant.
How do I connect to a database on my Google Compute Engine instance? I.e., I want to connect to the Postgres server running on my Google Compute Engine instance using pgAdmin3 from my laptop.
Is this even possible? If so how do I go about it?
Thanks in advance!
You need to:
Ensure Postgres is listening for TCP traffic (you can check that by connecting to your instance and running netstat -ntpl). Usually, Postgres will be listening on port 5432.
Ensure there is no local firewall blocking traffic to Postgres' port on the instance (you can run iptables -L)
Ensure there is no GCE firewall blocking traffic to your instance on Postgres' port from your IP. You should read this documentation page, and specifically the "firewalls" section
PostgreSQL must also be configured to allow remote connections, otherwise the connection request will fail, even if all firewall rules are correct and the PostgreSQL server is listening on the right port.
Steps
Outline
Couldn't create links, but this is a rather long answer so this may help.
Tools to check ports during any step
0.1 nc or netcat
0.2 nmap
0.3 netstat
0.4 lsof
IP addresses
1.1 Your laptop's public IP address
1.2 GCE instance's IP address
Firewall rules
2.1 Check existing
2.2 Add new firewall rules
Configure PostgreSQL to accept remote connections
3.1 Finding the above configuration files
3.2 postgresql.conf
3.3 pg_hba.conf
0. Tools to check ports during any step
0.1 nc or netcat
$ nc -zv 4.3.2.1 5432
Where
-v Produce more verbose output.
-z Only scan for listening daemons, without sending any data to
them. Cannot be used together with -l.
Possible outcomes:
Connection to 4.3.2.1 5432 port [tcp/postgresql] succeeded!
Yay.
nc: connect to 4.3.2.1 port 8000 (tcp) failed: Connection refused
Port open by firewall, but service either not listening or refusing connection.
command just hangs
Firewall is blocking.
0.2 nmap
$ nmap 4.3.2.1
Starting Nmap 7.70 ( https://nmap.org ) at 2019-09-09 18:28 PDT
Nmap scan report for 1.2.3.4.bc.googleusercontent.com (4.3.2.1)
Host is up (0.12s latency).
Not shown: 993 filtered ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp closed http
443/tcp closed https
3389/tcp closed ms-wbt-server
4000/tcp closed remoteanything
5432/tcp open postgresql # firewall open, service up and listening
8000/tcp closed http-alt # firewall open, is service up or listening?
0.3 netstat
$ netstat -tuplen
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:4000 0.0.0.0:* LISTEN 1000 4223185 29432/beam.smp
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1000 4020942 15020/postgres
tcp 0 0 127.0.0.1:5433 0.0.0.0:* LISTEN 1000 3246566 20553/postgres
tcp6 0 0 ::1:5432 :::* LISTEN 1000 4020941 15020/postgres
tcp6 0 0 ::1:5433 :::* LISTEN 1000 3246565 20553/postgres
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 4624644 6311/chrome --type=
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 4624643 6311/chrome --type=
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 4625649 6230/chrome
udp 0 0 0.0.0.0:68 0.0.0.0:* 0 20911 -
udp6 0 0 :::546 :::* 0 4621237 -
where
-t | --tcp
-u | --udp
-p, --program
Show the PID and name of the program to which each socket belongs.
-l, --listening
Show only listening sockets. (These are omitted by default.)
-e, --extend
Display additional information. Use this option twice for maximum
detail.
--numeric, -n
Show numerical addresses instead of trying to determine symbolic host,
port or user names.
When this is issued on the instance where PostgreSQL is running and you don't see lines like the ones below, it means that PostgreSQL is not configured for remote connections:
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1001 238400 30826/postgres
tcp6 0 0 :::5432 :::* LISTEN 1001 238401 30826/postgres
0.4 lsof
To check on the instance whether the service is running at all:
$ sudo lsof -i -P -n | grep LISTEN
systemd-r 457 systemd-resolve 13u IPv4 14870 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 733 root 3u IPv4 19233 0t0 TCP *:22 (LISTEN)
sshd 733 root 4u IPv6 19244 0t0 TCP *:22 (LISTEN)
postgres 2733 postgres 3u IPv4 23655 0t0 TCP 127.0.0.1:5432 (LISTEN)
python3 26083 a_user 4u IPv4 392307 0t0 TCP *:8000 (LISTEN)
1. IP addresses
To connect from your laptop, you will need the public IP address of your laptop, and that of the Google Compute Engine (GCE) instance.
1.1 Your laptop's public IP address
(From this article.)
$ dig +short myip.opendns.com @resolver1.opendns.com
4.3.2.1
1.2 GCE instance's IP address
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
access-news us-east1-d n1-standard-2 10.142.0.5 34.73.156.19 RUNNING
lynx-dev us-east1-d n1-standard-1 10.142.0.2 35.231.66.229 RUNNING
tr2 us-east1-d n1-standard-1 10.142.0.3 35.196.195.199 RUNNING
If you also need the network-tags of the instances:
$ gcloud compute instances list --format='table(name,status,tags.list())'
NAME STATUS TAGS
access-news RUNNING fingerprint=mdTPd8rXoQM=,items=[u'access-news', u'http-server', u'https-server']
lynx-dev RUNNING fingerprint=CpSmrCTD0LE=,items=[u'http-server', u'https-server', u'lynx-dev']
tr2 RUNNING fingerprint=84JxACwWD7U=,items=[u'http-server', u'https-server', u'tr2']
2. Firewall rules
Dealing only with GCE firewall rules below, but make sure that iptables doesn't inadvertently block traffic.
See also
"Firewall rules overview" (official docs)
GCE firewall rules vs. iptables
Summary of GCE firewall terms
Behaviour of GCE firewall rules on instances (external vs internal IP addresses)
2.1 Check existing
$ gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
default-allow-icmp default INGRESS 65534 icmp False
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp False
default-allow-rdp default INGRESS 65534 tcp:3389 False
default-allow-ssh default INGRESS 65534 tcp:22 False
pg-from-tag1-to-tag2 default INGRESS 1000 tcp:5432 False
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
A more comprehensive list that includes network-tags as well (from gcloud compute firewall-rules list --help):
$ gcloud compute firewall-rules list --format="table( \
name, \
network, \
direction, \
priority, \
sourceRanges.list():label=SRC_RANGES, \
destinationRanges.list():label=DEST_RANGES, \
allowed[].map().firewall_rule().list():label=ALLOW, \
denied[].map().firewall_rule().list():label=DENY, \
sourceTags.list():label=SRC_TAGS, \
sourceServiceAccounts.list():label=SRC_SVC_ACCT, \
targetTags.list():label=TARGET_TAGS, \
targetServiceAccounts.list():label=TARGET_SVC_ACCT, \
disabled \
)"
NAME NETWORK DIRECTION PRIORITY SRC_RANGES DEST_RANGES ALLOW DENY SRC_TAGS SRC_SVC_ACCT TARGET_TAGS TARGET_SVC_ACCT DISABLED
default-allow-http default INGRESS 1000 0.0.0.0/0 tcp:80 http-server False
default-allow-https default INGRESS 1000 0.0.0.0/0 tcp:443 https-server False
default-allow-icmp default INGRESS 65534 0.0.0.0/0 icmp False
default-allow-internal default INGRESS 65534 10.128.0.0/9 tcp:0-65535,udp:0-65535,icmp False
default-allow-rdp default INGRESS 65534 0.0.0.0/0 tcp:3389 False
default-allow-ssh default INGRESS 65534 0.0.0.0/0 tcp:22 False
pg-from-tag1-to-tag2 default INGRESS 1000 4.3.2.1 tcp:5432 tag1 tag2 False
2.2 Add new firewall rules
To open the default PostgreSQL port (5432) from every source to every instance:
$ gcloud compute firewall-rules create \
    postgres-all \
    --network default \
    --priority 1000 \
    --direction ingress \
    --action allow \
    --rules tcp:5432
To restrict it between your computer (source: YOUR_IP) and the GCE instance (destination: INSTANCE_IP):
$ gcloud compute firewall-rules create \
    postgres-from-you-to-instance \
    --network default \
    --priority 1000 \
    --direction ingress \
    --action allow \
    --rules tcp:5432 \
    --destination-ranges INSTANCE_IP \
    --source-ranges YOUR_IP
Instead of --source-ranges and --destination-ranges one could use source and target network tags or service accounts as well. See the "Source or destination" section in the firewall docs.
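For example, a sketch of the tag-based variant (the tag pg-server is made up here and would have to be added to the instance's network tags):
$ gcloud compute firewall-rules create \
    postgres-from-you-to-tagged \
    --network default \
    --direction ingress \
    --action allow \
    --rules tcp:5432 \
    --source-ranges YOUR_IP \
    --target-tags pg-server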
3. Configure PostgreSQL to accept remote connections
This is an update to Neeraj Singh's post.
By default PostgreSQL is configured to be bound to “localhost”, therefore the below configuration files will need to be updated:
postgresql.conf, and
pg_hba.conf
3.1 Finding the above configuration files
The location of both files can be queried from PostgreSQL itself (trick taken from this Stackoverflow thread):
$ sudo -u postgres psql -c "SHOW hba_file" -c "SHOW config_file"
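The output will look something like this (the exact paths depend on the distribution and PostgreSQL version; the ones below are typical for a Debian/Ubuntu PostgreSQL 11 package install):
               hba_file
---------------------------------------
 /etc/postgresql/11/main/pg_hba.conf
(1 row)

               config_file
------------------------------------------
 /etc/postgresql/11/main/postgresql.conf
(1 row)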
3.2 postgresql.conf
The configuration file comes with helpful hints to get this working:
listen_addresses = 'localhost' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
For a quick and dirty solution just change it to
listen_addresses = '*'
Restart the server (see here how). Once PostgreSQL is restarted, it will start listening on all IP addresses (see netstat -tuplen).
To restart PostgreSQL:
$ sudo systemctl restart postgresql@11-main
# or
$ pg_ctl restart
The listen_addresses documentation says that it "Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications.", but that's all. It specifies the sockets the packets are accepted from, but if the incoming connections are not authenticated (configured via pg_hba.conf), then the packets will be rejected (dropped?) regardless.
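If listening on every interface is broader than you want, listen_addresses also accepts a comma-separated list of the server's own addresses instead of '*'. A sketch, reusing the internal IP of one of the example instances from the gcloud listing above:
listen_addresses = 'localhost, 10.142.0.5'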
3.3 pg_hba.conf
From 20.1. The pg_hba.conf File: "Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. (HBA stands for host-based authentication.)"
This is a complex topic so reading the documentation is crucial, but this will suffice for development on trusted networks:
host all all 0.0.0.0/0 trust
host all all ::/0 trust
Another restart is required at this point.
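For anything less trusted than a private development network, a narrower rule with password authentication is preferable; a sketch reusing the laptop IP from step 1.1:
host    all    all    4.3.2.1/32    md5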