I'm trying to configure HAProxy to fail over Samba TCP traffic.
Now I have this config:
frontend rserve_frontend445
    bind *:445
    mode tcp
    option tcplog
    timeout client 15s
    default_backend rserve_backend445

backend rserve_backend445
    mode tcp
    #option tcplog
    #option log-health-checks
    #option redispatch
    log global
    #balance roundrobin
    timeout connect 5s
    timeout server 10s
    server cf-m 192.168.1.2:445
    server cf-l 192.168.2.2:445 backup
When I open the share \\haproxy\ I see the SMB share on server 192.168.1.2.
When I start copying a big file and then disconnect the network on the primary backend server 192.168.1.2, the copy freezes and HAProxy can't redirect me to the backup node 192.168.2.2.
I want to create an automatic Samba failover proxy via HAProxy. What am I doing wrong?
I'm new to this, thanks in advance!
Besides the Samba frontend presenting the CIFS share, there are two components you have to consider to host a cluster:
Clustered Storage:
The storage backend that Samba writes to (the files on disk) has to be available to all Samba servers. One solution is a cluster file system like GlusterFS or CephFS.
Shared Samba State:
Samba uses TDB, a local database, to store state information. To be able to share this state across nodes, there is CTDB.
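To give a rough idea of what that involves, here is a minimal sketch (the addresses are reused from the question; a real setup also needs the clustered filesystem mounted on every node):

/etc/ctdb/nodes (one private, cluster-internal address per node):
    192.168.1.2
    192.168.2.2

smb.conf on every node (switch Samba to the clustered TDB backend):
    [global]
        clustering = yes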
As HAProxy has no control over these components, it cannot load-balance an active connection transparently.
Even with a clustered filesystem and CTDB in place, Samba doesn't seem to be able to handle a transparent failover (correct me if I'm wrong). For more on that, see "CTDB Samba failover not highly available".
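Independent of the clustering problem: the config in the question has no health checks at all ("check" is missing on the server lines), so HAProxy never marks cf-m as down, not even for new connections. A minimal sketch of the backend with checks enabled (the timing values are assumptions, not tuned recommendations):

backend rserve_backend445
    mode tcp
    log global
    timeout connect 5s
    timeout server 10s
    server cf-m 192.168.1.2:445 check inter 2s fall 3 rise 2
    server cf-l 192.168.2.2:445 check backup

Even with that, an SMB session that is already copying a file will still freeze on failover; only new connections will be sent to the backup node.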
Related
I need to allow inbound connections from a remote platform to do some administrative tasks on one of my databases (in my case, allow a reverse-ETL service to feed one of my PostgreSQL databases in a pod in my k8s cluster).
The remote platform lets me configure a PostgreSQL destination through SSH tunnels, reverse SSH tunnels, or direct connections. Of course, I would like traffic to be encrypted, so I'm opting for the SSH or reverse SSH tunnel.
Any idea if/how I can set up this access on my k8s cluster?
I would like to give the remote service access ONLY to one of my PG databases (and not the whole cluster/namespace, for security reasons).
The scenario I was thinking about:
Traefik listens for SSH on a specific port (like 2222).
Route this port to an SSH bastion pod capable of managing incoming SSH connections and logging in as a specific Linux user. Only allow connections from the remote service's IPs via an IP whitelist middleware.
Allow connections from this bastion host pod (or ideally, this Linux user) ONLY to my PostgreSQL instance on the default PG port.
If I open a bastion host (step 2), by default all my users will have access to all services on the cluster... right? How can I isolate my bastion host instance so it can only connect to PG? I haven't used network policies yet, but I believe they may be the answer... however, would it be possible to activate network policies for a single pod only (my bastion host) and leave the rest as it is?
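For what it's worth, a NetworkPolicy only applies to the pods matched by its podSelector, so it can target just the bastion pod. A minimal sketch, assuming the bastion pod is labeled app: ssh-bastion and the PostgreSQL pod app: postgres (both labels are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: bastion-egress-to-postgres
spec:
  podSelector:
    matchLabels:
      app: ssh-bastion        # only the bastion pod becomes isolated
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres   # only PostgreSQL pods are reachable
      ports:
        - protocol: TCP
          port: 5432

Pods that no policy selects keep the default allow-all behaviour, so the rest of the cluster is unaffected; if the bastion connects by service name you may also need an egress rule for DNS, and your CNI has to support NetworkPolicy.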
I have an EKS Kubernetes cluster. At a high level, the setup is:
a) There is an EC2 instance, let's call it "VM" or "Host".
b) In the VM, there is a pod running 2 containers: a sidecar HAProxy container + my app container.
What happens is that when external requests come in, inside the HAProxy container I can see that the source IP is the Host's IP. As the Host has a single IP, there can be a maximum of about 64K connections to HAProxy.
I'm curious to know how to work around this problem, as I want to be able to handle something like 256K connections per Host.
I'm not sure if you understand the reason for the 64k limit, so let me try to explain it.
First of all, that is a good answer about the 64k limitation.
Let's say that HAProxy (192.168.100.100) is listening on port 8080 and the free ports on the Host (192.168.1.1) are 1,353~65,353, so you have combinations of:
source 192.168.1.1:1353~65353 → destination 192.168.100.100:8080
That is 64k simultaneous connections. I don't know how often the NAT table is updated, but after an update unused ports will be reused, so "simultaneous" is the key word here.
If your only problem is the limit of connections per IP, here are a couple of solutions:
Run multiple HAProxies. Three containers increase the limit to 64,000 × 3 = 192,000.
Listen on multiple ports in HAProxy (look into SO_REUSEPORT). Three ports (8080, 8081, 8082) increase the maximum number of connections to 192,000; see the sketch after this list.
The Host interface IP acts like a gateway for the Docker internal network, so I am not sure whether it is possible to assign several IPs to the Host or to HAProxy. At least I didn't find information about it.
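A minimal sketch of the multi-port variant, assuming a plain TCP frontend and a backend named be_app (names and ports are placeholders):

frontend fe_multi
    mode tcp
    bind *:8080
    bind *:8081
    bind *:8082
    # a port range works too: bind *:8080-8082
    default_backend be_app

Each extra listening port is another destination port, which adds another ~64k possible source-port combinations from the same Host IP.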
It turns out that in Kubernetes one can configure how clients access the service, and the choice we had was nodePort. When we changed it to hostPort, the clients' source IPs were seen in the HAProxy container, and hence the limitation I was hitting was removed.
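For reference, hostPort is declared per container port in the pod spec; a rough sketch (image names, tags and port numbers are assumptions, not our actual manifest):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-haproxy
spec:
  containers:
    - name: haproxy
      image: haproxy:2.8          # hypothetical image tag
      ports:
        - containerPort: 8080
          hostPort: 8080          # clients hit <node IP>:8080 and their source IP reaches the pod
    - name: myapp
      image: myapp:latest         # hypothetical application image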
If this option had failed, my next option was to try the recommendation in the other response, which was to have HAProxy listen on multiple ports. Thankfully that was not needed.
Thanks!
I'm trying to use HAProxy to load balance 2 virtual machines. To avoid confusion: yes, I have 2 virtual machines, and on both of them I have installed these components: HAProxy, Keepalived, and the web application. Furthermore, I have configured a floating IP.
The basic flow I want to achieve is: the master HAProxy takes all requests coming to the floating IP and load balances the traffic. Keepalived checks whether the master server is online; if it is not, I want to direct traffic to the HAProxy on my second VM.
My question is: how do I direct traffic to the backup VM's HAProxy if the master fails?
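With Keepalived, the usual approach is VRRP: the floating IP is the VRRP virtual IP, and a track script demotes the master when HAProxy stops responding, so the backup VM takes over the address. A minimal sketch of keepalived.conf (interface name, router ID, priorities and the floating IP are assumptions):

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # succeeds while an haproxy process exists
    interval 2
    weight -20                    # drop priority if the check fails
}

vrrp_instance VI_1 {
    state MASTER                  # use BACKUP and a lower priority on the second VM
    interface eth0
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.168.0.100             # the floating IP clients connect to
    }
    track_script {
        chk_haproxy
    }
}

When the check fails on the master, its effective priority drops below the backup's, the floating IP moves to the second VM, and clients keep using the same address.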
Currently, I am using PgBouncer for connection pooling in a PostgreSQL cluster. I just want to make sure whether it is possible to load balance requests between the nodes of the PostgreSQL cluster using PgBouncer.
Now there's pgbouncer-rr-patch (a PgBouncer fork by AWS) that can do load balancing:
Routing: intelligently send queries to different database servers from one client connection; use it to partition or load balance across multiple servers/clusters.
From the PgBouncer FAQ
How to load-balance queries between several servers?
PgBouncer does not have internal multi-host configuration. It is possible via some external tools:
DNS round-robin. Use several IPs behind one DNS name. PgBouncer does not look up DNS each time new connection is launched. Instead it caches all IPs and does round-robin internally. Note: if there is more than 8 IPs behind one name, the DNS backend must support EDNS0 protocol. See README for details.
Use a TCP connection load-balancer. Either LVS or HAProxy seem to be good choices. On PgBouncer side it may be good idea to make server_lifetime smaller and also turn server_round_robin on - by default idle connections are reused by LIFO algorithm which may work not so well when load-balancing is needed
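To illustrate the second option from the FAQ, here is a minimal sketch with HAProxy doing TCP balancing in front of two PostgreSQL nodes and PgBouncer pointed at the proxy (hosts, ports and the database name are assumptions):

haproxy.cfg:
    listen postgres
        mode tcp
        bind *:5432
        balance roundrobin
        server pg1 10.0.0.11:5432 check
        server pg2 10.0.0.12:5432 check

pgbouncer.ini:
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    ; recycle server connections sooner and reuse idle ones round-robin, as the FAQ suggests
    server_lifetime = 300
    server_round_robin = 1

Note that plain TCP balancing cannot tell a primary from a read replica, so this only fits setups where any node can serve the query.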
I'm trying to use a MongoDB container on Amazon ECS, and I want to set up a load balancer with a health check that pings port 27017.
What I've done:
Double checked all my security groups to make sure everything has access to port 27017
The load balancer has a health check set to TCP:27017
My ECS service is set up to identify the ELB and seems to be correctly mapped to the containerName and port.
It still fails every time. I can't even figure out how to debug this problem. When I run the container locally I can run nc localhost 27017 and it "seems" to connect (well, I get a blank line, which I don't get with any other port).
How can I health check my MongoDB?
Update 1:
Output of netstat -tulpn | grep 27017:
tcp 0 0 :::27017 :::* LISTEN 2600/docker-proxy
Update 2:
My health check settings are as follows:
Ping Target TCP:27017
Timeout 10 seconds
Interval 30 seconds
Unhealthy threshold 5
Healthy threshold 10
This is just a configuration issue on your part.
If the ELB is public, then you need to make sure the security groups for your instances allow the ELB to talk to them.
If the ELB is internal, then you need to make sure the security groups allow them to talk to each other, for example by putting them in the same security group.
You also need to make sure you've forwarded the port to the ECS instance so that it's reachable. Make sure you can hit the service on the ECS instance from a different server, whether at AWS or from your laptop.
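A quick way to test that from the command line (the address is a placeholder):

# from a host the security group should allow (another instance or your laptop)
nc -zv <ecs-instance-address> 27017

# on the ECS instance itself, confirm something is listening on the mapped port
sudo netstat -tulpn | grep 27017

If nc fails from outside while the port is listening on the instance, the problem is the security group or port mapping rather than MongoDB itself.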
What do your security groups look like? What does your configuration look like? Are the instances private? public? Is the ELB private? public?