proxy Memcache_Servers has no server available - memcached

HAProxy reports "proxy Memcache_Servers has no server available" when I start haproxy.service:
[root@ha-node1 log]# systemctl restart haproxy.service
Message from syslogd@localhost at Aug 2 10:49:23 ...
haproxy[81665]: proxy Memcache_Servers has no server available!
The configuration in my haproxy.cfg:
listen Memcache_Servers
bind 45.117.40.168:11211
balance roundrobin
mode tcp
option tcpka
server ha-node1 ha-node1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server ha-node2 ha-node2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
server ha-node3 ha-node3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3

Eventually I found that the IPs in my /etc/hosts are as follows:
[root@ha-node1 sysconfig]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.101 ha-node1 ha-node1.aa.com
192.168.8.102 ha-node2 ha-node2.aa.com
192.168.8.103 ha-node3 ha-node3.aa.com
45.117.40.168 ha-vhost devops.aa.com
192.168.8.104 nfs-backend backend.aa.com
But in my /etc/sysconfig/memcached, the listen IP did not match this host's IP, so I changed it to the IP from /etc/hosts.
After restarting memcached and haproxy, everything works normally now.
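For reference, a minimal /etc/sysconfig/memcached sketch; the -l address is an assumption based on the /etc/hosts entries above (192.168.8.101 on ha-node1, adjusted per node), so that the checks against ha-nodeN:11211 can actually connect:
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.8.101"
If memcached listens only on 127.0.0.1, all three health checks fail and HAProxy marks every server down, which is exactly what the "no server available" alert means.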

Related

HAProxy for postgresql Load Balancer start error - cannot bind socket

I am setting up postgresql load balancing using HAProxy and I got the error messages below:
Jun 30 07:57:43 vm0 systemd[1]: Starting HAProxy Load Balancer...
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadWrite: cannot bind socket [0.0.0.0:8081]
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadOnly: cannot bind socket [0.0.0.0:8082]
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jun 30 07:57:43 vm0 systemd[1]: Failed to start HAProxy Load Balancer.
Below is my haproxy.cfg file. I kept checking all the possibilities but couldn't find the reason for the error. I actually checked whether the ports were already in use, but no other process is using ports 8081 or 8082.
-- haproxy.cfg
listen ReadWrite
bind *:8081
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg1 pg1:5432 maxconn 100 check port 23267
listen ReadOnly
bind *:8082
option httpchk
http-check expect status 206
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg2 pg1:5432 maxconn 100 check port 23267
server pg3 pg2:5432 maxconn 100 check port 23267
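When "cannot bind socket" occurs on ports that appear free, two things are worth checking: whether something really is listening, and, on SELinux-enforcing systems, whether haproxy is allowed to bind non-standard ports at all. A diagnostic sketch (the port numbers come from the config above; mapping them to http_port_t is an assumption about the local policy):
ss -lntp | grep -E ':808[12]'    # anything already listening on 8081/8082?
sestatus                         # is SELinux enforcing?
# If enforcing, permit binding to these ports:
semanage port -a -t http_port_t -p tcp 8081
semanage port -a -t http_port_t -p tcp 8082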

ubuntu 20.04 in vagrant open port to private network

Running 2 VMs with Vagrant on a private network like:
host1: 192.168.1.1/24
host2: 192.168.1.2/24
On host1, the app listens on port 6443, but it cannot be reached from host2:
# host1
root@host1:~# ss -lntp | grep 6443
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=10537,fd=7))
# host2
root@host2:~# nc -zv -w 3 192.168.1.1 6443
nc: connect to 192.168.1.1 port 6443 (tcp) failed: Connection refused
(Actually, the app is the "kube-apiserver", and host2 fails to join as a worker node with kubeadm.)
What am I missing?
Both are Ubuntu focal (box_version '20220215.1.0') and ufw is inactive on both.
After changing the hosts' IPs, it works:
host1: 192.168.1.1/24 -> 192.168.1.2/24
host2: 192.168.1.2/24 -> 192.168.1.3/24
I guess it was caused by using a reserved IP as the gateway: the first IP of the subnet, 192.168.1.1.
I'll add references about that here later; I have to set up the k8s cluster for now.
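A minimal Vagrantfile sketch of the working layout; the box name and hostnames are assumptions based on the description above:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # assumed box name
  config.vm.define "host1" do |h|
    h.vm.hostname = "host1"
    h.vm.network "private_network", ip: "192.168.1.2"  # moved off the gateway IP
  end
  config.vm.define "host2" do |h|
    h.vm.hostname = "host2"
    h.vm.network "private_network", ip: "192.168.1.3"
  end
end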

Unable to curl pod IP using containerPort

Sorry if it's a naive question. Please correct me if my understanding is wrong.
Created POD using this command:
kubectl run nginx --image=nginx --port=8888
My understanding of this command: the nginx (application) container will be exposed/available at port 8888.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.2 node01 <none> <none>
curl -v 10.244.1.2:8888 ===> I am wondering why this failed?
* Trying 10.244.1.2:8888...
* TCP_NODELAY set
* connect to 10.244.1.2 port 8888 failed: Connection refused
* Failed to connect to 10.244.1.2 port 8888: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.1.2 port 8888: Connection refused
curl -v 10.244.1.2 ===> to my surprise, this returned a 200 success response
* Trying 10.244.1.2:80...
* TCP_NODELAY set
* Connected to 10.244.1.2 (10.244.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.244.1.2
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
If the application is still listening on the default port 80, I am wondering about the significance of container port 8888?
OK, so maybe it is used to expose the POD to the outside world.
Let's see. I went ahead and created a service for the POD:
kubectl expose pod nginx --port=80 --target-port=8888
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.96.214.161 <none> 80/TCP 13m
$ curl -v 10.96.214.161 ==> here the default port (80) didn't work
* Trying 10.96.214.161:80...
* TCP_NODELAY set
* connect to 10.96.214.161 port 80 failed: Connection refused
* Failed to connect to 10.96.214.161 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.96.214.161 port 80: Connection refused
$ curl -v 10.96.214.161:8888 ==> target port didn't work either
* Trying 10.96.214.161:8888...
* TCP_NODELAY set
....waiting forever
Which port do I need to use to make it work? Am I missing anything?
By default, the nginx server listens on port 80. You can see this in its Docker image reference.
With kubectl run nginx --image=nginx --port=8888, all you have done is declare an extra port in the pod spec alongside 80; the containerPort field is informational, so the server is still listening on port 80.
That is why anything other than port 80 wasn't working. So try target port 80 instead: change --target-port=8888 to --target-port=80.
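A quick sketch of that fix; note the ClusterIP shown earlier will change once the service is recreated:
kubectl delete svc nginx
kubectl expose pod nginx --port=80 --target-port=80
kubectl get svc nginx      # note the new ClusterIP
curl <new-cluster-ip>      # should now return the nginx welcome page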
Or, if you want to change the server port, you need to use a ConfigMap with the pod to pass a custom config to the server.
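A minimal sketch of that approach, assuming the ConfigMap name nginx-conf and the stock nginx image, whose default server block lives in /etc/nginx/conf.d/default.conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf               # hypothetical name
data:
  default.conf: |
    server {
        listen 8888;             # make nginx actually listen on 8888
        location / {
            root /usr/share/nginx/html;
            index index.html;
        }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8888
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # replaces the default server block
  volumes:
  - name: conf
    configMap:
      name: nginx-conf
With this in place, curl <pod-ip>:8888 should succeed, and a service with --target-port=8888 would work as originally expected.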

Haproxy SSL(https) health checks without terminating ssl

So I can't figure out a proper way to do the SSL check. I am not using certificates; I just need to check against an HTTPS website's URL (google.com/, for example).
I've been trying multiple combinations, without success. Maybe someone has a similar configuration.
Backends using:
> check-sni google.com sni ssl_fc_sni
returns - reason: Layer7 wrong status, code: 301, info: "Moved Permanently"
check port 80 check-ssl -
reason: Layer6 invalid response, info: "SSL handshake failure"
All others just time out. Here's the complete configuration file:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
ssl-server-verify none
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend myfront
bind *:8000
mode tcp
tcp-request inspect-delay 5s
default_backend backend1
listen stats
bind :444
stats enable
stats uri /
stats hide-version
stats auth test:test
backend Backends
balance roundrobin
option forwardfor
option httpchk
http-check send hdr host google.com meth GET uri /
http-check expect status 200
#http-check connect
#http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
#http-check expect status 200-399
#http-check connect port 443 ssl sni haproxy.1wt.eu
#http-check send meth GET uri / ver HTTP/1.1 hdr host haproxy.1wt.eu
#http-check expect status 200-399
#http-check connect port 443 ssl sni google.com
#http-check send meth GET uri / ver HTTP/1.1 hdr host google.com
default-server fall 10 rise 1
server Node1011 192.168.0.2:1011 check inter 15s check-ssl check port 443
server Node1012 192.168.0.2:1012 check inter 15s check-ssl check port 443
server Node1015 192.168.0.2:1015 check inter 15s check port 443
server Node1017 192.168.0.2:1017 check inter 15s check-ssl check-sni google.com sni ssl_fc_sni
server Node1018 192.168.0.2:1018 check inter 15s check-ssl check-sni google.com sni ssl_fc_sni
server Node1019 192.168.0.2:1019 check inter 15s check-sni google.com sni ssl_fc_sni
server Node1020 192.168.0.2:1020 check inter 15s check port 443 check-ssl
server Node1021 192.168.0.2:1021 check inter 15s check port 443 check-ssl
server Node1027 192.168.0.2:1027 check inter 15s check port 80
server Node1028 192.168.0.2:1028 check inter 15s check port 80
server Node1029 192.168.0.2:1029 check inter 15s check port 80
server Node1030 192.168.0.2:1030 check inter 15s check port 80 check-ssl
server Node1031 192.168.0.2:1031 check inter 15s check port 80 check-ssl
server Node1033 192.168.0.2:1033 check inter 15s check port 80 check-ssl verify none
server Node1034 192.168.0.2:1034 check inter 15s check port 80 check-ssl verify none
server Node1035 192.168.0.2:1035 check inter 15s check-ssl
server Node1036 192.168.0.2:1036 check inter 15s check-ssl
server Node1048 192.168.0.2:1048 check inter 15s check-ssl verify none
server Node1049 192.168.0.2:1049 check inter 15s check-ssl verify none
P.S. I found a website that explains just what I'm trying to do (https://hodari.be/posts/2020_09_04_configure_sni_for_haproxy_backends/), but that doesn't work either; my haproxy version is 2.2.3.
P.P.S. I am literally trying to check against www.google.com, just to be clear.
Thank you!
That's really not an error. If you curl https://google.com, it does do a 301 redirect to https://www.google.com/. I snipped out some protocol details below for brevity, but you get the idea.
Either change your expect to 301, or use www.google.com.
paul:~ $ curl -vv https://google.com
* Rebuilt URL to: https://google.com/
* Trying 172.217.1.206...
-[snip]-
> GET / HTTP/2
> Host: google.com
> User-Agent: curl/7.58.0
> Accept: */*
>
-[snip]-
< HTTP/2 301
< location: https://www.google.com/
< content-type: text/html; charset=UTF-8
< date: Mon, 18 Jan 2021 03:42:04 GMT
< expires: Wed, 17 Feb 2021 03:42:04 GMT
< cache-control: public, max-age=2592000
< server: gws
< content-length: 220
< x-xss-protection: 0
< x-frame-options: SAMEORIGIN
< alt-svc: h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
<
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
So, if you want to avoid the 301, use the www.google.com value in your config like this:
http-check send hdr host www.google.com meth GET uri /
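Putting that together, a sketch of the relevant check lines (HAProxy 2.2 syntax), assuming the checks go to each server's own port over TLS; add "check port 443" if the TLS endpoint is elsewhere, or widen the expect range to 200-399 to tolerate redirects:
option httpchk
http-check send meth GET uri / ver HTTP/1.1 hdr host www.google.com
http-check expect status 200
server Node1011 192.168.0.2:1011 check inter 15s check-ssl check-sni www.google.com verify none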

Kubernetes can telnet into POD but can't curl web content

In my Kubernetes environment I have the following two pods running:
NAME READY STATUS RESTARTS AGE IP NODE
httpd-6cc5cff4f6-5j2p2 1/1 Running 0 1h 172.16.44.12 node01
tomcat-68ccbb7d9d-c2n5m 1/1 Running 0 45m 172.16.44.13 node02
One is a Tomcat instance and the other is an Apache httpd instance.
From node01 and node02 I can curl the httpd pod, which is using port 80. But if I curl the Tomcat server running on node02 from node01, it fails. I get the output below:
[root@node1 ~]# curl -v 172.16.44.13:8080
* About to connect() to 172.16.44.13 port 8080 (#0)
* Trying 172.16.44.13...
* Connected to 172.16.44.13 (172.16.44.13) port 8080 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.16.44.13:8080
> Accept: */*
>
^C
[root@node1 ~]# wget -v 172.16.44.13:8080
--2019-01-16 12:00:21-- http://172.16.44.13:8080/
Connecting to 172.16.44.13:8080... connected.
HTTP request sent, awaiting response...
But I'm able to telnet to port 8080 on 172.16.44.13 from node1:
[root@node1 ~]# telnet 172.16.44.13 8080
Trying 172.16.44.13...
Connected to 172.16.44.13.
Escape character is '^]'.
^]
telnet>
Any reason for this behavior? Why am I able to telnet but unable to get the web content? I have also tried different ports, but curl works only for port 80.
I was able to fix this by disabling SELinux on my nodes.
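A sketch of what that usually means in practice; setenforce is temporary, and the config edit persists across reboots (a targeted policy fix is generally preferable to disabling SELinux entirely):
setenforce 0    # permissive until reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config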