HAProxy random Empty Response

I installed HAProxy to balance between two servers. Unfortunately, HAProxy randomly returns ERR_EMPTY_RESPONSE. I also enabled the stats page, but it is only reachable intermittently: sometimes it is shown, sometimes not. I double-checked my configuration with some friends and we did not find any problems.
defaults
    timeout connect 3000ms
    timeout server 10000ms
    timeout client 10000ms

global
    log 127.0.0.1 local0 notice
    maxconn 2000
    user haproxy
    group haproxy

frontend stats
    bind *:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth user:password

frontend http_in
    bind *:80
    acl is_audio hdr_end(host) -i subdomain.myserver.com
    acl is_proxystats hdr_end(host) -i stats.myserver.com
    use_backend srv_audio if is_audio
    use_backend srv_stats if is_proxystats
    # acl url_blog path_beg /blog
    # use_backend blog_back if url_blog
    default_backend srv_audio

backend srv_audio
    balance roundrobin
    server audio1 10.10.10.1:80 check
    server audio2 10.10.10.2:80 check

backend srv_stats
    server Local 127.0.0.1:1936
My environment:
HA Proxy version 1.6.3 (package 1.6.3-1ubuntu0.1 amd64)
Ubuntu 16.04.2 LTS
Cloud machine on AWS Lightsail, 512MB RAM
System with all packages updated.
I already read the answer to the similar question HAProxy random HTTP 503 errors, and it is not the same issue. As suggested there, the command netstat -tulpn | grep 80 does not show two HAProxy instances running:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
But ps ax | grep haproxy returns:
22890 ? Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
22891 ? S 0:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
22894 ? Ss 0:31 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Well, I dug deeper into HAProxy, read a lot of tutorials, and I believe I found the solution.
I made two changes:
Changed hdr_end(host) to hdr_dom(host)
Added mode http to frontend http_in, backend srv_audio and backend srv_stats
Now HAProxy is very stable, with no more bizarre behavior.
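For reference, here is a sketch of the relevant sections after those two changes (everything else as in the original config):

frontend http_in
    bind *:80
    mode http
    acl is_audio hdr_dom(host) -i subdomain.myserver.com
    acl is_proxystats hdr_dom(host) -i stats.myserver.com
    use_backend srv_audio if is_audio
    use_backend srv_stats if is_proxystats
    default_backend srv_audio

backend srv_audio
    mode http
    balance roundrobin
    server audio1 10.10.10.1:80 check
    server audio2 10.10.10.2:80 check

backend srv_stats
    mode http
    server Local 127.0.0.1:1936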

Related

The connection to the server x.x.x.:6443 was refused - did you specify the right host or port?

I've installed Docker, kubectl and kubeadm.
I want to create my device model and device CRDs (I'm following this guide).
So, when I run the command :
kubectl create -f devices_v1alpha1_devicemodel.yaml
as a user, I get the following output:
The connection to the server 10.0.0.68:6443 was refused - did you
specify the right host or port?
(I have added the permission for the user to access the .kube folder)
With netstat, I get:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo netstat -atunp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address            Foreign Address        State        PID/Program name
tcp        0      0 0.0.0.0:22               0.0.0.0:*              LISTEN       1298/sshd
tcp        0    224 10.0.0.68:22             160.98.31.160:52503    ESTABLISHED  2061/sshd: ubuntu [
tcp6       0      0 :::22                    :::*                   LISTEN       1298/sshd
udp        0      0 0.0.0.0:68               0.0.0.0:*                           910/dhclient
udp        0      0 10.0.0.68:123            0.0.0.0:*                           1241/ntpd
udp        0      0 127.0.0.1:123            0.0.0.0:*                           1241/ntpd
udp        0      0 0.0.0.0:123              0.0.0.0:*                           1241/ntpd
udp6       0      0 fe80::f816:3eff:fe0:123  :::*                                1241/ntpd
udp6       0      0 2001:620:5ca1:2f0:f:123  :::*                                1241/ntpd
udp6       0      0 ::1:123                  :::*                                1241/ntpd
udp6       0      0 :::123                   :::*                                1241/ntpd
With lsof -i:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 910 root 6u IPv4 12765 0t0 UDP *:bootpc
ntpd 1241 ntp 16u IPv6 15340 0t0 UDP *:ntp
ntpd 1241 ntp 17u IPv4 15343 0t0 UDP *:ntp
ntpd 1241 ntp 18u IPv4 15347 0t0 UDP localhost:ntp
ntpd 1241 ntp 19u IPv4 15349 0t0 UDP 10.0.0.68:ntp
ntpd 1241 ntp 20u IPv6 15351 0t0 UDP ip6-localhost:ntp
ntpd 1241 ntp 21u IPv6 15353 0t0 UDP [2001:620:5ca1:2f0:f816:3eff:fe0a:874a]:ntp
ntpd 1241 ntp 22u IPv6 15355 0t0 UDP [fe80::f816:3eff:fe0a:874a]:ntp
sshd 1298 root 3u IPv4 18821 0t0 TCP *:ssh (LISTEN)
sshd 1298 root 4u IPv6 18830 0t0 TCP *:ssh (LISTEN)
sshd 2061 root 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
sshd 2124 ubuntu 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
I've already tried this, and:
sudo swapoff -a
Please perform the steps below on the master node. It works like a charm.
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
I was facing a similar problem, with the following error while deploying the pod network into the cluster using flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server 192.168.1.101:6443 was refused - did you specify the right host or port?
I performed the steps below to solve the issue:
$ sudo systemctl stop kubelet
$ sudo systemctl start kubelet
$ strace -eopenat kubectl version
Then apply the YAML file:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
kubelet must be down. You need to check the kubelet logs on the master and ensure the API server is running and online. Only then will you be able to deploy.
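A quick way to verify both, as a sketch (on Docker-based setups, docker ps takes the place of crictl ps):
# is the kubelet running?
sudo systemctl status kubelet
# is the API server container up?
sudo crictl ps | grep kube-apiserver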
I'll add another reason for this error that was the issue in my case.
I exported the wrong kubeconfig file to the shell, and the error message was very accurate in that case: the endpoint for the API server was wrong (and of course other fields like the cluster name and the certificates, but the server endpoint is the first link in the chain).
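A quick sanity check of which endpoint your shell is actually pointing at (a sketch; an empty KUBECONFIG means ~/.kube/config is in use):
# which kubeconfig is in effect?
echo "$KUBECONFIG"
# which API server endpoint does the current context use?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'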
I've encountered this problem too, and swapoff -a worked for me:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
This is because Docker is down. Start Docker on your machine.
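On a systemd-based machine, that looks like this (a sketch; enable makes it survive reboots):
sudo systemctl start docker     # start the Docker daemon now
sudo systemctl enable docker    # start it automatically at boot
systemctl status docker         # confirm it is active (running)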
I have tried many ways but couldn't get it to work, then accidentally found the solution to my own situation:
In ~/.kube/ I have
drwxr-x--- 4 staff 128 25 Jul 22:31 cache
-rw------- 1 staff 8781 25 Jul 22:46 config
drwxr-xr-x 8 staff 256 25 Jul 22:46 configs
-rw-r--r-- 1 staff 14 25 Jul 22:31 kubectx
drwxr-xr-x 4 staff 128 29 Jun 16:59 kubens
My assumption is that something was messed up in the .kube configuration, but I couldn't figure out which files, so I removed most of the directories/files, including cache and config. (If you don't need to keep all the configs, you could remove all of them.)
Then re-enable Kubernetes from the Docker dashboard to get all the files reinstalled.
Reconfigure docker-desktop with kubectl config use-context docker-desktop.
Finally, my 6443 responded.
Another suggestion is to restart your container runtime: sudo systemctl restart containerd. In my situation I'm working with containerd, not a typical Docker setup.
After doing this, I think it should fix the issue for you, but if it doesn't work, then try sudo swapoff -a.
Assuming that's all the output from your netstat command and that you ran it on the master node (the one you installed Kubernetes via kubeadm on), it looks to me like the installation did not complete correctly, as none of the usual ports you would expect to see on a Kubernetes master node are present.
Usually on a Kubernetes master node you'd expect to see kube-apiserver, kube-scheduler, kube-controller-manager, kubelet and possibly etcd all listening on the network.
What was the output of your kubeadm init command?
I ran into this issue as well; I tried the solutions noted above and they did not work for me. Here is what worked for me:
FIX:
kubeadm init --apiserver-advertise-address=10.139.0.42 --ignore-preflight-errors all --pod-network-cidr=172.17.0.1/16 --token-ttl 0
source:
https://www.c-sharpcorner.com/article/kubernetes-installation-in-redhat-and-centos/
I faced this issue recently due to expired certificates for my K8S cluster.
I followed this blog link to renew the certificates and also replaced the kubeconfig file I was using.
Note: it is important to replace the kubeconfig after renewing the certificates, or else you will end up getting the following error message from kubectl:
error: You must be logged in to the server (Unauthorized)
I faced the same issue with the same error. You need to check that your container runtime (Docker/containerd) is active and running:
systemctl status containerd
systemctl restart containerd
systemctl restart kubelet
Then, if you check its status, it should be up and running, and you will be able to create k8s objects again.
In my case KUBECONFIG was the problem causing the same error.
This solved it:
export KUBECONFIG=/home/$(whoami)/.kube/config
I solved this exact problem by making sure that in the /etc/hosts file the IP address and hostname were set correctly:
192.168.10.11 kube-01.testing kube-01
instead of:
127.0.1.1 kube-01.testing kube-01
(or 127.0.0.1).
This happens on Ubuntu and Debian, as far as I know.
If you did all the above steps (sudo swapoff -a, kubeconfig file permissions, kubelet and containerd status, etc.) and nothing works for you, it is a good idea to take a look at the kubelet logs:
journalctl -xeu kubelet
In my case, I realized that the kubelet was trying to download images but failing.
So I turned on the VPN on the node, and after a couple of seconds, everything worked!
I have faced the same issue "The connection to the server {IP}:6443 was refused - did you specify the right host or port?"
The reason was that since Kubernetes 1.24, dockershim has been removed.
So, when installing a Kubernetes cluster with kubeadm and using Docker as the container runtime, cri-dockerd must be installed as well (otherwise you get the above error).
As mentioned in the Kubernetes documentation:
On each of your nodes, install Docker for your Linux distribution as per Install Docker Engine.
Install cri-dockerd, following the instructions in that source code repository.
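Once cri-dockerd is installed, kubeadm has to be pointed at its socket; a minimal sketch, assuming cri-dockerd's default socket path:
# tell kubeadm to use cri-dockerd instead of the removed dockershim
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock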
As the error message clearly says, the connection to port 6443 is refused. That means either:
the port is blocked by a firewall, or,
if there is no firewall, nothing is listening on port 6443 on the specified host. You can cross-verify using the command below:
# netstat -tulpn | grep -i 6443
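If a firewall turns out to be the blocker, opening the port looks roughly like this (a sketch for firewalld; on Ubuntu, ufw allow 6443/tcp is the equivalent):
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --reload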
Solution:
6443 is the kube-apiserver port in k8s. If it is not listening, make sure kube-apiserver is running properly. I faced the same problem and fixed it by setting the correct arguments for kube-apiserver, which serves that port:
/usr/local/bin/kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.crt \
  --enable-admission-plugins=NodeRestriction,ServiceAccount \
  --enable-swagger-ui=true \
  --enable-bootstrap-token-auth=true \
  --etcd-cafile=/var/lib/kubernetes/ca.crt \
  --etcd-certfile=/var/lib/kubernetes/etcd-server.crt \
  --etcd-keyfile=/var/lib/kubernetes/etcd-server.key \
  --etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
  --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
  --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
  --kubelet-https=true \
  --runtime-config=api/all \
  --service-account-key-file=/var/lib/kubernetes/service-account.crt \
  --service-cluster-ip-range=10.96.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
  --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
  --v=2

Haproxy + percona 5.7 xtradb error

Hello, I configured HAProxy following the DigitalOcean manual, with round robin for Percona 5.7 databases, but when I try to connect to the database from the HAProxy server, I get an error.
On the haproxy server:
mysql -h 127.0.0.1 -u haproxy_root -p -e "SHOW DATABASES"
And I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 2
Haproxy config:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 1024
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 1024
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen galera_cluster
    bind 127.0.0.1:3306
    mode tcp
    option httpchk
    balance leastconn
    server galera-node01 192.168.0.101:3306 check port 9200
    server galera-node02 192.168.0.102:3306 check port 9200
    server galera-node03 192.168.0.103:3306 check port 9200
If I connect directly to the database at 192.168.0.101, everything works and I get a response from the database, but when I make the request through HAProxy at 127.0.0.1 I get this error:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading
initial communication packet', system error: 2
My xinetd config on mysql:
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = nobody
server = /usr/bin/clustercheck
server_args = percona percona
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local"
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra"
#
# recommended to put the IPs that need
# to connect exclusively (security purposes)
per_source = UNLIMITED
}
If I telnet to a PXC node on port 9200, I get:
telnet 192.168.0.101 9200
Trying 192.168.0.101...
Connected to 192.168.0.101.
Escape character is '^]'.
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Connection: close
Content-Length: 57
Percona XtraDB Cluster Node is not synced or non-PRIM.
Connection closed by foreign host.
The most common reason for this is that all nodes in the cluster are down. If you have enabled your HAProxy stats, check that all nodes are up. If they are not, your mysqlchk service is likely not able to connect to the cluster nodes properly.
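To see why a node is failing the check, you can also query its Galera state directly; a sketch (a healthy node reports wsrep_local_state_comment = Synced and wsrep_cluster_status = Primary):
mysql -h 192.168.0.101 -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment'; SHOW STATUS LIKE 'wsrep_cluster_status';"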
Check that your mysqlchk xinetd service has the proper server_args configured. Once these are set, restart xinetd and telnet to port 9200 to validate:
[root@node02 log]# cat /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
...
server = /usr/bin/clustercheck
server_args = percona percona
...
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local"
}
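Note that clustercheck logs in to MySQL with the user/password given in server_args, so that account has to exist on every node. A minimal sketch, assuming the percona/percona credentials used above:
# run on each PXC node; credentials must match server_args
mysql -u root -p -e "CREATE USER IF NOT EXISTS 'percona'@'localhost' IDENTIFIED BY 'percona'; GRANT PROCESS ON *.* TO 'percona'@'localhost';"
# then restart xinetd and re-test the health check
sudo service xinetd restart
telnet 192.168.0.101 9200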
UPDATE:
A more thorough procedure, including making sure mysqlchk is configured, can be found here: https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/virt_sandbox.html

HAProxy not running stats socket

I installed haproxy from the AUR in Arch Linux and modified the config file a bit:
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    stats socket /run/haproxy/haproxy.sock mode 660 level admin
    stats timeout 30s
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

defaults
    mode http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics

frontend www-http
    bind 127.0.0.1:80
    default_backend www-backend

backend www-backend
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    timeout queue 30s
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check
I have made sure that the directory /run/haproxy exists and has permissions for the user haproxy to write to it:
ツ ls -al /run/haproxy
total 0
drwxr-xr-x 2 haproxy root 40 May 13 21:37 .
drwxr-xr-x 27 root root 720 May 13 22:00 ..
When I launch haproxy using systemctl start haproxy.service, it loads fine. I can even go to the /stats page and view stats; however, socat reports the following error:
ツ sudo socat unix-connect:/run/haproxy/haproxy.sock stdio
2016/05/13 22:04:11 socat[24202] E connect(5, AF=1 "/run/haproxy/haproxy.sock", 27): No such file or directory
I am at my wits' end and unable to understand what is happening. This is what I get from journalctl -xe:
May 13 21:56:31 rohanarch.local systemd[1]: Starting HAProxy Load Balancer...
May 13 21:56:31 rohanarch.local systemd[1]: Started HAProxy Load Balancer.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: haproxy-systemd-wrapper: executing /usr/bin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: [WARNING] 133/215631 (20456) : config : missing timeouts for frontend 'www-http'.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | While not properly invalid, you will certainly encounter various problems
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | with such a configuration. To fix this, please ensure that all following
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Basically, there are no errors or warnings, not even so much as a mention of the stats socket. Others who have faced a problem with the stats socket fail to get haproxy started at all. In my case, it starts up fine, but the socket just isn't being created.
You need to create the directory yourself. Please ensure /run/haproxy exists. If it doesn't, first create it with:
sudo mkdir /run/haproxy
This should resolve your issue.
Try making SELinux permissive with the command below, and restart the HAProxy service.
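A minimal sketch of that (setenforce 0 only lasts until reboot; edit /etc/selinux/config to make the change permanent):
sudo setenforce 0                 # switch SELinux to permissive mode
sudo systemctl restart haproxy    # restart the HAProxy service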

HAProxy doesn't start, can not bind UNIX socket [/run/haproxy/admin.sock]

I'm trying to start haproxy (version 1.5.8 2014/10/31) with an "empty" config file and I get:
user@server:~$ sudo service haproxy start
[....] Starting haproxy: haproxy[ALERT] 126/120540 (7363) : Starting frontend GLOBAL: cannot bind UNIX socket [/run/haproxy/admin.sock]
although it's enabled:
user@server:~$ cat /etc/default/haproxy
# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
Configuration file:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http
Does anyone have an idea why it can't start?
HAProxy needs to write to /run/haproxy/admin.sock, but it won't create the directory for you. Create the directory /run/haproxy/ first, or set the stats socket to a different path.
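Either fix works; a sketch of both (the second option assumes /var/lib/haproxy exists, as it does on Debian-based installs):
# option 1: create the directory before starting HAProxy
sudo mkdir -p /run/haproxy
# option 2: in the global section, point the socket at a directory that already exists
stats socket /var/lib/haproxy/admin.sock mode 660 level admin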
I ran into this problem and had to remove the /run/haproxy/admin.sock file for HAProxy to restart successfully. I can only think it became corrupted after I aborted a yum update command. Oops! 😅
After updating pfSense from 2.4.5 to 2.5.2 I was facing this issue.
As @datacarl said, running mkdir -p /run/haproxy/ from the pfSense CLI works great.
A couple of things with this (I know it's not the newest conversation):
Anything I create in the /run folder disappears after reboot.
If I move the socket to /var/lib/haproxy rather than /run/haproxy, it starts fine manually as root.
If I reboot, it fails. Not sure if it's because it's trying to run haproxy as the haproxy user on boot? If I run su haproxy, it says the account isn't available, but I think that's because its shell is set to nologin.
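On systemd-based distros, /run is a tmpfs, which is exactly why anything created there disappears after a reboot. A sketch of the usual fix, a tmpfiles.d entry that recreates the directory at boot (assuming a haproxy user and group exist):
# /etc/tmpfiles.d/haproxy.conf
d /run/haproxy 0755 haproxy haproxy -
Apply it immediately with sudo systemd-tmpfiles --create, then reboot to confirm the socket comes up on its own.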

openldap fails to bind ldaps://127.0.0.1:636

Here is my testcase :
[root@192.168.121.130 ~]$ slapd -d 1 -h ldaps://127.0.0.1:636
@(#) $OpenLDAP: slapd 2.4.23 (Apr 29 2013 07:47:08) $
    mockbuild@c6b7.bsys.dev.centos.org:/builddir/build/BUILD/openldap-2.4.23/openldap-2.4.23/build-servers/servers/slapd
ldap_pvt_gethostbyname_a: host=centos-6.3, r=0
daemon_init: listen on ldaps://127.0.0.1:636
daemon_init: 1 listeners to open...
ldap_url_parse_ext(ldaps://127.0.0.1:636)
daemon: bind(7) failed errno=98 (Address already in use)
slap_open_listener: failed on ldaps://127.0.0.1:636
slapd stopped.
connections_destroy: nothing to destroy.
But if I change to another port, such as 6361, it works.
My environment:
OS: centos 6.4 x86_64
OpenLDAP: 2.4.23 installed by yum
Any suggestion?
It seems that another service is already running on port 636:
daemon: bind(7) failed errno=98 (Address already in use)
You can try the following command to identify this service:
netstat -tulpn | grep ':636 ' | grep 'LISTEN'
Old post, but still ...
This error is also displayed when SELinux prevents slapd from starting. Personally, I experienced this after manually copying data (/var/lib/ldap/) from another server to this one. I had to restore the imported files to their default SELinux security contexts:
restorecon -R /var/lib/ldap
And I see this doesn't apply to you, but this might also happen if you're attempting to bind slapd to an out-of-the-ordinary port. By default on CentOS 7, these are the allowed ports:
# semanage port -l | grep ldap
ldap_port_t tcp 389, 636, 3268, 7389
ldap_port_t udp 389, 636
Adding another port to the allowed range can be done with semanage (you might need to install the package policycoreutils-python):
semanage port -a -t ldap_port_t -p tcp 10389
... if you wish to allow slapd to bind on TCP port 10389 in addition to the four listed above. After this, the previous result would look like:
# semanage port -l | grep ldap
ldap_port_t tcp 10389, 389, 636, 3268, 7389
ldap_port_t udp 389, 636