The following command:
sudo -u postgres pg_basebackup -h master-ip -D /var/lib/postgresql/9.3/main -U rep -v -P -x
is returning the following error:
pg_basebackup: could not connect to server: could not connect to server: Connection refused
Is the server running on host "master-ip" and accepting
TCP/IP connections on port 5432?
even though the master EC2 instance's security group has the following inbound rule:
Custom TCP Rule TCP 5432 slave-ip/32
and the master's /etc/postgresql/9.3/main/pg_hba.conf file includes:
host all all 127.0.0.1/32 md5
host all ubuntu master-ip/32 trust
host all ubuntu slave-ip/32 md5
host all postgres slave-ip/32 md5
host replication replicator slave-ip/32 md5
Both master and slave are EC2 instances, but they were created under different AWS accounts.
What is wrong here?
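In case it helps, this is how I planned to double-check that the master is actually listening on its private address (paths assume the stock Ubuntu layout for 9.3):
# on the master
sudo -u postgres psql -c "SHOW listen_addresses;"               # 'localhost' only would explain the refusal
netstat -lnt | grep 5432                                        # should show 0.0.0.0:5432 or the private IP, not just 127.0.0.1
grep listen_addresses /etc/postgresql/9.3/main/postgresql.conf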
My Docker postgres instance can't be connected to from the internet.
I think that is because Docker has mapped it to localhost.
root@VM01:~/docker# docker port postgres
5432/tcp -> 127.0.0.1:5432
I am new to Docker and I would like to try remapping that to
5432/tcp -> 0.0.0.0:5432
to see if I can then connect to it remotely over the internet.
root@VM01:~/docker# netstat -na | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
Does anyone have experience doing this, or advice on whether it might work?
I have another Docker container on the same host that shows 0.0.0.0:8000, and telnet from any machine on the internet shows it is accessible.
Not this one though:
127.0.0.1:5432->5432/tcp
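From what I've read, the published ports of an existing container can't be changed in place, so I expect I'd have to recreate it with a new -p mapping. A sketch of what I have in mind, with the image name and the -e/-v options as placeholders for whatever my original container used:
docker stop postgres && docker rm postgres
# publish on all interfaces instead of 127.0.0.1; re-add whatever volume/env options the original container used
docker run -d --name postgres -p 0.0.0.0:5432:5432 -e POSTGRES_PASSWORD=example postgres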
I'm running 2 VMs with Vagrant on a private network like:
host1: 192.168.1.1/24
host2: 192.168.1.2/24
On host1, the app listens on port 6443, but it cannot be reached from host2:
# host1
root@host1:~# ss -lntp | grep 6443
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=10537,fd=7))
# host2
root@host2:~# nc -zv -w 3 192.168.1.1 6443
nc: connect to 192.168.1.1 port 6443 (tcp) failed: Connection refused
(Actually, the app is the kube-apiserver, and joining host2 to the cluster as a worker node with kubeadm fails.)
What am I missing?
Both are Ubuntu Focal (box_version '20220215.1.0') and ufw is inactive on both.
After changing the hosts' IPs, it works:
host1: 192.168.1.1/24 -> 192.168.1.2/24
host2: 192.168.1.2/24 -> 192.168.1.3/24
I guess the problem was caused by using the first IP of the subnet, 192.168.1.1, which is reserved for the gateway.
I'll add references about that here later; I have to set up the k8s cluster for now.
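For reference, the Vagrantfile change amounts to this (a sketch only; the machine names and the ubuntu/focal64 box are just what I assume from my setup):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"        # box name assumed; box_version '20220215.1.0' in my case
  config.vm.define "host1" do |h|
    h.vm.network "private_network", ip: "192.168.1.2"
  end
  config.vm.define "host2" do |h|
    h.vm.network "private_network", ip: "192.168.1.3"
  end
end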
I can't seem to get a node to join the cluster.
[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
I0702 11:09:08.268102 10342 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info'
I0702 11:09:08.268676 10342 round_trippers.go:405] GET https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info in 0 milliseconds
I0702 11:09:08.268873 10342 round_trippers.go:411] Response Headers:
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: connect: connection refused]
The port seems closed (from the node):
telnet 10.0.2.15 6443
Trying 10.0.2.15...
telnet: Unable to connect to remote host: Connection refused
While on the master:
telnet 10.0.2.15 6443
Trying 10.0.2.15...
Connected to 10.0.2.15.
Escape character is '^]'.
^CConnection closed by foreign host.
What may be the cause of this?
Both machines are virtual machines, and 10.0.2.15 is the NAT IP, which is the same for both machines (they are independent)...
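For completeness, this is how I confirmed that 10.0.2.15 is just the default VirtualBox NAT address that each VM sees as its own (the interface name may differ on your boxes):
ip -4 addr show
# both the master and the node report inet 10.0.2.15/24 on their NAT interface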
Sigh...
In the event it is helpful to someone else:
iptables -t raw -A OUTPUT -p tcp --dport 6443 -j TRACE
iptables -t raw -A PREROUTING -p tcp --dport 6443 -j TRACE
tail -f /var/log/kern.log
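Once you are done tracing, the same rules can be removed again (mirroring the commands above):
iptables -t raw -D OUTPUT -p tcp --dport 6443 -j TRACE
iptables -t raw -D PREROUTING -p tcp --dport 6443 -j TRACE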
If you are running on VMs (e.g. Vagrant with VirtualBox), run the init command with the private IP that is used in the Vagrantfile. That way, when you run the join command on a node, it is able to reach the API server; otherwise it is not.
Syntax: kubeadm init --apiserver-advertise-address=<private-ip-address>
Example: kubeadm init --apiserver-advertise-address=192.168.33.50
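The join command on the worker then has to point at that same private IP; a sketch, with the token and CA cert hash left as placeholders:
kubeadm join 192.168.33.50:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>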
I have mongod and Docker running on a host.
From within the container I want to access mongod, but I get:
root@bac0e41ed475:/opt/test# telnet 10.1.1.1 27017
Trying 10.1.1.1 ...
telnet: Unable to connect to remote host: Connection refused
Am I missing something simple here?
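For what it's worth, the next thing I planned to check is whether mongod on the host is bound only to 127.0.0.1. Assuming the stock YAML-style /etc/mongod.conf, the relevant section looks roughly like this, and 10.1.1.1 (or 0.0.0.0) would have to be added before restarting mongod:
net:
  port: 27017
  bindIp: 127.0.0.1        # containers can't reach this; add 10.1.1.1 or use 0.0.0.0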
I bought a VPS from DigitalOcean with the Rails + Unicorn + Nginx application stack. I installed PostgreSQL 9.1 and am trying to accept remote connections to it. I read all of the solutions/problems about this (Googled a lot) and followed them exactly. The problem is the following:
psql: could not connect to server: Connection refused
Is the server running on host "xxx.xxx.xxx.xxx" and accepting
TCP/IP connections on port 5432?
I edited the postgresql.conf file and set listen_addresses = '*'
I edited the pg_hba.conf file and added host all all 0.0.0.0/0 md5
I restarted the postgresql service and even rebooted the VPS, but I still cannot connect to the database. So I checked the server's listening ports:
netstat -an | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
unix 2 [ ACC ] STREAM LISTENING 8356 /tmp/.s.PGSQL.5432
and then I nmap'ed the server:
Not shown: 996 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
80/tcp open http
554/tcp open rtsp
But I still cannot understand why PostgreSQL is not serving on port 5432 after these configuration changes. I need advice.
Thanks.
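In case it points at something obvious, here is how I intend to verify which config file the running server actually reads and what it thinks listen_addresses is, followed by a restart:
sudo -u postgres psql -c "SHOW config_file;"        # confirm this is the file I edited
sudo -u postgres psql -c "SHOW listen_addresses;"   # should report '*' after a restart
sudo service postgresql restart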