I have mongod and docker running on a host.
Within the container I want to access mongod, but I get:
root@bac0e41ed475:/opt/test# telnet 10.1.1.1 27017
Trying 10.1.1.1...
telnet: Unable to connect to remote host: Connection refused
Am I missing something simple here?
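Connection refused from the container usually means nothing on the host is listening on 10.1.1.1:27017; by default mongod binds only to 127.0.0.1. A minimal sketch of what to check, assuming the host config lives at /etc/mongod.conf (adjust the path for your install):

# On the host: confirm mongod is only listening on loopback
sudo netstat -lntp | grep 27017

# In /etc/mongod.conf, widen the bind address (0.0.0.0 listens on all
# interfaces; a narrower list such as "127.0.0.1,10.1.1.1" also works):
# net:
#   bindIp: 0.0.0.0

sudo systemctl restart mongod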
My Docker Postgres instance can't be connected to from the internet.
I think it is because Docker maps it to localhost.
root@VM01:~/docker# docker port postgres
5432/tcp -> 127.0.0.1:5432
I am new to Docker and I would like to try remapping that to
5432/tcp -> 0.0.0.0:5432
to see if I can then connect remotely over the internet.
root@VM01:~/docker# netstat -na | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
Does anyone have experience doing this, or advice on whether it might work?
I have another Docker container on the same host whose mapping shows 0.0.0.0:8000, and telnet from any machine on the internet shows it is accessible.
Not this one though:
127.0.0.1:5432->5432/tcp
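The published address is fixed when the container is created, so the mapping can't be changed in place; the container has to be recreated with a different -p binding. A minimal sketch, assuming the container is named postgres and keeps its data in a named volume (the volume name and image tag are assumptions):

# current binding (127.0.0.1 means host-only access)
docker port postgres

# recreate the container, publishing the port on all interfaces
docker stop postgres && docker rm postgres
docker run -d --name postgres \
  -p 0.0.0.0:5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:9.6

(-p 5432:5432 without an address has the same effect, since 0.0.0.0 is the default.)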
I am running 2 VMs with Vagrant on a private network, like:
host1: 192.168.1.1/24
host2: 192.168.1.2/24
On host1, the app listens on port 6443, but it cannot be reached from host2:
# host1
root@host1:~# ss -lntp | grep 6443
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=10537,fd=7))
# host2
root@host2:~# nc -zv -w 3 192.168.1.1 6443
nc: connect to 192.168.1.1 port 6443 (tcp) failed: Connection refused
(Actually, the app is the kube-apiserver, and joining host2 to the cluster as a worker node with kubeadm fails.)
What am I missing?
Both hosts are Ubuntu Focal (box_version '20220215.1.0'), and ufw is inactive on both.
After changing the hosts' IPs, it works:
host1: 192.168.1.1/24 -> 192.168.1.2/24
host2: 192.168.1.2/24 -> 192.168.1.3/24
I guess the problem was caused by using the first IP of the subnet, 192.168.1.1, which is reserved as the gateway address.
I'll add references about that here later; I have to set up the k8s cluster for now.
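For reference, a sketch of the relevant Vagrantfile network settings after the change (the box name and VM definitions are assumptions based on the question), keeping both guests off the subnet's first address:

Vagrant.configure("2") do |config|
  config.vm.define "host1" do |h|
    h.vm.box = "ubuntu/focal64"
    # 192.168.1.1 is left free for the hypervisor/gateway
    h.vm.network "private_network", ip: "192.168.1.2"
  end
  config.vm.define "host2" do |h|
    h.vm.box = "ubuntu/focal64"
    h.vm.network "private_network", ip: "192.168.1.3"
  end
end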
I have a K8s cluster that was working properly, but because of a power failure all the nodes got rebooted.
At the moment I have some problems recovering the master (and the other nodes):
sudo systemctl kubelet status returns Unknown operation kubelet., but when I run kubeadm init ... (the command I originally set up the cluster with) it returns:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
When I checked those ports, I could see that kubelet and other K8s components are using them:
~/k8s-multi-node$ sudo lsof -i :10251
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-sche 26292 root 3u IPv6 104933 0t0 TCP *:10251 (LISTEN)
~/k8s-multi-node$ sudo lsof -i :10252
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-cont 26256 root 3u IPv6 115541 0t0 TCP *:10252 (LISTEN)
~/k8s-multi-node$ sudo lsof -i :10250
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kubelet 24781 root 27u IPv6 106821 0t0 TCP *:10250 (LISTEN)
I tried to kill them, but they start using those ports again.
My second problem is that, because of the power failure, my machines don't have internet access at the moment.
So what is the proper way to recover such a cluster? Do I need to remove kubelet and all the other components and install them again?
You need to first stop kubelet with sudo systemctl stop kubelet.service.
After that, run kubeadm reset and then kubeadm init. Note that this will clean up the existing cluster and create a new cluster altogether.
Regarding the proper way to recover, check this question.
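Put together, a sketch of that sequence on the master (kubeadm reset wipes the existing control-plane state, so only do this if rebuilding the cluster is acceptable; the init flags are whatever was used originally):

sudo systemctl stop kubelet.service
sudo kubeadm reset      # clears /etc/kubernetes/manifests and /var/lib/etcd, which is what the preflight errors complain about
sudo kubeadm init ...   # re-run with the original flags; kubeadm starts kubelet again itself

(The Unknown operation kubelet. message in the question is just argument order: it should be systemctl status kubelet, not systemctl kubelet status.)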
The following command:
sudo -u postgres pg_basebackup -h master-ip -D /var/lib/postgresql/9.3/main -U rep -v -P -x
is returning the following error:
pg_basebackup: could not connect to server: could not connect to server: Connection refused
Is the server running on host "master-ip" and accepting
TCP/IP connections on port 5432?
even though the master EC2 instance has the following security-group rule:
Custom TCP Rule TCP 5432 slave-ip/32
and the master's /etc/postgresql/9.3/main/pg_hba.conf file includes:
host all all 127.0.0.1/32 md5
host all ubuntu master-ip/32 trust
host all ubuntu slave-ip/32 md5
host all postgres slave-ip/32 md5
host replication replicator slave-ip/32 md5
Both master and slave instances are EC2 instances, but created from different accounts.
What is wrong here?
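Connection refused points at a listener or network problem rather than pg_hba.conf (a pg_hba.conf mismatch would produce an authentication error instead). A sketch of what to verify, using the PostgreSQL 9.3 Ubuntu paths from the question (master-ip and slave-ip remain placeholders):

# On the master: is postgres listening beyond loopback?
sudo netstat -lntp | grep 5432
grep listen_addresses /etc/postgresql/9.3/main/postgresql.conf
# want: listen_addresses = '*'   (then restart postgres)

# From the slave: is the port reachable at all?
nc -zv -w 3 master-ip 5432

One more thing worth checking, since the instances are in different AWS accounts: if they talk over public IPs, the security-group rule has to allow the slave's public or Elastic IP rather than its private /32 for the connection to get through at all.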
I bought a VPS from DigitalOcean with the Rails + Unicorn + Nginx application stack. I installed PostgreSQL 9.1 and am trying to accept remote connections to it. I read all of the solutions to this problem (Googled a lot) and followed them exactly. The problem is the following:
psql: could not connect to server: Connection refused
Is the server running on host "xxx.xxx.xxx.xxx" and accepting
TCP/IP connections on port 5432?
I edited postgresql.conf and set listen_addresses = '*'.
I edited pg_hba.conf and added host all all 0.0.0.0/0 md5.
I restarted the PostgreSQL service and even the VPS; however, I still cannot connect to the database. So I checked the server's listening ports:
netstat -an | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
unix 2 [ ACC ] STREAM LISTENING 8356 /tmp/.s.PGSQL.5432
and then I nmap'ed the server:
Not shown: 996 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
80/tcp open http
554/tcp open rtsp
But I still cannot understand why PostgreSQL is not serving on port 5432 after these configuration changes. I need advice.
Thanks.
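Since netstat still shows only 127.0.0.1:5432 (and nmap reports port 5432 as closed rather than filtered, so it isn't a firewall), the running server evidently never picked up listen_addresses = '*'; a common cause is editing a different postgresql.conf than the one actually loaded, or the restart not taking effect. A sketch for verifying this, assuming the Debian/Ubuntu 9.1 layout:

sudo -u postgres psql -c "SHOW config_file;"       # the file the running server really loaded
sudo -u postgres psql -c "SHOW listen_addresses;"  # should report * after a successful restart
sudo pg_ctlcluster 9.1 main restart                # or: sudo service postgresql restart
sudo netstat -lnt | grep 5432                      # expect 0.0.0.0:5432 once it takes effect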