Rancher Desktop error when starting Kubernetes

My Rancher Desktop was working just fine until today, when I switched the container runtime from containerd to dockerd. When I wanted to change it back to containerd, it said:
Error Starting Kubernetes
Error: unable to verify the first certificate
Some recent logfile lines:
client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUV1eXhYdFYvTDZOQmZsZVV0Mnp5ekhNUmlzK2xXRzUxUzBlWklKMmZ5MHJvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNGdQODBWNllIVzBMSW13Q3lBT2RWT1FzeGNhcnlsWU8zMm1YUFNvQ2Z2aTBvL29UcklMSApCV2NZdUt3VnVuK1liS3hEb0VackdvbTJ2bFJTWkZUZTZ3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
2022-09-02T13:03:15.834Z: Error starting lima: Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:390:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12) {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
Tried reinstalling, factory reset, etc., but no luck. I am using version 1.24.4.

TL;DR: Try turning off Docker (or whatever else is binding to port 6443), reset Kubernetes in Rancher Desktop, then try again.
Check whether anything else is listening on port 6443, which Rancher Desktop's Kubernetes needs.
In my case, lsof -i :6443 gave me...
~ lsof -i :6443
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 63385 ~~~~~~~~~~~~ 150u IPv4 0x44822db677e8e087 0t0 TCP localhost:sun-sr-https (LISTEN)
ssh 82481 ~~~~~~~~~~~~ 27u IPv4 0x44822db677ebb1e7 0t0 TCP *:sun-sr-https (LISTEN)
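If the listener is something you do not need right now, freeing the port before resetting Kubernetes may be enough. A minimal sketch (if Docker Desktop is the holder, quitting it from its own UI is safer than killing its process; the kill is for stray processes only):
lsof -ti :6443            # print just the PIDs listening on 6443
kill $(lsof -ti :6443)    # stop them, then reset Kubernetes in Rancher Desktop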


Connection refused when deploying in cloud on kubernetes

I am deploying Kubernetes in the cloud and I'm trying to call another container inside the same pod through an API.
I am using localhost, but I also tried 127.0.0.1 and the container's name.
2022/11/04 15:50:47 dial tcp [::1]:4245: connect: connection refused
2022/11/04 15:50:47 Successfully processed file.json file
2022/11/04 15:50:47 Get "http://localhost:4245/api/admin/projects/default": dial tcp [::1]:4245: connect: connection refused
panic: Get "http://localhost:4245/api/admin/projects/default": dial tcp [::1]:4245: connect: connection refused
goroutine 1 [running]:
log.Panic({0xc000119dc8?, 0xc000166000?, 0x6aaaea?})
/opt/app-root/src/sdk/go1.19.2/src/log/log.go:388 +0x65
main.StatusServer({0xc000020570?, 0x30?}, {0x0, 0x0})
/build/script.go:197 +0x1ee
main.ProcessData({0xc000020041, 0x15}, {0x0, 0x0}, {0xc00002000f?, 0x43ce05?})
/build/script.go:291 +0xa6
main.main()
/build/script.go:443 +0xc5
Any idea if I can call the container like that?
Getting "connection refused" means you reached localhost and it refused the connection.
This is most likely because nothing is listening on the port.
If it were a firewall issue, the request would time out instead.
You can check listening ports with a command like:
netstat -an
If it is not installed, you can try it from the worker node where the pod is running.
Another way to test is:
curl http://127.0.0.1:4245
This will probably result in the same connection refused.
Are you really sure the container is running in the same pod?
Please check your deployment and service.
If you can't find the failure, please come back with more information so it can be analysed.
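To double-check the same-pod assumption, a hedged sketch (my-pod and my-container are placeholder names, not from the question):
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'   # list the containers sharing the pod's network namespace
kubectl exec my-pod -c my-container -- netstat -tlnp              # see what is actually listening (netstat, or ss, must exist in the image)
If the other container does not appear in the first command's output, it is in a different pod and must be reached through a Service name instead of localhost.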

The connection to the server x.x.x.:6443 was refused - did you specify the right host or port?

I've installed Docker, kubectl and kubeadm.
I want to create my device model and device CRDs (I'm following this guide).
So, when I run the command:
kubectl create -f devices_v1alpha1_devicemodel.yaml
as a user, I get the following output:
The connection to the server 10.0.0.68:6443 was refused - did you
specify the right host or port?
(I have added the permission for the user to access the .kube folder)
With netstat, I get:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo netstat -atunp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address       State        PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*             LISTEN       1298/sshd
tcp        0    224 10.0.0.68:22            160.98.31.160:52503   ESTABLISHED  2061/sshd: ubuntu [
tcp6       0      0 :::22                   :::*                  LISTEN       1298/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                          910/dhclient
udp        0      0 10.0.0.68:123           0.0.0.0:*                          1241/ntpd
udp        0      0 127.0.0.1:123           0.0.0.0:*                          1241/ntpd
udp        0      0 0.0.0.0:123             0.0.0.0:*                          1241/ntpd
udp6       0      0 fe80::f816:3eff:fe0:123 :::*                               1241/ntpd
udp6       0      0 2001:620:5ca1:2f0:f:123 :::*                               1241/ntpd
udp6       0      0 ::1:123                 :::*                               1241/ntpd
udp6       0      0 :::123                  :::*                               1241/ntpd
With lsof -i:
ubuntu@kubernetesmaster:~/src/github.com/kubeedge/kubeedge/build/crds/devices$ sudo lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 910 root 6u IPv4 12765 0t0 UDP *:bootpc
ntpd 1241 ntp 16u IPv6 15340 0t0 UDP *:ntp
ntpd 1241 ntp 17u IPv4 15343 0t0 UDP *:ntp
ntpd 1241 ntp 18u IPv4 15347 0t0 UDP localhost:ntp
ntpd 1241 ntp 19u IPv4 15349 0t0 UDP 10.0.0.68:ntp
ntpd 1241 ntp 20u IPv6 15351 0t0 UDP ip6-localhost:ntp
ntpd 1241 ntp 21u IPv6 15353 0t0 UDP [2001:620:5ca1:2f0:f816:3eff:fe0a:874a]:ntp
ntpd 1241 ntp 22u IPv6 15355 0t0 UDP [fe80::f816:3eff:fe0a:874a]:ntp
sshd 1298 root 3u IPv4 18821 0t0 TCP *:ssh (LISTEN)
sshd 1298 root 4u IPv6 18830 0t0 TCP *:ssh (LISTEN)
sshd 2061 root 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
sshd 2124 ubuntu 3u IPv4 18936 0t0 TCP 10.0.0.68:ssh->160.98.31.160:52503 (ESTABLISHED)
I've already tried this,
and: sudo swapoff -a
Please perform the below steps on the master node. It works like a charm.
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
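Note that swapoff -a (step 2) only lasts until the next reboot; if swap was the culprit, commenting out the swap entries in /etc/fstab keeps it off permanently. A sketch (the exact fstab layout varies by distro, so review the file first):
sudo sed -i '/ swap / s/^/#/' /etc/fstab    # comment out lines containing a swap entry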
I was facing a similar problem, with the following error while deploying the pod network into a cluster using flannel:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server 192.168.1.101:6443 was refused - did you specify the right host or port?
I performed the below steps to solve the issue:
$ sudo systemctl stop kubelet
$ sudo systemctl start kubelet
$ strace -eopenat kubectl version
then apply the yml file:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
kubelet must be down. You need to check the kubelet logs on the master and ensure the API server is running and online; only then will you be able to deploy.
I'll add another reason for this error that was the issue in my case.
I exported the wrong kubeconfig file to my shell, and the error message was very accurate in that case - the endpoint for the API server was wrong (and of course other fields like the cluster name and the certificates, but the server endpoint is the first step in the chain).
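If you suspect the same, you can check which API server endpoint your current context actually points at before anything else:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
kubectl config get-contexts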
I've encountered this problem too, and swapoff -a worked for me:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
This is because docker is down. Start docker on your machine.
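On a systemd-based machine that would be, for example:
sudo systemctl status docker    # check whether the daemon is actually down
sudo systemctl start docker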
I have tried many ways but couldn't get it work, then accidentally found the solution to my own situation:
In ~/.kube/ I have
drwxr-x--- 4 staff 128 25 Jul 22:31 cache
-rw------- 1 staff 8781 25 Jul 22:46 config
drwxr-xr-x 8 staff 256 25 Jul 22:46 configs
-rw-r--r-- 1 staff 14 25 Jul 22:31 kubectx
drwxr-xr-x 4 staff 128 29 Jun 16:59 kubens
My assumption is that something was messed up in the .kube configuration, but I couldn't figure out which files, so I removed most of the directories/files, including cache and config. (If you don't need to keep all the configs, you could remove all of them.)
Then re-enable Kubernetes from the Docker dashboard to get all the files reinstalled.
Re-select the docker-desktop context with kubectl config use-context docker-desktop.
Finally, my 6443 responded.
Another suggestion is to restart your container runtime: sudo systemctl restart containerd. In my situation I'm working with containerd, not a typical Docker setup.
After doing this I think it should fix the issue for you, but if it doesn't work, then try sudo swapoff -a.
Assuming that's all the output from your netstat command and that you ran it on the master node (the one you installed Kubernetes via kubeadm on), it looks to me like the installation did not complete correctly, as none of the usual ports you would expect to see on a Kubernetes master node are present.
Usually on a kubernetes master node you'd expect to see kube-apiserver, kube-scheduler, kube-controller, kubelet and possibly etcd all listening on the network.
What was the output of your kubeadm init command?
I ran into this issue as well. I tried the solutions noted above, but they did not work for me. Here is what worked for me:
FIX:
kubeadm init --apiserver-advertise-address=10.139.0.42 --ignore-preflight-errors all --pod-network-cidr=172.17.0.1/16 --token-ttl 0
source:
https://www.c-sharpcorner.com/article/kubernetes-installation-in-redhat-and-centos/
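Before re-running kubeadm init on a node where a previous attempt partially completed, you may need to wipe the old cluster state first (my addition, not part of the original answer; kubeadm reset asks for confirmation before removing state from the node):
sudo kubeadm reset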
I faced this issue recently due to expired certificates for my K8s cluster.
I followed this blog link to renew the certificates and also replaced the kube config file that I was using.
Note: it is important to replace the kube config after renewing the certificates, or else you will keep getting the following error message from kubectl CLIs:
error: You must be logged in to the server (Unauthorized)
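On kubeadm-managed clusters you can check and renew the certificates directly; a sketch for kubeadm 1.20+ (older versions used the kubeadm alpha certs subcommands instead):
sudo kubeadm certs check-expiration
sudo kubeadm certs renew all
sudo cp /etc/kubernetes/admin.conf ~/.kube/config     # replace the kube config after renewal
sudo chown $(id -u):$(id -g) ~/.kube/config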
I faced the same issue with the same error. You need to check that your container runtime (docker/containerd) is active and running:
systemctl status containerd
systemctl restart containerd
systemctl restart kubelet
Then if you check its status, it should be up and running, and you will be able to create k8s objects again.
In my case KUBECONFIG was the problem causing the same error.
This solved it:
export KUBECONFIG=/home/$(whoami)/.kube/config
I solved this exact problem by making sure that in the /etc/hosts file the IP address and hostname were set correctly:
192.168.10.11 kube-01.testing kube-01
instead of:
127.0.1.1 kube-01.testing kube-01
(or 127.0.0.1).
This happens on Ubuntu and Debian, as far as I know.
If you did all the above steps (sudo swapoff -a, kubeconfig file permissions, kubelet and containerd status, etc.) and nothing worked for you, it is a good idea to take a look at the kubelet logs:
journalctl -xeu kubelet
In my case, I realized that the kubelet wanted to download images but was failing to.
So I turned on the VPN on the node, and after a couple of seconds, everything worked!
I have faced the same issue, "The connection to the server {IP}:6443 was refused - did you specify the right host or port?"
The reason was that since Kubernetes 1.24, dockershim has been removed.
So, when installing a Kubernetes cluster with kubeadm and using Docker as the container runtime, cri-dockerd must be installed as well (otherwise I got the above error).
As mentioned in the Kubernetes documentation:
On each of your nodes, install Docker for your Linux distribution as per Install Docker Engine.
Install cri-dockerd, following the instructions in that source code repository.
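With cri-dockerd installed, kubeadm also has to be pointed at its socket, since the node may now expose more than one CRI endpoint. A sketch, assuming cri-dockerd's default socket path:
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock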
As the error message clearly says, the connection to port 6443 was refused.
This means either:
the port is blocked by a firewall, or,
if there is no firewall, nothing is listening on port 6443 on the specified host. You can cross-verify using the below command:
netstat -tulpn | grep -i 6443
Solution:
6443 is the kube-apiserver port in Kubernetes. If it is not listening, make sure kube-apiserver is running properly. I faced the same problem and fixed it by setting the correct arguments for the API server on that port:
/usr/local/bin/kube-apiserver \
--advertise-address=${INTERNAL_IP} \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--enable-admission-plugins=NodeRestriction,ServiceAccount \
--enable-swagger-ui=true \
--enable-bootstrap-token-auth=true \
--etcd-cafile=/var/lib/kubernetes/ca.crt \
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \
--etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/service-account.crt \
--service-cluster-ip-range=10.96.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
--v=2

How to use a non-22 SSH port in Visual Studio Code Insiders remote development?

SSH login with key authentication is working fine.
C:\Users\${DEVELOPER_NAME}>ssh ${HOST_IP_ADDRESS} -l ${DEVELOPER_NAME} -p ${SSHD_PORT} -i D:\prefix\PuTTY\${OPENSSH_FORMAT_PRIVATE_KEY}
Last login: Sun May 5 15:27:50 2019 from 10.40.171.44
Welcome to ...
[${DEVELOPER_NAME}@${HOST_AKA} ~]$
but sshd is running on port 36000, not the default port 22. How can I tell that to the VS Code Remote-SSH plugin?
Host ${DEVELOPER_NAME}@${HOST_IP_ADDRESS}
HostName ${HOST_IP_ADDRESS}:${SSHD_PORT}
User ${DEVELOPER_NAME}
Port ${SSHD_PORT}
IdentityFile D:\prefix\PuTTY\${OPENSSH_FORMAT_PRIVATE_KEY}
This way gives me:
Can't connect to ${DEVELOPER_NAME}@${HOST_IP_ADDRESS}: unreachable or not Linux x86_64 (ssh: connect to host ${HOST_IP_ADDRESS} port 22: Connection refused)
and
${DEVELOPER_NAME}@${HOST_IP_ADDRESS}:${SSHD_PORT}
in Remote-SSH: Connect to Host... gives me
Can't connect to ${DEVELOPER_NAME}@${HOST_IP_ADDRESS}:${SSHD_PORT}: unreachable or not Linux x86_64 (ssh: Could not resolve hostname ${HOST_IP_ADDRESS}:${SSHD_PORT}: Name or service not known)
Thanks to your question, I solved it. You may have already figured it out ... You do not need to append the port to HostName. Just set Port separately and it works:
Host ${HOST_NICKNAME}
User ${USER_ID_HOST}
HostName ${HOST_IP_ADDRESS}
Port ${SSHD_PORT}
IdentityFile ~/.ssh/id_rsa-remote-ssh
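You can sanity-check the entry from a terminal before trying it in VS Code; plain ssh resolves the same Host alias from ~/.ssh/config:
ssh ${HOST_NICKNAME}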

Mongo can't connect to remote instance

I installed MongoDB on a remote server via Vagrant. I can access Postgres from my local system, but Mongo is not available. When I log in via ssh and check the mongo status, it says that mongo is running, and I can make queries too. When I try to connect from my local system using this command:
mongo 192.168.192.168:27017
I get an error
MongoDB shell version: 2.6.5
connecting to: 192.168.192.168:27017/test
2014-12-27T22:19:19.417+0100 warning: Failed to connect to 192.168.192.168:27017, reason: errno:111 Connection refused
2014-12-27T22:19:19.418+0100 Error: couldn't connect to server 192.168.192.168:27017 (192.168.192.168), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
It looks like mongo is not listening for connections from other IPs? I commented out bind_ip in the mongo settings, but it doesn't help.
Services for 192.168.192.168 via nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
5432/tcp open postgresql
9000/tcp open cslistener
Looks like mongod is listening:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
mongod 1988 mongodb 6u IPv4 5407 0t0 TCP *:27017 (LISTEN)
mongod 1988 mongodb 8u IPv4 5411 0t0 TCP *:28017 (LISTEN)
Firewall rules
sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Update
My mongo config
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
#bind_ip = 127.0.0.1
#port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
# Enables periodic logging of CPU utilization and I/O wait
#cpu = true
# Turn on/off security. Off is currently the default
#noauth = true
#auth = true
The solution is to change the mongo configuration:
bind_ip = 0.0.0.0
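After changing the config, restart mongod so the new bind address takes effect, then verify the listener again (the service name may be mongod or mongodb depending on the install):
sudo service mongodb restart    # or: sudo systemctl restart mongod
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
Bear in mind that bind_ip = 0.0.0.0 exposes MongoDB on all interfaces, so make sure authentication or a firewall restricts who can reach port 27017.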

Force quit play framework application

I am unable to bind to my regular port 9000 with the typical error message:
[error] org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:9000
However, I do not have anything currently running on that port..
Checking what port 9000 is listening on:
sudo lsof -i -P | grep "9000"
gives me:
java 2642 ow 137u IPv6 0xe9a3870d7acf02fd 0t0 TCP *:9000 (LISTEN)
java 2642 ow 142u IPv6 0xe9a3870d7e430f1d 0t0 TCP localhost:9000->localhost:62403 (CLOSE_WAIT)
java 2642 ow 156u IPv6 0xe9a3870d856676dd 0t0 TCP localhost:9000->localhost:60860 (CLOSE_WAIT)
Any idea how to close this?
Edit
Turns out Google Chrome is using my port 9000, which is kind of weird:
Google 51558 ow 125u IPv4 0xe9a3870d8683581d 0t0 TCP localhost:61238->localhost:9000 (ESTABLISHED)
When I killed it, Chrome crashed.
Guess I'll have to start using a different port!
Is Play not running anymore?
Otherwise, for reference, one can find the Play process with ps auxwww | grep play and kill it with kill <pid> or kill -9 <pid>.
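As a convenience (my one-liner, not from the answer above), the lookup and the kill can be combined:
kill $(lsof -ti tcp:9000)       # send TERM to whatever is listening on 9000
kill -9 $(lsof -ti tcp:9000)    # only if it ignores TERM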
I had the same issue with the Play framework using scalaVersion := "2.11.7":
java 19068 ecamur 342u IPv6 40371923 0t0 TCP *:9000 (LISTEN)
I killed it using the below command:
kill -9 19068
It turned out nothing had crashed; I ran the application again without any issue.
I often have the same problem when my Play application hangs without releasing the socket.
The easiest solution I found is to restart the network interface:
ifconfig en0 down
ifconfig en0 up
(Assuming en0 is your main interface)