Docker Desktop on Windows 11: kubectl get pods gives error on my laptop: The connection to the server 127.0.0.1:49994 was refused - kubernetes

I want to run Kubernetes locally so I can try it out on a small Java program I have written.
I installed WSL2 and Docker Desktop on my Windows 11 laptop, and enabled Kubernetes in the Docker Desktop settings.
There are a number of SO questions with the same error, but none of them seem to cover Windows 11 with Docker Desktop.
I open a terminal and type wsl to get a Linux shell. Then I issue the command:
$ kubectl get pods
but I see:
The connection to the server 127.0.0.1:49994 was refused - did you specify the right host or port?

Using Docker Desktop and Kubernetes on Ubuntu Linux, I got the same error. In my case Docker Desktop was also unable to start normally because I already had a separate Docker installation on the machine, which left the Docker context pointing at the default Docker environment instead of the required Docker Desktop one.
Confirm the following first:
Make sure kubectl is correctly installed and that the ~/.kube/config file exists and is correctly configured on your machine, because it holds the cluster information and the port that kubectl connects to.
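For example, a couple of quick sanity checks from the WSL shell (a sketch, assuming the default Docker Desktop WSL integration):
kubectl version --client      # confirms the kubectl binary is on your PATH
ls -l ~/.kube/config          # the file that holds the cluster address and credentials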
Check the current context with
kubectl config view
If current-context is not set to docker-desktop (a correct configuration looks like the example below),
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
then set the kubectl context to Docker Desktop on your machine:
kubectl config use-context docker-desktop
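You can confirm the switch took effect with something like the following (the node name assumes a standard Docker Desktop cluster):
kubectl config current-context   # should now print docker-desktop
kubectl get nodes                # a single node called docker-desktop should be Ready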
If that doesn't solve your issue, you may have to check for Windows 11 specific Docker Desktop Kubernetes configuration or features.
Check also:
Docker Desktop Windows FAQs

Related

Configure Kubectl to connect to a local network Kubernetes cluster

I'm trying to connect to a Kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC, but I don't know how to do this manually.
When I've connected to a cluster before on AKS, I would use az aks get-credentials, and this would correctly add an entry to .kube/config and change the context to it. I'm basically trying to do this but on a local network.
I've tried to add an entry into kubeconfig but get The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?. I've also checked my antivirus on the Windows computer and no requests are getting blocked.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {CERT}
    server: https://192.168.1.XXX:6443
  name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
  user:
    client-certificate-data: {CERT}
    client-key-data: {KEY}
I've also tried using kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods which results in the same error: The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?.
Many thanks.
From your Mac, first check whether the port is open, for example with nc -zv 192.168.<your-windows-ip> 6443. If it doesn't respond with "open", you have a network problem.
Try adding this directly in the config file:
clusters:
- cluster:
    server: https://192.168.1.XXX:6443
    insecure-skip-tls-verify: true
  name: windows-docker-desktop
You don't need to set the context explicitly, since you only have one.
To be sure it is not your firewall, disable it for a very short period, only to test the connection.
Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than one node, you need to install a network fabric in your cluster such as Flannel or Calico (an example install command follows the links below).
https://projectcalico.docs.tigera.io/about/about-calico
https://github.com/flannel-io/flannel
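For example, Flannel is usually installed by applying its manifest to the cluster; a sketch, noting that the exact manifest URL can change between releases, so check the README linked above:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml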

Kubectl commands cannot be executed from another VM

I'm having an issue when executing kubectl commands. My cluster consists of one master and one worker node, and kubectl commands run from the master without any issue. I also have another VM that I use as a jump server to log in to the master and worker nodes, and I need to execute kubectl commands from that jump server. I created the .kube directory, copied the kubeconfig file from the master node to the jump server, and set the context correctly. But kubectl commands hang when executed from the jump server and give a timeout error.
Below is the information.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout
ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes
Everything seems to be OK to me, and I'm wondering why kubectl commands hang when executing them from the jump server.
I troubleshot the issue by verifying whether the jump VM could reach the Kubernetes master node, using telnet as below.
telnet <ip-address-of-the-kubernetes-master-node> 6443
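If telnet isn't available on the jump server, curl against the API server port works as a reachability check too (even an authorization error in the response proves the TCP connection itself is fine; a timeout or refusal points to a network problem):
curl -k https://10.240.0.30:6443/version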
Since the error was a connection timeout, I had to add a firewall rule for the Kubernetes master node, as below. Note: in my case I'm using GCP.
gcloud compute firewall-rules create allow-kubernetes-apiserver \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes \
  --source-ranges 0.0.0.0/0
Then I was able to telnet to the master node without any issue. If you still can't connect to the master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the master node.
Then switch the context using the command below.
kubectl config use-context <context-name>
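If you'd rather not edit the kubeconfig by hand, the server address can also be rewritten with kubectl; a sketch, assuming the cluster entry is named kubernetes as in the config above, with the public IP left as a placeholder:
kubectl config set-cluster kubernetes --server=https://<master-public-ip>:6443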

Kubernetes: Unable to connect to the server: Service Unavailable [closed]

After reboot of kubernetes master node, "kubectl" command throwing this error:
Unable to connect to the server: Service Unavailable
Kindly suggest how to start the service or fix this issue.
# kubectl get nodes
Unable to connect to the server: Service Unavailable
#
# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: Service Unavailable
#
# echo $KUBECONFIG
/root/.kube/config
#
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
#
You did not specify which container runtime your cluster is using, so I'm going to assume it is Docker, and that you are running Linux.
Most probably kube-apiserver is not running; you have to start or restart it to solve your issue.
First, check whether the Docker daemon is running. If not, start or restart it:
$ systemctl status docker
# systemctl start docker
or
# systemctl restart docker
Once the Docker daemon is running, check whether the kube-apiserver container is running:
# docker ps -a
Look for a container whose COMMAND is kube-apiserver. If it's running, you are done; if not, restart it:
# docker restart <container ID>
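With the Docker runtime, the kubelet usually names control-plane containers with a k8s_ prefix, so a filtered lookup and restart might look like this (the filter pattern is an assumption about your container names):
docker ps -a --filter "name=k8s_kube-apiserver" --format "{{.ID}}  {{.Status}}"
docker restart $(docker ps -aq --filter "name=k8s_kube-apiserver" | head -n 1)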
To ensure you won't have the same problem in the future, configure Docker to start on boot
# systemctl enable docker.service
# systemctl enable containerd.service
If your distribution does not use systemd, use service instead of systemctl:
# service <service name> status/stop/start/restart
In the commands above, $ means run as a normal user and # means run as root.

GCP credentials with skaffold for local development

What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development?
When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?
You didn't mention what kind of Kubernetes cluster you have deployed locally, but if you use Minikube it can actually be achieved in a very similar way.
Suppose you have already initialized your Cloud SDK locally by running:
gcloud auth login
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project name>
gcloud config set project <project name>
so that you can run gcloud commands on the local machine on which Minikube is installed. You can easily delegate this access to Pods created either by Skaffold or manually on Minikube.
You just need to start your Minikube as follows:
minikube start --mount=true --mount-string="$HOME/.config/gcloud/:/home/docker/.config/gcloud/"
To keep things simple, I'm mounting the local Cloud SDK config directory into the Minikube host, using /home/docker/.config/gcloud/ as the mount point.
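To verify the mount actually landed inside the Minikube VM (same path as chosen above), something like this should list your local gcloud configuration:
minikube ssh -- ls -la /home/docker/.config/gcloud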
Once it is available on the Minikube host VM, it can easily be mounted into any Pod. We can use one of the Cloud SDK Docker images available here, or any other image that comes with the Cloud SDK preinstalled.
A sample Pod to test this out may look like the one below:
apiVersion: v1
kind: Pod
metadata:
  name: cloud-sdk-pod
spec:
  containers:
  - image: google/cloud-sdk:alpine
    command: ['sh', '-c', 'sleep 3600']
    name: cloud-sdk-container
    volumeMounts:
    - mountPath: /root/.config/gcloud
      name: gcloud-volume
  volumes:
  - name: gcloud-volume
    hostPath:
      # directory location on host
      path: /home/docker/.config/gcloud
      # this field is optional
      type: Directory
After connecting to the Pod by running:
kubectl exec -ti cloud-sdk-pod -- /bin/bash
we'll be able to execute gcloud commands just as we can on the local machine.
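Inside the Pod, a quick check that the mounted credentials are picked up might look like this (the output will reflect whatever account and project you authorized locally):
gcloud auth list       # should show your local account as active
gcloud config list     # should show the project set with gcloud config set project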
If you're using Skaffold with a Minikube cluster, you can use Minikube's gcp-auth addon, which does exactly this (described here in detail).
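Enabling it is a one-liner (behaviour may differ between Minikube versions, so check the addon docs):
minikube addons enable gcp-auth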

Error while running kubectl commands

I have recently installed minikube and kubectl. However, when I run kubectl get pods or any other kubectl command, I get the error
Unable to connect to the server: unexpected EOF
Does anyone know how to fix this? I am using Ubuntu Server 16.04. Thanks in advance.
The following steps can be used for further debugging.
Check the Minikube local cluster status using the minikube status command:
$: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 172.0.x.y
If the problem is with the kubectl configuration, then configure it using the kubectl config use-context minikube command:
$: kubectl config use-context minikube
Switched to context "minikube".
Check the cluster status using the kubectl cluster-info command:
$: kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Note: it can even be due to a very simple reason, such as internet speed (it happened to me just now).
I had the same problem too. I solved it after changing the server address to localhost:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/minikube/certs/ca.crt
    server: https://localhost:8443 # check it
  name: m01
...
users:
- name: m01
  user:
    client-certificate: /var/lib/minikube/certs/apiserver.crt
    client-key: /var/lib/minikube/certs/apiserver.key
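If editing the file by hand feels error-prone, the same change can be made with kubectl; a sketch, assuming the cluster entry is named m01 as above:
kubectl config set-cluster m01 --server=https://localhost:8443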
I think your Kubernetes master is not set up properly. You can verify that by checking that the following services on the master node are active and running (a quick loop to do so is sketched after the list).
etcd2.service
kube-apiserver.service Kubernetes API Server
kube-controller-manager.service Kubernetes Controller Manager
kube-scheduler.service Kubernetes Scheduler
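A quick way to run through them on the master node (service names as listed above; older clusters may use etcd rather than etcd2):
for svc in etcd2 kube-apiserver kube-controller-manager kube-scheduler; do
  echo "== $svc =="
  systemctl is-active "$svc"    # prints active/inactive/failed
done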