Can't connect to K8S cluster from kubectl - kubernetes

I am running kubectl from a macOS M1 laptop. There is a K8s cluster deployed to GCP, and I have logged in from the CLI with gcloud auth login. Then I ran the command below to fetch the GKE cluster credentials:
> gcloud container clusters get-credentials gcp-test --zone australia-southeast2-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gcp-test.
It looks OK, but I get the error below when I run get pods:
> kubectl get pods
Unable to connect to the server: EOF
The version of kubectl I am using is
> kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:38:50Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:35:40Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
The K8s cluster version in GCP is 1.24.5-gke.600. Does anyone know why I get the Unable to connect to the server: EOF error? It seems kubectl cannot reach the server at all.

It looks like you already have a config for an EKS cluster. gcloud probably added a new context to your kubeconfig file, but the old one is still active.
Use
kubectl config get-contexts
to check if there are multiple contexts.
Use
kubectl config use-context <context-name>
to set the active context.
You could also examine your kubeconfig file. The default location is
~/.kube/config
but it can be overridden by the KUBECONFIG environment variable. To find out whether it is set, execute
echo $KUBECONFIG
It could also be that gcloud added the context to the default kubeconfig file, but you have a different one set via the KUBECONFIG env variable, and that file contains the EKS context.
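For example, a quick way to check which config and context kubectl is actually using, and to switch to the GKE one, might look like this (the context name below assumes GKE's usual gke_<project>_<zone>_<cluster> naming; my-project is a placeholder for your real project ID):
# See whether a non-default kubeconfig is in effect
echo $KUBECONFIG
# List all contexts; the active one is marked with '*'
kubectl config get-contexts
# Switch to the context gcloud created for the GKE cluster
kubectl config use-context gke_my-project_australia-southeast2-a_gcp-test
# Verify
kubectl config current-context
kubectl get pods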

Related

kubectl suddenly asking for username

My last deployment pipeline stage on GitLab suddenly started asking for a username when running the "kubectl apply" command to deploy my application to my cluster in GKE (version 1.18.12-gke.1201, also tested on 1.17). I'm using the image dtzar/helm-kubectl to run kubectl commands in the pipeline (older versions of this image were tested as well). This is the log generated when running the stage:
$ kubectl config set-cluster $ITINI_STAG_KUBE_CLUSTER --server="$ITINI_STAG_KUBE_URL" --insecure-skip-tls-verify=true
Cluster "itini-int-cluster" set.
$ kubectl config set-credentials gitlab --token="$ITINI_STAG_KUBE_TOKEN"
User "gitlab" set.
$ kubectl config set-context default --cluster=$ITINI_STAG_KUBE_CLUSTER --user=gitlab
Context "default" created.
$ kubectl config use-context default
Switched to context "default".
$ kubectl apply -f deployment.yml
Please enter Username: error: EOF
I reproduced the same commands locally, and they work fine.
When running "kubectl version" in the pipeline, this is the output:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
When running it locally (with the exact same environment variables and tokens as configured in the pipeline), this is the output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.12-gke.1201", GitCommit:"160736941f00e54acb1c0a4647166b6f6eb211d9", GitTreeState:"clean", BuildDate:"2020-12-08T19:38:26Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
What could be causing this problem?
The problem was actually in GitLab. My token variable was marked as "protected" in the GitLab settings, so it was being replaced by an empty string when running the config command. The problem was solved by unmarking it as protected.
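If you hit something similar, one possible guard (just a sketch, reusing the variable name from the log above) is to fail fast when the token comes through empty:
# Abort the job early if the protected/masked variable was not passed to this pipeline
if [ -z "$ITINI_STAG_KUBE_TOKEN" ]; then
  echo "ITINI_STAG_KUBE_TOKEN is empty - check the protected/masked settings in GitLab"
  exit 1
fi
kubectl config set-credentials gitlab --token="$ITINI_STAG_KUBE_TOKEN"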

Helm 3: x509 error when connecting to local Kubernetes

I'm a complete noob with K8s. I installed microk8s and Helm using snap to experiment locally. I wonder whether my current issue comes from the use of snap (whose purpose, as I understand it, is encapsulation).
Environment
Ubuntu 20.04LTS
helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:16:24Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-1+6f17be3f1fd54a", GitCommit:"6f17be3f1fd54a88681869d1cf8bedd5a2174504", GitTreeState:"clean", BuildDate:"2020-06-23T21:17:52Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* microk8s microk8s-cluster admin
Post-install setup
microk8s enable helm3
Kubernetes is up and running
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Problem while connecting helm to microk8s
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443
Error: Kubernetes cluster unreachable: Get https://127.0.0.1:16443/version?timeout=32s: x509: certificate signed by unknown authority
How can I tell helm
to trust the microk8s certs, or
to ignore this verification step?
From what I read, I may overcome this issue by pointing to kube's config using --kubeconfig.
helm ls --kube-token ~/token --kube-apiserver https://127.0.0.1:16443 --kubeconfig /path/to/kubernetes/config
In the context of microk8s installed with snap, I am not quite sure what this conf file is nor where to find it.
/snap/microk8s/1503 ?
/var/snap/microk8s/1503 ?
Helm looks for the kubeconfig at the path $HOME/.kube/config.
Run this command:
microk8s.kubectl config view --raw > $HOME/.kube/config
This will save the config at the required path in your home directory, and Helm should then work.
Try exporting the kubeconfig file using the following command:
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
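Either way, once a kubeconfig is in place Helm should pick the cluster up from it, and the extra --kube-token/--kube-apiserver flags become unnecessary. A rough recap of the two options:
# Option 1: copy the microk8s config to the default location
microk8s.kubectl config view --raw > $HOME/.kube/config
# Option 2: point KUBECONFIG at the snap-managed client config instead
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
# Helm should now reach the cluster without extra flags
helm ls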
If you happen to be using WSL with Docker Desktop, with K8s running in Docker Desktop but Helm running in WSL, a command very similar to the one Tarun provided will also work.
Assuming you are running the Windows version of kubectl
➜ which kubectl.exe
➜ /mnt/c/Program Files/Docker/Docker/resources/bin/kubectl.exe
➜ which kubectl
➜ kubectl: aliased to /mnt/c/Program\ Files/Docker/Docker/resources/bin/kubectl.exe
➜ kubectl config view --raw > $HOME/.kube/config

Kubernetes - Kubectl version command returns error

macOS
I just installed kubectl via: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos
MacBook-Air:~ admin$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What could be the problem? Any ideas?
kubectl version prints out both the client version and the server version. To fetch the server version, it connects to the Kubernetes API server. You either do not have a cluster installed or have not configured kubectl to communicate with the remote cluster, so it only prints the client version and throws an error for the server version.
Sample output:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
You can use kubectl version --client to get only the client version.
# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
As you mentioned, you have installed kubectl, a command-line tool that allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
However, for kubectl to run any command against a cluster, you need to have a cluster.
The most common option is Minikube. However, you will need a hypervisor such as VirtualBox or HyperKit.
You should also read about Docker Desktop on Mac.
If you search for more information, you will find that people also use kubeadm, but it's not supported on macOS.
This was also discussed in another Stack Overflow question.
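As a rough sketch, getting a local cluster with Minikube (mentioned above) usually boils down to the following; the driver flag depends on which hypervisor you installed, and older Minikube releases call it --vm-driver instead of --driver:
# Start a local single-node cluster; this also writes a context into ~/.kube/config
minikube start --driver=hyperkit
# kubectl version should now report a server version as well
kubectl version
kubectl get nodes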
Note: kubectl is a command-line tool that allows you to run commands against Kubernetes clusters.
In order for kubectl to find and access a given Kubernetes cluster, it needs a kubeconfig file for the cluster you want to connect to (if you don't have one, you can install a local cluster to play with using something like Minikube, which will generate this file for you).
If you already have a cluster, check that kubectl is properly configured using the kubectl cluster-info command; if it is not, you will receive the error log below.
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
So, to connect to the cluster you want to work with using kubectl, you need to find its kubeconfig file and point the environment variable at it. Meaning: if your $HOME/.kube/config file is not already listed in your KUBECONFIG environment variable, fix it with export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config so it points to the correct kubeconfig file.
Once you have the right kubeconfig exported, the cluster-info command should load the details as below:
$ kubectl cluster-info
Kubernetes master is running at https://xx.xx.xx.xx:6443
KubeDNS is running at https://xx.xx.xx.xx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Kubernetes cluster on AWS,The connection to the server localhost:8080 was refused

On my Vagrant VM, I created a kops cluster:
kops create cluster \
--state "s3://kops-state-1000" \
--zones "eu-central-1a","eu-central-1b" \
--master-count 3 \
--master-size=t2.micro \
--node-count 2 \
--node-size=t2.micro \
--ssh-public-key ~/.ssh/id_rsa.pub \
--name jh.mydomain.com \
--yes
I can list the cluster with kops get cluster:
jh.mydomain.com aws eu-central-1a,eu-central-1b
The problem is that I cannot find the .kube directory. When I run
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Is this somehow related to my VirtualBox deployment or not? I am aware that the kubectl command works on the master node because that's where kube-apiserver runs. Does this mean that my deployment is on a worker node?
I tried what Vasily suggested
kops export kubecfg $CLUSTER_NAME
W0513 15:44:17.286965 3279 root.go:248] no context set in kubecfg
--name is required
You need to obtain the kubeconfig from your cluster and save it as ${HOME}/.kube/config:
kops export kubecfg $CLUSTER_NAME
After that you may start using kubectl.
It seems that kops didn't configure the cluster context on your machine.
You can try the following command:
kops export kubecfg jh.mydomain.com
You can also find the config file at S3->kops-state-1000->jh.mydomain.com->config
After downloading the config file you can try:
kubectl --kubeconfig={config you downloaded} version
Also, it's somewhat unrelated to your question, but it may not be a bad idea to have a look at gossip-based Kubernetes clusters. To try that feature, you just need to create a cluster with kops using --name=something.k8s.local.
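For reference, creating a gossip-based cluster looks the same as the command in the question, just with a .k8s.local name so no DNS zone is needed (values below are placeholders):
# The .k8s.local suffix makes kops use gossip instead of Route53 DNS
kops create cluster \
  --state "s3://kops-state-1000" \
  --zones "eu-central-1a,eu-central-1b" \
  --node-count 2 \
  --node-size=t2.micro \
  --name jh.k8s.local \
  --yes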

Kubernetes pod secrets /var/run/secrets missing

TL;DR:
Kubernetes pods created via terraform don't have a /var/run/secrets folder, but a pod created according to the hello-minikube tutorial does - why is that, and how can I fix it?
Motivation: I need traefik to be able to talk to the k8s cluster.
Details
I have set up a local Kubernetes cluster w/ minikube and set up terraform to work with that cluster.
To set up traefik, you need to create an Ingress and a Deployment, which are not yet supported by terraform. Based on the workaround posted in that issue, I use an even simpler module to apply YAML files via terraform:
# A tf-module that can create Kubernetes resources from YAML file descriptions.
variable "name" {}
variable "file_name" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.file_name}"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${var.file_name}"
  }
}
The resources created in this way show up correctly in the k8s dashboard.
However, the ingress controller's pod logs:
time="2017-12-30T13:49:10Z"
level=error
msg="Error starting provider *kubernetes.Provider: failed to create in-cluster
configuration: open /var/run/secrets/kubernetes.io/serviceaccount/token:
no such file or directory"
(line breaks added for readability)
/bin/bashing into the pods, I realize none of them have a path /var/run/secrets, except for the hello-minikube pod from the minikube tutorial, which was created with just two kubectl commands:
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
Compared to the script in the terraform issue, I removed kubectl params like --certificate-authority=${var.cluster_ca_certificate}, but then again, I didn't provide this when setting up hello-minikube either, and the original script doesn't work as-is, since I couldn't figure out how to access the provider details from ~/.kube/config in terraform.
I tried to find out if hello-minikube is doing something fancy, but I couldn't find its source code.
Do I have to do something specific to make the token available? According to traefik issue 611, the in-cluster configuration should be automated, but as it currently stands I'm stuck.
Versions
Host system is a Windows 10 machine
> minikube version
minikube version: v0.24.1
> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Related
There are some related questions and GitHub issues, but they haven't helped me fix the problem yet either.
How do I access the Kubernetes api from within a pod container?
Why would /var/run/secrets/kubernetes.io/serviceaccount/token be an empty file in a Pod?
https://github.com/datawire/telepresence/issues/353
https://github.com/openshift/origin/issues/13865
Foremost, thank you for an amazing question write up; I would use this question as a template for how others should ask!
Can you check the automountServiceAccountToken field in the PodSpec and see if it is true?
The only other constructive question I know of is whether its serviceAccountName points to an existing ServiceAccount; I would hope a bogus one would make the deployment fail, but I don't know for sure what happens in that case.
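A quick way to check both of those from outside the pod is with kubectl and jsonpath (pod, namespace, and service-account names below are placeholders):
# Empty output means the field is unset (it defaults to true); "false" would explain the missing token
kubectl get pod <traefik-pod> -n <namespace> -o jsonpath='{.spec.automountServiceAccountToken}'
# Which service account the pod actually runs as
kubectl get pod <traefik-pod> -n <namespace> -o jsonpath='{.spec.serviceAccountName}'
# Confirm that the service account exists
kubectl get serviceaccount <that-service-account> -n <namespace>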