TL;DR:
Kubernetes pods created via terraform don't have a /var/run/secrets folder, but a pod created according to the hello-minikube tutorial does - why is that, and how can I fix it?
Motivation: I need traefik to be able to talk to the k8s cluster.
Details
I have set up a local Kubernetes cluster w/ minikube and set up terraform to work with that cluster.
To set up traefik, you need to create an Ingress and a Deployment which are not yet supported by terraform. Based on the workaround posted in that issue, I use an even simpler module to execute yaml files via terraform:
# A tf-module that can create Kubernetes resources from YAML file descriptions.
variable "name" {}
variable "file_name" {}

resource "null_resource" "kubernetes_resource" {
  triggers {
    configuration = "${var.file_name}"
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${var.file_name}"
  }
}
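A hypothetical invocation of this module looks like the following; the source path and file name are placeholders, not my actual layout:
module "traefik_ingress" {
  source    = "./modules/kubernetes_resource"
  name      = "traefik-ingress"
  file_name = "${path.root}/traefik/ingress.yaml"
}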
The resources created in this way show up correctly in the k8s dashboard.
However, the ingress controller's pod logs:
time="2017-12-30T13:49:10Z"
level=error
msg="Error starting provider *kubernetes.Provider: failed to create in-cluster
configuration: open /var/run/secrets/kubernetes.io/serviceaccount/token:
no such file or directory"
(line breaks added for readability)
/bin/bashing into the pods, I realize none of them have a path /var/run/secrets, except for the hello-minikube pod from the minikube tutorial, which was created with just two kubectl commands:
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
$ kubectl expose deployment hello-minikube --type=NodePort
Compared to the script in the terraform issue, I removed kubectl params like --certificate-authority=${var.cluster_ca_certificate}. Then again, I didn't provide those when setting up hello-minikube either, and the original script doesn't work as-is for me anyway, since I couldn't figure out how to access the provider details from ~/.kube/config in terraform.
I tried to find out if hello-minikube is doing something fancy, but I couldn't find its source code.
Do I have to do something specific to make the token available? According to traefik issue 611, the in-cluster configuration should be picked up automatically, but as it currently stands I'm stuck.
Versions
Host system is a Windows 10 machine
> minikube version
minikube version: v0.24.1
> kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Related
There are some related questions and GitHub issues, but none of them has helped me fix the problem yet.
How do I access the Kubernetes api from within a pod container?
Why would /var/run/secrets/kubernetes.io/serviceaccount/token be an empty file in a Pod?
https://github.com/datawire/telepresence/issues/353
https://github.com/openshift/origin/issues/13865
Foremost, thank you for an amazing question write up; I would use this question as a template for how others should ask!
Can you check the automountServiceAccountToken field in the PodSpec and see if it is true?
The only other thing I know to check is whether its serviceAccountName points to an existing ServiceAccount; I would hope a bogus one would fail the deploy, but I don't know for sure what happens in that case.
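A quick way to inspect both fields (the pod name is a placeholder; an empty result for the first query just means the field is unset and defaults to true):
kubectl get pod <traefik-pod-name> -o jsonpath='{.spec.automountServiceAccountToken}'
kubectl get pod <traefik-pod-name> -o jsonpath='{.spec.serviceAccountName}'
kubectl get serviceaccounts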
Related
I am running kubectl from a macOS M1 laptop. There is a K8s cluster deployed to GCP, and I have logged in from the CLI with gcloud auth login. Then I run the command below to fetch the GKE credentials:
> gcloud container clusters get-credentials gcp-test --zone australia-southeast2-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gcp-test.
It looks OK, but I get the error below when I run get pods:
> kubectl get pods
Unable to connect to the server: EOF
The version of kubectl I am using is
> kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:38:50Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:35:40Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
The K8s cluster version in GCP is 1.24.5-gke.600. Does anyone know why I get the Unable to connect to the server: EOF error? It seems kubectl is simply not able to reach the server.
It looks like you already have a config for an EKS cluster. Maybe gcloud added a new context to your kubeconfig file, but the old one is still the active one.
Use
kubectl config get-contexts
to check if there are multiple contexts.
Use
kubectl config use-context <context-name>
to set the active context.
You could also examine your kubeconfig file. The default location is
~/.kube/config
but it can be overridden by the KUBECONFIG environment variable. To find out whether that is the case, execute
echo $KUBECONFIG
It could also be that gcloud added the context to the default kubeconfig file, but you have a different one set via the KUBECONFIG env variable that contains the EKS context.
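Putting it together, the check usually looks something like this; the context name follows the gke_<project>_<zone>_<cluster> pattern that gcloud generates, so the project name here is a placeholder:
echo $KUBECONFIG
kubectl config get-contexts
kubectl config use-context gke_my-project_australia-southeast2-a_gcp-test
kubectl get pods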
I'm using kubernetes 1.11.0 and running heapster. When I run
kubectl top pod
it shows the error
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
even though I have already installed heapster with:
kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
Any suggestions?
Update:
the command kubectl top pod works now, but the raw metrics endpoint still doesn't:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"
#Error from server (ServiceUnavailable): the server is currently unable to handle the request
Can you check and ensure that your kubectl binary is the latest? Something like
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
This generally happens if kubectl is older. Old kubectl versions looked for the heapster service to be present, but new ones should not have this problem.
Hope this helps.
In addition to the above, you might want to consider moving to metrics-server, since heapster is on its way to being deprecated:
https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md
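If you do switch, a minimal sketch of installing metrics-server and re-checking the endpoint looks like this (the components.yaml release asset is what recent metrics-server versions publish; for older clusters, check the deploy manifests in that repo instead):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top pod
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"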
I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:
kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash
I download the correct version of kubectl inside the running container, make it executable and try to run
./kubectl get pods
but get the following error:
Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the service account to list the pods? My final goal is to run helm inside the container. According to the docs I found, this should work fine as soon as kubectl is working.
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one?
Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST envvars to locate the API server, and the credential in the auto-injected /var/run/secrets/kubernetes.io/serviceaccount/token file to authenticate itself.
How do I allow the serviceaccount to list the pods?
That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.
See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions for more information.
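As a minimal sketch (assuming RBAC is enabled), a Role plus RoleBinding that lets the default ServiceAccount in the default namespace list pods could look like this; the object names pod-reader and default-pod-reader are illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-pod-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io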
I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the serviceAccountName in the helm pods). Then, to grant superuser permissions to that service account, run:
kubectl create clusterrolebinding helm-superuser \
--clusterrole=cluster-admin \
--serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
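To fill in those two variables for a typical Helm 2 / Tiller install, something along these lines should work (the name=tiller label is what tiller-deploy normally carries, but verify it on your cluster):
SERVICEACCOUNT_NAMESPACE=kube-system
SERVICEACCOUNT_NAME=$(kubectl get pods -n kube-system -l name=tiller -o jsonpath='{.items[0].spec.serviceAccountName}')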
True, kubectl will try to get everything it needs to authenticate with the master on its own.
But with a ClusterRoleBinding to the "cluster-admin" ClusterRole you give that pod unlimited permissions across all namespaces, which sounds a bit risky.
For me it was a bit annoying to add an extra 43 MB for the kubectl client to my Kubernetes container, but the alternative was to use one of the SDKs to implement a more basic client. kubectl is easier to authenticate, because the client gets the token it needs from /var/run/secrets/kubernetes.io/serviceaccount, plus we can use manifest files if we want. For most common Kubernetes setups you shouldn't need to add any extra environment variables or attach any secret volume; it will just work if you have the right ServiceAccount.
Then you can test whether it is working with something like:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME     READY   STATUS    RESTARTS   AGE
pod1-0   1/1     Running   0          6d17h
pod2-0   1/1     Running   0          6d16h
pod3-0   1/1     Running   0          6d17h
pod3-2   1/1     Running   0          67s
or permission denied:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
Tested on:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
You can check my answer at How to run kubectl commands inside a container? for RoleBinding and RBAC.
I am trying to follow the example in this blog post to supply my pods with upstream DNS servers.
I created a new GKE cluster in us-east1-d (where 1.6.0 is available according to the April 4 entry here).
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I then defined a ConfigMap in the following YAML file, kube-dns-cm.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.2.3.4"]
When I try to create the ConfigMap, I am told it already exists:
$ kubectl create -f kube-dns-cm.yml
Error from server (AlreadyExists): error when creating "kube-dns-cm.yml": configmaps "kube-dns" already exists
I tried deleting the existing ConfigMap and replacing it with my own, but when I subsequently create pods, the change does not seem to have taken effect (names are not resolved as I hoped). Is there more to the story than is explained in the blog post (e.g., restarting the kube-dns service, or the pods)? Thanks!
EDIT: Deleting and recreating the ConfigMap actually does work, but not entirely in the way I was hoping. My docker registry is on a private (corporate) network, and resolving the name of the registry requires the upstream nameservers. So I cannot use the DNS name of the registry in my pod YAML file, but the pods created from that YAML file do have the desired DNS resolution (after the ConfigMap is replaced).
It has been my experience with ConfigMaps that when I update one, the changes are not reflected in a running pod that is using it.
When I delete the pod and it comes up again, it picks up the latest ConfigMap.
Hence I suggest restarting the kube-dns pod, which seems to use the ConfigMap you created.
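One way to do that restart is to delete the kube-dns pods and let their Deployment recreate them with the new ConfigMap mounted (the k8s-app=kube-dns label is the selector the kube-dns addon uses):
kubectl delete pod -n kube-system -l k8s-app=kube-dns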
I believe your ConfigMap is correct, but you do need to restart kube-dns.
Because a ConfigMap is injected into a pod either as environment variables or as volumes (kube-dns uses a volume), you need to restart the target pods (here, all the kube-dns pods) for the change to take effect.
Another suggestion: from your description, you only want this upstream DNS to help you reach your docker registry, which sits on a private network, so you may want to try stubDomains from the post you referred to; a sketch follows below.
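This is roughly what the stubDomains variant could look like, assuming your registry lives under a corporate domain such as corp.example.com (both the domain and the nameserver IP are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.example.com": ["1.2.3.4"]}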
And most importantly, check that your dnsPolicy is set to ClusterFirst, although from your description I guess it already is.
Hope this helps you out.
The kube-dns pod picked up the recent ConfigMap changes after it was restarted, but I still got a similar issue.
Checked the logs using:
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
Then I tried changing the --cluster-dns value to the host's "nameserver" IP, which already resolves both the internal and external network for me, and restarted the kubelet service for the changes to take effect:
Environment="KUBELET_DNS_ARGS=--cluster-dns=<<your-nameserver-resolver>> --cluster-domain=cluster.local" in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
That works! Now the containers are able to reach the external network.
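For completeness, the drop-in change only takes effect after reloading systemd and restarting the kubelet, roughly:
sudo systemctl daemon-reload
sudo systemctl restart kubelet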
I need some help in creating my first Kubernetes cluster with Vagrant. I have installed Kubernetes, Vagrant and libvirt. My kubectl status displays:
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
From the documentation I can read that the cluster can be started with:
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
However no "cluster" folder has been created by installing kubernetes, nor the "kube-up.sh" command is available. I can see only the following ones:
kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler kube-version-change
Am I missing some steps? Thanks
From the documentation I can read that the cluster can be started with:
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
This documentation assumes that you have downloaded a Kubernetes release. You can grab the latest release (1.1.7) from the GitHub releases page. Once you unpack kubernetes.tar.gz, you should see a cluster directory with kube-up.sh inside of it, which you can execute as suggested by the documentation.
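A rough sketch of that workflow (the exact download URL is an assumption; if it doesn't match, grab kubernetes.tar.gz from the releases page by hand):
curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.1.7/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh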