I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:
kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash
I download the correct version of kubectl inside the running container, make it executable and try to run
./kubectl get pods
but get the following error:
Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one? How do I allow the service account to list the pods? My final goal is to run helm inside the container. According to the docs I found, that should work as soon as kubectl works.
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to that one?
Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST environment variables to locate the API server, and the credential in the auto-injected /var/run/secrets/kubernetes.io/serviceaccount/token file to authenticate itself.
How do I allow the serviceaccount to list the pods?
That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.
See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions for more information.
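For the specific error above (listing pods in the default namespace as the default service account), a narrowly scoped Role plus RoleBinding is enough. A minimal sketch, assuming RBAC is enabled; the role and binding names are illustrative:
# allow reading pods in the "default" namespace
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods \
  --namespace=default
# bind that role to the default service account in the default namespace
kubectl create rolebinding default-pod-reader \
  --role=pod-reader \
  --serviceaccount=default:default \
  --namespace=default
You can then verify the grant from outside the pod with kubectl auth can-i list pods --as=system:serviceaccount:default:default -n default, which should now answer yes.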
I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the serviceAccountName in the helm pods). Then, to grant superuser permissions to that service account, run:
kubectl create clusterrolebinding helm-superuser \
--clusterrole=cluster-admin \
--serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
True, kubectl will try to pick up everything it needs to authenticate with the master.
But a ClusterRoleBinding to "cluster-admin" gives that pod unlimited permissions across all namespaces, which sounds a bit risky.
For me, adding an extra 43 MB for the kubectl client to my container was a bit annoying, but the alternative was to use one of the SDKs to implement a more basic client. kubectl is easier to authenticate because it picks up the token it needs from /var/run/secrets/kubernetes.io/serviceaccount, and we can use manifest files if we want. For most common Kubernetes setups you shouldn't need to add any extra environment variables or attach any secret volume; it will just work if you have the right ServiceAccount.
Then you can test if it is working with something like:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME READY STATUS RESTARTS AGE
pod1-0 1/1 Running 0 6d17h
pod2-0 1/1 Running 0 6d16h
pod3-0 1/1 Running 0 6d17h
pod3-2 1/1 Running 0 67s
or permission denied:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
Tested on:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
You can check my answer at How to run kubectl commands inside a container? for RoleBinding and RBAC.
Related
I am running kubectl from a macOS M1 laptop. There is a K8S cluster deployed to GCP, and I have logged in from the CLI with gcloud auth login. Then I run the command below to fetch the GKE credentials:
> gcloud container clusters get-credentials gcp-test --zone australia-southeast2-a
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gcp-test.
It looks OK, but I get the error below when I run get pods:
> kubectl get pods
Unable to connect to the server: EOF
The version of kubectl I am using is
> kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:38:50Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.13-eks-fb459a0", GitCommit:"55bd5d5cb7d32bc35e4e050f536181196fb8c6f7", GitTreeState:"clean", BuildDate:"2022-10-24T20:35:40Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/amd64"}
The K8S cluster version in GCP is 1.24.5-gke.600. Does anyone know why I get the "Unable to connect to the server: EOF" error? It seems kubectl is not able to connect to the server.
It looks like you already have a config for an EKS cluster (note the eks build in your kubectl version output). Maybe gcloud added a new context to your kubeconfig file, but the old one is still the active one.
Use
kubectl config get-contexts
to check if there are multiple contexts.
Use
kubectl config use-context <context-name>
to set the active context.
You could also examine your kubeconfig file. The default location is
~/.kube/config
but it can be overridden by the KUBECONFIG environment variable. To find out whether that is the case, execute
echo $KUBECONFIG
It could also be that gcloud added the context to the default kubeconfig file, but you have a different one set via the KUBECONFIG env variable that contains the EKS context.
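Putting that together, a short check might look like this. It assumes the context name follows the gke_<project>_<zone>_<cluster> convention that gcloud uses; the project name here is illustrative:
echo $KUBECONFIG                 # empty means ~/.kube/config is used
kubectl config get-contexts      # look for a gke_... entry next to the EKS one
kubectl config use-context gke_my-project_australia-southeast2-a_gcp-test
kubectl get pods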
Is there a way to inspect a container running in a pod directly from the Kubernetes command line (using kubectl) to see details such as whether it is running in privileged mode, for instance?
something like:
kubectl inspect -c <containerName>
The only way I have found is to SSH to the node hosting the pod and run docker inspect <containerID>, but this is kind of tedious.
My kubernetes version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Check kubectl describe pods/<pod_name>
If it is not enough for you, you can go for JSON and filter it with jq
kubectl get pod <pod_name> -o json | jq '.spec.containers[] | .securityContext'
Also, check kubectl Cheat Sheet
You can use the kubectl commands below to see the details of a pod:
kubectl describe pod <pod_name> -n <namespacename>
kubectl get pod <pod_name> -n <namespacename> -o yaml # output in YAML format
kubectl get pod <pod_name> -n <namespacename> -o json # output in JSON format
If you want to know which containers are running in privileged mode from an audit point of view, then I suggest looking at the Falco project, which has a mechanism to write policies and trigger alerts when a container violates a policy. One such policy could be that no container may run in privileged mode.
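For a quick one-off audit without installing anything, the jq approach from the answer above can be extended cluster-wide. A sketch, assuming jq is installed locally:
# print namespace/name of every pod that has at least one privileged container
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(any(.spec.containers[]; .securityContext.privileged == true))
      | "\(.metadata.namespace)/\(.metadata.name)"'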
I had a similar problem: a pod with status Evicted that I needed to inspect (with kubectl that is describe). So I used:
kubectl describe pod <pod-name>
So I could see what I was looking for:
...
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
...
While searching I found a very nice article about 12 Critical Kubernetes Health Conditions You Need to Monitor and Why.
I am still working on a fix, but this output may help others.
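As a side note, here is a minimal sketch for finding (and optionally cleaning up) such pods, assuming evicted pods are reported with phase Failed as in the describe output above; the namespace is illustrative:
# evicted pods show up with phase=Failed
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
# delete them in one namespace once you are done inspecting
kubectl delete pods --field-selector=status.phase=Failed -n default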
helm install fails due to a credentials problem, but when I test with kubectl get nodes everything looks fine.
Output of helm install:
⋊> ~/t/mtltech on master ⨯ helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true 00:31:41
Error: the server has asked for the client to provide credentials
Output of kubectl get nodes:
⋊> ~/t/mtltech on master ⨯ kubectl get nodes 00:37:41
NAME STATUS ROLES AGE VERSION
gke-mtltech-default-pool-977ee0b2-5lmi Ready <none> 7h v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-hi4v Ready <none> 7h v1.11.7-gke.4
gke-mtltech-default-pool-977ee0b2-mjiv Ready <none> 7h v1.11.7-gke.4
Output of helm version:
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.7-gke.4", GitCommit:"618716cbb236fb7ca9cabd822b5947e298ad09f7", GitTreeState:"clean", BuildDate:"2019-02-05T19:22:29Z", GoVersion:"go1.10.7b4", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider: Google Cloud
I've tried to reset it several times with rm -rf ~/.helm && helm init --service-account tiller but it doesn't change anything.
Any idea?
Thanks.
The problem here is Tiller. I do not know how you deployed Helm and Tiller, but the mistake is there.
I used this chart and everything worked correctly; then I deleted my service account and cluster role binding and hit the same error. Deleting only the cluster role binding gives this error:
Error: release nginx-ingress failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:tiller" cannot get namespaces in the namespace "default"
So the error is due to a missing service account, a missing cluster role binding, or both.
Solution for this:
rm -rf ~/.helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
Check the full name of the tiller pod, then delete it:
kubectl delete pod -n kube-system tiller-deploy-xxx
Wait until the tiller pod has been redeployed, then install your helm chart:
helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true
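To confirm the redeployed Tiller actually runs with the new service account before installing the chart, a quick check can help (tiller-deploy is the deployment name that helm init creates):
kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'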
I am getting a couple of errors with Helm that I cannot find explanations for elsewhere. The two errors are below.
Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)
Further details of the two errors are in the code block further below.
I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).
The cluster was created using kubeadm with the Weave network add-on for 1.6+.
Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).
Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.
root#K8SMST01:/home/blah/charts# helm version
Client: &version.Version{SemVer:"v2.3.0",
GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
root#K8SMST01:/home/blah/charts# helm install ./mychart
Error: no available release name found
root#K8SMST01:/home/blah/charts# helm ls
Error: the server does not allow access to the requested resource (get configmaps)
Here are the running pods:
root#K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8smst01 1/1 Running 4 1d 10.139.75.19 k8smst01
kube-apiserver-k8smst01 1/1 Running 3 19h 10.139.75.19 k8smst01
kube-controller-manager-k8smst01 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-dns-3913472980-dm661 3/3 Running 6 1d 10.32.0.2 k8smst01
kube-proxy-56nzd 1/1 Running 2 1d 10.139.75.19 k8smst01
kube-proxy-7hflb 1/1 Running 1 1d 10.139.75.20 k8sn01
kube-proxy-nbc4c 1/1 Running 1 1d 10.139.75.21 k8sn02
kube-scheduler-k8smst01 1/1 Running 3 1d 10.139.75.19 k8smst01
tiller-deploy-1172528075-x3d82 1/1 Running 0 22m 10.44.0.3 k8sn01
weave-net-45335 2/2 Running 2 1d 10.139.75.21 k8sn02
weave-net-7j45p 2/2 Running 2 1d 10.139.75.20 k8sn01
weave-net-h279l 2/2 Running 5 1d 10.139.75.19 k8smst01
The solution given by kujenga from the GitHub issue worked without any other modifications:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
I think it's an RBAC issue. It seems that helm isn't ready for 1.6.1's RBAC.
There is an issue open for this on Helm's GitHub:
https://github.com/kubernetes/helm/issues/2224
"When installing a cluster for the first time using kubeadm v1.6.1,
the initialization defaults to setting up RBAC controlled access,
which messes with permissions needed by Tiller to do installations,
scan for installed components, and so on. helm init works without
issue, but helm list, helm install, and so on all do not work, citing
some missing permission or another."
A temporary workaround has been suggested:
"We "disable" RBAC using the command kubectl create clusterrolebinding
permissive-binding --clusterrole=cluster-admin --user=admin
--user=kubelet --group=system:serviceaccounts;"
But I cannot speak for its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
I had the same issue with a kubeadm setup on CentOS 7.
Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read from the configmaps, so it fails when it checks whether the deployment name it wants to use is unique.
This got me past it:
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
But that gives the default account a lot of power; I only did this so I could get on with my work. Helm should add creation of its own service account to the "helm init" code.
All add-ons in Kubernetes use the "default" service account, so Helm also runs with the "default" service account. You should grant permissions to it by assigning role bindings.
For read-only permissions:
kubectl create rolebinding default-view \
  --clusterrole=view \
  --serviceaccount=kube-system:default \
  --namespace=kube-system
For admin access (e.g. to install packages):
kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
You can also install the tiller server in a different namespace:
First create the namespace.
Create the service account for that namespace.
Install tiller in that namespace using the command below.
helm init --tiller-namespace test-namespace
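Putting those three steps together, a minimal sketch; the namespace, the service-account name, and the scope of the role binding are illustrative assumptions:
kubectl create namespace test-namespace
kubectl create serviceaccount tiller --namespace test-namespace
# grant admin rights limited to that namespace (instead of cluster-admin)
kubectl create rolebinding tiller-binding \
  --clusterrole=admin \
  --serviceaccount=test-namespace:tiller \
  --namespace=test-namespace
helm init --tiller-namespace test-namespace --service-account tiller
Remember to pass --tiller-namespace test-namespace to subsequent helm commands as well.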
This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
$ helm repo update
$ helm install stable/redis --version 3.3.5
But after that, something changed; I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.
Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Per https://github.com/kubernetes/helm/issues/3055
helm init --service-account default
This worked for me when the RBAC (serviceaccount) commands didn't.
It's an RBAC issue. You need to have a service account with a cluster-admin role, and you should pass this service account during Helm initialization.
For example, if you have created a service account with the name tiller, your helm command would look like the following:
helm init --service-account=tiller
I followed this blog to resolve this issue. https://scriptcrunch.com/helm-error-no-available-release/
Check the logs for your tiller container:
kubectl logs tiller-deploy-XXXX --namespace=kube-system
If you find something like this:
Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
then it is probably a firewall/iptables issue; as described here, the solution is to remove some rules:
sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
I am trying to follow the example in this blog post to supply my pods with upstream DNS servers.
I created a new GKE cluster in us-east1-d (where 1.6.0 is available according to the April 4 entry here).
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:24:30Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
I then defined a ConfigMap in the following YAML file, kube-dns-cm.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.2.3.4"]
When I try to create the ConfigMap, I am told it already exists:
$ kubectl create -f kube-dns-cm.yml
Error from server (AlreadyExists): error when creating "kube-dns-cm.yml": configmaps "kube-dns" already exists
I tried deleting the existing ConfigMap and replacing it with my own, but when I subsequently create pods, the changes do not seem to have taken effect (names are not resolved as I hoped). Is there more to the story than is explained in the blog post (e.g., restarting the kube-dns service, or the pods)? Thanks!
EDIT: Deleting and recreating the ConfigMap actually does work, but not entirely the way I was hoping. My docker registry is on a private (corporate) network, and resolving the name of the registry requires the upstream nameservers. So I cannot use the DNS name of the registry in my pod YAML file, but the pods created from that YAML file will have the desired DNS resolution (after the ConfigMap is replaced).
It has been my experience with ConfigMaps that when I update one, the changes don't seem to be reflected in the running pod that is using it.
So I delete the pod, and when it comes up again it sees the latest ConfigMap.
Hence I suggest restarting the kube-dns pods, which use the ConfigMap you created.
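A minimal sketch of that restart, assuming the standard k8s-app=kube-dns label (the same label the log commands further down this thread rely on); the Deployment will recreate the pods:
kubectl delete pods -n kube-system -l k8s-app=kube-dns
kubectl get pods -n kube-system -l k8s-app=kube-dns -w   # watch them come back up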
I believe your ConfigMap is correct, but you do need to restart kube-dns.
Because a ConfigMap is injected into a pod either as environment variables or as a volume (kube-dns uses a volume), you need to restart the target pods (here I mean all the kube-dns pods) for it to take effect.
Another suggestion: from your description, you only want this upstream DNS to help you reach your docker registry, which is on a private network, so you may want to try stubDomains from the post you referred to.
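A minimal sketch of a stubDomains ConfigMap, assuming corp.example.com stands in for your registry's private domain and 1.2.3.4 is the corporate nameserver from your example:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"corp.example.com": ["1.2.3.4"]}
EOF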
And most importantly, you need to check that your dnsPolicy is set to ClusterFirst, although from your description I guess it is.
Hope this will help you out.
The kube-dns pod picked up the recent ConfigMap changes after it was restarted, but I still got a similar issue.
I checked the logs using:
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
I tried changing the --cluster-dns value to the host's "nameserver" IP, which already reaches both the internal and external network for me, and restarted the kubelet service for the changes to take effect:
Environment="KUBELET_DNS_ARGS=--cluster-dns=<<your-nameserver-resolver>> --cluster-domain=cluster.local" in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
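For reference, a sketch of applying that drop-in change on a systemd host (assumes kubeadm's default drop-in path shown above):
sudo systemctl daemon-reload
sudo systemctl restart kubelet
systemctl status kubelet   # confirm the kubelet came back up cleanly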
That works! Now containers are able to reach the external network.