Attach IAM Role to Serviceaccount from a Pod in EKS - kubernetes

I am trying to attach an IAM role to a pod's service account from within the pod in EKS.
kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN
The current role attached to the $serviceaccount is outlined below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-role
rules:
- apiGroups: [""]
  resources:
  - event
  - secrets
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - create
However, when I execute the kubectl command I get the following:
error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user "system:servi...." cannot get resource "serviceaccounts" in API group "" ...
Is my role correct? Why can't I modify the service account?

Kubernetes by default runs pods with the service account default, which doesn't have the right permissions. Since I cannot determine which one you are using for your pod, I can only assume that you are using either default or some other one created by you. In both cases the error suggests that the service account you are using to run your pod does not have the proper rights.
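If you want to confirm this first, you can impersonate the pod's service account from a machine with admin access; a minimal check, assuming the pod runs as default in $namespace, would be:
kubectl auth can-i get serviceaccounts -n $namespace --as=system:serviceaccount:$namespace:default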
If you run this pod with the default service account you will have to add the appropriate rights to it. An alternative way is to run your pod with another service account created for this purpose. Here's an example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: run-kubectl-from-pod
Then you will have to create an appropriate role (you can find the full list of verbs here):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-service-accounts
rules:
- apiGroups: [""]
  resources:
  - serviceaccounts
  verbs:
  - get
  - create
  - patch
  - list
I'm using more verbs here as a test; get and patch would be enough for this use case. I'm mentioning this since it's best practice to grant as few rights as possible.
Then bind the role to the service account with a RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-service-account-bind
subjects:
- kind: ServiceAccount
  name: run-kubectl-from-pod
roleRef:
  kind: Role
  name: modify-service-accounts
  apiGroup: rbac.authorization.k8s.io
And now you just have to reference that service account when you run your pod:
apiVersion: v1
kind: Pod
metadata:
  name: run-kubectl-in-pod
spec:
  serviceAccountName: run-kubectl-from-pod
  containers:
  - name: kubectl-in-pod
    image: bitnami/kubectl
    command:
    - sleep
    - "3600"
Once that is done, you just exec into the pod:
$ kubectl exec -ti run-kubectl-in-pod sh
And then annotate the service account:
$ kubectl get sa
NAME                   SECRETS   AGE
default                1         19m
eks-sa                 1         36s
run-kubectl-from-pod   1         17m
$ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN
serviceaccount/eks-sa annotated
$ kubectl describe sa eks-sa
Name: eks-sa
Namespace: default
Labels: <none>
Annotations: eks.amazonaws.com/role-arn:
Image pull secrets: <none>
Mountable secrets: eks-sa-token-sldnn
Tokens: <none>
Events: <none>
If you encounter any issues with a request being refused, start by reviewing your request attributes and determining the appropriate request verb.
You can also check your access with the kubectl auth can-i command:
$ kubectl auth can-i patch serviceaccount
The API server will respond with a simple yes or no.
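From outside the pod you can run the same check by impersonating the service account (assuming run-kubectl-from-pod lives in the default namespace):
kubectl auth can-i patch serviceaccounts --as=system:serviceaccount:default:run-kubectl-from-pod
yes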
Please note that if you annotate a service account with an IAM role, you will have to delete and re-create any existing pods associated with that service account for the credential environment variables to be applied. You can read more about it here.
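For example, if those pods are managed by a Deployment (a hypothetical one named my-app here), a simple way to recycle them is:
kubectl rollout restart deployment my-app -n default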

While your role appears to be correct, please keep in mind that when executing kubectl, the RBAC permissions of your account in kubeconfig are relevant for whether you are allowed to perform an action.
From your question, I understand that your role is attached to the service account you are trying to annotate, which is irrelevant to the kubectl permission check.
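To see whether the account in your kubeconfig is allowed to annotate (i.e. patch) service accounts, you can run, for example:
kubectl auth can-i patch serviceaccounts -n $namespace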

Related

Kubernetes kube-apiserver to kubelet permissions

What controls the permissions when you call
kubectl logs pod-name ?
I've played around and tried calling the kubelet API from one of the controller nodes.
sudo curl -k --key /var/lib/kubernetes/cert-k8s-apiserver-key.pem --cert /var/lib/kubernetes/cert-k8s-apiserver.pem https://worker01:10250/pods
This fails with Forbidden (user=apiserver, verb=get, resource=nodes, subresource=proxy).
I've tried the same call using the admin key and cert and it succeeds and returns a healthy blob of JSON.
I'm guessing this is why kubectl logs pod-name doesn't work.
A little more reading suggests that the CN of the certificate determines the user that is authenticated and authorized.
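(One way to see which user a given client certificate will authenticate as is to inspect its subject, e.g.:)
openssl x509 -in /var/lib/kubernetes/cert-k8s-apiserver.pem -noout -subject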
What controls whether a user is authorized to access the kubelet API?
Background
I'm setting up a K8s cluster using the following instructions Kubernetes the not so hard way with Ansible
Short Answer
The short answer is that you need to grant the user apiserver access to the nodes resource (and its subresources) by creating a ClusterRole and a ClusterRoleBinding.
Longer Explanation
Kubernetes has a bunch of resources. The relevant ones here are:
Role
Node
ClusterRole
ClusterRoleBinding
Roles and ClusterRoles are similar, except ClusterRoles are not namespaced.
A ClusterRole can be associated (bound) to a user with a ClusterRoleBinding object.
The kubelet exposes the following subresources (maybe more):
nodes/proxy
nodes/stats
nodes/log
nodes/spec
nodes/metrics
To make this work, you need to create a ClusterRole that allows access to these resources and subresources on the Node.
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
EOF
Then you associate this ClusterRole with a user. In my case, the kube-apiserver is using a certificate with CN=apiserver.
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: apiserver
EOF
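Once both objects are applied, you can verify the grant by impersonating that user; assuming the setup above, this should now answer yes:
kubectl auth can-i get nodes --subresource=proxy --as apiserver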

Kubernetes service account default permissions

I am experimenting with service accounts. I believe the following should produce an access error (but it doesn't):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  serviceAccountName: test-sa
  containers:
  - image: alpine
    name: test-container
    command: [sh]
    args:
    - -ec
    - |
      apk add curl;
      KUBE_NAMESPACE="$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)";
      curl \
        --cacert "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" \
        -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        "https://kubernetes.default.svc/api/v1/namespaces/$KUBE_NAMESPACE/services";
      while true; do sleep 1; done;
kubectl apply -f test.yml
kubectl logs test-pod
What I see is a successful listing of services, but I would expect a permissions error because I never created any RoleBindings or ClusterRoleBindings for test-sa.
I'm struggling to find ways to list the permissions available to a particular SA, but according to Kubernetes check serviceaccount permissions, it should be possible with:
kubectl auth can-i list services --as=system:serviceaccount:default:test-sa
> yes
Though I'm skeptical whether that command is actually working, because I can replace test-sa with any gibberish and it still says "yes".
According to the documentation, service accounts by default have "discovery permissions given to all authenticated users". It doesn't say what that actually means, but from more reading I found this resource which is probably what it's referring to:
kubectl get clusterroles system:discovery -o yaml
> [...]
> rules:
> - nonResourceURLs:
>   - /api
>   - /api/*
>   [...]
>   verbs:
>   - get
Which would imply that all service accounts have get permissions on all API endpoints, though the "nonResourceURLs" bit implies this wouldn't apply to APIs for resources like services, even though those APIs live under that path… (???)
If I remove the Authorization header entirely, I see an access error as expected. But I don't understand why it's able to get data using this empty service account. What's my misunderstanding and how can I restrict permissions correctly?
It turns out this is a bug in Docker Desktop for Mac's Kubernetes support.
It automatically adds a ClusterRoleBinding giving cluster-admin to all service accounts (!). It only intends to give this to service accounts inside the kube-system namespace.
It was originally raised in docker/for-mac#3694 but fixed incorrectly. I have raised a new issue docker/for-mac#4774 (the original issue is locked due to age).
A quick fix while waiting for the bug to be resolved is to run:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: docker-for-desktop-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system
EOF
I don't know if that might cause issues with future Docker Desktop upgrades but it does the job for now.
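After applying it, the earlier impersonation check should come back negative, e.g.:
kubectl auth can-i list services --as=system:serviceaccount:default:test-sa
no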
With that fixed, the code above correctly gives a 403 error, and would require the following to explicitly grant access to the services resource:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
rules:
- apiGroups: [""]
  resources: [services]
  verbs: [get, list]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-sa-service-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-reader
subjects:
- kind: ServiceAccount
  name: test-sa
A useful command for investigating is kubectl auth can-i --list --as system:serviceaccount, which showed the rogue permissions that applied to all service accounts:
Resources   Non-Resource URLs   Resource Names   Verbs
*.*         []                  []               [*]
            [*]                 []               [*]
[...]
The same bug exists in Docker Desktop for Windows.
It automatically adds a ClusterRoleBinding giving cluster-admin to all service accounts (!). It only intends to give this to service accounts inside the kube-system namespace.
This is because in Docker Desktop by default a clusterrolebinding docker-for-desktop-binding gives cluster-admin role to all the service accounts created.
For more details, check the issue here.

Kubernetes: ClusterRole created in the cluster are not visible during rbac checks

I have a problem in my Kubernetes cluster that suddenly appeared two weeks ago. The ClusterRoles I create are not visible when RBAC rules for a given ServiceAccount are resolved. Here is a minimal set to reproduce the problem.
Create the relevant ClusterRole, ClusterRoleBinding and ServiceAccount in the default namespace so that this SA has the rights to see Endpoints.
# test.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-cr
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-cr
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: default
$ kubectl apply -f test.yaml
serviceaccount/test-sa created
clusterrole.rbac.authorization.k8s.io/test-cr created
clusterrolebinding.rbac.authorization.k8s.io/test-crb created
All objects, in particular the ClusterRole, are visible if requested directly.
$ kubectl get serviceaccount test-sa
NAME      SECRETS   AGE
test-sa   1         57s
$ kubectl get clusterrolebinding test-crb
NAME       AGE
test-crb   115s
$ kubectl get clusterrole test-cr
NAME      AGE
test-cr   2m19s
However, when I try to resolve the effective rights for this ServiceAccount, here is the error I get back:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:default:test-sa
no - RBAC: clusterrole.rbac.authorization.k8s.io "test-cr" not found
The RBAC rules created before the breakage are still working properly. For instance, here is the check for the ServiceAccount of my etcd-operator, which I deployed with Helm several months ago:
$ kubectl auth can-i get endpoints --as=system:serviceaccount:etcd:etcd-etcd-operator-etcd-operator
yes
The Kubernetes version in this cluster is 1.17.0-0.
In case it helps, I have also lately been seeing very slow rollouts of new Pods, which can take up to 5 minutes to start after being created by a StatefulSet or a Deployment.
Do you have any insight into what is going on, or even what I could do about it? Please note that my Kubernetes cluster is managed, so I do not have any control over the underlying system; I just have cluster-admin privileges as a customer. It would still help greatly if I could give the administrators any direction.
Thanks in advance!
Thanks a lot for your answers!
It turned out that we will probably never have the final word on what happened. The cluster provider just restarted the kube-apiserver, and this fixed the issue.
I suppose something transient went wrong, such as a caching problem, that cannot be pinned down as a reproducible error.
To give a little more data for a future reader: the error occurred on a Kubernetes cluster managed by OVH, whose specificity is to run the control plane itself as pods deployed in a master Kubernetes cluster on their side.

How to run kubectl within a job in a namespace?

Hi, I saw this documentation where kubectl can be run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace?
I did not see any documentation or examples for this.
When I tried adding a serviceAccount to the container I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I shelled into the container and ran kubectl.
Edit:
As I mentioned earlier, based on the documentation I had added the service account. Below is the yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
Is it possible to run kubectl inside a Job resource in a specified namespace? Did not see any documentation or examples for the same..
A Job creates one or more Pods and ensures that a specified number of them terminate successfully. This means the permission aspect is the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.
TL;DR:
Your yaml file is correct; maybe there was something else wrong in your cluster. I recommend deleting and recreating these resources and trying again.
Also check your Kubernetes server version against the job image's kubectl version; if they are more than one minor version apart, you may hit unexpected incompatibilities.
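For instance, you can compare the two with the command below (the version numbers shown are only illustrative):
$ kubectl version --short
Client Version: v1.17.3
Server Version: v1.17.3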
Security Considerations:
Your job role's scope follows the best practice according to the documentation (a specific role, bound to a specific user, in a specific namespace).
If you use a ClusterRoleBinding with the cluster-admin role it will work, but it's over-permissioned and not recommended, since it gives full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
In order to avoid incompatibility, use a kubectl image whose version matches your server.
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just so that get pods has some output to show in the log.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I apply the job, ServiceAccount, Role and RoleBinding
$ kubectl get pods -n my-namespace
NAME                    READY   STATUS      RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running     1          88s
testing-stuff-ddpvf     0/1     Completed   0          13s
ubuntu                  0/1     Completed   3          63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME                    READY   STATUS    RESTARTS   AGE
curl-69c656fd45-l5x2s   1/1     Running   1          76s
testing-stuff-ddpvf     1/1     Running   0          1s
ubuntu                  1/1     Running   3          51s
As you can see, it has succeeded running the job with the custom ServiceAccount.
Let me know if you have further questions about this case.
Create a service account like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create a ClusterRoleBinding using this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
Now create the pod with the same config as given in the documentation.
When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it will use the default service account. This service account doesn't have permission to perform those operations by default. So you need to
create the service account, role and rolebinding using a more privileged account. You should have a kubeconfig file with admin (or admin-like) privileges. Use that kubeconfig file with kubectl from outside the pod to create the service account, role, rolebinding etc.
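For example (the file names here are just placeholders), assuming your RBAC manifests are in rbac.yaml and your admin credentials are in admin.kubeconfig:
kubectl --kubeconfig admin.kubeconfig apply -f rbac.yaml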
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within this pod using kubectl and that service account.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl

Kubernetes dashboard error using service account token

I have a Kubernetes cluster with various resources running fine. I am trying to get the Dashboard working, but I get the following error when I launch the dashboard and enter the service-account token.
persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list resource "persistentvolumeclaims" in API group "" in the namespace "default"
It does not allow the listing of any resources from my cluster (persistent volumes, pods, ingresses etc). My cluster has multiple namespaces.
This is my service-account yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-test # replace with your preferred username
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin # replace with your preferred username
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin # replace with your preferred username
  namespace: kube-system
Any help is appreciated.
FIX: Create a Role Binding for the cluster role.
This should fix the problem:
kubectl delete clusterrole cluster-admin
kubectl delete clusterrolebinding kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
The last command creates a cluster role binding that gives the dashboard's service account all permissions on all resources.
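You can verify that the binding took effect by impersonating the service account; with the binding in place this should answer yes:
kubectl auth can-i list persistentvolumeclaims -n default --as=system:serviceaccount:kube-system:kubernetes-dashboard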
Run the Proxy:
kubectl proxy
Check the Dashboard: please check the URL and port provided by kubectl:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/persistentvolume?namespace=default
More info on the cluster role:
You can check out the 'cluster-admin' role by:
kubectl edit clusterrole cluster-admin
The problem here is that the serviceaccount 'kubernetes-dashboard' does not have 'list' permissions for the resource 'persistentVolumeClaims'.
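If you would rather not grant cluster-admin, a narrower sketch that addresses this particular error (the names dashboard-pvc-reader and dashboard-pvc-reader-binding are just examples) would be:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-pvc-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-pvc-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dashboard-pvc-reader
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Keep in mind the dashboard lists many other resource types, so it would need similar rules for each of them.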
I would recommend using Web UI (Dashboard) documentation from Kubernetes.
Deploying the Dashboard UI
The Dashboard UI is not deployed by default. To deploy it, run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
From your yaml I can see that you specified the namespace kube-system, but the dashboard is trying to list resources from the namespace default, at least that's what it says in your error message.
Also, your yaml seems inconsistent about the ServiceAccount name: in the file you have k8s-test, while the error message says the dashboard is using kubernetes-dashboard.