Kubernetes kube-apiserver to kubelet permissions

What controls the permissions when you call kubectl logs pod-name?
I've played around and tried calling the kubelet API from one of the controller nodes.
sudo curl -k --key /var/lib/kubernetes/cert-k8s-apiserver-key.pem --cert /var/lib/kubernetes/cert-k8s-apiserver.pem https://worker01:10250/pods
This fails with Forbidden (user=apiserver, verb=get, resource=nodes, subresource=proxy).
I've tried the same call using the admin key and cert and it succeeds and returns a healthy blob of JSON.
I'm guessing this is why kubectl logs pod-name doesn't work.
A little more reading suggests that the CN of the certificate determines the user that is authenticated and authorized.
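To confirm which user a client certificate maps to, you can inspect its subject with openssl (path taken from the curl command above; a quick sketch, and the exact output depends on how the certificate was generated):
# The CN becomes the authenticated username; O fields become the user's groups.
openssl x509 -in /var/lib/kubernetes/cert-k8s-apiserver.pem -noout -subject
# e.g. subject=CN = apiserver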
What controls whether a user is authorized to access the kubelet API?
Background
I'm setting up a K8s cluster using the following instructions: Kubernetes the not so hard way with Ansible.

Short Answer
The short answer is that you need to grant the user apiserver access to the node resource (and its subresources) by creating a ClusterRole and a ClusterRoleBinding.
Longer Explanation
Kubernetes has a bunch of resources. The relevant ones here are:
Role
Node
ClusterRole
ClusterRoleBinding
Roles and ClusterRoles are similar, except ClusterRoles are not namespaced.
A ClusterRole can be associated (bound) to a user with a ClusterRoleBinding object.
The kubelet exposes the following subresources (possibly more):
nodes/proxy
nodes/stats
nodes/log
nodes/spec
nodes/metrics
To make this work, you need to create a ClusterRole that allows access to these subresources of the Node resource.
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
EOF
Then you associate this ClusterRole with a user. In my case, the kube-apiserver is using a certificate with CN=apiserver.
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: apiserver
EOF
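Once both objects are applied, the grant can be verified before retrying the original command; a quick check (a sketch, assuming the binding above was created against the right cluster):
# Impersonate the apiserver user and check access to a kubelet subresource.
kubectl auth can-i get nodes/proxy --as apiserver
# Expected answer: yes (it was no before the ClusterRoleBinding existed).
# After that, the original call should work again:
kubectl logs pod-name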

Related

kubectl - Error from server (Forbidden): users "xxx@xxx.it" is forbidden: User "system:serviceaccount:gke-connect:connect-agent-sa"

I have this strange situation; how can I solve this problem?
ubuntu@anth-mgt-wksadmin:~$ kubectl get nodes
error: the server doesn't have a resource type "nodes"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info dump
Error from server (Forbidden): users "xxx@xxx.it" is forbidden: User "system:serviceaccount:gke-connect:connect-agent-sa" cannot impersonate resource "users" in API group "" at the cluster scope
I think the problem was caused by applying the following (while looking for a way to connect the admin cluster to the Cloud Console), but how do I roll it back?
USER_ACCOUNT=foo@example.com
cat <<EOF > /tmp/impersonate.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
- apiGroups:
  - ""
  resourceNames:
  - ${USER_ACCOUNT}
  resources:
  - users
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
EOF
# Apply impersonation policy to the cluster.
kubectl apply -f /tmp/impersonate.yaml
I have copied the admin.conf file from one admin cluster node to the admin workstation and renamed it to kubeconfig.
root@anth-admin-host1:~# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
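Assuming the copied admin.conf really does carry cluster-admin credentials, rolling back is just deleting the two objects that impersonate.yaml created (a sketch; the object names come from the YAML above):
# Remove the impersonation ClusterRole and ClusterRoleBinding using the copied kubeconfig.
kubectl --kubeconfig ./kubeconfig delete clusterrolebinding gateway-impersonate
kubectl --kubeconfig ./kubeconfig delete clusterrole gateway-impersonate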

Is there a way for K8s service account to create another service account in a different namespace?

I have an app which interacts with an existing service account ("the agent") in a designated namespace. I want the agent to be able to create additional service accounts and roles in other namespaces. Is there a way to do so?
I've already answered this question in the comments, but I've decided to also provide more comprehensive information with examples.
Background
Kubernetes includes an RBAC (role-based access control) mechanism that enables you to specify which actions are permitted for a specific user or group of users. Since Kubernetes v1.6, RBAC is enabled by default.
There are four Kubernetes objects we can use to configure the needed RBAC rules: Role, ClusterRole, RoleBinding and ClusterRoleBinding. Role and RoleBinding are namespaced, while ClusterRole and ClusterRoleBinding are cluster-scoped resources.
We use Role and RoleBinding to authorize a user for namespaced resources, and ClusterRole and ClusterRoleBinding for cluster-wide resources.
However, we can also mix these resources.
Below I will briefly describe common combinations.
NOTE: It is impossible to link a ClusterRoleBinding to a Role.
For every test case I created a new test namespace and a test-agent service account.
Role and RoleBinding
I created a simple Role and RoleBinding in a specific namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-role
  namespace: test
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-agent
We can see that test-agent has access only to resources in the test namespace:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
ClusterRole and RoleBinding
I created a ClusterRole and a RoleBinding:
NOTE: I didn't specify any namespace for ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
Now we can see that if a ClusterRole is linked to a ServiceAccount using a RoleBinding, the ClusterRole permissions apply ONLY to the namespace in which this RoleBinding has been created:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
ClusterRole and ClusterRoleBinding
Finally, I created a ClusterRole and a ClusterRoleBinding:
NOTE: I didn't specify any namespace for ClusterRole and ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
  namespace: test
Now we can see that if a ClusterRole is linked to a ServiceAccount using a ClusterRoleBinding, the ClusterRole permissions apply to all namespaces:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n kube-system --as=system:serviceaccount:test:test-agent
yes
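Coming back to the original question: since the agent must act in other namespaces, the ClusterRole + ClusterRoleBinding combination is the one to use. A minimal sketch (the agent's namespace and service account name are illustrative; note that RBAC will only let the agent create roles containing permissions it already holds itself):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: agent-sa-and-role-creator
rules:
# Let the agent create and read service accounts in any namespace.
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
# Let the agent create and read roles and rolebindings in any namespace.
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  - rolebindings
  verbs:
  - create
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: agent-sa-and-role-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: agent-sa-and-role-creator
subjects:
- kind: ServiceAccount
  name: the-agent
  namespace: agent-namespace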
Useful note: You can display all possible verbs for a specific resource using
kubectl api-resources -o wide, e.g. to display all possible verbs for Deployment we can use:
$ kubectl api-resources -o wide | grep deployment
deployments deploy apps/v1 true Deployment [create delete deletecollection get list patch update watch]

Attach IAM Role to Serviceaccount from a Pod in EKS

I am trying to attach an IAM role to a pod's service account from within the POD in EKS.
kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN
The current role attached to the $serviceaccount is outlined below:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: common-role
rules:
- apiGroups: [""]
  resources:
  - event
  - secrets
  - configmaps
  - serviceaccounts
  verbs:
  - get
  - create
However, when I execute the kubectl command I get the following:
error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user "system:servi...." cannot get resource "serviceaccounts" in API group "" ...
Is my role correct? Why can't I modify the service account?
By default, Kubernetes runs pods with the default service account, which doesn't have the right permissions. Since I cannot determine which one you are using for your pod, I can only assume that you are using either default or another one created by you. In either case, the error suggests that the service account you are using to run your pod does not have the proper rights.
If you run this pod with the default service account, you will have to add the appropriate rights to it. An alternative is to run your pod with another service account created for this purpose. Here's an example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: run-kubectl-from-pod
Then you will have to create an appropriate role (you can find the full list of verbs here):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-service-accounts
rules:
- apiGroups: [""]
  resources:
  - serviceaccounts
  verbs:
  - get
  - create
  - patch
  - list
I'm using more verbs here as a test; get and patch would be enough for this use case. I'm mentioning this since it's best practice to grant as few rights as possible.
Then create the RoleBinding accordingly:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-service-account-bind
subjects:
- kind: ServiceAccount
  name: run-kubectl-from-pod
roleRef:
  kind: Role
  name: modify-service-accounts
  apiGroup: rbac.authorization.k8s.io
Now you just have to reference that service account when you run your pod:
apiVersion: v1
kind: Pod
metadata:
  name: run-kubectl-in-pod
spec:
  serviceAccountName: run-kubectl-from-pod
  containers:
  - name: kubectl-in-pod
    image: bitnami/kubectl
    command:
    - sleep
    - "3600"
Once that is done, you just exec into the pod:
➜ kubectl-pod kubectl exec -ti run-kubectl-in-pod sh
And then annotate the service account:
$ kubectl get sa
NAME                   SECRETS   AGE
default                1         19m
eks-sa                 1         36s
run-kubectl-from-pod   1         17m
$ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN
serviceaccount/eks-sa annotated
$ kubectl describe sa eks-sa
Name: eks-sa
Namespace: default
Labels: <none>
Annotations: eks.amazonaws.com/role-arn:
Image pull secrets: <none>
Mountable secrets: eks-sa-token-sldnn
Tokens: <none>
Events: <none>
If you encounter any issues with a request being refused, start by reviewing your request attributes and determining the appropriate request verb.
You can also check your access with the kubectl auth can-i command:
kubectl-pod kubectl auth can-i patch serviceaccount
The API server will respond with a simple yes or no.
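You can also run the same check from outside the pod by impersonating the pod's service account (a sketch, assuming the default namespace used above):
kubectl auth can-i patch serviceaccounts --as=system:serviceaccount:default:run-kubectl-from-pod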
Please note that if you want to patch a service account to use an IAM role, you will have to delete and re-create any existing pods that are associated with the service account so that the credential environment variables are applied. You can read more about it here.
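For example, if the affected pods are managed by a Deployment, a rolling restart is enough to recreate them (the deployment name here is illustrative):
# Recreate the pods so they pick up the credential environment variables injected for the IAM role.
kubectl rollout restart deployment my-app -n $namespace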
While your role appears to be correct, please keep in mind that when executing kubectl, the RBAC permissions of your account in kubeconfig are relevant for whether you are allowed to perform an action.
From your question, I understand that your role is attached to the service account you are trying to annotate, which is irrelevant to the kubectl permission check.
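To see which identity kubectl itself is using (the one the RBAC check actually applies to), you can inspect the current kubeconfig context, for example:
# Show the user entry of the currently selected context.
kubectl config view --minify -o jsonpath='{.users[0].name}'
# On recent kubectl versions and clusters you can also ask the API server directly:
kubectl auth whoami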

Kubernetes service account default permissions

I am experimenting with service accounts. I believe the following should produce an access error (but it doesn't):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  serviceAccountName: test-sa
  containers:
  - image: alpine
    name: test-container
    command: [sh]
    args:
    - -ec
    - |
      apk add curl;
      KUBE_NAMESPACE="$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)";
      curl \
        --cacert "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" \
        -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        "https://kubernetes.default.svc/api/v1/namespaces/$KUBE_NAMESPACE/services";
      while true; do sleep 1; done;
kubectl apply -f test.yml
kubectl logs test-pod
What I see is a successful listing of services, but I would expect a permissions error because I never created any RoleBindings or ClusterRoleBindings for test-sa.
I'm struggling to find ways to list the permissions available to a particular SA, but according to Kubernetes check serviceaccount permissions, it should be possible with:
kubectl auth can-i list services --as=system:serviceaccount:default:test-sa
> yes
Though I'm skeptical whether that command is actually working, because I can replace test-sa with any gibberish and it still says "yes".
According to the documentation, service accounts by default have "discovery permissions given to all authenticated users". It doesn't say what that actually means, but from more reading I found this resource which is probably what it's referring to:
kubectl get clusterroles system:discovery -o yaml
> [...]
> rules:
> - nonResourceURLs:
> - /api
> - /api/*
> [...]
> verbs:
> - get
Which would imply that all service accounts have get permissions on all API endpoints, though the "nonResourceURLs" bit implies this wouldn't apply to APIs for resources like services, even though those APIs live under that path… (???)
If I remove the Authorization header entirely, I see an access error as expected. But I don't understand why it's able to get data using this empty service account. What's my misunderstanding and how can I restrict permissions correctly?
It turns out this is a bug in Docker Desktop for Mac's Kubernetes support.
It automatically adds a ClusterRoleBinding giving cluster-admin to all service accounts (!). It only intends to give this to service accounts inside the kube-system namespace.
It was originally raised in docker/for-mac#3694 but fixed incorrectly. I have raised a new issue docker/for-mac#4774 (the original issue is locked due to age).
A quick fix while waiting for the bug to be resolved is to run:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: docker-for-desktop-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:serviceaccounts:kube-system
EOF
I don't know if that might cause issues with future Docker Desktop upgrades but it does the job for now.
With that fixed, the code above correctly gives a 403 error, and would require the following to explicitly grant access to the services resource:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader
rules:
- apiGroups: [""]
  resources: [services]
  verbs: [get, list]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-sa-service-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-reader
subjects:
- kind: ServiceAccount
  name: test-sa
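After applying that Role and RoleBinding (in the namespace where test-sa lives, default in my case), a quick confirmation that access is now scoped correctly:
kubectl auth can-i list services --as=system:serviceaccount:default:test-sa
# yes
kubectl auth can-i list services -n kube-system --as=system:serviceaccount:default:test-sa
# no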
A useful command for investigating is kubectl auth can-i --list --as system:serviceaccount, which shows the rogue permissions were applying to all service accounts:
Resources   Non-Resource URLs   Resource Names   Verbs
*.*         []                  []               [*]
            [*]                 []               [*]
[...]
The same bug exists in Docker-Desktop for Windows.
It automatically adds a ClusterRoleBinding giving cluster-admin to all service accounts (!). It only intends to give this to service accounts inside the kube-system namespace.
This is because, in Docker Desktop, a ClusterRoleBinding named docker-for-desktop-binding by default gives the cluster-admin role to all service accounts.
For more details, check the issue here.
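To check whether your cluster is affected, you can inspect the subjects of that binding (a quick check: if they include the group system:serviceaccounts rather than system:serviceaccounts:kube-system, every service account is getting cluster-admin):
kubectl get clusterrolebinding docker-for-desktop-binding -o yaml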

How to create a service account to get a list of pods from inside a Kubernetes cluster?

I have created a service account to get a list of pods in minikube.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: list-pods
  namespace: default
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: list-pods_demo-sa
  namespace: default
roleRef:
  kind: Role
  name: list-pods
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: default
The problem is that I get an error message when I use the service account to get the list of pods. kubectl auth can-i list pod --as demo-sa always answers no.
You cannot use:
kubectl auth can-i list pod --as <something>
to impersonate ServiceAccounts. You can only impersonate users with --as and groups with --as-group.
A workaround is to use the service account token.
kubectl get secret demo-sa-token-7fx44 -o=jsonpath='{.data.token}' | base64 -d
You can use that token with any kubectl request. However, I checked with kubectl auth can-i list pod and I don't think auth works with a token (you always get a yes).
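As a sketch of that workaround (the secret name comes from the command above; the cluster name minikube matches the default minikube kubeconfig entry, so adjust if yours differs; on recent clusters that no longer auto-create token secrets, kubectl create token demo-sa can mint one instead):
TOKEN=$(kubectl get secret demo-sa-token-7fx44 -o=jsonpath='{.data.token}' | base64 -d)
# Add a kubeconfig user that authenticates with the token only, plus a context for it.
kubectl config set-credentials demo-sa --token="$TOKEN"
kubectl config set-context demo-sa-context --cluster=minikube --user=demo-sa
# Requests in this context are authorized as system:serviceaccount:default:demo-sa.
kubectl --context=demo-sa-context get pods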