Why can't I create pods as a user with enough permissions in Kubernetes?

I am following a tutorial on RBAC. I think I understand the main idea, but I don't get why this is failing:
kc auth can-i "*" pod/compute --as deploy#test.com
no
kc create clusterrole deploy --verb="*" --resource=pods --resource-name=compute
clusterrole.rbac.authorization.k8s.io/deploy created
kc create clusterrolebinding deploy --user=deploy#test.com --clusterrole=deploy
clusterrolebinding.rbac.authorization.k8s.io/deploy created
# this tells me that deploy#test.com should be able to create a pod named compute
kc auth can-i "*" pod/compute --as deploy#test.com
yes
# but it fails when trying to do so
kc run compute --image=nginx --as deploy#test.com
Error from server (Forbidden): pods is forbidden: User "deploy#test.com" cannot create resource "pods" in API group "" in the namespace "default"
The namespace should be irrelevant AFAIK, since this is a ClusterRole.

Restricting the create permission to a specific resource name is not supported.
This is from the Kubernetes documentation:
Note: You cannot restrict create or deletecollection requests by resourceName. For create, this limitation is because the object name is not known at authorization time.
This means the ClusterRole you created doesn't allow you to create any Pod.
You need to have another ClusterRole assigned where you don't specify the resource name.
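As a minimal sketch, assuming the original name-restricted ClusterRole stays in place for the other verbs (the deploy-create names below are illustrative, and kc is the same kubectl alias used above):
kc create clusterrole deploy-create --verb=create --resource=pods
kc create clusterrolebinding deploy-create --user=deploy#test.com --clusterrole=deploy-create
# create should now succeed, since no resourceName restriction applies to it
kc run compute --image=nginx --as deploy#test.com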

Related

OpenShift Monitoring with REST API

I am trying to use the OpenShift REST APIs to get the status of my cron jobs. I am the admin of my namespace, but I don't have cluster access, so I can't do anything at the cluster level.
Now, to get the status, I first create the role:
# oc create role podreader --verb=get --verb=list --verb=watch --resource=pods,cronjobs.batch,jobs.batch
role.rbac.authorization.k8s.io/podreader created
But when I try to add the role to a service account, it fails.
# oc create serviceaccount nagios
# oc policy add-role-to-user podreader system:serviceaccount:uc-immoscout-dev:nagios
Warning: role 'podreader' not found
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "podreader" not found
My main intention is to get the status of the cron jobs, jobs and pods that I am scheduling.
You'll have to add --role-namespace=namespace-of-role to the oc policy add-role-to-user command, otherwise the role is treated as a cluster role.
From the docs:
--role-namespace='': namespace where the role is located: empty means a role defined in cluster policy
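For example, assuming the role was created in the uc-immoscout-dev namespace referenced above (substitute your own namespace if it differs):
oc policy add-role-to-user podreader system:serviceaccount:uc-immoscout-dev:nagios --role-namespace=uc-immoscout-dev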

Kubernetes: understanding the output of kubectl auth can-i

I'm trying to understand why an operation is permitted on one cluster, but on the other I'm getting the following:
Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.100.0.1:443/api: namespaces is forbidden: User \"system:serviceaccount:kube-system:default\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces is forbidden: User \\\"system:serviceaccount:kube-system:default\\\" cannot list resource \\\"namespaces\\\" in API group \\\"\\\" at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"namespaces\"},\"code\":403}\n)"
I'm managing two Kubernetes clusters -
clusterA booted with Kops version v1.14.8
clusterB booted on AWS EKS version v1.14.9-eks-f459c0
So I've tried using the kubectl auth command to figure it out, and I do see that on one cluster I'm allowed but on the second I'm not, as you can see:
kubectl config use-context clusterA
Switched to context "clusterA".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
yes
kubectl config use-context clusterB
Switched to context "clusterB".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
no
Is there a way to understand what these two yes/no decisions are based on?
Thanks for helping out!
The yes/no decision is based on whether there is a ClusterRole plus a ClusterRoleBinding (or RoleBinding) that permits the default ServiceAccount in the kube-system namespace to perform the verb list on the resource namespaces.
The trick in the case of the namespaces resource is that there needs to be a ClusterRole instead of a Role, because namespaces are cluster-scoped.
You can check which ClusterRoles, Roles, ClusterRoleBindings and RoleBindings exist in a Kubernetes cluster using the commands below:
kubectl get clusterrole,clusterrolebinding
kubectl get role,rolebinding -n namespacename
For more details, refer to the Kubernetes RBAC documentation.
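As a minimal sketch (the namespace-lister names are illustrative), granting the default ServiceAccount in kube-system permission to list namespaces on clusterB would look like this:
kubectl create clusterrole namespace-lister --verb=list --resource=namespaces
kubectl create clusterrolebinding namespace-lister-binding --clusterrole=namespace-lister --serviceaccount=kube-system:default
kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:default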

A way to communicate to a Pod that it's restarting

I need to communicate to a Pod whether it's restarting or not, because depending on the situation my app works differently (it's a stateful app). I would rather not create another pod that runs a kind of watchdog and then informs my app whether it's restarting (after a fault). But maybe there is a way to do it with Kubernetes components (kubelet, ...).
Quoting from Kubernetes Docs:
Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account
(for example, default)
A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.
An RBAC Role or ClusterRole contains rules that represent a set of
permissions.
A Role always sets permissions within a particular namespace.
ClusterRole, by contrast, is a non-namespaced resource
So, in order to get/watch the status of the other pod, you can call the Kubernetes API from the pod running your code by using a ServiceAccount. Follow the steps below to automatically retrieve the other pod's status from a given pod without any external dependency (due to reliability concerns, you shouldn't rely on the nodes themselves).
Create a serviceaccount in your pod's (requestor pod) namespace
kubectl create sa pod-reader
If both pods are in the same namespace, create a Role and a RoleBinding.
Create a role
kubectl create role pod-reader --verb=get,watch --resource=pods
Create a rolebinding
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Otherwise, i.e. if the pods are in different namespaces, create a ClusterRole and a ClusterRoleBinding.
Create a clusterrole
kubectl create clusterrole pod-reader --verb=get,watch --resource=pods
Create a clusterrolebinding
kubectl create clusterrolebinding pod-reader-binding --clusterrole=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Verify the permissions
kubectl auth can-i watch pods --as=system:serviceaccount:<NAMESPACE>:pod-reader
Now deploy your pod (your app) with this ServiceAccount:
kubectl run <MY-POD> --image=<MY-CONTAINER-IMAGE> --serviceaccount=pod-reader
This will mount the ServiceAccount's secret token in your pod at /var/run/secrets/kubernetes.io/serviceaccount/token. Your app can use this token to make GET requests to the Kubernetes API server to get the status of the other pod. See the example below (it assumes your pod has the curl utility installed; alternatively, you can make the equivalent API call from your code, passing the Authorization header after reading the ServiceAccount token file mounted in the pod).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k
curl https://kubernetes.default/api/v1/watch/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD>?timeoutSeconds=30 -H "Authorization: Bearer ${TOKEN}" -k
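To detect restarts specifically, the pod object returned by the first call includes a restartCount per container under status.containerStatuses. A minimal sketch, assuming the image also has jq installed (same placeholders as above):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# a restartCount greater than 0 means the container has been restarted at least once
curl -s -k -H "Authorization: Bearer ${TOKEN}" https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> | jq '.status.containerStatuses[].restartCount'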
References:
Kubernetes API
serviceaccount

Can I use Role and ServiceAccounts with several namespaces?

I'm trying to connect my k8s cluster to my Ceph cluster with this guide:
https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/
I want to deploy rbd-provision pods into the kube-system namespace like this: https://paste.ee/p/C1pB4
After deploying a PVC I get errors because my PVC is in the default namespace. Can I do anything about that? I read the docs, and if I understood correctly I can't use a ServiceAccount across two namespaces, or can I?
No. A ServiceAccount is a namespaced object and is limited to that particular namespace only.
That said, a ServiceAccount can be granted permissions in another namespace.
For example, within the namespace "acme", grant the permissions in the view ClusterRole to the service account in the namespace "acme" named "myapp":
kubectl create rolebinding myapp-view-binding \
--clusterrole=view --serviceaccount=acme:myapp \
--namespace=acme
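Applied to the question above, a minimal sketch would be a RoleBinding in the default namespace whose subject is the provisioner's ServiceAccount in kube-system (the rbd-provisioner name and the role placeholder are assumptions; substitute the ones from your manifest):
kubectl create rolebinding rbd-provisioner-default --role=<ROLE-IN-DEFAULT-NAMESPACE> --serviceaccount=kube-system:rbd-provisioner --namespace=default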

Kubernetes RBAC authentication for default user

I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed there will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the cluster-admin ClusterRole is already given to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.
Alternatively, try giving it the cluster-admin role:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default
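If full cluster-admin is more than the workload needs (as the answer above warns), a more limited sketch is to bind one of the built-in ClusterRoles such as view, which grants read-only access (the default-sa-view binding name is illustrative):
kubectl create clusterrolebinding default-sa-view --clusterrole=view --serviceaccount=default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default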