Kubernetes RBAC default user

I'm currently reading up on RBAC and am using Docker for Desktop with the local Kubernetes cluster enabled.
If I run kubectl auth can-i get pods, which user, group, or serviceaccount is used by default?
Is it the same as calling:
kubectl auth can-i get pods --as docker-for-desktop --as-group system:serviceaccounts ?
kubectl config view shows:
contexts:
- context:
    cluster: docker-for-desktop-cluster
    namespace: default
    user: docker-for-desktop
  name: docker-for-desktop
...
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
But simply calling kubectl auth can-i get pods --as docker-for-desktop returns NO.
Thanks,
Kim

To answer your question:
If I run kubectl auth can-i get pods, which user, group, or serviceaccount is used by default?
As you can read on Configure Service Accounts for Pods:
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
You can use kubectl get serviceaccount to see which serviceaccounts are set up in the cluster.
Try checking which contexts you have available and switching to whichever one you need:
kubectl config get-contexts
kubectl config use-context docker-for-desktop
If you are experiencing an issue with a missing Role, please check Referring to Resources to set them up correctly for docker-for-desktop.
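A note on why --as docker-for-desktop alone can return no: with client-certificate authentication, the apiserver takes the username from the certificate's Common Name and the groups from its Organization fields, and on Docker Desktop the admin rights typically come from a group such as system:masters rather than from a binding on the bare username. A rough way to inspect the identity in that certificate, assuming openssl is available:
kubectl config view --raw -o jsonpath='{.users[?(@.name=="docker-for-desktop")].user.client-certificate-data}' | base64 --decode | openssl x509 -noout -subject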

Related

Kubernetes: understanding the output of kubectl auth can-i

I'm trying to understand why an operation is permitted on one cluster, but on the other I'm getting the following:
Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.100.0.1:443/api: namespaces is forbidden: User \"system:serviceaccount:kube-system:default\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces is forbidden: User \\\"system:serviceaccount:kube-system:default\\\" cannot list resource \\\"namespaces\\\" in API group \\\"\\\" at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"namespaces\"},\"code\":403}\n)"
I'm managing two Kubernetes clusters -
clusterA booted with Kops version v1.14.8
clusterB booted on AWS EKS version v1.14.9-eks-f459c0
So I've tried using the kubectl auth command to figure this out, and I can see that on one cluster I'm allowed, while on the second I'm not, as you can see:
kubectl config use-context clusterA
Switched to context "clusterA".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
yes
kubectl config use-context clusterB
Switched to context "clusterB".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
no
Is there a way to understand what these two yes/no decisions are based on?
Thanks for helping out!
The yes/no decision is based on whether there is a ClusterRole and a ClusterRoleBinding (or RoleBinding) that permits the default serviceaccount in the kube-system namespace to perform the verb list on the resource namespaces.
The trick in the case of the namespace resource is that there needs to be a ClusterRole instead of a Role, because namespaces are cluster-scoped resources.
You can check which ClusterRoles, Roles, ClusterRoleBindings, and RoleBindings exist in a Kubernetes cluster with the commands below:
kubectl get clusterrole,clusterrolebinding
kubectl get role,rolebinding -n namespacename
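For example, granting the permission from clusterB's error message could look roughly like this (the role and binding names below are illustrative, not taken from your cluster):
kubectl create clusterrole namespace-lister --verb=list --resource=namespaces
kubectl create clusterrolebinding namespace-lister-binding --clusterrole=namespace-lister --serviceaccount=kube-system:default
kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:default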
For more details, refer to the Kubernetes RBAC documentation here.

kubernetes service account permissions

When I create a service account on my docker-desktop Kubernetes environment on Windows 10 using
kubectl create serviceaccount test -n test-namespace
and then run the following command, it returns 'yes':
kubectl auth can-i create pods --all-namespaces --token <token from test service account>
But if I run the same setup on a cloud-managed Kubernetes cluster, it returns 'no'.
What is the difference between the setups? I'm trying to limit control on a local cluster.
Found the solution; this also applies to Windows: https://github.com/docker/for-mac/issues/3694#issuecomment-619474504
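For context, the linked issue is about Docker Desktop shipping a ClusterRoleBinding that grants cluster-admin to every ServiceAccount, which is why the local check returns 'yes'. A rough way to find and remove it (the binding name below is the commonly reported one and may differ in your version):
kubectl get clusterrolebindings -o wide | grep cluster-admin
kubectl delete clusterrolebinding docker-for-desktop-binding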

A way to communicate to a Pod that it's restarting

I need to communicate to a Pod whether it's restarting or not, because depending on the situation my app behaves differently (it's a stateful app). I would rather not create another pod that runs a kind of watchdog and then informs my app whether it's restarting (after a fault). But maybe there is a way to do it with Kubernetes components (the kubelet, ...).
Quoting from Kubernetes Docs:
Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account
(for example, default)
A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.
An RBAC Role or ClusterRole contains rules that represent a set of
permissions.
A Role always sets permissions within a particular namespace.
ClusterRole, by contrast, is a non-namespaced resource
So, in order to get/watch the status of the other pod, you can call the Kubernetes API from the pod running your code by using serviceaccounts. Follow the steps below to automatically retrieve the other pod's status from a given pod without any external dependency (due to reliability concerns, you shouldn't rely upon nodes).
Create a serviceaccount in your pod's (requestor pod) namespace
kubectl create sa pod-reader -n <NAMESPACE>
If both pods are in the same namespace, create a Role and RoleBinding:
Create a role
kubectl create role pod-reader --verb=get,watch --resource=pods
Create a rolebinding
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Otherwise, i.e. if the pods are in different namespaces, create a ClusterRole and ClusterRoleBinding:
Create a clusterrole
kubectl create clusterrole pod-reader --verb=get,watch --resource=pods
Create a clusterrolebinding
kubectl create clusterrolebinding pod-reader-binding --clusterrole=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Verify the permissions
kubectl auth can-i watch pods --as=system:serviceaccount:<NAMESPACE>:pod-reader
Now deploy your pod (your app) with this serviceaccount:
kubectl run <MY-POD> --image=<MY-CONTAINER-IMAGE> --serviceaccount=pod-reader
This will mount the serviceaccount's secret token in your pod at /var/run/secrets/kubernetes.io/serviceaccount/token. Your app can use this token to make GET requests to the Kubernetes API server in order to get the status of the pod. See the example below (this assumes your pod has the curl utility installed; however, you can make the relevant API call from your code and pass the Authorization header by reading the serviceaccount token file mounted in your pod).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k
curl https://kubernetes.default/api/v1/watch/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD>?timeoutSeconds=30 -H "Authorization: Bearer ${TOKEN}" -k
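If you only need a specific field rather than the whole object, for example the container restart count, you could extract it from the response (a sketch assuming jq is available in the image):
curl -s https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k | jq '.status.containerStatuses[0].restartCount'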
References:
Kubernetes API
serviceaccount

Login to GKE via service account with token

I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and ingress. I want to be able to update the images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then fetched the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
As the IP address for the API, I use the address shown in the GKE administration console as the cluster endpoint IP.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then I tried to do what you have done.
The token I used is the base64-decoded value of the token field in that secret.
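For example, one way to pull the token out directly (using the secret name from your question) is:
kubectl get secret foo-token-gqvgn -o jsonpath='{.data.token}' | base64 --decode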
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave me an error, as expected, because I need to grant permission to this ServiceAccount.
How can I grant permission to this ServiceAccount? I need to create a ClusterRole & ClusterRoleBinding with the necessary permissions.
To learn more, read about role-based-access-control.
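For example, a minimal grant that would let the foo ServiceAccount update deployment images in the default namespace might look like this (the role and binding names are illustrative; a namespaced Role is enough here because Deployments are namespaced, though a ClusterRole & ClusterRoleBinding also works):
kubectl create role deployment-updater --verb=get,list,patch --resource=deployments -n default
kubectl create rolebinding deployment-updater-binding --role=deployment-updater --serviceaccount=default:foo -n default
kubectl auth can-i patch deployments --as=system:serviceaccount:default:foo -n default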
Another thing I can do:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization.
You need to provide the cluster credentials:
Username: admin
Password: -----
You will find this info in GKE -> Kubernetes Engine -> {cluster} -> Show credentials.

Kubectl get nodes returns "the server doesn't have a resource type "nodes""

I installed Kubernetes, performed kubeadm init, and joined from the worker too. But when I run kubectl get nodes, it gives the following response:
the server doesn't have a resource type "nodes"
What might be the problem here? I could not see anything in /var/log/messages.
Any hints here?
In my case, I wanted to see the description of my pods.
When I used kubectl describe postgres-deployment-866647ff76-72kwf, the error said error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf".
I corrected it by adding pod before the pod name, as follows:
kubectl describe pod postgres-deployment-866647ff76-72kwf
It looks to me like the authentication credentials were not set correctly. Did you copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config? If you used kubeadm, the API server should be configured to run on 6443, not on 8080. Could you also check that the KUBECONFIG variable is not set?
It would also help to increase the verbosity level using the flag --v=99. Moreover, are you accessing the cluster from the same machine where the Kubernetes master components are installed, or from the outside?
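If the kubeconfig has not been copied yet, the usual kubeadm post-install steps are:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config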
I got this message when I was trying to play around with Docker Desktop. I had previously been doing a few experiments with Google Cloud and had run some kubectl commands for that. The result was that my ~/.kube/config file still had stale config for a now non-existent GCP cluster, and my default k8s context was set to that.
Try the following:
# Find what current contexts you have
kubectl config view
I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
So there is only one context now. If you have more than one context here, check that the one you expect is set as current-context. If not, change it with:
# Get rid of old contexts that you don't use
kubectl config delete-context some-old-context
# Selecting the context that I have auth for
kubectl config use-context docker-desktop