I am using Google cloud's GKE for my kubernetes operations.
I am trying to restrict access for the users that access the clusters from the command line. I have applied IAM roles in Google Cloud and given the view role to the service accounts and users. It all works fine if we use it through the API or with "--as" in kubectl commands, but when someone runs kubectl create for an object without specifying "--as", the object still gets created with the "default" service account of that particular namespace.
To overcome this problem, we gave restricted access to the "default" service account, but we were still able to create objects.
$ kubectl auth can-i create deploy --as default -n test-rbac
no
$ kubectl run nginx-test-24 -n test-rbac --image=nginx
deployment.apps "nginx-test-24" created
$ kubectl describe rolebinding default-view -n test-rbac
Name:         default-view
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  view
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  default  test-rbac
I expect that users who access the cluster through the CLI should not be able to create objects if they don't have permissions; even if they don't use the "--as" flag, they should be restricted.
Please take into account that first you need to review the prerequisites to use RBAC in GKE.
Also, please note that IAM roles apply to the entire Google Cloud project and all clusters within that project, while RBAC enables fine-grained authorization at the namespace level. So, with GKE, these approaches to authorization work in parallel.
For more references, please take a look at this document: RBAC in GKE
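To make the RBAC side concrete, a namespace-scoped RoleBinding can grant read-only access to an individual user; the user email below is a placeholder:

kubectl create rolebinding dev-view \
  --clusterrole=view \
  --user=dev-user@example.com \
  --namespace=test-rbac

IAM continues to govern project-wide access in parallel with a binding like this.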
For all the down-voters of this question, I wish you had tried pointing to this:
There is a file at:
~/.config/gcloud/configurations/config_default
In it, there is an option under the [container] section:
use_application_default_credentials
Set it to true.
Here you go, you learned something new. Enjoy. I wish you had tried helping instead of down-voting.
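For reference, the relevant part of config_default would look like this (gcloud config files use an INI-style layout):

[container]
use_application_default_credentials = true

With this set, kubectl uses the application default credentials instead of the gcloud CLI's own credentials, which is what resolved the issue described above.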
I am building an application which should execute tasks in separate containers/pods.
This application would be running in a specific namespace, and the new pods must be created in the same namespace as well.
I understand we can do something similar via custom CRDs and Operators, but I found that overly complicated, and we would need Golang knowledge for it.
Is there any way this could be achieved without having to learn Operators and Golang?
I am OK with using kubectl or the API within my container; I just want to connect to the host cluster and work in the same namespace.
Yes, this is certainly possible using a ServiceAccount and then connecting to the API from within the Pod.
First, create a ServiceAccount in your namespace using
kubectl create serviceaccount my-service-account
For your newly created ServiceAccount, give it the permissions you want using Roles and RoleBindings. The subject would be something like this:
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace
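For completeness, a minimal Role and RoleBinding pair that would let this ServiceAccount manage pods might look like the following; the names and namespace are assumptions matching the snippet above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace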
Then, add the ServiceAccount to the Pod from which you want to create other Pods (see the documentation). Credentials are automatically mounted inside the Pod via automountServiceAccountToken.
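A sketch of such a Pod spec, assuming the ServiceAccount above (the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: task-runner
  namespace: my-namespace
spec:
  serviceAccountName: my-service-account  # credentials get mounted automatically
  containers:
  - name: main
    image: my-app-image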
Now, from inside the Pod, you can either use kubectl or call the API directly using the mounted credentials. There are client libraries for a lot of programming languages to talk to Kubernetes; use those.
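For example, if your image includes kubectl, creating a task pod in the same namespace from inside the container could be as simple as this (name and image are placeholders):

kubectl run task-1 --image=busybox --restart=Never -- sh -c 'echo hello'

When run in-cluster without a kubeconfig, kubectl automatically picks up the mounted ServiceAccount token and namespace.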
I need to communicate to a Pod whether it is restarting or not, because depending on the situation my app works differently (it is a stateful app). I would rather not create another pod that runs a kind of watchdog and then informs my app whether it is restarting (after a fault). But maybe there is a way to do it with Kubernetes components (kubelet, ...).
Quoting from Kubernetes Docs:
Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default)
A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.
An RBAC Role or ClusterRole contains rules that represent a set of permissions.
A Role always sets permissions within a particular namespace.
A ClusterRole, by contrast, is a non-namespaced resource.
So, in order to get/watch the status of the other pod, you can call the Kubernetes API from the pod running your code, using a serviceaccount. Follow the steps below to retrieve the other pod's status from a given pod without any external dependency (due to reliability concerns, you shouldn't rely upon nodes):
Create a serviceaccount in your (requestor) pod's namespace:
kubectl create sa pod-reader
If both pods are in the same namespace, create a Role and a RoleBinding.
Create a role
kubectl create role pod-reader --verb=get,watch --resource=pods
Create a rolebinding
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Otherwise, i.e. if the pods are in different namespaces, create a ClusterRole and a ClusterRoleBinding.
Create a clusterrole
kubectl create clusterrole pod-reader --verb=get,watch --resource=pods
Create a clusterrolebinding
kubectl create clusterrolebinding pod-reader-binding --clusterrole=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Verify the permissions
kubectl auth can-i watch pods --as=system:serviceaccount:<NAMESPACE>:pod-reader
Now deploy your pod (your app) with this serviceaccount:
kubectl run <MY-POD> --image=<MY-CONTAINER-IMAGE> --serviceaccount=pod-reader
This mounts the serviceaccount's secret token in your pod, at /var/run/secrets/kubernetes.io/serviceaccount/token. Your app can use this token to make GET requests to the Kubernetes API server in order to get the status of the other pod. See the example below (it assumes your pod has the curl utility installed; however, you can make the equivalent API call from your code by reading the serviceaccount token file mounted in your pod and passing it in the header).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k
curl https://kubernetes.default/api/v1/watch/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD>?timeoutSeconds=30 -H "Authorization: Bearer ${TOKEN}" -k
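Instead of skipping TLS verification with -k, you can verify the API server against the cluster CA certificate, which is mounted next to the token:

curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> \
  -H "Authorization: Bearer ${TOKEN}"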
References:
Kubernetes API
serviceaccount
I'm trying to connect my k8s cluster to my ceph cluster using this guide:
https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/
I want to deploy rbd-provision pods into the kube-system namespace, like this: https://paste.ee/p/C1pB4
After deploying a PVC I get errors, because my PVC is in the default namespace. Can I do anything about that? I read the docs, and if I understand correctly I can't use a ServiceAccount across two namespaces, or can I?
No. A ServiceAccount is a namespaced object, and it is limited to a particular namespace only.
Service accounts can be granted permissions in another namespace.
For example, within the namespace "acme", grant the permissions in the view ClusterRole to the service account in the namespace "acme" named "myapp":
kubectl create rolebinding myapp-view-binding \
--clusterrole=view --serviceaccount=acme:myapp \
--namespace=acme
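The subject's namespace does not have to match the binding's namespace, so the same pattern covers your case: for example, to let a provisioner serviceaccount living in kube-system read objects in default (the serviceaccount name here is illustrative), you could run:

kubectl create rolebinding rbd-provisioner-default \
  --clusterrole=view --serviceaccount=kube-system:rbd-provisioner \
  --namespace=default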
I have deployed Kubernetes v1.8 in my workplace. I created roles for admin and view access to namespaces 3 months ago. In the initial phase, RBAC was working as per the access given to the users. Now RBAC is no longer being enforced: everyone who has access to the cluster has cluster-admin access.
Can you suggest what errors/changes could have caused this?
Ensure the RBAC authorization mode is still being used (--authorization-mode=…,RBAC is part of the apiserver arguments)
If it is, then check for a clusterrolebinding that is granting the cluster-admin role to all authenticated users:
kubectl get clusterrolebindings -o yaml | grep -C 20 system:authenticated
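If such a binding exists, it would look something like this (the metadata name is a guess; the giveaway is cluster-admin in roleRef combined with system:authenticated in subjects):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated

Deleting a binding like that restores per-user RBAC enforcement.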
I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and am getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed in the cluster in the default namespace will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the cluster-admin clusterrole is given to the default service account in the kube-system namespace. You can use that one for interacting with the cluster.
If you still want to grant the role, create the binding and try:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default
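A more tightly scoped alternative, given that the error above is only about listing pods, is to bind the built-in view clusterrole in just the default namespace:

kubectl create rolebinding default-view \
  --clusterrole=view --serviceaccount=default:default \
  --namespace=default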