Convert Kubernetes YAML to kubectl CLI command

I am looking for a single command, or maybe a combination of commands, for the following YAML file:
---
#
# Create a role, `pod-reader`, that can list pods and
# bind the default service account in the `mynamespace` namespace
# to that role.
#
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: mynamespace
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: mynamespace
subjects:
- kind: Group
  name: system:serviceaccounts:mynamespace
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
So, pretty much, allow every service account in the namespace to get, watch and list pods.

This assumes the namespace and service account are already created:
$ kubectl create namespace mynamespace
$ kubectl create serviceaccount mysa -n mynamespace
Create the role with:
$ kubectl create role pod-reader --namespace=mynamespace \
    --verb=get,list,watch \
    --resource=pods
Create the rolebinding with:
$ kubectl create rolebinding read-pods --namespace=mynamespace \
    --role=pod-reader \
    --group=system:serviceaccounts:mynamespace
Tip: if you want to preview the generated YAML rather than actually create the resource, append --dry-run=client -o yaml to the commands (older kubectl versions use --dry-run=true).
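For example, to preview the Role without creating it:
$ kubectl create role pod-reader --namespace=mynamespace \
    --verb=get,list,watch \
    --resource=pods \
    --dry-run=client -o yaml
The command prints the equivalent manifest to stdout instead of sending it to the API server.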

Related

Shell or bash in pods in Kubernetes

How do you disable shell or bash access to the containers in a pod? I do not want anyone to get inside a pod via kubectl exec, docker exec, or k9s.
kubectl is a CLI tool, so it connects to the Kubernetes API server and authenticates against it.
You can restrict a user through their Role, so configuring RBAC with the proper permissions will resolve your issue.
Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Example :
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-role
  namespace: default
rules:
- apiGroups: [""]
  resources:
  - pods/attach
  - pods/exec
  verbs: [""]
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-view
  namespace: default
subjects:
- kind: ServiceAccount
  name: serviceaccount
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-user-role
Check the authorization with:
$ kubectl auth can-i create pods --subresource=exec -n default --as=system:serviceaccount:default:serviceaccount

Why the pods running with a service account with needed permissions cannot list pods?

I have created a service account "serviceacc" in namespace xyz and gave it the needed permissions, yet it cannot list pods. Here are the steps I followed.
$ kubectl create namespace xyz
$ kubectl apply -f objects.yaml
where the content of objects.yaml is:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: listpodser
rules:
- apiGroups: [""]
  resources: ["pod"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: service-listpodser
subjects:
- kind: ServiceAccount
  name: serviceacc
  apiGroup: ""
roleRef:
  kind: Role
  name: listpodser
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceacc
  namespace: xyz
Then I checked if the service account has permission to list pods:
$ kubectl auth can-i get pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
$ kubectl auth can-i list pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
As we can see from the output of the above commands, the service account cannot get or list pods.
Simple naming confusion. Use pods instead of pod in the resource list.
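For reference, the corrected rule in the Role looks like this (only the resource name changes):
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]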
You can also do it in a simpler and easier way:
kubectl create sa serviceacc -n xyz
kubectl create role listpodser --verb=get,list --resource=po -n xyz
kubectl create -n xyz rolebinding service-listpodser --role=listpodser --serviceaccount=xyz:serviceacc
Note that the short name po is accepted here for pods; you can use the short name of any API object, e.g. pv instead of persistentvolume. To list the short names of all objects, run kubectl api-resources.
This way you can put these three lines into a shell script and create everything in one shot, as sketched below.
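A minimal sketch of such a script (the file name setup-rbac.sh is just an example):
#!/usr/bin/env sh
# Create the service account, role and rolebinding in the xyz namespace.
set -e
kubectl create sa serviceacc -n xyz
kubectl create role listpodser --verb=get,list --resource=po -n xyz
kubectl create rolebinding service-listpodser --role=listpodser --serviceaccount=xyz:serviceacc -n xyz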

Is there a way for K8s service account to create another service account in a different namespace?

I have an app which interacts with an existing service account ("the agent") in a designated namespace. I want the agent to be able to create additional service accounts and roles in other namespaces. Is there a way to do so?
I've already answered this question in the comment section, but I've decided to also provide more comprehensive information with examples.
Background
Kubernetes includes a role-based access control (RBAC) mechanism that enables you to specify which actions are permitted for a specific user or group of users. RBAC has been enabled by default since Kubernetes v1.6.
There are four Kubernetes objects that we can use to configure the needed RBAC rules: Role, ClusterRole, RoleBinding and ClusterRoleBinding. Role and RoleBinding are namespaced, while ClusterRole and ClusterRoleBinding are cluster-scoped resources.
We use Role and RoleBinding to authorize a user for namespaced resources, and ClusterRole and ClusterRoleBinding for cluster-wide resources.
However, we can also mix these resources.
Below I will briefly describe the common combinations.
NOTE: It is impossible to link a ClusterRoleBinding to a Role.
For every test case I created a new test namespace and a test-agent service account:
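For reference, they can be created with:
$ kubectl create namespace test
$ kubectl create serviceaccount test-agent -n test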
Role and RoleBinding
I created a simple Role and RoleBinding in a specific namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-role
  namespace: test
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-agent
We can see that test-agent has access only to resources in the test namespace:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
ClusterRole and RoleBinding
I created a ClusterRole and a RoleBinding:
NOTE: I didn't specify any namespace for the ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
Now we can see that if a ClusterRole is bound to a ServiceAccount using a RoleBinding, the ClusterRole permissions apply ONLY to the namespace in which this RoleBinding was created:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
no
ClusterRole and ClusterRoleBinding
Finally, I created a ClusterRole and a ClusterRoleBinding:
NOTE: I didn't specify any namespace for the ClusterRole and the ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: test-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: test-clusterrole
subjects:
- kind: ServiceAccount
  name: test-agent
  namespace: test
Now we can see that if a ClusterRole is bound to a ServiceAccount using a ClusterRoleBinding, the ClusterRole permissions apply to all namespaces:
$ kubectl auth can-i get pod -n test --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:test:test-agent
yes
$ kubectl auth can-i get pod -n kube-system --as=system:serviceaccount:test:test-agent
yes
Useful note: You can display all possible verbs for a specific resource using kubectl api-resources -o wide, e.g. to display all possible verbs for Deployment we can use:
$ kubectl api-resources -o wide | grep deployment
deployments   deploy   apps/v1   true   Deployment   [create delete deletecollection get list patch update watch]

RBAC not working as expected when trying to lock namespace

I'm trying to lock down a namespace in Kubernetes using RBAC, so I followed this tutorial.
I'm working on a bare-metal cluster (no minikube, no cloud provider) and installed Kubernetes using Ansible.
I created the following namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: lockdown
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-lockdown
  namespace: lockdown
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: [""]
  verbs: [""]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
And finally I tested the authorization using the following command:
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown
This SHOULD return "no" but I got "yes" :-(
What am I doing wrong?
Thx
A couple of possibilities:
Are you running the "can-i" check against the secured port or the unsecured port (add --v=6 to see)? Requests made against the unsecured (non-https) port are always authorized.
RBAC is additive, so if there is an existing ClusterRoleBinding or RoleBinding granting "get pods" permission to that service account (or to one of the groups system:serviceaccounts:lockdown, system:serviceaccounts, or system:authenticated), then that service account will have that permission. You cannot "ungrant" permissions by binding more restrictive roles; the commands below show one way to find such existing grants.
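To hunt for such grants, you can list everything the service account is currently allowed to do and then inspect the bindings in and above the namespace (lockdown is the namespace from the question):
# Everything the service account can currently do in the namespace
$ kubectl auth can-i --list -n lockdown --as system:serviceaccount:lockdown:sa-lockdown
# Bindings that may be granting extra permissions
$ kubectl get rolebindings -n lockdown
$ kubectl get clusterrolebindings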
I finally found what the problem was.
The Role and RoleBinding must be created inside the targeted namespace.
I changed the Role and RoleBinding below by specifying the namespace directly inside the YAML.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
  namespace: lockdown
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
  namespace: lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
In this example I gave the service account sa-lockdown permission to get, watch and list pods in the lockdown namespace.
Now if I ask whether it can get pods, kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns yes.
On the contrary, if I ask whether it can get deployments, kubectl auth can-i get deployments --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns no.
You can also leave the files as they were in the question and simply create them using kubectl create -f <file> -n lockdown, as shown below.
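For example, assuming the Role and RoleBinding from the question were saved as role.yaml and rolebinding.yaml (placeholder names):
$ kubectl create -f role.yaml -n lockdown
$ kubectl create -f rolebinding.yaml -n lockdown
The -n flag supplies the namespace that the manifests omit.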

How to protect k8s secrets using rbac?

I set up a k8s cluster in GKE with RBAC enabled and installed Istio into the cluster.
I followed this step (link) to create the key/certs for the Istio ingress controller; the key/certs are stored in a secret named istio-ingress-certs.
Now I want to use RBAC to limit access to istio-ingress-certs, so that every component in istio-system is allowed to read the secret, but none can modify or delete it.
I created a secrets-rbac.yaml file and ran kubectl apply -f secrets-rbac.yaml, which creates a role to read the secret and binds this role to all service accounts in the istio-system namespace.
To verify that a service account is not allowed to modify istio-ingress-certs, I use this command:
kubectl auth can-i edit secrets/istio-ingress-certs -n istio-system --as system:serviceaccount:istio-system:istio-pilot-service-account
I expected the command to return no, but it returns yes. I think I didn't set up RBAC correctly in the YAML file, but I am not clear on which part is incorrect.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: istio-system
  name: istio-ingress-certs-reader
rules:
- apiGroups: ["*"]
  resources: ["secrets"]
  resourceNames: ["istio-ingress-certs"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: istio-system
  name: read-istio-ingress-certs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-ingress-certs-reader
subjects:
- kind: Group
  name: system:serviceaccounts:istio-system
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:unauthenticated
  apiGroup: rbac.authorization.k8s.io