I am trying to create a Role and RoleBinding so I can use Helm. What are the equivalent kubectl commands to create the following resources? Using the command line makes DevOps simpler in my scenario.
Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager-foo
  namespace: foo
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding-foo
  namespace: foo
subjects:
- kind: ServiceAccount
  name: tiller-foo
  namespace: foo
roleRef:
  kind: Role
  name: tiller-manager-foo
  apiGroup: rbac.authorization.k8s.io
Update
According to @nightfury1204, I can run the following to create the Role:
kubectl create role tiller-manager-foo --namespace foo --verb=* \
  --resource=*.,*.apps,*.batch,*.extensions -n foo --dry-run -o yaml
This outputs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: tiller-manager-foo
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
The namespace is missing, and secondly, is this equivalent?
For Role:
kubectl create role tiller-manager-foo --verb=* --resource=*.batch,*.extensions,*.apps,*. -n foo
Support for --resource=* was added in kubectl 1.12.
For RoleBinding:
kubectl create rolebinding tiller-binding-foo --role=tiller-manager-foo --serviceaccount=foo:tiller-foo -n foo
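To check whether the commands really are equivalent, both can be rendered without creating anything and diffed against the original manifests (a sketch; newer kubectl spells the flag --dry-run=client, older releases accept plain --dry-run):

kubectl create role tiller-manager-foo --verb=* \
  --resource=*.,*.apps,*.batch,*.extensions -n foo --dry-run=client -o yaml
kubectl create rolebinding tiller-binding-foo --role=tiller-manager-foo \
  --serviceaccount=foo:tiller-foo -n foo --dry-run=client -o yaml

As seen in the output above, older kubectl releases omit metadata.namespace from dry-run output; the namespace given with -n is still applied when the command runs for real.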
kubectl apply -f can submit an arbitrary Kubernetes YAML file like what you have in the question.
I’d specifically suggest this here because you can commit these YAML files to source control, and if you’re using Helm anyway, this is far from the only Kubernetes YAML file you have. That gives you a consistent path even to bootstrap your Helm setup.
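For example (a minimal sketch, assuming the Role and RoleBinding above are saved together in a hypothetical file named tiller-rbac.yaml):

# Creates or updates both resources; re-running the same file is idempotent
kubectl apply -f tiller-rbac.yaml

Because apply is declarative, the same file can bootstrap a fresh cluster or reconcile drift on an existing one.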
Related
How do you disable shell or bash access to containers in a pod? I do not want anyone to get access inside the pod via kubectl exec, docker exec, or k9s.
kubectl is a CLI tool, so it connects to the K8s API server and authenticates.
You can restrict a user through their Role, so RBAC with the proper permissions will resolve your issue.
Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Example:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-role
  namespace: default
rules:
# RBAC is additive and deny-by-default: exec/attach are blocked simply by
# never granting them. Grant only the resources the user actually needs
# (an illustrative list) instead of "*", which would include pods/exec.
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods", "pods/log", "services", "configmaps", "deployments", "replicasets"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-user-view
  namespace: default
subjects:
- kind: ServiceAccount
  name: serviceaccount
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-user-role
Check the access using (exec is authorized as the create verb on the pods/exec subresource):
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:default:serviceaccount
I used the file below to create the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: reader-cr
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
subjects:
- kind: ServiceAccount
  name: sa-reader
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io
The kubeconfig I created is something similar to this:
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: qa
  cluster:
    certificate-authority-data: ca
    server: https://<server>:443
users:
- name: sa-reader
  user:
    as-user-extra: {}
    token: <token>
contexts:
- name: qa
  context:
    cluster: qa
    user: sa-reader
    namespace: default
current-context: qa
With this kubeconfig file, I am able to access resources in the default namespace but not any other namespace. How to access resources in other namespaces as well?
You can operate on a namespace explicitly by using the -n (--namespace) option to kubectl:
$ kubectl -n my-other-namespace get pod
Or by changing your default namespace with the kubectl config command:
$ kubectl config set-context --current --namespace my-other-namespace
With the above command, all future invocations of kubectl will assume the my-other-namespace namespace.
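To confirm which namespace the current context now defaults to (a quick check):
$ kubectl config view --minify | grep namespace: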
An empty namespace in metadata defaults to namespace: default, so your RoleBinding only applies to the default namespace.
See ObjectMeta.
I suspect (!) you need to apply the RoleBinding in each of the namespaces in which you want the ServiceAccount to be permitted.
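For example (a sketch, assuming a hypothetical second namespace named qa), the binding gets an explicit namespace while the subject still points at the ServiceAccount in default:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-rb
  namespace: qa        # explicit, rather than defaulting to "default"
subjects:
- kind: ServiceAccount
  name: sa-reader
  namespace: default   # the ServiceAccount itself lives in "default"
roleRef:
  kind: ClusterRole
  name: reader-cr
  apiGroup: rbac.authorization.k8s.io

Repeat this per namespace, or switch to a ClusterRoleBinding if the account should be able to read every namespace.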
I'm trying to grant the default service account in my namespace the ability to read ingress resources. I want to be able to read all ingress resources for the cluster; would that necessitate a ClusterRole? This is the Role and RoleBinding I've been trying.
The kubectl command kubectl auth can-i list ingress -n my-namespace --as=system:serviceaccount:my-namespace:default also returns "no"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: my-ingress-reader
rules:
- apiGroups: ["", "networking.k8s.io", "networking", "extensions"] # "" indicates the core API group
  resources: ["ingress"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-ingress-reader
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-namespace
roleRef:
  kind: Role
  name: my-ingress-reader
  apiGroup: rbac.authorization.k8s.io
Your Role rules use an incorrect API resource: resources: ["ingress"] must be resources: ["ingresses"].
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: my-ingress-reader
rules:
- apiGroups: ["", "networking.k8s.io", "networking", "extensions"] # "" indicates the core API group
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
To check the correct API resource names, you can use the command below:
root@controlplane:~# kubectl api-resources | grep -i ingress
ingresses        ing   extensions/v1beta1     true    Ingress
ingressclasses         networking.k8s.io/v1   false   IngressClass
ingresses        ing   networking.k8s.io/v1   true    Ingress
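After applying the corrected Role, the asker's earlier check (with the plural resource name) should return yes:

kubectl auth can-i list ingresses -n my-namespace --as=system:serviceaccount:my-namespace:default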
I have two PodSecurityPolicies:
000-privileged (only kube-system service accounts and admin users)
100-restricted (everything else)
I have a problem with their assignment to pods.
First policy binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - 000-privileged
  verbs:
  - use
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:privileged-kube-system
  namespace: kube-system
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
Second policy binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - 100-restricted
  verbs:
  - use
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: psp:restricted
  apiGroup: rbac.authorization.k8s.io
Everything works fine in kube-system.
However, in other namespaces, it does not work as expected:
If I create a Deployment (kubectl apply -f deployment.yml), its pod gets tagged with PSP 100-restricted.
If I create a Pod (kubectl apply -f pod.yml), it gets tagged with PSP 000-privileged. I really don't get why it's not 100-restricted.
My kubectl is configured with an external authentication token from OpenID Connect (OIDC).
I verified the access and everything seems OK:
kubectl auth can-i use psp/100-restricted
yes
kubectl auth can-i use psp/000-privileged
no
Any clue?
The problem was that my user had access to all verbs (*) on all resources (*) in the 'extensions' apiGroup through its Role.
The documentation is a bit unclear (https://github.com/kubernetes/examples/tree/master/staging/podsecuritypolicy/rbac):
The use verb is a special verb that grants access to use a policy while not permitting any other access. Note that a user with superuser permissions within a namespace (access to * verbs on * resources) would be allowed to use any PodSecurityPolicy within that namespace.
I got confused by the mention of 'within that namespace'. Since PodSecurityPolicies are not namespaced, I assumed they could not be used without a ClusterRole/RoleBinding in the namespace giving explicit access. Seems I was wrong...
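A useful way to see what a user can actually do in a namespace (which is what tripped things up here) is to list the effective permissions:

kubectl auth can-i --list -n <namespace>

Any row showing verbs * on resources * in the extensions group implicitly includes use on podsecuritypolicies.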
I modified my role to specify the following:
rules:
- apiGroups: ["", "apps", "autoscaling", "batch"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["extensions"]
  resources: ["*"]
  # Avoid using * here, to prevent granting the 'use' verb on the podsecuritypolicies resource
  verbs: ["create", "get", "watch", "list", "patch", "delete", "deletecollection", "update"]
And now it picks up the proper PSP. One interesting side effect: it also prevents users from modifying (create/delete/update/etc.) podsecuritypolicies.
It seems the 'use' verb is quite special after all...
I'm trying to create a read-only user. I want the user to be able to list nodes and pods and view the dashboard. I got the certs created and can connect, but I'm getting the following error.
$ kubectl --context minikube-ro get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "erst-operation" cannot list pods at the cluster scope
My cluster role...
$ cat helm/namespace-core/templates/pod-reader-cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
My cluster role binding...
$ cat helm/namespace-core/templates/pod-reader-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: erst-operation
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
I'm aware the above shouldn't grant permissions to see the dashboard, but how do I get it to just list the pods?
Your ClusterRole should include the core API group, since the pods resource is in the core group (denoted by "").
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # ClusterRoles are cluster-scoped, so no metadata.namespace is needed
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
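Once applied, the grant can be confirmed from an admin context (a sketch; using --as requires impersonation permission):

kubectl auth can-i list pods --all-namespaces --as=erst-operation

which should now return yes.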