I'm trying to create a Job to list all resources, because my connection is terrible. Is there any way to give a pod permission to run the command below?
Here is the ClusterRole that I am trying:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: istio-system
  name: workaround
rules:
- apiGroups: [""]
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups: ['*']
  resources:
  - '*'
  verbs:
  - '*'
The command is:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
If you are just looking to give your workload an admin role, you can use the prebuilt cluster-admin ClusterRole, which should be available on every Kubernetes cluster.
See the docs for more details - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
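For example, a minimal sketch of binding it to the Job's service account (the name list-all-admin and the default service account in istio-system are assumptions, not from the question):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: list-all-admin # hypothetical name
subjects:
- kind: ServiceAccount
  name: default # assumed service account the Job's pod runs as
  namespace: istio-system
roleRef:
  kind: ClusterRole
  name: cluster-admin # the prebuilt role
  apiGroup: rbac.authorization.k8s.io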
I noticed that a new ClusterRole, eks:cloud-controller-manager, appeared in our EKS cluster. We never created it. I tried to find the origin of this ClusterRole but was not able to.
Any idea what the eks:cloud-controller-manager ClusterRole does in an EKS cluster?
$ kubectl get clusterrole eks:cloud-controller-manager -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"eks:cloud-controller-manager"},"rules":[{"apiGroups":[""],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch"]},{"apiGroups":[""],"resources":["services"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["services/status"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["serviceaccounts"],"verbs":["create","get"]},{"apiGroups":[""],"resources":["persistentvolumes"],"verbs":["get","list","update","watch"]},{"apiGroups":[""],"resources":["endpoints"],"verbs":["create","get","list","watch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","get","list","watch","update"]},{"apiGroups":[""],"resources":["serviceaccounts/token"],"verbs":["create"]}]}
  creationTimestamp: "2022-08-02T00:25:52Z"
  name: eks:cloud-controller-manager
  resourceVersion: "762242250"
  uid: 34e568bb-20b5-4c33-8a7b-fcd081ae0a28
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - serviceaccounts/token
  verbs:
  - create
# ... (output truncated; the full rule set is visible in the annotation above)
I tried to find this object in our GitOps repo but could not find it.
This role is created by AWS when you provision the cluster. It is for the AWS cloud-controller-manager, which integrates AWS services (e.g. CLB/NLB, EBS) with Kubernetes. You will also find other roles, like eks:fargate-manager, which integrates with Fargate.
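To see which other AWS-managed roles exist in a cluster, a quick check (just a sketch, not an official procedure) is to filter ClusterRoles by the eks: prefix:
kubectl get clusterroles | grep '^eks:'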
In the manifest YAML below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups: [apiextensions.k8s.io]
  resources: [customresourcedefinitions]
  verbs: ['*']
- apiGroups: [monitoring.coreos.com]
  resources:
  - alertmanagers
  - prometheuses
  - prometheuses/finalizers
  - alertmanagers/finalizers
  - servicemonitors
  - prometheusrules
  verbs: ['*']
What does apiGroups in the rules signify?
In Kubernetes, a resource belongs either to the core API group or to a named API group.
Example:
kubectl api-resources | grep 'crds\|pods\|NAME'
NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
pods                        po           v1                        true         Pod
customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition
In the above output, the apiVersion for pods is shown as v1, so it is a core-group resource, whereas the apiVersion for customresourcedefinitions is shown as apiextensions.k8s.io/v1, which indicates that it is a grouped resource under the group apiextensions.k8s.io.
When we define RBAC rules (Roles/ClusterRoles) for grouped resources, we need to mention the apiGroups along with the resources.
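As a sketch of the difference, a rule for the core resource pods leaves the group name empty, while a rule for the grouped customresourcedefinitions names its group (taken from the APIVERSION column above):
rules:
- apiGroups: [""] # core group, e.g. pods, services, configmaps
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: ["apiextensions.k8s.io"] # named group from the APIVERSION column
  resources: ["customresourcedefinitions"]
  verbs: ["get", "list"]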
The role namespace-limited should have full access to all resources (of the specified API groups) inside of a namespace. My Role manifest looks like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups:
  - core
  - apps
  - batch
  - networking.k8s.io
  resources: ["*"] # asterisk to grant access to all resources of the specified API groups
  verbs: ["*"]
I associated the Role with a ServiceAccount using a RoleBinding, but unfortunately this ServiceAccount has no access to Pod, Service, Secret, ConfigMap, and Endpoints resources. These resources are all part of the core API group. All the other common workloads work, though. Why is that?
The core group, also referred to as the legacy group, is at the REST path /api/v1 and uses apiVersion: v1. In RBAC rules its name is the empty string, so you need to use "" for the core API group; the string core matches no API group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: restricted-xample
  name: namespace-limited
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
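For the checks below to return yes, the Role also has to be bound; a minimal RoleBinding sketch, assuming the default service account used in the test commands:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-limited-binding # hypothetical name
  namespace: restricted-xample
subjects:
- kind: ServiceAccount
  name: default
  namespace: restricted-xample
roleRef:
  kind: Role
  name: namespace-limited
  apiGroup: rbac.authorization.k8s.io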
To test the permissions of the service account, use the commands below:
kubectl auth can-i get pods --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get secrets --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get configmaps --as=system:serviceaccount:restricted-xample:default -n restricted-xample
kubectl auth can-i get endpoints --as=system:serviceaccount:restricted-xample:default -n restricted-xample
Just figured out that it works when I omit the core keyword, like in this example. The following Role manifest works:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-limited
  namespace: restricted-xample
rules:
- apiGroups: ["", "apps", "batch", "networking.k8s.io"]
  resources: ["*"]
  verbs: ["*"]
But why it does not work when I specify the core API group by name is a mystery to me.
I am trying to allow some users in my org to forward ports to our production namespace in Kubernetes. However, I don't want them to be able to forward ports to all services. I want to restrict access to only certain services. Is this possible?
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
  namespace: production
subjects:
- kind: User
  name: "xyz@org.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: allow-port-forward-for-deployment-a
  apiGroup: rbac.authorization.k8s.io
The above setup allows forwarding to all services, but I don't want that.
I believe you can't. According to the docs:
Resources can also be referred to by name for certain requests through
the resourceNames list. When specified, requests can be restricted to
individual instances of a resource. To restrict a subject to only
“get” and “update” a single configmap, you would write:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]
Note that create requests
cannot be restricted by resourceName, as the object name is not known
at authorization time. The other exception is deletecollection.
Since you want to give the user permission to create port forwards, I don't think you can.
These rules worked for me:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: port-forward
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["get", "create"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
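To grant these rules to someone, bind the Role; for example (the user name jane is an assumption):
kubectl create rolebinding port-forward-jane --role=port-forward --user=jane -n default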
Assuming users already have access to your Kubernetes cluster and the relevant namespace, they can simply port-forward a local port to a pod (resource) port.
How can you do this?
kubectl port-forward <POD_NAME> <LOCAL_PORT>:<POD_PORT>
See Documentation
Quoting from the document: kubectl port-forward allows using a resource name, such as a pod name, to select a matching pod to port forward to, since Kubernetes v1.10.
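For example (the service name and ports here are assumptions):
kubectl port-forward service/my-service 8080:80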
Refer to this article if you wish; it nicely explains when you would need RBAC vs plain kubectl port-forward.
RBAC is useful only when you want a person or a group of people to be restricted to port-forwarding to services in a particular namespace of your Kubernetes cluster.
Workaround A: StatefulSets and resourceNames
It is possible to restrict port forwarding to a pod with a specific name. resourceNames refer to resources, not subresources:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
  resources: ["pods/portforward"]
  resourceNames: ["my-app"]
  verbs: ["create"]
A StatefulSet generates predictable pod names, but is different from a ReplicaSet and might not fit your use case.
Workaround B: Jump pod and NetworkPolicy
Sketch:
- A StatefulSet that runs kubectl port-forward services/my-service inside the cluster (JUMP).
- A NetworkPolicy which restricts traffic from pods belonging to JUMP to the target service (sketched after this list).
- RBAC which restricts creation of the portforward subresource to the pods of JUMP, up to a predefined maximum number of replicas: resourceNames: ["jump-0", "jump-1", ..., "jump-N"].
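A sketch of the NetworkPolicy piece, assuming the JUMP pods carry the label app: jump and the target service selects pods labeled app: my-service (both labels are assumptions):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-jump # hypothetical name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-service # assumed label on the target pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: jump # assumed label on the JUMP pods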
I am trying to create a Role and RoleBinding so I can use Helm. What are the equivalent kubectl commands to create the following resources? Using the command line makes DevOps simpler in my scenario.
Role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager-foo
  namespace: foo
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding-foo
  namespace: foo
subjects:
- kind: ServiceAccount
  name: tiller-foo
  namespace: foo
roleRef:
  kind: Role
  name: tiller-manager-foo
  apiGroup: rbac.authorization.k8s.io
Update
According to @nightfury1204 I can run the following to create the Role:
kubectl create role tiller-manager-foo --namespace foo --verb=* --resource=*.,*.apps,*.batch,*.extensions -n foo --dry-run -o yaml
This outputs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: tiller-manager-foo
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - extensions
  resources:
  - '*'
  verbs:
  - '*'
The namespace is missing, and secondly, is this equivalent?
For Role:
kubectl create role tiller-manager-foo --verb=* --resource=*.batch,*.extensions,*.apps,*. -n foo
--resource=* support was added in kubectl version 1.12.
For RoleBinding:
kubectl create rolebinding tiller-binding-foo --role=tiller-manager-foo --serviceaccount=foo:tiller-foo -n foo
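To verify the binding, kubectl auth can-i can impersonate the service account (a quick sanity check, not part of the original answer):
kubectl auth can-i create deployments --as=system:serviceaccount:foo:tiller-foo -n foo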
kubectl apply -f can submit an arbitrary Kubernetes YAML file like what you have in the question.
I’d specifically suggest this here because you can commit these YAML files to source control, and if you’re using Helm anyway, this is far from the only Kubernetes YAML file you have. That gives you a consistent path even to bootstrap your Helm setup.
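For example, with both manifests saved in one file (the file name is an assumption):
kubectl apply -f tiller-rbac.yaml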