apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: xi-{{instanceId}}
  name: deployment-creation
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
In the above example, I permit various operations on pods and jobs.
For pods, the apiGroup is blank. For jobs, the apiGroup may be batch or extensions.
Where can I find all the possible resources, and how do I know which apiGroup to use with each resource?
kubectl api-resources will list all the supported resource types along with the API group each one belongs to.
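For example:

# List every resource type together with its short names, API group, and kind
kubectl api-resources

# Include the verbs each resource supports
kubectl api-resources -o wide

# Show only the resources in a particular group, e.g. batch
kubectl api-resources --api-group=batch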
Just to add to #suresh's answer, here is a (non-exhaustive) list of common apiGroups and the resources they contain; the full set is documented in the Kubernetes API reference:
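""                          # core (legacy) group: pods, services, secrets, configmaps, nodes, namespaces
apps                        # deployments, daemonsets, replicasets, statefulsets
batch                       # jobs, cronjobs
rbac.authorization.k8s.io   # roles, rolebindings, clusterroles, clusterrolebindings
networking.k8s.io           # ingresses, networkpolicies
storage.k8s.io              # storageclasses, volumeattachments
apiextensions.k8s.io        # customresourcedefinitions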
I want to restrict users in an RBAC AKS/Kubernetes cluster namespace so that they can fetch only regular Secrets but not TLS Secrets. I have my ClusterRole with the following API permissions, but it does not work: I am unable to restrict users to fetching only regular Secrets and not TLS Secrets.
Code:
---
#ClusterRole-NamespaceAdmin-RoleGranter
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: clusterrole-ns-admin
rules:
# "Pods" rules
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# "Nodes" rules - Node rules are effective only on cluster-role-binding
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# "Secrets" rules
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# "TLS Secrets" rules
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes.io/tls"]
  verbs: ["get", "watch", "list"]
Thanks in advance!
The short answer is that it's not possible. There is only the kind Secret in Kubernetes, and RBAC rules apply to a kind; there is no separate kind for a TLS Secret. kubernetes.io/tls is merely the value of a Secret's type field, which RBAC cannot match on.
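To illustrate (with a hypothetical Secret named my-tls-cert): a TLS Secret is still kind: Secret, and resourceNames in an RBAC rule matches metadata.name, not the type field, so the rule in the question would only match a Secret literally named "kubernetes.io/tls".

apiVersion: v1
kind: Secret
metadata:
  name: my-tls-cert   # resourceNames would have to match this name
  namespace: dev      # hypothetical namespace
type: kubernetes.io/tls   # the "TLS-ness" lives here, invisible to RBAC
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>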
I set up a Ceph cluster and mounted it manually with sudo mount -t following the official documentation, and I checked the status of my Ceph cluster: no problems there. Now I am trying to mount my CephFS on Kubernetes, but my pod is stuck in ContainerCreating after I run the kubectl create command because the mount is failing. I looked at many related problems/solutions online, but nothing works.
As reference, I am following this guide: https://medium.com/velotio-perspectives/an-innovators-guide-to-kubernetes-storage-using-ceph-a4b919f4e469
My setup consists of 5 AWS instances, and they are as follows:
Node 1: Ceph Mon
Node 2: OSD1 + MDS
Node 3: OSD2 + K8s Master
Node 4: OSD3 + K8s Worker1
Node 5: CephFS + K8s Worker2
Is it okay to stack K8s on top of the same instance as Ceph? I am pretty sure that is allowed, but if that is not allowed, please let me know.
In the output of kubectl describe pod, this is the error/warning:
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30 --scope -- mount -t ceph -o name=kubernetes-dynamic-user-4d05a2df-3639-11ea-b2d3-5a4147fda646,secret=AQC4whxeqQ9ZERADD2nUgxxOktLE1OIGXThBmw== 172.31.15.110:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-4d05a269-3639-11ea-b2d3-5a4147fda646 /root/userone/kubelet/pods/bbf28924-3639-11ea-879d-0a6b51accf30/volumes/kubernetes.io~cephfs/pvc-4777686c-3639-11ea-879d-0a6b51accf30
Output: Running scope as unit run-2382233.scope.
couldn't finalize options: -34
These are my .yaml files:
Provisioner:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update", "create"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["kube-dns", "coredns"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-provisioner-dt
  namespace: test-dt
subjects:
- kind: ServiceAccount
  name: test-provisioner-dt
  namespace: test-dt
roleRef:
  kind: ClusterRole
  name: test-provisioner-dt
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-provisioner-dt
  namespace: test-dt
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres-pv
  namespace: test-dt
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.31.15.110:6789
  adminId: admin
  adminSecretName: ceph-secret-admin-dt
  adminSecretNamespace: test-dt
  claimRoot: /pvc-volumes
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-dt
spec:
  storageClassName: postgres-pv
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
The output of kubectl get pv and kubectl get pvc shows the volumes are bound and claimed, with no errors. The provisioner pod logs all show success, with no errors.
Please help!
Is it possible to restrict the ability of particular users to dynamically provision disks from storageclasses? Or, for example, only allowing particular namespaces to be able to use a storageclass?
Fair warning: I haven't tested this!
StorageClass is just an API endpoint, and RBAC works by restricting access to those endpoints, so in theory this should work just fine:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sc-access   # metadata names must be DNS-compatible, so no underscores
rules:
- apiGroups: ["storage.k8s.io"]   # StorageClasses live only in this group, not in core
  resources: ["storageclasses"]   # the resource name is plural
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
If that doesn't work, you might be able to restrict access directly via the nonResourceURLs option:
rules:
- nonResourceURLs: ["/storage.k8s.io/v1/storageclasses"]
  verbs: ["get", "post"]
Storage resource quota can also be used to restrict usage of storage classes per namespace.
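A minimal sketch of that approach, assuming a namespace dev and a storage class named postgres-pv (both hypothetical here): setting the per-class quotas to zero effectively blocks any PVC in that namespace from using the class.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: deny-postgres-pv   # hypothetical name
  namespace: dev           # hypothetical namespace
spec:
  hard:
    # No storage may be requested from this class in this namespace
    postgres-pv.storageclass.storage.k8s.io/requests.storage: "0"
    # No PVCs referencing this class may be created at all
    postgres-pv.storageclass.storage.k8s.io/persistentvolumeclaims: "0"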
In my 1.9 cluster I created this deployment role for the dev user. Deployment works as expected. Now I want to give the developer exec and logs access. Which role do I need to add to allow exec into a pod?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Error message:
kubectl exec nginx -it -- sh
Error from server (Forbidden): pods "nginx" is forbidden: User "dev" cannot create pods/exec in the namespace "dev"
Thanks
SR
The RBAC docs say that
Most resources are represented by a string representation of their name, such as “pods”, just as it appears in the URL for the relevant API endpoint. However, some Kubernetes APIs involve a “subresource”, such as the logs for a pod. [...] To represent this in an RBAC role, use a slash to delimit the resource and subresource.
To allow a subject to read both pods and pod logs, and be able to exec into the pod, you would write:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
Some client libraries may do an HTTP GET to negotiate a websocket first, which would require the "get" verb on pods/exec as well. kubectl sends an HTTP POST instead, which is why it requires the "create" verb.
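Note that the Role grants nothing until it is bound to the user. A minimal RoleBinding sketch, assuming (from the error message) a user named dev and a namespace dev:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-pod-logs-reader-binding   # hypothetical name
  namespace: dev                          # assumed from the question
subjects:
- kind: User
  name: dev   # assumed from the error message
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io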
I have the following clusterrole
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: test
rules:
- apiGroups: [""]
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
with the following error on create
jonathan#sc-test-cluster:~$ kubectl create clusterrole role.yml
error: at least one verb must be specified
You can either create it from a file using -f, or imperatively by passing options to kubectl create clusterrole (see also the docs), but not both. As written, kubectl create clusterrole role.yml treats role.yml as the name of the ClusterRole and finds no --verb option, hence the error. Try the following:
$ kubectl create -f role.yml
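Alternatively, the imperative form should produce an equivalent ClusterRole (a sketch; if crontabs is a custom resource you may need the fully qualified name, e.g. crontabs.<group>):

$ kubectl create clusterrole test --verb=get,list,watch,create,update,patch,delete --resource=crontabs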