When listing all the API resources in K8s you get:
$ kubectl api-resources -owide
NAME                            SHORTNAMES   APIGROUP                       NAMESPACED   KIND                           VERBS
bindings                                                                    true         Binding                        [create]
componentstatuses               cs                                          false        ComponentStatus                [get list]
configmaps                      cm                                          true         ConfigMap                      [create delete deletecollection get list patch update watch]
endpoints                       ep                                          true         Endpoints                      [create delete deletecollection get list patch update watch]
events                          ev                                          true         Event                          [create delete deletecollection get list patch update watch]
limitranges                     limits                                      true         LimitRange                     [create delete deletecollection get list patch update watch]
namespaces                      ns                                          false        Namespace                      [create delete get list patch update watch]
nodes                           no                                          false        Node                           [create delete deletecollection get list patch update watch]
persistentvolumeclaims          pvc                                         true         PersistentVolumeClaim          [create delete deletecollection get list patch update watch]
persistentvolumes               pv                                          false        PersistentVolume               [create delete deletecollection get list patch update watch]
pods                            po                                          true         Pod                            [create delete deletecollection get list patch update watch]
podtemplates                                                                true         PodTemplate                    [create delete deletecollection get list patch update watch]
replicationcontrollers          rc                                          true         ReplicationController          [create delete deletecollection get list patch update watch]
resourcequotas                  quota                                       true         ResourceQuota                  [create delete deletecollection get list patch update watch]
secrets                                                                     true         Secret                         [create delete deletecollection get list patch update watch]
serviceaccounts                 sa                                          true         ServiceAccount                 [create delete deletecollection get list patch update watch]
services                        svc                                         true         Service                        [create delete get list patch update watch]
mutatingwebhookconfigurations                admissionregistration.k8s.io   false        MutatingWebhookConfiguration   [create delete deletecollection get list patch update watch]
... etc ...
Many list the verb deletecollection, which sounds useful, but I can't run it, e.g.:
$ kubectl deletecollection
Error: unknown command "deletecollection" for "kubectl"
Run 'kubectl --help' for usage.
unknown command "deletecollection" for "kubectl"
Nor can I find it in the docs, except where it appears in the api-resources output above or is mentioned as a verb.
Is there a way to invoke deletecollection?
If it does what I think it does, i.e. delete all the pods of a certain type, it sounds like it would beat the sequence of grep/awk/xargs I normally end up writing.
The delete verb refers to deleting a single resource, for example a single Pod. The deletecollection verb refers to deleting multiple resources at the same time, for example multiple Pods using a label or field selector or all Pods in a namespace.
To give some examples from the API documentation:
To delete a single Pod: DELETE /api/v1/namespaces/{namespace}/pods/{name}
To delete multiple Pods (or, deletecollection):
All pods in a namespace: DELETE /api/v1/namespaces/{namespace}/pods
All pods in a namespace matching a given label selector: DELETE /api/v1/namespaces/{namespace}/pods?labelSelector=someLabel%3dsomeValue
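Such a deletecollection call can also be issued directly against the API. A minimal sketch, assuming kubectl proxy is running on localhost:8001 and using a hypothetical label app=myapp:
$ kubectl proxy &
$ curl -X DELETE "http://localhost:8001/api/v1/namespaces/default/pods?labelSelector=app%3Dmyapp"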
Regarding kubectl: You cannot invoke deletecollection explicitly with kubectl.
Instead, kubectl will infer on its own whether to use delete or deletecollection depending on how you invoke kubectl delete. When deleting a single resource (kubectl delete pod $POD_NAME), kubectl will use a delete call, and when using a label selector or simply deleting all Pods (kubectl delete pods -l $LABEL=$VALUE or kubectl delete pods --all), it will use the deletecollection verb.
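If you want to verify which calls your kubectl version actually makes, raise the client log verbosity; at -v=6 and above kubectl logs every request URL, so you can see whether it issued one DELETE per Pod or a single DELETE against the collection (app=myapp is again a hypothetical label):
$ kubectl delete pods -l app=myapp -v=6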
DeleteCollection is not a kubectl command or parameter.
When RBAC is active, Kubernetes uses verbs to define what type of access you have to a class of objects.
deletecollection is a verb used in an RBAC Role definition to authorize (or not) deleting multiple objects of the same kind, such as Pods, Deployments, or Services, in a single request.
Example of a YAML Role definition using these verbs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-admin
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list", "delete", "deletecollection"]
Related
I have a Kubernetes cluster (v1.14.10) which runs the Kubernetes ingress controller (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0). I see the following log message in the nginx pods when trying to update the ingress controller to 0.30.0:
error updating ingress rule: ingresses.networking.k8s.io "test" is forbidden: User "system:serviceaccount:ingress:nginx" cannot update resource "ingresses/status" in API group "networking.k8s.io" in the namespace "test"
The ClusterRoleBinding and ClusterRole of nginx contain the following permissions:
# kubectl describe clusterrolebinding nginx-role
Name:         nginx-role
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  nginx-role
Subjects:
  Kind            Name   Namespace
  ----            ----   ---------
  ServiceAccount  nginx  ingress
# kubectl describe clusterrole nginx-role
Name:         nginx-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                           Non-Resource URLs  Resource Names  Verbs
  ---------                           -----------------  --------------  -----
  events                              []                 []              [create patch]
  services                            []                 []              [get list update watch]
  ingresses.extensions                []                 []              [get list watch update]
  ingresses.networking.k8s.io         []                 []              [get list watch]
  namespaces                          []                 []              [get update]
  configmaps                          []                 []              [list watch get create update]
  nodes                               []                 []              [list watch get]
  endpoints                           []                 []              [list watch]
  pods                                []                 []              [list watch]
  secrets                             []                 []              [list watch]
  ingresses.extensions/status         []                 []              [update]
  ingresses.networking.k8s.io/status  []                 []              [update]
The Ingress configuration contains the following apiVersion. I don't know whether this is the issue, given the new networking.k8s.io/v1beta1 package (https://github.com/kubernetes/ingress-nginx/pull/4127):
apiVersion: extensions/v1beta1
kind: Ingress
Can you please tell me whether this is a Kubernetes Ingress configuration issue or a ClusterRole issue?
Thank you.
This question is a bit old but just in case anyone else finds this issue when looking up the error message, I solved it by adding the networking.k8s.io API group to ingresses/status resources in the nginx role configuration.
Sample:
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
The error actually describes the issue well: the API group "networking.k8s.io" is missing for the resource "ingresses/status". You may encounter it on other resources as well; the fix is similar. As to the cause, I believe this is a Kubernetes upgrade issue.
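To confirm the fix took effect, you can impersonate the controller's ServiceAccount and test the exact permission (a sketch, using the names from the error message; the --subresource flag requires a reasonably recent kubectl):
$ kubectl auth can-i update ingresses.networking.k8s.io --subresource=status \
    --as=system:serviceaccount:ingress:nginx -n test
yes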
How do I determine which apiGroup any given resource belongs in?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: thing
rules:
- apiGroups: ["<wtf goes here>"]
  resources: ["deployments"]
  verbs: ["get", "list"]
  resourceNames: []
To list the API resources supported by your Kubernetes cluster:
kubectl api-resources -o wide
example:
NAME          SHORTNAMES   APIGROUP     NAMESPACED   KIND         VERBS
deployments   deploy       apps         true         Deployment   [create delete deletecollection get list patch update watch]
deployments   deploy       extensions   true         Deployment   [create delete deletecollection get list patch update watch]
To list the API versions supported by your Kubernetes cluster:
kubectl api-versions
You can inspect a resource, for example a Deployment:
kubectl explain deploy
KIND:     Deployment
VERSION:  extensions/v1beta1

DESCRIPTION:
     DEPRECATED - This group version of Deployment is deprecated by
     apps/v1beta2/Deployment.
Furthermore, you can investigate a particular API version:
kubectl explain deploy --api-version apps/v1
In short, you can specify multiple groups in apiGroups, like:
apiGroups: ["extensions", "apps"]
You can also configure which API groups and versions your cluster serves (for example, to test whether your manifests will work with the upcoming 1.16 release) by passing options to --runtime-config on kube-apiserver.
Additional resources:
API resources
Kubernetes Deprecation Policy
For notable feature updates in a specific release, follow the release notes, e.g.:
Continued deprecation of extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; these versions will be retired in 1.16!
kubectl api-resources -o wide provides the supported API resources on the system.
[suresh.vishnoi@xxx1309 ~]$ kubectl api-resources -o wide
NAME                  SHORTNAMES   APIGROUP   NAMESPACED   KIND                 VERBS
bindings                                      true         Binding              [create]
componentstatuses     cs                      false        ComponentStatus      [get list]
configmaps            cm                      true         ConfigMap            [create delete deletecollection get list patch update watch]
endpoints             ep                      true         Endpoints            [create delete deletecollection get list patch update watch]
events                ev                      true         Event                [create delete deletecollection get list patch update watch]
controllerrevisions                apps       true         ControllerRevision   [create delete deletecollection get list patch update watch]
daemonsets            ds           apps       true         DaemonSet            [create delete deletecollection get list patch update watch]
deployments           deploy       apps       true         Deployment           [create delete deletecollection get list patch update watch]
replicasets           rs           apps       true         ReplicaSet           [create delete deletecollection get list patch update watch]
kubectl api-resources -o wide | grep -i deployment will provide the relevant information:
apps is the apiGroup for the Deployment resource.
DaemonSet, Deployment, StatefulSet, and ReplicaSet will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16.
Migrate to the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API. (See the "API deprecations in 1.16" announcement.)
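As an illustration of that migration, usually only the apiVersion line needs to change, but note that apps/v1 also makes spec.selector required (a minimal sketch; my-app is a hypothetical name):
apiVersion: apps/v1        # was: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:                # required in apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx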
This is a little tricky, because both the apps and extensions groups are in use for Deployments in recent Kubernetes versions, for example:
kubectl get deployments # It is still requested via extensions api group by default.
kubectl get deployments.apps # request via apps group
So until Deployments are removed from the extensions apiGroup, you have to use both apiGroups in your role:
apiGroups: ["apps","extensions"]
https://github.com/kubernetes/kubernetes/issues/67439
In later Kubernetes versions the APIGROUP column is deprecated, and kubectl api-resources -o wide shows an APIVERSION column instead, which is a combination of apigroup/version.
It is included in the online API documentation.
In your example, if you click through and find the documentation for Role, it lists the group and version in both the sidebar ("Role v1 rbac.authorization.k8s.io") and as the first line in the actual API documentation. Similarly, Deployment is in group "apps" with version "v1".
In the Role specification you only put the group, and it applies to all versions. So to control access to Deployments, you'd specify apiGroups: [apps], resources: [deployments]. (This is actually one of the examples in the RBAC documentation.)
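Written out as a rule, that example looks like:
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]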
You can run the command below to get the apiVersion and other details:
kubectl explain <Resource Name>
kubectl explain deployment
I have two Kubernetes clusters that were set up by kops. They are both running v1.10.8. I have done my best to mirror the configuration between the two. They both have RBAC enabled. I have kubernetes-dashboard running on both. They both have a /srv/kubernetes/known_tokens.csv with an admin and a kube user:
$ sudo cat /srv/kubernetes/known_tokens.csv
ABCD,admin,admin,system:masters
DEFG,kube,kube
(... other users ...)
My question is how do these users get authorized with respect to RBAC? When authenticating to kubernetes-dashboard using tokens, the admin user's token works on both clusters and has full access. But the kube user's token only has access on one of the clusters. On the other cluster, I get the following errors in the dashboard:
configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
daemonsets.apps is forbidden: User "kube" cannot list daemonsets.apps in the namespace "default"
pods is forbidden: User "kube" cannot list pods in the namespace "default"
events is forbidden: User "kube" cannot list events in the namespace "default"
deployments.apps is forbidden: User "kube" cannot list deployments.apps in the namespace "default"
replicasets.apps is forbidden: User "kube" cannot list replicasets.apps in the namespace "default"
jobs.batch is forbidden: User "kube" cannot list jobs.batch in the namespace "default"
cronjobs.batch is forbidden: User "kube" cannot list cronjobs.batch in the namespace "default"
replicationcontrollers is forbidden: User "kube" cannot list replicationcontrollers in the namespace "default"
statefulsets.apps is forbidden: User "kube" cannot list statefulsets.apps in the namespace "default"
As per the official docs, "Kubernetes does not have objects which represent normal user accounts".
I can't find anywhere on the working cluster that would give authorization to kube. Likewise, I can't find anything that would restrict kube on the other cluster. I've checked all ClusterRoleBinding resources in the default and kube-system namespace. None of these reference the kube user. So why the discrepancy in access to the dashboard and how can I adjust it?
Some other questions:
How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which serviceAccount a particular request or token is mapped to?
What are groups in k8s? The k8s docs mention groups a lot. Even static token users can be assigned a group such as system:masters, which looks like a role/clusterrole, but there is no system:masters role in my cluster. What exactly are groups? As per Create user group using RBAC API?, it appears groups are simply arbitrary labels that can be defined per user. What's the point of them? Can I map a group to an RBAC serviceAccount?
Update
I restarted the working cluster and it no longer works; I get the same authorization errors as on the other cluster. Looks like it was some sort of cached access. Sorry for the bogus question. I'm still curious about my follow-up questions, but they can be made into separate questions.
Hard to tell without access to the cluster, but my guess is that you have a Role and a RoleBinding somewhere for the kube user on the cluster that works. Not a ClusterRole with ClusterRoleBinding.
Something like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: my-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-role-binding
  namespace: default
subjects:
- kind: User
  name: "kube"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
How do I debug authorization issues such as this? The dashboard logs
just say this user doesn't have access. Is there somewhere I can see
which serviceAccount a particular request or token is mapped to?
You can look at the kube-apiserver logs under /var/log/kube-apiserver.log on your leader master, or, if it's running in a container, docker logs <container-id-of-kube-apiserver>.
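You can also test a user's permissions directly with impersonation, which exercises the same RBAC path the dashboard request takes (a sketch; the --list flag requires a reasonably recent kubectl):
$ kubectl auth can-i list configmaps -n default --as kube
no
$ kubectl auth can-i --list -n default --as kube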
Kubernetes seems to have a lot of objects. I can't seem to find the full list of objects anywhere. After briefly searching on Google, I can only find results which mention a subset of Kubernetes objects. Is the full list of objects documented somewhere, perhaps in source code? Thank you.
The following command displays all Kubernetes objects:
kubectl api-resources
Example
[root@hsk-controller ~]# kubectl api-resources
NAME                              SHORTNAMES   KIND
bindings                                       Binding
componentstatuses                 cs           ComponentStatus
configmaps                        cm           ConfigMap
endpoints                         ep           Endpoints
events                            ev           Event
limitranges                       limits       LimitRange
namespaces                        ns           Namespace
nodes                             no           Node
persistentvolumeclaims            pvc          PersistentVolumeClaim
persistentvolumes                 pv           PersistentVolume
pods                              po           Pod
podtemplates                                   PodTemplate
replicationcontrollers            rc           ReplicationController
resourcequotas                    quota        ResourceQuota
secrets                                        Secret
serviceaccounts                   sa           ServiceAccount
services                          svc          Service
initializerconfigurations                      InitializerConfiguration
mutatingwebhookconfigurations                  MutatingWebhookConfiguration
validatingwebhookconfigurations                ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     CustomResourceDefinition
apiservices                                    APIService
controllerrevisions                            ControllerRevision
daemonsets                        ds           DaemonSet
deployments                       deploy       Deployment
replicasets                       rs           ReplicaSet
statefulsets                      sts          StatefulSet
tokenreviews                                   TokenReview
localsubjectaccessreviews                      LocalSubjectAccessReview
selfsubjectaccessreviews                       SelfSubjectAccessReview
selfsubjectrulesreviews                        SelfSubjectRulesReview
subjectaccessreviews                           SubjectAccessReview
horizontalpodautoscalers          hpa          HorizontalPodAutoscaler
cronjobs                          cj           CronJob
jobs                                           Job
brpolices                         br,bp        BrPolicy
clusters                          rcc          Cluster
filesystems                       rcfs         Filesystem
objectstores                      rco          ObjectStore
pools                             rcp          Pool
certificatesigningrequests        csr          CertificateSigningRequest
leases                                         Lease
events                            ev           Event
daemonsets                        ds           DaemonSet
deployments                       deploy       Deployment
ingresses                         ing          Ingress
networkpolicies                   netpol       NetworkPolicy
podsecuritypolicies               psp          PodSecurityPolicy
replicasets                       rs           ReplicaSet
nodes                                          NodeMetrics
pods                                           PodMetrics
networkpolicies                   netpol       NetworkPolicy
poddisruptionbudgets              pdb          PodDisruptionBudget
podsecuritypolicies               psp          PodSecurityPolicy
clusterrolebindings                            ClusterRoleBinding
clusterroles                                   ClusterRole
rolebindings                                   RoleBinding
roles                                          Role
volumes                           rv           Volume
priorityclasses                   pc           PriorityClass
storageclasses                    sc           StorageClass
volumeattachments                              VolumeAttachment
Note: the Kubernetes version is v1.12; you can check yours with:
kubectl version
The following command lists all supported API versions:
$ kubectl api-versions
You can get more detailed information from the kube-apiserver REST API:
Open a connection to kube-apiserver:
$ kubectl proxy &
Now you can discover API resources:
This request gives you all existing paths on the apiserver (in JSON format):
$ curl http://localhost:8001/
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
...
"/version"
]
}
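As an alternative to running a proxy, the same discovery data is available through kubectl get --raw (a sketch; piping to jq is optional, for pretty-printing):
$ kubectl get --raw / | jq .
$ kubectl get --raw /api/v1 | jq .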
You can request details about a particular path:
curl http://localhost:8001/api/v1
...
{
"name": "configmaps",
"singularName": "",
"namespaced": true,
"kind": "ConfigMap",
"verbs": [
"create",
"delete",
"deletecollection",
"get",
"list",
"patch",
"update",
"watch"
],
"shortNames": [
"cm"
]
},
...
This information helps you to write kubectl commands, e.g.:
$ kubectl get configmaps
$ kubectl get cm
But you may find it more convenient to use the built-in documentation provided by kubectl explain.
For example, this command shows you a list of Kubernetes objects:
$ kubectl explain
You can have detailed information about any of listed resources:
$ kubectl explain rc
$ kubectl explain rc.spec
$ kubectl explain rc.spec.selector
Or you can print the full-blown YAML template (or a part of it) of the object by adding the --recursive flag:
$ kubectl explain rc --recursive
$ kubectl explain rc.metadata --recursive
Links in the description point to the documentation about a particular object, e.g.:
DESCRIPTION:
     If the Labels of a ReplicationController are empty, they are defaulted to
     be the same as the Pod(s) that the replication controller manages. Standard
     object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

     ObjectMeta is metadata that all persisted resources must have, which
     includes all objects users must create.
If you need a complete description with examples, you can always find it in the official API Reference (or the older version), mentioned by Matthew L Daniel.
You might also find the kubectl Reference or the kubectl Cheat Sheet helpful.
Update: Using the following one-liner, you can list all objects grouped by API version (including CRDs). It may be useful to check whether an object is present in more than one API group, in which case more than one apiVersion is applicable in its manifest. (The object configuration may differ slightly between apiVersions.)
a=$(kubectl api-versions) ; for n in $a ; do echo ; echo "apiVersion: $n" ; kubectl api-resources --api-group="${n%/*}" ; done
Partial example output:
...
apiVersion: autoscaling/v1
NAME                       SHORTNAMES   APIGROUP      NAMESPACED   KIND
horizontalpodautoscalers   hpa          autoscaling   true         HorizontalPodAutoscaler

apiVersion: autoscaling/v2beta1
NAME                       SHORTNAMES   APIGROUP      NAMESPACED   KIND
horizontalpodautoscalers   hpa          autoscaling   true         HorizontalPodAutoscaler

apiVersion: autoscaling/v2beta2
NAME                       SHORTNAMES   APIGROUP      NAMESPACED   KIND
horizontalpodautoscalers   hpa          autoscaling   true         HorizontalPodAutoscaler

apiVersion: batch/v1
NAME       SHORTNAMES   APIGROUP   NAMESPACED   KIND
cronjobs   cj           batch      true         CronJob
jobs                    batch      true         Job

apiVersion: batch/v1beta1
NAME       SHORTNAMES   APIGROUP   NAMESPACED   KIND
cronjobs   cj           batch      true         CronJob
jobs                    batch      true         Job
...
Web:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/
Resources list:
$ kubectl api-resources -o wide
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND                             VERBS
bindings                                                                      true         Binding                          [create]
componentstatuses                 cs                                          false        ComponentStatus                  [get list]
configmaps                        cm                                          true         ConfigMap                        [create delete deletecollection get list patch update watch]
endpoints                         ep                                          true         Endpoints                        [create delete deletecollection get list patch update watch]
events                            ev                                          true         Event                            [create delete deletecollection get list patch update watch]
limitranges                       limits                                      true         LimitRange                       [create delete deletecollection get list patch update watch]
namespaces                        ns                                          false        Namespace                        [create delete get list patch update watch]
nodes                             no                                          false        Node                             [create delete deletecollection get list patch update watch]
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim            [create delete deletecollection get list patch update watch]
persistentvolumes                 pv                                          false        PersistentVolume                 [create delete deletecollection get list patch update watch]
pods                              po                                          true         Pod                              [create delete deletecollection get list patch update watch]
podtemplates                                                                  true         PodTemplate                      [create delete deletecollection get list patch update watch]
replicationcontrollers            rc                                          true         ReplicationController            [create delete deletecollection get list patch update watch]
resourcequotas                    quota                                       true         ResourceQuota                    [create delete deletecollection get list patch update watch]
secrets                                                                       true         Secret                           [create delete deletecollection get list patch update watch]
serviceaccounts                   sa                                          true         ServiceAccount                   [create delete deletecollection get list patch update watch]
services                          svc                                         true         Service                          [create delete get list patch update watch]
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration     [create delete deletecollection get list patch update watch]
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration   [create delete deletecollection get list patch update watch]
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition         [create delete deletecollection get list patch update watch]
apiservices                                    apiregistration.k8s.io         false        APIService                       [create delete deletecollection get list patch update watch]
controllerrevisions                            apps                           true         ControllerRevision               [create delete deletecollection get list patch update watch]
daemonsets                        ds           apps                           true         DaemonSet                        [create delete deletecollection get list patch update watch]
deployments                       deploy       apps                           true         Deployment                       [create delete deletecollection get list patch update watch]
replicasets                       rs           apps                           true         ReplicaSet                       [create delete deletecollection get list patch update watch]
statefulsets                      sts          apps                           true         StatefulSet                      [create delete deletecollection get list patch update watch]
tokenreviews                                   authentication.k8s.io          false        TokenReview                      [create]
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview         [create]
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview          [create]
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview           [create]
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview              [create]
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler          [create delete deletecollection get list patch update watch]
cronjobs                          cj           batch                          true         CronJob                          [create delete deletecollection get list patch update watch]
jobs                                           batch                          true         Job                              [create delete deletecollection get list patch update watch]
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest        [create delete deletecollection get list patch update watch]
leases                                         coordination.k8s.io            true         Lease                            [create delete deletecollection get list patch update watch]
events                            ev           events.k8s.io                  true         Event                            [create delete deletecollection get list patch update watch]
ingresses                         ing          extensions                     true         Ingress                          [create delete deletecollection get list patch update watch]
ingresses                         ing          networking.k8s.io              true         Ingress                          [create delete deletecollection get list patch update watch]
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy                    [create delete deletecollection get list patch update watch]
runtimeclasses                                 node.k8s.io                    false        RuntimeClass                     [create delete deletecollection get list patch update watch]
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget              [create delete deletecollection get list patch update watch]
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy                [create delete deletecollection get list patch update watch]
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding               [create delete deletecollection get list patch update watch]
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole                      [create delete deletecollection get list patch update watch]
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding                      [create delete deletecollection get list patch update watch]
roles                                          rbac.authorization.k8s.io      true         Role                             [create delete deletecollection get list patch update watch]
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass                    [create delete deletecollection get list patch update watch]
csidrivers                                     storage.k8s.io                 false        CSIDriver                        [create delete deletecollection get list patch update watch]
csinodes                                       storage.k8s.io                 false        CSINode                          [create delete deletecollection get list patch update watch]
storageclasses                    sc           storage.k8s.io                 false        StorageClass                     [create delete deletecollection get list patch update watch]
volumeattachments                              storage.k8s.io                 false        VolumeAttachment                 [create delete deletecollection get list patch update watch]
Details about each object:
$ kubectl explain --help
List the fields for supported resources
This command describes the fields associated with each supported API resource. Fields are identified via a simple JSONPath identifier:
<type>.<fieldName>[.<fieldName>]
Add the --recursive flag to display all of the fields at once without descriptions. Information about each field is retrieved from the server in OpenAPI format.
Use "kubectl api-resources" for a complete list of supported resources.
Examples:
# Get the documentation of the resource and its fields
kubectl explain pods
# Get the documentation of a specific field of a resource
kubectl explain pods.spec.containers
Options:
--api-version='': Get different explanations for particular API version
--recursive=false: Print the fields of fields (Currently only 1 level deep)
Usage:
kubectl explain RESOURCE [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
I've been frustrated by the same issue. While you've got some good answers, I wanted something that (1) was grouped by API version and (2) was just a list of names instead of a book of documentation. I've been sorting out our RBAC, and that's a bit tricky without such a list. I couldn't find one, so here's the one I made (v1.18.0):
v1
bindings
componentstatuses
configmaps
endpoints
events
limitranges
namespaces
namespaces/finalize
namespaces/status
nodes
nodes/proxy
nodes/status
persistentvolumeclaims
persistentvolumeclaims/status
persistentvolumes
persistentvolumes/status
pods
pods/attach
pods/binding
pods/eviction
pods/exec
pods/log
pods/portforward
pods/proxy
pods/status
podtemplates
replicationcontrollers
replicationcontrollers/scale
replicationcontrollers/status
resourcequotas
resourcequotas/status
secrets
serviceaccounts
serviceaccounts/token
services
services/proxy
services/status
admissionregistration.k8s.io/v1
mutatingwebhookconfigurations
validatingwebhookconfigurations
admissionregistration.k8s.io/v1beta1
mutatingwebhookconfigurations
validatingwebhookconfigurations
apiextensions.k8s.io/v1
customresourcedefinitions
customresourcedefinitions/status
apiextensions.k8s.io/v1beta1
customresourcedefinitions
customresourcedefinitions/status
apiregistration.k8s.io/v1
apiservices
apiservices/status
apiregistration.k8s.io/v1beta1
apiservices
apiservices/status
apps/v1
controllerrevisions
daemonsets
daemonsets/status
deployments
deployments/scale
deployments/status
replicasets
replicasets/scale
replicasets/status
statefulsets
statefulsets/scale
statefulsets/status
authentication.k8s.io/v1
tokenreviews
authentication.k8s.io/v1beta1
tokenreviews
authorization.k8s.io/v1
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
authorization.k8s.io/v1beta1
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
autoscaling/v1
horizontalpodautoscalers
horizontalpodautoscalers/status
autoscaling/v2beta1
horizontalpodautoscalers
horizontalpodautoscalers/status
autoscaling/v2beta2
horizontalpodautoscalers
horizontalpodautoscalers/status
batch/v1
jobs
jobs/status
batch/v1beta1
cronjobs
cronjobs/status
certificates.k8s.io/v1beta1
certificatesigningrequests
certificatesigningrequests/approval
certificatesigningrequests/status
coordination.k8s.io/v1
leases
coordination.k8s.io/v1beta1
leases
crd.k8s.amazonaws.com/v1alpha1
eniconfigs
events.k8s.io/v1beta1
events
extensions/v1beta1
ingresses
ingresses/status
metrics.k8s.io/v1beta1
nodes
pods
networking.k8s.io/v1
networkpolicies
networking.k8s.io/v1beta1
ingresses
ingresses/status
node.k8s.io/v1beta1
runtimeclasses
policy/v1beta1
poddisruptionbudgets
poddisruptionbudgets/status
podsecuritypolicies
rbac.authorization.k8s.io/v1
clusterrolebindings
clusterroles
rolebindings
roles
rbac.authorization.k8s.io/v1beta1
clusterrolebindings
clusterroles
rolebindings
roles
scheduling.k8s.io/v1
priorityclasses
scheduling.k8s.io/v1beta1
priorityclasses
storage.k8s.io/v1
storageclasses
volumeattachments
volumeattachments/status
storage.k8s.io/v1beta1
csidrivers
csinodes
storageclasses
volumeattachments
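If you want to regenerate such a list (including subresources, which kubectl api-resources does not show) for your own cluster, you can walk the discovery endpoints. A sketch, assuming jq is installed:
for v in $(kubectl api-versions); do
  echo "$v"
  # the core group lives under /api/v1; everything else under /apis/<group>/<version>
  if [ "$v" = "v1" ]; then path="/api/v1"; else path="/apis/$v"; fi
  kubectl get --raw "$path" | jq -r '.resources[].name' | sed 's/^/    /'
done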
I am using Kubernetes on bare metal (v1.10.2) and the latest Traefik (v1.6.2) as ingress. I am seeing the following issue when I want to enable Traefik to route to an HTTPS service.
Error configuring TLS for ingress default/cheese: secret default/traefik-cert does not exist
The secret exists! Why does it report that it doesn't?
On the basis of a comment: the secret is inaccessible from the Traefik service account. But I don't understand why.
Details as follows:
kubectl get secret dex-tls -oyaml --as gem-lb-traefik
Error from server (Forbidden): secrets "dex-tls" is forbidden: User "gem-lb-traefik" cannot get secrets in the namespace "default"
$ kubectl describe clusterrolebinding gem-lb-traefik
Name:         gem-lb-traefik
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  gem-lb-traefik
Subjects:
  Kind            Name            Namespace
  ----            ----            ---------
  ServiceAccount  gem-lb-traefik  default
$ kubectl describe clusterrole gem-lb-traefik
Name:         gem-lb-traefik
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources             Non-Resource URLs  Resource Names  Verbs
  ---------             -----------------  --------------  -----
  endpoints             []                 []              [get list watch]
  pods                  []                 []              [get list watch]
  secrets               []                 []              [get list watch]
  services              []                 []              [get list watch]
  ingresses.extensions  []                 []              [get list watch]
I still don't understand why I am getting an error about the secret being inaccessible from the service account.
First of all, in this case you cannot check access to the secret using the --as gem-lb-traefik flag, because that tries to run the command as a user named gem-lb-traefik, but you have no such user; you only have a ServiceAccount bound to the ClusterRole gem-lb-traefik. Moreover, using the --as <user> flag with any nonexistent user produces an error similar to yours:
Error from server (Forbidden): secrets "<secretname>" is forbidden: User "<user>" cannot get secrets in the namespace "<namespace>"
So, as @Ignacio Millán mentioned, you need to check your settings for Traefik and fix them according to the official documentation. Possibly, you missed the ServiceAccount in the Traefik DaemonSet description. Also, you need to check whether the Traefik DaemonSet is located in the same namespace as the ServiceAccount for which you use the ClusterRoleBinding.
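To impersonate the ServiceAccount itself rather than a user, the --as value must use the system:serviceaccount prefix (a sketch, using the names from the question; your own user needs impersonate permission for this to work):
$ kubectl get secret dex-tls -o yaml \
    --as system:serviceaccount:default:gem-lb-traefik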