Kubernetes seems to have a lot of objects, and I can't find the full list documented anywhere. A brief Google search turns up results that mention only a subset of Kubernetes objects. Is the full list of objects documented somewhere, perhaps in the source code? Thank you.
The following command displays all Kubernetes objects:
kubectl api-resources
Example
[root@hsk-controller ~]# kubectl api-resources
NAME SHORTNAMES KIND
bindings Binding
componentstatuses cs ComponentStatus
configmaps cm ConfigMap
endpoints ep Endpoints
events ev Event
limitranges limits LimitRange
namespaces ns Namespace
nodes no Node
persistentvolumeclaims pvc PersistentVolumeClaim
persistentvolumes pv PersistentVolume
pods po Pod
podtemplates PodTemplate
replicationcontrollers rc ReplicationController
resourcequotas quota ResourceQuota
secrets Secret
serviceaccounts sa ServiceAccount
services svc Service
initializerconfigurations InitializerConfiguration
mutatingwebhookconfigurations MutatingWebhookConfiguration
validatingwebhookconfigurations ValidatingWebhookConfiguration
customresourcedefinitions crd,crds CustomResourceDefinition
apiservices APIService
controllerrevisions ControllerRevision
daemonsets ds DaemonSet
deployments deploy Deployment
replicasets rs ReplicaSet
statefulsets sts StatefulSet
tokenreviews TokenReview
localsubjectaccessreviews LocalSubjectAccessReview
selfsubjectaccessreviews SelfSubjectAccessReview
selfsubjectrulesreviews SelfSubjectRulesReview
subjectaccessreviews SubjectAccessReview
horizontalpodautoscalers hpa HorizontalPodAutoscaler
cronjobs cj CronJob
jobs Job
brpolices br,bp BrPolicy
clusters rcc Cluster
filesystems rcfs Filesystem
objectstores rco ObjectStore
pools rcp Pool
certificatesigningrequests csr CertificateSigningRequest
leases Lease
events ev Event
daemonsets ds DaemonSet
deployments deploy Deployment
ingresses ing Ingress
networkpolicies netpol NetworkPolicy
podsecuritypolicies psp PodSecurityPolicy
replicasets rs ReplicaSet
nodes NodeMetrics
pods PodMetrics
networkpolicies netpol NetworkPolicy
poddisruptionbudgets pdb PodDisruptionBudget
podsecuritypolicies psp PodSecurityPolicy
clusterrolebindings ClusterRoleBinding
clusterroles ClusterRole
rolebindings RoleBinding
roles Role
volumes rv Volume
priorityclasses pc PriorityClass
storageclasses sc StorageClass
volumeattachments VolumeAttachment
Note: the Kubernetes version is v1.12 (check with kubectl version).
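The same command accepts flags to narrow the output; for example (both flags exist in stock kubectl):
# only namespaced resources, printed as plain names (handy for scripting)
kubectl api-resources --namespaced=true -o name
# only resources that support the list verb
kubectl api-resources --verbs=list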
The following command lists all supported API versions:
$ kubectl api-versions
You can get more detailed information from the kube-apiserver REST API.
Open a connection to the kube-apiserver:
$ kubectl proxy &
Now you can discover API resources.
This request returns all existing paths on the apiserver (in JSON format):
$ curl http://localhost:8001/
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
...
"/version"
]
}
You can request details about a particular path:
$ curl http://localhost:8001/api/v1
...
{
"name": "configmaps",
"singularName": "",
"namespaced": true,
"kind": "ConfigMap",
"verbs": [
"create",
"delete",
"deletecollection",
"get",
"list",
"patch",
"update",
"watch"
],
"shortNames": [
"cm"
]
},
...
This information helps you write kubectl commands, e.g.:
$ kubectl get configmaps
$ kubectl get cm
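Since the discovery output is JSON, you can also script against it. A minimal sketch, assuming jq is installed and the kubectl proxy started above is still listening on port 8001:
# print the name of every resource in the core (v1) group, including subresources
curl -s http://localhost:8001/api/v1 | jq -r '.resources[].name'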
But you may find it more convenient to use the built-in documentation provided by kubectl explain.
For example, this command shows you a list of Kubernetes objects:
$ kubectl explain
You can get detailed information about any of the listed resources:
$ kubectl explain rc
$ kubectl explain rc.spec
$ kubectl explain rc.spec.selector
Or you can print the full-blown YAML template of the object (or part of it) by adding the --recursive flag:
$ kubectl explain rc --recursive
$ kubectl explain rc.metadata --recursive
Links in the description point to the documentation for the particular object. E.g.:
DESCRIPTION:
If the Labels of a ReplicationController are empty, they are defaulted to
be the same as the Pod(s) that the replication controller manages. Standard
object's metadata. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
ObjectMeta is metadata that all persisted resources must have, which
includes all objects users must create.
If you need a complete description with examples, you can always find it in the official API Reference (or the older version), mentioned by Matthew L Daniel.
You might also find the kubectl Reference or the kubectl Cheat Sheet helpful.
Update: Using the following one-liner you can list all objects grouped by API version (including CRDs). It may be useful to check whether an object is present in more than one API group, and therefore whether more than one apiVersion is applicable in its manifest. (The object configuration may differ slightly between apiVersions.) Note the special case for the core group, whose group name is empty:
for n in $(kubectl api-versions) ; do g="${n%/*}" ; [ "$g" = "$n" ] && g="" ; echo ; echo "apiVersion: $n" ; kubectl api-resources --api-group="$g" ; done
Partial example output:
...
apiVersion: autoscaling/v1
NAME SHORTNAMES APIGROUP NAMESPACED KIND
horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
NAME SHORTNAMES APIGROUP NAMESPACED KIND
horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
NAME SHORTNAMES APIGROUP NAMESPACED KIND
horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler
apiVersion: batch/v1
NAME SHORTNAMES APIGROUP NAMESPACED KIND
cronjobs cj batch true CronJob
jobs batch true Job
apiVersion: batch/v1beta1
NAME SHORTNAMES APIGROUP NAMESPACED KIND
cronjobs cj batch true CronJob
jobs batch true Job
...
Web:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/
Resources list:
$ kubectl api-resources -o wide
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
bindings true Binding [create]
componentstatuses cs false ComponentStatus [get list]
configmaps cm true ConfigMap [create delete deletecollection get list patch update watch]
endpoints ep true Endpoints [create delete deletecollection get list patch update watch]
events ev true Event [create delete deletecollection get list patch update watch]
limitranges limits true LimitRange [create delete deletecollection get list patch update watch]
namespaces ns false Namespace [create delete get list patch update watch]
nodes no false Node [create delete deletecollection get list patch update watch]
persistentvolumeclaims pvc true PersistentVolumeClaim [create delete deletecollection get list patch update watch]
persistentvolumes pv false PersistentVolume [create delete deletecollection get list patch update watch]
pods po true Pod [create delete deletecollection get list patch update watch]
podtemplates true PodTemplate [create delete deletecollection get list patch update watch]
replicationcontrollers rc true ReplicationController [create delete deletecollection get list patch update watch]
resourcequotas quota true ResourceQuota [create delete deletecollection get list patch update watch]
secrets true Secret [create delete deletecollection get list patch update watch]
serviceaccounts sa true ServiceAccount [create delete deletecollection get list patch update watch]
services svc true Service [create delete get list patch update watch]
mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration [create delete deletecollection get list patch update watch]
validatingwebhookconfigurations admissionregistration.k8s.io false ValidatingWebhookConfiguration [create delete deletecollection get list patch update watch]
customresourcedefinitions crd,crds apiextensions.k8s.io false CustomResourceDefinition [create delete deletecollection get list patch update watch]
apiservices apiregistration.k8s.io false APIService [create delete deletecollection get list patch update watch]
controllerrevisions apps true ControllerRevision [create delete deletecollection get list patch update watch]
daemonsets ds apps true DaemonSet [create delete deletecollection get list patch update watch]
deployments deploy apps true Deployment [create delete deletecollection get list patch update watch]
replicasets rs apps true ReplicaSet [create delete deletecollection get list patch update watch]
statefulsets sts apps true StatefulSet [create delete deletecollection get list patch update watch]
tokenreviews authentication.k8s.io false TokenReview [create]
localsubjectaccessreviews authorization.k8s.io true LocalSubjectAccessReview [create]
selfsubjectaccessreviews authorization.k8s.io false SelfSubjectAccessReview [create]
selfsubjectrulesreviews authorization.k8s.io false SelfSubjectRulesReview [create]
subjectaccessreviews authorization.k8s.io false SubjectAccessReview [create]
horizontalpodautoscalers hpa autoscaling true HorizontalPodAutoscaler [create delete deletecollection get list patch update watch]
cronjobs cj batch true CronJob [create delete deletecollection get list patch update watch]
jobs batch true Job [create delete deletecollection get list patch update watch]
certificatesigningrequests csr certificates.k8s.io false CertificateSigningRequest [create delete deletecollection get list patch update watch]
leases coordination.k8s.io true Lease [create delete deletecollection get list patch update watch]
events ev events.k8s.io true Event [create delete deletecollection get list patch update watch]
ingresses ing extensions true Ingress [create delete deletecollection get list patch update watch]
ingresses ing networking.k8s.io true Ingress [create delete deletecollection get list patch update watch]
networkpolicies netpol networking.k8s.io true NetworkPolicy [create delete deletecollection get list patch update watch]
runtimeclasses node.k8s.io false RuntimeClass [create delete deletecollection get list patch update watch]
poddisruptionbudgets pdb policy true PodDisruptionBudget [create delete deletecollection get list patch update watch]
podsecuritypolicies psp policy false PodSecurityPolicy [create delete deletecollection get list patch update watch]
clusterrolebindings rbac.authorization.k8s.io false ClusterRoleBinding [create delete deletecollection get list patch update watch]
clusterroles rbac.authorization.k8s.io false ClusterRole [create delete deletecollection get list patch update watch]
rolebindings rbac.authorization.k8s.io true RoleBinding [create delete deletecollection get list patch update watch]
roles rbac.authorization.k8s.io true Role [create delete deletecollection get list patch update watch]
priorityclasses pc scheduling.k8s.io false PriorityClass [create delete deletecollection get list patch update watch]
csidrivers storage.k8s.io false CSIDriver [create delete deletecollection get list patch update watch]
csinodes storage.k8s.io false CSINode [create delete deletecollection get list patch update watch]
storageclasses sc storage.k8s.io false StorageClass [create delete deletecollection get list patch update watch]
volumeattachments storage.k8s.io false VolumeAttachment [create delete deletecollection get list patch update watch]
Details about each object:
$ kubectl explain --help
List the fields for supported resources
This command describes the fields associated with each supported API resource. Fields are identified via a simple JSONPath identifier:
<type>.<fieldName>[.<fieldName>]
Add the --recursive flag to display all of the fields at once without descriptions. Information about each field is retrieved from the server in OpenAPI format.
Use "kubectl api-resources" for a complete list of supported resources.
Examples:
# Get the documentation of the resource and its fields
kubectl explain pods
# Get the documentation of a specific field of a resource
kubectl explain pods.spec.containers
Options:
--api-version='': Get different explanations for particular API version
--recursive=false: Print the fields of fields (Currently only 1 level deep)
Usage:
kubectl explain RESOURCE [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
I've been frustrated by the same issue. While you've got some good answers, I wanted something that 1) was grouped by API version and 2) was just a list of names instead of a book of documentation. I've been sorting out our RBAC, and it's a bit tricky without such a list. I couldn't find one, so here's the one I made (v1.18.0); a sketch for regenerating it follows the list:
v1
bindings
componentstatuses
configmaps
endpoints
events
limitranges
namespaces
namespaces/finalize
namespaces/status
nodes
nodes/proxy
nodes/status
persistentvolumeclaims
persistentvolumeclaims/status
persistentvolumes
persistentvolumes/status
pods
pods/attach
pods/binding
pods/eviction
pods/exec
pods/log
pods/portforward
pods/proxy
pods/status
podtemplates
replicationcontrollers
replicationcontrollers/scale
replicationcontrollers/status
resourcequotas
resourcequotas/status
secrets
serviceaccounts
serviceaccounts/token
services
services/proxy
services/status
admissionregistration.k8s.io/v1
mutatingwebhookconfigurations
validatingwebhookconfigurations
admissionregistration.k8s.io/v1beta1
mutatingwebhookconfigurations
validatingwebhookconfigurations
apiextensions.k8s.io/v1
customresourcedefinitions
customresourcedefinitions/status
apiextensions.k8s.io/v1beta1
customresourcedefinitions
customresourcedefinitions/status
apiregistration.k8s.io/v1
apiservices
apiservices/status
apiregistration.k8s.io/v1beta1
apiservices
apiservices/status
apps/v1
controllerrevisions
daemonsets
daemonsets/status
deployments
deployments/scale
deployments/status
replicasets
replicasets/scale
replicasets/status
statefulsets
statefulsets/scale
statefulsets/status
authentication.k8s.io/v1
tokenreviews
authentication.k8s.io/v1beta1
tokenreviews
authorization.k8s.io/v1
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
authorization.k8s.io/v1beta1
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
autoscaling/v1
horizontalpodautoscalers
horizontalpodautoscalers/status
autoscaling/v2beta1
horizontalpodautoscalers
horizontalpodautoscalers/status
autoscaling/v2beta2
horizontalpodautoscalers
horizontalpodautoscalers/status
batch/v1
jobs
jobs/status
batch/v1beta1
cronjobs
cronjobs/status
certificates.k8s.io/v1beta1
certificatesigningrequests
certificatesigningrequests/approval
certificatesigningrequests/status
coordination.k8s.io/v1
leases
coordination.k8s.io/v1beta1
leases
crd.k8s.amazonaws.com/v1alpha1
eniconfigs
events.k8s.io/v1beta1
events
extensions/v1beta1
ingresses
ingresses/status
metrics.k8s.io/v1beta1
nodes
pods
networking.k8s.io/v1
networkpolicies
networking.k8s.io/v1beta1
ingresses
ingresses/status
node.k8s.io/v1beta1
runtimeclasses
policy/v1beta1
poddisruptionbudgets
poddisruptionbudgets/status
podsecuritypolicies
rbac.authorization.k8s.io/v1
clusterrolebindings
clusterroles
rolebindings
roles
rbac.authorization.k8s.io/v1beta1
clusterrolebindings
clusterroles
rolebindings
roles
scheduling.k8s.io/v1
priorityclasses
scheduling.k8s.io/v1beta1
priorityclasses
storage.k8s.io/v1
storageclasses
volumeattachments
volumeattachments/status
storage.k8s.io/v1beta1
csidrivers
csinodes
storageclasses
volumeattachments
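A sketch of how such a list (resources plus subresources, grouped by API version) can be regenerated on any cluster, assuming kubectl access and jq installed:
# core group first, then every group/version reported by discovery
echo v1
kubectl get --raw /api/v1 | jq -r '.resources[].name'
for gv in $(kubectl api-versions | grep -v '^v1$'); do
  echo "$gv"
  kubectl get --raw "/apis/$gv" | jq -r '.resources[].name'
done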
Related
I'm using Docker Desktop on a MacBook Air M1, and I just installed the MongoDBCommunity operator following the guide here https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/install-upgrade.md using the kubectl method.
Then I went to my VS Code project and pasted a replica set manifest into a .yaml file, following the guide here https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md, but I get an error:
This apiVersion and/or kind does not reference a schema known by Cloud Code. Please ensure you are using a valid apiVersion and kind.
If I run the command kubectl api-resources, I see it is installed:
mongodbcommunity mdbc mongodbcommunity.mongodb.com/v1 true MongoDBCommunity
As a test I commented out everything and re-added one line at a time. apiVersion: mongodbcommunity.mongodb.com/v1 is accepted, but as soon as kind: MongoDBCommunity is added, they both get underlined; the flagged header looks like the sketch below.
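For reference, this is the shape of the flagged header (a minimal sketch; the full sample in the linked guide has more required spec fields, and the name here is illustrative):
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb # illustrative name
spec:
  members: 3
  type: ReplicaSet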
Install procedure used:
Cloned the repo, and cd'ed into it.
(Configure the Operator to watch other namespaces.)
Edited the manager file /config/manager/manager.yaml:
env:
- name: WATCH_NAMESPACE
value: "*"
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
Edited the file /deploy/clusterwide/cluster_role_binding.yaml:
- kind: ServiceAccount
namespace: default
name: mongodb-kubernetes-operator
Applied it:
kubectl apply -f deploy/clusterwide
Deployed a Role, RoleBinding, and ServiceAccount in every namespace:
kubectl apply -k config/rbac --namespace default
Left the image and URL for Mongo untouched in /config/manager/manager.yaml:
- name: MONGODB_IMAGE
value: mongo
- name: MONGODB_REPO_URL
value: docker.io
(Install)
Edited config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml:
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.4 # this was a different version
creationTimestamp: null
name: mongodbcommunity.mongodbcommunity.mongodb.com
Applied it:
kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
Installed the necessary roles and role bindings:
kubectl apply -k config/rbac/ --namespace default
Installed the Operator:
kubectl create -f config/manager/manager.yaml --namespace default
I went through the install process again and at each step verified that the step was applied correctly:
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get crd/mongodbcommunity.mongodbcommunity.mongodb.com
NAME CREATED AT
mongodbcommunity.mongodbcommunity.mongodb.com 2022-07-28T13:09:48Z
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get role mongodb-kubernetes-operator
NAME CREATED AT
mongodb-kubernetes-operator 2022-07-28T13:10:28Z
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get role mongodb-kubernetes-operator --namespace default
NAME CREATED AT
mongodb-kubernetes-operator 2022-07-28T13:10:28Z
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get rolebinding mongodb-kubernetes-operator
NAME ROLE AGE
mongodb-kubernetes-operator Role/mongodb-kubernetes-operator 4h34m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get rolebinding mongodb-kubernetes-operator --namespace default
NAME ROLE AGE
mongodb-kubernetes-operator Role/mongodb-kubernetes-operator 4h34m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get serviceaccount mongodb-kubernetes-operator
NAME SECRETS AGE
mongodb-kubernetes-operator 0 4h34m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get serviceaccount mongodb-kubernetes-operator --namespace default
NAME SECRETS AGE
mongodb-kubernetes-operator 0 4h34m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-7646658db4-8cnvn 1/1 Running 0 63m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods --namespace default
NAME READY STATUS RESTARTS AGE
mongodb-kubernetes-operator-7646658db4-8cnvn 1/1 Running 0 63m
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get sts
No resources found in default namespace.
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1beta2 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta2 false PriorityLevelConfiguration
mongodbcommunity mdbc mongodbcommunity.mongodb.com/v1 true MongoDBCommunity
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
It all looks good, but I still get the error.
Now there is one step:
For each namespace that you want the Operator to watch, run the following commands to deploy a Role, RoleBinding and ServiceAccount in that namespace:
kubectl apply -k config/rbac --namespace <my-namespace>
I applied it only to the default namespace, but these are my namespaces:
default Active 12d
ingress-nginx Active 10d
kube-node-lease Active 12d
kube-public Active 12d
kube-system Active 12d
Should I apply it to all of them?
What can I check to see whether I installed it properly or not?
Thank you very much.
Cheers
This is a partial answer, as the issue is not 100% solved.
So that I could easily test settings and remove applied files, I added all the YAML files to the skaffold deploy: kubectl: manifests: section:
deploy:
kubectl:
manifests:
# # - ./infra/k8s/*
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/role_binding_database.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/role_binding.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/role_database.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/role.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/service_account_database.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/rbac/service_account.yaml
- /Users/vincenzocalia/mongodb-kubernetes-operator/config/manager/manager.yaml
As a first test I just added a namespace: default in the /config/rbac/role_binding_database.yaml, /config/rbac/role_binding.yaml,
/config/rbac/role_database.yaml,
/config/rbac/role.yaml,
/config/rbac/service_account_database.yaml,
/config/rbac/service_account.yaml files.
No difference. Pods are still not being scheduled, even if I add a namespace: default to the MongoDBCommunity resource.
I found that I had wrongly edited the config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml file, so I set controller-gen.kubebuilder.io/version: back to the correct v0.4.1 value, though it didn't solve my issue. The desired Pods are still not being scheduled.
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.4.1 #v0.7.4
creationTimestamp: null
name: mongodbcommunity.mongodbcommunity.mongodb.com
I decided to also edit the manager file /config/manager/manager.yaml back, so that the Operator watches the specified namespace instead of any namespace, as I had initially set it:
env:
- name: WATCH_NAMESPACE
# value: "*"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
And BINGO! Now Pods get scheduled even without the namespace: default parameter in the MongoDBCommunity resource. I'm not sure why they didn't with value: "*", but they do start now.
I still get the This apiVersion and/or kind does not reference a schema known by Cloud Code. Please ensure you are using a valid apiVersion and kind. warning, though.
I'll update my answer as soon as I solve the apiVersion/kind issue.
I have a Kubernetes cluster (v1.14.10) that runs the Kubernetes NGINX ingress controller (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0). When trying to update the ingress controller to 0.30.0, I see the following log message in the nginx pods:
error updating ingress rule: ingresses.networking.k8s.io "test" is forbidden: User "system:serviceaccount:ingress:nginx" cannot update resource "ingresses/status" in API group "networking.k8s.io" in the namespace "test"
The ClusterRoleBinding and ClusterRole of nginx contain the following permissions:
# kubectl describe clusterrolebinding nginx-role
Name: nginx-role
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: nginx-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount nginx ingress
# kubectl describe clusterrole nginx-role
Name: nginx-role
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
events [] [] [create patch]
services [] [] [get list update watch]
ingresses.extensions [] [] [get list watch update]
ingresses.networking.k8s.io [] [] [get list watch]
namespaces [] [] [get update]
configmaps [] [] [list watch get create update]
nodes [] [] [list watch get]
endpoints [] [] [list watch]
pods [] [] [list watch]
secrets [] [] [list watch]
ingresses.extensions/status [] [] [update]
ingresses.networking.k8s.io/status [] [] [update]
The ingress configuration contains the following apiVersion; I don't know whether this is the issue, due to the new networking.k8s.io/v1beta1 package (https://github.com/kubernetes/ingress-nginx/pull/4127):
apiVersion: extensions/v1beta1
kind: Ingress
Can you please tell me whether this is a Kubernetes ingress configuration issue or a ClusterRole issue?
Thank you.
This question is a bit old, but in case anyone else finds this issue when looking up the error message: I solved it by adding the networking.k8s.io API group to the ingresses/status resource in the nginx role configuration.
Sample:
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
The error actually describes the issue well: the API group "networking.k8s.io" is missing for the resource "ingresses/status". You may encounter it on other resources as well; the fix is similar. As to the cause, I believe this is a Kubernetes update issue.
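To verify the fix took effect, an impersonated access check along these lines should now report yes (a sketch; substitute your own namespace and service account):
kubectl auth can-i update ingresses.networking.k8s.io/status \
  --as=system:serviceaccount:ingress:nginx -n test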
How do I determine which apiGroup any given resource belongs in?
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: thing
rules:
- apiGroups: ["<wtf goes here>"]
resources: ["deployments"]
verbs: ["get", "list"]
resourceNames: []
To get the API resources supported by your Kubernetes cluster:
kubectl api-resources -o wide
example:
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
deployments deploy apps true Deployment [create delete deletecollection get list patch update watch]
deployments deploy extensions true Deployment [create delete deletecollection get list patch update watch]
To get the API versions supported by your Kubernetes cluster:
kubectl api-versions
You can verify, for example, a Deployment:
kubectl explain deploy
KIND: Deployment
VERSION: extensions/v1beta1
DESCRIPTION:
DEPRECATED - This group version of Deployment is deprecated by
apps/v1beta2/Deployment.
Furthermore, you can investigate a specific api-version:
kubectl explain deploy --api-version apps/v1
In short, you can specify both in your apiGroups, like:
apiGroups: ["extensions", "apps"]
You can also configure which API groups and versions your cluster serves (for example, to test that things will work with the upcoming 1.16 release) by passing options to --runtime-config on kube-apiserver.
Additional resources:
API Resources
Kubernetes Deprecation Policy
Notable feature updates for specific releases, e.g.:
Continued deprecation of extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; these will be retired in 1.16!
kubectl api-resources -o wide provides the supported API resources on the system.
[suresh.vishnoi@xxx1309 ~]$ kubectl api-resources -o wide
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
bindings true Binding [create]
componentstatuses cs false ComponentStatus [get list]
configmaps cm true ConfigMap [create delete deletecollection get list patch update watch]
endpoints ep true Endpoints [create delete deletecollection get list patch update watch]
events ev true Event [create delete deletecollection get list patch update watch]
controllerrevisions apps true ControllerRevision [create delete deletecollection get list patch update watch]
daemonsets ds apps true DaemonSet [create delete deletecollection get list patch update watch]
deployments deploy apps true Deployment [create delete deletecollection get list patch update watch]
replicasets rs apps true ReplicaSet [create delete deletecollection get list patch update watch]
kubectl api-resources -o wide | grep -i deployment will provide the relevant information.
apps is the apiGroup for the Deployment resource.
DaemonSet, Deployment, StatefulSet, and ReplicaSet will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 in v1.16.
Migrate to the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API. (See the Kubernetes blog post on API deprecations in 1.16.)
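For example, moving a Deployment manifest off the deprecated group is mostly a one-line change, though spec.selector becomes required under apps/v1 (names below are illustrative):
apiVersion: apps/v1 # was: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  selector: # required in apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx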
This is a little tricky, because both the apps and extensions groups are in use in recent Kubernetes versions. For example:
kubectl get deployments # It is still requested via extensions api group by default.
kubectl get deployments.apps # request via apps group
So until Deployments are removed from the extensions API group, you have to use both API groups in your role:
apiGroups: ["apps", "extensions"]
https://github.com/kubernetes/kubernetes/issues/67439
In later Kubernetes versions, the APIGROUP column is gone, and kubectl api-resources -o wide shows APIVERSION instead, which is the combination apigroup/version.
It is included in the online API documentation.
In your example, if you click through and find the documentation for Role, it lists the group and version in both the sidebar ("Role v1 rbac.authorization.k8s.io") and as the first line in the actual API documentation. Similarly, Deployment is in group "apps" with version "v1".
In the Role specification you only put the group, and it applies to all versions. So to control access to Deployments, you'd specify apiGroups: [apps], resources: [deployments]. (This is actually one of the examples in the RBAC documentation.)
You can run the command below to get the apiVersion and other details:
kubectl explain <Resource Name>
kubectl explain deployment
When listing all the API resources in K8s you get:
$ kubectl api-resources -owide
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
bindings true Binding [create]
componentstatuses cs false ComponentStatus [get list]
configmaps cm true ConfigMap [create delete deletecollection get list patch update watch]
endpoints ep true Endpoints [create delete deletecollection get list patch update watch]
events ev true Event [create delete deletecollection get list patch update watch]
limitranges limits true LimitRange [create delete deletecollection get list patch update watch]
namespaces ns false Namespace [create delete get list patch update watch]
nodes no false Node [create delete deletecollection get list patch update watch]
persistentvolumeclaims pvc true PersistentVolumeClaim [create delete deletecollection get list patch update watch]
persistentvolumes pv false PersistentVolume [create delete deletecollection get list patch update watch]
pods po true Pod [create delete deletecollection get list patch update watch]
podtemplates true PodTemplate [create delete deletecollection get list patch update watch]
replicationcontrollers rc true ReplicationController [create delete deletecollection get list patch update watch]
resourcequotas quota true ResourceQuota [create delete deletecollection get list patch update watch]
secrets true Secret [create delete deletecollection get list patch update watch]
serviceaccounts sa true ServiceAccount [create delete deletecollection get list patch update watch]
services svc true Service [create delete get list patch update watch]
mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration [create delete deletecollection get list patch update watch]
... etc ...
Many list the verb deletecollection, which sounds useful, but I can't run it, e.g.:
$ kubectl deletecollection
Error: unknown command "deletecollection" for "kubectl"
Run 'kubectl --help' for usage.
unknown command "deletecollection" for "kubectl"
Nor can I find it in the docs, except where it appears in the api-resources output above or is mentioned as a verb.
Is there a way to invoke deletecollection?
If it does what I think it should do, i.e. delete all the pods of a certain type, it sounds like it would be better than the sequence of grep/awk/xargs that I normally end up using.
The delete verb refers to deleting a single resource, for example a single Pod. The deletecollection verb refers to deleting multiple resources at the same time, for example multiple Pods using a label or field selector or all Pods in a namespace.
To give some examples from the API documentation:
To delete a single Pod: DELETE /api/v1/namespaces/{namespace}/pods/{name}
To delete multiple Pods (or, a deletecollection):
All pods in a namespace: DELETE /api/v1/namespaces/{namespace}/pods
All pods in a namespace matching a given label selector: DELETE /api/v1/namespaces/{namespace}/pods?labelSelector=someLabel%3dsomeValue
Regarding kubectl: You cannot invoke deletecollection explicitly with kubectl.
Instead, kubectl will infer on its own whether to use delete or deletecollection depending on how you invoke kubectl delete. When deleting a single resource (kubectl delete pod $POD_NAME), kubectl will use a delete call, and when using a label selector or simply deleting all Pods (kubectl delete pods -l $LABEL=$VALUE or kubectl delete pods --all), it will use the deletecollection verb. See the examples below.
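Concretely (pod and label names are illustrative):
# delete verb: a single named resource
kubectl delete pod my-pod
# deletecollection verb: everything matching a label selector
kubectl delete pods -l app=my-app
# deletecollection verb: all Pods in the current namespace
kubectl delete pods --all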
DeleteCollection is not a kubectl command or parameter.
When RBAC is active, it uses verbs to define what type of access you have over a class of Kubernetes objects.
DeleteCollection is a verb used in an RBAC Role definition to authorize (or not) the deletion of a collection of objects of the same kind, such as Pods, Deployments, or Services.
Example of a YAML Role definition using verbs:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-admin
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list","delete", "deletecollection"]
I am using Kubernetes on bare metal (v1.10.2) and the latest Traefik (v1.6.2) as ingress. I am seeing the following issue when I want to enable Traefik to route to an HTTPS service:
Error configuring TLS for ingress default/cheese: secret default/traefik-cert does not exist
The secret exists! Why does it report that it doesn't?
On the basis of a comment: the secret is inaccessible from the Traefik service account. But I don't understand why.
Details as follows:
kubectl get secret dex-tls -oyaml --as gem-lb-traefik
Error from server (Forbidden): secrets "dex-tls" is forbidden: User "gem-lb-traefik" cannot get secrets in the namespace "default"
$ kubectl describe clusterrolebinding gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: gem-lb-traefik
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount gem-lb-traefik default
$ kubectl describe clusterrole gem-lb-traefik
Name: gem-lb-traefik
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
endpoints [] [] [get list watch]
pods [] [] [get list watch]
secrets [] [] [get list watch]
services [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
I still don't understand why I am getting a secret-inaccessibility error for the service account.
First of all, in this case you cannot check access to the secret using the --as gem-lb-traefik flag, because it tries to run the command as the user gem-lb-traefik, but you have no such user; you only have a ServiceAccount with the ClusterRole gem-lb-traefik. Moreover, using the --as <user> flag with any nonexistent user produces an error similar to yours:
Error from server (Forbidden): secrets "<secretname>" is forbidden: User "<user>" cannot get secrets in the namespace "<namespace>"
So, as @Ignacio Millán mentioned, you need to check your settings for Traefik and fix them according to the official documentation. Possibly you missed the ServiceAccount in the Traefik DaemonSet description. Also, you need to check whether the Traefik DaemonSet is in the same namespace as the ServiceAccount for which you use the ClusterRoleBinding.
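As a side note, you can impersonate the ServiceAccount itself (rather than a nonexistent user) using the system:serviceaccount:<namespace>:<name> form, provided your own user has impersonation privileges. A sketch, assuming the ServiceAccount lives in default as shown above:
kubectl get secret dex-tls -oyaml --as system:serviceaccount:default:gem-lb-traefik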