I am running a Kubernetes cluster on bare metal with three nodes.
I have applied a couple of YAML files for different services.
Now I would like to bring some order to the cluster and clean up some orphaned kube objects.
To do that I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can dig into the ClusterRoleBinding of, say, the admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good kubectl options combination to find all the usages/references of some ServiceAccount?
You can list all resources using a service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the name of the account you are investigating.
I tested this command on my cluster and it works.
Let me know if this solution helped you.
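Note that the jsonpath above only inspects the first subject of each binding. Pods, on the other hand, reference a ServiceAccount directly through spec.serviceAccountName, so a similar query (a sketch; again replace YOUR_SERVICE_ACCOUNT_NAME) lists the pods that actually use it:
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName=="YOUR_SERVICE_ACCOUNT_NAME")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'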
Take a look at this project. After installing it via Homebrew or Krew you can use it to find a service account and look at its role, scope, and source. It does not tell you which pods are referring to it, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount
Okay, first let me say: please don't judge. Believe me, I am kicking myself in the ass.
So I lost the hard disk on my laptop which held the Kubernetes YAML files that I ran against a Kubernetes cloud cluster. I don't have the latest backup, which is the problem.
Does anyone know how to get just the YAML I ran against the K8s cloud server? I can get to the cluster and run kubectl get pod my-pod -o yaml, but of course that adds a lot of things. I am just looking for the YAML that I ran.
I am stressing here and have learned my lesson. Backup, Backup and verify Backup.
You can use this and extend it to your needs:
kubectl get [resource type] -n [namespace] [resource Name] -o yaml > [output.yaml]
The -o yaml will do the job
Note
You will get some extra information added by your cloud provider, such as status history, resource version, and more.
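If you want to strip those server-added fields automatically, here is a sketch that assumes the Go yq (v4) is installed; my-app and default are placeholder names:
kubectl get deployment my-app -n default -o yaml \
  | yq 'del(.status, .metadata.managedFields, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.generation)' \
  > my-app.yaml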
Lens
https://k8slens.dev/
You can use Lens which will allow you to view & edit your resources so you will be able to copy the YAML from it.
I am using helm to deploy my applications, which have deployments, pods, jobs and other objects.
Is there any way to get "kubectl describe" output of all objects loaded by "helm install"?
Let me know if it works for you; I tried it with my helm charts (a custom one, plus ES and Kibana).
TL;DR
kubectl get all -l chart=myb5 -n myb5
-n stands for namespace
-l stands for label
Explanations
Labeling your Kubernetes objects is really important, and most of the helm charts out there use labels to easily access and select objects.
When you install a chart, it adds a label such as chart=my-chart-name. If the chart does not use it (maybe you are creating one yourself), it is good practice to add it.
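For example, a chart template can stamp that label itself; a minimal fragment, using Helm's standard built-in chart values:
# templates/deployment.yaml (fragment, sketch)
metadata:
  labels:
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}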
So querying all resources with get all should retrieve all the resources created in the default namespace.
Depending on where you installed your helm chart, it is good to add the namespace field to your query.
Note that if you use 1 namespace for only 1 helm chart resources, you do not need to filter with labels.
PS: should work the same with describe ;)
Since you're using helm install, I assume your chart's resources are installed into a specific namespace.
In that case, you can simply use the command kubectl describe all -n <your-namespace>.
Its output should be the same as using kubectl describe on each resource of your Helm Chart.
kubectl describe all -l chart=<chartName> -n namespace
or
kubectl get events -n namespace -w
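As an aside, Helm 3 itself can print everything a release created, and charts that follow the standard labels convention also carry app.kubernetes.io/instance=<release-name> (whether yours does depends on the chart):
helm get manifest <release-name> -n <namespace>
kubectl get all -l app.kubernetes.io/instance=<release-name> -n <namespace>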
How to find the location of a Kubernetes object's definition file.
I know the name of a Kubernetes deployment and want to make some changes directly to its definition file instead of using kubectl edit deployment.
The object definitions are stored internally in Kubernetes, in replicated storage that is not directly accessible. Even if you did change an object definition there, you would still need to trigger the rest of the update sequence that Kubernetes runs when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally, and use kubectl apply -f to send them to the cluster. If you don't have them then you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check in the results to your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
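If you go the kubectl edit route, you can also pick the editor it opens through the KUBE_EDITOR environment variable (nano here is just one choice):
KUBE_EDITOR=nano kubectl edit deployment depl-name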
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in YAML format (or -o json to get it in JSON format), save it to a file, edit the file, and apply the changes.
As a step-by-step guide:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
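If you only need to change a single field imperatively, kubectl set image (shown here with placeholder deployment, container, and image names) avoids the round trip through a file:
kubectl set image deployment/deployment-name container-name=nginx:1.25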
It's all stored in etcd
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster into its current state
Take a look at this blog post
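If you are curious, you can list the raw keys etcd stores; a sketch assuming a kubeadm cluster with the default certificate paths:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head -20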
First off, I'm completely new to Kubernetes, so I may have missed something completely obvious, but the documentation isn't exactly helping, so I'm turning to you guys for help.
I'm trying to figure out just how many types of "deployment files" there are for Kubernetes. I call them "deployment files" because I really don't know what else to call them, and they're usually associated with a deployment.
So far, every yml/yaml file I've seen starts like this:
apiVersion:
kind: << this is what I'm asking about >>
metadata:
And so far I have seen these "kind" values:
ClusterConfig
ClusterRole
ClusterRoleBinding
CronJob
Deployment
Job
PersistentVolumeClaim
Pod
ReplicationController
Role
RoleBinding
Secret
Service
ServiceAccount
I'm sure there are many more. But I can't seem to find a location where they are listed and the contexts broken down.
So what I want to know is this:
Where can I find an explanation for these yaml files?
Where can I learn about the different kinds?
Where can I get a broken-down explanation of the minimum required fields/values for any of these?
Are there templates for these files?
Thanks
This question would need a whole blog post to answer fully, but in short you can try these options and commands to learn from the kubectl CLI.
Learn to use the kubectl explain command, which shows you a list of Kubernetes objects:
$ kubectl explain
You can get detailed information about any of the listed resources using this syntax:
$ kubectl explain pod
$ kubectl explain pod.spec
$ kubectl explain pod.spec.containers
Or you can get a yaml template of the object by adding the --recursive flag to the explain command.
$ kubectl explain pod --recursive
This will also give you a link to the official documentation.
So in short, running kubectl explain with the --recursive option will list everything.
When you are talking about a specific yaml file containing the definition of a specific Kubernetes object, you can call it a yaml manifest or simply a yaml definition file. Using the word Deployment for all of them isn't a good idea, as there is already a specific resource type defined and called by this name in Kubernetes, so for consistency it's better not to call them all deployments.
I'm sure there are many more. But I can't seem to find a location where they are listed and the contexts broken down.
Yes, there are a lot more of them, and you can list the ones that are available by running:
kubectl api-resources
These different objects are actually called api-resources. As you can see, they are listed in five columns: NAME, SHORTNAMES, APIGROUP, NAMESPACED and KIND.
NAME                SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                                    true         Binding
componentstatuses   cs                      false        ComponentStatus
configmaps          cm                      true         ConfigMap
endpoints           ep                      true         Endpoints
events              ev                      true         Event
limitranges         limits                  true         LimitRange
namespaces          ns                      false        Namespace
nodes               no                      false        Node
Note that the NAME of a resource corresponds to its KIND but is slightly different. NAME simply describes the resource type as we refer to it, e.g. when using the kubectl command line utility. To give one example, when you want to list the pods available in your cluster you simply type kubectl get pods. You don't have to use the resource kind, i.e. Pod, in this context. You can, but you don't have to: kubectl get Pod or kubectl get ConfigMap will also return the desired result. You can also refer to resources by their shortnames, so kubectl get daemonsets and kubectl get ds are equivalent.
It's totally different when it comes to a specific resource/object definition. In the context of a yaml definition file we must use the proper KIND of the resource. Kinds mostly start with a capital letter and are written in so-called CamelCase, but there are exceptions to this rule.
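To make that concrete, here is a minimal manifest sketch showing where kind fits (the name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: nginx:1.25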
I really recommend that you familiarize yourself with the Kubernetes documentation. It is very user-friendly and nicely explains both the key Kubernetes concepts and all the tiny details.
Here you have even more useful commands for exploring API resources:
kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (just the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
As @wargre already suggested in his comment, the official Kubernetes documentation is definitely the best place to start, as you will find there a very detailed description of every resource.
Understanding Kubernetes Objects
You may start from reading this article: Understanding Kubernetes Objects
Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
What containerized applications are running (and on which nodes)
The resources available to those applications
The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
A Kubernetes object is a “record of intent”: once you create the object, the Kubernetes system will constantly work to ensure that the object exists. By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s desired state.
K8s API reference
A detailed description of all objects can be found in the Kubernetes API reference guide.
One of the points in the kubectl best practices section in the Kubernetes docs states the following:
Pin to a specific generator version, such as kubectl run --generator=deployment/v1beta1
But then a little further down in the doc, we learn that, except for Pod, the use of the --generator option is deprecated and will be removed in future versions.
Why is this being done? Doesn't the generator make life easier in creating a template file for the resource definition of a deployment, service, and other resources? What alternative is the Kubernetes team suggesting? It isn't in the docs :(
kubectl create is the recommended alternative if you want to create more than just a pod (like a deployment).
https://kubernetes.io/docs/reference/kubectl/conventions/#generators says:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
This pull request has the reason why generators (except run-pod/v1) were deprecated:
The direction is that we want to move away from kubectl run because it's over bloated and complicated for both users and developers. We want to mimic docker run with kubectl run so that it only creates a pod, and if you're interested in other resources kubectl create is the intended replacement.
For a deployment you can try:
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
and
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
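If the generator's appeal was getting a template file, a client-side dry run of kubectl create gives you the same thing (--dry-run=client is the spelling in newer kubectl versions; older releases use plain --dry-run):
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node --dry-run=client -o yaml > deployment.yaml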