Figure out where ConfigMap values are used in the cluster - kubernetes

In my team we have a single huge ConfigMap Resource that holds all the important variables distributed to all our pods.
After some time we realized that it is very hard to track where those variables are ultimately used. I was wondering if there is any way, with Helm or kubectl, to figure out where the values of the ConfigMap are actually used, like a list of all the pods being supplied by the ConfigMap.
I researched this, but somehow it seems nobody is talking about it, so maybe I'm misunderstanding the concept here?
I'm thankful for any guidance.

You cannot directly use kubectl field selectors to get this. You can output all the pods as JSON and query it with jq. For example, this query prints the names of all pods that use the ConfigMap "kube-proxy" as a volume:
$ kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.volumes[]?.configMap.name=="kube-proxy") | .metadata.name'
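Note that volumes are not the only way ConfigMap values reach a pod; they can also be injected through env and envFrom. A broader sketch (assuming jq is available, using the same example ConfigMap name) that also checks those references:
# pods that reference the ConfigMap via volumes, envFrom, or env valueFrom
$ kubectl get pods --all-namespaces -o json | jq -r '.items[]
    | select(
        any(.spec.volumes[]?.configMap.name; . == "kube-proxy")
        or any(.spec.containers[].envFrom[]?.configMapRef.name; . == "kube-proxy")
        or any(.spec.containers[].env[]?.valueFrom.configMapKeyRef.name; . == "kube-proxy")
      )
    | .metadata.namespace + "/" + .metadata.name'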

Related

How to get the full name of a pod by both its creation date and part of its name?

In my namespace, I have several pods named with the same prefix, followed by the random string. There are also other pods, named differently. The result of kubectl get pods would look something like this:
service-job-12345abc
service-job-abc54321
other-job-54321cba
I need to find the name of the most recently created pod starting with "service-job-".
I found this thread, which helps with getting the name of the most recent pod in general. This one gets me the complete names of pods starting with a specific prefix.
What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one.
Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.
This does what you want (kubectl sorts oldest first, so the most recently created matching pod is on the last line):
kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -1
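An equivalent sketch that skips the go-template, assuming the prefix only ever occurs at the start of a pod name:
# newest pod whose name starts with service-job- (cut strips the leading "pod/")
kubectl get pods --sort-by=.metadata.creationTimestamp -o name | grep '^pod/service-job-' | tail -1 | cut -d/ -f2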

How to collect Kubernetes Metadata?

I am looking for a way to get the metadata of all objects within a k8s cluster and send it out to an external server.
By metadata, I refer to an object's Name, Kind, Labels, Annotations, etc.
The intention is to build an offline inventory of a cluster.
What would be the best approach to build it? Is there any tool that already does something similar?
Thanks
Posting this as a community wiki, feel free to edit and expand.
There are different ways to achieve it.
Per this GitHub issue comment, it's possible to iterate over all resource types and list every available object.
in yaml:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o yaml
in json:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o json
And then parse the output.
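If the goal is an offline inventory, the same idea can be turned into a small dump script (a sketch; the cluster-inventory directory is just an example name):
# dump every listable resource type into its own YAML file
mkdir -p cluster-inventory
for resource in $(kubectl api-resources --verbs=list -o name); do
  kubectl get "$resource" --all-namespaces --ignore-not-found -o yaml > "cluster-inventory/${resource}.yaml"
done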
Use Kubernetes clients.
There are Kubernetes client libraries (available for different languages) which can be used to fetch the required information and work with it afterwards.
Use the kubectl plugin ketall (didn't test it).
There is a kubectl plugin which returns all cluster resources; see the GitHub repo - ketall. Again, once the cluster objects are retrieved, you will need to parse/work with them.
Try these commands:
kubectl get all --all-namespaces -o yaml
or
kubectl get all --all-namespaces -o json
You can parse the output and use it however you see fit. Keep in mind that kubectl get all only covers a curated subset of resource types (pods, services, deployments and other common workload objects), not literally every resource.

Get names of all deployment configs with no running pods

Is there a simple method (that won't require googling at every use) to get names of all deployment configs with no running pods (scaled to 0) in Kubernetes / Openshift? Methods without JSON tokens and awk please.
The docs of oc get dc --help are way too long to decipher for the occasional need.
The only CLI argument for advanced filtering without working with JSON is --field-selector, but it has a limited scope which does not include the spec.replicas field.
So there has to be some JSON magic with another flag - jsonpath.
Here is a command to filter and print names of all deployments which are scaled to 0:
kubectl get deployments --all-namespaces -o=jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.name}{"\n"}{end}'
Jsonpath reference is here.
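Since the question mentions OpenShift deployment configs specifically, the same jsonpath filter should also work against the dc resource with the oc client (a sketch, untested), printing namespace/name pairs:
oc get dc --all-namespaces -o jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'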

Find usage of ServiceAccount in Kubernetes cluster

I am running a Kubernetes cluster on bare metal with three nodes.
I have applied a couple of yaml files for different services.
Now I would like to bring some order to the cluster and clean up some orphaned kube objects.
To do that I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can dig into the ClusterRoleBinding of, say, the admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good kubectl options combination to find all the usages/references of some ServiceAccount?
You can list all resources using a service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the one you are investigating.
I tested this command on my cluster and it works.
Let me know if this solution helped you.
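Note that the jsonpath above only inspects the first subject of each binding. To also find the pods that actually run under the service account, here is a hedged sketch (assuming jq is available):
# pods whose spec references the service account
kubectl get pods --all-namespaces -o json | jq -r '.items[]
  | select(.spec.serviceAccountName == "YOUR_SERVICE_ACCOUNT_NAME")
  | .metadata.namespace + "/" + .metadata.name'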
Take a look at the rbac-lookup project. After installing it via Homebrew or krew you can use it to find a service account and look at its roles, scope, and source. It does not tell you which pods refer to it, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount

Can someone explain the different Kubernetes yaml files and types?

First off, I'm completely new to Kubernetes, so I may have missed something completely obvious, but the documentation isn't exactly helping, so I'm turning to you guys for help.
I'm trying to figure out just how many types of "deployment files" there are for Kubernetes. I call them "deployment files" because I really don't know what else to call them and they're usually associated with a deployment.
So far, every yml/yaml file I've seen starts like this:
apiVersion:
kind: << this is what I'm asking about >>
metadata:
And so far I have seen this many "kind"(s)
ClusterConfig
ClusterRole
ClusterRoleBinding
CronJob
Deployment
Job
PersistentVolumeClaim
Pod
ReplicationController
Role
RoleBinding
Secret
Service
ServiceAccount
I'm sure there are many more. But I can't seem to find a location where they are listed and the contexts broken down.
So what I want to know is this,
Where can I find an explanation for these yaml files?
Where can I learn about the different kinds?
Where can I get a broken down explanation of the minimum required fields/values are for any of these?
Are there templates for these files?
Thanks
This question would need a whole blog post to answer, but in short you can try these options and commands to learn from the kubectl CLI.
Learn to use the kubectl explain command, which shows you a list of Kubernetes objects:
$ kubectl explain
You can get detailed information about any of the listed resources using this syntax:
$ kubectl explain pod
$ kubectl explain pod.spec
$ kubectl explain pod.spec.containers
Or you can get a YAML template of the object by adding the --recursive flag to the explain command.
$ kubectl explain pod --recursive
This will also give you the link to the official documentation.
So, in short, running kubectl explain with the --recursive option will list everything.
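As for the "Are there templates for these files?" part: kubectl can also generate a skeleton manifest for some resource types with a client-side dry run (a sketch; my-app and nginx are just example names):
# print a Deployment manifest without creating anything
# (older kubectl versions use plain --dry-run instead of --dry-run=client)
kubectl create deployment my-app --image=nginx --dry-run=client -o yaml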
When you are talking about a specific yaml file containing the definition of a specific Kubernetes object, you can call it a yaml manifest or simply a yaml definition file. Using the word Deployment for all of them isn't a good idea, as there is already a specific resource type with that name in Kubernetes, so for consistency it's better not to call them all deployments.
I'm sure there are many more. But I can't seem to find a location
where they are listed and the contexts broken down.
Yes, there are a lot more of them and you can list those which are available by running:
kubectl api-resources
These different objects are actually called api-resources. As you can see, they are listed in five columns: NAME, SHORTNAMES, APIGROUP, NAMESPACED and KIND:
NAME                SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                                    true         Binding
componentstatuses   cs                      false        ComponentStatus
configmaps          cm                      true         ConfigMap
endpoints           ep                      true         Endpoints
events              ev                      true         Event
limitranges         limits                  true         LimitRange
namespaces          ns                      false        Namespace
nodes               no                      false        Node
Note that the NAME of a resource corresponds to its KIND, but it is slightly different. NAME simply describes the resource type as we refer to it, e.g. when using the kubectl command line utility. To give one example, when you want to list the pods available in your cluster you simply type kubectl get pods; you don't have to use the resource kind, i.e. Pod, in this context. You can, but you don't have to, so kubectl get Pod or kubectl get ConfigMap will also return the desired result. You can also refer to resources by their short names, so kubectl get daemonsets and kubectl get ds are equivalent.
It's totally different when it comes to a specific resource/object definition. In the context of a yaml definition file we must use the proper KIND of the resource. Kinds mostly start with a capital letter and are written in so-called CamelCase, but there are exceptions to this rule.
I really recommend familiarizing yourself with the Kubernetes documentation. It is very user-friendly and nicely explains both the key Kubernetes concepts and all the very tiny details.
Here you have even more useful commands for exploring API resources:
kubectl api-resources --namespaced=true # All namespaced resources
kubectl api-resources --namespaced=false # All non-namespaced resources
kubectl api-resources -o name # All resources with simple output (just the resource name)
kubectl api-resources -o wide # All resources with expanded (aka "wide") output
kubectl api-resources --verbs=list,get # All resources that support the "list" and "get" request verbs
kubectl api-resources --api-group=extensions # All resources in the "extensions" API group
As @wargre already suggested in his comment, the official Kubernetes documentation is definitely the best place to start, as you will find there a very detailed description of every resource.
Understanding Kubernetes Objects
You may start by reading this article: Understanding Kubernetes Objects
Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:
What containerized applications are running (and on which nodes)
The resources available to those applications
The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
A Kubernetes object is a “record of intent”–once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s desired state.
K8s API reference
A detailed description of all objects can be found in the Kubernetes API reference guide.