Kubernetes: increase resources for all deployments

I am new to Kubernetes. I have a K8s cluster with more than 150 deployments, each scaled to more than 4 pods.
I need to increase the resource limits for all deployments in the cluster, and I'm aware I can do this directly in each deployment's YAML.
However, I'm wondering whether there is any way to increase the resources for all deployments in one go.
Thanks for your help in advance.

There are a few things to point out here:
There is a kubectl patch command that allows you to:
Update field(s) of a resource using strategic merge patch, a JSON
merge patch, or a JSON patch.
JSON and YAML formats are accepted.
See examples below:
kubectl patch deploy deploy1 deploy2 --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
or:
kubectl patch deploy $(kubectl get deploy -o go-template --template '{{range .items}}{{.metadata.name}}{{" "}}{{end}}') --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
For further reference see this doc.
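If you need to hit every deployment in every namespace in one go, a small shell loop can drive the same patch. This is only a sketch and assumes each deployment already defines a memory limit on its first container (a JSON-patch "replace" fails on a missing path; use "add" to create it):
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  for d in $(kubectl get deploy -n "$ns" -o name); do
    kubectl patch -n "$ns" "$d" --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "120Mi"}]'
  done
done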
If your deployments share a common label, you can update them all with the kubectl set resources command and a label selector:
kubectl set resources deployment -l key=value --limits memory=120Mi
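If the deployments don't share a label, kubectl set resources also has an --all flag that targets every deployment in the current namespace (a sketch; repeat it per namespace and check the flag against your kubectl version):
kubectl set resources deployment --all --limits=memory=120Mi -n <namespace>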
Also, you can combine kubectl with additional CLI tools like sed, awk or xargs. For example:
kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
or:
kubectl get deployments -o name | xargs -I {} kubectl patch {} -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +'%s')\"}}}}}"
It is also worth noting that configuration files should be stored in version control before being pushed to the cluster. See the Configuration Best Practices for more details.

You can use kustomize's "components" system if you want to set them all to the same thing, but that's unlikely. A better solution is probably to write a small script in Python (or whatever language you prefer) that modifies all the YAML files and pushes them back into source control.
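As a rough sketch of such a script in plain shell rather than Python (assuming your manifests live under ./deployments, one Deployment per file, and that yq v4 is installed), something like this would bump the memory limit of the first container and leave the change ready to commit:
for f in deployments/*.yaml; do
  # assumes each file holds a single Deployment with at least one container defined
  yq -i '.spec.template.spec.containers[0].resources.limits.memory = "120Mi"' "$f"
done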

Related

How to collect Kubernetes Metadata?

I am looking for a way to get all objects' metadata within a k8s cluster and send it out to an external server.
By metadata, I mean the objects' Name, Kind, Labels, Annotations, etc.
The intention is to build an offline inventory of a cluster.
What would be the best approach to build it? Is there any tool that already does something similar?
Thanks
Posting this as a community wiki, feel free to edit and expand.
There are different ways to achieve it.
As suggested in this GitHub issue comment, it's possible to iterate through all API resources to get every available object.
in yaml:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o yaml
in json:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o json
And then parse the output.
Use Kubernetes client libraries.
There are Kubernetes clients already developed for many languages which can be used to get the required information and work with it later.
Use the kubectl plugin ketall (didn't test it).
There's a kubectl plugin which returns all cluster resources; see the GitHub repo: ketall. Again, after the cluster objects are retrieved, you will need to parse/work with them.
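If you use krew, installing and running it could look roughly like this (plugin name and flags as documented in the ketall repo, so verify locally):
kubectl krew install get-all
kubectl get-all -o yaml > cluster-inventory.yaml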
Try these commands:
kubectl get all --all-namespaces -o yaml
or
kubectl get all --all-namespaces -o json
You can then parse the output and use it as you see fit.
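As a small sketch for the parsing step (assuming jq is installed), the fields mentioned in the question (name, kind, labels and annotations) could be extracted like this:
kubectl get all --all-namespaces -o json | jq '[.items[] | {kind: .kind, namespace: .metadata.namespace, name: .metadata.name, labels: .metadata.labels, annotations: .metadata.annotations}]' > inventory.json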

Find the history of commands applied to the kubernetes cluster

Is there a way to find the history of commands applied to the kubernetes cluster by kubectl?
For example, I want to know whether the last applied command was
kubectl apply -f x.yaml
or
kubectl apply -f y.yaml
You can use kubectl apply view-last-applied command to find the last applied configuration:
➜ ~ kubectl apply view-last-applied --help
View the latest last-applied-configuration annotations by type/name or file.
The default output will be printed to stdout in YAML format. One can use -o option to change output format.
Examples:
# View the last-applied-configuration annotations by type/name in YAML.
kubectl apply view-last-applied deployment/nginx
# View the last-applied-configuration annotations by file in JSON
kubectl apply view-last-applied -f deploy.yaml -o json
[...]
To get the full history from the beginning of the cluster's creation you should use audit logs, as already mentioned in the comments by @Jonas.
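As an illustration only (the field names come from the audit.k8s.io Event schema, but the log path, and whether you have file-based audit logs at all, depend entirely on how your cluster is set up), filtering write operations out of a JSON-lines audit log with jq could look like this:
jq 'select(.verb == "create" or .verb == "update" or .verb == "patch" or .verb == "delete") | {time: .requestReceivedTimestamp, user: .user.username, verb: .verb, resource: .objectRef.resource, name: .objectRef.name}' /var/log/kubernetes/audit.log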
Additionally, if you adopt GitOps you can keep all of your cluster state under version control, which lets you trace back all the changes made to your cluster.

How to write Kubernetes annotations to the underlying YAML files?

I am looking to apply existing annotations on a Kubernetes resource to the underlying YAML configuration files. For example, this command will successfully find all pods with a label of "app=helloworld" or "app=testapp" and annotate them with "xyz=test_anno":
kubectl annotate pods -l 'app in (helloworld, testapp)' xyz=test_anno
However, this only applies the annotations to the running pods and doesn't change the YAML files. How do I force those changes to the YAML files so they're permanent, either after the fact or as part of kubectl annotate to start with?
You could use the kubectl patch command with a little trick:
kubectl patch $(kubectl get po -l 'app in (helloworld, testapp)' -o name) -p '{"metadata":{"annotations":{"xyz":"test_anno"}}}'
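Since the goal is to make the change permanent in the YAML files, a hedged companion sketch is to write the same annotation into the source manifests with yq v4 and commit the result (the manifests/ path and the assumption of one manifest per file are hypothetical):
for f in manifests/*.yaml; do
  yq -i '.metadata.annotations.xyz = "test_anno"' "$f"
done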

kubectl diff fails on AKS

I'd like to diff a Kubernetes YAML template against the actually deployed resources. This should be possible using kubectl diff. However, on my Kubernetes cluster in Azure, I get the following error:
Error from server (InternalError): Internal error occurred: admission webhook "aks-webhook-admission-controller.azmk8s.io" does not support dry run
Is there something I can enable on AKS to let this work or is there some other way of achieving the diff?
As a workaround you can use standard GNU/Linux diff command in the following way:
diff -uN <(kubectl get pods nginx-pod -o yaml) example_pod.yaml
I know this is just a workaround rather than a real solution, but I think it can still be considered a fairly full-fledged replacement.
Thanks, but that doesn't work for me, because it's not just one pod
I'm interested in, it's a whole Helm release with deployment,
services, jobs, etc. – dploeger
But anyway, you won't compare everything at once, will you?
You can use it for any resource you like, not only for Pods; just substitute Pod with any other resource type.
Anyway, under the hood kubectl diff uses the diff command.
In kubectl diff --help you can read:
KUBECTL_EXTERNAL_DIFF environment variable can be used to select your
own diff command. By default, the "diff" command available in your
path will be run with "-u" (unified diff) and "-N" (treat absent files
as empty) options.
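For example, assuming colordiff is installed, you can point kubectl at it to get colored output:
KUBECTL_EXTERNAL_DIFF=colordiff kubectl diff -f deploy.yaml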
The real problem in your case is that for some reason you cannot use --dry-run on your AKS cluster, which is a question for AKS users/experts. Maybe it can be enabled somehow, but unfortunately I have no idea how.
Basically, kubectl diff compares the already deployed resource, which we can get with:
kubectl get resource-type resource-name -o yaml
with the result of:
kubectl apply -f nginx.yaml --dry-run=server -o yaml
and not with the actual content of your yaml file (a simple cat nginx.yaml would be enough for that purpose).
You can additionally use:
kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml
to get the YAML of all resources belonging to a specific Helm release.
As you can read in man diff, it has the following options:
--from-file=FILE1
compare FILE1 to all operands; FILE1 can be a directory
--to-file=FILE2
compare all operands to FILE2; FILE2 can be a directory
so we are not limited to comparing single files; we can also compare against files located in a specific directory. We just can't use these two options together.
So the full diff command for comparing all resources belonging to a specific Helm release currently deployed on our Kubernetes cluster against the YAML files in a specific directory may look like this:
diff -uN <(kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml) --to-file=directory_containing_yamls/

How to clone Google Container Cluster / Kubernetes cluster?

As in the title: I want to clone (create a copy of) an existing cluster.
If it's not possible to copy/clone a Google Container Engine cluster, then how can I clone a Kubernetes cluster?
If that's not possible, is there a way to dump the whole cluster config?
Note:
I try to modify the cluster's configs by calling:
kubectl apply -f some-resource.yaml
But nothing stops me or another employee from modifying the cluster by running:
kubectl edit service/resource
Or from setting properties via command-line kubectl calls.
I'm using a bash script from the CoreOS team, with small adjustments, that works pretty well. By default it excludes the kube-system namespace, but you can adjust this if you need to, and you can add or remove the resource types you want to copy. Note that the --export flag it uses was deprecated and removed in newer kubectl versions (1.18+); on recent clusters drop the flag and let the jq filter strip the server-generated fields.
for ns in $(kubectl get ns --no-headers | cut -d " " -f1); do
  if [ "$ns" != "kube-system" ]; then
    kubectl --namespace="${ns}" get --export -o=json svc,rc,rs,deployments,cm,secrets,ds,statefulsets,ing | \
    jq '.items[] |
        select(.type!="kubernetes.io/service-account-token") |
        del(
            .spec.clusterIP,
            .metadata.uid,
            .metadata.selfLink,
            .metadata.resourceVersion,
            .metadata.creationTimestamp,
            .metadata.generation,
            .status,
            .spec.template.spec.securityContext,
            .spec.template.spec.dnsPolicy,
            .spec.template.spec.terminationGracePeriodSeconds,
            .spec.template.spec.restartPolicy
        )' >> "./my-cluster.json"
  fi
done
To restore it on another cluster, you have to execute kubectl create -f ./my-cluster.json
You can now create/clone an existing cluster:
On the Clusters page, click Create cluster and choose an existing cluster. But remember, this will not clone the API resources; you may have to use a third-party tool such as Velero to back up and restore them.
Here are some useful links
Cluster Creation
Velero
Medium Article on How to use Velero
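For example, once Velero is installed and configured with object storage reachable from both clusters (see the links above for setup), backing up the source cluster and restoring into the new one is roughly:
velero backup create source-cluster-backup
velero restore create --from-backup source-cluster-backup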