kubectl diff fails on AKS - kubernetes

I'd like to diff a Kubernetes YAML template against the actually deployed resources. This should be possible using kubectl diff. However, on my Kubernetes cluster in Azure, I get the following error:
Error from server (InternalError): Internal error occurred: admission webhook "aks-webhook-admission-controller.azmk8s.io" does not support dry run
Is there something I can enable on AKS to let this work or is there some other way of achieving the diff?

As a workaround you can use the standard GNU/Linux diff command in the following way:
diff -uN <(kubectl get pods nginx-pod -o yaml) example_pod.yaml
I know this is just a workaround rather than a real solution, but I think it can still be considered a reasonably full-fledged replacement.
Thanks, but that doesn't work for me, because it's not just one pod
I'm interested in, it's a whole Helm release with deployment,
services, jobs, etc. – dploeger
But anyway, you won't compare everything at once, will you?
You can use it for any resource you like, not only for Pods. Just substitute Pod with any other resource type you like.
Anyway, under the hood kubectl diff uses the diff command.
In kubectl diff --help you can read:
KUBECTL_EXTERNAL_DIFF environment variable can be used to select your
own diff command. By default, the "diff" command available in your
path will be run with "-u" (unified diff) and "-N" (treat absent files
as empty) options.
The real problem in your case is that, for some reason, you cannot use --dry-run on your AKS cluster, which is a question for AKS users/experts. Maybe it can be enabled somehow, but unfortunately I have no idea how.
Basically, kubectl diff compares the already deployed resource, which we can get by:
kubectl get resource-type resource-name -o yaml
with the result of:
kubectl apply -f nginx.yaml --dry-run --output yaml
and not with the actual content of your YAML file (a simple cat nginx.yaml would be enough for that purpose).
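For illustration, a rough manual approximation of what kubectl diff does for a single manifest might look like the following (nginx.yaml is just a placeholder file name, and the output will also show server-populated fields such as status):
diff -uN <(kubectl get -f nginx.yaml -o yaml) <(kubectl apply -f nginx.yaml --dry-run --output yaml)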
You can additionally use:
kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml
to get the YAMLs of all resources belonging to a specific Helm release.
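Note that kubectl get all only covers a subset of resource kinds; assuming the chart applies the standard app.kubernetes.io/instance label to its other objects as well, you can list extra kinds explicitly, for example:
kubectl get all,configmap,secret,ingress -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml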
As you can read in man diff, it has the following options:
--from-file=FILE1
compare FILE1 to all operands; FILE1 can be a directory
--to-file=FILE2
compare all operands to FILE2; FILE2 can be a directory
so we are not limited to comparing single files; we can also compare against files located in a specific directory. We just can't use these two options together.
So the full diff command for comparing all resources belonging to a specific Helm release currently deployed on our Kubernetes cluster with the YAML files from a specific directory may look like this:
diff -uN <(kubectl get all -l "app.kubernetes.io/instance=<helm_release_name>" -o yaml) --to-file=directory_containing_yamls/

Related

K8s get back my yaml files from running cluster

Okay first let me say please don't judge. Believe me, I am kicking myself in the ass.
So I lost the hard disk on my laptop which held the Kubernetes YAML files that I ran against a Kubernetes cloud cluster. I don't have the latest backup, which is the problem.
Does anyone know how to get back just the YAML I ran against the K8s cloud server? I can get to the cluster and run kubectl get pod my-pod -o yaml, but of course it adds a lot of things. I am just looking for the YAML that I ran.
I am stressing here and have learned my lesson. Backup, Backup and verify Backup.
You can use this and extend it to your needs:
kubectl get [resource type] -n [namespace] [resource Name] -o yaml > [output.yaml]
The -o yaml flag will do the job.
Note
You will get some extra information added by your cluster/cloud provider, like status, resource versions, and more.
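If you need to recover more than one object, a small loop along these lines (the resource kinds and my-namespace are just example placeholders) can dump each resource to its own file:
for kind in deployment service configmap; do
  for name in $(kubectl get "$kind" -n my-namespace -o name); do
    # e.g. deployment.apps/my-app -> deployment.apps_my-app.yaml
    kubectl get "$name" -n my-namespace -o yaml > "$(echo "$name" | tr '/' '_').yaml"
  done
done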
Lens
https://k8slens.dev/
You can use Lens which will allow you to view & edit your resources so you will be able to copy the YAML from it.

How to write Kubernetes annotations to the underlying YAML files?

I am looking to apply existing annotations on a Kubernetes resource to the underlying YAML configuration files. For example, this command will successfully find all pods with a label of "app=helloworld" or "app=testapp" and annotate them with "xyz=test_anno":
kubectl annotate pods -l 'app in (helloworld, testapp)' xyz=test_anno
However, this only applies the annotations to the running pods and doesn't change the YAML files. How do I force those changes to the YAML files so they're permanent, either after the fact or as part of kubectl annotate to start with?
You could use the kubectl patch command with a little trick:
kubectl get pods -l 'app in (helloworld, testapp)' -o name | xargs -I{} kubectl patch {} -p '{"metadata":{"annotations":{"xyz":"test_anno"}}}'
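If the goal is to change the YAML files themselves rather than only the live pods, one possible approach (assuming your kubectl version supports the --local flag of kubectl patch; the file names here are placeholders) is to apply the same patch locally and write the result to a new file:
kubectl patch --local -f helloworld-pod.yaml -p '{"metadata":{"annotations":{"xyz":"test_anno"}}}' -o yaml > helloworld-pod.annotated.yaml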

How to view the manifest file used to create a Kubernetes resource?

I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.
Deployment, service and ingress files were used to create the app setup.
I tried the following command, but I'm not sure if it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied annotation; if you used kubectl apply, it holds a JSON version of a more original-ish manifest, though probably still with some defaulted fields.
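If the resource was created with kubectl apply, you can usually read that annotation back directly; for example (deployment/my-deployment is a placeholder):
kubectl apply view-last-applied deployment/my-deployment
# or print the raw annotation
kubectl get deployment my-deployment -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'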
You can try using the --export flag, but it is deprecated and may not work perfectly.
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml

Where is the full Kubernetes YAML spec?

There must be "full-configuration" example templates of Kubernetes YAML configs somewhere, with comments itemizing what each parameter does, along with runnable examples.
Does anyone know where something like this might be? Or where the "full API" of the most commonly used Kubernetes components are?
There is documentation available for every K8s API version; for example, check this link.
The way I found out what every key in a YAML file represents and what it means is via the kubectl explain command.
For example:
kubectl explain deploy.spec
A trick I use while doing CKAD to see the full list:
kubectl explain deploy --recursive > deployment_spec.txt
This will list all available options for a Kubernetes deployment that you could use in a YAML file.
To generate a template, there is the option to use --dry-run and -o yaml in a kubectl command, for example to create a template for a CronJob:
kubectl run cron-job-name --image=busybox --restart=OnFailure --schedule="*/1 * * * *" --dry-run -o yaml > cron-job-name.yaml
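On newer kubectl versions, where kubectl run no longer supports --schedule, a similar template can be generated with kubectl create cronjob (cron-job-name is a placeholder):
kubectl create cronjob cron-job-name --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml > cron-job-name.yaml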

How can I configure kubectl to interact with both minikube and a deployed cluster?

When you use minikube, it automatically creates the local configurations, so it's ready to use. And it appears there is support for multiple clusters in the kubectl command based on the reference for kubectl config.
In the docs for setting up clusters, there's a reference to copying the relevant files to your local machine to access the cluster. I also found an SO Q&A about leveraging Azure remotely that talked about editing the .kube/config file.
It looks like the environment variable $KUBECONFIG can reference multiple locations of these configuration files, with the built-in default being ~/.kube/config (which is what minikube creates).
If I want to be able to use kubectl to invoke commands on multiple clusters, should I download the relevant config file into a new location (for example into ~/gcloud/config), and set the KUBECONFIG environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when invoking kubectl to specify a configuration for the cluster?
I wasn't sure if there was some way of merging the configuration files that would be better, and leverage the kubectl config set-context or kubectl config set-cluster commands instead. The documentation at Kubernetes on "Configure Access to Multiple Clusters" seems to imply a different means of using --kubeconfig along with these kubectl config commands.
In short, what's the best way to interact with multiple separate kubernetes clusters and what are the tradeoffs?
If I want to be able to use kubectl to invoke commands to multiple
clusters, should I download the relevant config file into a new
location (for example into ~/gcloud/config), and set the KUBECONFIG
environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when
invoking kubectl to specify a configuration for the cluster?
That probably depends on which approach you find simpler and more convenient, and on whether you need to keep security and access-management concerns in mind.
In our experience, merging various kubeconfig files is very useful for multi-cluster operations: carrying out maintenance tasks and incident management over a group of clusters (contexts & namespaces), and simplifying troubleshooting thanks to the ability to compare configs, manifests, resources and states of K8s services, pods, volumes, namespaces, ReplicaSets, etc.
However, when automation and deployment (with tools like Jenkins, Spinnaker or Helm) are involved, having separate kubeconfig files is most likely a good idea. A hybrid approach is to merge kubeconfig files based on a division by service tier, using one file per development landscape (dev, qa, stg, prod), or by team, following roles and responsibilities in an enterprise (teamA, teamB, …, teamN); both are good alternatives.
For scenarios with merged multi-cluster kubeconfig files, consider kubectx + kubens, which are very powerful companion tools for kubectl that let you see the current context (cluster) and namespace, and switch between them.
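Typical usage, assuming kubectx and kubens are installed separately:
kubectx              # list available contexts
kubectx cluster-2    # switch to the cluster-2 context
kubens kube-system   # switch the current context's default namespace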
In short, what's the best way to interact with multiple separate
kubernetes clusters and what are the trade offs?
The trade-offs should be analyzed considering the most important factors for your project. Having a single merged kubeconfig file seems simpler, even more so if you merge it into ~/.kube/config so that kubectl uses it by default and you just switch between clusters/namespaces with the --context kubectl flag. On the other hand, if limiting the scope of each kubeconfig is a must, having them segregated and using --kubeconfig=file1 sounds like the best way to go.
There is probably NOT a single best way for every case and scenario, but knowing how to configure kubeconfig files and their precedence will help.
In this article -> https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/ you'll find a complementary and valuable opinion:
While having all of the contexts you may need in one file is nice, it
is difficult to maintain, and seldom the default case. Multiple tools
which provide you with access credentials will provide a fresh
kubeconfig to use. While you can merge the configs together into
~/.kube/config, it is manual, and makes removing contexts more
difficult (having to explicitly remove the context, cluster, and
user). There is an open issue in Kubernetes tracking this. However by
keeping each provided config file separate, and just loading all of
them, removal is much easier (just remove the file). To me, this
seems like a much more manageable approach.
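For reference, removing a context from a merged file by hand looks roughly like this (the names are placeholders):
kubectl config delete-context old-context
kubectl config delete-cluster old-cluster
kubectl config unset users.old-user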
I prefer to keep all individual config files under ~/.kube/configs, and by taking advantage of the multiple-path aspect of the $KUBECONFIG environment variable option, we can make this happen.
If you’re using kubectl, here’s the preference that takes effect while determining which kubeconfig file is used.
use --kubeconfig flag, if specified
use KUBECONFIG environment variable, if specified
use $HOME/.kube/config file
With this, you can easily override the kubeconfig file you use per kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
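For example, to extract a single context (with credentials inlined) into its own file, something like this should work (cluster-1 is a placeholder context name):
kubectl config view --minify --flatten --context=cluster-1 > cluster-1-only.yaml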
Bonus (extra points!)
Using multiple kubeconfigs at once
You can save AKS (Azure Kubernetes Service), AWS EKS (Elastic Kubernetes Service) or GKE (Google Kubernetes Engine) cluster contexts to separate files and set the KUBECONFIG env var to reference all of the file locations.
For instance, when you create a GKE cluster (or retrieve its credentials) through the gcloud command, it normally modifies your default ~/.kube/config file. However, you can set $KUBECONFIG for gcloud to save cluster credentials to a file:
KUBECONFIG=c1.yaml gcloud container clusters get-credentials "cluster-1"
Then, as mentioned before, using multiple kubeconfigs at once can be very useful for working with multiple contexts at the same time.
To do that, you need a “merged” kubeconfig file. In the section "Merging kubeconfig files" below, we explain how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.
By specifying multiple files in the KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl.
#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can’t just append them to get one big kubeconfig file, but kubectl can help you merge these files:
#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
ref article 1: https://ahmet.im/blog/mastering-kubeconfig/
ref article 2: https://github.com/kubernetes/kubernetes/issues/46381
I have a series of shell functions that boil down to kubectl --context=$CTX --namespace=$NS, allowing me to contextualize each shell [1]; a minimal sketch is shown after the footnote below. But if you are cool with that approach, then rather than rolling your own, https://github.com/Comcast/k8sh will likely interest you. I just wish it were shell functions instead of a sub-shell.
But otherwise, yes, I keep all the config values in the one ~/.kube/config
footnote 1: if you weren't already aware, one can also change the title of terminal windows via title() { printf '\033]0;%s\007' "$*"; } which I do in order to remind me which cluster/namespace/etc is in effect for that tab/window
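A minimal sketch of such wrapper functions (the context and namespace names are made up):
kprod() { kubectl --context=prod-cluster --namespace=prod "$@"; }
kdev()  { kubectl --context=dev-cluster --namespace=dev "$@"; }
# usage: kprod get pods; kdev get pods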
kubectl get pods --kubeconfig file1.yaml
kubectl get pods --kubeconfig file2.yaml
You can use the --kubeconfig flag to tell kubectl to run based on file1 or file2. Note that the file is a Kubernetes kubeconfig file.