Kustomize's docs provide a nice one-liner that compares two different overlays...
diff \
<(kustomize build $OVERLAYS/staging) \
<(kustomize build $OVERLAYS/production)
Is there a way to do the same, but comparing what is running within a specific Kubernetes namespace against a defined overlay on disk?
More specifically, is there a way of knowing what a kubectl apply -k . would do without actually doing it? Using --dry-run just spits out a list of the objects rather than a real diff.
In Kustomize version 4.x.x:
kustomize build ./ | kubectl diff -f -
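To target a specific namespace, you can also pass the namespace to kubectl diff (a small sketch, reusing the $OVERLAYS variable from the question):
kustomize build $OVERLAYS/staging | kubectl diff -n staging -f -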
If you're looking for a way to do this visually, I highly recommend trying the Compare & Sync feature from Monokle.
For example, you can compare the output of a cluster-install kustomization to the objects in a minikube cluster.
You can easily determine which resources are missing in your cluster and which ones are different.
On top of that, you're not limited to only comparing kustomizations to clusters. You can also compare two clusters, two kustomizations, helm charts, etc.
I'm not sure if this is what you are looking for, but in Kubernetes you have kubectl diff.
It's nicely explained in APIServer dry-run and kubectl diff.
You can use the -k, --kustomize option, which does the following:
Process the kustomization directory. This flag can't be used together with -f or -R.
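For example, to diff the staging overlay from the question directly against the cluster (a sketch, reusing the $OVERLAYS variable):
kubectl diff -k $OVERLAYS/staging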
Or maybe something similar to this one-liner that sets up a context for a specific namespace:
$ kubectl config set-context staging --user=cluster-admin --namespace=staging
$ kubectl config set-context prod --user=cluster-admin --namespace=prod
Once you have the contexts set up, you could use them in the following way:
kubectl config use-context staging; cat patched_k8s.yaml | kubectl diff -f -
kubectl config use-context prod; cat patched_k8s.yaml | kubectl diff -f -
This is just an example which I have not tested.
Try this kustomize command, currently in alpha:
KUSTOMIZE_ENABLE_ALPHA_COMMANDS=true kustomize resources diff -k your/kustomize/overlay
via https://kubernetes.slack.com/archives/C9A5ALABG/p1582738327027200?thread_ts=1582695987.023600&cid=C9A5ALABG
I have a small function on my shell config to do this:
kdiff() {
  # Build the overlay and diff it against the cluster; extra args are forwarded to kubectl diff
  overlay="${1}"
  kustomize build "${overlay}" \
    | kubectl diff -f - "${@:2}" \
    | sed '/kubectl.kubernetes.io\/last-applied-configuration/,+1 d' \
    | sed -r "s/(^\+[^\+].*|^\+$)/$(printf '\e[0;32m')\1$(printf '\e[0m')/g" \
    | sed -r "s/(^\-[^\-].*|^\-$)/$(printf '\e[0;31m')\1$(printf '\e[0m')/g"
}
It drops the last-applied-configuration annotation and adds some color.
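Usage is then, for example (the overlay path is hypothetical; anything after it is forwarded to kubectl diff):
kdiff ./overlays/staging --namespace=staging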
Related
I have a Spring Boot application and I deploy it to Kubernetes using a single k8s.yml manifest file via GitHub Actions. This k8s.yml manifest contains the Secrets, Service, Ingress, and Deployment configurations. I was able to deploy the application successfully as well. Now I plan to split the Secrets, Service, Ingress, and Deployment configurations into separate files: secrets.yml, service.yml, ingress.yml and deployment.yml.
Previously I used the command below for deployment:
kubectl: 1.5.4
command: |
sed -e 's/$SEC/${{ secrets.SEC }}/g' -e 's/$APP/${{ env.APP_NAME }}/g' -e 's/$ES/${{ env.ES }}/g' deployment.yml | kubectl apply -f -
Now, after the separation, I use the commands below:
kubectl: 1.5.4
command: |
kubectl apply -f secrets.yml
kubectl apply -f service.yml
sed -e 's/$ES/${{ env.ES }}/g' ingress.yml | kubectl apply -f -
sed -e 's/$SEC/${{ secrets.SEC }}/g' -e 's/$APP/${{ env.APP_NAME }}/g' deployment.yml | kubectl apply -f -
But somehow the application is not deploying correctly. I would like to know whether the commands I am using are correct.
You can consider performing the sed first, then applying all the files with kubectl apply -f . instead of going one by one. Append --recursive if you also have files in subfolders to apply.
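A rough sketch of that approach, reusing the sed expressions from the question and writing the rendered manifests into a (hypothetical) rendered/ directory:
mkdir -p rendered
cp secrets.yml service.yml rendered/
sed -e 's/$ES/${{ env.ES }}/g' ingress.yml > rendered/ingress.yml
sed -e 's/$SEC/${{ secrets.SEC }}/g' -e 's/$APP/${{ env.APP_NAME }}/g' deployment.yml > rendered/deployment.yml
kubectl apply -f rendered/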
Like the other answer says, you can ask kubectl to apply all files recursively in a directory.
Now, the sed replacements are soon going to become overwhelming as the resources and configuration grow.
That is why kubectl comes with integrated kustomize support:
https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
In short:
Break up all Kubernetes resources into smaller files/components and put them in a directory.
Place a file called kustomization.yaml in the same directory.
In the kustomization.yaml, configure which files you want to apply, in which order, and optionally do some on-the-fly edits (a minimal sketch follows below).
And apply with -k flag:
kubectl apply -k <kustomization_directory>
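A minimal sketch using the four files from the question (written here as a shell heredoc so it can be pasted directly):
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - secrets.yml
  - service.yml
  - ingress.yml
  - deployment.yml
EOF
kubectl apply -k .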
Is there some command for this? It irks me that OpenShift takes pride in having "-o yaml" and "-o json" options to avoid having to use cut/grep/awk, but for listing the current project this seems to be the only way to do it:
[root@bart-master ~]# oc project
Using project "default" on server "https://api.bart.mycluster.com:6443".
[root@bart-master ~]# oc project | cut -d '"' -f2
default
You can get the current project (namespace) using either the oc or the kubectl CLI as follows:
$ oc config view --minify -o 'jsonpath={..namespace}'
$ kubectl config view --minify -o 'jsonpath={..namespace}'
The oc project CLI command already has this built in. You can pass the -q or --short arguments to oc project in order to get the namespace name alone.
In general, oc has great built-in help that you can see by appending -h to any command (including oc project) to discover helpful arguments like this.
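For example, with the default project from the question, the output is just the project name:
$ oc project -q
default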
Just wondering, let's say I have a number of Kubernetes deployment.yaml, pod.yaml, persistentvolumeclaim.yaml and service.yaml files inside a directory.
The tutorials would tell us to do the following:
kubectl apply -f frontend-service.yaml,redis-master-service.yaml,redis-slave-service.yaml,frontend-deployment.yaml,redis-master-deployment.yaml,redis-slave-deployment.yaml
Is there a way just to do something like:
kubectl apply all
or
kubectl apply -f *
or some variation thereof to spin up all of the Kubernetes resources within one directory?
You can apply everything inside a directory with kubectl apply -f /path/to/dir. To include subdirectories, use the -R parameter, like kubectl apply -R -f /path/to/dir.
# Apply resources from a directory
kubectl apply -f dir/
# Process the directory used in -f recursively
kubectl apply -R -f dir/
For more details check the reference documentation.
You can apply a YAML file using kubectl apply recursively in all folders by using the --recursive flag.
Here is the basic syntax once you are in the root folder:
kubectl apply --recursive -f .
I have a ConfigMap and I want to create a backup ConfigMap using its last applied configuration.
I use the following command to get the last applied configuration:
kubectl get cm app-map -n app-space \
-o go-template \
--template='{{index .metadata "annotations" "kubectl.kubernetes.io/last-applied-configuration"}}' > backup.json
It returns something like this [the content of backup.json]:
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"app-map","creationTimestamp":null},"data":{"app.yml":"xxxxxxxxx","config.yml":"yyyyyyyyy"}}
Now, I want my backup configMap to have a different name. So, I want to change the .metadata.name from app-map to app-map-backup.
Is there a way I can achieve that with kubectl and -o go-template? I want to have the name changed before I write it to the backup.json file.
I know I can do that using jq but I do not have permission to install jq on the server where I am using kubectl.
You could use the kubectl bulk plugin. The command below will replicate your ConfigMap:
# get resource(s) and create with field(name) change
kubectl bulk configmap app-map -n app-space create name app-mapp-backup
kubectl bulk is very powerful; I suggest checking out its samples.
You cannot do this using kubectl alone, but there are other ways.
You can download a statically linked jq binary from the official jq website:
wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x jq-linux64
and then use this binary like the following:
kubectl -o go-template [...] | ./jq-linux64 ...
or you can use sed:
kubectl -o go-template [...] | sed 's/"name":"app-map"/"name":"app-map-backup"/'
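Putting the jq variant together with the full command from the question (a sketch; jq-linux64 is the binary downloaded above):
kubectl get cm app-map -n app-space \
  -o go-template \
  --template='{{index .metadata "annotations" "kubectl.kubernetes.io/last-applied-configuration"}}' \
  | ./jq-linux64 '.metadata.name = "app-map-backup"' > backup.json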
I have an admin.conf file containing info about a cluster, so that the following command works fine:
kubectl --kubeconfig ./admin.conf get nodes
How can I configure kubectl to use the cluster, user and authentication from this file as the default in one command? I only see separate set-cluster, set-credentials, set-context, use-context etc. I want to get the same output when I simply run:
kubectl get nodes
Here is the official documentation for how to configure kubectl:
http://kubernetes.io/docs/user-guide/kubeconfig-file/
You have a few options; specifically for this question, you can just copy your admin.conf to ~/.kube/config.
The best way I've found is to use an environment variable:
export KUBECONFIG=/path/to/admin.conf
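With that in place, the plain command from the question works against that cluster:
kubectl get nodes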
I just alias the kubectl command into separate ones for my dev and production environments via .bashrc
alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'
I prefer this method as it requires me to define the environment for each command, whereas using an environment variable could potentially lead to running a command in the wrong environment.
The previous answers are very solid and informative; I will try to add my 2 cents here.
Configure kubeconfig file knowing its precedence
If you're using kubectl, here's the precedence that takes effect when determining which kubeconfig file is used:
use --kubeconfig flag, if specified
use KUBECONFIG environment variable, if specified
use $HOME/.kube/config file
With this, you can easily override the kubeconfig file you use per kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
For your example
kubectl get pods --kubeconfig=/path/to/admin.conf
#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods
#
# or:
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Although this precedence list is not officially specified in the documentation, it is codified here. If you're developing client tools for Kubernetes, you should consider using the cli-runtime library, which will bring the standard --kubeconfig flag and $KUBECONFIG detection to your program.
Reference article: https://ahmet.im/blog/mastering-kubeconfig/
I name all cluster configs .kubeconfig, and each one lives in its project directory.
Then in .bashrc or .bash_profile I have the following export:
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
This way, when I'm in the project directory, kubectl will load the local .kubeconfig.
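A minimal sketch of how that plays out (the project path is hypothetical, and assumes its .kubeconfig sets its own current-context):
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
cd ~/projects/my-app              # contains a .kubeconfig for that cluster
kubectl config current-context    # the relative .kubeconfig is resolved against the current directory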
Hope that helps
kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.
Because there is no built-in kubectl config merge command at the moment (follow this), you can add this function to your .bashrc (or .zshrc):
function kmerge() {
  if [ $# -eq 0 ]; then
    echo "Please pass the location of the kubeconfig you wish to merge"
    return 1
  fi
  # Merge the given kubeconfig into ~/.kube/config
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
Then you can just run from the terminal:
kmerge /path/to/admin.conf
and the config file will be merged to ~/.kube/config.
You can now switch to the new context with:
kubectl config use-context <new-context-name>
Or if you're using kubectx (recommended) you can run: kubectx <new-context-name>.
(The kmerge function is based on @MichaelSp's answer to this post.)
kubectl keeps the list of paths to search for config files in $KUBECONFIG.
If you want to add one more config path on top of the existing KUBECONFIG without overriding it (and keep ~/.kube/config as the default search path), just run the following each time you want to add a config file to the KUBECONFIG path:
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
You can check it worked by listing the available contexts
kubectl config get-contexts
Then select the one you want to use
kubectl config use-context <context-name>
Manage your config files properly: place the snippet below in your profile file, then source the .profile / .bash_profile.
for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config")
do
  if [ -f "$kconfig" ]; then
    export KUBECONFIG=$KUBECONFIG:$kconfig
  fi
done
Then switch contexts with kubectl.
When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that:
alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'
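Then, for example:
# runs against the cluster described in ~/.kube/mycluster.conf
kube-mycluster get nodes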
This is possible:
export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3
and:
kubectl config use-context cluster0