How to configure kubectl with cluster information from a .conf file? - kubernetes

I have an admin.conf file containing info about a cluster, so that the following command works fine:
kubectl --kubeconfig ./admin.conf get nodes
How can I configure kubectl to use the cluster, user, and authentication from this file as the default in one command? I only see the separate set-cluster, set-credentials, set-context, use-context, etc. commands. I want to get the same output when I simply run:
kubectl get nodes

Here is the official documentation for how to configure kubectl:
http://kubernetes.io/docs/user-guide/kubeconfig-file/
You have a few options. Specific to this question, you can just copy your admin.conf to ~/.kube/config.
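For example, a minimal sketch (the backup filename is just a suggestion):
# back up any existing config, then make admin.conf the default
cp ~/.kube/config ~/.kube/config.backup 2>/dev/null
cp ./admin.conf ~/.kube/config
kubectl get nodes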

The best way I've found is to use an environment variable:
export KUBECONFIG=/path/to/admin.conf

I just alias the kubectl command into separate ones for my dev and production environments via .bashrc
alias k8='kubectl'
alias k8prd='kubectl --kubeconfig ~/.kube/config_prd.conf'
I prefer this method as it requires me to define the environment for each command, whereas using an environment variable could lead to running a command against the wrong environment.

The previous answers are solid and informative; I will try to add my 2 cents here.
Configure kubeconfig file knowing its precedence
If you're using kubectl, here's the order of precedence that determines which kubeconfig file is used:
1. the --kubeconfig flag, if specified
2. the KUBECONFIG environment variable, if specified
3. the $HOME/.kube/config file
With this, you can easily override the kubeconfig file you use per kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --flatten flag keeps the credentials unredacted in the merged output; the --minify flag (used when extracting a single context) limits the output to just that context.
For your example
kubectl get pods --kubeconfig=/path/to/admin.conf
#
# or:
#
KUBECONFIG=/path/to/admin.conf kubectl get pods
#
# or:
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:/path/to/admin.conf kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Although this precedence list is not officially specified in the documentation, it is codified here. If you're developing client tools for Kubernetes, consider using the cli-runtime library, which brings the standard --kubeconfig flag and $KUBECONFIG detection to your program.
ref article: https://ahmet.im/blog/mastering-kubeconfig/

I name each cluster's config .kubeconfig, and it lives in the project directory.
Then in .bashrc or .bash_profile I have the following export:
export KUBECONFIG=.kubeconfig:$HOME/.kube/config
This way, when I'm in the project directory, kubectl will load the local .kubeconfig.
Hope that helps

kubectl uses ~/.kube/config as the default configuration file. So you could just copy your admin.conf over it.

Because there is no built-in kubectl config merge command at the moment (follow this), you can add this function to your .bashrc (or .zshrc):
function kmerge() {
  if [ $# -eq 0 ]; then
    echo "Please pass the location of the kubeconfig you wish to merge"
    return 1
  fi
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub && mv ~/.kube/mergedkub ~/.kube/config
}
Then you can just run from the terminal:
kmerge /path/to/admin.conf
and the config file will be merged to ~/.kube/config.
You can now switch to the new context with:
kubectl config use-context <new-context-name>
Or if you're using kubectx (recommended) you can run: kubectx <new-context-name>.
(The kmerge function is based on @MichaelSp's answer at this post.)

Kubernetes keeps the search path for config files in $KUBECONFIG.
If you want to add one more config path on top of the existing KUBECONFIG without overriding it (keeping ~/.kube/config as the default search path), just run the following each time you want to add a conf file to the KUBECONFIG path:
export KUBECONFIG=${KUBECONFIG:-~/.kube/config}:/path/to/admin.conf
You can check it worked by listing the available contexts
kubectl config get-contexts
Then select the one you want to use
kubectl config use-context <context-name>
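If you do this often, you can wrap it in a small shell function; kadd is a hypothetical name, not a standard command:
# append a kubeconfig to the search path of the current shell (sketch)
kadd() {
  [ -f "$1" ] || { echo "usage: kadd /path/to/kubeconfig" >&2; return 1; }
  export KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}:$1"
}
kadd /path/to/admin.conf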

Manage your config files properly: place the snippet below in your profile file, then source .profile / .bash_profile:
for kconfig in $HOME/.kube/config $(find $HOME/.kube/ -iname "*.config")
do
  if [ -f "$kconfig" ]; then
    export KUBECONFIG=$KUBECONFIG:$kconfig
  fi
done
Then switch contexts from kubectl.

When you type kubectl, I guess you prefer to know which cluster you are pointing at. Maybe it's worth creating an alias for that?
alias kube-mycluster='kubectl --kubeconfig ~/.kube/mycluster.conf'
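If you'd rather see the cluster at all times than encode it in the command, a rough sketch (not a built-in feature) is to show the current context in your bash prompt:
# show the current kube context in the bash prompt
kube_ctx() { kubectl config current-context 2>/dev/null; }
PS1='[$(kube_ctx)] \w \$ '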

This is possible:
export KUBECONFIG=~/.kube/config:~/.kube/cluster0:~/.kube/cluster1:~/.kube/cluster3
and:
kubectl config use-context cluster0


How to add certificates to Kube config file

I have a local Kubernetes environment, and I basically copied the .kube/config file to my machine and added the "context", "users", and "cluster" information to my current .kube/config file. That works; I can connect to my cluster.
But I want to add this information to my local config file with commands.
So, according to this page, I should be able to pass "certificate-authority-data" as a parameter like below: ---> https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
PS C:\Users\user\.kube> kubectl config --kubeconfig=config set-cluster local-kubernetes --server=https://10.10.10.10:6443 --certificate-authority-data=LS0tLSAASDASDADAXXXSDETRDFDJHFJWEtGCmx0YVR2SE45Rm9IVjAvQkdwRUM2bnFNTjg0akd2a3R4VUpabQotLS0tLUVORCBDADADADDAADS0tXXXCg==
Error: unknown flag: --certificate-authority-data
See 'kubectl config set-cluster --help' for usage.
PS C:\Users\user\.kube>
But it throws the error above. I'm using the latest Kubernetes version.
How can I add this information to my local file with the kubectl config command?
Thanks!
A possible solution is to use the --flatten flag with the config command:
➜ ~ kubectl config view --flatten=true
flatten the resulting kubeconfig file into self contained output
(useful for creating portable kubeconfig files)
That can also be exported to a file (a portable config):
kubectl config view --flatten > out.txt
You can read more about kube config in Mastering the KUBECONFIG file document.
Once you run this command on the server where the appropriate certificates are present, you will receive the base64-encoded keys: certificate-authority-data, client-certificate-data, and client-key-data.
Then you can use the command provided in the official config document:
➜ ~ kubectl config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -)
Then replace the $(echo "cert_data_here" | base64 -i -) part with the data from the flattened config file (that data is already base64-encoded, so it can be passed in directly).
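As a side note, you can pull that field straight out of an existing config with a JSONPath query; a small sketch, assuming a current context is set and the first cluster entry is the one you want:
# print the base64-encoded CA data of the current context's cluster
kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'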
Worth mentioning: this info is also available via the --help flag for kubectl config:
kubectl config set --help
Sets an individual value in a kubeconfig file
PROPERTY_VALUE is the new value you wish to set. Binary fields such as 'certificate-authority-data'
expect a base64 encoded string unless the --set-raw-bytes flag is used.
Specifying an attribute name that already exists will merge new fields on top of existing values.
Examples:
# Set certificate-authority-data field on the my-cluster cluster.
kubectl config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64 -i -)

Simple command or environment variable to print the current namespace in openshift/kubernetes

Is there some command for this? It irks me that OpenShift takes pride in having "-o yaml" and "-o json" output options to avoid having to use cut/grep/awk, but for listing the current project this seems to be the only way to do it:
[root@bart-master ~]# oc project
Using project "default" on server "https://api.bart.mycluster.com:6443".
[root@bart-master ~]# oc project | cut -d '"' -f2
default
You can get the current project (namespace) with either the oc or kubectl CLI as follows:
$ oc config view --minify -o 'jsonpath={..namespace}'
$ kubectl config view --minify -o 'jsonpath={..namespace}'
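Note that the jsonpath query prints an empty string if the current context has no namespace set; you can set one first (my-namespace is a placeholder):
kubectl config set-context --current --namespace=my-namespace
kubectl config view --minify -o 'jsonpath={..namespace}'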
The oc project CLI command already has this built in: pass the -q or --short argument to oc project to get the namespace name alone.
In general, oc has great help support: append -h to the end of any command (including oc project) to discover helpful arguments like these.

diff between what's active on the cluster versus kustomize

kustomize's docs provide a nice one-liner that compares two different overlays:
diff \
<(kustomize build $OVERLAYS/staging) \
<(kustomize build $OVERLAYS/production)
is there a way to do the same, but against what is running within a specific kubernetes namespace versus a defined overlay on disk?
more specifically: knowing what a kubectl apply -k . would do without actually doing it? using --dry-run just spits out a list of the objects rather than a real diff.
In Kustomize version 4.x.x:
kustomize build ./ | kubectl diff -f -
If you're looking for a way to do this visually, I highly recommend trying the Compare & Sync feature from Monokle.
For example, you can compare the output of a cluster-install kustomization to the objects in a minikube cluster, and easily determine which resources are missing in your cluster and which ones are different.
On top of that, you're not limited to comparing kustomizations to clusters: you can also compare two clusters, two kustomizations, Helm charts, etc.
I'm not sure if this is what you are looking for, but in Kubernetes you have kubectl diff.
It's nicely explained on APIServer dry-run and kubectl diff.
You can use the -k, --kustomize option, which does:
Process the kustomization directory. This flag can't be used together with -f or -R.
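For example (the overlay path is illustrative):
# diff the rendered overlay against the live cluster state
kubectl diff -k ./overlays/staging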
Or maybe something similar to a one-liner that sets a context per namespace:
$ kubectl config set-context staging --user=cluster-admin --namespace=staging
$ kubectl config set-context prod --user=cluster-admin --namespace=prod
Once you have the contexts set up, you could use them in the following way:
kubectl config use-context staging && kubectl diff -f patched_k8s.yaml
kubectl config use-context prod && kubectl diff -f patched_k8s.yaml
This is just an example, which I have not tested.
Try this kustomize command, currently in alpha:
KUSTOMIZE_ENABLE_ALPHA_COMMANDS=true kustomize resources diff -k your/kustomize/overlay
via https://kubernetes.slack.com/archives/C9A5ALABG/p1582738327027200?thread_ts=1582695987.023600&cid=C9A5ALABG
I have a small function in my shell config to do this:
kdiff() {
  overlay="${1}"
  kustomize build "${overlay}" \
    | kubectl diff -f - "${@:2}" \
    | sed '/kubectl.kubernetes.io\/last-applied-configuration/,+1 d' \
    | sed -r "s/(^\+[^\+].*|^\+$)/$(printf '\e[0;32m')\1$(printf '\e[0m')/g" \
    | sed -r "s/(^\-[^\-].*|^\-$)/$(printf '\e[0;31m')\1$(printf '\e[0m')/g"
}
It drops the last-applied-configuration annotation and adds some color.
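Usage looks like this (the overlay path is hypothetical; any extra arguments are passed through to kubectl diff):
kdiff ./overlays/staging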

How to copy files from pod in one namespace to pod in another namespace

Does kubectl provide a way to copy files from a pod in one namespace to a pod in another? I see we can copy files from a pod to the local machine and then copy them onto another pod in a different namespace. But can we copy directly from one namespace to another?
I tried:
kubectl cp <namespace1>/<pod1>:/tmp/foo.txt <namespace2>/<pod1>:/tmp/foo.txt
Looking at the kubectl cp command help options, I don't think there is any way to do that.
Not really. kubectl cp can only copy remote-to-local or local-to-remote, so unfortunately it's a two-step process:
$ kubectl cp <namespace1>/<pod1>:/tmp/foo.txt foo.txt
$ kubectl cp foo.txt <namespace2>/<pod1>:/tmp/foo.txt
It would be nice to have a one-step process, like rsync, but it is what it is as of this writing. I opened this issue to track it.
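As a workaround, you can stream the file between the two pods yourself; a rough sketch, assuming tar is available in both containers:
# stream /tmp/foo.txt from pod1 in namespace1 to pod1 in namespace2
kubectl exec -n namespace1 pod1 -- tar cf - -C /tmp foo.txt \
  | kubectl exec -i -n namespace2 pod1 -- tar xf - -C /tmp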

How to merge kubectl config file with ~/.kube/config?

Is there a simple kubectl command to take a kubeconfig file (that contains a cluster+context+user) and merge it into the ~/.kube/config file as an additional context?
Do this:
export KUBECONFIG=~/.kube/config:~/someotherconfig
kubectl config view --flatten
You can then pipe that out to a new file if needed.
If you find yourself doing this a lot... There is now also the krew plugin package manager for kubectl.
The krew plugin "konfig" can help you manage your ~/.kube/config file.
Using the konfig plugin the syntax will be:
kubectl konfig import -s new.yaml
To install krew: https://github.com/kubernetes-sigs/krew
To install konfig: kubectl krew install konfig
Using multiple kubeconfigs at once
Sometimes you have a bunch of small kubeconfig files (e.g. one per cluster) but you want to use them all at once, with tools like kubectl or kubectx that work with multiple contexts at once.
To do that, you need a “merged” kubeconfig file. In the section "Merging kubeconfig files" below, we explain how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.
By specifying multiple files in the KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl.
#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can’t just append them to get one big kubeconfig file, but kubectl can help you merge these files:
#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Extracting a context from a kubeconfig file
Let's say you followed the section above on merging kubeconfig files and have a merged kubeconfig in $HOME/.kube/config. Now you want to extract a cluster's information to a portable kubeconfig file that has only the parts you need to connect to that cluster.
Run:
KUBECONFIG=$HOME/.kube/config kubectl config view \
--minify --flatten --context=context-1 > $HOME/.kube/config-context-1
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=$HOME/.kube/config-context-1
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=$HOME/.kube/config-context-1 kubectl get pods
#
# or
# keep using kubeconfig file at $HOME/.kube/config (which has the merged context)
#
kubectl get pods --context=context-1
In this command, we extract data about context-1 from $HOME/.kube/config to config-context-1 file. The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
ref article: https://ahmet.im/blog/mastering-kubeconfig/
If you want to merge two config files into a single one,
I found this way (not sure if it's the simplest):
# Add the two config files to the env var
export KUBECONFIG=~/.kube/config:~/Desktop/configFile2.yaml
# Review that you have two configurations in one view
kubectl config view
# View the raw config and output to a new file
kubectl config view --raw > /tmp/config
Then copy the new config file wherever you want; also, do not forget to unset the KUBECONFIG env variable.
It's possible; follow these steps:
Create a backup of your config file:
cp ~/.kube/config ~/.kube/config-bkp
Create a file with your new config:
vi ~/.kube/new-config
Merge them into config (write to a temporary file first, since redirecting straight onto ~/.kube/config would truncate it before kubectl reads it):
KUBECONFIG=~/.kube/config:~/.kube/new-config kubectl config view --flatten > ~/.kube/merged && mv ~/.kube/merged ~/.kube/config
To see the available contexts use:
kubectl config get-contexts
To change the context use:
kubectl config use-context YOUR-CONTEXT-NAME
You can follow these instructions if you want to have some structure in your ~/.kube directory:
Add the following snippet to your ~/.bashrc.
Add config files under ~/.kube/config.d separately.
Call update_kubeconfigs, or open a new terminal.
update_kubeconfigs just looks at the ~/.kube/config.d directory, and if any files there are newer than the current config file under ~/.kube/config, it updates it.
function update_kubeconfigs() {
  [ ! -d "$HOME/.kube/config.d" ] && mkdir $HOME/.kube/config.d -p -v
  # Will run only if there are new files in the config directory
  local new_files=$(find $HOME/.kube/config.d/ -newer $HOME/.kube/config -type f | wc -l)
  if [[ $new_files -ne "0" ]]; then
    local current_context=$(kubectl config current-context)  # Save the last context
    local kubeconfigfile="$HOME/.kube/config"                # Current config file
    cp -a $kubeconfigfile "${kubeconfigfile}_$(date +"%Y%m%d%H%M%S")"  # Backup
    local kubeconfig_files="$kubeconfigfile:$(ls $HOME/.kube/config.d/* | tr '\n' ':')"
    KUBECONFIG=$kubeconfig_files kubectl config view --merge --flatten > "$HOME/.kube/tmp"
    mv "$HOME/.kube/tmp" $kubeconfigfile && chmod 600 $kubeconfigfile
    export KUBECONFIG=$kubeconfigfile
    kubectl config use-context $current_context --namespace=default
  fi
}
# This runs each time you source .bashrc; remove it if you prefer to call it manually
update_kubeconfigs
Removing files from ~/.kube/config.d will not invoke the script again. Also, as @rafaelrezend pointed out, check for name conflicts in the config files, as they might cause issues.
The GitHub gist includes a fix for updating credentials.
Since I have many Kubernetes conf files in the ~/.kube directory, I simply chain them to the KUBECONFIG env variable in the ~/.zshrc file:
export KUBECONFIG=$HOME/.kube/config
for conf in ~/.kube/*.conf; do
export KUBECONFIG=$KUBECONFIG:$conf
done
Going forward, I do not recommend merging kubeconfig files using kubectl:
it is a manual effort, as seen above (setting environment variables, etc.)
it has the disadvantage that when you use a context via kubectl config use-context, kubectl WRITES the current context to the kubeconfig file. This influences other terminal sessions that use a context from the same kubeconfig file (they all suddenly point to the same context).
Instead, I would recommend using a tool that circumvents such issues, e.g. by recursively searching for and displaying all available contexts from kubeconfig files and by working on a temporary copy.
Check out kubeswitch (the tool I wrote to deal with > 1000 kubeconfig files) and this section explaining how it works.
If you are looking for a tool that also does namespace switching and other related things, take a look at kubie.
As mentioned in the comment by @talarczykco on the top answer, piping back to the same ~/.kube/config will only write the second file, and you will lose the original content!
Here is a safer way: first capture the full output, then write it.
Note: you must surround the variable $konfig with double quotes, otherwise you lose all newlines!
konfig=$(KUBECONFIG=~/.kube/config:new-config.yaml kubectl config view --flatten)
echo "$konfig" > ~/.kube/config
If you prefer a CLI tool, I can highly recommend KubeCM, which can also merge, switch, add...
kubecm add -f ./your_new_config
You will be asked either to merge into ~/.kube/config or to create a .yml file in your current folder.
https://github.com/sunny0826/kubecm
To dynamically merge multiple config files in your .bashrc:
export KUBECONFIG=/Users/<user>/.kube/config:/Users/<user>/.kube/other.config
source <(kubectl completion bash)
After a fresh source, verify:
kubectl config view
If you use bash, you can use this to simply add configs:
function kmerge() {
  DATE=$(date +"%Y%m%d%H%M")
  KUBECONFIG=~/.kube/config:$1 kubectl config view --flatten > ~/.kube/mergedkub \
    && mv ~/.kube/config ~/.kube/config-$DATE \
    && mv ~/.kube/mergedkub ~/.kube/config
}
Then just use "kmerge $newConfigfile" to add this.
Be aware that the cluster names, etc., should be different from existing config entries!
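To spot such clashes before merging, you can compare the context names on both sides; a small sketch using the same $newConfigfile:
# any overlap between these two lists will cause a conflict
kubectl config get-contexts -o name
KUBECONFIG=$newConfigfile kubectl config get-contexts -o name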
I keep the YAML files for each cluster separate and then combine them with this Python script:
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('files', metavar='YAMLFILES', type=argparse.FileType('r'), nargs='*')
args = parser.parse_args()

# Start from an empty kubeconfig skeleton
y = {'apiVersion': 'v1', 'kind': 'Config', 'clusters': [], 'contexts': [],
     'current-context': None, 'preferences': {}, 'users': []}

# Take the first cluster/context/user entry from each input file
for a in args.files:
    f = yaml.load(a, Loader=yaml.Loader)
    y['clusters'].append(f['clusters'][0])
    y['contexts'].append(f['contexts'][0])
    y['users'].append(f['users'][0])
    y['current-context'] = f['contexts'][0]['name']

print(yaml.dump(y, Dumper=yaml.Dumper))
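Usage might look like this (the script name and file names are placeholders; requires PyYAML):
python merge_kubeconfigs.py cluster-a.yaml cluster-b.yaml > ~/.kube/config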
Zsh users can use a =(...) process substitution to generate a temporary merged configuration file, and copy it to ~/.kube/config all in one line:
cp =(KUBECONFIG=~/.kube/config:~/.kube/config.other kubectl config view --flatten) ~/.kube/config
Going a step further, we can hold the configuration file in our system clipboard and use a nested process substitution that reads it. Just make sure that the clipboard content is the actual configuration file before pressing Enter:
cp =(KUBECONFIG=~/.kube/config:<(pbpaste) kubectl config view --flatten) ~/.kube/config
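Note that pbpaste is macOS-specific; on Linux, an equivalent sketch (assuming xclip is installed) would be:
cp =(KUBECONFIG=~/.kube/config:<(xclip -selection clipboard -o) kubectl config view --flatten) ~/.kube/config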