I am encountering a weird behavior when I try to configure several KUBECONFIG environment entries concatenated with : as in this example:
export KUBECONFIG=/Users/user/Work/company/project/setup/secrets/dev-qz/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-wer/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/preprod/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev/users/admin.conf:/Users/user/Work/company/project/setup/secrets/dev-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/cluster-setup/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-fxc/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-jg/users/admin.conf:/Users/user/Work/company/project/setup/secrets/test-qz/users/admin.conf
This is what is happening: if I choose a cluster with kubectx (not a specific cluster from the list, just any of them) and then run kubectl get po, I receive: error: You must be logged in to the server (Unauthorized).
But if I try to reach the same cluster by passing its config directly to the kubectl command with --kubeconfig=<path to the config>, it works.
I am really struggling with this and would like to know if anyone else is facing this kind of issue as well and how they solved it.
Eventually I found the problem. The --flatten command that mario suggested to me helped me debug the situation better.
Basically, the in-memory or in-file merge does what it is supposed to do: create a kubeconfig with all the unique parameters of each kubeconfig file. This works perfectly unless one or more kubeconfigs use the same name to identify the same kind of component. In that case only one of the conflicting definitions wins, depending on the order of the files. So if you have the following example:
grep -Rn 'name: kubernetes-admin$' infra/secrets/*/users/admin.conf
infra/secrets/cluster1/users/admin.conf:16:- name: kubernetes-admin
infra/secrets/cluster2/users/admin.conf:17:- name: kubernetes-admin
infra/secrets/cluster3/users/admin.conf:16:- name: kubernetes-admin
cluster1 and cluster2 won't work, while cluster3 will work perfectly, simply because of the order.
The solution is to avoid non-unique fields: rename the entry that identifies the user so that it is unique per cluster (for the example above). Once this change is done, everything will work perfectly.
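As a minimal sketch of that fix (the paths and the new user name below are examples, not taken from the actual setup), you can rename the user in one of the files, making sure both the users entry and the context that references it are updated:
# hypothetical rename: give cluster1 its own user name
sed -i.bak 's/kubernetes-admin/kubernetes-admin-cluster1/g' infra/secrets/cluster1/users/admin.conf
# verify that both the user entry and the context reference were updated
grep -n 'kubernetes-admin-cluster1' infra/secrets/cluster1/users/admin.conf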
I agree with Bernard. This doesn't look like anything specific to kubectx, as it is just a bash script which under the hood uses the kubectl binary. You can see its code here. I guess that it will also fail in kubectl if you don't provide the specific kubeconfig file. You wrote:
But, if I try to reach the same cluster passing it directly to the
kubectl command with --kubeconfig=<path to the config> it works.
There is a bit of inconsistency in the way you're testing it, as you don't provide the specific kubeconfig file to both commands. When you use kubectx it relies on your multiple kubeconfig files merged in memory, and you compare that with a working kubectl example in which you directly specify the kubeconfig file that should be used. To make this comparison consistent you should also use kubectx with this particular kubeconfig file. And what happens if you run the kubectl command without specifying --kubeconfig=<path to the config>? I guess you get a similar error to the one you get when running kubectx. Please correct me if I'm wrong.
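To make the comparison apples-to-apples, something like the following could help (the context name and the file path here are placeholders, not taken from your setup):
kubectl get pods --context=<context-from-merged-config>           # uses the merged KUBECONFIG
kubectl get pods --kubeconfig=/path/to/that/cluster/admin.conf    # uses a single file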
There is a really good article written by Ahmet Alp Balkan, the kubectx author, which nicely explains how you can work with multiple kubeconfig files. As you can read in the article:
Tip 2: Using multiple kubeconfigs at once
Sometimes you have a bunch of small kubeconfig files (e.g. one per
cluster) but you want to use them all at once, with tools like
kubectl or kubectx that
work with multiple contexts at once.
To do that, you need a “merged” kubeconfig file. Tip #3 explains how
you can merge the kubeconfigs into a single file, but you can also
merge them in-memory.
By specifying multiple files in KUBECONFIG environment variable,
you can temporarily stitch kubeconfig files together and use them all
in kubectl.
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Tip 3: Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can't just
append them to get one big kubeconfig file, but kubectl can help you
merge these files:
KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt
Possible solutions:
Try to merge your multiple kubeconfig files into a single one, as in the example above, to see whether the problem occurs only with the in-memory merge:
KUBECONFIG=file1:file2:file3 kubectl config view --merge --flatten > out.txt
Review all your kubeconfigs and test them individually, just to make sure they work properly when specified separately in the KUBECONFIG env variable. There might be an error in one of them which causes the issue; a quick way to check each file in a loop is sketched below.
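A minimal sketch of such a per-file check, assuming the directory layout from your export line (the glob is an example):
for cfg in /Users/user/Work/company/project/setup/secrets/*/users/admin.conf; do
  echo "== $cfg"
  KUBECONFIG="$cfg" kubectl get pods >/dev/null && echo OK || echo FAILED
done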
I've set up a K8s cluster and got the config file, which I have placed in the username/.kube directory.
I can't seem to work out how to link my PowerShell kubectl to this config file by default. For instance, if I run the following command, I don't get the cluster I've set up.
kubectl config get-contexts
If, however, I run the following command, I do get a list of my current nodes:
kubectl --kubeconfig="cluster-kubeconfig.yaml" get nodes
Copy the contents of the cluster-kubeconfig.yaml file to $HOME/.kube/config.
This will be the default Kubernetes config file.
You can also override this and point to any other custom kubeconfig file using
$Env:KUBECONFIG=("/path/to/cluster-kubeconfig.yaml")
as mentioned here.
For more info check this out.
Hope this helps.
I am running into a few issues when trying to get my local kubectl to point to clusters created with kubeadm:
The kubectl config files generated from kubeadm use the same user name, cluster name, and context name, so I cannot simply download them and add them to $KUBECONFIG.
There is no kubectl command for renaming a cluster or user.
The config file generated from kubeadm has the properties client-key-data and client-certificate-data. These are not fields recognized by kubectl when creating a new user or cluster.
Clusters created through kubeadm don't seem to allow access through simple username and password. It seems to require the certificate infos.
It seems like I am limited to modifying the contents of the ~/.kube/config file through string manipulation (gross), which I would like to avoid!! Does anyone have a solution for this?
One option you have is to use different config files for your clusters.
Create one file for each cluster and put them in a directory (I use ~/.kube) giving them meaningful names that help you distinguish them (you can use a cluster identifier for instance).
Then, you can set the KUBECONFIG environment variable to choose a different configuration file when you run kubectl, such as:
KUBECONFIG=/path/to/the/config/file kubectl get po
You can also create an alias in your favourite shell so you don't have to type the full command every time:
alias mykube="KUBECONFIG=/path/to/the/config/file kubectl"
mykube get po
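For example, with one alias per cluster (the paths and alias names below are made up for illustration):
alias kdev="KUBECONFIG=$HOME/.kube/dev-cluster.yaml kubectl"
alias kprod="KUBECONFIG=$HOME/.kube/prod-cluster.yaml kubectl"
kdev get pods
kprod get nodes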
At the moment, as far as I am aware, there is no tool that automatically merges different kubeconfig files into one, which is effectively what you need. Personally, I manipulate .kube/config manually with a text editor. It's not that much work in the end.
When you use minikube, it automatically creates the local configuration, so it's ready to use. And it appears there is support for multiple clusters in the kubectl command, based on the reference for kubectl config.
In the docs for setting up clusters, there's a reference to copying the relevant files to your local machine to access the cluster. I also found an SO Q&A about leveraging Azure remotely that talked about editing the .kube/config file.
It looks like the environment variable $KUBECONFIG can reference multiple locations of these configuration files, with the built-in default being ~/.kube/config (which is what minikube creates).
If I want to be able to use kubectl to invoke commands on multiple clusters, should I download the relevant config file into a new location (for example into ~/gcloud/config) and set the KUBECONFIG environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when invoking kubectl to specify a configuration for the cluster?
I wasn't sure if there was some way of merging the configuration files that would be better, perhaps leveraging the kubectl config set-context or kubectl config set-cluster commands instead (examples of those commands are sketched below). The documentation at Kubernetes on "Configure Access to Multiple Clusters" seems to imply a different means of using --kubeconfig along with these kubectl config commands.
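For reference, those commands look roughly like this; all names, the server address and the file paths below are placeholders:
kubectl config set-cluster my-cluster --server=https://1.2.3.4:6443 --certificate-authority=ca.crt
kubectl config set-credentials my-user --client-certificate=user.crt --client-key=user.key
kubectl config set-context my-context --cluster=my-cluster --user=my-user --namespace=default
kubectl config use-context my-context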
In short, what's the best way to interact with multiple separate kubernetes clusters and what are the tradeoffs?
If I want to be able to use kubectl to invoke commands on multiple
clusters, should I download the relevant config file into a new
location (for example into ~/gcloud/config) and set the KUBECONFIG
environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when
invoking kubectl to specify a configuration for the cluster?
That would probably depend on which approach you find simpler and more convenient, and on whether you need to keep security and access-management concerns in mind.
In our experience, merging various kubeconfig files is very useful for multi-cluster operations such as maintenance tasks and incident management over a group of clusters (contexts & namespaces). It simplifies troubleshooting because you can compare configs, manifests, resources and the state of K8s services, pods, volumes, namespaces, rs, etc.
However, when automation and deployment (with tools like Jenkins, Spinnaker or Helm) are involved, having separate kubeconfig files is most likely a better idea. A hybrid approach is to merge kubeconfig files based on a division by service tier (one file per development landscape: dev, qa, stg, prod) or by team (roles and responsibilities in an enterprise: teamA, teamB, …, teamN); both are good alternatives.
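A rough sketch of the per-landscape variant (the file names and output paths are illustrative):
# one merged kubeconfig per landscape, e.g. all dev clusters in one file
KUBECONFIG=dev-cluster1.conf:dev-cluster2.conf kubectl config view --merge --flatten > ~/.kube/dev-config
KUBECONFIG=prod-cluster1.conf:prod-cluster2.conf kubectl config view --merge --flatten > ~/.kube/prod-config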
For multi-cluster merged kubeconfig scenarios, consider kubectx + kubens, which are very powerful tools for kubectl that let you see the current context (cluster) and namespace, and switch between them.
In short, what's the best way to interact with multiple separate
kubernetes clusters and what are the trade offs?
The trade-offs should be analyzed considering the most important factors for your project. Having a single merged kubeconfig file seems simpler, even more so if you merge it into ~/.kube/config so that kubectl uses it by default and you just switch between clusters/namespaces with the --context kubectl flag. On the other hand, if limiting the scope of each kubeconfig is a must, keeping them segregated and using --kubeconfig=file1 sounds like the best way to go.
There is probably NOT a best way for every case and scenario, but knowing how to configure kubeconfig files and their precedence will help.
In this article -> https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/ you'll find a complementary and valuable opinion:
While having all of the contexts you may need in one file is nice, it
is difficult to maintain, and seldom the default case. Multiple tools
which provide you with access credentials will provide a fresh
kubeconfig to use. While you can merge the configs together into
~/.kube/config, it is manual, and makes removing contexts more
difficult (having to explicitly remove the context, cluster, and
user). There is an open issue in Kubernetes tracking this. However by
keeping each provided config file separate, and just loading all of
them, removal is much easier (just remove the file). To me, this
seems like a much more manageable approach.
I prefer to keep all individual config files under ~/.kube/configs, and by taking advantage of the multiple-path support of the $KUBECONFIG environment variable, we can make this happen.
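A minimal sketch of this setup (the directory is my own convention, adjust as needed):
# point KUBECONFIG at every file kept under ~/.kube/configs
# (the trailing ':' left by printf is harmless, empty path entries are ignored)
export KUBECONFIG=$(printf '%s:' ~/.kube/configs/*)
kubectl config get-contexts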
If you're using kubectl, here's the preference order that takes effect when determining which kubeconfig file is used:
use --kubeconfig flag, if specified
use KUBECONFIG environment variable, if specified
use $HOME/.kube/config file
With this, you can easily override the kubeconfig file you use for each kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only the info about the current context, and the --flatten flag allows us to keep the credentials unredacted.
Bonus (extra points!)
Using multiple kubeconfigs at once
You can save AKS (Azure Kubernetes Service), AWS EKS (Amazon Elastic Kubernetes Service) or GKE (Google Kubernetes Engine) cluster contexts to separate files and set the KUBECONFIG env var to reference each file location.
For instance, when you create a GKE cluster (or retrieve its credentials) through the gcloud command, it normally modifies your default ~/.kube/config file. However, you can set $KUBECONFIG for gcloud to save cluster credentials to a file:
KUBECONFIG=c1.yaml gcloud container clusters get-credentials "cluster-1"
Then as we mentioned before using multiple kubeconfigs at once can be very useful to work with multiple contexts at the same time.
To do that, you need a “merged” kubeconfig file. In the section "Merging kubeconfig files" below, we explain how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.
By specifying multiple files in KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl .
#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can’t just append them to get one big kubeconfig file, but kubectl can help you merge these files:
#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
ref article 1: https://ahmet.im/blog/mastering-kubeconfig/
ref article 2: https://github.com/kubernetes/kubernetes/issues/46381
I have a series of shell functions that boil down to kubectl --context=$CTX --namespace=$NS, allowing me to contextualize each shell [1]; a sketch is included after the footnote below. But if you are cool with that approach, then rather than rolling your own, https://github.com/Comcast/k8sh will likely interest you. I just wish it were shell functions instead of a sub-shell.
But otherwise, yes, I keep all the config values in the one ~/.kube/config
footnote 1: if you weren't already aware, one can also change the title of terminal windows via title() { printf '\033]0;%s\007' "$*"; } which I do in order to remind me which cluster/namespace/etc is in effect for that tab/window
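For completeness, a minimal sketch of the kind of wrapper function described above (the function name and the context/namespace values are illustrative):
# per-shell cluster/namespace wrapper
k() {
  kubectl --context="$CTX" --namespace="$NS" "$@"
}
# usage: set the context/namespace once per shell, then call the wrapper
CTX=cluster-1
NS=kube-system
k get pods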
kubectl get pods --kubeconfig file1.yaml
kubectl get pods --kubeconfig file2.yaml
You can use the --kubeconfig flag to tell kubectl whether it should run against file1 or file2. Note that each file is a Kubernetes config file.
There are multiple admins who access k8s clusters.
What is the recommended way to share the config file?
I know,
kubectl config view --minify
but the certificate part is REDACTED by this command.
You can add the --flatten flag, which the documentation describes as "flatten the resulting kubeconfig file into self contained output (useful for creating portable kubeconfig files)".
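Combined with --minify, a minimal sketch for producing a shareable file (the output file name is arbitrary):
# export a portable, self-contained kubeconfig for the current context,
# with certificate data embedded instead of REDACTED
kubectl config view --minify --flatten > shared-kubeconfig.yaml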