How to switch kubectl clusters between gcloud and minikube - kubernetes

I have Kubernetes working well in two different environments: my local environment (a MacBook running minikube) and Google Kubernetes Engine (GKE, Kubernetes on Google Cloud). I use the MacBook/local environment to develop and test my YAML files and then, upon completion, try them on GKE.
Currently I need to work with each environment individually: I edit the YAML files in my local environment and, when ready, (git) clone them to the GKE environment and then use/deploy them. This is a somewhat cumbersome process.
Ideally, I would like to use kubectl from my MacBook to easily switch between the local minikube and GKE Kubernetes environments and to easily determine where the YAML files are used. Is there a simple way to switch contexts to do this?

You can switch from local (minikube) to gcloud and back with:
kubectl config use-context CONTEXT_NAME
To list all contexts:
kubectl config get-contexts
You can create different environments for local and gcloud and keep them in separate YAML files.
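For example, assuming a minikube context and a GKE context (the context names below are placeholders), a full edit-test-deploy round trip might look like:
kubectl config use-context minikube
kubectl apply -f deploy.yaml
kubectl config use-context gke_my-project_us-central1-a_my-cluster
kubectl apply -f deploy.yaml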

List contexts
kubectl config get-contexts
Switch contexts
kubectl config set current-context MY-CONTEXT
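For illustration, kubectl config get-contexts marks the current context with an asterisk (the context names below are placeholders):
$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube   default
          gke-dev    gke-dev    gke-user   default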

A faster shortcut to the standard kubectl commands is to use kubectx:
List contexts: kubectx
Equivalent to kubectl config get-contexts
Switch context (to foo): kubectx foo
Equivalent to kubectl config use-context foo
To install on macOS: brew install kubectx
The kubectx package also includes a similar tool for switching namespaces called kubens.
These two are super convenient if you work in multiple contexts and namespaces regularly.
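A quick illustrative session (the context and namespace names here are placeholders):
$ kubectx
gke_my-project_us-central1-a_my-cluster
minikube
$ kubectx minikube
Switched to context "minikube".
$ kubens kube-system
Context "minikube" modified.
Active namespace is "kube-system".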
More info: https://ahmet.im/blog/kubectx/

If you're looking for a GUI-based solution on the Mac and have Docker Desktop installed, you can use the Docker menu bar icon. There you'll find a "Kubernetes" menu listing all the contexts in your kubeconfig, and you can easily switch between them.

To list all contexts
C:\Users\arun>kubectl config get-contexts
To get the current context
C:\Users\arun>kubectl config current-context
To switch contexts
C:\Users\arun>kubectl config use-context <any context name from the list above>

The latest (2020) answer:
A simple way to switch between kubectl contexts is to pass --context to any command:
kubectl top nodes --context=context01name
kubectl top nodes --context=context02name
You can also store the context name in an environment variable:
context01name=gke_${GOOGLE_CLOUD_PROJECT}_us-central1-a_standard-cluster-1
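Then reference it wherever a context name is expected:
kubectl top nodes --context="$context01name"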

I got bored of typing this over and over, so I wrote a simple bash utility to switch contexts.
You can find it here https://github.com/josefkorbel/kube-switch

The canonical answer of switching/reading/manipulating different kubernetes environments (aka kubernetes contexts) is, as Mark mentioned, to use kubectl config, see below:
$ kubectl config
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
Available Commands:
current-context Displays the current-context
delete-cluster Delete the specified cluster from the kubeconfig
delete-context Delete the specified context from the kubeconfig
get-clusters Display clusters defined in the kubeconfig
get-contexts Describe one or many contexts
rename-context Renames a context from the kubeconfig file.
set Sets an individual value in a kubeconfig file
set-cluster Sets a cluster entry in kubeconfig
set-context Sets a context entry in kubeconfig
set-credentials Sets a user entry in kubeconfig
unset Unsets an individual value in a kubeconfig file
use-context Sets the current-context in a kubeconfig file
view Display merged kubeconfig settings or a specified kubeconfig file
Usage:
kubectl config SUBCOMMAND [options]
Behind the scenes, there is a ~/.kube/config YAML file that stores all the available contexts with their corresponding credentials and endpoints.
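A minimal sketch of that file's structure (all names, paths, and the server address below are placeholders):
apiVersion: v1
kind: Config
current-context: minikube
clusters:
- name: minikube
  cluster:
    certificate-authority: /Users/me/.minikube/ca.crt
    server: https://192.168.99.100:8443
contexts:
- name: minikube
  context:
    cluster: minikube
    user: minikube
users:
- name: minikube
  user:
    client-certificate: /Users/me/.minikube/client.crt
    client-key: /Users/me/.minikube/client.key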
Kubectl off the shelf doesn't make it easy to manage different Kubernetes contexts, as you probably already know. Rather than rolling your own script to manage all that, a better approach is to use a mature tool called kubectx, created by Ahmet Alp Balkan, a Googler on the Kubernetes / Google Cloud Platform developer experience team that builds tooling like this. I highly recommend it.
https://github.com/ahmetb/kubectx
$ kubectx --help
USAGE:
kubectx : list the contexts
kubectx <NAME> : switch to context <NAME>
kubectx - : switch to the previous context
kubectx <NEW_NAME>=<NAME> : rename context <NAME> to <NEW_NAME>
kubectx <NEW_NAME>=. : rename current-context to <NEW_NAME>
kubectx -d <NAME> [<NAME...>] : delete context <NAME> ('.' for current-context)
(this command won't delete the user/cluster entry
that is used by the context)
kubectx -h,--help : show this message

TL;DR: I created a GUI to switch Kubernetes contexts via AppleScript. I activate it via shift-cmd-x.
I too had the same issue, and it was a pain switching contexts on the command line. I used FastScripts to set a key combo (shift-cmd-x) to run the following AppleScript (placed in ~/Library/Scripts/Applications/Terminal).
use AppleScript version "2.4" -- Yosemite (10.10) or later
use scripting additions
do shell script "/usr/local/bin/kubectl config current-context"
set curcontext to result
do shell script "/usr/local/bin/kubectl config get-contexts -o name"
set contexts to paragraphs of result
choose from list contexts with prompt "Select Context:" with title "K8s Context Selector" default items {curcontext}
set scriptArguments to item 1 of result
do shell script "/usr/local/bin/kubectl config use-context " & scriptArguments
display dialog "Switched to " & scriptArguments buttons {"ok"} default button 1

Cloning the YAML files across repos for different environments is definitely not ideal. What you want to do instead is templatize your YAML files by extracting the parameters which differ from environment to environment.
You can, of course, use some templating engine, keep the values in separate YAML files, and produce the YAML for a specific environment. But this becomes easy if you adopt Helm charts. To look at some sample charts, go to the stable directory in this GitHub repo.
To take an example of the Wordpress chart, you could have two different commands for two environments:
For Dev:
helm install --name dev-release \
  --set wordpressUsername=dev_admin \
  --set wordpressPassword=dev_password \
  --set mariadb.mariadbRootPassword=dev_secretpassword \
  stable/wordpress
It is not necessary to pass these values on the CLI, though; you can store the values in a file aptly called values.yaml, and you could have different files for different environments.
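For example, with hypothetical per-environment values files:
helm install --name dev-release -f values-dev.yaml stable/wordpress
helm install --name prod-release -f values-prod.yaml stable/wordpress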
You will need some work in converting to Helm chart standards, but the effort will be worth it.

Check also the latest (docker 19.03) docker context command.
Ajeet Singh Raina illustrates it in "Docker 19.03.0 Pre-Release: Fast Context Switching, Rootless Docker, Sysctl support for Swarm Services".
A context is essentially the configuration that you use to access a particular cluster.
Say, for example, in my particular case I have 4 different clusters: a mix of Swarm and Kubernetes, running locally and remotely.
Assume that I have a default cluster running on my desktop machine, a 2-node Swarm cluster running on Google Cloud Platform, a 5-node cluster running on the Play with Docker playground, and a single-node Kubernetes cluster running on minikube, all of which I need to access pretty regularly.
Using the docker context CLI I can easily switch from one cluster (which could be my development cluster) to a test or production cluster in seconds.
$ sudo docker context --help
Usage: docker context COMMAND
Manage contexts
Commands:
create Create a context
export Export a context to a tar or kubeconfig file
import Import a context from a tar file
inspect Display detailed information on one or more contexts
ls List contexts
rm Remove one or more contexts
update Update a context
use Set the current docker context
Run 'docker context COMMAND --help' for more information on a command.
For example:
[:)Captain'sBay=>sudo docker context ls
NAME             DESCRIPTION                                DOCKER ENDPOINT               KUBERNETES ENDPOINT                 ORCHESTRATOR
default *        Current DOCKER_HOST based configuration    unix:///var/run/docker.sock   https://127.0.0.1:16443 (default)   swarm
swarm-context1
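Switching is then a single command (output approximate; context name taken from the listing above):
$ sudo docker context use swarm-context1
swarm-context1
Current context is now "swarm-context1"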

I use kubeswitch (disclaimer: I wrote the tool) that can be used just like kubectx, but is designed for a large number of kubeconfig files.
If you have to deal with hundreds or thousands of kubeconfig files, this tool might be useful to you, otherwise kubectx or kubectl config use-context might be sufficient.
For instance, it adds capabilities like reading from vault, hot reload while searching, and an index to speed up subsequent searches.
You can install it from here.
EDIT: now also includes support for GKE directly. So you can use and discover kubeconfig files without having to manually download and update them.

In case you might be looking for a simple way to switch between different contexts maybe this will be of help.
I was inspired by the kubectx and kswitch scripts already mentioned, which I can recommend for most use-cases. They help with the switching task, but break for me on some bigger or less standard configurations of ~/.kube/config. So I created a sys-exec invocation wrapper and a short-hand around kubectl.
If you call k without parameters you will see an intercepted prompt to switch context:
Switch kubectl to a different context/cluster/namespace.
Found following options to select from:
>>> context: [1] franz
>>> context: [2] gke_foo_us-central1-a_live-v1
>>> context: [3] minikube
--> new num [?/q]:
Further, k continues to act as a short-hand. The following is equivalent:
kubectl get pods --all-namespaces
k get pods -A
k p -A

Yes, I think this is what you're asking about. To view your current config, use kubectl config view. kubectl loads and merges config from the following locations (in order):
--kubeconfig=/path/to/.kube/config command-line flag
KUBECONFIG=/path/to/.kube/config environment variable
$HOME/.kube/config - the default
I use --kubeconfig since I switch a lot between multiple clusters. It's slightly cumbersome, but it works well.
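For example (the per-cluster config file paths are hypothetical):
kubectl --kubeconfig=$HOME/.kube/minikube.config get pods
kubectl --kubeconfig=$HOME/.kube/gke.config get pods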
See these for more info:
https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/ and https://kubernetes.io/docs/concepts/cluster-administration/authenticate-across-clusters-kubeconfig/

Related

Any easy way to keep track of / maintain kubectl config (~/.kube/config) if you are connecting to multiple clusters

Any suggestions on how to keep track of kubectl configs (~/.kube/config) which allow you to access Kubernetes clusters? I have clusters running in different environments (local/prod) where I connect to the namespace the project is deployed in, and whenever I need to connect to a particular cluster, I run the command below to configure access (different commands on AWS/GCP/microk8s etc.) and the configuration gets appended to ~/.kube/config. Is there any easy way to know where you are connected or track which config is being used? It's a disaster waiting to happen unless you do an explicit check.
aws eks --region region update-kubeconfig --name cluster_name
Current methods used:
Check the config (cat ~/.kube/config) to see which cluster I'm connecting to.
Move the config to some other directory and move it back once I'm done.
kubectl get nodes to see where I'm connected.
Using kubectl
Kubectl has built-in support for managing contexts. After you add a context to the ~/.kube/config file, manually or via aws eks update-kubeconfig, you can use the config sub-command to switch between contexts.
To view all saved contexts and highlight the current one:
kubectl config get-contexts
To just view the current context:
kubectl config current-context
To switch to another context
kubectl config use-context <context-name>
To delete a context:
kubectl config delete-context <context-name>
Specific configuration file
Sometimes it might be the case that all the cluster connections cannot be in the same kube config file; instead, the user has a separate kube config file per cluster.
To run kubectl with a specific configuration, one can use --kubeconfig argument:
kubectl --kubeconfig ./someConfig -n someNs get pods
Shell Aliases
And when running from a Linux shell or Windows PowerShell, one can also use aliases.
Linux Bash example:
Use bash alias to define commands as aliases:
# Define a kubectl alias for specific cluster
alias myCluster="kubectl --kubeconfig ./myClusterConfig"
# Define a kubectl alias for specific cluster and specific namespace
alias myClusterNs="kubectl --kubeconfig ./myClusterConfig -n myNamespace"
Usage:
# Using cluster kubectl alias
myCluster -n myNamespace get pods
# Using cluster kubectl alias with namespace
myClusterNs get pods
The alias definitions can be saved to ~/.profile for permanent usage.
Windows Powershell example:
In Windows Powershell, a function can be defined as follows:
function myCluster { kubectl --kubeconfig .\myClusterConfig $args }
And used as:
myCluster -n myNamespace get pods
More arguments like -n <namespace> can also be specified in the function definition before $args. Make sure to properly quote (") arguments with special characters on Windows.
If you don't mind using a UI tool, Lens (https://k8slens.dev/) is really awesome. You can register multiple clusters, give them names and also different pictures.
For the command line, there are shell extensions that add the current cluster + namespace to the shell's prompt, e.g. https://github.com/jonmosco/kube-ps1
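A typical kube-ps1 setup in ~/.bashrc looks roughly like this (the source path is a placeholder; see its README for details):
source ~/kube-ps1/kube-ps1.sh
PS1='[\u@\h \W $(kube_ps1)]\$ '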
For organization I store a separate kubeconfig file for every cluster in a nested folder structure and access them with functions defined in my .zshrc file (zshell config file), e.g.:
env-dev-foo() {
  # export so child processes (kubectl, helm) see the new KUBECONFIG
  export KUBECONFIG="/home/user/.kube/otherkubeconfig/dev/foo/config"
}
env-prod-bar() {
  export KUBECONFIG="/home/user/.kube/otherkubeconfig/prod/bar/config"
}
env-prod-legacy() {
  export KUBECONFIG="/home/user/.kube/otherkubeconfig/prod/legacy/config"
  PATH=$PATH:<path-to-legacy-kubectl>
  PATH=$PATH:<path-to-legacy-helm>
  connect-via-vpn
  create-ssh-tunnel-to-customer-system
}
You can do all sorts of stuff in these functions besides just switching your kubeconfig. E.g., if you need to deal with legacy clusters, you might want to use kubectl/helm binaries of a different version. Or maybe you need to create an SSH tunnel or connect via VPN in order to reach that cluster.

Kubernetes: specify cluster context in apply command

I have multiple kubernetes clusters and want to ensure that when I kubectl apply a deployment, I'm targeting the correct cluster.
I have all my clusters configured as contexts in the ~/.kube/config file, but I don't want to rely on statefully switching my current context to the correct one before running each apply command.
i.e. This is not satisfactory
kubectl config use-context cluster-1-context
kubectl apply ./deploy-to-cluster-1.yml
kubectl config use-context cluster-2-context
kubectl apply ./deploy-to-cluster-2.yml
I read the docs on config for multiple clusters and the only way I can find to do this is by copy/pasting the config for a particular cluster into a custom config file and specifying that with the --kubeconfig option on the apply command.
kubectl apply ./deploy-to-cluster-1.yml --kubeconfig ./config-cluster-1
kubectl apply ./deploy-to-cluster-2.yml --kubeconfig ./config-cluster-2
This works, but it seems really cumbersome.
For such a common requirement I'd expect there to just be a simple option on apply, or perhaps even a field in the deployment yml, that lets you specify (or restrict) the deployment to a particular context/cluster name, but I've read through a lot of the relevant documentation and can't find any such option.
Is there a better way to do this?
There seems to be a --context=... option.
kubectl options
The following options can be passed to any command:
...
--context='': The name of the kubeconfig context to use
at least in version v1.18.6
kubectl takes a --context option:
kubectl --context cluster-1-context apply -f ./deploy-to-cluster-1.yml
There's no way to specify or enforce this in resource YAML files; it's still possible to have accidents.
If you have multiple .kube/config files, you can also set the $KUBECONFIG environment variable to point at one of those. This is understood by the standard Kubernetes SDKs, so almost all tools should support it.
export KUBECONFIG=./cluster-1-config.yml
kubectl apply -f ./deploy-to-cluster-1.yml
(Given the choice I would prefer this approach, because environment variables are shell-local, while the current-context recorded in the shared config file affects all of my open terminal windows. Standard tooling that configures the .kube/config file tends to default to the single shared global file, though, and teasing it apart can be a little tricky.)
Is there a better way to do this?
You can use the kubectx tool to switch back and forth between contexts in a much easier way than with plain kubectl.
USAGE:
kubectx : list the contexts
kubectx <NAME> : switch to context <NAME>
kubectx - : switch to the previous context
kubectx -c, --current : show the current context name
kubectx <NEW_NAME>=<NAME> : rename context <NAME> to <NEW_NAME>
kubectx <NEW_NAME>=. : rename current-context to <NEW_NAME>
kubectx -d <NAME> : delete context <NAME> ('.' for current-context)
(this command won't delete the user/cluster entry
that is used by the context)
kubectx -u, --unset : unset the current context
$ kubectx minikube
Switched to context "minikube".
$ kubectx -
Switched to context "oregon".
$ kubectx -
Switched to context "minikube".
$ kubectx dublin=gke_ahmetb_europe-west1-b_dublin
Context "dublin" set.
Aliased "gke_ahmetb_europe-west1-b_dublin" as "dublin".
As it stands today, there is no way to specify the context as part of the deployment YAML. You can submit a feature request in the Kubernetes GitHub repo for this.

How does Helm keep track of which Kubernetes cluster it installs to?

If I am using kubectx and switch kube config contexts into another cluster e.g. "Production" and run a helm uninstall, how does Helm know which cluster I am referring to?
If I run the helm list command is it only referring to what's installed on my local machine and not per Kubernetes cluster?
Helm will default to using whatever your current Kubernetes context is, as specified in the $HOME/.kube/config file.
There is standard support in the Kubernetes API libraries to read data out of this file (or an alternative specified by a $KUBECONFIG environment variable). If you're writing Go, see the documentation for the k8s.io/client-go/tools/clientcmd package. While kubectx does a bunch of things, its core uses that API to do essentially the same thing as running kubectl config use-context ....
If you want Helm to use a non-default context, there is a global option to specify it:
kubectx production                    # switch the current context
helm list                             # lists releases in the "production" cluster
kubectx development                   # switch the current context back
helm --kube-context production list   # still lists "production", regardless of the current context

Get back docker-for-windows Kubernetes kubeconfig file after deleting it

My Docker for Windows ~/.kube/config file was replaced when setting up access to a cloud-based K8s cluster.
Is there a way to re-create it without having to restart Docker for Windows Kubernetes?
Update
My current ~/.kube/config file is now set to a GKE cluster. I don't want to reset Docker for Kubernetes and clobber it. Instead I want to create a separate kubeconfig file for Docker for Windows i.e. place it in some other location rather than ~/.kube/config.
You probably want to back up your ~/.kube/config for GKE and then disable/reenable Kubernetes on Docker for Windows. Pull up a Windows command prompt:
copy <where-your-.kube-is>\config <where-your-.kube-is>\config.bak
Then follow this. In essence, uncheck the box, wait for a few minutes and check it again.
You can re-create it without disabling/re-enabling Kubernetes on Docker, but you will have to know exactly where your API server and credentials (certificates, etc.) are:
kubectl config set-context ...
kubectl config use-context ...
What's odd is that you are specifying ~/.kube/config, where the ~ (tilde) is a unix/linux thing; maybe what you mean is $HOME.
I just want to add to this, in case you are using WSL as your kubectl/docker client as I am.
You can find your local Kubernetes config in C:\Users\username\.kube\config.
You can then use that to create a new kubernetes context for docker.
For instance:
cp /mnt/c/Users/username/.kube/config ~/.kube/docker-k8s.config
docker context create local-k8s --default-stack-orchestrator=kubernetes --kubernetes config-file=/home/username/.kube/docker-k8s.config --docker host=tcp://localhost:2375
Note: I have exposed the docker engine on port 2375. The default settings for the unix-socket type of connection can be found at the link above. You need to add the absolute path to the kubeconfig; you can't use '~'.
Then you can use docker context use <context name> to switch between your local docker-desktop kubernetes cluster and an external cloud env cluster with your docker client.
docker context ls will show the local existing contexts.
You basically want to access multiple clusters. One option is to play around with KUBECONFIG environmental variable. Here is the documentation.
The KUBECONFIG environment variable is a list of paths to configuration files. The list is colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have a KUBECONFIG environment variable, familiarize yourself with the configuration files in the list.
Or, you can provide an inline option.
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter
And then use use-context:
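For instance, to make dev-frontend the current context within that same file:
kubectl config --kubeconfig=config-demo use-context dev-frontend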

How can I configure kubectl to interact with both minikube and a deployed cluster?

When you use minikube, it automatically creates the local configurations, so it's ready to use. And it appears there is support for multiple clusters in the kubectl command based on the reference for kubectl config.
In the docs for setting up clusters, there's a reference to copying the relevant files to your local machine to access the cluster. I also found an SO Q&A about leveraging Azure remotely that walked through editing the .kube/config file.
It looks like the environment variable $KUBECONFIG can reference multiple locations of these configuration files, with the built-in default being ~/.kube/config (which is what minikube creates).
If I want to be able to use kubectl to invoke commands to multiple clusters, should I download the relevant config file into a new location (for example into ~/gcloud/config) and set the KUBECONFIG environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when invoking kubectl to specify a configuration for the cluster?
I wasn't sure if there was some way of merging the configuration files that would be better, and leverage the kubectl config set-context or kubectl config set-cluster commands instead. The documentation at Kubernetes on "Configure Access to Multiple Clusters" seems to imply a different means of using --kubeconfig along with these kubectl config commands.
In short, what's the best way to interact with multiple separate kubernetes clusters and what are the tradeoffs?
If I want to be able to use kubectl to invoke commands to multiple
clusters, should I download the relevant config file into a new
location (for example into ~/gcloud/config) and set the KUBECONFIG
environment variable to reference both locations?
Or is it better to just explicitly use the --kubeconfig option when
invoking kubectl to specify a configuration for the cluster?
That would probably depend on which approach you find simpler and more convenient, and whether security and access-management concerns need to be kept in mind.
From our experience, merging various kubeconfig files is very useful for multi-cluster operations: carrying out maintenance tasks and incident management over a group of clusters (contexts & namespaces), and simplifying troubleshooting based on the ability to compare configs, manifests, resources, and the state of K8s services, pods, volumes, namespaces, rs, etc.
However, when automation and deployment (with tools like Jenkins, Spinnaker or Helm) are involved, having separate kubeconfig files is most likely a good idea. A hybrid approach can be merging kubeconfig files based on a division by service tier (using files to partition development landscapes: dev, qa, stg, prod) or by team (roles and responsibilities in an enterprise: teamA, teamB, ..., teamN); both are good alternatives.
For multi-cluster merged-kubeconfig scenarios, consider kubectx + kubens, which are very powerful kubectl tools that let you see the current context (cluster) and namespace, and switch between them.
In short, what's the best way to interact with multiple separate
kubernetes clusters and what are the trade offs?
The trade-offs should be analyzed considering the most important factors for your project. Having a single merged kubeconfig file seems simpler, even more so if you merge it with ~/.kube/config so it is used by kubectl by default, and you just switch between clusters/namespaces with the --context kubectl flag. On the other hand, if limiting the scope of the kubeconfig is a must, having them segregated and using --kubeconfig=file1 sounds like the best way to go.
There is probably no single best way for every case and scenario, but knowing how to configure the kubeconfig file and its precedence will help.
In this article -> https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/ you'll find a complementary and valuable opinion:
While having all of the contexts you may need in one file is nice, it
is difficult to maintain, and seldom the default case. Multiple tools
which provide you with access credentials will provide a fresh
kubeconfig to use. While you can merge the configs together into
~/.kube/config, it is manual, and makes removing contexts more
difficult (having to explicitly remove the context, cluster, and
user). There is an open issue in Kubernetes tracking this. However by
keeping each provided config file separate, and just loading all of
them, removal is much easier (just remove the file). To me, this
seems like a much more manageable approach.
I prefer to keep all individual config files under ~/.kube/configs and, by taking advantage of the multiple-path aspect of the $KUBECONFIG environment variable, make this happen.
If you're using kubectl, here's the precedence that takes effect when determining which kubeconfig file is used:
use --kubeconfig flag, if specified
use KUBECONFIG environment variable, if specified
use $HOME/.kube/config file
With this, you can easily override kubeconfig file you use per the kubectl command:
#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
NOTE: The --minify flag allows us to extract only info about that context, and the --flatten flag allows us to keep the credentials unredacted.
Bonus (extra points!)
Using multiple kubeconfigs at once
You can save AKS (Azure Kubernetes Service), AWS EKS (Elastic Kubernetes Service) or GKE (Google Kubernetes Engine) cluster contexts to separate files and set the KUBECONFIG env var to reference all of the file locations.
For instance, when you create a GKE cluster (or retrieve its credentials) through the gcloud command, it normally modifies your default ~/.kube/config file. However, you can set $KUBECONFIG for gcloud to save cluster credentials to a file:
KUBECONFIG=c1.yaml gcloud container clusters get-credentials "cluster-1"
Then as we mentioned before using multiple kubeconfigs at once can be very useful to work with multiple contexts at the same time.
To do that, you need a “merged” kubeconfig file. In the section "Merging kubeconfig files" below, we explain how you can merge the kubeconfigs into a single file, but you can also merge them in-memory.
By specifying multiple files in KUBECONFIG environment variable, you can temporarily stitch kubeconfig files together and use them all in kubectl .
#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
Merging kubeconfig files
Since kubeconfig files are structured YAML files, you can’t just append them to get one big kubeconfig file, but kubectl can help you merge these files:
#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
ref article 1: https://ahmet.im/blog/mastering-kubeconfig/
ref article 2: https://github.com/kubernetes/kubernetes/issues/46381
I have a series of shell functions that boil down to kubectl --context=$CTX --namespace=$NS, allowing me to contextualize each shell [1]. But if you are cool with that approach then, rather than rolling your own, https://github.com/Comcast/k8sh will likely interest you. I just wish it were shell functions instead of a sub-shell.
But otherwise, yes, I keep all the config values in the one ~/.kube/config
footnote 1: if you weren't already aware, one can also change the title of terminal windows via title() { printf '\033]0;%s\007' "$*"; } which I do in order to remind me which cluster/namespace/etc is in effect for that tab/window
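A minimal sketch of the kind of wrapper function described above (the context and namespace names are hypothetical):
# each function pins a context and namespace, passing everything else through
kprod() { kubectl --context=production --namespace=default "$@"; }
kdev()  { kubectl --context=minikube --namespace=default "$@"; }
Usage is then e.g. kprod get pods.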
You can use the --kubeconfig flag to tell kubectl which config file to run against; note that each file is a full Kubernetes config:
kubectl get pods --kubeconfig file1.yaml
kubectl get pods --kubeconfig file2.yaml