How to properly access multiple Kubernetes clusters using kubectl - kubernetes

I have two clusters and the config files are stored in .kube. I am exporting KUBECONFIG as below
export KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2
Checking the contexts:
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
But when I choose cluster-2 as my current context I get an error
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
kubectl config use-context cluster-2
Switched to context "cluster-2".
kubectl get pods -A
error: You must be logged in to the server (Unauthorized)
If I export only the config for cluster-2 and run kubectl, it works fine.
My question is whether I am exporting the config files properly, or whether I should be doing something more.

You need a separate AUTHINFO (the context.user field in the config file) for each cluster, each with its respective credentials.
For example:
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.10.190:6443
  name: cluster-1
- cluster:
    server: https://192.168.99.101:8443
  name: cluster-2
contexts:
- context:
    cluster: cluster-1
    user: kubernetes-admin-1
  name: cluster-1
- context:
    cluster: cluster-2
    user: kubernetes-admin-2
  name: cluster-2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
    client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
    client-key: /home/user/.minikube/credential-for-cluster-2.key
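With the user entries split like this, switching the context also switches the credentials that kubectl presents. A quick way to verify, assuming the two files are merged via KUBECONFIG as in the question:
export KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2
kubectl config get-contexts        # each context should now show its own AUTHINFO
kubectl config use-context cluster-2
kubectl get pods -A                # authenticates with kubernetes-admin-2's credentials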
You can find more useful tips in the following article:
Using different kubectl versions with multiple Kubernetes clusters:
When you are working with multiple Kubernetes clusters, it's easy to mix up contexts and run kubectl against the wrong cluster. Beyond that, Kubernetes has restrictions on the version skew between the client (kubectl) and the server (the Kubernetes control plane), so running commands in the right context does not mean running the right client version.
To overcome this:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
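Putting a few of these together, a minimal sketch for a shell profile; the aliases and file paths below are illustrative and assume kubectx, kubens, and kube-ps1 are installed:
# ~/.bashrc (illustrative)
export KUBECONFIG=$HOME/.kube/config-cluster1:$HOME/.kube/config-cluster2
alias k=kubectl
alias kx=kubectx                # jump between clusters/contexts
alias kn=kubens                 # jump between namespaces
source /path/to/kube-ps1.sh     # kube-ps1 ships a kube_ps1 prompt function
PS1='$(kube_ps1) '$PS1          # show the current context/namespace in the prompt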
I also recommend the following reads:
Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)

I wrote a script to switch kubeconfig and namespace easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

Related

Is "current-context" a mandatory key in a kubeconfig file?

THE PLOT:
I am working on a Kubernetes environment where we have a PROD and an ITG setup. The ITG setup is a multi-cluster environment, whereas the PROD setup is a single-cluster environment.
I am trying to automate some processes using Python, where I have to deal with the kubeconfig file, and I am using the kubernetes library for it.
THE PROBLEM:
The kubeconfig file for PROD has the "current-context" key, but the same is missing from the kubeconfig file for ITG.
prdconfig:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster3.url.com:3600
  name: cluster-ABC
contexts:
- context:
    cluster: cluster-LMN
    user: cluster-user
  name: cluster-LMN-context
current-context: cluster-LMN-context
kind: Config
preferences: {}
users:
- name: cluster-user
  user:
    exec:
      command: kubectl
      apiVersion: <clientauth/version>
      args:
      - kubectl-custom-plugin
      - authenticate
      - https://cluster.url.com:8080
      - --user=user
      - --token=/api/v2/session/xxxx
      - --token-expiry=1000000000
      - --force-reauth=false
      - --insecure-skip-tls-verify=true
itgconfig:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster1.url.com:3600
  name: cluster-ABC
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster2.url.com:3601
  name: cluster-XYZ
contexts:
- context:
    cluster: cluster-ABC
    user: cluster-user
  name: cluster-ABC-context
- context:
    cluster: cluster-XYZ
    user: cluster-user
  name: cluster-XYZ-context
kind: Config
preferences: {}
users:
- name: cluster-user
  user:
    exec:
      command: kubectl
      apiVersion: <clientauth/version>
      args:
      - kubectl-custom-plugin
      - authenticate
      - https://cluster.url.com:8080
      - --user=user
      - --token=/api/v2/session/xxxx
      - --token-expiry=1000000000
      - --force-reauth=false
      - --insecure-skip-tls-verify=true
When I try loading the kubeconfig file for PROD using config.load_kube_config(os.path.expanduser('~/.kube/prdconfig')) it works.
And when I try loading the kubeconfig file for ITG using config.load_kube_config(os.path.expanduser('~/.kube/itgconfig')), I get the following error:
ConfigException: Invalid kube-config file. Expected key
current-context in C:\Users\<username>/.kube/itgconfig
It is clear from the error message that the kubeconfig file is considered invalid because it does not have the "current-context" key in it.
THE SUB-PLOT:
When working with kubectl, the missing "current-context" does not make any difference, as we can always specify the context along with the command. But the load_kube_config() function makes it mandatory for "current-context" to be present.
THE QUESTION:
So, is "current-context" a mandatory key in kubeconfig file?
THE DISCLAIMER:
I am very new to kubernetes and have very little experience working with it.
As described in the comments:
If we want a kubeconfig file to work out of the box with a specific cluster by default, whether via kubectl or a Python script, we can mark one of the contexts in the kubeconfig file as the default by specifying current-context.
Note about Context:
A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.
In order to mark one of our contexts (e.g. dev-frontend) in our kubeconfig file as the default one, run:
kubectl config use-context dev-fronted
Now whenever you run a kubectl command, the action will apply to the cluster and namespace listed in the dev-frontend context, and the command will use the credentials of the user listed in that context.
Please take a look at:
- Merging kubeconfig files:
Determine the context to use based on the first hit in this chain:
1. Use the --context command-line flag if it exists.
2. Use the current-context from the merged kubeconfig files.
An empty context is allowed at this point.
Determine the cluster and user. At this point, there might or might not be a context. Determine the cluster and user based on the first hit in this chain, which is run twice: once for user and once for cluster:
1. Use a command-line flag if it exists: --user or --cluster.
2. If the context is non-empty, take the user or cluster from the context.
The user and cluster can be empty at this point.
Whenever we run kubectl commands without current-context set, we should provide additional parameters to tell kubectl which configuration to use; in your example it could be, for example:
kubectl --kubeconfig=/your_directory/itgconfig get pods --context cluster-ABC-context
As described earlier, to simplify this task we can configure current-context in the kubeconfig file:
kubectl config --kubeconfig=/your_directory/itgconfig use-context cluster-ABC-context
Going further into the errors generated by your script, note the error raised in config/kube_config.py:
config/kube_config.py", line 257, in set_active_context
    context_name = self._config['current-context']
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected key current-context in ...
Here is an example with the additional context="cluster-ABC-context" parameter:
from kubernetes import client, config
config.load_kube_config(config_file='/example/data/merged/itgconfig', context="cluster-ABC-context")
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
...
Listing pods with their IPs:
10.200.xxx.xxx kube-system coredns-558bd4d5db-qpzb8
192.168.xxx.xxx kube-system etcd-debian-test
...
Additional information
Configure Access to Multiple Clusters
Organizing Cluster Access Using kubeconfig Files

Kubernetes can't see my namespace in cmd line but it exists in my dashboard

I run this command:
kubectl config get-contexts
and I don't get any namespaces... but when I go to the dashboard I can see 2 namespaces created?
config :
apiVersion: v1
clusters:
- cluster:
    server: https://name_of_company
  name: cluster
contexts:
- context:
    cluster: cluster
    user: ME
  name: ME#cluster
current-context: ME#cluster
kind: Config
preferences: {}
users:
- name: MY NAME
  user:
    auth-provider:
      config:
        client-id: MY ID
        id-token: MY ID TOKEN
        idp-issuer-url: https://name_of_company/multiauth
      name: oidc
Please use the command kubectl get namespace to list namespaces in a cluster.
Please check
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
kubectl get ns lists the namespaces in the cluster, while kubectl config get-contexts lists the contexts in your kubeconfig, which describe clusters, users, and contexts.
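In other words (both are standard kubectl commands):
kubectl config get-contexts   # reads your local kubeconfig: contexts, clusters, users
kubectl get namespaces        # queries the API server: the namespaces that actually exist in the cluster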

rendering env-var inside kubernetes kubeconfig yaml file

I need to use an environment variable inside my kubeconfig file to point to the NODE_IP of the Kubernetes API server.
My config is:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$NODE_IP:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    ......
But it seems the variables in the kubeconfig file are not being rendered when I run the command:
kubectl --kubeconfig mykubeConfigFile get pods
It complains as below:
Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host
Has anyone tried to do something like this, or is it possible to make it work?
Thanks in advance.
This thread contains explanations and answers:
... either wait for the implementation of the templating proposal in k8s (Implement templates · Issue #23896 · kubernetes/kubernetes, not merged yet),
... or preprocess your YAML with tools like:
envsubst:
export NODE_IP="127.0.11.1"
kubectl --kubeconfig <(envsubst < mykubeConfigFile.yml) get pods   # process substitution feeds kubectl the rendered config
sed:
kubectl --kubeconfig <(sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml) get pods
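If your shell does not support process substitution, rendering to a temporary file works just as well; a minimal sketch (the temporary path is illustrative):
export NODE_IP="127.0.11.1"
envsubst < mykubeConfigFile.yml > /tmp/renderedKubeConfig.yml   # write a rendered copy
kubectl --kubeconfig /tmp/renderedKubeConfig.yml get pods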

Puppet kubernetes module

I installed the puppet kubernetes module to manage pods of my kubernetes cluster with https://github.com/garethr/garethr-kubernetes/blob/master/README.md
I am not able to get any pod information back when I run
puppet resource kubernetes_pod
It just returns an empty line.
I am using a minikube k8s cluster to test the puppet module against.
cat /etc/puppetlabs/puppet/kubernetes.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://<ip address>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
I am able to use curl with the certs to talk to the K8s REST API
curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://<minikube ip>:8443/api/v1/pods/
It looks like the garethr-kubernetes package hasn't been updated since August 2017, so you probably need a version of the kubeclient gem from around that time. kubeclient 3.0 came out quite recently, so you might want to try the latest version of the 2.5 series (currently 2.5.2).
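For example, a minimal sketch of pinning the gem for the Ruby that Puppet uses; the gem binary path below is an assumption and may differ on your system:
# Install a 2.5.x kubeclient for Puppet's bundled Ruby (path is an assumption)
/opt/puppetlabs/puppet/bin/gem install kubeclient -v 2.5.2
/opt/puppetlabs/puppet/bin/gem list kubeclient   # verify the installed versions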
From the requirements, this could be related to a credentials issue, or the configuration may be pointing at a namespace with nothing in it.
As shown in this issue, check the following:
kubectl get pods works fine at the command line, and my ~/.puppetlabs/etc/puppet/kubernetes.conf file is generated as suggested:
mc0e@xxx:~$ kubectl config view --raw=true
apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8080
  name: test-doc
contexts:
- context:
    cluster: test-doc
    user: ""
  name: test-doc
current-context: test-doc
kind: Config
preferences: {}
users: []

Kubectl Error when accessing Namespaces

I was trying out the Tectonic Kubernetes sandbox setup and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kube-config files, but when I tried to get the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? Where is it picking up this localhost:8080 from?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear - you're not connected to the cluster.
You should have a kubeconfig file containing the cluster connection information, i.e. the context. I assume that if you run kubectl config view you'll get nothing.
I'm on Windows using Git Bash; if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://platform-svc-integration.net
  name: svc-integration
contexts:
- context:
    cluster: svc-integration
    user: svc-integration-admin
  name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
  user:
    client-certificate: <path>/admin/admin.crt
    client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
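For example, a minimal sketch of creating a context from scratch with kubectl config; the cluster name, server URL, and certificate paths below are illustrative:
kubectl config set-cluster my-cluster --server=https://my-cluster.example.com:6443 --certificate-authority=/path/to/ca.crt
kubectl config set-credentials my-admin --client-certificate=/path/to/admin.crt --client-key=/path/to/admin.key
kubectl config set-context my-context --cluster=my-cluster --user=my-admin
kubectl config use-context my-context
kubectl get namespaces   # should now talk to the configured cluster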