Merging kubeconfig JSON and YAML - kubernetes

I have two kubeconfig files. The first one, shown below, is the one I use to communicate with the cluster; the second one is for Aquasec and is in JSON format. How can I merge these two?
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://656835E69F31E2933asdAFAKE3F5904sadFDDC112dsasa7.yld432.eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:test651666:cluster/Magento
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.142.242.111:6443
  name: kubernetes
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:test651666:cluster/testing
    user: arn:aws:eks:eu-west-2:test651666:cluster/testing
  name: arn:aws:eks:eu-west-2:test651666:cluster/testing
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: arn:aws:eks:eu-west-2:test651666:cluster/testing
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:test651666:cluster/testing

You can set the KUBECONFIG environment variable to multiple config files delimited by : and kubectl will automatically merge them behind the scenes.
For example:
export KUBECONFIG=config:my-config.json
In the export above, config is the default config file contained within ~/.kube and my-config.json would be your second config file, which you said is in JSON format.
You can see the merged config using this command, which shows a unified view of the configuration that kubectl is currently using:
kubectl config view
Because kubectl automatically merges multiple configs, you shouldn't need to save the merged config to a file. But if you really want to do that, you can redirect the output, like this:
kubectl config view --flatten > merged-config.yaml
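If you do want the merged result to become your default config, here is a minimal sketch of one way to do it, assuming your existing file is ~/.kube/config and the Aquasec file is ./my-config.json:
export KUBECONFIG=~/.kube/config:./my-config.json
kubectl config view --flatten > /tmp/merged-config.yaml   # write to a temp file so the source isn't truncated while being read
cp ~/.kube/config ~/.kube/config.bak                      # keep a backup of the original
mv /tmp/merged-config.yaml ~/.kube/config
unset KUBECONFIG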
Check out Mastering the KUBECONFIG file and Organizing Cluster Access Using kubeconfig Files for more explanation and some other examples.

Related

How to add config to ${HOME}/.kube/config?

I have a new Civo cluster and I downloaded its config, but
kubectl config --kubeconfig=civo-enroute-demo-kubeconfig
does not work.
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
Output config view
kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://00.220.23.220:6443
  name: enroute-demo
contexts:
- context:
    cluster: enroute-demo
    user: enroute-demo
  name: enroute-demo
current-context: enroute-demo
kind: Config
preferences: {}
users:
- name: enroute-demo
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
What should the kubectl command look like?
You can use a kubeconfig file with kubectl like this:
kubectl --kubeconfig=config_file.yaml command
Example:
kubectl --kubeconfig=civo-enroute-demo-kubeconfig.yaml get nodes
OR
You can export KUBECONFIG and then run kubectl commands directly, as shown below.
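For example, a minimal sketch assuming the downloaded file is named civo-enroute-demo-kubeconfig.yaml and sits in the current directory:
export KUBECONFIG=$PWD/civo-enroute-demo-kubeconfig.yaml
kubectl config get-contexts
kubectl get nodes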

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu,
but when I try to run a kubectl command I get a certificate-expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is a similar issue open on the minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the servers to reboot the MicroK8s cluster.
minikube delete (which deletes the local Kubernetes cluster) worked for me.
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
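For reference, a rough sketch of both recovery paths mentioned above; the exact commands depend on your MicroK8s/minikube versions, so treat them as assumptions rather than exact invocations:
# MicroK8s (the config shown above points at a MicroK8s cluster)
sudo microk8s.refresh-certs    # newer releases may require an argument such as --cert
microk8s stop && microk8s start
# minikube: throw the local cluster away and recreate it
minikube delete
minikube start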

Kubernetes can't see my namespace on the command line but it exists in my dashboard

I run this command:
kubectl config get-contexts
and I don't get any namespace... but when I go to the dashboard I can see 2 namespaces created?
config:
apiVersion: v1
clusters:
- cluster:
    server: https://name_of_company
  name: cluster
contexts:
- context:
    cluster: cluster
    user: ME
  name: ME@cluster
current-context: ME@cluster
kind: Config
preferences: {}
users:
- name: MY NAME
  user:
    auth-provider:
      config:
        client-id: MY ID
        id-token: MY ID TOKEN
        idp-issuer-url: https://name_of_company/multiauth
      name: oidc
Please use the command kubectl get namespace to list namespaces in a cluster.
Please check
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
kubectl get ns lists the namespaces (check here), while kubectl config get-contexts lists the contexts in your kubeconfig, which describe clusters, users, and contexts. Read here
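To make the distinction concrete, a quick comparison you can run against whatever cluster your current context points to:
kubectl get namespaces        # lists the namespaces that exist in the cluster
kubectl config get-contexts   # lists the contexts defined in your kubeconfig, not namespaces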

rendering env-var inside kubernetes kubeconfig yaml file

I need to use an environment variable inside my kubeconfig file to point to the NODE_IP of the Kubernetes API server.
My config is:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$NODE_IP:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    ......
But it seems the variable in the kubeconfig file is not rendered when I run the command:
kubectl --kubeconfig mykubeConfigFile get pods.
It complains as below:
Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host
Has anyone tried to do something like this, and is it possible to make it work?
Thanks in advance
This thread contains explanations and answers:
either wait for the implementation of the templating proposal in Kubernetes (Implement templates · Issue #23896 · kubernetes/kubernetes, not merged yet),
or preprocess your YAML with tools like:
envsubst:
export NODE_IP="127.0.11.1"
kubectl --kubeconfig <(envsubst < mykubeConfigFile.yml) get pods
sed:
kubectl --kubeconfig <(sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml) get pods
Note that the rendered output has to be what kubectl actually reads; piping it while still pointing --kubeconfig at the original file would leave the variable unsubstituted. The <(...) process substitution above requires bash or zsh.
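If your shell doesn't support process substitution, an equivalent approach (the temporary file path here is just an example) is to render the config to a file first and point kubectl at it:
export NODE_IP="127.0.11.1"
envsubst < mykubeConfigFile.yml > /tmp/rendered-kubeconfig.yml
kubectl --kubeconfig /tmp/rendered-kubeconfig.yml get pods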

kubectl config use-context deleting eks user

I'm encountering some really weird behaviour while attempting to switch contexts using kubectl.
My config file declares two contexts; one points to an in-house cluster, while the other points to an Amazon EKS cluster.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: in-house
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: eks
contexts:
- context:
    cluster: in-house
    user: divesh-in-house
  name: in-house-context
- context:
    cluster: eks
    user: divesh-eks
  name: eks-context
current-context: in-house-context
preferences: {}
users:
- name: divesh-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks"
      env: null
- name: divesh-in-house
  user:
    client-certificate-data: <..>
    client-key-data: <..>
I'm also using the aws-iam-authenticator to authenticate to the EKS cluster.
My problem is this: as long as I work with the in-house cluster, everything works fine. But when I execute kubectl config use-context eks-context, I observe the following behaviour.
Any operation I try to perform on the cluster (say, kubectl get pods -n production) shows me a Please enter Username: prompt. I assumed the aws-iam-authenticator should have managed the authentication for me. I can confirm that running the authenticator manually (aws-iam-authenticator token -i eks) works fine for me.
Executing kubectl config view omits the divesh-eks user, so the output looks like
users:
- name: divesh-eks
  user: {}
Switching back to the in-house cluster by executing kubectl config use-context in-house-context modifies my config file and deletes the divesh-eks user, so the config file now contains
users:
- name: divesh-eks
  user: {}
My colleagues don't seem to face this problem.
Thoughts?
The exec portion of that config was added in Kubernetes 1.10 (https://github.com/kubernetes/kubernetes/pull/59495).
If you use a version of kubectl prior to 1.10, it will not recognize the exec plugin (resulting in prompts for credentials), and if you use it to make kubeconfig changes, it will drop the exec field when it persists the changes.
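A quick way to check which client version your kubectl binary is (the output format varies between releases):
kubectl version --client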