kubectl config use-context deleting eks user - kubernetes

I'm encountering some really weird behaviour while attempting to switch contexts using kubectl.
My config file declares two contexts; one points to an in-house cluster, while the other points to an Amazon EKS cluster.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: in-house
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: eks
contexts:
- context:
    cluster: in-house
    user: divesh-in-house
  name: in-house-context
- context:
    cluster: eks
    user: divesh-eks
  name: eks-context
current-context: in-house-context
preferences: {}
users:
- name: divesh-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks"
      env: null
- name: divesh-in-house
  user:
    client-certificate-data: <..>
    client-key-data: <..>
I'm also using the aws-iam-authenticator to authenticate to the EKS cluster.
My problem is this: as long as I work with the in-house cluster, everything works fine. But when I execute kubectl config use-context eks-context, I observe the following behaviour.
Any operation I try to perform on the cluster (say, kubectl get pods -n production) shows me a Please enter Username: prompt. I assumed the aws-iam-authenticator would handle the authentication for me. I can confirm that running the authenticator manually (aws-iam-authenticator token -i eks) works fine for me.
Executing kubectl config view omits the exec configuration for the divesh-eks user, so the output looks like
users:
- name: divesh-eks
  user: {}
Switching back to the in-house cluster by executing kubectl config use-context in-house-context modifies my config file and deletes the divesh-eks user, so the config file now contains
users:
- name: divesh-eks
  user: {}
My colleagues don't seem to face this problem.
Thoughts?

The exec portion of that config was added in kubectl 1.10 (https://github.com/kubernetes/kubernetes/pull/59495).
If you use a version of kubectl prior to 1.10, it will not recognize the exec plugin (resulting in prompts for credentials), and if you use that older kubectl to make kubeconfig changes, it will drop the exec field when it persists the changes.
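One quick way to confirm this on an affected machine is to compare client versions and check whether the exec stanza survives in the kubeconfig. A minimal sketch, assuming kubectl and aws-iam-authenticator are on the PATH (the user name divesh-eks is taken from the config above):
# Anything older than v1.10 does not understand exec credential plugins
# and will prompt for a username instead.
kubectl version --client
# Check whether the exec command is still present for the divesh-eks user;
# an older client drops it whenever it rewrites the file (e.g. on use-context).
kubectl config view --raw -o jsonpath='{.users[?(@.name=="divesh-eks")].user.exec.command}'
# The authenticator itself can still be exercised directly.
aws-iam-authenticator token -i eks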

Related

Getting password prompted when running kubectl commands

I inherited a couple of servers running Kubernetes, and one of the things secops wants me to do is install an agent on the server. One of the first commands to run is
kubectl create secret generic
Running this, I am prompted for a username and password. No one here knows what these credentials are because the dev who set up the server is gone, so I don't know how to run this command and get past the username/password prompt. An obvious suggestion from someone else was to use a default user/pass, but I can't even find that online. I found this to help get info on the server:
kubectl config view
Output of this command:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Server:
CentOS Linux release 7.9.2009
Kernel - 5.17.2-1.el7.elrepo.x86_64
Any help is appreciated.
It is plausible that the kubeconfig file you are using is corrupt. You can reproduce similar symptoms (a user/pass prompt) by editing the user name in your kubeconfig file. You need to find (or create) the right kubeconfig file for the user. If you are an admin, you can find it at /etc/kubernetes/admin.conf on the master node.
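For example, on a kubeadm-provisioned control-plane node, a rough sketch of pointing your own user at the admin kubeconfig (assuming sudo access and the default file location) looks like this:
# Copy the cluster admin kubeconfig into the current user's home directory
# (standard kubeadm layout; adjust the path if your distribution differs).
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify the credentials are picked up and no username prompt appears.
kubectl config view
kubectl get nodes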
Here are steps to reproduce the issue:
// This is my kubeconfig file, working fine
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
// I searched for the user name
kubectl config view |grep 'user: default'
user: default
// corrupted the user name from default to default1
sed -i.bak 's/user: default/user: default1/g' ~/.kube/config
// now getting prompted for user/password
kubectl get pod --kubeconfig .kube/config
Please enter Username:
^C
//reverted the changes done earlier
sed -i 's/user: default1/user: default/g' ~/.kube/config
// commands working fine now
kubectl get pod --kubeconfig .kube/config
No resources found in default namespace.

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu, but when I try to use a kubectl command I get a certificate-expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How can I renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is similar issue opened on minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the servers to reboot the microk8s cluster.
minikube delete - deletes the local Kubernetes cluster - worked for me
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
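As a rough sketch of both approaches mentioned above (assuming default microk8s and minikube installations; adjust for your setup):
# microk8s: regenerate the expired certificates, then restart the snap.
sudo microk8s.refresh-certs
sudo snap restart microk8s
# minikube: delete and recreate the local cluster, which also regenerates
# its certificates.
minikube delete
minikube start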

Merging kubeconfig JSON and YAML

I have two kubeconfig files: the first one, shown below, is the one I use to communicate with the cluster, and the second one, for Aquasec, is in JSON format. How can I merge these two?
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://656835E69F31E2933asdAFAKE3F5904sadFDDC112dsasa7.yld432.eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:test651666:cluster/Magento
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.142.242.111:6443
  name: kubernetes
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:test651666:cluster/testing
    user: arn:aws:eks:eu-west-2:test651666:cluster/testing
  name: arn:aws:eks:eu-west-2:test651666:cluster/testing
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: arn:aws:eks:eu-west-2:test651666:cluster/testing
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:test651666:cluster/testing
You can set the KUBECONFIG environment variable to multiple config files delimited by : and kubectl will automatically merge them behind the scenes.
For example:
export KUBECONFIG=config:my-config.json
In the export above, config is the default config file contained within ~/.kube and my-config.json would be your second config file, which you said is in JSON format.
You can see the merged config using this command, which shows a unified view of the configuration that kubectl is currently using:
kubectl config view
Because kubectl automatically merges multiple configs, you shouldn't need to save the merged config to a file. But if you really want to do that, you can redirect the output, like this:
kubectl config view --flatten > merged-config.yaml
Check out Mastering the KUBECONFIG file and Organizing Cluster Access Using kubeconfig Files for more explanation and some other examples.
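Once KUBECONFIG points at both files, you can confirm the merge and switch between the contexts each file contributes; a small sketch (the context name aquasec is a placeholder for whatever your JSON file defines):
# List every context kubectl can see across the merged files.
kubectl config get-contexts
# Switch to a context that came from the second (JSON) file.
kubectl config use-context aquasec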

Puppet kubernetes module

I installed the Puppet Kubernetes module (https://github.com/garethr/garethr-kubernetes/blob/master/README.md) to manage pods in my Kubernetes cluster.
I am not able to get any pod information back when I run
puppet resource kubernetes_pod
It just returns an empty line.
I am using a minikube k8s cluster to test the puppet module against.
cat /etc/puppetlabs/puppet/kubernetes.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://<ip address>:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/apiserver.crt
    client-key: /root/.minikube/apiserver.key
I am able to use curl with the certs to talk to the K8s REST API
curl --cacert /root/.minikube/ca.crt --cert /root/.minikube/apiserver.crt --key /root/.minikube/apiserver.key https://<minikube ip>:8443/api/v1/pods/
It looks like the garethr-kubernetes package hasn't been updated since August 2017, so you probably need a version of the kubeclient gem at least that old. kubeclient 3.0 came out quite recently, so you might want to try the latest release from the 2.x series (currently 2.5.2).
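If you want to try that, here is a sketch of pinning the gem for the Ruby that the Puppet agent ships with (the /opt/puppetlabs path is the standard puppet-agent layout; treat the exact version constraint as an assumption):
# Install a 2.x kubeclient into Puppet's vendored Ruby, since the
# garethr-kubernetes module predates kubeclient 3.0.
/opt/puppetlabs/puppet/bin/gem install kubeclient --version '~> 2.5'
# Confirm which version Puppet's Ruby now resolves.
/opt/puppetlabs/puppet/bin/gem list kubeclient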
From the requirements, this could also be related to a credentials issue, or the configuration may point to a namespace with nothing in it.
As shown in this issue, check the following:
kubectl get pods works fine at the command line, and my ~/.puppetlabs/etc/puppet/kubernetes.conf file is generated as suggested:
mc0e@xxx:~$ kubectl config view --raw=true
apiVersion: v1
clusters:
- cluster:
    server: http://localhost:8080
  name: test-doc
contexts:
- context:
    cluster: test-doc
    user: ""
  name: test-doc
current-context: test-doc
kind: Config
preferences: {}
users: []

Kubectl Error when accessing Namespaces

I was trying out the Tectonic Kubernetes sandbox setup and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kubeconfig files, but when I tried to list the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? Where is it picking up localhost:8080 from?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear - you're not connected to the cluster.
You should have a kubeconfig file containing the cluster connection information, i.e. the context; I assume that if you run kubectl config view you'll get nothing.
I'm on Windows using Git Bash; if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://platform-svc-integration.net
  name: svc-integration
contexts:
- context:
    cluster: svc-integration
    user: svc-integration-admin
  name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
  user:
    client-certificate: <path>/admin/admin.crt
    client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
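As a minimal sketch of doing that by hand (every name, path, and URL below is a placeholder for your Tectonic cluster's details; the kubeconfig downloaded from the Tectonic console normally makes this unnecessary):
# Define the cluster endpoint and its CA certificate.
kubectl config set-cluster my-cluster --server=https://my-cluster.example.com:443 --certificate-authority=/path/to/ca.crt
# Define the credentials to use against that cluster.
kubectl config set-credentials my-user --client-certificate=/path/to/admin.crt --client-key=/path/to/admin.key
# Tie the two together in a context and make it the active one.
kubectl config set-context my-context --cluster=my-cluster --user=my-user
kubectl config use-context my-context
# kubectl should now talk to the cluster instead of localhost:8080.
kubectl get namespaces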