Passing certificates/keys in kubeconfig throws a JSON error - kubernetes

I am using the kubectl client (1.7.0) on Windows to connect to a remote cluster.
The config file on Windows (located in the .kube directory) is configured as follows:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: C:\Users\DK05478\.kube\ca.crt
    server: https://10.99.70.153:6443
  name: devo
contexts:
- context:
    cluster: devo
    user: admindevo
  name: devo
current-context: devo
kind: Config
preferences: {}
users:
- name: admindevo
  user:
    client-certificate-data: C:\Users\DK05478\.kube\apiserver.crt
    client-key-data: C:\Users\DK05478\.kube\apiserver.key
I downloaded these certificate files from the remote system to my local machine, but this does not work. It throws the following error:
C:\Windows\kubernetes>kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
error: Error loading config file "C:\Users\DK05478/.kube/config": [pos 107]: json: error decoding base64 binary 'C:\Users\DK05478\.kube\ca.crt': illegal base64 data at input byte 1
How can I fix this issue? What am I doing wrong?

Remove the -data suffix from certificate-authority-data, client-certificate-data and client-key-data. As @sfgroups said, the xxx-data fields expect a base64-encoded cert/key, not a file path.
Once you do, your kubeconfig should look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\DK05478\.kube\ca.crt
    server: https://10.99.70.153:6443
  name: devo
contexts:
- context:
    cluster: devo
    user: admindevo
  name: devo
current-context: devo
kind: Config
preferences: {}
users:
- name: admindevo
  user:
    client-certificate: C:\Users\DK05478\.kube\apiserver.crt
    client-key: C:\Users\DK05478\.kube\apiserver.key

certificate-authority-data:, client-certificate-data: and client-key-data: should not reference a file; they expect base64-encoded values. You can look at the .kube/config file on your cluster master for reference.
Look at this page for a base64 usage example: https://kubernetes.io/docs/concepts/configuration/secret/
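If you want to keep the -data fields instead of switching to file paths, you can base64-encode each file yourself and paste the resulting single-line string as the value. A minimal sketch, assuming a Linux shell with GNU coreutils base64 (on Windows, PowerShell's [Convert]::ToBase64String could produce the same output):
base64 -w0 ca.crt            # value for certificate-authority-data
base64 -w0 apiserver.crt     # value for client-certificate-data
base64 -w0 apiserver.key     # value for client-key-data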

Related

error: kubectl get pods -> couldn't get current server API group list |docker-desktop

I've been building and destroying a couple of Terraform projects, and after a couple of hours I ran into a weird error:
kubectl get pods
E0117 14:10:23.537699 21524 memcache.go:238] couldn't get current server API group list: Get "https://192.168.59.102:8443/api?t
E0117 14:10:33.558130 21524 memcache.go:238] couldn't get current server API group list: Get "https://192.168.59.102:8443/api?t
I tried to check everything I could, and even restored and purged data in docker-desktop, but it didn't help.
My .kube/config:
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority: C:\Users\dani0\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Jan 2023 14:04:24 IST
        provider: minikube.sigs.k8s.io
        version: v1.28.0
      name: cluster_info
    server: https://192.168.59.102:8443
  name: minikube
contexts:
- context:
    cluster: docker-desktop
  name: docker-desktop
- context:
    extensions:
    - extension:
        last-update: Tue, 17 Jan 2023 14:04:24 IST
        provider: minikube.sigs.k8s.io
        version: v1.28.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: minikube
  user:
    client-certificate: C:\Users\dani0\.minikube\profiles\minikube\client.crt
    client-key: C:\Users\dani0\.minikube\profiles\minikube\client.key
Well, I fixed it by deleting the minikube cluster with "minikube delete" and then running "minikube start" again, and now it seems to work for me :)
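For reference, the recovery sequence boils down to the following (the verification commands at the end are my addition, not part of the original answer):
minikube delete                  # remove the broken minikube cluster and its kubeconfig entries
minikube start                   # recreate the cluster from scratch
kubectl config current-context   # should print "minikube" again
kubectl get pods -A              # should now reach the API server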

kubernetes: change the current/default context via kubectl command

I am doing an exercise from KodeKloud, which provides CKAD certification training.
The exercise has a my-kube-config.yml file located under /root/. The file content is below:
(I omitted some unrelated parts)
apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: development
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
- name: test-cluster-1
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
contexts:
- name: test-user@production
  context:
    cluster: production
    user: test-user
- name: research
  context:
    cluster: test-cluster-1
    user: dev-user
users:
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
current-context: test-user@development
The exercise asks me to:
use the dev-user to access test-cluster-1. Set the current context
to the right one so I can do that.
Since the config file contains a context named research that meets the requirement, I ran the following command to change the current context to the required one:
kubectl config use-context research
but the console gives me the error: error: no context exists with the name: "research".
OK, I guessed that maybe the name research is not acceptable and that I have to follow the <user-name>@<cluster-name> convention? I was not sure, but I then tried the following:
I modified the name from research to dev-user@test-cluster-1, so that the context part becomes:
- name: dev-user@test-cluster-1
  context:
    cluster: test-cluster-1
    user: dev-user
After that I ran the command kubectl config use-context dev-user@test-cluster-1, but I get the error:
error: no context exists with the name: "dev-user@test-cluster-1"
Why? Based on the course material, that is the way to change the default/current context. Is the course outdated, so that I am using a deprecated approach? What is the problem?
Your initial idea was correct. You need to change the context to research, which can be done using
kubectl config use-context research
But in this instance the command is not applied to the correct config. You can see the difference by checking the current context with and without a kubeconfig pointing at the my-kube-config file.
kubectl config current-context
kubernetes-admin@kubernetes
kubectl config --kubeconfig=/root/my-kube-config current-context
test-user@development
So run the use-context command with the correct kubeconfig:
kubectl config --kubeconfig=/root/my-kube-config use-context research
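If you don't want to pass --kubeconfig on every subsequent command, a common alternative (not part of the original answer) is to point the KUBECONFIG environment variable at the file for the whole shell session:
export KUBECONFIG=/root/my-kube-config
kubectl config use-context research
kubectl config current-context   # should now print "research"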
To be able to change the context, you have to edit the $HOME/.kube/config file with your config data and merge it with the default one. I tried to replicate your config file and it was possible to change the context; however, your config file looks very strange.
See the lines from my console for your reference:
bazhikov@cloudshell:~ (nb-project-326714)$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
  name: development
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.246.22.167
  name: gke_nb-project-326714_europe-west2_cluster-west2
...
...
...
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
bazhikov@cloudshell:~ (nb-project-326714)$ kubectl config use-context research
Switched to context "research".
Copy your default config file before editing it if you don't want to ruin your cluster config :)
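One way to do the merge described above, as a sketch (assuming the exercise file lives at /root/my-kube-config; back up the original first, as suggested):
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=$HOME/.kube/config:/root/my-kube-config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
kubectl config use-context research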

Use multiple contexts with same user-name in kubectl config

I want to use multiple clusters with kubectl, so I either put everything into one config or add one config file per cluster to the KUBECONFIG env variable. That's all fine.
My problem is that I have users with the same user name for each cluster, but they use different client-key-data per cluster (context), and since the context references only that user name it's not clear which user belongs to which cluster.
Here is an example:
Cluster 1:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.11.12.13:8888
  name: team-cluster
contexts:
- context:
    cluster: team-cluster
    user: kubernetes-admin
  name: kubernetes-admin@team-cluster
users:
- name: kubernetes-admin
  user:
    client-certificate-data: XXYYYZZZ
    client-key-data: XXXYYYZZZ
Cluster 2:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.11.12.14:8888
  name: dev-cluster
contexts:
- context:
    cluster: dev-cluster
    user: kubernetes-admin
  name: kubernetes-admin@dev-cluster
users:
- name: kubernetes-admin
  user:
    client-certificate-data: AABBCC
    client-key-data: AABBCC
As you can see, both clusters have a user named kubernetes-admin, but from the context it's not clear which one is meant. Maybe there's another way to give it a unique identifier that the context can use.
Maybe the solution is obvious, but I haven't found any example for such a case. Thanks for any help.
I had the same issue with my config and found out that the name under users is not the username used to log in; it's just a name used to identify that user section in the config. In your case, only the cert/key is used to identify who you are. So you can use:
users:
- name: kubernetes-admin-1
  user:
    client-certificate-data: AABBCC
    client-key-data: AABBCC
- name: kubernetes-admin-2
  user:
    client-certificate-data: XXYYYZZZ
    client-key-data: XXXYYYZZZ
and refer to it in the context just by that key:
contexts:
- context:
    cluster: dev-cluster
    user: kubernetes-admin-1
Full config:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.11.12.13:8888
  name: team-cluster
- cluster:
    server: https://10.11.12.14:8888
  name: dev-cluster
contexts:
- context:
    cluster: team-cluster
    user: kubernetes-admin-1
  name: kubernetes-admin@team-cluster
- context:
    cluster: dev-cluster
    user: kubernetes-admin-2
  name: kubernetes-admin@dev-cluster
users:
- name: kubernetes-admin-1
  user:
    client-certificate-data: XXYYYZZZ
    client-key-data: XXXYYYZZZ
- name: kubernetes-admin-2
  user:
    client-certificate-data: AABBCC
    client-key-data: AABBCC
For auth methods that require a username, it is specified like this:
users:
- name: kubernetes-admin-with-password
  user:
    username: kubernetes-admin
    password: mySecretPass
Using more than one kubeconfig is not very comfortable, since you need to specify it for each command. You can have as many contexts and users as you want in one config and select the right context (the selected context is saved as the default).
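With the merged single config above, switching between the clusters becomes a matter of picking the context (a small usage sketch based on the context names in that config):
kubectl config use-context kubernetes-admin@team-cluster   # work against team-cluster
kubectl config use-context kubernetes-admin@dev-cluster    # work against dev-cluster
kubectl config current-context                             # verify which one is active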
If you have multiple kubeconfig files in the KUBECONFIG variable, kubectl internally merges them before use (see the kubeconfig merging documentation). So, if you have two users with the same name in your kubeconfig files, they will override each other and you get either one or the other.
The solution is either to use different names for the users in the various kubeconfig files, to explicitly specify one of the kubeconfig files (e.g. kubectl --kubeconfig dev-cluster.conf), or to have only a single kubeconfig file in the KUBECONFIG variable at a time.
In general, I would recommend the first approach and use a unique name for each different set of credentials (i.e. user) across your entire local configuration.
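To see what kubectl actually ends up with after the merge, you can point KUBECONFIG at both files and inspect the result (the file names here are hypothetical placeholders for your two configs):
export KUBECONFIG=$HOME/kube/team-cluster.conf:$HOME/kube/dev-cluster.conf
kubectl config get-contexts   # lists the contexts contributed by both files
kubectl config view           # only one "kubernetes-admin" user entry survives the merge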
The user name in the kubeconfig doesn't matter; it is only used as a reference for the respective context. It is actually the common name (CN) from the client cert that is used for authorization.
Working config (I copied all clusters' CAs, certs and keys to the $HOME/pki dir):
apiVersion: v1
kind: Config
current-context: "c1"
preferences: {}
clusters:
- cluster:
    certificate-authority: ../pki/c1_ca.pem
    server: https://20.100.1.10:6443
  name: c1
- cluster:
    certificate-authority: ../pki/c2_ca.pem
    server: https://20.100.1.8:6443
  name: c2
- cluster:
    certificate-authority: ../pki/c3_ca.pem
    server: https://20.100.1.11:6443
  name: c3
contexts:
- context:
    cluster: c1
    namespace: default
    user: c1
  name: c1
- context:
    cluster: c2
    namespace: default
    user: c2
  name: c2
- context:
    cluster: c3
    namespace: default
    user: c3
  name: c3
users:
- name: c1
  user:
    client-certificate: ../pki/c1_client_cert.pem
    client-key: ../pki/c1_client_key.pem
- name: c2
  user:
    client-certificate: ../pki/c2_client_cert.pem
    client-key: ../pki/c2_client_key.pem
- name: c3
  user:
    client-certificate: ../pki/c3_client_cert.pem
    client-key: ../pki/c3_client_key.pem

Unable to sign in with default config file generated with Oracle cloud

I have generated a config file with Oracle Cloud for Kubernetes. The generated file keeps throwing the error "Not enough data to create auth info structure." What are the methods for fixing this?
I have created a new Oracle Cloud account and set up a cluster for Kubernetes (small, with only 2 nodes, using quick setup). When I upload the generated config file to the Kubernetes dashboard, it throws the error "Not enough data to create auth info structure".
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURqRENDQW5TZ0F3SUJBZ0lVZFowUzdXTTFoQUtDakRtZGhhbWM1VkxlRWhrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hqRUxNQWtHQTFVRUJoTUNWVk14RGpBTUJnTlZCQWdUQlZSbGVHRnpNUTh3RFFZRFZRUUhFd1pCZFhOMAphVzR4RHpBTkJnTlZCQW9UQms5eVlXTnNaVEVNTUFvR0ExVUVDeE1EVDBSWU1ROHdEUVlEVlFRREV3WkxPRk1nClEwRXdIaGNOTVRrd09USTJNRGt6T0RBd1doY05NalF3T1RJME1Ea3pPREF3V2pCZU1Rc3dDUVlEVlFRR0V3SlYKVXpFT01Bd0dBMVVFQ0JNRlZHVjRZWE14RHpBTkJnTlZCQWNUQmtGMWMzUnBiakVQTUEwR0ExVUVDaE1HVDNKaApZMnhsTVF3d0NnWURWUVFMRXdOUFJGZ3hEekFOQmdOVkJBTVRCa3M0VXlCRFFUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLSDFLeW5lc1JlY2V5NVlJNk1IWmxOK05oQ1o0SlFCL2FLNkJXMzQKaE5lWjdzTDFTZjFXR2k5ZnRVNEVZOFpmNzJmZkttWVlWcTcwRzFMN2l2Q0VzdnlUc0EwbE5qZW90dnM2NmhqWgpMNC96K0psQWtXWG1XOHdaYTZhMU5YbGQ4TnZ1YUtVRmdZQjNxeWZYODd3VEliRjJzL0tyK044NHpWN0loMTZECnVxUXp1OGREVE03azdwZXdGN3NaZFBSOTlEaGozcGpXcGRCd3I1MjN2ZWV0M0lMLzl3TXN6VWtkRzU3MnU3aXEKWG5zcjdXNjd2S25QM0U0Wlc1S29YMkRpdXpoOXNPZFkrQTR2N1VTeitZemllc1FWNlFuYzQ4Tk15TGw4WTdwcQppbEd2SzJVMkUzK0RpWXpHbFZuUm1GU1A3RmEzYmFBVzRtUkJjR0c0SXk5QVZ5TUNBd0VBQWFOQ01FQXdEZ1lEClZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGUFprNlI0ZndpOTUKOFR5SSt0VWRwaExaT2NYek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ0g2RVFHbVNzakxsQURKZURFeUVFYwpNWm9abFU5cWs4SlZ3cE5NeGhLQXh2cWZjZ3BVcGF6ZHZuVmxkbkgrQmZKeDhHcCszK2hQVjJJZnF2bzR5Y2lSCmRnWXJJcjVuLzliNml0dWRNNzhNL01PRjNHOFdZNGx5TWZNZjF5L3ZFS1RwVUEyK2RWTXBkdEhHc21rd3ZtTGkKRmd3WUJHdXFvS0NZS0NSTXMwS2xnMXZzMTMzZE1iMVlWZEFGSWkvTWttRXk1bjBzcng3Z2FJL2JzaENpb0xpVgp0WER3NkxGRUlKOWNBNkorVEE3OGlyWnJyQjY3K3hpeTcxcnFNRTZnaE51Rkt6OXBZOGJKcDlNcDVPTDByUFM0CjBpUjFseEJKZ2VrL2hTWlZKNC9rNEtUQ2tUVkFuV1RnbFJpTVNiRHFRbjhPUVVmd1kvckY3eUJBTkkxM2QyMXAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://czgkn3bmu4t.uk-london-1.clusters.oci.oraclecloud.com:6443
  name: cluster-czgkn3bmu4t
contexts:
- context:
    cluster: cluster-czgkn3bmu4t
    user: user-czgkn3bmu4t
  name: context-czgkn3bmu4t
current-context: context-czgkn3bmu4t
kind: ''
users:
- name: user-czgkn3bmu4t
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t
      command: oci
      env: []
If you could help me resolve this, I would be extremely grateful.
You should be able to solve this by downloading a v1 kubeconfig, i.e. by specifying the --token-version=1.0.0 flag on the create-kubeconfig command:
oci ce cluster create-kubeconfig <options> --token-version=1.0.0
Then use that kubeconfig in the dashboard.
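A more complete invocation might look like the following (a sketch: the cluster OCID is the one from the question, while the --file and --region values are assumptions you would replace with your own):
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1.uk-london-1.aaaaaaaaae3deztchfrwinjwgiztcnbqheydkyzyhbrgkmbvmczgkn3bmu4t \
  --file ~/.kube/config \
  --region uk-london-1 \
  --token-version=1.0.0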

kubectl does not work with multiple clusters config

I have ~/.kube/config with following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
  name: kubernetes-jenkins
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.sk1.us-east-1.eks.amazonaws.com
  name: kuberntes-dev
contexts:
- context:
    cluster: kubernetes-dev
    user: aws-dev
  name: aws-dev
- context:
    cluster: kubernetes-jenkins
    user: aws-jenkins
  name: aws-jenkins
current-context: aws-dev
kind: Config
preferences: {}
users:
- name: aws-dev
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKS_DEV_CLUSTER
      command: heptio-authenticator-aws
      env: null
- name: aws-jenkins
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKS_JENKINS_CLUSTER
      command: heptio-authenticator-aws
      env: null
But when I try kubectl cluster-info I get:
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I understand, something is wrong in my kubeconfig, but I don't see what exactly.
I also tried to find related issues, but with no luck.
Could you suggest something?
Thanks.
You need to choose the context that you'd like to use. More information on how to use multiple clusters with multiple users can be found in the Kubernetes documentation.
Essentially, you can view your current context (for the currently configured cluster):
$ kubectl config current-context
To view all the configured clusters:
$ kubectl config get-clusters
And to choose your context:
$ kubectl config use-context <context-name>
There are options to set different users per cluster in case you have them defined in your ~/.kube/config file.
Your cluster name has a typo in it (name: kuberntes-dev) compared with the reference in the context (cluster: kubernetes-dev).
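Once the cluster name is corrected so that the context reference resolves, a quick check (my addition, not part of the original answer) would be:
kubectl config get-contexts              # aws-dev should now point at an existing cluster
kubectl --context aws-dev cluster-info   # should reach the EKS endpoint instead of localhost:8080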