I have a new Civo cluster and I downloaded its kubeconfig. The following command does not work; it only prints the kubectl config help text:
kubectl config --kubeconfig=civo-enroute-demo-kubeconfig
Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
Output of the following command:
kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://00.220.23.220:6443
  name: enroute-demo
contexts:
- context:
    cluster: enroute-demo
    user: enroute-demo
  name: enroute-demo
current-context: enroute-demo
kind: Config
preferences: {}
users:
- name: enroute-demo
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
What should the kubectl command look like?
You can use a kubeconfig file with kubectl as follows:
kubectl --kubeconfig=config_file.yaml <command>
Example:
kubectl --kubeconfig=civo-enroute-demo-kubeconfig.yaml get nodes
OR
You can export the KUBECONFIG environment variable and then run kubectl commands directly.
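For example, a minimal sketch assuming the downloaded file sits in your current directory (adjust the path and file name to match yours):
# point kubectl at the Civo kubeconfig for this shell session
export KUBECONFIG=$PWD/civo-enroute-demo-kubeconfig.yaml
kubectl get nodes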
I inherited a couple of servers running Kubernetes, and one of the things secops wants me to do is install an agent on the servers. One of the first commands to run is
kubectl create secret generic
Running this, I am prompted for a username and password. No one here knows what these are because the dev who set up the servers is gone, so I don't know how to run this command and get past the username/password prompt. An obvious suggestion from someone else was to use a default user/pass, but I can't even find that online. I found this command to help get info on the server:
kubectl config view
Output of this command:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Server:
CentOS Linux release 7.9.2009
Kernel - 5.17.2-1.el7.elrepo.x86_64
Any help is appreciated.
It is plausible that the kubeconfig file you are using is corrupt. You can reproduce similar symptoms (the username/password prompt) by editing the user name in your kubeconfig file. You need to find (or create) the right kubeconfig file for the user. If you are an admin, you can find it at /etc/kubernetes/admin.conf on the master node.
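For example, a rough sketch assuming you have root access on that node (the copy destination is just the usual convention; adjust as needed):
# verify the admin kubeconfig works
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
# make it the default kubeconfig for your user
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config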
Here are steps to reproduce the issue:
// This is my kubeconfig file, working fine
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
// I searched for the user name
kubectl config view | grep 'user: default'
user: default
// corrupted the user name from default to default1
sed -i.bak 's/user: default/user: default1/g' ~/.kube/config
// now getting prompted for user/password
kubectl get pod --kubeconfig .kube/config
Please enter Username:
^C
//reverted the changes done earlier
sed -i 's/user: default1/user: default/g' ~/.kube/config
// commands working fine now
kubectl get pod --kubeconfig .kube/config
No resources found in default namespace.
THE PLOT:
I am working in a Kubernetes environment where we have PROD and ITG setups. The ITG setup is a multi-cluster environment, whereas the PROD setup is a single-cluster environment.
I am trying to automate some processes using Python where I have to deal with kubeconfig files, and I am using the kubernetes library for it.
THE PROBLEM:
The kubeconfig file for PROD has the "current-context" key, but it is missing from the kubeconfig file for ITG.
prdconfig:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster3.url.com:3600
  name: cluster-ABC
contexts:
- context:
    cluster: cluster-LMN
    user: cluster-user
  name: cluster-LMN-context
current-context: cluster-LMN-context
kind: Config
preferences: {}
users:
- name: cluster-user
  user:
    exec:
      command: kubectl
      apiVersion: <clientauth/version>
      args:
      - kubectl-custom-plugin
      - authenticate
      - https://cluster.url.com:8080
      - --user=user
      - --token=/api/v2/session/xxxx
      - --token-expiry=1000000000
      - --force-reauth=false
      - --insecure-skip-tls-verify=true
itgconfig:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster1.url.com:3600
  name: cluster-ABC
- cluster:
    insecure-skip-tls-verify: true
    server: https://cluster2.url.com:3601
  name: cluster-XYZ
contexts:
- context:
    cluster: cluster-ABC
    user: cluster-user
  name: cluster-ABC-context
- context:
    cluster: cluster-XYZ
    user: cluster-user
  name: cluster-XYZ-context
kind: Config
preferences: {}
users:
- name: cluster-user
  user:
    exec:
      command: kubectl
      apiVersion: <clientauth/version>
      args:
      - kubectl-custom-plugin
      - authenticate
      - https://cluster.url.com:8080
      - --user=user
      - --token=/api/v2/session/xxxx
      - --token-expiry=1000000000
      - --force-reauth=false
      - --insecure-skip-tls-verify=true
When I try loading the kubeconfig file for PROD using config.load_kube_config(os.path.expanduser('~/.kube/prdconfig')) it works.
And when I try loading the kubeconfig file for ITG using config.load_kube_config(os.path.expanduser('~/.kube/itgconfig')), I get the following error:
ConfigException: Invalid kube-config file. Expected key current-context in C:\Users\<username>/.kube/itgconfig
It is clear from the error message that the file is considered invalid because it does not have the "current-context" key in it.
THE SUB-PLOT:
When working with kubectl, the missing "current-context" does not make any difference, as we can always specify the context along with the command. But the load_kube_config() function makes it mandatory to have "current-context" available.
THE QUESTION:
So, is "current-context" a mandatory key in kubeconfig file?
THE DISCLAIMER:
I am very new to kubernetes and have very little experience working with it.
As described in the comments:
If we want a kubeconfig file to work out of the box with a specific cluster by default, whether via kubectl or a Python script, we can mark one of the contexts in the kubeconfig file as the default by specifying current-context.
Note about Context:
A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.
In order to mark one of our contexts (e.g. dev-frontend) in our kubeconfig file as the default one, run:
kubectl config use-context dev-frontend
Now whenever you run a kubectl command, the action will apply to the cluster and namespace listed in the dev-frontend context, and the command will use the credentials of the user listed in the dev-frontend context.
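You can verify which context is currently the default with:
kubectl config current-context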
Please take a look at:
- Merging kubeconfig files:
Determine the context to use based on the first hit in this chain:
1. Use the --context command-line flag if it exists.
2. Use the current-context from the merged kubeconfig files.
An empty context is allowed at this point.
Determine the cluster and user. At this point, there might or might not be a context. Determine the cluster and user based on the first hit in this chain, which is run twice: once for user and once for cluster:
1. Use a command-line flag if it exists: --user or --cluster.
2. If the context is non-empty, take the user or cluster from the context.
The user and cluster can be empty at this point.
Whenever we run kubectl commands without a current-context set, we should provide additional parameters to tell kubectl which configuration to use; in your example it could be, for instance:
kubectl --kubeconfig=/your_directory/itgconfig get pods --context cluster-ABC-context
As described earlier, to simplify this task we can configure current-context in the kubeconfig file:
kubectl config --kubeconfig=/your_directory/itgconfig use-context cluster-ABC-context
Going further into the errors generated by your script, we should notice the error from config/kube_config.py:
config/kube_config.py", line 257, in set_active_context context_name = self._config['current-context']
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected key current-context in ...
Here is an example with the additional context="cluster-ABC-context" parameter:
from kubernetes import client, config
config.load_kube_config(config_file='/example/data/merged/itgconfig', context="cluster-ABC-context")
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
...
Listing pods with their IPs:
10.200.xxx.xxx kube-system coredns-558bd4d5db-qpzb8
192.168.xxx.xxx kube-system etcd-debian-test
...
Additional information
Configure Access to Multiple Clusters
Organizing Cluster Access Using kubeconfig Files
I have two kubeconfig files. The first one, shown below, is the one I use to communicate with the cluster; the second one is for Aquasec and is in JSON format. How can I merge these two?
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://656835E69F31E2933asdAFAKE3F5904sadFDDC112dsasa7.yld432.eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:test651666:cluster/Magento
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.142.242.111:6443
  name: kubernetes
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:test651666:cluster/testing
    user: arn:aws:eks:eu-west-2:test651666:cluster/testing
  name: arn:aws:eks:eu-west-2:test651666:cluster/testing
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: arn:aws:eks:eu-west-2:test651666:cluster/testing
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:test651666:cluster/testing
You can set the KUBECONFIG environment variable to multiple config files delimited by : and kubectl will automatically merge them behind the scenes.
For example:
export KUBECONFIG=config:my-config.json
In the export above, config is the default config file contained within ~/.kube and my-config.json would be your second config file, which you said is in JSON format.
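For example, a sketch assuming both files live under ~/.kube (adjust the paths to wherever your files actually are):
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/my-config.json
# list the contexts kubectl now sees from the merged configuration
kubectl config get-contexts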
You can see the merged config using this command, which shows a unified view of the configuration that kubectl is currently using:
kubectl config view
Because kubectl automatically merges multiple configs, you shouldn't need to save the merged config to a file. But if you really want to do that, you can redirect the output, like this:
kubectl config view --flatten > merged-config.yaml
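If you do save it, you can point kubectl at the flattened file directly (the file name here is just the one from the example above):
kubectl config get-contexts --kubeconfig merged-config.yaml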
Check out Mastering the KUBECONFIG file and Organizing Cluster Access Using kubeconfig Files for more explanation and some other examples.
I need to use an environment variable inside my kubeconfig file to point to the NODE_IP of the Kubernetes API server.
My config is:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$NODE_IP:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    ......
But it seems the variables in the kubeconfig file are not being rendered when I run the command:
kubectl --kubeconfig mykubeConfigFile get pods
It complains as below:
Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host
Has anyone tried to do something like this, or is it possible to make it work?
Thanks in advance.
This thread contains explanations and answers:
... either wait for the implementation of the templating proposal in k8s (Implement templates · Issue #23896 · kubernetes/kubernetes, not merged yet),
... or preprocess your YAML with tools like:
envsubst:
export NODE_IP="127.0.11.1"
kubectl --kubeconfig <(envsubst < mykubeConfigFile.yml) get pods
sed:
kubectl --kubeconfig <(sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml) get pods
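If process substitution is not available in your shell, another option (a sketch; the temporary file name is just an assumption) is to render the template into a file once and point kubectl at that:
export NODE_IP="127.0.11.1"
# render the template into a throwaway file, then use it as the kubeconfig
envsubst < mykubeConfigFile.yml > /tmp/rendered-kubeconfig.yml
kubectl --kubeconfig /tmp/rendered-kubeconfig.yml get pods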
I'm encountering some really weird behaviour while attempting to switch contexts using kubectl.
My config file declares two contexts; one points to an in-house cluster, while the other points to an Amazon EKS cluster.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: in-house
- cluster:
    certificate-authority-data: <..>
    server: <..>
  name: eks
contexts:
- context:
    cluster: in-house
    user: divesh-in-house
  name: in-house-context
- context:
    cluster: eks
    user: divesh-eks
  name: eks-context
current-context: in-house-context
preferences: {}
users:
- name: divesh-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks"
      env: null
- name: divesh-in-house
  user:
    client-certificate-data: <..>
    client-key-data: <..>
I'm also using the aws-iam-authenticator to authenticate to the EKS cluster.
My problem is this - as long as I work with the in-house cluster, everything works fine. But, when I execute kubectl config use-context eks-context, I observe the following behaviour.
Any operation I try to perform on the cluster (say, kubectl get pods -n production) shows me a Please enter Username: prompt. I assumed the aws-iam-authenticator should have managed the authentication for me. I can confirm that running the authenticator manually (aws-iam-authenticator token -i eks) works fine for me.
Executing kubectl config view omits the divesh-eks user, so the output looks like:
users:
- name: divesh-eks
  user: {}
Switching back to the in-house cluster by executing kubectl config use-context in-house-context modifies my config file and deletes the divesh-eks user, so the config file now contains:
users:
- name: divesh-eks
  user: {}
My colleagues don't seem to face this problem.
Thoughts?
The exec portion of that config was added in kubectl 1.10 (https://github.com/kubernetes/kubernetes/pull/59495).
If you use a version of kubectl prior to that, it will not recognize the exec plugin (resulting in prompts for credentials), and if you use it to make kubeconfig changes, it will drop the exec field when it persists the changes.
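A quick way to check which client version you are running (output format varies between kubectl releases):
kubectl version --client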