kubectl Please enter Username: error: EOF - kubernetes

I started getting an error in my CI process Please enter Username: error: EOF when running kubectl commands.
The kubectl version matches the cluster version, and I can run the same commands fine from my machine with the same configuration shown by kubectl config view.
Here are the logs:
+ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: REDACTED_FOR_QUESTION
  name: REDACTED_FOR_QUESTION
contexts:
- context:
    cluster: REDACTED_FOR_QUESTION
    user: REDACTED_FOR_QUESTION
  name: REDACTED_FOR_QUESTION
current-context: REDACTED_FOR_QUESTION
kind: Config
preferences: {}
users:
- name: REDACTED_FOR_QUESTION
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
++ echo 'Starting Kube proxy - port is 8001'
++ set +e
++ echo 'using kubectl'
++ sleep 1
++ kubectl proxy --port=8001
error: EOF
++ cat backingfile
++ rm backingfile
++ set -e
+ echo Starting Kube proxy - port is 8001 using kubectl Please enter Username:
Starting Kube proxy - port is 8001 using kubectl Please enter Username:
+ kubectl version
Please enter Username: error: EOF
Exited with code 1
What I am doing in my script is the following:
echo "using kubectl"
kubectl proxy --port=${KUBECTL_PROXY_PORT} > backingfile &
sleep 1 && cat backingfile && rm backingfile
set -e
As this allowed me to launch kubectl in the background but still get the logs of the command.
What is causing this error and how can I run the command successfully again? Please enter Username: error: EOF

It looks like you are pointing to different KUBECONFIG files:
1. You should verify your KUBECONFIG variable.
2. You can add the --kubeconfig=absolute_path_to_the_KUBECONFIG_file flag in your script.
3. You can combine the above with kubectl config --kubeconfig=XXXX use-context xxxx
Please follow Define clusters, users, and contexts
Hope this helps.

Related

Getting password prompted when running kubectl commands

I inherited a couple of servers running Kubernetes. And one of the things secops wants me to do is install an agent on the server. One of the first commands to run is
kubectl create secret generic
Running this, I am prompted for username and password. No one here knows what this is b/c the dev who set up the server is gone. So I don't know how to run this command and get past the username/password. An obvious suggestion from someone else was using default user/pass but I can't even find that online. Found this to help get info on the server:
kubectl config view
Output of this command:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Server:
CentOS Linux release 7.9.2009
Kernel - 5.17.2-1.el7.elrepo.x86_64
Any help is appreciated.
It is plausible that the kubeconfig file you are using is corrupt. You can reproduce similar symptoms (user/pass prompt) by editing the user name in your kubeconfig file. You need to find out (or create) the right kubeconfig file for the user. If you are an admin, you can find it at /etc/kubernetes/admin.conf on the master node.
Here are steps to reproduce the issue:
// This is my kubeconfig file, working fine
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
// I searched for the user name
kubectl config view |grep 'user: default'
user: default
// corrupted the user name from default to default1
sed -i.bak 's/user: default/user: default1/g' ~/.kube/config
// now getting prompted for user/password
kubectl get pod --kubeconfig .kube/config
Please enter Username:
^C
//reverted the changes done earlier
sed -i 's/user: default1/user: default/g' ~/.kube/config
// commands working fine now
kubectl get pod --kubeconfig .kube/config
No resources found in default namespace.
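A preflight check for the condition reproduced in the answer above, written as an added example (not code from the original posts): it reports when the current context names a user or cluster that has no entry, or when no current context is set. The function names and the sample kubeconfig are constructed for illustration, and the parser assumes the layout printed by `kubectl config view`.

```shell
# Added example. Reads a kubeconfig (layout of `kubectl config view`) on stdin,
# prints what is wrong with the current context, and returns non-zero if anything is.
lint_kubeconfig() {
    awk '
        function unq(s) { gsub(/"/, "", s); return s }
        /^[A-Za-z][A-Za-z-]*:/ { section = $1 }      # top-level key such as "users:"
        /^current-context:/    { current = unq($2) }
        /^(- |  )name:/ {                             # name of a list item
            n = unq($NF)
            if (section == "clusters:") clusters[n] = 1
            if (section == "users:")    users[n] = 1
            if (section == "contexts:") { cl[n] = c; us[n] = u; c = u = "" }
        }
        section == "contexts:" && /^    cluster:/ { c = unq($2) }
        section == "contexts:" && /^    user:/    { u = unq($2) }
        END {
            if (current == "") { print "current-context is not set"; exit 1 }
            if (!(current in us)) { print "context \"" current "\" is not defined"; exit 1 }
            if (!(us[current] in users)) { print "context " current ": user \"" us[current] "\" has no entry under users"; bad = 1 }
            if (!(cl[current] in clusters)) { print "context " current ": cluster \"" cl[current] "\" has no entry under clusters"; bad = 1 }
            exit bad + 0
        }
    '
}

# Constructed sample mirroring the edit in the answer: the context names user
# "default1" while only "default" is defined.
sample_kubeconfig() {
    cat <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default1
  name: default
current-context: default
users:
- name: default
  user:
    client-key-data: REDACTED
EOF
}

sample_kubeconfig | lint_kubeconfig || echo "fix the kubeconfig before running kubectl"
```

In a CI job the same function can be fed with `kubectl config view | lint_kubeconfig` before any command that would otherwise stop at a prompt.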

How to configure kubectl to act as a service account?

I wish to run a Drone CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so (e.g. 1, e.g. ) are not built for arm64 architecture, so I believe I need to build my own.
I am attempting to adapt the commands from here (see also README.md, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:
$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-demo-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: drone-demo-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME SECRETS AGE
drone-demo-service-account 1 10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the bitnami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /ca.crt
    server: https://rassigma.avril:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
If I copy the ~/.kube/config from my laptop to the docker container, kubectl commands succeed as expected - so, this isn't a networking issue, just an authorization one. I do note that my laptop-based ~/.kube/config lists client-certificate-data and client-key-data rather than token under users: user:, but I suspect that's because my base config is recording a non-service-account.
How can I set up kubectl to authorize as a service account?
Some reading I have done that didn't answer the question for me:
Kubernetes documentation on AuthN/AuthZ
Google Kubernetes Engine article on service accounts
Configure Service Accounts for Pods (this described how to create and associate the accounts, but not how to act as them)
Two blog posts (1, 2) that refer to Service Accounts
It appears you have used | base64 instead of | base64 --decode

kubectl: error You must be logged in to the server (Unauthorized)

I've created a service account for CI purposes and am testing it out. Upon trying any kubectl command, I get the error:
error: You must be logged in to the server (Unauthorized)
Below is my .kube/config file
apiVersion: v1
clusters:
- cluster:
    server: <redacted>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: bamboo
  name: default
current-context: 'default'
kind: Config
preferences: {}
users:
- name: bamboo
  user:
    token: <redacted>
The service account exists and has a cluster role: edit and cluster role binding attached.
What am I doing wrong?
I can reproduce the error if I copy the token directly without decoding it. I then applied the following steps to decode and set the token, and it works as expected.
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<serviceaccount-name> -o jsonpath='{.secrets[0].name}'`
$ TOKEN=`kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode`
$ kubectl config set-credentials <service-account-name> --token=$TOKEN
So, I think it might be your case.
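Both answers above (the `| base64` versus `| base64 --decode` mix-up, and copying `.data.token` without decoding) come down to handing kubectl a token that is still base64-wrapped. The following added example, which is not code from the original posts, counts how many base64 layers surround a service-account JWT so a script can check the value before calling `kubectl config set-credentials`. The function names and the sample token are constructed for illustration.

```shell
# Added example. A usable bearer token is a JWT: three dot-separated segments
# whose first segment starts with "eyJ" (the base64 encoding of '{"').
looks_like_jwt() {
    case "$1" in
        eyJ*.*.*) return 0 ;;
        *) return 1 ;;
    esac
}

# Prints the number of base64 layers around the JWT (0 = ready for --token),
# or returns 1 if no JWT appears within three layers.
token_b64_layers() {
    value=$1
    depth=0
    while [ "$depth" -le 3 ]; do
        if looks_like_jwt "$value"; then
            echo "$depth"
            return 0
        fi
        # Strip line wraps added by base64, then peel one layer.
        value=$(printf '%s' "$value" | tr -d '\n' | base64 -d 2>/dev/null) || return 1
        depth=$((depth + 1))
    done
    return 1
}

# Constructed token for the demo; not a real credential.
sample_jwt() {
    h=$(printf '%s' '{"alg":"RS256","typ":"JWT"}' | base64 | tr -d '=\n')
    p=$(printf '%s' '{"sub":"system:serviceaccount:default:demo"}' | base64 | tr -d '=\n')
    printf '%s.%s.%s' "$h" "$p" "c2lnbmF0dXJl"
}

# The Secret stores the token with one layer; piping it through `base64`
# again, as in the question, leaves two.
twice=$(sample_jwt | base64 | tr -d '\n' | base64 | tr -d '\n')
echo "base64 layers around token: $(token_b64_layers "$twice")"
```

A result of 1 means the value was copied straight from the Secret's `.data.token`; anything other than 0 will be rejected by the API server as Unauthorized.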

rendering env-var inside kubernetes kubeconfig yaml file

I need to use an environment variable inside my kubeconfig file to point the NODE_IP of the Kubernetes API server.
My config is:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://$NODE_IP:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    ......
But it seems like the kubeconfig file is not getting rendered variables when I run the command:
kubectl --kubeconfig mykubeConfigFile get pods.
It complains as below:
Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host
Did anyone try to do something like this or is it possible to make it work?
Thanks in advance
This thread contains explanations and answers:
... either wait for the implementation of the templating proposal in k8s (Implement templates · Issue #23896 · kubernetes/kubernetes; not merged yet)
... or preprocess your yaml with tools like:
envsubst:
export NODE_IP="127.0.11.1"
# kubectl reads the file named by --kubeconfig, not stdin, so render to a separate file first
envsubst '$NODE_IP' < mykubeConfigFile.yml > rendered-kubeconfig.yml
kubectl --kubeconfig rendered-kubeconfig.yml get pods
sed:
sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml > rendered-kubeconfig.yml
kubectl --kubeconfig rendered-kubeconfig.yml get pods

Kubernetes context is not set

I have this config file
apiVersion: v1
clusters:
- cluster:
    server: [REDACTED] // IP of my cluster
  name: staging
contexts:
- context:
    cluster: staging
    user: ""
  name: staging-api
current-context: staging-api
kind: Config
preferences: {}
users: []
I run this command
kubectl config --kubeconfig=kube-config use-context staging-api
I get this message
Switched to context "staging-api".
I then run
kubectl get pods
and I get this message
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I can tell from the docs
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I'm doing it right. Am I missing something?
Yes. Try the following steps to access the Kubernetes cluster. These steps assume that you have your k8s certificates in /etc/kubernetes.
You need to set the cluster name, kubeconfig, user and kube cert file in the following variables and then simply run these commands:
CLUSTER_NAME="kubernetes"
KCONFIG=admin.conf
KUSER="kubernetes-admin"
KCERT=admin
cd /etc/kubernetes/
$ kubectl config set-cluster ${CLUSTER_NAME} \
    --certificate-authority=pki/ca.crt \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${KCONFIG}
$ kubectl config set-credentials kubernetes-admin \
    --client-certificate=admin.crt \
    --client-key=admin.key \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/admin.conf
$ kubectl config set-context ${KUSER}#${CLUSTER_NAME} \
    --cluster=${CLUSTER_NAME} \
    --user=${KUSER} \
    --kubeconfig=${KCONFIG}
$ kubectl config use-context ${KUSER}#${CLUSTER_NAME} --kubeconfig=${KCONFIG}
$ kubectl config view --kubeconfig=${KCONFIG}
After this you will be able to access the cluster. Hope this helps.
You need to fetch the credentials of the running cluster. Try this:
gcloud container clusters get-credentials <cluster_name> --zone <zone_name>
More info:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
I've got the same problem as mentioned in the title.
When I executed:
kubectl config current-context
The output was:
error: current-context is not set
And in my case it was an indentation problem.
One whitespace before current-context caused me a few hours of debugging:
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:...:cluster/...
    user: arn:aws:eks:us-east-2:...:cluster/...
  name: arn:aws:eks:us-east-2:...:cluster/...
 current-context: arn:aws:eks:us-east-2:...:cluster/... <- Whitespace at the beginning of the row was the source of the error.
I had the same issue on a Mac M1...
The problem was that I am using kubectx and kubens, so those tools are the ones controlling the context and namespace.
In this situation, the correct command has to be
kubectx staging-api
More information on the official repository