Kubernetes initial password (GCP)? (not using kops?)

Generally you can use kops get secrets kube --type secret -oplaintext, but I am not running on AWS and am using GCP.
I read that kubectl config view should show you this info, but I see no such thing (wondering if this has to do with GCP serviceaccount setup, am also using GKE).
The kubectl config view returns something like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://MY_IP
  name: MY_CLUSTER_NAME
contexts:
- context:
    cluster: MY_CLUSTER_NAME
    user: MY_CLUSTER_NAME
  name: MY_CLUSTER_NAME
current-context: MY_CONTEXT_NAME
kind: Config
preferences: {}
users:
- name: MY_CLUSTER_NAME
  user:
    auth-provider:
      config:
        access-token: MY_ACCESS_TOKEN
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2019-02-27T03:20:49Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Neither Username=>Admin nor Username=>MY_CLUSTER_NAME worked with Password=>MY_ACCESS_TOKEN.
Any ideas?

Try:
gcloud container clusters describe ${CLUSTER} \
--flatten="masterAuth" \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
It's possible that your cluster has basic authentication (username/password) disabled, as this authentication mechanism is discouraged.
An alternative mechanism provided with Kubernetes Engine (and the one shown in your config) is to use your gcloud credentials to get you onto the cluster.
The following command will configure ~/.kube/config so that you may access the cluster using your gcloud credentials. It looks as though this step has been done and you can use kubectl directly.
gcloud container clusters get-credentials ${CLUSTER} \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
As long as you're logged in using gcloud with an account that's permitted to use the cluster, you should be able to:
kubectl cluster-info
kubectl get nodes
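If basic authentication is (still) enabled on the cluster, the username and password live under masterAuth; a sketch for pulling just those fields (they come back empty when basic auth is disabled):
gcloud container clusters describe ${CLUSTER} \
--zone=${ZONE} \
--project=${PROJECT} \
--format="value(masterAuth.username,masterAuth.password)"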

Related

How to properly access multiple Kubernetes clusters using kubectl

I have two clusters and the config files are stored in .kube. I am exporting KUBECONFIG as below
export KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2
checking the contexts
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
But when I choose cluster-2 as my current context I get an error
kubectl config get-contexts
CURRENT   NAME        CLUSTER     AUTHINFO           NAMESPACE
*         cluster-1   cluster-1   kubernetes-admin
          cluster-2   cluster-2   kubernetes-admin
kubectl config use-context cluster-2
Switched to context "cluster-2".
kubectl get pods -A
error: You must be logged in to the server (Unauthorized)
If I export only the config for cluster-2 and try running kubectl it works fine.
My question is whether I am exporting the config files properly, or whether I should be doing something more.
You need a separate AUTHINFO (context.user in the config file) for each cluster, each pointing at its own credentials.
For example:
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.10.190:6443
  name: cluster-1
- cluster:
    server: https://192.168.99.101:8443
  name: cluster-2
contexts:
- context:
    cluster: cluster-1
    user: kubernetes-admin-1
  name: cluster-1
- context:
    cluster: cluster-2
    user: kubernetes-admin-2
  name: cluster-2
kind: Config
preferences: {}
users:
- name: kubernetes-admin-1
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-1.crt
    client-key: /home/user/.minikube/credential-for-cluster-1.key
- name: kubernetes-admin-2
  user:
    client-certificate: /home/user/.minikube/credential-for-cluster-2.crt
    client-key: /home/user/.minikube/credential-for-cluster-2.key
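Note that when both files define a user with the same name (kubernetes-admin in your case), the kubeconfig merge keeps only the first definition it finds, which is why cluster-2 ends up Unauthorized. Once the user entries are renamed as above, you can also materialize a single merged file instead of juggling the KUBECONFIG list; a sketch using the paths from your question:
KUBECONFIG=/home/vagrant/.kube/config-cluster1:/home/vagrant/.kube/config-cluster2 \
kubectl config view --flatten > /home/vagrant/.kube/merged-config
export KUBECONFIG=/home/vagrant/.kube/merged-config
kubectl config get-contexts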
You can find more useful tips in the following article:
Using different kubectl versions with multiple Kubernetes clusters:
When you are working with multiple Kubernetes clusters, it’s easy to
mess up with contexts and run kubectl in the wrong cluster. Beyond
that, Kubernetes has restrictions for versioning mismatch between the
client (kubectl) and server (kubernetes master), so running commands
in the right context does not mean running the right client version.
To overcome this:
Use asdf to manage multiple kubectl versions
Set the KUBECONFIG env var to change between multiple kubeconfig files
Use kube-ps1 to keep track of your current context/namespace
Use kubectx and kubens to change fast between clusters/namespaces
Use aliases to combine them all together
I also recommend the following reads:
Mastering the KUBECONFIG file by Ahmet Alp Balkan (Google Engineer)
How Zalando Manages 140+ Kubernetes Clusters by Henning Jacobs (Zalando Tech)
I wrote a script to switch kubeconfig and namespace easily. Hope it can help you.
. k-use -k <kubeconfig> -n <namespace>
https://github.com/kingonion/k-use

certificate signed by unknown authority when connecting to a remote kubernetes cluster using kubectl

I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com. I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info from my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to access the remote k8s cluster successfully using the kubectl client?
One thing I have tried is using this .kube/config generated by command, but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem and, as you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
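A quick sanity check once the context is switched (a sketch; it assumes the steps above completed without errors):
kubectl config current-context
kubectl get nodes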
When kubectl interacts with the kube API server, it validates the kube API server's certificate and also sends the certificate from client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
The CA you used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you use the same CA to generate all the certificates consistently across the board, it should work.
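To check which of the two it is, compare issuers and subjects with openssl; a sketch using the host and file names from your question (adjust the port to match however you exposed the API server):
# CA that signed the certificate the API server actually presents
openssl s_client -connect k8s-ctl.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer
# CA that signed your client certificate
openssl x509 -in ssl/admin.pem -noout -issuer
# Subject of the CA you trust locally
openssl x509 -in ssl/ca.pem -noout -subject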

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get,

[xueke@master-01 admin]$ kubectl logs nginx-deployment-76bf4969df-999x8
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-76bf4969df-999x8)
[xueke@master-01 admin]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I specified the admin user here
How do I need to modify it?
The above error means your apiserver doesn't have the credentials (a kubelet client cert and key) to authenticate the kubelet's log/exec requests, hence the Forbidden error message.
You need to provide --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver; this way the apiserver authenticates to the kubelet with that certificate and key pair.
For more information, have a look at:
https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/
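For example, on a kubeadm-provisioned control plane these flags usually point at the apiserver-kubelet-client pair; a sketch (the paths are kubeadm defaults and may differ in your setup):
kube-apiserver \
... \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key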
In our case, the error stemmed from Azure services being degraded because of a bug in DNS resolution introduced in Ubuntu 18.04.
See Azure status and the technical thread. I ran this command to set a fallback DNS address in the nodes:
az vmss list-instances -g <resourcegroup> -n vmss --query "[].id" --output tsv \
| az vmss run-command invoke --scripts "echo FallbackDNS=168.63.129.16 >> /etc/systemd/resolved.conf; systemctl restart systemd-resolved.service" --command-id RunShellScript --ids @-
That's an RBAC error. The user had no permissions to see logs. If you have a user with cluster-admin permissions you can fix this error with
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin
Note: Not a good idea to give an anonymous user cluster-admin role. Will fix the issue though.
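If you do go this route, a narrower grant covering only what this particular error asks for (get on nodes/proxy) is safer than handing out cluster-admin; a sketch with made-up role names:
kubectl create clusterrole node-proxy-reader --verb=get --resource=nodes/proxy
kubectl create clusterrolebinding node-proxy-reader --clusterrole=node-proxy-reader --user=system:anonymous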

Kubernetes context is not set

I have this config file
apiVersion: v1
clusters:
- cluster:
    server: [REDACTED] // IP of my cluster
  name: staging
contexts:
- context:
    cluster: staging
    user: ""
  name: staging-api
current-context: staging-api
kind: Config
preferences: {}
users: []
I run this command
kubectl config --kubeconfig=kube-config use-context staging-api
I get this message
Switched to context "staging-api".
I then run
kubectl get pods
and I get this message
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I can tell from the docs
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I'm doing it right. Am I missing something?
Yes. Try the following steps to access the Kubernetes cluster. These steps assume that you have your k8s certificates in /etc/kubernetes.
You need to set the cluster name, kubeconfig file, user and cert file in the following variables and then simply run these commands:
CLUSTER_NAME="kubernetes"
KCONFIG=admin.conf
KUSER="kubernetes-admin"
KCERT=admin
cd /etc/kubernetes/
$ kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
$ kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
$ kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
$ kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
$ kubectl config view --kubeconfig=${KCONFIG}
After this you will be able to access the cluster. Hope this helps.
You need to fetch the credentials of the running cluster. Try this:
gcloud container clusters get-credentials <cluster_name> --zone <zone_name>
More info:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
I've got the same problem as mentioned in the title.
When I executed:
kubectl config current-context
The output was:
error: current-context is not set
And in my case it was an indentation problem.
A single whitespace before current-context cost me a few hours of debugging:
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:...:cluster/...
    user: arn:aws:eks:us-east-2:...:cluster/...
  name: arn:aws:eks:us-east-2:...:cluster/...
 current-context: arn:aws:eks:us-east-2:...:cluster/...   <- the whitespace at the beginning of this row was the source of the error
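A quick way to catch this kind of slip is to let kubectl itself read the file; a sketch:
kubectl config view --kubeconfig="$HOME/.kube/config" >/dev/null || echo "kubeconfig failed to parse"
kubectl config current-context   # still reports "current-context is not set" if the key is mis-indented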
I had the same issue on a Mac M1.
The problem was that I am using kubectx and kubens, so those tools are the ones controlling the context and namespace.
In this situation the correct command is:
kubectx staging-api
More information on the Official Repository

No access token in .kube/config

After upgrading my cluster in GKE the dashboard will no longer accept certificate authentication.
No problem, there's a token available in the .kube/config, says my colleague:
user:
  auth-provider:
    config:
      access-token: REDACTED
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/workspace/google-cloud-sdk/bin/gcloud
      expiry: 2018-01-09T08:59:18Z
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
Except in my case there isn't...
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/Dev/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
I've tried re-authenticating with gcloud, comparing gcloud settings with colleagues, updating gcloud, re-installing gcloud, and checking permissions in Cloud Platform. Pretty much everything I can think of, but still no access token is generated.
Can anyone help please?!
$ gcloud container clusters get-credentials cluster-3 --zone xxx --project xxx
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-3.
$ gcloud config list
[core]
account = xxx
disable_usage_reporting = False
project = xxx
Your active configuration is: [default]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4"
Ok, very annoying and silly answer - you have to make any request using kubectl for the token to be generated and saved into the kubeconfig file.
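In other words, something like this is enough (a sketch; any call that actually hits the API server will do):
kubectl get nodes
grep access-token ~/.kube/config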
You mentioned you "upgraded your cluster in GKE" - I'm not sure what you actually did, so I'm interpreting it as creating a new cluster. There are two things to ensure that you didn't appear to cover in your problem statement: one is that kubectl is enabled, and two is that you can actually generate a new kubeconfig file (you could easily be referring to the older ~/.kube/config from the cluster prior to the update in GKE). Therefore, running these commands ensures you have the correct authentication required and that token should become available:
$ gcloud components install kubectl
$ gcloud container clusters create <cluster-name>
$ gcloud container clusters get-credentials <cluster-name>
Then...Generate the kubeconfig file (assuming you have a running cluster on GCP and a service account configured for the project/GKE, have run kubectl proxy, etc.):
$ gcloud container clusters get-credentials <cluster_id>
This will create a ${HOME}/.kube/config file which has the token in it. Inspect the config file and you'll see the token value:
$ cat ~/.kube/config
OR
$ kubectl config view will display it to the screen...
...
users:
- name: gke_<project_id>_<zone>_<cluster_id>
  user:
    auth-provider:
      config:
        access-token: **<COPY_THIS_TOKEN>**
        cmd-args: config config-helper --format=json
        cmd-path: ...path-to.../google-cloud-sdk/bin/gcloud
        expiry: 2018-04-13T23:11:15Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
With that token copied, go back to http://localhost:8001/ and select "token", then paste the token value there...good to go
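If you'd rather not copy the token out of the kubeconfig by hand, gcloud can print a current access token for the active account directly (a sketch; this is the same kind of short-lived OAuth token the gcp auth provider caches):
gcloud auth print-access-token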