After upgrading my cluster in GKE, the dashboard will no longer accept certificate authentication.
"No problem, there's a token available in the .kube/config," says my colleague:
user:
  auth-provider:
    config:
      access-token: REDACTED
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/workspace/google-cloud-sdk/bin/gcloud
      expiry: 2018-01-09T08:59:18Z
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
Except in my case there isn't...
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /home/user/Dev/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
I've tried re-authenticating with gcloud, comparing gcloud settings with colleagues, updating gcloud, re-installing gcloud, and checking permissions in Cloud Platform. Pretty much everything I can think of, but still no access token will be generated.
Can anyone help please?!
$ gcloud container clusters get-credentials cluster-3 --zone xxx --project xxx
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-3.
$ gcloud config list
[core]
account = xxx
disable_usage_reporting = False
project = xxx
Your active configuration is: [default]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4"
OK, very annoying and silly answer: you have to make any request using kubectl for the token to be generated and saved into the kubeconfig file.
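For example, any read-only request will do (a minimal sketch):
$ kubectl get nodes                    # triggers the gcp auth provider
$ grep access-token ~/.kube/config     # the token should now be present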
You mentioned you "updated your cluster in GKE". I'm not sure what you actually did, so I'm interpreting it as creating a new cluster. There are two things to ensure that you didn't appear to cover in your problem statement: one is that the kubectl component is installed, and two is that you actually generate a new kubeconfig file (you could easily be referring to an older ~/.kube/config from the cluster prior to the update in GKE). Therefore, running these commands ensures you have the correct authentication in place, and that token should become available:
$ gcloud components install kubectl
$ gcloud container clusters create <cluster-name>
$ gcloud container clusters get-credentials <cluster-name>
Then...Generate the kubeconfig file (assuming you have a running cluster on GCP and a service account configured for the project/GKE, have run kubectl proxy, etc.):
$ gcloud container clusters get-credentials <cluster_id>
This will create a ${HOME}/.kube/config file which has the token in it. Inspect the config file and you'll see the token value:
$ cat ~/.kube/config
OR
$ kubectl config view will display it to the screen...
...
users:
- name: gke_<project_id>_<zone>_<cluster_id>
  user:
    auth-provider:
      config:
        access-token: **<COPY_THIS_TOKEN>**
        cmd-args: config config-helper --format=json
        cmd-path: ...path-to.../google-cloud-sdk/bin/gcloud
        expiry: 2018-04-13T23:11:15Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
With that token copied, go back to http://localhost:8001/, select "Token", and paste the token value there... good to go.
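If you'd rather not copy it out of the file by hand, something like this should print it (a sketch; the [0] index assumes the GKE user is the first entry in your kubeconfig):
$ kubectl config view --raw -o jsonpath='{.users[0].user.auth-provider.config.access-token}'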
Related
I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS machine:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com. I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One thing I have tried is using this .kube/config (generated by command), but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem, and since you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
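For example (the hostname here is only illustrative, taken from your question, and the port assumes the default 6443):
KUBERNETES_PUBLIC_ADDRESS=k8s-ctl.example.com
# ...then run the four kubectl config commands above and verify:
kubectl get nodes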
When kubectl interacts with the kube API server, it will validate the kube API server's certificate and also send the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
The CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you are using the same CA to generate all the certificates consistently across the board, then it should work.
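If you want to check this on disk before touching the kubeconfig, openssl can verify the chain directly (a sketch; the file names follow the kubernetes-the-hard-way convention and are an assumption):
# does the CA in ca.pem actually sign the client certificate?
openssl verify -CAfile ca.pem admin.pem
# who issued the certificate the API server presents?
openssl s_client -connect k8s-ctl.example.com:6443 </dev/null 2>/dev/null | openssl x509 -noout -issuer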
Generally you can use kops get secrets kube --type secret -oplaintext, but I am not running on AWS; I am using GCP.
I read that kubectl config view should show you this info, but I see no such thing (wondering if this has to do with GCP service account setup; I am also using GKE).
kubectl config view returns something like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://MY_IP
  name: MY_CLUSTER_NAME
contexts:
- context:
    cluster: MY_CLUSTER_NAME
    user: MY_CLUSTER_NAME
  name: MY_CLUSTER_NAME
current-context: MY_CONTEXT_NAME
kind: Config
preferences: {}
users:
- name: MY_CLUSTER_NAME
  user:
    auth-provider:
      config:
        access-token: MY_ACCESS_TOKEN
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2019-02-27T03:20:49Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Neither Username=>Admin nor Username=>MY_CLUSTER_NAME worked with Password=>MY_ACCESS_TOKEN.
Any ideas?
Try:
gcloud container clusters describe ${CLUSTER} \
  --flatten="masterAuth" \
  [--zone=${ZONE}|--region=${REGION}] \
  --project=${PROJECT}
It's possible that your cluster has basic authentication (username|password) disabled, as this authentication mechanism is discouraged.
An alternative mechanism provided with Kubernetes Engine (as shown in your config) is to use your gcloud credentials to get you onto the cluster.
The following command will configure ~/.kube/config so that you may access the cluster using your gcloud credentials. It looks as though this step has been done, and you can use kubectl directly.
gcloud container clusters get-credentials ${CLUSTER} \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
As long as you're logged in using gcloud with an account that's permitted to use the cluster, you should be able to:
kubectl cluster-info
kubectl get nodes
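If what you're after is specifically a bearer token to paste into the dashboard login, one option (an assumption on my part, not something shown in your config) is to print the token the gcp auth provider would use:
gcloud config config-helper --format='value(credential.access_token)'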
I have this config file
apiVersion: v1
clusters:
- cluster:
    server: [REDACTED] // IP of my cluster
  name: staging
contexts:
- context:
    cluster: staging
    user: ""
  name: staging-api
current-context: staging-api
kind: Config
preferences: {}
users: []
I run this command
kubectl config --kubeconfig=kube-config use-context staging-api
I get this message
Switched to context "staging-api".
I then run
kubectl get pods
and I get this message
The connection to the server localhost:8080 was refused - did you specify the right host or port?
As far as I can tell from the docs
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
I'm doing it right. Am I missing something?
Yes. Try the following steps to access the Kubernetes cluster. These steps assume that you have your k8s certificates in /etc/kubernetes.
You need to set up the cluster name, kubeconfig file, user and cert file in the following variables and then simply run these commands:
CLUSTER_NAME="kubernetes"
KCONFIG=admin.conf
KUSER="kubernetes-admin"
KCERT=admin
cd /etc/kubernetes/
$ kubectl config set-cluster ${CLUSTER_NAME} \
--certificate-authority=pki/ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${KCONFIG}
$ kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.conf
$ kubectl config set-context ${KUSER}@${CLUSTER_NAME} \
--cluster=${CLUSTER_NAME} \
--user=${KUSER} \
--kubeconfig=${KCONFIG}
$ kubectl config use-context ${KUSER}@${CLUSTER_NAME} --kubeconfig=${KCONFIG}
$ kubectl config view --kubeconfig=${KCONFIG}
After this you will be able to access the cluster. Hope this helps.
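As a quick sanity check afterwards (a sketch, using the paths assumed above):
$ kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
# or export it so plain kubectl picks it up
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl get nodes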
You need to fetch the credentials of the running cluster. Try this:
gcloud container clusters get-credentials <cluster_name> --zone <zone_name>
More info:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
I've got the same problem as mentioned in the title.
When I executed:
kubectl config current-context
The output was:
error: current-context is not set
And in my case it was an indentation problem. One whitespace before current-context cost me a few hours of debugging:
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:...:cluster/...
    user: arn:aws:eks:us-east-2:...:cluster/...
  name: arn:aws:eks:us-east-2:...:cluster/...
 current-context: arn:aws:eks:us-east-2:...:cluster/...   <- The whitespace at the beginning of this row was the source of the error.
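A quick way to spot this kind of problem is to have kubectl parse the file and echo it back; with the stray space, current-context either errors or comes back unset:
kubectl config view --kubeconfig ~/.kube/config
kubectl config current-context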
I had the same issue on a Mac M1...
The problem was that I am using kubectx and kubens, so those tools are the ones controlling context and namespace.
In this situation the correct command is:
kubectx staging-api
More information on the Official Repository
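If you don't have those tools yet, they're commonly installed together via Homebrew (an assumption about your setup):
brew install kubectx   # installs both kubectx and kubens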
I have three clusters in Google Kubernetes Engine, and I am trying to see the Kubernetes dashboard, but I get the same access token for two different clusters.
Using the kubectl config view command I get:
- name: gke_PROJECT_ZONE_A_NAME_A
  user:
    auth-provider:
      config:
        access-token: TOKEN-A
- name: gke_PROJECT_ZONE_B_NAME_B
  user:
    auth-provider:
      config:
        access-token: TOKEN-B
- name: gke_PROJECT_ZONE_C_NAME_C
  user:
    auth-provider:
      config:
        access-token: TOKEN-B
gke_PROJECT_ZONE_B_NAME_B and gke_PROJECT_ZONE_C_NAME_C share the same access token, hence when I connect via kubectl proxy and insert the token I get the same dashboard.
How can I refresh the access token for cluster B or C so I'll get the desired dashboard?
I've tried gcloud container clusters get-credentials CLUSTER-C --zone ZONE-C --project MY_PROJECT, which returns:
Fetching cluster endpoint and auth data.
kubeconfig entry generated for CLUSTER-C.
and afterwards I don't get any access token for CLUSTER-C
thank you
Restarting the UI service by running kubectl proxy, entering the UI via http://localhost:8001/ui and refreshing the page causes the access token to refresh.
If you know your access token for CLUSTER-C, you can do this
$ kubectl config set-credentials gke_PROJECT_ZONE_C_NAME_C --token=""
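Another option (an assumption on my part, not covered in the answers above) is to drop the cached token and expiry for that user and let the gcp auth provider fetch a fresh one on the next request:
$ kubectl config unset users.gke_PROJECT_ZONE_C_NAME_C.auth-provider.config.access-token
$ kubectl config unset users.gke_PROJECT_ZONE_C_NAME_C.auth-provider.config.expiry
$ kubectl get nodes   # any request re-populates the token via gcloud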
I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.
I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
I need to enable other users to also administer. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?
Also, is it safe to just share the cluster.certificate-authority-data?
For a full overview on Authentication, refer to the official Kubernetes docs on Authentication and Authorization
For users, ideally you use an Identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS you integrate with respective Identity and Access Management frameworks
If you self-host kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2 part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support which eases the integration with AWS IAM as identity provider if you're on AWS.
For Dex there are a few open-source CLI clients, as follows:
Nordstrom/kubelogin
pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure and easy to manage in the long run), you may abuse service accounts, with two options for specialised policies to control access (see below).
NOTE: since 1.6, Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great, but outdated (2017-2018), guide by Bitnami on User setup with RBAC is also available.
Steps to enable service account access are (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full Admin rights!):
EDIT: Here is a bash script to automate Service Account creation - see below steps
Create service account for user Alice
kubectl create sa alice
Get related secret
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
Get ca.crt from secret (using OSX base64 with -D flag for decode)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
Get service account token from secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
Get information from your kubectl config (current-context, server..)
# get current context
c=$(kubectl config current-context)
# get cluster name of context
name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
# get endpoint of current context
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
On a fresh machine, follow these steps (given the ca.crt, $user_token and $endpoint information retrieved above):
Install kubectl
brew install kubectl
Set cluster (run in directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
Set user credentials
kubectl config set-credentials alice-staging --token=$user_token
Define the combination of alice user with the staging cluster
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=alice
Switch current-context to alice-staging for the user
kubectl config use-context alice-staging
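At this point a quick check should confirm the new context works (assuming the alice namespace exists and the account is allowed to list pods there):
kubectl get pods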
To control user access with policies (using ABAC), you need to create a policy file (for example):
{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "system:serviceaccount:default:alice",
    "namespace": "default",
    "resource": "*",
    "readonly": true
  }
}
Provision this policy.json on every master node and add --authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json flags to API servers
This would allow Alice (through her service account) read-only rights to all resources in the default namespace only.
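If your cluster runs RBAC instead of ABAC (the recommended setup mentioned above), a rough analogue is a namespaced read-only Role bound to the service account. This is a sketch, not part of the original steps, and it limits itself to a few common resources:
kubectl create role alice-read-only --namespace=default \
  --verb=get --verb=list --verb=watch \
  --resource=pods --resource=services --resource=deployments
kubectl create rolebinding alice-read-only --namespace=default \
  --role=alice-read-only --serviceaccount=default:alice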
You say:
I need to enable other users to also administer.
But according to the documentation:
Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation:
# create kubeconfig entry
# (if TLS is not needed, replace --certificate-authority and --embed-certs
#  with --insecure-skip-tls-verify=true)
$ kubectl config set-cluster $CLUSTER_NICK \
--server=https://1.1.1.1 \
--certificate-authority=/path/to/apiserver/ca_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config

# create user entry
# (use either the bearer token generated on the kube master, or
#  username|password, or a client certificate - not all of them at once)
$ kubectl config set-credentials $USER_NICK \
--token=$token \
--username=$username \
--password=$password \
--client-certificate=/path/to/crt_file \
--client-key=/path/to/key_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config

# create context entry
$ kubectl config set-context $CONTEXT_NAME \
--cluster=$CLUSTER_NICK \
--user=$USER_NICK \
--kubeconfig=/path/to/standalone/.kube/config
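Then point kubectl at that standalone kubeconfig and switch to the new context (a short usage sketch):
$ export KUBECONFIG=/path/to/standalone/.kube/config
$ kubectl config use-context $CONTEXT_NAME
$ kubectl get nodes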
The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC.
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/