I've installed a Kubernetes cluster (using Google's Container Engine) and noticed a service listening on port 443 on the master server. I tried to access it, but it requires a username and password. Any idea what these credentials are?
You can read the cluster config using kubectl. This will contain the username and password for the UI.
kubectl config view
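If you only want those two fields, jsonpath can filter them out of the config dump. This is a sketch that assumes the first user entry in your kubeconfig is the basic-auth one; inspect the full `kubectl config view` output to find the right index and keys for your setup:

```shell
# Print just the basic-auth fields of the first user entry.
# The index [0] and the username/password keys are assumptions about
# your kubeconfig layout -- adjust after checking `kubectl config view`.
kubectl config view -o jsonpath='{.users[0].user.username}'; echo
kubectl config view -o jsonpath='{.users[0].user.password}'; echo
```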
As of April 29 use:
gcloud container clusters describe [clustername]
This will output YAML that also contains the username and password.
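If you'd rather not wade through the full YAML, `gcloud` can extract just those fields; the `masterAuth` key is where GKE reports the basic-auth credentials (cluster name and zone below are placeholders):

```shell
# Hypothetical cluster/zone names -- substitute your own.
# masterAuth.username / masterAuth.password hold the basic-auth credentials.
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format='value(masterAuth.username, masterAuth.password)'
```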
The user/password are stored in the API.
If you do:
gcloud preview container --zone <zone> clusters list
You should be able to see the user name and password for your cluster.
Note that the HTTPS cert that it uses is currently signed by an internal CA (stored in your home directory), so for a web browser you will need to manually accept the certificate. We're working on making this cleaner.
You can also type
$ kubectl proxy
which will serve up the UI at http://localhost:8001/ui
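A quick way to confirm the proxy is up before opening the browser is to hit the API through it (sketch; requires a running cluster and kubeconfig):

```shell
# Start the proxy in the background and remember its PID.
kubectl proxy &
PROXY_PID=$!
sleep 1
# The /version endpoint should return the cluster's version info as JSON.
curl -s http://localhost:8001/version
kill "$PROXY_PID"
```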
Use the command below to find the Kubernetes auto-generated password:
kubectl config view --minify
Related: Is it possible to configure a custom password for the Kubernetes dashboard when using EKS without customizing "kube-apiserver"?
This URL mentions changes in "kube-apiserver"
https://techexpert.tips/kubernetes/kubernetes-dashboard-user-authentication/
In K8s, requests go through authentication and then authorization (so the API server can determine whether this user can perform the requested action). K8s doesn't have users in the simple meaning of that word (Kubernetes users are just strings associated with a request through credentials). The credential strategy is a choice you make when you install the cluster (you can choose from x509 certificates, password files, bearer tokens, etc.).
If no credentials are provided, the API server automatically falls back to treating the request as the anonymous user, and there is no way to check whether given credentials are valid other than through the API server itself.
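You can see the anonymous fallback for yourself: a request with no client credentials is attributed to `system:anonymous`, and authorization usually denies it (the master address below is a placeholder):

```shell
# -k skips TLS verification against the cluster's internal CA.
# With no client credentials, the API server treats this request as
# user "system:anonymous"; RBAC typically answers with 403 Forbidden.
curl -k https://<master-ip>:6443/api/v1/namespaces
```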
You can do something like the following (not tested):
Create a new credential using OpenSSL:
export NEW_CREDENTIAL=USER:$(echo PASSWORD | openssl passwd -apr1 -noverify -stdin)
Append the newly created credential to /opt/kubernetes/auth:
echo $NEW_CREDENTIAL | sudo tee -a /opt/kubernetes/auth
Replace the cluster basic-auth secret.
kubectl delete secret basic-auth -n kube-system
kubectl create secret generic basic-auth --from-file=/opt/kubernetes/auth -n kube-system
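The credential-generation half of the steps above can be sanity-checked locally before touching the cluster; user name and password here are placeholders:

```shell
# Generate an htpasswd-style line; apr1 hashes always start with "$apr1$".
NEW_CREDENTIAL="alice:$(echo 's3cret' | openssl passwd -apr1 -noverify -stdin)"
echo "$NEW_CREDENTIAL"

# Verify the format before appending the line to the auth file.
case "$NEW_CREDENTIAL" in
  alice:\$apr1\$*) echo "credential looks valid" ;;
  *)               echo "unexpected format" ;;
esac
```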
There are many guides and answers that show how to enable the Kubernetes dashboard, and several that attempt to explain how to access it remotely, but many run into the same problem: the login screen refuses to accept the token.
As I understand it, the service (rightfully) does not accept tokens sent over plain HTTP from a remote host. So even though I can reach the login screen, I can't get into the dashboard because the token is unusable. How can I get around this limitation?
Taken from https://www.edureka.co/community/31282/is-accessing-kubernetes-dashboard-remotely-possible:
you need to make the request from the remote host look like it's coming from localhost (where the dashboard is running):
From the system running kubernetes / dashboard:
Deploy the dashboard UI:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
Start the proxy:
kubectl proxy &
Create a service account, grant it access, and read its token secret:
kubectl create serviceaccount [account name]
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:[account name]
kubectl get secret
kubectl describe secret [account name]
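Instead of eyeballing the `describe secret` output, the token can be pulled out in one go. This sketch assumes pre-1.24 behaviour, where a token secret is auto-created for the service account; the bracketed name is the placeholder from the steps above:

```shell
# Look up the name of the service account's auto-created token secret,
# then decode the token from it.
SECRET_NAME=$(kubectl get serviceaccount [account name] \
  -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode
```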
From the system you wish to access the dashboard:
Create an ssh tunnel to the remote system (the system running the dashboard):
ssh -L 9999:127.0.0.1:8001 -N -f -l [remote system username] [ip address of remote system] -p [port you are running ssh on]
You will likely need to enter a password unless you are using keys. Once you've done all this, from the system where you established the ssh connection, access http://localhost:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
You can change the port 9999 to anything you'd like.
Once your browser shows the login page, copy the token from the "describe secret" step and paste it in.
I have set up a single-node k8s cluster with kubeadm. I have configured OIDC with it and made changes to the ~/.kube/config file. Is there any explicit configuration that has to be done to the kubectl context or credentials?
I have added the user, client-id, client-secret, id_token and refresh token to the ~/.kube/config file.
Apart from this, I have added oidc-issuer-url, oidc-username-claim and oidc-client-id to the kube-apiserver.yaml file.
Is there anything else that has to be added? I assume I am missing something, because I get error: You must be logged in to the server (the server has asked for the client to provide credentials) when I try the command kubectl --user=name@gmail.com get nodes.
You may take a look at the API server log to check what error you get during authentication.
And you should add --oidc-issuer-url, --oidc-username-claim, --oidc-client-id, and --oidc-ca-file in apiserver.yaml.
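On the kubectl side, the OIDC credentials can also be wired in with `kubectl config set-credentials` instead of editing the file by hand. This uses the legacy `oidc` auth provider that matches a kubeadm-era setup; all values below are placeholders:

```shell
# Register an OIDC user entry in the kubeconfig.
# The issuer URL, client id/secret and tokens must come from your own IdP.
kubectl config set-credentials name@gmail.com \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://accounts.google.com \
  --auth-provider-arg=client-id=<client-id> \
  --auth-provider-arg=client-secret=<client-secret> \
  --auth-provider-arg=id-token=<id_token> \
  --auth-provider-arg=refresh-token=<refresh_token>
```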
Trying to simply connect to the master UI interface.
I guess the title says it all: I've tried every auth-related command I could find, with no luck. kubectl config view won't provide a user and its associated password.
Hope that'll sound obvious to some;
Best
If you are running Google GKE, you may not find your admin password (or the web UI one) with kubectl config view.
However, you can get it from https://console.cloud.google.com/ --> Container Engine --> Show Credentials.
I know it's possible to access the static views of the API, but I can't find the basic auth details that I need to log in via the browser. Where can I find these? I'm on GCE and created a cluster.
Run kubectl config view. It'll dump out the auth information used to access your cluster, including the basic auth username and password.