There are many guides and answers that show how to enable the Kubernetes dashboard, and several that attempt to explain how to access it remotely, but many hit a problem with accepting the token once you get to the login screen.
As I understand it, the problem is that the dashboard (rightfully) does not accept tokens sent over plain HTTP from a remote host. So even though I can get to the login screen, I can't get into the dashboard because the token can't be used. How can I get around this limitation?
Taken from https://www.edureka.co/community/31282/is-accessing-kubernetes-dashboard-remotely-possible:
you need to make the request from the remote host look like it's coming from localhost (where the dashboard is running):
From the system running kubernetes / dashboard:
Deploy the dashboard UI:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
Start the proxy:
kubectl proxy &
Create a service account with cluster-admin rights and look up its token:
kubectl create serviceaccount [account name]
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:[account name]
kubectl get secret
kubectl describe secret [account name]
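If you'd rather pull the token out in one step, something like this should work (a sketch, assuming the account is named dashboard-user, lives in the default namespace, and the cluster predates v1.24, so the token secret is still auto-created):
kubectl -n default get secret $(kubectl -n default get serviceaccount dashboard-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode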
From the system from which you wish to access the dashboard:
Create an ssh tunnel to the remote system (the system running the dashboard):
ssh -p [port you are running ssh on] -L 9999:127.0.0.1:8001 -N -f -l [remote system username] [ip address of remote system]
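For example, with hypothetical values (user alice, remote host 203.0.113.10, sshd on the default port 22), that becomes:
ssh -p 22 -L 9999:127.0.0.1:8001 -N -f alice@203.0.113.10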
You will likely need to enter a password unless you are using keys. Once you've done all this, from the system where you established the SSH connection, access http://localhost:9999/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
You can change the port 9999 to anything you'd like.
Once the page loads, copy the token from the "describe secret" step and paste it in.
I deployed a Dashboard with: https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
I could create a user that can access the resources, but I have to log in with a token; I followed: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
Then I wanted to log in without authentication, so I used: kubectl patch deployment kubernetes-dashboard -n kubernetes-dashboard --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-skip-login"}]'
Then I can log in, skipping the authentication, but the default user (or service account?) can't see any resources (nodes, pods, services…)
Can you help me give permissions to the default user?
Thanks.
I expect all resources to be shown on the Dashboard.
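(For reference: with --enable-skip-login, the Dashboard acts with the permissions of the service account it runs under, so granting that account a ClusterRole is the usual fix. A sketch, assuming the stock kubernetes-dashboard service account in the kubernetes-dashboard namespace and using the built-in read-only view role:
kubectl create clusterrolebinding kubernetes-dashboard-view --clusterrole=view --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
Note that this exposes read access to everything in the cluster to anyone who can reach the Dashboard.)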
After installing microk8s and then enabling kubeflow, I'm given the username, password, and a link to the Kubeflow dashboard. I access the dashboard as expected and all is well. BUT after restarting my machine and executing microk8s start, I can no longer get to the Kubeflow dashboard.
All the pods start fine and then I go to access the dashboard and get:
Access to 10.64.140.44.nip.io was denied
You don't have authorisation to view this page.
HTTP ERROR 403
Looking at the kubernetes logs for the pod/container oidc-gatekeeper-xxxxx / oidc-gatekeeper I have:
level=error msg="Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_client\",\"error_description\":\"Invalid client credentials.\"}" ip=10.1.252.88 request="/authservice/oidc/callback?code=ipcb55gymqsy5pcgjn7eaenad&state=MTYyMjYzNjE4OHxFd3dBRURoMVZtSm9Wak4yUXpWQlYxZ3pPVWs9fPTKezGok06ig6bjtYvWt9sqhzaCpO_xhSMeTUFDL81j"
And for pod/container dex-auth-5d9bf87db9-rjtm8 / dex-auth:
level=info msg="invalid client_secret on token request for client: authservice-oidc"
The only way I can get this working again is to remove microk8s altogether and reinstall it every time I restart my machine, which is obviously not workable.
Any help would be greatly appreciated.
I've managed to resolve the issue but I'm not 100% sure which action resolved it.
I tried using Firefox rather than Chrome, and noticed some documentation used the IP http://10.64.140.43.nip.io/ rather than http://10.64.140.44.nip.io/.
Having been refused access as above for http://10.64.140.44.nip.io/ I found http://10.64.140.43.nip.io/ took me straight into the dashboard.
I restarted my machine to see if it was just the IP (note: "microk8s kubectl get services -n kubeflow" specified 10.64.150.44 as the external IP), but this time http://10.64.140.44.nip.io/ just gave me the dex log-in screen and, after logging in, took me to the dashboard without issue.
Perhaps I just did something wrong somewhere; I'm not sure, and I can't check now that it works as it should. Apologies if you get here with this issue and this doesn't help.
I had a similar error. The solution for me was to enable dns, istio, and storage first, wait until those pods were running, and then enable Kubeflow. Then make sure to port-forward using the istio-system namespace with the istio-ingressgateway pod; Kubeflow also creates an istio-ingressgateway pod, but connecting to that one yielded the error, per the Kubeflow guide.
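A sketch of that ordering (assuming the dns, istio, and storage addons exist in your microk8s version):
microk8s enable dns istio storage
microk8s kubectl get pods -n istio-system    # wait until everything is Running
microk8s enable kubeflow
microk8s kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80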
Is it possible to configure a custom password for the Kubernetes dashboard when using EKS, without customizing "kube-apiserver"?
This URL mentions changes in "kube-apiserver":
https://techexpert.tips/kubernetes/kubernetes-dashboard-user-authentication/
In K8s, a request goes through authentication and then authorization (so the API server can determine whether this user can perform the requested action). Kubernetes doesn't have users in the simple meaning of that word: Kubernetes users are just strings associated with a request through credentials. The credential strategy is a choice you make while you install the cluster (you can choose from x509 certificates, static password files, bearer tokens, etc.).
If none of the configured strategies can authenticate a request, the API server automatically falls back to treating it as coming from the anonymous user, and there is no separate way to check whether the provided credentials are valid.
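For example, the static password file strategy (the --basic-auth-file flag, removed from kube-apiserver in v1.19) expected one user per line in the form password,username,uid,"group1,group2"; a hypothetical entry:
mySecretPassword,jane,1001,"developers"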
You can do something like the following (not tested):
Create a new credential using OpenSSL:
export NEW_CREDENTIAL=USER:$(echo PASSWORD | openssl passwd -apr1 -noverify -stdin)
Append the previously created credential to /opt/kubernetes/auth:
echo $NEW_CREDENTIAL | sudo tee -a /opt/kubernetes/auth
Replace the cluster basic-auth secret.
kubectl delete secret basic-auth -n kube-system
kubectl create secret generic basic-auth --from-file=/opt/kubernetes/auth -n kube-system
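To verify the new credential against the API server (a sketch; substitute your API server's address, and note this only works while basic auth is still enabled):
curl -k -u USER:PASSWORD https://<api-server-host>:6443/api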
I just started using OpenShift and am running into permission problems. I am on the free trial of OpenShift 4.3.3 and cannot get my containers to run as root. I am the only user on my instance and I have admin, but it says I need cluster-admin to run the containers as root?
I tried running:
oc policy add-role-to-group cluster-admin anyuid
and that returned:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io "cluster-admin" is forbidden: user "hustlin" (groups=["system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:["*"], Resources:["*"], Verbs:["*"]}
{NonResourceURLs:["*"], Verbs:["*"]}
Going through OpenShift Online -> Administrator view -> User Management -> Roles -> cluster-admin -> Role Bindings, it states:
Restricted Access
You don't have access to this section due to cluster policy.
Error details
rolebindings.rbac.authorization.k8s.io is forbidden: User "hustlin" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
I feel like it should not be this difficult to run a container as root. I'm just testing out OpenShift, and I haven't been able to successfully run a single container on the platform; they all eventually go to CrashLoopBackOff.
Yes, I did try the:
oc login -u system:admin
command and it prompted me for my password before returning:
error: username system:admin is invalid for basic auth
I even tried following this guide from the OpenShift blog, but it would not recognize oadm.
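(For what it's worth, oadm was long ago folded into the oc binary as oc adm, and running as root is normally unlocked by granting an SCC rather than a role. A sketch of the usual command, assuming you did hold cluster-admin and wanted a project's default service account to run root containers:
oc adm policy add-scc-to-user anyuid -z default -n <project>
The Forbidden errors above suggest the free-trial account simply doesn't carry cluster-admin, in which case no client-side command will help.)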
I've installed a kubernetes cluster (using Google's Container Engine) and I noticed a service listening on port 443 on the master server. Tried to access it but it requires username and password, so any ideas what these credentials are?
You can read the cluster config using kubectl. This will contain the username and password for the UI.
kubectl config view
As of April 29 use:
gcloud container clusters describe [clustername]
This will give you YAML output that also contains the username and password.
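To print just those two fields (a sketch, assuming the cluster still has legacy basic auth enabled; newer GKE clusters disable it):
gcloud container clusters describe [clustername] --zone [zone] --format='value(masterAuth.username,masterAuth.password)'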
The user/password are stored in the API.
If you do:
gcloud preview container --zone <zone> clusters list
You should be able to see the user name and password for your cluster.
Note that the HTTPS cert it uses is currently signed by an internal CA (stored in your home directory), so for a web browser you will need to manually accept the certificate. We're working on making this cleaner.
You can also type
$ kubectl proxy
which will serve up the UI at http://localhost:8001/ui
Use the command below to find the Kubernetes auto-generated password:
kubectl config view --minify
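To print only the password field (a sketch; this assumes the current context actually carries basic-auth credentials, which only older clusters do):
kubectl config view --minify -o jsonpath='{.users[0].user.password}'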