Explicit configuration for oidc and k8s - kubernetes

I have set up a single-node k8s cluster with kubeadm. I have configured OIDC with it and made changes to the ~/.kube/config file. Is there any explicit configuration that has to be done to the kubectl context or credentials?
I have added the user, client-id, client-secret, id_token and refresh token to the ~/.kube/config file.
Apart from this, I have added oidc-issuer-url, oidc-username-claim and oidc-client-id to the kube-apiserver.yaml file.
Is there anything else that has to be added? I assume I am missing something, because I get error: You must be logged in to the server (the server has asked for the client to provide credentials) when I try the command kubectl --user=name@gmail.com get nodes.
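For reference, a kubeconfig user entry for the oidc auth-provider typically looks roughly like this (all values below are placeholders, not real credentials):
users:
- name: name@gmail.com
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://accounts.example.com    # placeholder issuer URL
        client-id: my-client-id                          # placeholder client ID
        client-secret: my-client-secret                  # placeholder secret
        id-token: <id_token issued by the provider>
        refresh-token: <refresh token issued by the provider>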

You may take a look at the apiserver log to check what error you get during authentication.
And you should add --oidc-issuer-url, --oidc-username-claim, --oidc-client-id, and --oidc-ca-file to kube-apiserver.yaml.
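On a kubeadm cluster these are flags on the kube-apiserver static pod manifest; a rough sketch with placeholder values, assuming the manifest lives at /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
  containers:
  - command:
    - kube-apiserver
    - --oidc-issuer-url=https://accounts.example.com    # placeholder issuer, must match the kubeconfig entry
    - --oidc-client-id=my-client-id                      # placeholder client ID
    - --oidc-username-claim=email
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem     # placeholder path to the issuer's CA certificate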

Related

microk8s kubeflow dashboard access - Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized

After installing microk8s and then enabling kubeflow I'm given the username, password and link to Kubeflow dashboard. Then I access the dashboard as expected and all is well. BUT after restarting my machine and executing microk8s start I can no longer get to the kubeflow dashboard.
All the pods start fine and then I go to access the dashboard and get:
Access to 10.64.140.44.nip.io was denied
You don't have authorisation to view this page.
HTTP ERROR 403
Looking at the Kubernetes logs for the pod/container oidc-gatekeeper-xxxxx / oidc-gatekeeper, I have:
level=error msg="Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_client\",\"error_description\":\"Invalid client credentials.\"}" ip=10.1.252.88 request="/authservice/oidc/callback?code=ipcb55gymqsy5pcgjn7eaenad&state=MTYyMjYzNjE4OHxFd3dBRURoMVZtSm9Wak4yUXpWQlYxZ3pPVWs9fPTKezGok06ig6bjtYvWt9sqhzaCpO_xhSMeTUFDL81j"
And for pod/container dex-auth-5d9bf87db9-rjtm8 / dex-auth:
level=info msg="invalid client_secret on token request for client: authservice-oidc"
Only by removing microk8s altogether and reinstalling it every time I restart my machine can I get this working again, which is obviously not workable.
Any help would be greatly appreciated.
I've managed to resolve the issue but I'm not 100% sure which action resolved it.
I tried using Firefox rather than Chrome and noticed some documentation used IP http://10.64.140.43.nip.io/ rather than http://10.64.140.44.nip.io/.
Having been refused access as above for http://10.64.140.44.nip.io/ I found http://10.64.140.43.nip.io/ took me straight into the dashboard.
I restarted my machine to see if it was just the IP (note: checking "microk8s kubectl get services -n kubeflow" specified 10.64.150.44 as the external IP), but this time http://10.64.140.44.nip.io/ just gave me the dex log in screen and after logging in took me to the dashboard without issue.
Perhaps I just did something wrong somewhere; I'm not sure and can't check now that it works as it should. Apologies if you get here with the issue and this doesn't help.
I had a similar error. The solution for me was to enable dns, istio, and storage first, wait until those pods were running, and then enable Kubeflow. Then make sure to port-forward using the istio-system namespace with the istio-ingressgateway pod. Kubeflow also makes an istio-ingressgateway pod, but connecting to that yielded the error, per the Kubeflow guide.
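A minimal sketch of that port-forward, assuming the default istio-ingressgateway service name and that local port 8080 is free:
microk8s kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
# then open http://localhost:8080 in a browser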

Using KeyCloak Gateway in a K8S Cluster

I have KeyCloak Gateway running successfully locally, providing Google OIDC authentication for the Kubernetes dashboard. However, using the same settings results in an error when the app is deployed as a pod in the cluster itself.
The error I see when the Gateway is running in a K8S pod is:
unable to exchange code for access token {"error": "invalid_request: Credentials in post body and basic Authorization header do not match"}
I'm calling the gateway with the following options:
--enable-logging=true
--enable-self-signed-tls=true
--listen=:443
--upstream-url=https://mydashboard
--discovery-url=https://accounts.google.com
--client-id=<client id goes here>
--client-secret=<secret goes here>
--resources=uri=/*
With these settings applied to a container in a pod I can browse to the Gateway, am redirected to Google to log in, and then am redirected back to the Gateway where the error above is generated.
What could account for the difference between running the application locally and running it in a pod that would generate the above error?
This turned out to be a copy/paste fail in the end, with the client secret being incorrect. The error message wasn't much help here, but at least it was a simple fix.
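One way to catch this kind of copy/paste mistake is to compare the flags the running pod actually received against the values registered at the identity provider; a rough check, where <gateway-pod> is a placeholder for the gateway pod name:
kubectl get pod <gateway-pod> -o jsonpath='{.spec.containers[0].args}'
# compare the --client-id and --client-secret values in the output with what the provider has registered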

kubectl error: You must be logged in to the server (Unauthorized)

Today I met a strange issue: my Windows kubectl client suddenly raised an authorization error when connecting to ICP.
I was using ICP with kubectl.exe configured on Windows. Then, after a while, my laptop went to sleep, my VPN connection dropped, and I lost the connection to the remote ICP. Later I came back and reconnected to ICP. I used the kubectl command again and faced:
error: You must be logged in to the server (Unauthorized)
On the ICP master node, nothing was wrong if I used:
kubectl -s 127.0.0.1:8888 -n kube-system get pods -o wide
I went back to re-configure the client (pasted the commands copied from admin -> configure kubectl); the commands executed successfully, but when I issue
kubectl get pods
I still get the error.
I checked these articles:
kubectl - error: You must be logged in to the server
kubectl error: "You must be logged in to the server (the server has asked for the client to provide credentials)"
error: You must be logged in to the server (the server has asked for the client to provide credentials)
They didn't seem very helpful.
It turns out that the token was invalid (not sure if it was because of the 12-hour expiration). If you simply refresh the browser page (F5) you are not re-authenticated but can still access the console page; the token is actually only updated by logging in to the ICP portal again.
The issue was fixed by re-accessing the ICP portal:
https://<master host>:8443/console/
This lets you re-authenticate. After that, go to admin -> configure client and paste the latest commands; you will find the token has been updated. Executing the new commands solved the issue.
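The commands the portal generates are ordinary kubectl config commands, roughly like the sketch below (cluster name, server address and token are placeholders):
kubectl config set-cluster mycluster --server=https://<master host>:8001 --insecure-skip-tls-verify=true
kubectl config set-credentials admin --token=<token copied from the portal>
kubectl config set-context mycluster-context --cluster=mycluster --user=admin --namespace=default
kubectl config use-context mycluster-context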
Two questions are still left:
a) If the page has been open for a long time and the token has expired, the ICP portal page may not auto-refresh to force you to re-login, which means the token in the set-credentials command is still the old one.
b) Even old tokens are accepted, and the command never complains with an error or even a warning. This may mislead us when the token has changed on the server: e.g. if I saved the commands to a local txt file and re-executed it (even after the token expired), the commands still finished successfully, but I still was not authenticated correctly when I tried to connect.

GKE / kube-ui password not showing via `kubectl config view`

Trying to simply connect to the master UI interface.
I guess it's all in the title. I've tried every command related to auth, to no avail: kubectl config view won't provide a user and its associated password.
Hope that'll sound obvious to some;
Best
If you are running Google GKE, you may not find your admin password (or the web UI one) with kubectl config view.
However, you can get it from https://console.cloud.google.com/ --> Container Engine --> Show Credentials.

Kubernetes master username and password

I've installed a Kubernetes cluster (using Google's Container Engine) and noticed a service listening on port 443 on the master server. I tried to access it but it requires a username and password, so any ideas what these credentials are?
You can read the cluster config using kubectl. This will contain the username and password for the UI.
kubectl config view
As of April 29 use:
gcloud container clusters describe [clustername]
This will give you some YAML containing, among other things, the username and password.
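The part to look for is a masterAuth block, roughly like this (values are made up for illustration):
masterAuth:
  clusterCaCertificate: <base64-encoded CA certificate>
  username: admin
  password: <auto-generated password>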
The user/password are stored in the API.
If you do:
gcloud preview container --zone <zone> clusters list
You should be able to see the user name and password for your cluster.
Note that the HTTPS cert that it uses is currently signed by an internal CA (stored in your home directory), so for a web browser you will need to manually accept the certificate. We're working on making this cleaner.
You can also type
$ kubectl proxy
which will serve up the UI at http://localhost:8001/ui
Use the command below to find the Kubernetes auto-generated password:
$ kubectl config view --minify
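On older clusters that still use basic auth, the password shows up in the users section of that output, roughly like this (names and values are placeholders):
users:
- name: gke_my-project_us-central1-a_my-cluster
  user:
    username: admin
    password: <auto-generated password>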