kubectl error: You must be logged in to the server (Unauthorized) - kubernetes

Today I hit a strange issue: my Windows kubectl client suddenly raised an authorization error when connecting to ICP.
I was using ICP with kubectl.exe configured on Windows. After a while, my laptop went to sleep and my VPN connection dropped, so I lost the connection to the remote ICP cluster. Later I came back, reconnected to the ICP, ran a kubectl command again, and got:
error: You must be logged in to the server (Unauthorized)
On the ICP master node, everything worked fine when I ran:
kubectl -s 127.0.0.1:8888 -n kube-system get pods -o wide
I went back and re-configured the client (pasting the commands copied from admin -> Configure kubectl). The commands executed successfully, but when I ran
kubectl get pods
I still got the same error.
I checked these articles:
kubectl - error: You must be logged in to the server
kubectl error: "You must be logged in to the server (the server has asked for the client to provide credentials)"
error: You must be logged in to the server (the server has asked for the client to provide credentials)
None of them helped much.

It turns out that the token was invalid (I'm not sure if it was because of the 12-hour expiration). If you simply press F5 in the browser you are not re-authenticated, yet you can still access the console page; the token is only refreshed by logging in to the ICP portal again.
The issue was fixed by re-visiting the ICP portal:
https://<master host>:8443/console/
This forces you to authenticate again. After that, go to admin -> Configure client and paste the latest commands; you will find that the token may have been updated. Executing the new commands solved the issue.
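For reference, the snippet that ICP's "Configure client" page generates looks roughly like the following sketch. The cluster name, master host, port and token below are placeholders, not values from this environment:

```shell
# Approximate shape of the ICP "Configure client" commands.
# <master host> and <token from portal> are placeholders.
kubectl config set-cluster mycluster --server=https://<master host>:8001 --insecure-skip-tls-verify=true
kubectl config set-context mycluster-context --cluster=mycluster
kubectl config set-credentials admin --token=<token from portal>
kubectl config set-context mycluster-context --user=admin --namespace=default
kubectl config use-context mycluster-context
```

The key line is `set-credentials`: it is the one that carries the expiring token, so it is the one that must be re-pasted after a fresh portal login.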
Two questions remain:
a) If the console page has been open for a long time and the token has expired, the ICP portal page may not auto-refresh to force a re-login, which means the token in the set-credentials command is still the old one.
b) Even an old token is accepted: the set-credentials command never reports an error or even a warning. This can mislead us when tokens have changed on the server; e.g. if I save the commands to a local txt file and re-execute them later (even after the token has expired), they still finish successfully, but I am not actually authenticated when I try to log in.
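Since `kubectl config set-credentials` happily stores an expired token, one way to check locally whether a bearer token is still valid is to decode its JWT payload and compare the `exp` claim with the current time. A minimal sketch (the TOKEN below is a fabricated example whose `exp` is in the past, not a real ICP token):

```shell
# Extract the "exp" claim from a JWT's payload (the 2nd dot-separated
# segment) and compare it with the current time. TOKEN is a made-up example.
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE2MDAwMDAwMDB9.c2ln'
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2)
# base64url payloads may lack '=' padding; restore it before decoding
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}="  ;;
esac
EXP=$(printf '%s' "$PAYLOAD" | base64 -d | grep -o '"exp":[0-9]*' | cut -d: -f2)
if [ "$(date +%s)" -ge "$EXP" ]; then
  echo "token expired"
else
  echo "token still valid"
fi
```

With the fabricated token above (exp = 1600000000, i.e. September 2020), this prints "token expired". Running the same check on the token saved in a local txt file would reveal the situation in question b) before kubectl fails.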

Related

redhat sso after deletion of Master-realm client in master Realm, Admin login is blank

After deleting the Master-realm client in the master realm, the RH-SSO admin console (/auth/admin/master/console/) is blank.
In the RH-SSO server log we can see this error:
ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-10971) Uncaught server error: java.lang.NullPointerException
Since this RH-SSO instance is live, we are not trying to troubleshoot it further.
Note: we deleted just 1 of 70 master-realm clients, and this has broken our admin login console.
In Keycloak, the "realmname-realm" clients in the master realm (including 'master-realm') are essential for the operation of the admin console. Deleting them will leave the admin console non-operational, as you have noticed. The fix is to restore the client to its previous state, probably best done through a database restore.

Pods deployed in OKD4.8 are going to "ImagePullBackOff" state with "unauthorized: authentication required" error

I have successfully installed OKD 4.8 and am able to deploy applications. When I try to deploy a new application many days after the installation, the pods go into the "ImagePullBackOff" state with an "unauthorized: authentication required" error.
To reproduce: install OKD 4.8, deploy a few applications, leave the setup for some days, then deploy a new application; the pod goes into the "ImagePullBackOff" state with the "unauthorized: authentication required" error.
Log bundle
$ sudo podman pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0
WARN[0000] Failed to decode the keys ["storage.options.override_kernel_check"] from "/etc/containers/storage.conf".
Trying to pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0...
Error: initializing source docker://image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0: unable to retrieve auth token: invalid username/password: unauthorized: authentication required
I think this might be happening because the oc login session has expired, so podman fails to authenticate against the default internal registry with the old token. Kindly share any process to avoid this kind of behaviour.
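If the expired oc session is indeed the cause, a hedged workaround is to log in again and refresh podman's stored registry credential with a fresh token before pulling. The API URL below is a placeholder, and the registry host is the OpenShift internal-registry default from the question, not verified against this cluster:

```shell
# Sketch: re-authenticate, then hand podman a fresh session token.
# <api-url> and <user> are placeholders.
oc login https://<api-url>:6443 -u <user>
# "oc whoami -t" prints the current session's bearer token
sudo podman login -u "$(oc whoami)" -p "$(oc whoami -t)" \
  image-registry.openshift-image-registry.svc:5000
sudo podman pull image-registry.openshift-image-registry.svc:5000/frontend/nginx:v1.0
```

For non-interactive use, a longer-lived credential (for example a dedicated service account with pull rights on the namespace) would avoid the session-expiry problem entirely, since session tokens from oc login are short-lived by design.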
Client Version: 4.8.0-0.okd-2021-11-14-052418
Server Version: 4.8.0-0.okd-2021-11-14-052418
Kubernetes Version: v1.21.2-1555+9e8f924492b7d7-dirty
I have installed OKD 4.8 on a VMware machine using the UPI method.

microk8s kubeflow dashboard access - Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized

After installing microk8s and then enabling Kubeflow, I'm given the username, password and link to the Kubeflow dashboard. I can access the dashboard as expected and all is well. BUT after restarting my machine and executing microk8s start, I can no longer get to the Kubeflow dashboard.
All the pods start fine and then I go to access the dashboard and get:
Access to 10.64.140.44.nip.io was denied
You don't have authorisation to view this page.
HTTP ERROR 403
Looking at the kubernetes logs for the pod/container oidc-gatekeeper-xxxxx / oidc-gatekeeper I have:
level=error msg="Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_client\",\"error_description\":\"Invalid client credentials.\"}" ip=10.1.252.88 request="/authservice/oidc/callback?code=ipcb55gymqsy5pcgjn7eaenad&state=MTYyMjYzNjE4OHxFd3dBRURoMVZtSm9Wak4yUXpWQlYxZ3pPVWs9fPTKezGok06ig6bjtYvWt9sqhzaCpO_xhSMeTUFDL81j"
And for pod/container dex-auth-5d9bf87db9-rjtm8 / dex-auth:
level=info msg="invalid client_secret on token request for client: authservice-oidc"
Only by removing microk8s altogether and reinstalling it every time I restart my machine can I get this working again, which is obviously not workable.
Any help would be greatly appreciated.
I've managed to resolve the issue, but I'm not 100% sure which action resolved it.
I tried using Firefox rather than Chrome and noticed some documentation used the IP http://10.64.140.43.nip.io/ rather than http://10.64.140.44.nip.io/.
Having been refused access as above at http://10.64.140.44.nip.io/, I found http://10.64.140.43.nip.io/ took me straight into the dashboard.
I restarted my machine to see if it was just the IP (note: "microk8s kubectl get services -n kubeflow" specified 10.64.150.44 as the external IP), but this time http://10.64.140.44.nip.io/ just gave me the dex log-in screen, and after logging in it took me to the dashboard without issue.
Perhaps I just did something wrong somewhere; I'm not sure and can't check now that it works as it should. Apologies if you get here with the issue and this doesn't help.
I had a similar error. The solution for me was to enable dns, istio, and storage first, wait until those pods were running, and then enable Kubeflow. Then make sure to port-forward using the istio-system namespace with the istio-ingressgateway pod. Kubeflow also creates an istio-ingressgateway pod, but connecting to that one yielded the error. (Per the Kubeflow guide.)
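The dex log above ("invalid client_secret on token request for client: authservice-oidc") suggests that after the restart the secret oidc-gatekeeper presents no longer matches the one dex has registered for that client. A hedged way to eyeball both sides is to dump the relevant configuration; note that the resource names, namespace and grep patterns below are guesses and may differ between microk8s/Kubeflow versions:

```shell
# Sketch: compare the client secret dex expects with the one the
# gatekeeper is configured to send. Resource names are assumptions.
microk8s kubectl -n kubeflow get configmap -o name | grep -i -e dex -e oidc
# Inspect dex's registered client entry for authservice-oidc:
microk8s kubectl -n kubeflow get configmap <dex-configmap> -o yaml | grep -A3 authservice-oidc
# Inspect what the gatekeeper deployment is passing as its client secret:
microk8s kubectl -n kubeflow get deployment <oidc-gatekeeper-deployment> -o yaml | grep -i client
```

If the two values differ, re-syncing them (or re-running the enable step that generates them) should clear the 401, which would also explain why a full reinstall worked.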

Explicit configuration for oidc and k8s

I have set up a single-node k8s cluster with kubeadm. I have configured OIDC with it and made changes to the ~/.kube/config file. Is there any explicit configuration that has to be done for the kubectl context or credentials?
I have added the user, client-id, client-secret, id_token and refresh token to the ~/.kube/config file.
Apart from this, I have added oidc-issuer-url, oidc-username-claim and oidc-client-id to the kube-apiserver.yaml file.
Is there anything else that has to be added? I assume I am missing something, because I get error: You must be logged in to the server (the server has asked for the client to provide credentials) when I run kubectl --user=name#gmail.com get nodes.
You may take a look at the apiserver log to check what error occurs during authentication.
You should also add --oidc-issuer-url, --oidc-username-claim, --oidc-client-id, and --oidc-ca-file to the apiserver manifest (kube-apiserver.yaml).

gsutil getting stuck and asking for authentication

I'm trying to copy a file from my Windows machine (already set up; I've been using gsutil on it regularly), but it keeps telling me I'm trying to access protected data with no configured credentials.
Yesterday, though, it was running fine.
E:\studioProjects3\demo\rsalib\build\libs>gsutil cp rsalib-1.0.jar gs://dark-blade-365.appspot.com
Copying file://rsalib-1.0.jar [Content-Type=application/octet-stream]...
Uploading gs://dark-blade-365.appspot.com/rsalib-1.0.jar: 0 B/4.14 KiB
Uploading gs://dark-blade-365.appspot.com/rsalib-1.0.jar: 4.14 KiB/4.14 KiB
You are attempting to access protected data with no configured
credentials. Please visit https://cloud.google.com/console#/project
and sign up for an account, and then run the "gcloud auth login"
command to configure gsutil to use these credentials.
E:\studioProjects3\demo\rsalib\build\libs>gsutil acl get gs://dark-blade-365.appspot.com
You are attempting to access protected data with no configured
credentials. Please visit https://cloud.google.com/console#/project
and sign up for an account, and then run the "gcloud auth login"
command to configure gsutil to use these credentials.
I'm 100% sure that I own this project/bucket and it shows up on my developer console.
What I've tried so far:
Running gcloud auth login to fetch a new token. I've already done this multiple times, and it still gives me the same exact error.
Ensuring the project is the same as the bucket's, and not the second project that I've also set up with "authorized access" to the bucket.
Rebooting my machine in case there was some environment issue.
Running gcloud auth revoke, followed by gcloud auth login again.
None of these has resolved my issue. This is what gcloud auth list shows:
E:\studioProjects3\demo\rsalib\build\libs>gcloud auth list
Credentialed accounts:
- yaraju#gmail.com (active)
To set the active account, run:
$ gcloud config set account ``ACCOUNT''
Please help me figure out what's going on here.
gsutil works fine from C:\. But if I run it from E:\ it gets stuck and gives me that scary error message.
To fix:
Just run gsutil from any path on C:\ and give the absolute paths to whatever local paths you want to transfer from/to.
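One hedged way to confirm this diagnosis is to compare which boto configuration gsutil actually resolves from each drive; `gsutil version -l` prints the config path in use, so a missing or different config on E:\ would explain credentials working on one drive but not the other:

```shell
# Run this once from a C:\ path and once from an E:\ path, then
# compare the "config path(s):" line in the output.
gsutil version -l
# If the paths differ, check whether a BOTO_CONFIG or BOTO_PATH
# environment variable is overriding the default %USERPROFILE%\.boto
# location in one of the two shells.
```

If the E:\ shell resolves to a config file without credentials, pointing BOTO_CONFIG at the working .boto file (or simply running from C:\ as above) should give consistent behaviour.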