Solved
For anyone who's trying to fix this: this error means that MinIO is unable to read your .../.well-known/openid-configuration URL
(set through MINIO_IDENTITY_OPENID_CONFIG_URL, or identity_openid > config_url in the JSON config file).
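A quick way to check is to fetch that URL from the host or container where MinIO runs (the issuer below is a placeholder for your IdP):
```
# Placeholder issuer URL -- replace with your IdP's discovery endpoint.
curl -sf https://your-idp.example.com/.well-known/openid-configuration

# If that works from the MinIO host/container, point MinIO at the same URL:
export MINIO_IDENTITY_OPENID_CONFIG_URL="https://your-idp.example.com/.well-known/openid-configuration"
```
If the curl fails (DNS, proxy, or TLS trust problems from inside the container), MinIO cannot set up the OpenID provider and the STS calls fail as shown below.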
Original post
I'm trying to call MinIO's AssumeRoleWithClientGrants. Using Postman, I send this request (POST):
http://localhost:9000/?Action=AssumeRoleWithClientGrants&DurationSeconds=10000&Token=__SOME_TKN____&Version=2011-06-15
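For anyone reproducing this outside Postman, the same request as a curl command (the token is the placeholder from the URL above):
```
curl -X POST "http://localhost:9000/?Action=AssumeRoleWithClientGrants&DurationSeconds=10000&Token=__SOME_TKN____&Version=2011-06-15"
```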
But I'm getting this response:
<?xml version="1.0" encoding="UTF-8"?>
<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<Error>
<Type></Type>
<Code>InvalidParameterValue</Code>
<Message>provider jwt doesn't exist</Message>
</Error>
<RequestId>163E438F6C8286CC</RequestId>
</ErrorResponse>
I also ran this Python example, and when I click on authenticate and then on approve once (this is WSO2), it redirects me to an error page that says:
botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the AssumeRoleWithWebIdentity operation: provider jwt doesn't exist
Any help would be much appreciated.
I got the exact same error as yours in my MinIO deployment in Azure AKS. The issue was fixed by restarting the MinIO pod. (It looks like the MinIO pod was left malfunctioning after the AKS cluster nodes were rebooted due to the Azure DNS issue: https://status.azure.com/en-ca/status)
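For reference, the restart can be done with kubectl alone; the deployment name and namespace below are assumptions for this sketch, so adjust them to your setup:
```
# Assumed names: a Deployment called "minio" in namespace "minio".
kubectl -n minio rollout restart deployment/minio
# or simply delete the pod and let it be recreated:
# kubectl -n minio delete pod -l app=minio
```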
Related
I am using an OpenStack container to enable integration testing against Swift.
The container used is: https://hub.docker.com/r/jeantil/openstack-swift-keystone-docker/
And the steps followed are: https://github.com/jeantil/openstack-swift-keystone-docker
The configuration works fine locally and on the open internet (Concourse pipeline job).
But when I run the same Concourse pipeline job on the intranet, I get the error below:
Failed to discover available identity versions when contacting http://127.0.0.1:35357/v3. Attempting to parse version from URL.
Unauthorized (HTTP 401)
I am getting this error while creating a new service or even loading user lists:
Example:
openstack endpoint create --region RegionOne object-store internal http://127.0.0.1:8080/v1/KEY_%\(tenant_id\)s
openstack endpoint create --region RegionOne object-store admin http://127.0.0.1:8080/v1
openstack user list
Is it due to some proxy-related configuration? Everything works fine if I run this Concourse job on the internet.
I tried multiple approaches, and in the end I was able to solve the issue.
Include ENV NO_PROXY=localhost in the Dockerfile so that the proxy configuration is not applied to this container.
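For illustration, this is the kind of line I mean (whether you also need 127.0.0.1 depends on your setup):
```
# Dockerfile: keep loopback traffic (Keystone/Swift on 127.0.0.1) away from the corporate proxy.
ENV NO_PROXY=localhost
# Adding 127.0.0.1 as well may be needed, since the endpoints above use that address:
# ENV NO_PROXY=localhost,127.0.0.1
```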
After installing microk8s and then enabling Kubeflow, I'm given the username, password and link to the Kubeflow dashboard. I then access the dashboard as expected and all is well. BUT after restarting my machine and executing microk8s start, I can no longer get to the Kubeflow dashboard.
All the pods start fine and then I go to access the dashboard and get:
Access to 10.64.140.44.nip.io was denied
You don't have authorisation to view this page.
HTTP ERROR 403
Looking at the Kubernetes logs for the pod/container oidc-gatekeeper-xxxxx / oidc-gatekeeper, I have:
level=error msg="Failed to exchange authorization code with token: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_client\",\"error_description\":\"Invalid client credentials.\"}" ip=10.1.252.88 request="/authservice/oidc/callback?code=ipcb55gymqsy5pcgjn7eaenad&state=MTYyMjYzNjE4OHxFd3dBRURoMVZtSm9Wak4yUXpWQlYxZ3pPVWs9fPTKezGok06ig6bjtYvWt9sqhzaCpO_xhSMeTUFDL81j"
And for pod/container dex-auth-5d9bf87db9-rjtm8 / dex-auth:
level=info msg="invalid client_secret on token request for client: authservice-oidc"
Only by removing microk8s altogether and reinstalling every time I restart my machine can I get this working again, which is obviously not workable.
Any help would be greatly appreciated.
I've managed to resolve the issue but I'm not 100% sure which action resolved it.
I tried using Firefox rather than Chrome and noticed some documentation used the IP http://10.64.140.43.nip.io/ rather than http://10.64.140.44.nip.io/.
Having been refused access as above for http://10.64.140.44.nip.io/, I found http://10.64.140.43.nip.io/ took me straight into the dashboard.
I restarted my machine to see if it was just the IP (note: checking "microk8s kubectl get services -n kubeflow" specified 10.64.150.44 as the external IP), but this time http://10.64.140.44.nip.io/ just gave me the dex login screen and, after logging in, took me to the dashboard without issue.
Perhaps I just did something wrong somewhere; I'm not sure, and I can't check now that it works as it should. Apologies if you get here with the issue and this doesn't help.
I had a similar error. The solution for me was to enable dns, istio, and storage first, wait until the pods were running, and then enable Kubeflow. Then make sure to port-forward using the istio-system namespace with the istio-ingressgateway pod. Kubeflow also creates an istio-ingressgateway pod, but connecting to that yielded the error. Per the Kubeflow guide.
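Roughly what that looks like (the addon and service names are the standard microk8s/istio ones; ports may differ on your install):
```
microk8s enable dns storage istio
# wait for the istio-system pods to be Running, then:
microk8s enable kubeflow

# Port-forward through the istio ingress gateway, not the gateway Kubeflow creates:
microk8s kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
```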
I am using Vault with the Postgres storage backend along with the KV secrets engine. I am using the Kubernetes auth method to get the Vault token. I followed the documentation below to set up Vault with Kubernetes:
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
When I start the web application for the first time and try to retrieve the tokens, it works. But when I delete the webapp deployment, deploy the webapp again, and try to retrieve the Vault token again with the API
v1/auth/kubernetes/login
I get the following error
error: 400 Bad Request: [{"errors":["missing client token"]}
But the request does include the JWT token of the service account.
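For reference, the login request is shaped like this (the Vault address and role name are placeholders here; the JWT is the pod's mounted service account token):
```
# Vault address and role are placeholders for this sketch.
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --request POST \
  --data "{\"jwt\": \"$JWT\", \"role\": \"webapp\"}" \
  http://vault:8200/v1/auth/kubernetes/login
```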
Due to this error the pod keeps restarting, and then all of a sudden, after some time, Vault honours the request and returns the Vault token.
This looks strange; is there any reason for such behavior?
UPDATE:
This issue does not happen with the Consul backend.
I recently upgraded my Grafana to v7.0.3 and started the image-rendering service as a separate pod in my k8s cluster.
I have specified both GF_RENDERING_SERVER_URL and GF_RENDERING_CALLBACK_URL.
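For context, those two settings look roughly like this (the hostnames and ports are placeholders for my services):
```
# Placeholder hostnames/ports -- these point at the renderer pod and the Grafana service.
GF_RENDERING_SERVER_URL=http://renderer:8081/render
GF_RENDERING_CALLBACK_URL=http://grafana:3000/
```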
My Grafana is configured to use Active Directory (AuthN); only authenticated users can see dashboards.
Now the problem is that when my image-rendering service requests a Grafana chart, I think because Grafana is behind AD, the renderer fails to get the chart (there was an HTTP 401 as well).
Can someone suggest what I am missing / how I can pass the authentication details?
t=60&timezone=Europe%2FLondon&url=http%3A%2F%2Fmobile-grafana.mobile-grafana.svc.cluster.local%3A3000%2Fd-solo%2F000000017%2Fjenkins-performance-and-health-overview%3ForgId%3D1%26refresh%3D1m%26from%3D1591535203773%26to%3D1591546003773%26var-node%3Djenkins-stg.k8s.mobile.sbx.zone%26panelId%3D4%26width%3D1000%26height%3D500%26tz%3DEurope%252FLondon%26render%3D1&width=1000" t=2020-06-07T16:06:45+0000 lvl=eror msg="Remote rendering request failed" logger=rendering renderer=http error="403 Forbidden"
t=2020-06-07T16:06:45+0000 lvl=eror msg="Rendering failed." logger=context userId=2 orgId=1 uname="Pankaj Sainic" error="Remote rendering request failed. 403: 403 Forbidden"
If you are using a proxy, you have to add this NO_PROXY property to make it work!
NO_PROXY:0.0.0.0,127.0.0.1,renderer,grafana
renderer and grafana are the service names declared in my docker-compose file.
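In docker-compose terms that is roughly the following (the service names match the ones above; adjust them to your own compose file):
```
# Sketch of the relevant compose fragment; only the environment entries matter here.
services:
  grafana:
    environment:
      - NO_PROXY=0.0.0.0,127.0.0.1,renderer,grafana
  renderer:
    environment:
      - NO_PROXY=0.0.0.0,127.0.0.1,renderer,grafana
```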
I have KeyCloak Gateway running successfully locally, providing Google OIDC authentication for the Kubernetes dashboard. However, using the same settings results in an error when the app is deployed as a pod in the cluster itself.
The error I see when the Gateway is running in a K8S pod is:
unable to exchange code for access token {"error": "invalid_request: Credentials in post body and basic Authorization header do not match"}
I'm calling the gateway with the following options:
--enable-logging=true
--enable-self-signed-tls=true
--listen=:443
--upstream-url=https://mydashboard
--discovery-url=https://accounts.google.com
--client-id=<client id goes here>
--client-secret=<secret goes here>
--resources=uri=/*
With these settings applied to a container in a pod, I can browse to the Gateway, am redirected to Google to log in, and am then redirected back to the Gateway, where the error above is generated.
What could account for the difference between running the application locally and running it in a pod that would generate the above error?
This turned out to be a copy/paste fail in the end, with the client secret being incorrect. The error message wasn't much help here, but at least it was a simple fix.
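If you hit the same error, one way to rule out a bad secret is to run the code-for-token exchange by hand against Google's token endpoint (all values below are placeholders; the code comes from a normal browser sign-in against your registered redirect URI):
```
# All values are placeholders for this sketch.
curl -s https://oauth2.googleapis.com/token \
  -d grant_type=authorization_code \
  -d code="<authorization code from the redirect>" \
  -d redirect_uri="<redirect uri registered with Google>" \
  -d client_id="<client id>" \
  -d client_secret="<client secret>"
# A wrong client_secret typically comes back as an "invalid_client" error.
```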