InvalidIdentityToken: Couldn't retrieve verification key from your identity provider - kubernetes

I am new to AWS and kubectl, and I need to deploy one of our apps to AWS. After deploying it to the EKS cluster, I edited the ingress with kubectl, but unfortunately it returned 404 Not Found. (I am pretty sure the new service container works fine.)
After checking with kubectl describe ingress, here are some of the event reports:
Warning FailedBuildModel 40m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-4a93-4e27-9d6b-xxxxxxxx
Warning FailedBuildModel 22m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-5368-41e1-8a4d-xxxxxxxx
Warning FailedBuildModel 5m8s ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-20ea-4bd0-b1cb-xxxxxxxx
Does anyone have ideas about this issue?
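This error usually points at the cluster's OIDC issuer not being registered (or no longer correctly registered) as an identity provider in IAM, assuming the ingress events come from the AWS Load Balancer Controller and it gets its credentials via IRSA (IAM Roles for Service Accounts). A minimal sketch of what one might check; the cluster name, namespace and service account below are placeholders:
# Issuer URL of the cluster's OIDC endpoint (placeholder cluster name)
aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text
# Is a matching OIDC identity provider registered in IAM?
aws iam list-open-id-connect-providers
# If it is missing, eksctl can register it (again, placeholder cluster name)
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve
# Does the controller's service account reference the expected IAM role?
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system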

Related

EC2 metadata upgrade from IMDSv1 to IMDSv2 causes 403 and 401 errors - kube2iam

I recently updated my EC2 instances to use IMDSv2 but had to roll back because of the following issue:
It looks like after I did the upgrade my init containers started failing, and I saw the following in the logs:
time="2022-01-11T14:25:01Z" level=info msg="PUT /latest/api/token (403) took 0.753220 ms" req.method=PUT req.path=/latest/api/token req.remote=XXXXX res.duration=0.75322 res.status=403 time="2022-01-11T14:25:37Z" level=error msg="Error getting instance id, got status: 401 Unauthorized"
We are using kube2iam for this. Any advice on what changes need to be made on the kube2iam side to support IMDSv2? Below is some info from my kube2iam DaemonSet:
EKS =1.21
image = "jtblin/kube2iam:0.10.9"

AKS Application Gateway Ingress Controller Issue

Below is the exception we are facing while implementing AGIC in AKS.
The readiness probe is failing for ingress-azure.
Events:
Type Reason Age From Message
Normal Scheduled 5m22s default-scheduler Successfully assigned default/ingress-azure-fc5dcbcd8-bsgt8 to aks-agentpool-22890870-vmss000002
Normal Pulling 5m22s kubelet Pulling image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.4.0"
Normal Pulled 5m22s kubelet Successfully pulled image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.4.0" in 121.018102ms
Normal Created 5m22s kubelet Created container ingress-azure
Normal Started 5m22s kubelet Started container ingress-azure
Warning Unhealthy 21s (x30 over 5m11s) kubelet Readiness probe failed: Get "http://10.240.xx.xxx:8123/health/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
kubectl logs -f mic_xxxx:
failed to update user-assigned identities on node aks-agentpool-2xxxxx-vmss (add [1], del [0], update[0]), error: failed to get identity resource, error: failed to get vmss aks-agentpool-2xxxx-vmss in resource group MC_Axx-xx_axxx-ak8_koreacentral, error: failed to get vmss aks-agentpool-2xxxxx-vmss in resource group MC_Axx-axxx_agw-ak8_koreacentral, error: compute.VirtualMachineScaleSetsClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '4xxxxxx-xxxxxxx-7xxx-xxxxxxx' with object id '4xxxxxx-xxxxxxx-7xxx-xxxxxxx' does not have authorization to perform action 'Microsoft.Compute/virtualMachineScaleSets/read' over scope '/subscriptions/{subscription_id}/resourceGroups/{MC_rg_name}/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-2xxxxx-vmss' or the scope is invalid. If access was recently granted, please refresh your credentials."
Steps Implemented:
AKS cluster with RBAC enabled & Azure CNI
2 subnets in the same VNet within the same resource group (not the RG which starts with MC_)
Provided Contributor & Reader access to the AGW after implementing it.
Applied
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/v1.8.8/deploy/infra/deployment-rbac.yaml
Made changes accordingly in helm-config.yaml and authenticated using identityResourceID.
Please advise us on this exception. Thanks.
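The AuthorizationFailed error in the mic logs says the identity cannot perform Microsoft.Compute/virtualMachineScaleSets/read on the node resource group (the MC_ one), so one direction is to grant that identity a role on that scope. A rough sketch with the az CLI; the client ID, subscription ID, role and resource group are placeholders to adapt to your setup:
# Grant the identity access on the node (MC_) resource group - placeholders throughout
az role assignment create --assignee <identity-client-id> --role "Virtual Machine Contributor" --scope "/subscriptions/<subscription-id>/resourceGroups/<MC_resource_group>"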

Rancher can not access the cluster

I use Rancher to access the cluster, but the access failed and an error was reported:
Cluster health check failed: Failed to communicate with API server: Get "https://172.20.0.1:443/api/v1/namespaces/kube-system?timeout=45s": context deadline exceeded.
Error Get "https://172.20.0.1:443/api/v1/namespaces?timeout=45s": waiting for cluster [c-9dbht] agent to connect.
I found that the cattle-cluster-agent is in CrashLoopBackOff state and reports the following errors:
error msg="failed to unmarshal https://releases.rancher.com/index.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type repo.IndexFile".
level=error msg="error syncing 'rancher-latest': handler helm-clusterrepo-download: failed to parse response from https://releases.rancher.com/index.yaml, requeuing".
Observed a panic: runtime.boundsError{x:3, y:0, signed:true, code:0x2} (runtime error: slice bounds out of range [:3] with capacity 0).
The cattle-cluster-agent's container is constantly being restarted.
Does anyone know how to solve it?
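The unmarshal and slice-bounds errors suggest the agent is not receiving a valid Helm repository index from https://releases.rancher.com/index.yaml at all; a proxy or firewall returning an HTML or error page would produce exactly this kind of parse failure. A quick, hedged check of what the cluster actually receives, using a throwaway curl pod:
# Inspect the response headers the cluster gets for the index file
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sIL https://releases.rancher.com/index.yaml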

Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

I have the ingress controller up and running in the default namespace. My other namespaces have their own ingress YAML files. Whenever I try to deploy those, I get the following error:
Error from server (InternalError): error when creating "orchestration-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: x509: certificate is valid for ingress-nginx-controller-admission, ingress-nginx-controller-admission.ingress-nginx.svc, not ingress-nginx-controller-admission.default.svc
This solved my error: I removed the previous version of the ingress controller and deployed this one.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml
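If the certificate mismatch persists after redeploying, it is often because the admission webhook from the earlier install (still pointing at the default namespace) was left behind. A hedged cleanup; the webhook name below is the usual default but may differ in your install:
# List admission webhooks and delete the stale ingress-nginx one
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration ingress-nginx-admission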

Azure CD Issue : Failed to fetch App Service 'myAppServiceName' publishing credentials

I'm trying to deploy my release to an Azure Web App. It's not working and I don't know what to do. Maybe I'm missing something in the configuration of my App Service or in my release pipeline. I've got the following error:
Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal.
And here is a block of my debug log:
2019-04-11T08:25:35.4761242Z ##[debug]Predeployment Step Started
2019-04-11T08:25:35.4776374Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data subscriptionid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
2019-04-11T08:25:35.4776793Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data subscriptionname = Paiement à l’utilisation
2019-04-11T08:25:35.4777798Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param serviceprincipalid = null
2019-04-11T08:25:35.4778094Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data environmentAuthorityUrl = https://login.windows.net/
2019-04-11T08:25:35.4781237Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param tenantid = ***
2019-04-11T08:25:35.4782509Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e=https://management.azure.com/
2019-04-11T08:25:35.4782769Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data environment = AzureCloud
2019-04-11T08:25:35.4785012Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth scheme = ManagedServiceIdentity
2019-04-11T08:25:35.4785626Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data msiclientId = undefined
2019-04-11T08:25:35.4785882Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data activeDirectoryServiceEndpointResourceId = https://management.core.windows.net/
2019-04-11T08:25:35.4786107Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data AzureKeyVaultServiceEndpointResourceId = https://vault.azure.net
2019-04-11T08:25:35.4786348Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data AzureKeyVaultDnsSuffix = vault.azure.net
2019-04-11T08:25:35.4786525Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param authenticationType = null
2019-04-11T08:25:35.4786735Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data EnableAdfsAuthentication = false
2019-04-11T08:25:35.4792324Z ##[debug]{"subscriptionID":"mysubscriptionID","subscriptionName":"Paiement à l’utilisation","servicePrincipalClientID":null,"environmentAuthorityUrl":"https://login.windows.net/","tenantID":"***","url":"https://management.azure.com/","environment":"AzureCloud","scheme":"ManagedServiceIdentity","activeDirectoryResourceID":"https://management.azure.com/","azureKeyVaultServiceEndpointResourceId":"https://vault.azure.net","azureKeyVaultDnsSuffix":"vault.azure.net","authenticationType":null,"isADFSEnabled":false,"applicationTokenCredentials":{"clientId":null,"domain":"***","baseUrl":"https://management.azure.com/","authorityUrl":"https://login.windows.net/","activeDirectoryResourceId":"https://management.azure.com/","isAzureStackEnvironment":false,"scheme":0,"isADFSEnabled":false}}
2019-04-11T08:25:35.4809400Z Got service connection details for Azure App Service:'myAppServiceName'
2019-04-11T08:25:35.4846967Z ##[debug][GET]http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/
2019-04-11T08:25:35.5443632Z ##[debug]Deployment Failed with Error: Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5444488Z ##[debug]task result: Failed
2019-04-11T08:25:35.5501745Z ##[error]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5511780Z ##[debug]Processed: ##vso[task.issue type=error;]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5512729Z ##[debug]Processed: ##vso[task.complete result=Failed;]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5512828Z Failed to add release annotation. Error: Failed to get App service 'myAppServiceName' application settings. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5645194Z (node:5004) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Failed to fetch App Service 'myAppServiceName' publishing profile. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5759915Z ##[section]Finishing: Deploy Azure App Service
And some screenshots (images not included here): the Azure configuration that may be missing something, and release pipeline configs 1, 2 and 3.
Let me know if you need more information. I'm new to this, so maybe I'm missing simple things. Best regards.
Do you have the identity Status set to On?
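For a CLI-side check (a sketch only; the names below are placeholders), the system-assigned identity can be inspected on the App Service and on the VM that runs the pipeline agent, since the error complains about MSI on a virtual machine:
# Placeholders: substitute your own resource names
az webapp identity show --name myAppServiceName --resource-group my-resource-group
az vm identity show --name my-agent-vm --resource-group my-resource-group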
In my case, we had just moved our App Service to a new resource group, but the pipeline was still referencing the old resource group. Correcting the resource group fixed the issue.
A simple typo can also be the reason for this error message.
You will get this error message even if it's just a typo or a wrong value in your "slotName".
Please do ensure that the "slotName" you've given is the actual slot name (the default is 'production'). So if you've added a slot called 'stage', then inside the portal it will appear as '/stage' or '-stage', but it's still just called 'stage'.
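To double-check which slot names actually exist on the App Service (placeholder names; the production slot is implicit and is not listed by this command):
az webapp deployment slot list --name myAppServiceName --resource-group my-resource-group --output table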
I know several people have had this error message and none of the above helped them out (I faced the same issue the first time).
My research indicated this to be an intermittent problem.
I redeployed 2 times and it worked.
The first redeploy just seemed to wait for ages to connect to an available agent, so I cancelled that too and redeployed, which worked without any issue.
If this is still an issue or if someone else hits it, all I did was rerun the release and it went well. Hopefully this saves someone time by just re-releasing; if that won't work, then probably try something else.