Connecting Kubernetes to Google Cloud SQL - Invalid JWT Signature

I'm using the Kubernetes sidecar pattern to connect to Cloud SQL, following the instructions here: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
The cloudsql-proxy container is giving the error:
2017/12/22 14:34:02 couldn't connect to "beliefer-4342:us-central1:beliefer-4342-cloud-instance": Post https://www.googleapis.com/sql/v1beta4/projects/beliefer-4342/instances/beliefer-4342-cloud-instance/createEphemeral?alt=json: oauth2: cannot fetch token: 400 Bad Request
Response: {
"error" : "invalid_grant",
"error_description" : "Invalid JWT Signature."
}

Do you have NTP configured in your VM in Google Cloud? The first thing to do is ensure your VM's time is synchronized with an NTP server. If NTP is working fine, you can also set a different expiration time for the token; 1000 seconds should work fine for your case.
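A quick way to check for clock drift (a minimal sketch, assuming Python with the third-party ntplib package; pool.ntp.org is just an example server):

import ntplib

# Query an NTP server and compare against the local clock. An offset of
# more than a few seconds can produce "invalid_grant" errors, because the
# token's iat/exp claims are computed from the skewed local time.
client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print("clock offset vs NTP: %.3f seconds" % response.offset)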
An invalid JWT signature error can also mean that your signature failed to authenticate your $jwtHeader and $jwtClaim. Are you using the correct private key obtained from the API console, together with the correct email address of the service account? Additionally, you need to base64-encode your signature.
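To make that concrete, here is roughly the token request the client library makes under the hood (a sketch only, assuming Python with the PyJWT and requests packages and a service account key downloaded to key.json; the scope and endpoint follow Google's OAuth 2.0 service account flow):

import json
import time

import jwt       # PyJWT
import requests

with open("key.json") as f:
    key = json.load(f)

now = int(time.time())
claims = {
    "iss": key["client_email"],  # service account email, must match the key
    "scope": "https://www.googleapis.com/auth/sqlservice.admin",
    "aud": "https://oauth2.googleapis.com/token",
    "iat": now,          # computed from the local clock...
    "exp": now + 1000,   # ...so clock skew here yields invalid_grant
}

# PyJWT base64url-encodes the header and claims and signs them with RS256.
assertion = jwt.encode(claims, key["private_key"], algorithm="RS256")

resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    },
)
print(resp.status_code, resp.json())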
Containers and virtual machines use different approaches to networking and management, which can complicate connections. For more detail on tools and operations, you can check out this wiki.

Related

Identity Server 4 API JWT, Load Balancing, Data Protection, Kubernetes

Running into issues with multiple instances of IdentityServer4 on Kubernetes exposed by the load balancer. I don't think there is an issue with credential login; my issues are around JWT tokens. It works fine when there is only one instance.
Overview:
IdentityServer4
MongoDB Data Storage
PersistedGrantStore
Data Protection setup on Redis
Multiple .NET Core 3.1 Web APIs, using AddIdentityServerAuthentication in startup, passing in the connection and the API name. I am running multiple instances of the API; reducing down to one, I still get the same issue. It works fine if there is only one instance of the Identity Server, but with multiple instances I get the following error on the API:
"Bearer" was not authenticated. Failure message: "IDX10501: Signature validation failed. Unable to match key:
I am not getting any errors or failed authentications on the IdentityServer logs.
So the questions going on in my head: since it's a JWT token, I believe the request should be validated from the token itself, i.e. the API should not need to request info from the Identity Server? Identity Server has Data Protection set up with Redis as its store, and I can see it has dropped info in there. I have a persisted grant store, but tokens are not added to it.
Do I need to switch to reference tokens instead of JWT? What is the likely overhead of that?
Are the tokens not getting shared between the API instances via Data Protection?
Thanks for any advice / suggestions.
In case anyone else comes across this: it was down to mistakenly leaving the developer signing credential in the Identity Server config. The developer credential generates its own key per instance, so tokens signed by one instance fail validation on the others. Replacing it with a certificate solved the issue:
builder.AddDeveloperSigningCredential();
to
builder.AddSigningCredential(rsaCertificate);

Keycloak Gatekeeper always fails to validate the 'iss' claim value

Adding match-claims to the configuration file doesn't seem to do anything. Gatekeeper always throws the same error when opening a resource (with or without the property).
My Keycloak server is inside a Docker container, accessible from the internal network as http://keycloak:8080 and from the external network as http://localhost:8085.
I have Gatekeeper connecting to the Keycloak server on the internal network. The request comes from the external one; therefore, the discovery-url will not match the 'iss' token claim.
Gatekeeper tries to use the discovery-url as the 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak-Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the 'iss' claim or bypass this validation (preferably the former)?
It is failing during discovery data validation; your setup violates the OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code; it should be in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal and external networks, content rewrite for internal network requests, ...) and you will be fine.
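You can observe the rule directly by fetching the discovery document from each hostname and comparing its issuer field (a sketch, assuming Python with requests, run somewhere that can resolve both hostnames; the realm URLs are the ones from the question):

import requests

# OIDC rule: the "issuer" in the discovery document MUST equal the URL
# the document was fetched from (minus the well-known suffix).
for base in (
    "http://keycloak:8080/auth/realms/myRealm",    # internal
    "http://localhost:8085/auth/realms/myRealm",   # external
):
    config = requests.get(base + "/.well-known/openid-configuration").json()
    print(base)
    print("  issuer: ", config["issuer"])
    print("  matches:", config["issuer"] == base)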
Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
This way both outside-world access and internal calls to Keycloak can be achieved.

Token introspection considering token as not active

I have Keycloak 4.0.0 installed on two Debian Stretch machines, configured in standalone clustered mode.
Both share a MySQL cluster database instance, and a load balancer is doing HA.
I have code which needs to validate tokens against the introspection endpoint, but it's not working half of the time.
This is actually because the load balancer is doing its job, and it is consequently easy to reproduce:
ask for a token on /auth/realms//protocol/openid-connect/token on server 1
call the introspection endpoint /auth/realms//protocol/openid-connect/token/introspect on server 2 to check the access token provided by server 1
If I call the introspection endpoint on server 1, I get the JSON response I expect, but on server 2 I just get active: false.
This is quite strange, because sessions are replicated in the admin interface under "show sessions".
Any ideas?
Thanks!
Rémi
I was facing the same issue.
For the introspect API, try setting the Host header.
For example: when hitting the /protocol/openid-connect/token API, pass the header "host: foo".
Then, when hitting the /protocol/openid-connect/token/introspect API, set the same header "host: foo".
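For example (a sketch, assuming Python with requests; the realm, client credentials, and "foo" host value are placeholders):

import requests

BASE = "http://keycloak:8080/auth/realms/myRealm"
HEADERS = {"Host": "foo"}  # same Host value on both calls

# 1. Ask for a token, pinning the Host header.
token_resp = requests.post(
    BASE + "/protocol/openid-connect/token",
    headers=HEADERS,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-client",
        "client_secret": "my-secret",
    },
)
access_token = token_resp.json()["access_token"]

# 2. Introspect with the same Host header, so whichever cluster node
#    answers derives the same issuer and reports the token as active.
introspect_resp = requests.post(
    BASE + "/protocol/openid-connect/token/introspect",
    headers=HEADERS,
    auth=("my-client", "my-secret"),
    data={"token": access_token},
)
print(introspect_resp.json())  # expect "active": true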

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
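To illustrate the fix (shown in Python for illustration, assuming the google-cloud-storage Python client rather than the Ruby gem used in the question; the key file, bucket, and object names are placeholders):

import datetime

from google.cloud import storage

# Signing a URL needs the private key, so load explicit service account
# credentials instead of relying on the ambient App Engine credentials.
client = storage.Client.from_service_account_json("keyfile.json")

blob = client.bucket("my-bucket").blob("path/to/image")
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)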
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
service: GCS
credentials: <%= Rails.root.join("path/to/keyfile.json") %>
project: ""
bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my Rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Calling Google::Auth.get_application_default as in the documentation returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash in place of the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid), and that you have the "issuer" string correct: it must be the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I think these are project-global rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin for a certain Storage bucket).
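For reference, signBlob can be exercised directly against the IAM Credentials API, which also shows the projects/-/ dash mentioned above (a sketch, assuming Python with the google-auth and requests packages; the service account email is a placeholder):

import base64

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Uses whatever ambient credentials are available; they need
# roles/iam.serviceAccountTokenCreator on the target service account.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

email = "my-sa@my-project.iam.gserviceaccount.com"
resp = session.post(
    "https://iamcredentials.googleapis.com/v1/projects/-/"
    "serviceAccounts/%s:signBlob" % email,
    json={"payload": base64.b64encode(b"bytes to sign").decode()},
)
print(resp.status_code, resp.json())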

Azure mobile service throws "Authorization has been denied for this request."

I'm using Azure Mobile Services with a .NET backend. My API controller works OK on my PC, but as soon as I deploy it to Azure, pinging it from Postman gives an "Authorization has been denied for this request." message with HttpStatusCode 401.
Note that I'm using Table Storage for storage instead of SQL Server, and in the process I removed all Entity Framework related code. Also, none of the endpoints require any authentication.
Thanks.
The default authentication level for mobile services is anonymous (i.e., no authentication required) when running locally, and application (i.e., at least the application key needs to be supplied) when deployed to Azure.
If you're using Postman, try adding an "x-zumo-application" header to the request, with the application key (which you can get in the Azure portal) as the value. The request should work then.
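For example (a sketch, assuming Python with requests; the service URL and key are placeholders):

import requests

resp = requests.get(
    "https://myservice.azure-mobile.net/api/mycontroller",
    # Application key from the Azure portal; the "application" auth level
    # checks this header rather than a user token.
    headers={"x-zumo-application": "<your-application-key>"},
)
print(resp.status_code)  # expect 200 instead of 401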