Keycloak Gatekeeper always fails to validate the 'iss' claim value

Adding match-claims to the configuration file doesn't seem to do anything: Gatekeeper always throws the same error when I open a resource, with or without the property.
My Keycloak server runs inside a Docker container and is reachable as http://keycloak:8080 from the internal network and as http://localhost:8085 from the external network.
Gatekeeper connects to the Keycloak server over the internal network, while the request comes from the external one, so the discovery-url will not match the 'iss' token claim.
Gatekeeper uses the discovery-url as the expected 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak-Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the 'iss' claim, or bypass this validation (preferably the former)?

It is failing during discovery data validation: your setup violates the OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code; the check lives in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal and external networks, or rewrite content for internal network requests) and you will be fine.
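For a Docker Compose setup, one way to get a single host name (a sketch, not part of the original answer; the service, network, alias and image names are assumptions and may need adjusting) is to give the Keycloak container a network alias that matches the externally visible host name and publish it on the same port, so the discovery-url and the 'iss' claim agree:
# docker-compose.yml (sketch) -- "keycloak.local" must also resolve on the
# host, e.g. via an /etc/hosts entry, so browsers and Gatekeeper use the same URL
version: "3.7"
services:
  keycloak:
    image: jboss/keycloak:8.0.0
    ports:
      - "8080:8080"                      # same port inside and outside
    networks:
      backend:
        aliases:
          - keycloak.local               # internal DNS name == external DNS name
  gatekeeper:
    image: keycloak/keycloak-gatekeeper:8.0.0
    command:
      - --discovery-url=http://keycloak.local:8080/auth/realms/myRealm
      - --client-id=gatekeeper           # assumption: a client registered in the realm
      - --listen=:3000
      - --upstream-url=http://app:8080   # assumption: the protected service
    networks:
      - backend
networks:
  backend: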

Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
This way both outside-world access and internal calls to Keycloak go through the same host name.
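Applied to the Gatekeeper configuration from the question, that would look roughly like this (a sketch; it assumes Keycloak is published on port 8085 of the Docker host and that host.docker.internal resolves both inside the container and on the host — Docker Desktop does this, on Linux you may need an extra hosts entry):
# gatekeeper config (sketch)
discovery-url: http://host.docker.internal:8085/auth/realms/myRealm
listen: :3000
# no match-claims override needed once discovery-url and 'iss' agree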

Related

"Unexpected error when authenticating with identity provider" error when Keycloak broker is configured as a client to another Keycloak instance

I am getting an error when I try to log in to Keycloak using it as a broker. I am using credentials from another Keycloak instance to log in. So far, I am redirected to the correct login page, but after entering my credentials I receive an error.
I have set up Keycloak Identity Brokering on computer 1 by following the basic steps. I used the generated redirect URI of the broker to register a new client on computer 2, in another Keycloak instance. The client configuration from computer 2 is then used to fill in the Authorization URL, Token URL, Client ID and Client Secret of the identity broker on computer 1.
I may be leaving important fields out. Pictures are attached for reference.
I changed some settings to get the broker to work with the other Keycloak instance: the client secret is now sent as basic auth, signature validation is off, and back-channel logout is enabled. Hope this helps someone else.
I fixed this problem by regenerating the client secret on the identity provider side and using it on Keycloak. Apparently the Keycloak realm data import was not working very well for me.
In my case I needed to empty the hosted domain field in the "Identity providers" configuration of my Google identity provider in Keycloak.
See also:
Keycloak Google identity provider error: "Identity token does not contain hosted domain parameter"

Identity Server 4 issued JWT Validation failure

I have an identity server based on IdentityServer4 (.NET Core v2, targeting the full .NET Framework), and an ASP.NET Web API built against ASP.NET Web API 2 (i.e. NOT .NET Core) that uses the IdentityServer3 OWIN middleware for token authentication.
When running locally, everything works just fine - I can use Postman to request an Access Token from the Identity Server using a RO Password flow, and I can then make a request to the WebAPI sending the token as a Bearer token - all works fine.
Now, when everything is hosted on our test servers, I get a problem when calling the WebAPI - I simply get an Unauthorized response. The token returned from the Identity server is ok (checked using http://jwt.io), but validation of the JWT is failing in the WebAPI.
On further investigation, after adding Katana logging, I see that a SecurityTokenInvalidAudienceException is being reported.
Audience validation failed. Audiences:
'https://11.22.33.44:1234/resources, XXXWebApi'. Did not match:
validationParameters.ValidAudience: 'https://localhost:1234/resources'
or validationParameters.ValidAudiences: 'null'
Looking at the JWT audience, we have:
aud: "https://11.22.33.44:1234/resources", "XXXWebApi"
In the WebAPI Startup, I have the call to
app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
{
    Authority = , // path to our local ID Server
    ClientId = "XXXWebApi",
    ClientSecret = "XXX_xxx-xxx-xxx-xxx",
    RequiredScopes = new[] { "XXXWebApi" }
});
So the JWT audience looks ok, but it's obviously not matching what is supplied by the middleware (built from the IdP discovery endpoint). I would have thought that specifying RequiredScopes to include XXXWebApi would have been enough to match the JWT's audience, but that seems to be ignored.
I'm unsure what to change in the WebAPI authentication options to make this work.
EDIT: I changed the WebAPI Token auth options to use the validation endpoint, and this also fails in the IdentityServer with the same error.
If I call the Identity Server introspection endpoint directly from Postman with the same token though, it succeeds.
Ok, so after a lot of head scratching and trying various things out I at least have something working.
I had to ensure the Identity Server was hosted against a publicly available DNS, and configure the Authority value in the IdentityServerBearerTokenAuthenticationOptions to use the same value.
That way, any tokens issued have the xx.yy.zz full domain name in the JWT audience (aud), and when the OWIN validation middleware in the WebAPI verifies the JWT it uses the same address for comparison rather than localhost.
I'm still slightly confused as to why the middleware can't just use the scope value for validation, because the token was issued with the API resource scope (XXXWebApi) in the audience and the API requests the same scope id/name in the options as shown.
As far as I understand, your WebAPI project is used as an API resource.
If so, remove the ClientId and ClientSecret from UseIdentityServerBearerTokenAuthentication and keep RequiredScopes and the Authority (you may also need to set ValidationMode = ValidationMode.Both).
You only need ClientId and ClientSecret when you are using reference tokens; from what you've said, you are using a JWT.
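Roughly, the options would then look like this (a sketch of the suggested change, not the asker's final code; the authority URL is a placeholder for the identity server's public DNS name):
app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
{
    Authority = "https://id.example.com",       // placeholder: the public DNS name, not localhost
    RequiredScopes = new[] { "XXXWebApi" },     // the API resource scope
    ValidationMode = ValidationMode.Both        // JWTs validated locally, reference tokens via the endpoint
    // no ClientId / ClientSecret: those are only needed for reference token validation
});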

Connecting Kubernetes to Google Cloud SQL - Invalid JWT Signature

Using the Kubernetes sidecar pattern to connect to Cloud SQL. Followed instructions here: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
The cloudsql-proxy container is giving the error:
2017/12/22 14:34:02 couldn't connect to "beliefer-4342:us-central1:beliefer-4342-cloud-instance": Post https://www.googleapis.com/sql/v1beta4/projects/beliefer-4342/instances/beliefer-4342-cloud-instance/createEphemeral?alt=json: oauth2: cannot fetch token: 400 Bad Request
Response: {
"error" : "invalid_grant",
"error_description" : "Invalid JWT Signature."
}
Do you have NTP configured on your VM in Google Cloud? The first thing to do is to ensure the VM's clock is synchronized with an NTP server. If NTP is working fine, you can also set a different expiration time for the token; 1000 seconds should work fine for your case.
An invalid JWT signature error can also mean that the signature does not authenticate your $jwtHeader and $jwtClaim. Are you using the correct private key obtained from the API console, together with the correct email address of the service account? Additionally, the signature needs to be base64-encoded.
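If the key itself is the problem, the usual fix with the cloudsql-proxy sidecar is to generate a fresh service-account key, recreate the Kubernetes secret, and make sure the proxy mounts it. A minimal excerpt (a sketch; the secret and file names follow the guide linked in the question, the instance string is taken from the error message):
# sidecar container in the pod spec (sketch)
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=beliefer-4342:us-central1:beliefer-4342-cloud-instance=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
# after creating a new key for the proxy's service account:
# kubectl create secret generic cloudsql-instance-credentials \
#   --from-file=credentials.json=<new-key>.json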
Containers and virtual machines use different approaches to networking and management, which can complicate connections.

Keycloak issuer validation and multi-tenancy approach

Let's say we have several microservices, each of which uses Keycloak authentication. We also have a load balancer, based on e.g. nginx, which has external URLs and different routes to Keycloak (for example, in OpenShift it can be https://keycloak.rhel-cdk.10.1.2.2.xip.io). But internally this address can be inaccessible. Also, making the microservice configuration dependent on the load balancer URL is a bit awkward. It would be more appropriate to use the internal Keycloak auth URL inside the microservices, or even a short URI. But in that case the token will not be validated, because of the issuer validation problem. How can this be configured in a good and flexible manner? Can I simply override realmInfoUrl to change the validation? Can I define which issuer will be used for a client-based token?
Another question is how to best handle a multi-tenant scenario. On the client side, I guess there is no specific support for multi-tenancy; I would have to handle it manually by switching between different URLs/headers and using a proper config resolver. On the server side, I need to dynamically provide a proper KeycloakDeployment instance for each case. Any other recommendations?
Unfortunately, Keycloak is too restrictive in how it validates the issuer ("iss") field of the token: it requires that the URL used to validate the token match the URL in the "iss" field.
A while ago I opened a JIRA ticket for that problem (vote for it!): https://issues.jboss.org/browse/KEYCLOAK-5045
In case this helps anyone out during the early stages of development: you can set the Host header to the Keycloak URL that your backend service will use when validating the token. This way, the generated token will contain the Host header URL in the issuer field. In my sandbox, I had Keycloak running on Docker at keycloak:8080 and a functional test calling Keycloak via localhost:8095 to request a token (direct grant). Before setting the Host header to keycloak:8080, the issuer field was being set to localhost:8095 and the token was failing validation with the "Invalid token issuer" error, since the backend service connects to Keycloak on keycloak:8080 and TokenVerifier.java does the following check.
public boolean test(JsonWebToken t) throws VerificationException {
    if (this.realmUrl == null) {
        throw new VerificationException("Realm URL not set");
    } else if (!this.realmUrl.equals(t.getIssuer())) {
        throw new VerificationException("Invalid token issuer. Expected '" + this.realmUrl + "', but was '" + t.getIssuer() + "'");
    } else {
        return true;
    }
}
Reference: https://github.com/keycloak/keycloak-community/blob/master/design/hostname-default-provider.md
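For illustration, the direct-grant token request from that sandbox would look roughly like this (realm, client and credentials are made-up placeholders; the overridden Host header is what makes the issuer come out as http://keycloak:8080/... instead of http://localhost:8095/...):
curl -X POST http://localhost:8095/auth/realms/myRealm/protocol/openid-connect/token \
  -H "Host: keycloak:8080" \
  -d "grant_type=password" \
  -d "client_id=my-client" \
  -d "username=alice" \
  -d "password=secret"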

Nexus OSS Remote User Token (RUT) for SSO

Hello, I am using Nexus OSS and wanted to simulate SSO using Remote User Token (RUT). Currently Nexus is configured for LDAP authentication, which is working fine.
As per the instructions found here https://books.sonatype.com/nexus-book/reference/rutauth.html
Basically, I enabled Remote User Token and set the header field name to "REMOTE_USER". The user passed in the header exists in LDAP and has access.
This instance is behind Apache, so to test RUT I set the REMOTE_USER header value from Apache. However, I don't see the passed-in user getting logged in, nor do I see a cookie being generated. I even tried a Firefox REST API client and set the header there, with the same results. I can see that the HTTP header is being set correctly.
Am I missing something?
Is there a way to debug that? Appreciate any help.
Thanks
S
RUT handles authentication, but the authenticated user still needs to be authorized to access the web UI. What this means is that you need an LDAP user or group mapping in Nexus which assigns the necessary roles and privileges to the user.
I had a similar issue with Nginx: the header was not being set with the correct value.
This can be quite confusing, as the reverse proxy does not complain and simply sends a blank request header to Nexus.
Using Keycloak and Nginx (Lua), instead of reading the preferred_username field at the top level of the IdP response:
-- set headers with user info: this will overwrite any existing headers
-- but also scrub(!) them in case no value is provided in the token
ngx.req.set_header("X-Proxy-REMOTE-USER", res.preferred_username)
I had to use the preferred_username field returned in the response's id_token element:
-- set headers with user info: this will overwrite any existing headers
-- but also scrub(!) them in case no value is provided in the token
ngx.req.set_header("X-Proxy-REMOTE-USER", res.id_token.preferred_username)