I'm using the AWS Elastic Load Balancer to authenticate users. It signs the user claims so that applications can verify the signature and confirm that the claims were sent by the load balancer, as described in:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html#user-claims-encoding
To verify the signature, it is necessary to request the public key located at:
https://public-keys.auth.elb.region.amazonaws.com/key-id
Notice that the key-id is dynamic and is sent along with the JWT in its header, in the kid field:
{
"alg": "algorithm",
"kid": "12345678-1234-1234-1234-123456789012",
"signer": "arn:aws:elasticloadbalancing:region-code:account-id:loadbalancer/app/load-balancer-name/load-balancer-id",
"iss": "url",
"client": "client-id",
"exp": "expiration"
}
At the application level, I want to use Quarkus with smallrye-jwt to verify the JWT.
Reading the guide at:
https://quarkus.io/guides/security-jwt#configuration-reference
There is the configuration property mp.jwt.verify.publickey.location, which accepts a URL, but how do I configure it when the public key URL from AWS requires a key-id to be extracted from the JWT header?
I had a similar problem with string values in Docker secrets. I ended up writing my own config interceptor, which I found in the guide below.
https://quarkus.io/guides/config-extending-support
So in your case, my suggestion is to intercept the property you need to resolve/extract and use a REST client to fetch the value.
The solution doesn't come off as elegant, but it should do the trick for you.
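Since a static mp.jwt.verify.publickey.location can't vary per token, the per-kid lookup has to happen in your own code. Below is a minimal sketch of that building block, assuming JDK 11+ and the jakarta.json API that Quarkus provides; ElbKeyFetcher, extractKid, fetchPublicKeyPem, and the region parameter are illustrative names, not part of smallrye-jwt:
import java.io.StringReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import jakarta.json.Json;

public class ElbKeyFetcher {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Decode the (unverified) JWT header and pull out the kid field.
    static String extractKid(String jwt) {
        String headerJson = new String(
                Base64.getUrlDecoder().decode(jwt.split("\\.")[0]),
                StandardCharsets.UTF_8);
        try (var reader = Json.createReader(new StringReader(headerJson))) {
            return reader.readObject().getString("kid");
        }
    }

    // Fetch the PEM-encoded public key for that kid from the regional ELB endpoint.
    static String fetchPublicKeyPem(String region, String kid) throws Exception {
        URI uri = URI.create(
                "https://public-keys.auth.elb." + region + ".amazonaws.com/" + kid);
        HttpResponse<String> response = HTTP.send(
                HttpRequest.newBuilder(uri).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
The fetched PEM could then be fed into a custom JWTCallerPrincipalFactory (see the Quarkus security-customization guide) or into a JOSE library directly, rather than into mp.jwt.verify.publickey.location.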
Background
On the Google Kubernetes Engine we've been using Cloud Endpoints, and the Extensible Service Proxy (v2) for service-to-service authentication.
The services authenticate themselves by including the bearer JWT token in the Authorization header of the HTTP requests.
The identity of the services has been maintained with GCP Service Accounts, and during deployment, the JSON Service Account key is mounted into the container at a predefined location, and that location is set as the value of the GOOGLE_APPLICATION_CREDENTIALS env var.
The services are implemented in C# with ASP.NET Core, and to generate the actual JWT token, we use the Google Cloud SDK (https://github.com/googleapis/google-cloud-dotnet, and https://github.com/googleapis/google-api-dotnet-client), where we call the following method:
var credentials = GoogleCredential.GetApplicationDefault();
If GOOGLE_APPLICATION_CREDENTIALS is correctly set to the path of the Service Account key, then this returns a ServiceAccountCredential object, on which we can call the GetAccessTokenForRequestAsync() method, which returns the actual JWT token.
var jwtToken = await credentials.GetAccessTokenForRequestAsync("https://other-service.example.com/");
var authHeader = $"Bearer {jwtToken}";
This process has been working correctly without any issues.
We are now in the process of migrating from manually maintained Service Account keys to Workload Identity, and I cannot figure out how to correctly use the Google Cloud SDK to generate the necessary JWT tokens in this case.
The problem
When we enable Workload Identity in the container, and don't mount the Service Account key file, nor set the GOOGLE_APPLICATION_CREDENTIALS env var, then the GoogleCredential.GetApplicationDefault() call returns a ComputeCredential instead of a ServiceAccountCredential.
And if we call the GetAccessTokenForRequestAsync() method, that returns a token which is not in the JWT format.
I checked the implementation, and the token seems to be retrieved from the metadata server, whose expected response format seems to be the standard OAuth 2.0 model (represented in this model class):
{
"access_token": "foo",
"id_token": "bar",
"token_type": "Bearer",
...
}
And the GetAccessTokenForRequestAsync() method returns the value of access_token. But as far as I understand, that's not a JWT token, and indeed when I tried using it to authenticate against ESP, it responded with
{
"code": 16,
"message": "JWT validation failed: Bad JWT format: Invalid JSON in header",
...
}
As far as I understand, normally the id_token contains the JWT token, which should be accessible via the IdToken property of the TokenResponse object. This is also accessible via the SDK, so I tried accessing it like this:
var jwtToken = ((ComputeCredential)creds.UnderlyingCredential).Token.IdToken;
But this returns null, so apparently the metadata server does not return anything in the id_token field.
Question
What would be the correct way to get the JWT token with the .NET Google Cloud SDK for accessing ESP, when using Workload Identity in GKE?
To get an IdToken for the attached service account, you can use GoogleCredential.GetApplicationDefault().GetOidcTokenAsync(...).
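A minimal sketch of that call, assuming Google.Apis.Auth 1.46 or later and .NET 5+ top-level statements; the audience URL is illustrative:
using Google.Apis.Auth.OAuth2;

var credential = await GoogleCredential.GetApplicationDefaultAsync();

// Ask the metadata server (via Workload Identity) for an OIDC identity token
// whose "aud" claim matches the service being called.
OidcToken oidcToken = await credential.GetOidcTokenAsync(
    OidcTokenOptions.FromTargetAudience("https://other-service.example.com/"));

// The returned value is a signed JWT; GetAccessTokenAsync caches and refreshes it.
string jwt = await oidcToken.GetAccessTokenAsync();
var authHeader = $"Bearer {jwt}";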
I'm creating a Java application where I will need users to log in. Currently I'm verifying whether I can configure Keycloak to be safe enough. I'd like to make sure my application is really authenticating users against my Keycloak server - e.g. I know there are attacks like DNS poisoning, where my application could end up talking to an attacker's server with a duplicated/attacker-controlled Keycloak instance. What surprised me is that I currently have a configuration with the following keys:
keycloak.auth-server-url=...
keycloak.realm=...
keycloak.resource=...
keycloak.public-client=true
keycloak.security-constraints[0].authRoles[0]=..
keycloak.security-constraints[0].securityCollections[0].patterns[0]=...
keycloak.principal-attribute=preferred_username
and no public key is needed. Even worse, here: https://stackoverflow.com/a/40516696/520521 I see an upvoted comment saying that my application may download the key from the (possibly malicious) server.
Are there any extra steps I need to follow, to authenticate Keycloak server before starting to authenticate users against it?
Based on your configuration, it seems that you've defined your client in Keycloak as public. This allows your client to call Keycloak without any authentication. This type of client is used, for example, when you're going to authenticate via JavaScript in a web page, in which nothing can be hidden from an attacker, as they have access to the source of the page.
If you set the "Access Type" of your client to "confidential" (in Client Settings on the Keycloak Admin UI) and save the settings, another tab will appear (next to the "Settings" tab of the client) titled "Credentials". There you can see the default secret that is created for your client. You should then put this secret as below in the keycloak.json file inside your application:
"credentials": {
"secret": "paste-the-secret-value-here"
}
You can also re-generate the value by selecting the "Regenerate Secret" button.
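For context, a complete keycloak.json for a confidential client might look like the sketch below; the realm, URL, and client names are placeholders:
{
  "realm": "my-realm",
  "auth-server-url": "https://keycloak.example.com/auth",
  "resource": "my-client",
  "credentials": {
    "secret": "paste-the-secret-value-here"
  }
}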
You can also change the "Client Authenticator" there and set it to "X509 Certificate". You would then be asked to define a regular expression to validate the "Subject DN" of the certificate that the client will be using for authentication. Any certificate matching that regex would be considered valid and authenticated. You then have to set up your client to use such a certificate instead of defining the "secret" value in the keycloak.json file.
There is of course another option, "Signed JWT", which is also secure; you can find the details about how to set it up in the Client Authentication section of the Keycloak documentation.
I can't see it stated anywhere that this is the aim, but seeing where the public and private keys are placed, I understand the answer to be: in Realm Settings -> Keys -> Active there is a list of keys. You may download the public key or certificate with the button on the right side. In my case of Spring Boot, enter the public key in the application.properties file under keycloak.realm-key.
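A sketch of the resulting application.properties, with placeholder values and a truncated key:
keycloak.realm=my-realm
keycloak.auth-server-url=https://keycloak.example.com/auth
keycloak.resource=my-client
# public key downloaded from Realm Settings -> Keys -> Active
keycloak.realm-key=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...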
Adding match-claims to the configuration file doesn't seem to do anything. Gatekeeper always throws the same error when opening a resource (with or without the property).
My Keycloak server is inside a docker container, accessible from an internal network as http://keycloak:8080 while accessible from the external network as http://localhost:8085.
I have Gatekeeper connecting to the Keycloak server in an internal network. The request comes from the external one, therefore, the discovery-url will not match the 'iss' token claim.
Gatekeeper is trying to use the discovery-url as the 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak-Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the iss claim or bypass this validation (preferably the former)?
It is failing during discovery data validation - your setup violates the OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code - it should be in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal/external network, content rewriting for internal network requests, ...) and you will be fine.
Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
This way, both outside-world access and internal calls to Keycloak go through the same host name.
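In Gatekeeper terms, that means the discovery-url and the token issuer resolve to the same name, so no match-claims override is needed; a sketch, where the port (if any) depends on your Docker port mapping:
discovery-url: http://host.docker.internal:8085/auth/realms/myRealm
# no match-claims override needed once the issuer and discovery URL agree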
I need to import my partners' X509 client certificates (along with the complete chain) on all of my Service Fabric cluster nodes so that I can validate each incoming request and authenticate each partner based on the client certificate. This means that when I import a client certificate, I want the related intermediate certificate (that signed the client certificate) and the related root certificate (that signed the intermediate certificate) to be installed automatically into the appropriate cert stores, such as 'Intermediate Certificate Authorities' and 'Trusted Root Certification Authorities', in the Local Machine store.
The reason why I want the entire chain stored in the appropriate locations in the certificate store is that I intend to validate incoming client certificates using X509Chain in the System.Security.Cryptography.X509Certificates namespace in my service authentication pipeline component. X509Chain seems to depend on the 'Trusted Root Certification Authorities' store for complete root certificate validation.
There is a lot of information on how to secure a) node-to-node and b) client-to-cluster communication, such as this: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security. However, there is not much information on securing the communication between services (hosted in a Service Fabric cluster) and end-user consumers using client certificates. If I missed this information, please let me know.
I don't have a lot of partner client certificates to configure. The number of partners is well within a manageable range. Also, I cannot recreate the cluster every time there is a new partner client certificate to add.
Do I need to leverage the /ServiceManifest/CodePackage/SetupEntryPoint element in the ServiceManifest.xml file and write custom code to import partner certificates (that are stored in the key vault or elsewhere)? What are the pros and cons of this approach?
Or is there any other easy way to import partner certificates that satisfies all of my requirements? If so, please give detailed steps on how to achieve this.
Update:
I tried the suggested method of adding client certificates as described in the above link under the osProfile section. This seemed pretty straightforward.
To be able to do this, I first needed to push the related certificates (as secrets) into the associated key vault, as described at this link. That article describes (in the section "Formatting certificates for Azure resource provider use") how to format the certificate information into JSON before storing it as a secret in the key vault. This JSON has the following format for uploading pfx file bytes:
{
"dataType": "pfx",
"data": "base64-encoded-cert-bytes-go-here",
"password": "pfx-password"
}
However, since I am dealing with the public portion of client certificates, I am not dealing with pfx files but only Base64-encoded cer files on Windows (which apparently are the same as pem files elsewhere). And there is no password for the public portion of a certificate. So I changed the JSON to the following format:
{
"dataType": "pem",
"data": "base64-encoded-cert-bytes-go-here"
}
When I invoked New-AzureRmResourceGroupDeployment with the related ARM template (with appropriate changes under the osProfile section), I got the following error:
New-AzureRmResourceGroupDeployment : 11:08:11 PM - Resource Microsoft.Compute/virtualMachineScaleSets 'nt1vm' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "CertificateImproperlyFormatted",
"message": "The secret's JSON representation retrieved from
https://xxxx.vault.azure.net/secrets/ClientCert/ba6855f9866644ccb4c436bb2b7675d3 has data type pem which is not
an accepted certificate type."
}
]
}
}'
I also tried using the 'cer' data type, as shown below:
{
"dataType": "cer",
"data": "base64-encoded-cert-bytes-go-here"
}
It also resulted in the same error.
What am I doing wrong?
I'd consider importing a certificate on all nodes as described here (step 5). You can add multiple certificates to specified stores by using ARM templates that reference Azure Key Vault, as in the sketch below. Use durability level Silver/Gold to keep the cluster running during re-deployment.
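For reference, the osProfile part of such a template might look like this sketch; the vault name, certificate name, version, and target store are placeholders:
"osProfile": {
  "secrets": [
    {
      "sourceVault": {
        "id": "[resourceId('Microsoft.KeyVault/vaults', 'my-vault')]"
      },
      "vaultCertificates": [
        {
          "certificateUrl": "https://my-vault.vault.azure.net/secrets/ClientCert/<version>",
          "certificateStore": "Root"
        }
      ]
    }
  ]
}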
Be careful with adding certificates to the trusted store. If a certificate is created by a trusted CA, there's no direct need to put anything in the trusted root authorities store (as the CA certificates are already there).
Validate client certificates using X509Certificate2.Verify, unless every client has its own service instance to communicate with; a minimal sketch of such a check follows.
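A sketch of that validation, assuming the partner chains are already installed in the machine stores; the class and method names are illustrative:
using System;
using System.Security.Cryptography.X509Certificates;

static class ClientCertValidator
{
    // Returns true when the client certificate chains to a trusted root.
    public static bool Validate(X509Certificate2 clientCert)
    {
        // Verify() builds the chain against the machine's Trusted Root and
        // Intermediate stores and performs revocation checking.
        if (clientCert.Verify())
            return true;

        // For diagnostics, build the chain explicitly and inspect the failures.
        var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
        bool valid = chain.Build(clientCert);
        foreach (X509ChainStatus status in chain.ChainStatus)
            Console.WriteLine($"{status.Status}: {status.StatusInformation}");
        return valid;
    }
}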
Let's say we have several microservices. Each of them uses Keycloak authentication. We also have a load balancer, based on e.g. nginx, which has external URLs and different routes to Keycloak (for example, in OpenShift it can be https://keycloak.rhel-cdk.10.1.2.2.xip.io). But internally this address can be inaccessible. Also, having the microservice configuration depend on the load balancer URL is a bit weird. What would be more appropriate is to use the internal Keycloak auth URL inside the microservices, or even a short URI. But in this case the token will not be validated because of the issuer validation problem. How can I configure this in a good and flexible manner? Can I simply override realmInfoUrl in order to change the validation? Can I define what issuer will be used for a client-based token?
Another problem is how to better handle a multi-tenant scenario. First, on the client side, I guess we don't have any specific support for multi-tenancy; I should handle this manually by switching between different URLs/headers and using a proper Config Resolver. On the server side I need to dynamically provide a proper KeycloakDeployment instance for each case. Any other recommendations?
Unfortunately, Keycloak is too restrictive in its token validation with respect to the issuer ("iss") field in the token. It requires that the URL used to validate the token matches the URL in the "iss" field.
A while ago I opened a JIRA ticket for that problem (vote for it!): https://issues.jboss.org/browse/KEYCLOAK-5045
In case this helps anyone out during the early stages of development: you can set the Host header to the Keycloak URL that your backend service will use during validation of the token. This way, the generated token will contain your Host header URL in the issuer field. In my sandbox, I had Keycloak running on Docker at keycloak:8080 and a functional test calling Keycloak via localhost:8095 to request a token (direct grant). Before setting the Host header to keycloak:8080, the issuer field was being set to localhost:8095 and the token was failing validation with the "Invalid token issuer" error, since the backend service connects to Keycloak on keycloak:8080 and TokenVerifier.java performs the following check:
public boolean test(JsonWebToken t) throws VerificationException {
    if (this.realmUrl == null) {
        throw new VerificationException("Realm URL not set");
    } else if (!this.realmUrl.equals(t.getIssuer())) {
        throw new VerificationException("Invalid token issuer. Expected '" + this.realmUrl + "', but was '" + t.getIssuer() + "'");
    } else {
        return true;
    }
}
Reference: https://github.com/keycloak/keycloak-community/blob/master/design/hostname-default-provider.md