Wso2 - Change Resident Identity Provider to use different certificate other than wso2carbon - saml

Could you please let me know how I can change the Resident metadata value to have a different certificate other than wso2, where I have signed the metadata using a specific cert. It seems IS is signing the SAMLRequest using its own cert, so I get an invalid signature when sending a SAML Request to the Identity Provider.
I changed the certificate alias in the service provider configuration from the IS console to the appropriate certificate, but it doesn't seem to override the signing; it is still using the standard wso2 certificate.
Is there somewhere in the IS configuration where I can change the wso2carbon cert to one of my own so it will apply to the resident identity provider?

Currently, the primary keystore configured by the / element in the /repository/conf/carbon.xml file is used both for internal data encryption (encrypting data in internal data stores and configuration files) and for signing messages exchanged with external parties. However, it is a common requirement to have separate keystores for signing messages exchanged with external parties (such as SAML and OIDC id_token signing) and for encrypting information in internal data stores. In the first scenario, signing messages, the keystore certificates need to be renewed frequently. For encrypting information in internal data stores, however, the keystore certificates should not be changed frequently, because data that is already encrypted becomes unusable every time the certificate changes.
This feature will be available from IS 5.5.0 WUM and above. You can follow the steps in [1] to configure multiple keystores.
<InternalKeyStore>
    <Location>${carbon.home}/repository/resources/security/internal.jks</Location>
    <Type>JKS</Type>
    <Password>wso2carbon</Password>
    <KeyAlias>wso2carbon</KeyAlias>
    <KeyPassword>wso2carbon</KeyPassword>
</InternalKeyStore>
[1] https://docs.wso2.com/display/ADMIN44x/Configuring+Keystores+in+WSO2+Products#ConfiguringKeystoresinWSO2Products-second_keystore_internal_dataConfiguringaseparatekeystoreforencryptingdataininternaldatastores
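As an added illustration of the reasoning in this answer (example code, not WSO2's: it stands in for the two JKS keystores with two in-memory RSA key pairs, and the type and function names are made up for the example), the Go program below encrypts a stored secret under the internal key, signs an outbound message under the signing key, and then rotates the signing key. With separate keystores the stored secret stays readable and only signatures are affected; with a single shared keystore the same rotation makes the stored data unreadable.

```go
package main

import (
	"bytes"
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// Keystores stands in for the two JKS keystores of the answer (an assumption
// made for the example: two in-memory RSA key pairs instead of keystore files).
type Keystores struct {
	internal *rsa.PrivateKey // encrypts data held in internal data stores
	signing  *rsa.PrivateKey // signs SAML/OIDC messages sent to external parties
	shared   bool            // true: a single key pair serves both purposes
}

// NewKeystores builds either the single-keystore layout (shared) or the split one.
func NewKeystores(shared bool) (*Keystores, error) {
	internal, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	ks := &Keystores{internal: internal, signing: internal, shared: shared}
	if !shared {
		if ks.signing, err = rsa.GenerateKey(rand.Reader, 2048); err != nil {
			return nil, err
		}
	}
	return ks, nil
}

// EncryptInternal seals a secret for an internal data store (RSA-OAEP, SHA-256).
func (k *Keystores) EncryptInternal(plaintext []byte) ([]byte, error) {
	return rsa.EncryptOAEP(sha256.New(), rand.Reader, &k.internal.PublicKey, plaintext, nil)
}

// DecryptInternal opens a stored secret with whichever internal key is current.
func (k *Keystores) DecryptInternal(ciphertext []byte) ([]byte, error) {
	return rsa.DecryptOAEP(sha256.New(), rand.Reader, k.internal, ciphertext, nil)
}

// SignMessage signs an outbound message with the signing key (RSA-PSS, SHA-256).
func (k *Keystores) SignMessage(msg []byte) ([]byte, error) {
	digest := sha256.Sum256(msg)
	return rsa.SignPSS(rand.Reader, k.signing, crypto.SHA256, digest[:], nil)
}

// VerifyMessage checks a signature against the current signing public key, as an
// external party holding the currently published certificate would.
func (k *Keystores) VerifyMessage(msg, sig []byte) bool {
	digest := sha256.Sum256(msg)
	return rsa.VerifyPSS(&k.signing.PublicKey, crypto.SHA256, digest[:], sig, nil) == nil
}

// RotateSigningKey renews the signing key pair. In the shared layout that
// necessarily replaces the internal key too, which is the problem the answer describes.
func (k *Keystores) RotateSigningKey() error {
	fresh, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	k.signing = fresh
	if k.shared {
		k.internal = fresh
	}
	return nil
}

// Rotation encrypts, signs, rotates, and reports whether the stored data is still
// readable and whether the pre-rotation signature is still accepted.
func Rotation(shared bool) (dataReadable, oldSignatureValid bool, err error) {
	ks, err := NewKeystores(shared)
	if err != nil {
		return false, false, err
	}
	secret := []byte("constructed sample: datasource password")   // data at rest
	msg := []byte("constructed sample: <samlp:AuthnRequest/>") // outbound message
	ciphertext, err := ks.EncryptInternal(secret)
	if err != nil {
		return false, false, err
	}
	sig, err := ks.SignMessage(msg)
	if err != nil {
		return false, false, err
	}
	if err := ks.RotateSigningKey(); err != nil {
		return false, false, err
	}
	plaintext, decErr := ks.DecryptInternal(ciphertext) // fails if the internal key changed
	return decErr == nil && bytes.Equal(plaintext, secret), ks.VerifyMessage(msg, sig), nil
}

func main() {
	for _, shared := range []bool{false, true} {
		readable, oldSig, err := Rotation(shared)
		if err != nil {
			panic(err)
		}
		fmt.Printf("shared=%v: data readable after rotation=%v, old signature still valid=%v\n", shared, readable, oldSig)
	}
}
```

In both layouts the old signature stops verifying after rotation, which is what external parties expect once the new certificate is published; the difference is only in whether the data at rest survives, which is the reason the answer gives for keeping the internal keystore separate.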

Setting up HTTPS/SSL for Keycloak 17+

Does Keycloak 17 and above, powered by the Quarkus distribution, have a standalone mode?
The Keycloak documentation says that I can still use it to set up HTTPS/SSL. The documentation contains a procedure to edit the standalone.xml file, which no longer exists in this new version of Keycloak.
Does standalone mode still exist? Or is there different documentation for this new, non-deprecated version that should be used? How do I set up HTTPS/SSL then?
See https://www.keycloak.org/server/all-config?q=https
Use these parameters to customize the TLS configuration to your needs:
https-certificate-file: The file path to a server certificate or certificate chain in PEM format.
https-certificate-key-file: The file path to a private key in PEM format.
https-cipher-suites: The cipher suites to use.
https-client-auth: Configures the server to require/request client authentication.
https-key-store-file: The key store which holds the certificate information instead of specifying separate files.
https-key-store-password: The password of the key store file.
https-key-store-type: The type of the key store file.
https-port: The HTTPS port to use.
https-protocols: The list of protocols to explicitly enable.
https-trust-store-file: The trust store which holds the certificate information of the certificates to trust.
https-trust-store-password: The password of the trust store file.
https-trust-store-type: The type of the trust store file.
Container deployment also supports TLS; see Keycloak Docker HTTPS required.
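As an added example (not from the answer above and not Keycloak project code), the Go program below does the pre-flight checks worth running on the two files before handing them to --https-certificate-file and --https-certificate-key-file: the server certificate must be the first PEM block, must currently be valid and name the host, and the private key must belong to it. The hostname sso.example.test, the helper names and the generated self-signed material are constructed for the example, and the key file is assumed to be a PKCS#8 "PRIVATE KEY" block.

```go
package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"fmt"
	"math/big"
	"time"
)

// CheckTLSMaterial pre-checks what --https-certificate-file and
// --https-certificate-key-file will receive: the server certificate must be the
// first PEM block, be valid now, name the host, and belong to the private key.
func CheckTLSMaterial(certPEM, keyPEM []byte, hostname string, now time.Time) error {
	block, rest := pem.Decode(certPEM)
	if block == nil || block.Type != "CERTIFICATE" {
		return errors.New("certificate file does not start with a PEM CERTIFICATE block")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return fmt.Errorf("parse server certificate: %w", err)
	}
	extra := 0 // any further blocks are the chain (intermediates) and must follow the leaf
	for block, rest = pem.Decode(rest); block != nil; block, rest = pem.Decode(rest) {
		extra++
	}
	if leaf.IsCA && extra > 0 {
		return errors.New("first certificate is a CA; the server certificate must come first")
	}
	if now.Before(leaf.NotBefore) || now.After(leaf.NotAfter) {
		return fmt.Errorf("server certificate is not valid at %s", now.Format(time.RFC3339))
	}
	if err := leaf.VerifyHostname(hostname); err != nil {
		return fmt.Errorf("server certificate does not cover %q: %w", hostname, err)
	}
	keyBlock, _ := pem.Decode(keyPEM)
	if keyBlock == nil || keyBlock.Type != "PRIVATE KEY" { // PKCS#8 assumed for the example
		return errors.New("key file is not a PEM PKCS#8 PRIVATE KEY block")
	}
	parsed, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
	if err != nil {
		return fmt.Errorf("parse private key: %w", err)
	}
	signer, ok := parsed.(crypto.Signer)
	certPub, hasEqual := leaf.PublicKey.(interface{ Equal(crypto.PublicKey) bool })
	if !ok || !hasEqual || !certPub.Equal(signer.Public()) {
		return errors.New("private key does not match the server certificate's public key")
	}
	return nil
}

// SampleMaterial generates a constructed self-signed ECDSA certificate and key for
// hostname in the PEM layout Keycloak expects (a stand-in for real files).
func SampleMaterial(hostname string) (certPEM, keyPEM []byte, err error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: hostname},
		DNSNames:     []string{hostname},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	keyDER, err := x509.MarshalPKCS8PrivateKey(key)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}),
		pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: keyDER}), nil
}

func main() {
	certPEM, keyPEM, err := SampleMaterial("sso.example.test")
	if err != nil {
		panic(err)
	}
	fmt.Println("matching pair:", CheckTLSMaterial(certPEM, keyPEM, "sso.example.test", time.Now()))
	_, otherKey, _ := SampleMaterial("sso.example.test")
	fmt.Println("wrong key:", CheckTLSMaterial(certPEM, otherKey, "sso.example.test", time.Now()))
}
```

The key-match check matters most in practice: Keycloak will refuse to start on a mismatched pair, but only after it has been deployed, whereas this check can run in the pipeline that renews the certificate.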

What is the best practice for certificates used to protect an Azure App in client credential flow

Many of my (confidential) apps are talking to each other via the client credential flow.
They request a token from the Azure Identity platform and use this token to authenticate against another app.
A while ago I used client secrets to do so, but later I read that this is not recommended for production environments.
For this reason I changed to self-signed certificates that are valid for a longer time.
Those certificates are generated by myself with Azure Keyvault.
However, also this is not recommended.
Microsoft states that in production environments you should use certificates that are signed by an official CA.
If I now use Let's Encrypt, the certificate will expire every three months, which is also not such a nice solution.
My questions:
Why is the client secret not recommended in production environments?
Why is the self-signed certificate a problem? I do understand this in matters of HTTPS, but where is the security breach if it's used for the client credential flow? In my case I am the owner of the app and the app registration.
Do I need to buy a certificate that is one-year valid to do it "the right way"?
Do you have any source of best practices here?
• Client secrets include application credentials, SSH keys, API keys, database passwords, encryption keys, connection strings and so on, used to connect to various resources and access the data or functionality needed for the designated purpose of that application. Thus, if these are breached, they can put your application at great risk of compromise. Also, the client secret generated in Azure AD and used in APIs for connecting to Azure AD for authentication and authorization purposes is listed in unencrypted form in the API code itself. Though we have an option to store that secret in a key vault and refer to it through either managed identity or RBAC assignments, their credentials too can fall into the wrong hands and leave the application vulnerable if the managed identity is user assigned, or even if the access scope of the secret is not well defined according to the specific need. Thus, a client secret is not recommended for use in a production API.
• In the client credentials flow, applications are directly granted permissions by an administrator to perform a certain action regarding the API to be called, via a certificate or federated credentials. Thus, when using a self-signed certificate in the client credentials grant scenario, the administrator has granted the daemon app requesting access to the other API all the required privileges regarding accessibility of code, API, permissions, data, etc., which can result in poor validation and misuse, as it is very easy to generate a certificate's key pair without reasonable entropy. Also, protecting the private key of the key pair appropriately for its use, and strong validation of the same, is not promised by a self-signed certificate, which is why it is not recommended in the client credentials flow.
• For best practices regarding web app service deployment, please refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/app-service/security-recommendations#general
It explains the best security recommendations for deploying a web app service.

Is Self-Signed IdentityServer4 signing credential good enough in production?

We are using IdentityServer4 and our version loads the signing key from a PFX file in file system or from the windows certificate store. Using the certificate works. The question is - which certificate issuer should be used in production?
Is a certificate from a public CA recommended? Or is it enough to have a self-signed certificate (without a CA at all) such as it can be created with IIS Manager?
In our tests we have found that the client could still validate the signature in the access token, even if the signing certificate would not have a valid CA chain on the client.
In the docs, it says that you can also use raw key material instead of a certificate:
http://docs.identityserver.io/en/latest/topics/crypto.html#token-signing-and-validation
In this scenario there would be no CA chain whatsoever.
That leads me to the assumption that when the client loads the public signing key (via the HTTP(S) endpoint), the CA chain information might not be passed anyway. Is that right? Through the loading mechanism via HTTPS you also have a combined security mechanism.
So my conclusion is that for the signing credential a self-signed cert is just as safe as one from VeriSign. Can this be confirmed?
There is no certificate involved in signing and verifying the tokens. Only a private and public key (RSA or ECDSA key).
However, a certificate can be useful to "import/transport" the keys into .NET. So, because of that, we don't care about who issued the certificate.
When importing the key, one approach is to bundle the certificate that holds the public key together with the private key and store them in a PKCS#12 file (.pfx/.p12 extension). Then load that file into .NET. Before .NET 5, working with keys was a bit hard.
The more important thing is that you can manage and deploy the private key in a secure way and that it is persisted over time.
Optionally, you can add support for key-rotation.
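To make the point of this answer concrete (an added example, not IdentityServer4 code): the Go program below signs a compact JWS with a raw RSA private key and verifies it using only the n and e values a client would fetch from the jwks endpoint, with no certificate involved at any step. The issuer name, kid values, claims and helper names are constructed for the example. Verifying the same token under a freshly generated key that carries the same kid shows why the answer stresses persisting the key: a consumer that does not hold the key that actually signed the token cannot validate it, whatever certificate may have wrapped the key on disk.

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"math/big"
	"strings"
)

// Jwk is what a client fetches from jwks_uri: modulus and exponent, no certificate.
type Jwk struct {
	Kty string `json:"kty"`
	Alg string `json:"alg"`
	Kid string `json:"kid"`
	N   string `json:"n"`
	E   string `json:"e"`
}

func b64url(b []byte) string { return base64.RawURLEncoding.EncodeToString(b) }

// decodeParts base64url-decodes every input, stopping at the first malformed one.
func decodeParts(parts ...string) ([][]byte, error) {
	out := make([][]byte, 0, len(parts))
	for _, p := range parts {
		b, err := base64.RawURLEncoding.DecodeString(p)
		if err != nil {
			return nil, fmt.Errorf("base64url: %w", err)
		}
		out = append(out, b)
	}
	return out, nil
}

// PublishJwk exports the public key material the token issuer advertises.
func PublishJwk(pub *rsa.PublicKey, kid string) Jwk {
	return Jwk{Kty: "RSA", Alg: "RS256", Kid: kid, N: b64url(pub.N.Bytes()), E: b64url(big.NewInt(int64(pub.E)).Bytes())}
}

// SignRS256 produces a compact JWS (header.payload.signature) with the raw private key.
func SignRS256(priv *rsa.PrivateKey, kid string, claims map[string]interface{}) (string, error) {
	header, _ := json.Marshal(map[string]string{"alg": "RS256", "typ": "JWT", "kid": kid})
	payload, err := json.Marshal(claims)
	if err != nil {
		return "", err
	}
	signingInput := b64url(header) + "." + b64url(payload)
	digest := sha256.Sum256([]byte(signingInput))
	sig, err := rsa.SignPKCS1v15(rand.Reader, priv, crypto.SHA256, digest[:])
	if err != nil {
		return "", err
	}
	return signingInput + "." + b64url(sig), nil
}

// VerifyRS256 checks a token against one published JWK. The only key inputs are
// n and e, so there is no certificate and therefore no chain to build.
func VerifyRS256(token string, jwk Jwk) (map[string]interface{}, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("malformed compact JWS")
	}
	d, err := decodeParts(parts[0], parts[1], parts[2], jwk.N, jwk.E)
	if err != nil {
		return nil, err
	}
	var header struct{ Alg, Kid string } // encoding/json matches "alg"/"kid" case-insensitively
	if err := json.Unmarshal(d[0], &header); err != nil {
		return nil, err
	}
	if header.Alg != "RS256" || header.Kid != jwk.Kid { // the token never chooses how it is verified
		return nil, fmt.Errorf("alg or kid does not match the published key")
	}
	pub := &rsa.PublicKey{N: new(big.Int).SetBytes(d[3]), E: int(new(big.Int).SetBytes(d[4]).Int64())}
	digest := sha256.Sum256([]byte(parts[0] + "." + parts[1]))
	if err := rsa.VerifyPKCS1v15(pub, crypto.SHA256, digest[:], d[2]); err != nil {
		return nil, fmt.Errorf("signature does not verify under the published key")
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(d[1], &claims); err != nil {
		return nil, err
	}
	return claims, nil
}

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // stands in for the signing credential
	if err != nil {
		panic(err)
	}
	token, err := SignRS256(key, "2024-01", map[string]interface{}{"iss": "https://sts.example.test", "sub": "alice"})
	if err != nil {
		panic(err)
	}
	fmt.Println(VerifyRS256(token, PublishJwk(&key.PublicKey, "2024-01")))
	other, _ := rsa.GenerateKey(rand.Reader, 2048) // key replaced without keeping the old one
	fmt.Println(VerifyRS256(token, PublishJwk(&other.PublicKey, "2024-01")))
}
```

The verifier pins alg and kid before looking at the signature; the kid is also what makes the optional key rotation the answer mentions workable, since the jwks document can carry the old and the new key side by side until tokens signed with the old one have expired.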

How can I specify a SHA-256 algorithm for signing the certificate?

I built an Apereo CAS demo server with a WAR overlay (with which different services should be authenticated).
I have set up delegated authentication with SAML2 (for integrating with the Italian SPID system), following this guide: https://apereo.github.io/2019/02/25/cas61-delegate-authn-saml2-idp/
I copied the SP metadata to an IDP configuration.
This IDP is a test IDP instance, meant to check whether the SP metadata and the SAML request are OK.
When I run the server and browse it, I can see the second login form, but when I click it, the validation checks show some errors.
Among them are the following:
"The certificate must be signed with a valid algorithm."
"The certificate must not be signed using the SHA1 algorithm (deprecated)."
I am pretty sure that SHA-256 would be fine with the IDP validator.
In the documentation https://apereo.github.io/cas/development/integration/Delegate-Authentication-SAML.html I see that the following parameter should be in charge of specifying the signing certificate algorithm:
cas.authn.pac4j.saml[].signature-algorithms:
Its description is
# Required: false
# Type: java.util.List<String>
# Owner: org.apereo.cas.configuration.model.support.pac4j.saml.Pac4jSamlClientProperties
# Module: cas-server-support-pac4j-webflow
# Collection of signing signature algorithms, if any, to override the global defaults.
# cas.authn.pac4j.saml[].signature-algorithms:
I don't know what values are allowed there. I googled a lot but with no luck.
I have tried to set that list like this:
cas.authn.pac4j.saml[0].signature-algorithms=["sha256WithRSAEncryption"]
cas.authn.pac4j.saml[0].signature-algorithms=["SHA256withRSA"]
but, after clicking the SAML login button, I always get the following error:
org.pac4j.saml.exceptions.SAMLException: org.pac4j.saml.exceptions.SAMLException: Could not determine the signature parameters
at org.pac4j.saml.crypto.DefaultSignatureSigningParametersProvider.build(DefaultSignatureSigningParametersProvider.java:60)
...
How can I specify a SHA-256 algorithm for signing the certificate?

Importing a client certificate (with chain) on all service fabric cluster nodes for end user communication

I have a need to import my partners' X509 client certificates (along with complete chain) on all of my service fabric cluster nodes so that I can validate each incoming request and authenticate each partner based on the client certificate. This means when I import a client certificate, I want the related intermediate certificate (that signed the client certificate) and related root certificate (that signed the intermediate certificate) to be installed automatically into appropriate cert stores such as 'Intermediate Certificate Authorities' and 'Trusted Root Certification Authorities' in Local Machine store.
The reason I want the entire chain stored in the appropriate locations in the certificate store is that I intend to validate incoming client certificates using X509Chain in the System.Security.Cryptography.X509Certificates namespace in my service authentication pipeline component. X509Chain seems to depend on the 'Trusted Root Certification Authorities' store for complete root certificate validation.
There is a lot of information on how to secure a) node-to-node and b) client-to-cluster communication, such as this: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security. However, there is not much information on securing the communication between services (hosted in the service fabric cluster) and the end-user consumers using client certificates. If I missed this information, please let me know.
I don't have a lot of partner client certificates to configure. The number of partners is well within a manageable range. Also, I cannot recreate the cluster every time there is a new partner client certificate to add.
Do I need to leverage the /ServiceManifest/CodePackage/SetupEntryPoint element in the ServiceManifest.xml file and write custom code to import partner certificates (that are stored in the key vault or elsewhere)? What are the pros and cons of this approach?
Or is there any other easy way to import partner certificates that satisfies all of my requirements? If so, please provide detailed steps on how to achieve this.
Update:
I tried the suggested method of adding client certificates as described in the above link under the osProfile section. This seemed pretty straightforward.
To be able to do this, I first needed to push the related certificates (as secrets) into the associated key vault as described at this link. That article describes (in the section "Formatting certificates for Azure resource provider use") how to format the certificate information as JSON before storing it as a secret in the key vault. The JSON has the following format for uploading pfx file bytes:
{
  "dataType": "pfx",
  "data": "base64-encoded-cert-bytes-go-here",
  "password": "pfx-password"
}
However, since I am dealing with the public portion of client certificates, I am not dealing with pfx files but only base64 cer files on Windows (which apparently are the same as pem files elsewhere). And there is no password for the public portion of certificates. So I changed the JSON format to the following:
{
  "dataType": "pem",
  "data": "base64-encoded-cert-bytes-go-here"
}
When I invoked New-AzureRmResourceGroupDeployment with the related ARM template with appropriate changes under the osProfile section, I got the following error:
New-AzureRmResourceGroupDeployment : 11:08:11 PM - Resource Microsoft.Compute/virtualMachineScaleSets 'nt1vm' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "CertificateImproperlyFormatted",
"message": "The secret's JSON representation retrieved from
https://xxxx.vault.azure.net/secrets/ClientCert/ba6855f9866644ccb4c436bb2b7675d3 has data type pem which is not
an accepted certificate type."
}
]
}
}'
I also tried using the 'cer' data type as shown below:
{
  "dataType": "cer",
  "data": "base64-encoded-cert-bytes-go-here"
}
It also resulted in the same error.
What am I doing wrong?
I'd consider importing a certificate on all nodes as described here (Step 5). You can add multiple certificates to specified stores by using ARM templates that reference Azure Key Vault. Use durability level Silver/Gold to keep the cluster running during re-deployment.
Be careful with adding certificates to the trusted store.
If a certificate is created by a trusted CA, there's no direct need to put anything in the trusted root authorities store (as they are already there).
Validate client certificates using X509Certificate2.Verify, unless every client has its own service instance to communicate with.