WS-Federation: Update Certificate - single sign-on

Good morning fellas!!
I have a question regarding WS-Federation.
We have a partner (IdP-initiated and SP-hosted) that is configured with WS-Federation.
They use ADFS and we use OpenAM (we are the SP).
Everything is set up fine and running smoothly.
But their certificate is going to expire soon and we have to update it.
From what I have read, here is what we are supposed to do:
Export the SP and IdP metadata, replace the X509Certificate value with the new one, then import those configurations back.
Is that correct?
I am using OpenAM, which has the ssoadm command mapped to a GUI.
So I exported both metadata files and replaced the certificate value, and we will re-import them with the partner at the same time.
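For reference, here is roughly how I did the swap: a quick Python sketch that assumes the exported metadata uses the standard ds:X509Certificate element (the file names are made up).

    import xml.etree.ElementTree as ET

    DS_NS = "http://www.w3.org/2000/09/xmldsig#"

    # base64 body of the new certificate, without the PEM header/footer lines
    new_cert_b64 = open("new_cert.b64").read().strip()

    tree = ET.parse("idp_metadata_export.xml")
    for cert_el in tree.iter(f"{{{DS_NS}}}X509Certificate"):
        cert_el.text = new_cert_b64  # swap every occurrence (signing and encryption)

    tree.write("idp_metadata_updated.xml", xml_declaration=True, encoding="UTF-8")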
I have never made this change before, and I want to make sure that it is the proper way and that I am not missing anything!
Thank you for the help :) :)

In this situation you will find that ADFS goes through interim certificate-rollover phases, i.e.:
Old certificate
Old and new
New
During the middle phase, you need to fetch the ADFS metadata again and re-import it into OpenAM.
That way there will be no downtime, as both certificates are catered for.
If you wait until the third phase, there will be some downtime until you have completed the new import.
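If you want to watch for that middle phase, a quick sketch like this lists the certificates the partner's ADFS currently publishes (the host name is an example; the metadata path is the standard ADFS one, and the third-party cryptography package does the certificate parsing):

    import base64, urllib.request
    import xml.etree.ElementTree as ET
    from cryptography import x509

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"
    DS_NS = "http://www.w3.org/2000/09/xmldsig#"

    url = "https://adfs.partner.example/FederationMetadata/2007-06/FederationMetadata.xml"
    root = ET.fromstring(urllib.request.urlopen(url).read())

    # During the rollover's middle phase you should see two signing certificates here.
    for kd in root.iter(f"{{{MD_NS}}}KeyDescriptor"):
        use = kd.get("use", "any")
        for el in kd.iter(f"{{{DS_NS}}}X509Certificate"):
            cert = x509.load_der_x509_certificate(base64.b64decode(el.text))
            print(use, cert.subject.rfc4514_string(), "expires", cert.not_valid_after)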

Related

Installing SSL Certificates for Wazuh-Dashboard

Is it possible to have Wazuh Manager served through custom SSL certificates? The wazuh-certs-tool gives you a self-signed cert, and every other way I have tried to serve it through SSL has failed.
The closest I've gotten to making this work: I had the dashboard served with a custom SSL certificate and agents connecting to it successfully and providing a heartbeat, but there were zero log flows or events. In that state, I saw API calls coming from what appeared to be a Java instance, erroring out and complaining about the certificate it received. I also saw a keystore file located at /etc/wazuh-indexer. Do I need to add the root CA cert there as well?
It seems that the certificates your indexer expects do not match the certificates on your manager or the dashboard.
If you follow the normal installation guide, it shows how and where to place the certificates that are created using the wazuh-certs-tool. But certificates can be created from any other source; as long as they contain the expected information, they will work. You can check that information here.
I would recommend you follow the steps in the installation guide from scratch, to make sure you copy each expected certificate into its place and that the configuration files for your indexer, dashboard, and manager point at the correct files. All you would need to change is the creation of the certificates, so that you use your own custom certs.
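If you want to check a specific pair by hand first, a small sketch like this verifies that a node certificate was really signed by the root CA you deployed (paths are examples, the cryptography package is required, and it assumes RSA certificates as typically produced by the wazuh-certs-tool):

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    root = x509.load_pem_x509_certificate(
        open("/etc/wazuh-indexer/certs/root-ca.pem", "rb").read())
    node = x509.load_pem_x509_certificate(
        open("/etc/wazuh-indexer/certs/indexer.pem", "rb").read())

    print("issuer matches root subject:", node.issuer == root.subject)
    root.public_key().verify(        # raises InvalidSignature on a mismatch
        node.signature,
        node.tbs_certificate_bytes,
        padding.PKCS1v15(),
        node.signature_hash_algorithm,
    )
    print("signature OK")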
In case of further doubt, do not hesitate to ask.

Configuring Shibboleth Metadata File

We have recently migrated to a new hosting environment and have installed a fresh instance of Shibboleth. When we generate SP metadata files, the URLs are non-secure (i.e. http) even though the URL used to generate the metadata uses https.
When using the test connection from our own Azure AD system, we see the obvious error: "The reply URL specified in the request does not match the reply URLs configured for the application:"
I have limited knowledge of configuring the system beyond working on shibboleth2.xml and attribute-map.xml, so I would be very grateful if anyone can point me in the right direction to fix this.
I'm not sure if you managed to configure it, but I'm currently working on this as well, and I think I can help.
The ReplyURL you need to provide in the Azure Portal is the reply URL that accepts the authentication reply message from the identity provider.
In the case of Shibboleth it is:
http[s]://yoursitename/Shibboleth.SSO/Auth/Saml
So if your webpage is for instance:
https://localhost/Foo
The replyURL should be:
https://localhost/Shibboleth.SSO/Auth/Saml
Notice that the page "Foo" is not in the replyURL.
After the authentication, the browser should send the IdP reply to https://localhost/Shibboleth.SSO/Auth/Saml, after which Shibboleth should redirect you back to https://localhost/Foo.
At least that's the default behaviour.
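Also, to check what your generated metadata actually advertises (the http vs https issue you mentioned), a small sketch like this prints the endpoint URLs from the SP metadata; the file name is an example, and the same XML is what the SP serves from its metadata handler:

    import xml.etree.ElementTree as ET

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

    root = ET.parse("sp-metadata.xml").getroot()
    # Each AssertionConsumerService Location is a reply URL the SP advertises.
    for acs in root.iter(f"{{{MD_NS}}}AssertionConsumerService"):
        print(acs.get("Binding"), "->", acs.get("Location"))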

How to bootstrap certificates for LCM to reference for signature verification settings?

WMF 5.1 includes new functionality to allow signing of MOF documents and DSC Resource modules (reference). However, this seems very difficult to implement in reality -- or I'm making it more complicated than it is...
My scenario is VMs in Azure, and I'd like to leverage Azure Automation for the Pull DSC Server; however, I see this applying on-premises too. The problem is that the certificate used to sign the MOF configurations and/or modules needs to be placed on the machine before it fetches and applies the configuration; otherwise the configuration will fail because the certificate isn't trusted or present on the machine.
I tried using Azure KeyVault to bootstrap the certificate (just the public key, because that's my understanding of how signing works), and that fails using Add-AzureRmVMSecret because the CertificateUrl parameter expects a full certificate with the public/private key pair to install. In an ideal world, this would be the solution, but that's not the case...
Other ideas, again in this context, would be to upload the cert to blob storage and use a CustomScriptExtension to pull down the cert and install it into the LocalMachine store, but that feels nasty as well because, ideally, that script should be signed too, and that puts us back in the same spot.
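For illustration, the script such a CustomScriptExtension would run can pull just the public part straight from KeyVault instead of blob storage. A rough sketch (the vault URL and certificate name are made up, and it needs the azure-identity and azure-keyvault-certificates packages):

    import subprocess
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.certificates import CertificateClient

    client = CertificateClient(
        vault_url="https://my-vault.vault.azure.net",  # made-up vault
        credential=DefaultAzureCredential(),           # e.g. the VM's managed identity
    )
    cert = client.get_certificate("mof-signing-cert")  # made-up certificate name

    with open(r"C:\Temp\mof-signing.cer", "wb") as f:
        f.write(cert.cer)  # DER-encoded public portion only; no private key involved

    # Trust it machine-wide so the LCM's signature validation can find it.
    subprocess.run(
        ["certutil", "-addstore", "Root", r"C:\Temp\mof-signing.cer"],
        check=True,
    )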
I suppose another idea would be to first PUSH a configuration that downloaded and installed certificates only but that doesn't sound great either.
Last option would be to rely on an AD GPO or something similar to potentially push the certificate first...but, honestly, trying to move away from much of that if/when possible...
Am I off-base on this? It seems like this should be a solvable problem -- just looking for at least one "good" way of doing it.
Thanks
David Jones has quite a bit of experience dealing with this issue in an on-premises environment, but as you stated, the same concepts should apply to Azure. Here is a link to his blog. This is a link to his GitHub site with a PKITools module that he created. If all else fails you can reach out to him on Twitter.
While it's quite easy to populate a pre-booted image with public certificates, it's not possible (that I have found) to populate the private key.
DSC would require the private key to decrypt the passwords.
The most common tactic people blog about is to use the unattend file to script the import of a PFX. The issue there is that you have to leave the password for the PFX in plain text. Perhaps that is OK in your environment.
The other option requires a more complicated setup: use a simple DSC configuration or GPO to auto-enroll a unique certificate, then have the system, via a first-boot script or a DSC custom resource, tickle an API (like Polaris) that triggers a DSC script which uses PKITools or another script to fetch the machine's public certificate. Then have that API push a new DSC config (or pull settings) to the machine.
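A rough sketch of that "tickle an API" step, just to show the shape of it (the registration endpoint and payload are hypothetical, not a real Polaris API; ssl.enum_certificates is Windows-only):

    import json, ssl, urllib.request

    # Read the auto-enrolled public certificate(s) from the LocalMachine\My store.
    pem_certs = [
        ssl.DER_cert_to_PEM_cert(der)
        for der, encoding, trust in ssl.enum_certificates("MY")
        if encoding == "x509_asn"
    ]

    # Announce them to the registration API, which can then build/push a config.
    req = urllib.request.Request(
        "https://dsc-registrar.example/api/register",   # hypothetical endpoint
        data=json.dumps({"certificates": pem_certs}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)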

How to configure OpenAM as an identity provider (IdP) to test SAML-based SSO

I am trying to configure OpenAM as an identity provider to test my SAML-based service provider application.
I have searched a lot and looked through the OpenAM documentation. There are lots of things supported by OpenAM which I probably do not need at this moment, and I don't wish to read the whole documentation, which would take a lot of time covering things I do not want to test right now. I even saw chapter 9, "Managing SAML 2.0 SSO", at http://docs.forgerock.org/en/openam/10.0.0/admin-guide/index/index.html, but it requires a lot of things to be configured first.
Is there a quick start guide for testing it as a SAML-based IdP?
EDIT
It doesn't have to be quick; detailed is also fine. But I want OpenAM as the identity provider. The SP is an application hosted on Jetty which we have developed. Also, tell me what changes I have to make on the SP side, e.g. which URLs of the application should respond and with what.
There is no one-size-fits-all answer to your question really. Setting up SAMLv2 federation largely depends on the actual SP implementation; some SPs can work with SAML metadata, some don't.
The simplest way to set up federation between two OpenAM instances for reference would be something like:
Run the Create Hosted IdP wizard on node1
Run the Create Hosted SP wizard on node2
On both nodes remove the persistent NameID-Format, so both will have transient at the top of the list
Run the Register Remote SP wizard on node1, with URL: node2/openam/saml2/jsp/exportmetadata.jsp
Run the Register Remote IdP wizard on node2, with URL: node1/openam/saml2/jsp/exportmetadata.jsp
On node2, in the Hosted SP settings, set the transient user to "anonymous"
After all this you can test Federation by using:
/openam/spssoinit?metaAlias=/sp&idpEntityID=node1_entityid on node2
/openam/idpssoinit?metaAlias=/idp&spEntityID=node2_entityid on node1
I've used the default metaAlias values, but those should be visible on the console pages. Similarly by downloading the metadata you can see the actual entity IDs for the given entities.
Based on this, you should now see that with an OpenAM IdP you can at least test SAML support using the idpssoinit URL (if your SP supports unsolicited responses); the other way around, how you actually trigger a SAML authentication pretty much depends on your SP implementation.
This seems like a simple setup.
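If you want to grab the entity IDs mentioned above without clicking through the console, a small sketch like this works (host names are examples; exportmetadata.jsp is the standard OpenAM endpoint):

    import urllib.request
    import xml.etree.ElementTree as ET

    for url in (
        "https://node1.example:8443/openam/saml2/jsp/exportmetadata.jsp",
        "https://node2.example:8443/openam/saml2/jsp/exportmetadata.jsp",
    ):
        # The metadata root is an EntityDescriptor whose entityID attribute
        # is the value to plug into the spssoinit/idpssoinit URLs.
        root = ET.fromstring(urllib.request.urlopen(url).read())
        print(url, "->", root.get("entityID"))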

Does STS Need the RP Certificate Installed?

I have a custom STS built with WIF. If I have the Relying Party and STS on the same server, I can get it working.
However, I'm getting ID4036 errors when using a remote machine. As I dug into it, I found that by default my STS was always encrypting the outbound token with a local certificate rather than the certificate requested by the Relying Party. One solution would be to install the certificate used by the Relying Party (public key only) on the STS and code the STS to use that certificate.
However, that creates a problem as I add other Relying Parties on different servers.
Here's an Example:
STS on MySTS - signs tokens with SigningCert.
Relying Party on MyWebServer01 - wants to encrypt/decrypt with MyWebServer01Cert (owns public / private key)
I can install MyWebServer01Cert on MySTS and set the STS to use that for encrypting tokens, and everything should work. However, let's say I want to add a Relying Party application on MyWebServer02. It will not work unless I install the public and private key of MyWebServer01Cert there too.
I would think that you could simply transmit the public key to the STS so that each RP can use its own, somewhat like SSL. Is this not the case?
Any help / suggestions would be appreciated.
First of all, for encryption only the public key is needed. You actually never want to give away the private key of a certificate.
If you use the WS-Federation protocol (usually used for STS scenarios on web sites), the request to the STS is not sent by your RP server but by the user's browser. I doubt that you can tell the browser to use the public key of the previous site for communication over https. The encrypted token, on the other hand, is decrypted by the RP server (meaning that the RP server must know the private key of the certificate used to encrypt the token).
Taking these circumstances into account, I am pretty sure that the public key of the RP's certificate must be present on the STS and cannot be included in the request. Everything else would probably be a dirty hack only working with your custom STS (e.g. including the public key as a parameter).
At least that holds for "passive sign-in" scenarios. For WCF you could attach your server's certificate as a client certificate to the request, but I haven't tried this myself.
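To illustrate the first point (encryption needs only the public key, decryption needs the private key), here is a minimal sketch using Python's cryptography package; it only demonstrates the principle, not what WIF does internally:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    rp_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    rp_public = rp_private.public_key()   # this is all the STS ever needs

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )
    token = rp_public.encrypt(b"token for MyWebServer01", oaep)   # STS side
    print(rp_private.decrypt(token, oaep))                        # RP side only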