I have a Service Fabric cluster deployed (it uses thumbprint-based, not common-name-based, certificates), whose cluster certificate is close to expiring. I am a bit confused about the process for adding a new certificate and performing the rollover.
There is an article that sheds some light on it:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-rollover-cert-cn
It mentions that using common names makes the process easier, but it doesn't explain how common-name-based rollover is easier.
I have also seen this command:
Add-AzServiceFabricClusterCertificate - this can create the certificate in Key Vault and update the Service Fabric cluster too.
My questions are:
Is this a replacement for the process described in the article above?
Can this be used for certificate rollover?
Once the new certificate is added, is the rollover automatic?
https://learn.microsoft.com/en-us/powershell/module/az.servicefabric/add-azservicefabricclustercertificate?view=azps-2.0.0
An update: this command (Add-AzServiceFabricClusterCertificate) did the trick. It updated the Service Fabric cluster/VM scale set and added a secondary certificate. I was able to swap the secondary and primary certificates as a second operation.
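For reference, the add step looks roughly like this (the resource group, cluster name, and Key Vault secret URL below are placeholders):

    # Adds the Key Vault certificate as a secondary cluster certificate and installs it
    # on the VM scale set nodes.
    Add-AzServiceFabricClusterCertificate `
        -ResourceGroupName "my-sf-rg" `
        -Name "my-sf-cluster" `
        -SecretIdentifier "https://my-keyvault.vault.azure.net/secrets/sf-cluster-cert/<version>"

    # After swapping primary and secondary, the old certificate (now the secondary)
    # can be removed:
    Remove-AzServiceFabricClusterCertificate `
        -ResourceGroupName "my-sf-rg" `
        -Name "my-sf-cluster" `
        -Thumbprint "<old certificate thumbprint>"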
Is it possible to have the Wazuh Manager served through custom SSL certificates? The wazuh-certs-tool gives you a self-signed cert, and every other way I've tried to get it served through SSL has failed.
The closest I've gotten to making this work is having the dashboard served by a custom SSL certificate; agents connected to it successfully and provided a heartbeat, but there were zero log flows or events. In that state, I saw API calls coming from what appeared to be a Java instance, erroring out and complaining about the certificate it received. I also saw a keystore file located at /etc/wazuh-indexer. Do I need to add the root CA cert there as well?
It seems that your indexer's expected certificates do not match the certificates in your manager or your dashboard.
If you follow the normal installation guide, it shows how and where to place the certificates that are created using the wazuh-certs-tool. But certificates can be created from any other source, as long as they contain the expected information; you can check what is expected here.
I would recommend you follow the installation steps in the installation guide from scratch, to make sure you copy each expected certificate into its place and that the configuration files for your indexer, dashboard, and manager point at the correct files. The only step you would need to change is the creation of the certificates, so that you use your own custom certs.
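One quick way to check whether they match (the file names below are placeholders for wherever your certificates actually live) is to verify each component's certificate against the root CA you deployed:

    # Each cert should chain to the same root CA:
    openssl verify -CAfile root-ca.pem wazuh-indexer.pem
    openssl verify -CAfile root-ca.pem wazuh-manager.pem
    openssl verify -CAfile root-ca.pem wazuh-dashboard.pem

    # Inspect subject, issuer and validity of the cert a component is actually presenting:
    openssl x509 -in wazuh-indexer.pem -noout -subject -issuer -dates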
In case of further doubt, do not hesitate to ask.
We have a bunch of Windows server applications (written in C#) that currently handle secrets as follows:
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing Hashicorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt with storing the secret in Vault in the KV engine, and just grabbing it in our apps - that takes that certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities", there's no Nomad, Chef, Terraform, etc. We have Windows machines, in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
Edit - someone on Reddit suggested that this has a simple solution if our apps are all 1-to-1 with the Windows domain accounts they run under, because then we could just use Kerberos authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially, you create one "trusted platform" (trusted by your key vault service).
Your service can still have its own identity, but delegate to the gMSA when you want to retrieve the secrets.
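A rough sketch of the Active Directory side (names are placeholders, and the domain needs a KDS root key before gMSAs can be created):

    # One-time per domain, if not already done:
    # Add-KdsRootKey -EffectiveImmediately

    # Create the gMSA and allow a group of app servers to retrieve its password.
    New-ADServiceAccount -Name "VaultReader" `
        -DNSHostName "VaultReader.contoso.local" `
        -PrincipalsAllowedToRetrieveManagedPassword "AppServers"

    # On each app server (requires the AD PowerShell RSAT module):
    Install-ADServiceAccount -Identity "VaultReader"
    Test-ADServiceAccount -Identity "VaultReader"   # should return True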
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs; each corresponds to a security policy/profile, so any machine that holds a given certificate can authenticate and retrieve the secrets it should have access to (rough sketch of the Vault-side setup below).
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom auth plugins can't be loaded into HCP Vault, of course.
Azure auth won't work; it requires Managed Identities, which you can't assign to on-prem machines. Otherwise this might have been a great fit.
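For anyone following the same path, the Vault side of the TLS cert auth setup looks roughly like this (a sketch only; the role, policy, and file names are invented for the example):

    # Enable the TLS certificate auth method.
    vault auth enable cert

    # Bind a CA (or an individual certificate) to a policy: any client presenting a cert
    # that chains to this CA can log in and receives the "app-profile-a" policy.
    vault write auth/cert/certs/app-profile-a display_name="app-profile-a" policies="app-profile-a" certificate=@profile-a-ca.pem ttl=1h

    # From a machine that holds the matching client cert and key:
    vault login -method=cert -client-cert=profile-a-client.pem -client-key=profile-a-client-key.pem name=app-profile-a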
I need to rotate admin.conf for a cluster so that old users who used it as their kubeconfig are no longer allowed to perform actions.
How can I do that?
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As mdaniel wrote in his comment:
the answer to your question is "rekey the entire apiserver CA hierarchy" or wait for admin.conf cert to expire, because those admin.conf credentials are absolute. Next time, use the provided oidc mechanism for user auth.
For a kubeadm-based Kubernetes cluster, please also refer to Certificate Management with kubeadm. For manual rotation of the CA certificates, please refer to this section. Pay special attention to step 7:
Update certificates for user accounts by replacing the content of client-certificate-data and client-key-data respectively. For information about creating certificates for individual user accounts, see Configure certificates for user accounts. Additionally, update the certificate-authority-data section in the kubeconfig files, respectively with Base64-encoded old and new certificate authority data.
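As a rough sketch of the kubeadm side (commands assume a recent kubeadm; on older versions they lived under kubeadm alpha certs):

    # See which certificates, including admin.conf's embedded client certificate, expire when:
    kubeadm certs check-expiration

    # After rekeying the CA as described above, regenerate admin.conf so it is signed by
    # the new CA; kubeconfigs issued under the old CA will no longer be accepted:
    kubeadm init phase kubeconfig admin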
WMF 5.1 includes new functionality to allow signing of MOF documents and DSC Resource modules (reference). However, this seems very difficult to implement in reality -- or I'm making it more complicated than it is...
My scenario is VMs in Azure, and I'd like to leverage Azure Automation as the DSC pull server; however, I see this applying on-premises too. The problem is that the certificate used to sign the MOF configurations and/or modules needs to be placed on the machine before fetching and applying the configuration; otherwise the configuration will fail because the certificate isn't trusted or present on the machine.
I tried using Azure Key Vault to bootstrap the certificate (just the public key, because that's my understanding of how signing works), but that fails when using Add-AzureRmVMSecret because the CertificateUrl parameter expects a full certificate with the public/private key pair to install. In an ideal world this would be the solution, but that's not the case...
Another idea, again in this context, would be to upload the cert to blob storage and use a CustomScriptExtension to pull down the cert and install it into the LocalMachine store, but that feels nasty as well because, ideally, that script should be signed too, which puts us back in the same spot.
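For what it's worth, the script that the extension would run could be as small as this (a sketch; the blob URL is a placeholder, and the target store depends on how the LCM's signature validation is configured):

    # Download the public signing cert from blob storage and install it into the machine store.
    Invoke-WebRequest -Uri "https://mystorage.blob.core.windows.net/certs/dsc-signing.cer" -OutFile "$env:TEMP\dsc-signing.cer"
    Import-Certificate -FilePath "$env:TEMP\dsc-signing.cer" -CertStoreLocation "Cert:\LocalMachine\Root"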
I suppose another idea would be to first PUSH a configuration that downloaded and installed certificates only but that doesn't sound great either.
The last option would be to rely on an AD GPO or something similar to push the certificate first... but, honestly, we're trying to move away from much of that if/when possible...
Am I off-base on this? It seems like this should be a solvable problem -- just looking for at least one "good" way of doing it.
Thanks
David Jones has quite a bit of experience dealing with this issue in an on-premises environment, but as you stated, the same concepts should apply to Azure. Here is a link to his blog. This is a link to his GitHub site with a PKITools module that he created. If all else fails, you can reach out to him on Twitter.
While it's quite easy to populate a pre-booted image with public certificates, it's not possible (that I have found) to populate the private key.
DSC would require the private key to decrypt the passwords.
The most common tactic people blog about is to use the unattend file to script the import of a PFX. The issue there is that you have to leave the password for the PFX in plain text. Perhaps that is OK in your environment.
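For example, something like this in the first-boot script, with the PFX password sitting right there in plain text (paths and password are placeholders):

    # Import the PFX (public + private key) into the machine store so DSC can decrypt credentials.
    $pfxPassword = ConvertTo-SecureString "PlainTextPfxPassword" -AsPlainText -Force
    Import-PfxCertificate -FilePath "C:\Bootstrap\dsc-encryption.pfx" -CertStoreLocation "Cert:\LocalMachine\My" -Password $pfxPassword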
The other option requires a more complicated setup. Use a simple DSC configuration or GPO to auto-enroll a unique certificate, then have the system, via a first-boot script or DSC custom resource, tickle an API (like Polaris) that triggers a DSC script which uses PKITools or another script to get the public certificate the machine has. Then have that API push a new DSC config (or pull settings) to the machine.
Could somebody please point me in the right direction for configuring Hyperledger Fabric to use a custom CA? The docs here suggest that any CA that supports ECDSA can be used.
Take a look at the Cryptogen tool.
It produces x509 certificates, and you can use these to prime the Fabric network entities (orderer, peers, clients).
I recommend you run cryptogen to produce the needed PEM files by following byfn (take a look at ./byfn.sh -m generate).
If you can replicate this folder structure, you're good to go.
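Roughly (the config file name follows the first-network sample; adjust it to your own organizations):

    # Generate the MSP material (PEM certificates and keys) for every org defined in crypto-config.yaml:
    cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

    # Or let the byfn sample drive it for you:
    ./byfn.sh -m generate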
Additionally (this is just a thought, never tried it), Fabric-CA has an HTTP API for registering clients.
If you build your own gateway that mimics Fabric-CA's API and does the same things, you can make the client SDK (which also includes a fabric-ca client) talk to your CA as if it were Fabric-CA.
I don't fully understand the aim of your question, but I'm going to answer it.
right direction for configuring Hyperledger Fabric to use a custom CA
Hyperledger Fabric needs certificates to control and restrict access to the blockchain. For each channel you define which members are going to be part of it, and you configure the MSP. The certificates are used for the MSP. So you can create those certificates in any way you like; then you should pass them to the corresponding members and place them in the corresponding directories (there is a sketch of that directory layout at the end of this answer).
However, Hyperledger Fabric also gives you tools to achieve this:
On the one hand, as @yacovm said, by using the cryptogen tool.
On the other hand, as you said, by creating your own certificate authority, i.e., by running your own CA server.
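If you go the custom-CA route, the per-entity MSP directory you populate with those certificates typically looks like this (a sketch; some folders, such as intermediatecerts, are only needed if you actually use them):

    msp/
        admincerts/         # certificate(s) of the organization's admins
        cacerts/            # root certificate(s) of your custom CA
        intermediatecerts/  # optional, only if you use intermediate CAs
        keystore/           # the entity's private key
        signcerts/          # the entity's signing (enrollment) certificate
        tlscacerts/         # root certificate(s) of the TLS CA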