Configuring a custom CA in Hyperledger Fabric

Could somebody please point me in the right direction for configuring Hyperledger Fabric to use a custom CA? The docs here suggest that any CA that supports ECDSA can be used.

Take a look at the Cryptogen tool.
It produces x509 certificates, which you can use to prime the fabric network entities (orderer, peers, clients).
I recommend you run cryptogen to produce the needed PEM files by following the BYFN tutorial (take a look at ./byfn.sh -m generate).
If you can replicate this folder structure you're good to go.
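For reference, the folder structure cryptogen produces for the BYFN sample looks roughly like this (example.com / org1 are just the sample's defaults):

    crypto-config/
      ordererOrganizations/
        example.com/
          ca/   msp/   orderers/   tlsca/   users/
      peerOrganizations/
        org1.example.com/
          ca/
          msp/
          peers/
            peer0.org1.example.com/
              msp/    (admincerts, cacerts, keystore, signcerts, tlscacerts)
              tls/    (ca.crt, server.crt, server.key)
          tlsca/
          users/
            Admin@org1.example.com/

If your custom CA can lay its output out like this (signing cert in signcerts, private key in keystore, CA cert in cacerts), the peers and orderers should consume it just like cryptogen's output.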
Additionally (this is just a thought, I've never tried it), Fabric-CA has an HTTP API for registering clients.
If you build your own gateway that mocks Fabric-CA's API and does the same things, you can make the client SDK (which also includes a fabric-ca client) talk to your CA as if it were Fabric-CA.
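For example, the enroll call a fabric-ca client makes is just an authenticated HTTP POST, so a gateway mocking Fabric-CA would need to answer something like this (a rough Python sketch; the URL, credentials and CSR handling are assumptions on my part, not a tested recipe):

    import requests

    # Fabric-CA exposes enrollment at /api/v1/enroll: HTTP basic auth with the
    # enrollment ID/secret, and a JSON body carrying a PEM-encoded CSR.
    # A gateway mocking Fabric-CA would accept the same request and return a
    # certificate signed by your own CA instead.
    csr_pem = open("client.csr").read()  # CSR generated beforehand (assumption)
    resp = requests.post(
        "https://my-custom-ca-gateway:7054/api/v1/enroll",  # hypothetical gateway
        auth=("admin", "adminpw"),                          # enrollment ID / secret
        json={"certificate_request": csr_pem},
        verify="ca-chain.pem",
    )
    resp.raise_for_status()
    print(resp.json()["result"]["Cert"])  # the signed certificate returned by your CA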

I don't fully understand the aim of your question, but I'll try to answer it.
right direction for configuring Hyperledger Fabric to use a custom CA
Hyperledger Fabric needs certificates to control and restrict access to the blockchain. For each channel you define which members are going to be part of it, and you configure the MSP. The certificates are used for the MSP. So you can create those certificates in any way you like; then you should pass them to the corresponding members and place them in the corresponding directories.
However, Hyperledger Fabric gives you two ways to achieve it:
On the one hand, as @yacovm said, by using the cryptogen tool.
On the other hand, as you said, by creating your own Certificate Authority, i.e. by running your own CA server.

Related

Installing SSL Certificates for Wazuh-Dashboard

Is it possible to have Wazuh Manager served through custom SSL certificates? The wazuh-certs-tool gives you a self-signed cert, and every other way I've tried to get it served through SSL has failed.
The closest I've gotten to getting this to work is that I've had the dashboard served by a custom SSL certificate, with agents connecting to it successfully and providing a heartbeat, but there were zero log flows or events happening. When it was in this state, I saw the API calls were coming from what appeared to be a Java instance, erroring out and complaining about the certificate it received. I saw a keystore file located at /etc/wazuh-indexer. Do I also need to add the root-ca cert here as well?
It seems that the certificates your indexer expects do not match the certificates in your manager or the dashboard.
If you follow the normal installation guide, it shows how and where to place your certificates, which are created using the wazuh-cert-tool. But certificates can be created from any other source; as long as they contain the expected information, you can check that information in the documentation.
I would recommend you follow the installation steps in the installation guide from scratch, to make sure you copy each expected certificate into its place and that the configuration files for your indexer, dashboard, and manager point to the correct files. All you would need to change is the creation of the certificates, so that you use your own custom certs.
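If it helps, a quick way to see which CA actually signed the certificate a component is presenting (so you can compare indexer, manager and dashboard against your custom root) is something like this small Python sketch, assuming the indexer is on the default port 9200 and using an example hostname:

    import ssl
    from cryptography import x509

    # Grab the certificate the wazuh-indexer presents and print who issued it.
    pem = ssl.get_server_certificate(("indexer.example.local", 9200))  # example host
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())
    # The issuer should match your custom root/intermediate CA on every component;
    # a mismatch here is typically why the Java-based indexer rejects connections.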
In case of further doubt, do not hesitate to ask.

IBM Cloud Secrets Manager: Which Lets Encrypt environment to target?

We currently have a Certificate Manager instance in IBM Cloud and have certificates ordered via Let's Encrypt, using the certs with our client-to-site VPN service. As Certificate Manager is being deprecated in favour of Secrets Manager, we plan to create a public certificates engine in Secrets Manager using the same Let's Encrypt CA.
In the ACME account creation tool, we have the option of targeting Let's Encrypt prod and staging. Can anyone shed light on which target needs to be chosen?
https://github.com/ibm-cloud-security/acme-account-creation-tool#supported-certificate-authorities
Also, once LE is integrated with Secrets Manager, will the certificates be able to be auto-renewed?
LE staging is for testing your setup. Once everything works you should use production. See https://letsencrypt.org/docs/staging-environment
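For reference, these are the two ACME directory endpoints that "staging" and "prod" refer to, e.g. as constants in a setup script (Python, purely illustrative):

    # Let's Encrypt ACME v2 directory URLs
    LE_STAGING = "https://acme-staging-v02.api.letsencrypt.org/directory"  # for testing; certs are not browser-trusted
    LE_PROD    = "https://acme-v02.api.letsencrypt.org/directory"          # real, trusted certificates; stricter rate limits

    ACME_DIRECTORY = LE_PROD  # switch to LE_STAGING while validating the Secrets Manager integration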
Secrets Manager supports secret rotation, including automatic renewal of certificates. Some conditions must be met. I recommend checking "Automatically rotating secrets" in the Secrets Manager documentation. I have not tested it, but from reading it, LE is supported with domain validation.

Getting started with Vault for existing non-containerized Windows apps

We have a bunch of Windows server applications that currently handle secrets as follows (our apps are in C#):
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing HashiCorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt steps with storing the secret in Vault's KV engine and just grabbing it in our apps - that takes the certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
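(For context, the "just grabbing it" part is trivially simple - roughly this, sketched with the Python hvac client since our C# code isn't relevant here; the path and field names are made up:)

    import hvac

    client = hvac.Client(url="https://vault.example.local:8200")
    client.token = "..."  # however we end up authenticating (the open question below)
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
    password = secret["data"]["data"]["password"]  # KV v2 nests the payload under data.data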
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities"; there's no Nomad, Chef, Terraform, etc. We have Windows machines in a domain, and we have a homegrown orchestrator that could be queried to say "this machine name runs these apps", so maybe there's something that can be done there? (There's a rough sketch of what I'm imagining below, after the edits.)
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain accounts they run under, because then we can just use Kerberos authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
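To make the AppRole/orchestrator idea above concrete, the flow I'm imagining looks roughly like this (Python sketch with hvac; the orchestrator hand-off is our hypothetical homegrown piece, and all names are examples):

    import hvac

    # 1. RoleID is baked into the app's config; the SecretID would be handed out by our
    #    homegrown orchestrator after it verifies "this machine runs this app"
    #    (that orchestrator step is the hypothetical part, not a real API).
    role_id = "role-id-for-billing-app"        # example value
    secret_id = "secret-id-from-orchestrator"  # example value

    # 2. The app trades RoleID + SecretID for a Vault token and reads its secrets.
    client = hvac.Client(url="https://vault.example.local:8200")
    login = client.auth.approle.login(role_id=role_id, secret_id=secret_id)
    client.token = login["auth"]["client_token"]
    secret = client.secrets.kv.v2.read_secret_version(path="billing-app/config")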
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially you create one "trusted platform" (trusted by your key vault service).
Your service can still have its own identity but delegate to the gMSA when you want to retrieve the secrets.
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs; each corresponds to a security policy/profile, so that any machine holding that certificate can authenticate and retrieve the secrets it should have access to.
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into HCP Vault, of course.
Azure auth won't work either; it requires Managed Identities, which you can't assign to on-prem machines. Otherwise this might have been a great fit.
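For anyone following the same path, the cert login itself is a single call against Vault's cert auth method - roughly this (Python/requests sketch; the paths and the role name "app-profile" are our examples, not anything standard):

    import requests

    VAULT = "https://vault.example.local:8200"  # our Vault endpoint (example)

    # Present the profile's client certificate; Vault's cert auth method maps it to a policy.
    resp = requests.post(
        f"{VAULT}/v1/auth/cert/login",
        json={"name": "app-profile"},  # optional: pin a specific cert role
        cert=("/etc/pki/app-profile.pem", "/etc/pki/app-profile.key"),
        verify="/etc/pki/vault-ca.pem",
    )
    resp.raise_for_status()
    token = resp.json()["auth"]["client_token"]  # use this token for the KV reads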

How to bootstrap certificates for LCM to reference for signature verification settings?

WMF 5.1 includes new functionality to allow signing of MOF documents and DSC Resource modules (reference). However, this seems very difficult to implement in reality -- or I'm making it more complicated than it is...
My scenario is VMs in Azure and I'd like to leverage Azure Automation for Pull DSC Server; however, I see this applying on premise too. The problem is that the certificate used to sign the MOF configurations and/or modules needs to get placed on the machine before fetching and applying the configuration otherwise configuration will fail because the certificate isn't trusted or present on the machine.
I tried using Azure KeyVault to bootstrap the certificate (just the public key because that's my understanding of how signing works) and that fails using Add-AzureRmVMSecret because the CertificateUrl parameter expects a full certificate with the public/private key pair to install. In an ideal world, this would be the solution but that's not the case...
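For what it's worth, one way to fetch only the public portion yourself (instead of relying on Add-AzureRmVMSecret) might look like this sketch with the Python Key Vault SDK; the vault and certificate names are placeholders, the VM still needs an identity that's allowed to read certificates, and you'd then import the .cer into the machine's trusted store (e.g. via a script at first boot):

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.certificates import CertificateClient

    # Fetch just the public certificate (no private key) from Key Vault.
    client = CertificateClient(
        vault_url="https://my-keyvault.vault.azure.net",  # placeholder vault name
        credential=DefaultAzureCredential(),
    )
    cert = client.get_certificate("mof-signing-cert")     # placeholder certificate name
    with open("mof-signing.cer", "wb") as f:
        f.write(cert.cer)                                 # DER-encoded public portion only
    # Import mof-signing.cer into the LocalMachine store the LCM checks
    # before the first pull, e.g. from a CustomScriptExtension.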
Other ideas, again in this context, would be to upload the cert to blob storage, use a CustomScriptExtension to pull down the cert and install into the LocalMachine store but that feels nasty as well because, ideally, that script should be signed as well and that puts us back in the same spot.
I suppose another idea would be to first PUSH a configuration that downloaded and installed certificates only but that doesn't sound great either.
Last option would be to rely on an AD GPO or something similar to potentially push the certificate first...but, honestly, trying to move away from much of that if/when possible...
Am I off-base on this? It seems like this should be a solvable problem -- just looking for at least one "good" way of doing it.
Thanks
David Jones has quite a bit of experience dealing with this issue in an on-premises environment, but as you stated, the same concepts should apply for Azure. Here is a link to his blog. This is a link to his GitHub site with a PKITools module that he created. If all else fails you can reach out to him on Twitter.
While it's quite easy to populate a pre-booted image with public certificates, it's not possible (that I have found) to populate the private key.
DSC would require the private key to decrypt the passwords.
The most common tactic people blog about is to use the unattend file to script the import of a PFX. The issue there is that you have to leave the password for the PFX in plain text. Perhaps that is OK in your environment.
The other option requires a more complicated setup: use a simple DSC config or GPO to auto-enroll a unique certificate. Then have the system, via a first-boot script or DSC custom resource, tickle an API (like Polaris) that triggers a DSC script which uses PKITools or another script to get the public certificate that the machine has. Then have that API push a new DSC config (or pull settings) to the machine.

Production Environment for Spring Cloud Config using Git/Vault

Spring Boot - 2.0.0.M3
Spring cloud - Finchley.M1
I want to know if someone is using Spring Cloud Config Server with both Vault and Git support in a production setup using a database storage backend.
I have evaluated Spring Cloud Config using Vault and am contemplating whether to go for Oracle JCE to encrypt usernames/passwords or for Vault, and I'm seeking suggestions on the same. We are working on Spring Boot/microservices.
Following are my findings -
Vault will introduce an additional layer and thus additional security and auditing concerns while communicating with Vault.
The Spring Cloud Config actuator endpoints for generating encrypted values are broken in the milestone release at this point, and /encrypt and /decrypt may not work if we go for Oracle JCE support, so we generate encrypted values through stable versions.
We do not wish to use a Consul server and are trying to use Cassandra as the storage backend.
I used the Vault authentication backend with AppRole and generated a token (different from the root token, as it's unsafe to use that) with read permissions. However, Spring Cloud Config at the moment supports only token-based authentication from the client side. That means we first generate a token from Vault and then pass it as a command-line/env variable.
Some additional points of concern are expiry of the token (though we can have a non-expiring token, I'm not sure about the pros/cons), restarts, safety issues, and instantiating new microservices. There is no provision for dynamic tokens/authentication on the Cloud Config side.
For the milestone release, I found that the client-side encryption/decryption is not working as of now using the recommended inclusion of the RSA jar. Here is the ticket I opened.
https://github.com/spring-cloud/spring-cloud-config/issues/805#issuecomment-332491536
These are some of my observations; please share your thoughts if there is any case study/whitepaper that addresses Spring Cloud Config Vault use cases, setup and challenges for a production microservices environment.
Thanks
Thanks for reaching out to me. One thing I would state is that the AppRole backend utilizes two distinct tokens, and spring-cloud-config-vault does indeed support this functionality, see: http://cloud.spring.io/spring-cloud-vault/single/spring-cloud-vault.html#_approle_authentication. I leverage Vault in the same way I leverage Config Server, as per the documentation. I don't encrypt any values in my config, I just don't put them there. I put the secret values in Vault and let it serve config. As long as keys don't collide, you don't have to mess with anything; otherwise you may need to adjust the priority so Vault wins - again, see the documentation I pointed to above. I wouldn't mess with encryption/decryption in spring-cloud-config personally. Because you have to check the keys into SCM or distribute them to your teams for local development, you lose the value of having these keys IMO.
Thanks. Spring Cloud Vault does support it, but not Spring Cloud Config with Vault. The only way seems to be passing X-Config-Token from the microservice to the Config Server. We are a bit skeptical about this part of generating tokens manually or through a script, especially with containerization and when new microservice instances are spawned. Not sure about this approach, especially in a production setup.
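To illustrate the part we're uneasy about: each microservice (or its startup script) would have to do roughly this before calling the config server (Python/requests sketch just to show the flow; role IDs, host names and the application/profile path are examples):

    import requests

    VAULT = "https://vault.example.local:8200"
    CONFIG_SERVER = "http://config-server.example.local:8888"

    # 1. AppRole login: trade role_id/secret_id for a Vault token.
    login = requests.post(
        f"{VAULT}/v1/auth/approle/login",
        json={"role_id": "my-service-role-id", "secret_id": "my-service-secret-id"},
    )
    vault_token = login.json()["auth"]["client_token"]

    # 2. Ask Spring Cloud Config for the service's config, forwarding the token
    #    in the X-Config-Token header so the server can read its Vault backend.
    cfg = requests.get(
        f"{CONFIG_SERVER}/my-service/production",
        headers={"X-Config-Token": vault_token},
    )
    print(cfg.json())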