Azure Service Fabric, KeyVault, SSL Certificates

I want to secure my own HTTPS end point (node.js express.js server) with a certificate which I have deployed to the cluster (that is, it exists in Cert:\LocalMachine\My).
I of course want to avoid having my certificate in source control. I can't use an EndpointBindingPolicy in the ServiceManifest because as far as I'm aware that is just for http.sys(?) based systems, which this isn't.
What I thought I might do is run a SetupEntryPoint script to:
grab the certificate from the store
export it as a pfx with a random passphrase (or some appropriate format)
copy it to {pkgroot}/certs/ssl_cert.pfx
replace some sort of token in serverinit.js with the random passphrase
This way the server code base doesn't need to have the certificate present; it just needs to trust that it will be there when the service is run.
However I don't think I can do this, even if it is a sensible idea, as the certificates in the store are marked such that the private key is non-exportable! Or, at least, they are with my RDP account!
Is there a way to export the certificate with its private key?
What are my options here?
I ended up writing a PowerShell script which runs in my release pipeline; its arguments are clientID, clientSecret and certificateName. clientSecret is stored as a protected environment variable for my agent.
Create new application registration under same subscription as KeyVault (which should be same as SF Cluster) (e.g. in portal.azure.com)
Note down app ID
Create app secret
Modify the KeyVault access policy with the app as principal, granting Get only on secrets
Use the REST API with the client ID and secret (https://learn.microsoft.com/en-us/rest/api/keyvault/getsecret); a sketch of this retrieval step follows below
I chose this over grabbing the certificate in the SetupEntryPoint, for example, as it hides the client secret better from the outside world (e.g. developers who shouldn't/don't need access to it).
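For illustration, here is a minimal sketch of that retrieval step using curl instead of the original PowerShell script; the tenant ID, vault name, secret name and api-version are placeholder assumptions:
# Hypothetical placeholders: TENANT_ID, CLIENT_ID, CLIENT_SECRET, VAULT_NAME, SECRET_NAME
# 1. Get an access token for Key Vault via the client credentials grant
TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/$TENANT_ID/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" \
  -d "scope=https://vault.azure.net/.default" | jq -r '.access_token')
# 2. Call the getsecret REST endpoint with the bearer token
curl -s "https://$VAULT_NAME.vault.azure.net/secrets/$SECRET_NAME?api-version=7.4" \
  -H "Authorization: Bearer $TOKEN" | jq -r '.value'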

Related

What is the best practice for certificates used to protect an Azure App in client credential flow

Many of my (confidential) apps are talking to each other via the client credential flow.
They request a token from the Azure Identity platform and use this token to authenticate against another app.
A while ago I used client secrets to do so, but later I read that this is not recommended for production environments.
For this reason I changed to self-signed certificates that are valid for a longer time.
Those certificates are generated by myself with Azure Key Vault.
However, this is also not recommended.
Microsoft states that in production environments you should use certificates that are signed by an official CA.
If I now use Let's Encrypt, the certificate will expire every three months, which is also not such a nice solution.
My questions:
Why is the client secret not recommended in production environments?
Why is the self-signed certificate a problem? I understand this in the context of HTTPS, but where is the security breach if it's used for the client credential flow? In my case I am the owner of the app and the app registration.
Do I need to buy a certificate that is one-year valid to do it "the right way"?
Do you have any source of best practices here?
• Client secrets include application credentials, SSH keys, API keys, database passwords, encryption keys, connection strings and so on, used to connect to various resources and to access the data or functionality needed for the application's designated purpose. If these are breached, they can put your application at great risk of compromise. In addition, a client secret generated in Azure AD and used by an API to authenticate and authorize against Azure AD is typically listed in unencrypted form in the API code itself. You do have the option to store that secret in a key vault and reference it through a managed identity or RBAC assignments, but those credentials can also fall into the wrong hands and leave the application vulnerable, for example if the managed identity is user-assigned or if the access scope of the secret is not well defined for the specific need. This is why a client secret is not recommended for use in a production API.
• In the client credentials flow, applications are directly granted permissions by an administrator to perform certain actions against the API being called, via a certificate or federated credentials. When a self-signed certificate is used in this scenario, the administrator has already granted the daemon app all the privileges it needs over code, API, permissions, data, etc., so weak validation can be misused: it is very easy to generate a certificate key pair without reasonable entropy, and a self-signed certificate gives no assurance that the private key is appropriately protected or strongly validated. That is why self-signed certificates are not recommended in the client credentials flow.
• For best practices regarding web app service deployment, please refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/app-service/security-recommendations#general
It explains the security recommendations for deploying a web app service.

Can I safely save my private key on public build servers?

This is a technical question on comprehension:
When I want a cloud server (e.g. GitHub Actions, Azure DevOps, or GitLab CI/CD) to build and publish an app to any of the app stores ... isn't it necessary then that I upload my private key to these servers' key vault, so they can sign my app on my behalf?
Isn't that concept a bit risky?
I mean, I was taught to let my private keys never leave my machines.
What if I accidentally misconfigure the security settings on the uploaded key? What if some black hat gets hold of the key and abuses it? I mean, with each build process, the private key is getting copied from the vault to the build runner, usually residing somewhere else.
What are the techniques used to ensure that private keys are kept safely on a public server? Is there an official audit performed on these departments?
Should I rather use different Authenticode certificates for each of the above providers? Or will a single certificate be resilient enough?
I couldn't find a technical discussion on this question, only marketing docs. Has this security concern been scientifically scrutinized?
It is safe to use certificates in Azure DevOps: Azure DevOps encrypts the certificate and then uses it in the build pipeline.
We could use Azure Key Vault to protect encryption keys and secrets like certificates, connection strings, and passwords in the cloud. You could refer to this doc for more details.
We could also save the certificate file in Library -> Secure Files; anyone who wants to access or use it needs sufficient permission.
I found some samples for using certificates in Azure DevOps Services; you could check them.
 
If you are using an SSL certificate thumbprint, you could save it in a variable and then mark the variable as secret via the "lock" icon at the end of the row.
If you are using a PFX certificate, you could refer to this doc.
Update 1
GitHub Actions
We could save the certificate in GitHub Secrets; GitHub also encrypts the certificate. We cannot get the value back after saving it as a secret, and we will get *** when the secret's value is printed in the log.
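As an aside (this pattern is not from the original answer, but is common to both Azure DevOps and GitHub Actions), a PFX can be stored base64-encoded as a secret and decoded in a pipeline step on a Linux runner; the secret and file names here are hypothetical:
# Assumes the PFX was stored base64-encoded in a secret exposed to the step
# as the environment variable CERT_PFX_BASE64 (hypothetical name)
echo "$CERT_PFX_BASE64" | base64 -d > signing-cert.pfx
# ... sign / use the certificate ...
# remove the file so it does not linger on the runner
rm -f signing-cert.pfx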

How to use Hashicorp Vault's AppRole in production?

We have installed and configured Hashicorp Vault AppRole authentication for one server, by storing the role_id and secret_id in a local file on the server, and we're able to have code on the server read the values from file, authenticate to Vault, receive a token and then read the secrets it needs from Vault. So far so good. However, the secret_id expires after 31 days, and so the process fails.
I've read up on the concepts of using AppRoles, and they seem like the perfect fit for our use case, but for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl it should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the Hashicorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_id's in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and Orchestrator systems themselves are authenticated to Vault? Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment you will have to write some logic in your installer, and have a service manager of some sort to start your services. In many cloud environments, you may already have the equivalent entities (Terraform, Cloud Formation, etc.) and you should leverage their secrets management capabilities where needed.
For custom installations, here is a workflow that I have used.
1. Have an installation manager process that can be invoked to perform installation/upgrade. Make sure installation/upgrade of services always goes through this process.
2. Have a service manager process that is responsible for starting individual services and monitoring/restarting them. Make sure service start-ups always go via this service manager.
3. During installation, generate self-signed certificates for Vault, the installation manager and the service manager. Vault should be configured to trust the certs of the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs.
4. These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secrets for the named roles created by the installation manager, and delete nothing.
5. During installation/upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role ids for individual services in per-service config files that the services read on start-up.
6. During each service's start-up, the service manager should connect to Vault and create a secret id corresponding to the service's role. It should set the secret id in an environment variable and start the service. The secret id should have time-bound validity (by setting TTLs) so that it cannot be used for much beyond the creation of the auth token (see #7).
7. Each service should read the role id from its config file and the secret id from the environment variable. It should then generate an auth token using these two, and use that token to authenticate itself with Vault for its lifetime.
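A rough sketch of steps 5-7 with the Vault CLI, purely for illustration; the role name, file paths and TTLs are placeholder assumptions, and the certificate-based logins of the two managers are omitted:
# Step 5 (installation manager): create a role for the service and record its role_id
vault write auth/approle/role/my-service token_ttl=20m token_max_ttl=1h secret_id_ttl=5m
vault read -field=role_id auth/approle/role/my-service/role-id > /etc/my-service/role_id
# Step 6 (service manager): mint a short-lived secret_id and hand it to the service via its environment
export MY_SERVICE_SECRET_ID=$(vault write -field=secret_id -f auth/approle/role/my-service/secret-id)
# Step 7 (inside the service): exchange role_id + secret_id for an auth token
vault write -field=token auth/approle/login \
  role_id="$(cat /etc/my-service/role_id)" \
  secret_id="$MY_SERVICE_SECRET_ID"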
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault approle authentication method is already installed at approle/ and that you are logged in to Vault, have root or admin privileges on the Vault server and have a valid, non-expired token.
Note: For the values supplied for the fields below, the maximum value that vault seems to accept is 999,999,999. For the TTL fields, that is the number of seconds which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method
# NOTE: In this command, the field names, default-lease-ttl
# and max-lease-ttl contain dashes ('-'), NOT
# underscores ('_'), and are preceded by a single
# dash ('-').
vault auth tune \
-default-lease-ttl=999999999 \
-max-lease-ttl=999999999 approle/
# Override defaults on the approle
# NOTE: In this command, the field names, secret_id_ttl and
# secret_id_num_uses contain underscores ('_'), NOT
# dashes ('-'), and are NOT preceded by a single
# dash ('-').
vault write auth/approle/role/my-approle \
secret_id_ttl=999999999 \
secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/my-approle/secret-id
Update the server config file to use the new secret_id and you are ready to go.
As the OP has noted, the Hashicorp Vault documentation assumes that the application is able to authenticate, somehow, to the vault, retrieve the secret ID (possibly wrapped) from the vault, and then use that to authenticate and fetch a token used to actually work with secrets. The answers here pose alternative approaches to retrieving that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well thought out approach:
Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else.
Create a long lived, periodic/renewable token based on that policy.
Store the long lived token securely, e.g. as a Kubernetes secret
At runtime, use the long-lived token to:
acquire the secret-id and role-id,
authenticate to Vault using these and acquire a short-lived token,
use the current short-lived token to work with secrets.
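A minimal sketch of those runtime steps with the Vault CLI, assuming the approle is called my-approle as in the earlier example and that the long-lived token has been retrieved into an environment variable:
# Authenticate with the long-lived, periodic token (the variable name is hypothetical)
export VAULT_TOKEN="$LONG_LIVED_TOKEN"
# Acquire the role-id and a fresh secret-id (all this token's policy allows)
ROLE_ID=$(vault read -field=role_id auth/approle/role/my-approle/role-id)
SECRET_ID=$(vault write -field=secret_id -f auth/approle/role/my-approle/secret-id)
# Exchange them for a short-lived token and use that one to work with secrets
export VAULT_TOKEN=$(vault write -field=token auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID")
vault kv get secret/my-app/config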
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and set the approle authentication name, e.g. chef-ro in the blog's case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but since the question had no answers I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low - 30 minutes max if your application is stateful, or maybe even less if it's a stateless application. The secret key of the Vault approle should also be rotated every 90 days. Please note that by default the Vault approle backend has a 31-day TTL, so if you want to set it to 90 days, you need to increase the TTL of the approle backend as well.
However (in the same question):
You can generate a secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows, don't know about Linux.
GitHub Personal Access Tokens, but this is not org. friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).

How and where to put passphrase of the Tessera private key when using Hashicorp vault

We are using Quorum and Hashicorp vault in one of our systems. We have been able to successfully integrate these two i.e. we have put the Tessera private and public keys in the Vault and successfully ran the Quorum server.
The problem is, when we try to use a passphrase for the private key, we cannot find a way to achieve this. We have also observed that when we use the Tessera key generation tool for Hashicorp Vault, which generates the keys and saves them in the Vault, it does not ask for any passphrase. But when we use the normal key generation tool, which generates the keys and puts them in the specified directory, it does ask for a passphrase.
Could you please help us achieve this with Hashicorp Vault and Tessera, i.e. generate a key pair where the private key is protected with a passphrase?
We could not find any help in the wiki, and from analysing the source code our impression is that if we want to use a passphrase-protected private key with Tessera, we can't use Hashicorp Vault at the moment.
Please help.
Tessera does not support the storing of passphrase-protected private keys in a Hashicorp Vault as Vault already encrypts the data that it stores.
However, to get access to the data stored in a Vault, the Tessera instance must possess the correct set of credentials (provided as environment variables) in order to authenticate with the Vault. Using these credentials offers more flexibility and control in comparison to the passphrases used to secure file-stored keys.
For example, configuring an authentication method (e.g. AppRole authentication) makes it possible to define the authorisation for a particular Tessera instance, ensuring it is only allowed to access the secrets that it needs. Additionally these credentials can be configured to expire after a certain number of uses or length of time.
Finally, TLS should be enabled on the Vault server to ensure secure communication between Vault and Tessera. The necessary TLS certificates and keys should be included in the Tessera start-up config.
The Tessera wiki provides more details on the exact configuration and environment variables to provide:
https://github.com/jpmorganchase/tessera/wiki/Setting-up-a-Hashicorp-Vault
https://github.com/jpmorganchase/tessera/wiki/Keys#4-hashicorp-vault-key-pairs
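For illustration only, an AppRole-based start-up would look roughly like the sketch below; the exact environment variable names and options are documented on the wiki pages above, so treat the names here as assumptions:
# Assumed variable names - check the linked wiki pages for the exact ones
export HASHICORP_ROLE_ID="<role-id issued by Vault>"
export HASHICORP_SECRET_ID="<secret-id issued by Vault>"
# or, for token-based authentication:
# export HASHICORP_TOKEN="<vault token>"
# ...then start Tessera as usual, with its config file pointing at the Vault key pair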

Loading a server-side certificate *and* a private key from Windows Server cert store?

I'm trying to call an external REST web service that requires both a server-side certificate and a private key (both of which I got from the publisher of that service as *.pem files).
For my testing, I googled and found a way to combine these two pieces into a *.pfx file, and loading an X509Certificate2 instance from that binary file on disk works just fine.
Now I was trying to put this into the Cert Store on my production Windows Server 2008.
I can get the X509Certificate2 from the cert store in my C# code - no problem:
X509Store store = new X509Store(StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection certs = store.Certificates.Find(X509FindType.FindBySerialNumber, "serial-number-here", false);
if (certs.Count > 0)
{
    X509Certificate2 cert = certs[0];
    // set the certificate on the RestClient to call my REST service
    _restClient.ClientCertificates.Add(cert);
}
store.Close();
But when I do this, the web service barfs at me, claiming it needs an "SSL certificate"...
Also: when I was loading the X509Certificate2 from disk, from that *.pfx file, I had to provide a password - nothing needs to be provided here, when loading from the cert store.... odd....
It seems that even though I imported the *.pfx which contains both the server-side certificate and our private key, somehow I cannot get both back from the cert store...
Any idea how I can get this to work? Do I need to load the private key from the cert store in a second step? How?
These certificates still remain mainly a big voodoo-like mystery to me ..... can anyone enlighten me?
The first thing to check is to see whether the certificate store does have the private key.
Open up the certificates snap-in and find your certificate, double-click it and make sure it shows the "You have a private key that corresponds to this certificate" indicator.
Next, if the private key is in the store then maybe the account accessing the certificate does not have permissions on the private key. There are two ways to check this:
In the certificates snap-in, right click the certificate > All Tasks > Manage Private Keys. (You should be able to check and edit permissions here.)
In your code you could access the PrivateKey property (i.e. do var privateKey = cert.PrivateKey; and see whether you get it back).
You did not write how the web service is implemented:
is it deployed on IIS?
is it self-hosted?
Your code to get the certificate from the store is correct. The question is where you imported the pfx - into the CurrentUser or the LocalMachine store. You are using the CurrentUser store in the code example; if you imported the certificate into the LocalMachine store it will not be found. Also, please specify the store name - StoreName.My (in MMC or certmgr.msc it is shown as Personal) - in the constructor of X509Store (it might be the default, but who knows all the defaults anyway :) )
But when I do this, then the web service barfs at me, claiming it
needs a "SSL certificate"...
You need to ensure you have Client Authentication in the extended key usage of the certificate.
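Since you have the original *.pem from the publisher, one quick way to check this (a suggestion, not part of the original answer) is with openssl:
# Print the certificate's extensions and look for "TLS Web Client Authentication"
openssl x509 -in certificate.pem -noout -text | grep -A 1 "Extended Key Usage"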
Also: when I was loading the X509Certificate2 from disk, from that
*.pfx file, I had to provide a password - nothing needs to be provided here, when loading from the cert store.... odd....
That's how it works. When you have a pfx, the private key in it is secured/encrypted with a password (the password can be an empty string). When you import the pfx into the certificate store, the private key is secured/encrypted with another key (not exactly sure what key it is). However, you can add another level of protection to the private key by enabling strong private key protection when importing the pfx into the store (I do not recommend it when used with ASP.NET, web services or anything else that does not have a desktop). But when it is your personal certificate for signing emails, it might be good to enable it. Windows will then pop up a window whenever an application tries to use the private key.
#DanL might be right about the rights to the private key, and his points 1) setting rights on the private key and 2) accessing the private key via X509Certificate2 are both correct. I would just add to 1) that if you are trying to connect to the REST service from an ASP.NET application or another web service on IIS, then the name of the account you need to add permissions for is IIS APPPOOL\name_of_the_apppool_your_app_runs_under.