Getting started with Vault for existing non-containerized Windows apps - hashicorp-vault

We have a bunch of Windows server applications (written in C#) that currently handle secrets as follows:
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing HashiCorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt pattern with storing the secret in Vault's KV engine and grabbing it in our apps; that takes the certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
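To illustrate what "just grabbing it" would mean, here's a minimal sketch, assuming a KV v2 engine mounted at secret/, a secret at secret/myapp/db, and a token already obtained via whatever auth method we settle on (all names are illustrative):
# Read a KV v2 secret over Vault's HTTP API from PowerShell
$headers = @{ "X-Vault-Token" = $env:VAULT_TOKEN }
$resp = Invoke-RestMethod -Method Get `
    -Uri "$($env:VAULT_ADDR)/v1/secret/data/myapp/db" `
    -Headers $headers
# KV v2 returns the key/value pairs under data.data
$connectionString = $resp.data.data.connection_string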
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities", there's no Nomad, Chef, Terraform, etc. We have Windows machines, in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain account they run under, because then we can just use kerb authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.

How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially you create one "trusted platform" (the gMSA, trusted to your key vault service).
Your service can still have its own identity, but it delegates to the gMSA when it needs to retrieve the secrets.
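A rough sketch of what that setup could look like, assuming the ActiveDirectory RSAT module and an existing KDS root key (the account and group names are made up):
# On a domain controller / admin workstation: create the gMSA and allow a group
# of machines to retrieve its managed password
New-ADServiceAccount -Name "gmsaVault" `
    -DNSHostName "gmsaVault.corp.example.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "VaultClientServers"

# On each member server that will use the gMSA
Install-ADServiceAccount -Identity "gmsaVault"
Test-ADServiceAccount "gmsaVault"   # should return True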

For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs, each will correspond to a security policy/profile, so that any machine that holds that certificate will be able to authenticate and retrieve the secrets they should have access to.
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into HCP Vault, of course.
The Azure auth method won't work either: it requires Managed Identities, which you can't assign to on-prem machines. Otherwise it might have been a great fit.
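For anyone following the same route, the cert auth setup looks roughly like this (role, policy and file names are illustrative; on HCP Vault you also need VAULT_NAMESPACE set):
# One-time: enable the cert auth method and register a cert per security profile
vault auth enable cert
vault write auth/cert/certs/profile-a `
    display_name="profile-a" `
    policies="profile-a" `
    certificate=@profile-a-ca.pem `
    ttl=3600

# On a machine that holds the profile-a client certificate: log in and get a token
vault login -method=cert `
    -client-cert=C:\certs\profile-a-client.pem `
    -client-key=C:\certs\profile-a-client-key.pem `
    name=profile-a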

Related

What is the best practice for certificates used to protect an Azure App in client credential flow

Many of my (confidential) apps are talking to each other via the client credential flow.
They request a token from the Azure Identity platform and use this token to authenticate against another app.
A while ago I used client secrets to do so, but later I read that this is not recommended for production environments.
For this reason I changed to self-signed certificates that are valid for a longer time.
Those certificates are generated by myself with Azure Key Vault.
However, this is also not recommended.
Microsoft states that in production environments you should use certificates that are signed by an official CA.
If I now use Let's Encrypt, the certificate will expire every three months, which is also not such a nice solution.
My questions:
Why is the client secret not recommended in production environments?
Why is the self-signed certificate a problem? I understand this in terms of HTTPS, but where is the security breach if it's used for the client credential flow? In my case I am the owner of the app and the app registration.
Do I need to buy a certificate that is valid for one year to do it "the right way"?
Do you have any source of best practices here?
• Client secrets include application credentials, SSH keys, API keys, database passwords, encryption keys, connection strings and so on, used to connect various resources and access the data or functionality that the application needs. If these are breached, they can put your application at great risk of compromise. Also, the client secret generated in Azure AD and used in APIs for connecting to Azure AD for authentication and authorization is listed in unencrypted form in the API code itself. You do have the option to store that secret in a key vault and refer to it through a managed identity or RBAC assignment, but those credentials can also fall into the wrong hands and leave the application vulnerable if the managed identity is user-assigned, or if the access scope of the secret is not well defined for the specific need. For these reasons, a client secret is not recommended in a production API. (A sketch of the raw token request involved is included after these bullets.)
• In the client credentials flow, an application is directly granted permissions by an administrator to perform certain actions against the API it calls, via a certificate or federated credentials. So when a self-signed certificate is used in a client credentials grant scenario, the administrator has granted the daemon app requesting access to the other API all the required privileges regarding code, API, permissions, data, etc. This can result in poor validation and misuse, because it is very easy to generate a certificate key pair without reasonable entropy. Also, a self-signed certificate gives no assurance that the private key of the key pair is protected appropriately for its use, nor does it allow strong validation of the certificate, which is why it is not recommended in the client credentials flow.
• For best practices regarding web app service deployment, please refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/app-service/security-recommendations#general
It explains the best security recommendations for deploying a web app service.
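For reference, the client credentials request the question describes boils down to a single POST to the tenant's token endpoint. A rough sketch using a client secret (the very thing discouraged for production; tenant/app IDs and the scope are placeholders):
# Client credentials grant against the Azure AD v2.0 token endpoint
$tenantId = "00000000-0000-0000-0000-000000000000"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "11111111-1111-1111-1111-111111111111"
    client_secret = "<application-secret>"   # replaced by a client_assertion when a certificate is used
    scope         = "api://22222222-2222-2222-2222-222222222222/.default"
}
$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body
# $token.access_token is then presented as a Bearer token to the other (confidential) app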

Powershell - automated connection to Power BI service without hardcoding password

We have a PowerShell script to pull Power BI activity data (using Get-PowerBIActivityEvent), and I have been trying to automate it so that it can pull this data daily using an unattended account. The problem is the script must necessarily use the Connect-PowerBIServiceAccount cmdlet, which requires a credential. I don't want to have the passwords hard-coded anywhere (obviously) and ideally don't want to be passing it into the script as a plaintext parameter in case of memory leaks.
I've tried using SSIS as a scheduling mechanism since it allows for encrypted parameters in script tasks, but can't call the PS script with a SecureString parameter since the System.Management.Automation namespace isn't in the GAC (a commandline call wouldn't be possible).
I don't believe task scheduler would offer the functionality needed.
Does anyone know of any elegant ways to connect to the power BI service using encrypted credentials?
In the docs of Connect-PowerBIServiceAccount there are 2 options for unattended sign-in:
Using -Credential, where you pass AAD client ID as username and application secret key as password
Using -CertificateThumbprint and -ApplicationId
For both options you need to configure a service principal and add proper permissions. I'm not going into details of how to configure that, but most probably you'd need (at least) a couple of application permissions granted to the app registration.
I'm not really sure what functionality you need in the script, but in my experience the majority of cases can be covered by a scheduled task, so the explanation below will apply to that solution.
How can you secure the credentials?
There are various possible solutions, depending on your preferences. I'd consider certificate-based authentication the more secure option (the certificate is available only to the current user or to all users of the machine).
What's important in certificate-based authentication: make sure that the certificate is available to the account running the script (in many cases that's a service account, not your user account).
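A minimal sketch of that certificate-based sign-in (placeholder IDs; the exact parameter set is from memory of the cmdlet's documentation, so double-check it):
# Sign in as the service principal using a certificate from the local certificate store
Connect-PowerBIServiceAccount -ServicePrincipal `
    -ApplicationId "11111111-1111-1111-1111-111111111111" `
    -CertificateThumbprint "A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5D6E7F8A9B0" `
    -Tenant "contoso.onmicrosoft.com"

# ...then the actual work, e.g.
# Get-PowerBIActivityEvent -StartDateTime '2020-01-01T00:00:00' -EndDateTime '2020-01-01T23:59:59'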
How can I secure it further?
If you want, you can store the application ID as a secure string (I don't have SSIS to test, so I'm not sure if there's any workaround to make it work there) or use Export-CliXml. Both use the Windows Data Protection API (DPAPI), so the file can be decrypted only by the account that was used to encrypt it.
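For example (run the export once, interactively, under the same account that will run the scheduled task, because DPAPI ties the file to that account and machine):
# One-time, as the task account: capture and store the credential (DPAPI-protected)
Get-Credential | Export-Clixml -Path "C:\secure\pbi-cred.xml"

# In the scheduled script: load it back; only the same account on the same machine can decrypt it
$cred = Import-Clixml -Path "C:\secure\pbi-cred.xml"
Connect-PowerBIServiceAccount -Credential $cred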
To add one more level of security (I'm not even mentioning setting correct access rights on the files, as that's obvious), you might put the file in an encrypted folder (you may already have a solution for disk encryption, so use it if you wish).
There are probably solutions to secure the keys even better, but these should do the job. I'm using other Microsoft 365 modules with a similar approach (Outlook, SharePoint PnP) and it works quite well.
NOTE: If you need to use a user account instead of a service principal, make sure that Multi-Factor Authentication is disabled on that account for that specific application.
The relevant documentation (https://learn.microsoft.com/en-us/power-bi/developer/embedded/embed-service-principal) states that admin APIs (i.e. those served by Get-PowerBIActivityEvent) do not currently support service principals. This means it's not currently possible to use a registered app to run these cmdlets unattended.
There is a feature request open to provide this at the moment: https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/39641572-need-service-principle-support-for-admin-api

How to use Hashicorp Vault's AppRole in production?

We have installed and configured Hashicorp Vault AppRole authentication for one server, by storing the role_id and secret_id in a local file on the server, and we're able to have code on the server read the values from file, authenticate to Vault, receive a token and then read the secrets it needs from Vault. So far so good. However, the secret_id expires after 31 days, and so the process fails.
I've read up on the concepts of using AppRoles, and they seem like the perfect fit for our use case, except for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl it should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the HashiCorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_ids in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and orchestrator systems themselves authenticate to Vault. Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment you will have to write some logic in your installer, and have a service manager of some sort to start your services. In many cloud environments you may already have the equivalent entities (Terraform, CloudFormation, etc.), and you should leverage their secrets management capabilities where needed.
For custom installations, here is a workflow that I have used.
1. Have an installation manager process that can be invoked to perform installation / upgrade. Make sure installation / upgrade of services is always done through this process.
2. Have a service manager process that is responsible for starting individual services and monitoring / restarting them. Make sure service start-ups always go via this service manager.
3. During installation, generate self-signed certificates for Vault, the installation manager and the service manager. Vault should trust the certs for the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs.
4. These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secrets for the named roles created by the installation manager, and delete nothing.
5. During installation / upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role ids for individual services in per-service config files that the services read on start-up.
6. During each service's start-up, the service manager should connect to Vault and create a secret id for that service's role. It should set the secret id in an environment variable and start the service. The secret id should have time-bound validity (by setting TTLs) so that it cannot be used for much beyond the creation of the auth token (see step 7).
7. Each service should read the role id from its config file and the secret id from the environment variable. It should then generate an auth token using these two, and use the token to authenticate itself with Vault for its lifetime. (A rough CLI sketch of steps 5-7 follows this list.)
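That sketch, using the Vault CLI with illustrative role and policy names:
# Installation manager (step 5): create a role per service and hand the role_id to its config file
vault write auth/approle/role/billing-svc `
    token_policies="billing-svc" `
    secret_id_ttl=10m secret_id_num_uses=1
vault read -field=role_id auth/approle/role/billing-svc/role-id

# Service manager (step 6): mint a short-lived secret_id at service start-up
$env:BILLING_SVC_SECRET_ID = vault write -f -field=secret_id auth/approle/role/billing-svc/secret-id

# Service (step 7): exchange role_id + secret_id for its own Vault token
vault write auth/approle/login `
    role_id="<role_id from config file>" `
    secret_id=$env:BILLING_SVC_SECRET_ID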
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault approle authentication method is already installed at approle/ and that you are logged in to Vault, have root or admin privileges on the Vault server and have a valid, non-expired token.
Note: For the values supplied for the fields below, the maximum value that Vault seems to accept is 999,999,999. For the TTL fields, that is a number of seconds, which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method
# NOTE: In this command, the field names, default-lease-ttl
# and max-lease-ttl contain dashes ('-'), NOT
# underscores ('_'), and are preceded by a single
# dash ('-').
vault auth tune \
    -default-lease-ttl=999999999 \
    -max-lease-ttl=999999999 approle/
# Override defaults on the approle
# NOTE: In this command, the field names, secret_id_ttl and
#       secret_id_num_uses, contain underscores ('_'), NOT
#       dashes ('-'), and are NOT preceded by a single
#       dash ('-').
vault write auth/approle/role/my-approle \
    secret_id_ttl=999999999 \
    secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/my-approle/secret-id
Update the server config file to use the new secret_id and you are ready to go.
As the OP has noted, the HashiCorp Vault documentation assumes that the application is somehow able to authenticate to Vault, retrieve the secret ID (possibly wrapped), and then use that to authenticate and fetch a token that is used to actually work with secrets. The answers here are posing alternative approaches to retrieving that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well thought out approach:
1. Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else.
2. Create a long-lived, periodic/renewable token based on that policy (sketched below).
3. Store the long-lived token securely, e.g. as a Kubernetes secret.
4. At runtime, use the long-lived token to:
   - acquire the secret-id and role-id,
   - authenticate to Vault using these and acquire a short-lived token,
   - use the current short-lived token to work with secrets.
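A hedged sketch of steps 1-2 with the Vault CLI, assuming the role from the earlier answer lives at auth/approle/role/my-approle (the policy name is made up):
# Step 1: a policy that can only fetch the role-id and create secret-ids
@'
path "auth/approle/role/my-approle/role-id" {
  capabilities = ["read"]
}
path "auth/approle/role/my-approle/secret-id" {
  capabilities = ["create", "update"]
}
'@ | Set-Content approle-login.hcl
vault policy write approle-login approle-login.hcl

# Step 2: a long-lived, periodic (renewable) token bound to that policy;
# it must be renewed at least once per period to stay alive
vault token create -policy="approle-login" -period=72h -orphan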
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and the AppRole authentication name, e.g. chef-ro in the blog's case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but I found the question unanswered, so I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low, 30 minutes max - if your application is stateful, or maybe even less if it's a stateless application. The secret key of Vault approle should also be rotated every 90 days. Please note by default, Vault approle backend has 31 days of TTL, so if you want to set it to 90 days, you need to increase TTL of the approle backend as well.
However (in the same question):
You can generate secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows; I don't know about Linux.
GitHub Personal Access Tokens, but this is not organization-friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).

How to bootstrap certificates for the LCM to reference for signature validation settings?

WMF 5.1 includes new functionality to allow signing of MOF documents and DSC Resource modules (reference). However, this seems very difficult to implement in reality -- or I'm making it more complicated than it is...
My scenario is VMs in Azure, and I'd like to leverage Azure Automation as the pull DSC server; however, I see this applying on-premises too. The problem is that the certificate used to sign the MOF configurations and/or modules needs to be placed on the machine before fetching and applying the configuration, otherwise the configuration will fail because the certificate isn't trusted or present on the machine.
I tried using Azure KeyVault to bootstrap the certificate (just the public key because that's my understanding of how signing works) and that fails using Add-AzureRmVMSecret because the CertificateUrl parameter expects a full certificate with the public/private key pair to install. In an ideal world, this would be the solution but that's not the case...
Another idea, again in this context, would be to upload the cert to blob storage and use a CustomScriptExtension to pull down the cert and install it into the LocalMachine store, but that feels nasty as well because, ideally, that script should be signed too, and that puts us back in the same spot.
I suppose another idea would be to first PUSH a configuration that downloaded and installed certificates only but that doesn't sound great either.
Last option would be to rely on an AD GPO or something similar to potentially push the certificate first...but, honestly, trying to move away from much of that if/when possible...
Am I off-base on this? It seems like this should be a solvable problem -- just looking for at least one "good" way of doing it.
Thanks
David Jones has quite a bit of experience dealing with this issue in an on-premises environment, but as you stated, the same concepts should apply for Azure. Here is a link to his blog. This is a link to his GitHub site with a PKITools module that he created. If all else fails you can reach out to him on Twitter.
While it's quite easy to populate a pre-booted image with public certificates, it's not possible (that I have found) to populate the private key.
DSC would require the private key to decrypt the passwords.
The most common tactic people blog about is to use the unattend file to script the import of a PFX. The issue there is that you have to leave the password for the PFX in plain text. Perhaps that is OK in your environment.
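That is, something along these lines (paths and password are placeholders; the plain-text password is exactly the weakness mentioned above):
# Unattend / first-boot style import of a PFX - note the password sitting in the script
$pfxPassword = ConvertTo-SecureString "PlainTextPasswordHere" -AsPlainText -Force
Import-PfxCertificate -FilePath "C:\bootstrap\dsc-signing.pfx" `
    -CertStoreLocation Cert:\LocalMachine\My `
    -Password $pfxPassword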
The other option requires a more complicated setup. Use a simple DSC config or a GPO to auto-enroll a unique certificate. Then have the system, via a first-boot script or a DSC custom resource, tickle an API (like Polaris) that triggers a DSC script which uses PKITools or another script to get the public certificate that the machine has. Then have that API push a new DSC config (or pull settings) to the machine.

Powershell certificate authentication on standalone server?

I've tried searching for a solution to this for half a day and decided I need to ask the question myself.
My goal is to remotely control standalone Windows servers via PowerShell on our internal network. Our environment is based on MicroFocus eDirectory instead of MS Active Directory so our servers are not connected via GPOs.
Since PowerShell has existed for such a long time, and you can manage command-line-only installs with it, I assumed there would be a solution in the form of certificate authentication from client to server, but I've yet to find anything resembling this.
I'm aware of workarounds involving private keys used to store and decrypt credentials in scripts, but we do not want to risk losing such information, and we wish to be able to issue certificates to both new clients and servers without having to include any form of credentials in scripts.
Is there no way to simply use certificates to authenticate in place of credentials?
You should take a look at this chapter in the PS remoting book; it describes exactly what you need: https://devopscollective.gitbooks.io/secrets-of-powershell-remoting/content/manuscript/accessing-remote-computers.html
The certificate auth part starts somewhere in the middle - look for "Certificate Authentication".
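The rough shape of it, per that chapter (thumbprints, names and the account are placeholders; the client certificate needs the Client Authentication EKU and gets mapped to a local account on the server):
# On the server: allow certificate auth over HTTPS and map a client cert to a local account
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
New-Item -Path WSMan:\localhost\ClientCertificate `
    -Subject "svc-remote@example.local" `
    -URI * `
    -Issuer "ISSUING_CA_THUMBPRINT" `
    -Credential (Get-Credential)    # the local account the certificate maps to

# On the client: authenticate with the certificate instead of a username/password
Invoke-Command -ComputerName server01 -UseSSL `
    -CertificateThumbprint "CLIENT_CERT_THUMBPRINT" `
    -ScriptBlock { hostname }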