We have installed and configured HashiCorp Vault AppRole authentication for one server by storing the role_id and secret_id in a local file, and code on the server can read those values from the file, authenticate to Vault, receive a token, and then read the secrets it needs. So far so good. However, the secret_id expires after 31 days, at which point the process fails.
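For context, the flow on the server is essentially the following (the Vault address, file paths, and secret path are placeholders):
export VAULT_ADDR="https://vault.example.com:8200"
# Read the AppRole credentials stored on the server.
ROLE_ID=$(cat /etc/myapp/role_id)
SECRET_ID=$(cat /etc/myapp/secret_id)
# Exchange them for a token, then read the secrets we need.
VAULT_TOKEN=$(vault write -field=token auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRET_ID")
export VAULT_TOKEN
vault kv get secret/myapp/config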
I've read up on the concepts behind AppRoles, and they seem like the perfect fit for our use case, except for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl, the secret_id should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the HashiCorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_ids in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and orchestrator systems themselves authenticate to Vault. Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment, you will have to write some logic in your installer and have a service manager of some sort start your services. In many cloud environments you may already have equivalent entities (Terraform, CloudFormation, etc.), and you should leverage their secrets-management capabilities where needed.
For custom installations, here is a workflow that I have used.
Have an installation manager process that can be invoked to perform installation / upgrade. Make sure installation / upgrade of services is always through this process.
Have a service manager process that is responsible for starting individual services and monitoring them / restarting them. Make sure service start-ups are always via this service manager.
During installation, generate self-signed certificates for Vault, the installation manager, and the service manager. Vault should be configured to trust the certs of the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs.
These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secret_ids for the named roles created by the installation manager, and delete nothing.
During installation / upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role_ids for individual services in per-service config files that the services read on start-up.
During each service's start-up, the service manager should connect to Vault and create a secret_id corresponding to that service's role. It should set the secret_id in an environment variable and start the service. The secret_id should have time-bound validity (by setting TTLs) so that it cannot be used for much beyond the creation of the auth token (see the final step below).
Each service should read the role_id from the config file and the secret_id from the environment variable. It should then generate an auth token using these two, and use the token to authenticate itself with Vault for its lifetime.
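A minimal sketch of these steps with the Vault CLI, assuming the cert and AppRole auth methods are already enabled; every name, path, and policy below is illustrative:
# Installation time: register the managers' client certificates.
vault write auth/cert/certs/install-manager \
    certificate=@/opt/install/certs/install-manager.pem \
    policies=install-manager
vault write auth/cert/certs/service-manager \
    certificate=@/opt/svc/certs/service-manager.pem \
    policies=service-manager
# Installation manager: create a role per service, with short-lived,
# single-use secret_ids.
vault write auth/approle/role/my-service \
    token_policies=my-service secret_id_ttl=5m secret_id_num_uses=1
# Service manager, at service start-up: mint a secret_id and pass it to
# the service via the environment.
SECRET_ID=$(vault write -f -field=secret_id auth/approle/role/my-service/secret-id)
# The service itself: exchange the role_id (from its config file) and the
# secret_id (from the environment) for its own token.
VAULT_TOKEN=$(vault write -field=token auth/approle/login \
    role_id="$ROLE_ID" secret_id="$SECRET_ID")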
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault AppRole authentication method is already enabled at approle/, that you are logged in to Vault, and that you have root or admin privileges on the Vault server and a valid, non-expired token.
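If the method is not yet enabled, that part is a one-liner:
# Enable the AppRole auth method at the default path, approle/
vault auth enable approle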
Note: For the values supplied for the fields below, the maximum value that Vault seems to accept is 999,999,999. For the TTL fields, that is a number of seconds, which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method.
# NOTE: In this command, the field names default-lease-ttl
#       and max-lease-ttl contain dashes ('-'), NOT
#       underscores ('_'), and are preceded by a single
#       dash ('-').
vault auth tune \
    -default-lease-ttl=999999999 \
    -max-lease-ttl=999999999 approle/
# Override defaults on the approle itself.
# NOTE: In this command, the field names secret_id_ttl and
#       secret_id_num_uses contain underscores ('_'), NOT
#       dashes ('-'), and are NOT preceded by a dash.
vault write auth/approle/role/my-approle \
    secret_id_ttl=999999999 \
    secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/my-approle/secret-id
Update the server config file to use the new secret_id and you are ready to go.
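To sanity-check the new secret_id before updating the server, you can log in with it manually (the role name matches the commands above; the IDs are whatever the earlier commands returned):
# Fetch the role_id for the approle.
vault read -field=role_id auth/approle/role/my-approle/role-id
# Log in using the role_id and the freshly generated secret_id.
vault write auth/approle/login \
    role_id="<role_id from above>" \
    secret_id="<secret_id from the secret-id command>"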
As the OP has noted, the HashiCorp Vault documentation assumes that the application is able to authenticate, somehow, to Vault, retrieve the (possibly wrapped) secret ID, and then use that to authenticate and fetch the token actually used to work with secrets. The answers here propose alternative approaches to obtaining that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well-thought-out approach:
Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else.
Create a long-lived, periodic/renewable token based on that policy (a CLI sketch of these first two steps follows the list).
Store the long-lived token securely, e.g. as a Kubernetes secret.
At runtime, use the long-lived token to:
acquire the secret-id and role-id,
authenticate to Vault using these and acquire a short-lived token,
use the short-lived token to work with secrets.
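A hedged sketch of the Vault-side setup for those first two steps, with my-approle standing in for the blog's chef-ro role:
# Policy that can only read the role-id and mint secret-ids for one approle.
vault policy write my-approle-login - <<'EOF'
path "auth/approle/role/my-approle/role-id" {
  capabilities = ["read"]
}
path "auth/approle/role/my-approle/secret-id" {
  capabilities = ["create", "update"]
}
EOF
# Long-lived periodic token tied to that policy; as long as it is renewed
# within each 72h period, it never expires.
vault token create -policy=my-approle-login -period=72h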
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and the approle authentication name, e.g. chef-ro in the blog's case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but I found this question unanswered, so I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low, 30 minutes max - if your application is stateful, or maybe even less if it's a stateless application. The secret key of Vault approle should also be rotated every 90 days. Please note by default, Vault approle backend has 31 days of TTL, so if you want to set it to 90 days, you need to increase TTL of the approle backend as well.
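Following that advice, this is roughly what the two changes would look like (the role name is mine; 90 days = 2160h):
# Raise the approle backend's TTL cap, then the role's secret_id TTL.
vault auth tune -max-lease-ttl=2160h approle/
vault write auth/approle/role/my-approle secret_id_ttl=2160h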
However (in the same question):
You can generate secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances, you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows; I don't know about Linux.
GitHub Personal Access Tokens, but these are not org-friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).
I am using a JBoss-based vault to secure sensitive data such as database credentials.
I use a Java-based HTTP REST client to create distributed Kafka connectors, but ran into a security concern: a request for the connector's "config" exposes the sensitive credentials in the response.
I referred to the official documentation but could not get much help in the context of the JBoss vault.
Any pointers or references that directly address this specific problem are very much appreciated.
Any references to alternative open-source (and free-to-use) vault-based solutions would also be of great help.
You'd have to write code that implements the ConfigProvider interface of the Connect API, mentioned there.
You can browse the Kafka source code on GitHub to see the existing file-based one, but that KIP (which references HashiCorp Vault) and the source files are the only such documentation for now.
Connect doesn't use JBoss, either, so you'd have to find a way around that.
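For what it's worth, here is a sketch of how the built-in file-based provider keeps the credential out of the config endpoint, assuming Connect 2.0+ and a JDBC connector (names, paths, and the abridged config are illustrative):
# In the worker's connect-distributed.properties, enable the provider:
#   config.providers=file
#   config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# The connector config then references the secret indirectly; the REST API
# returns the placeholder below, not the resolved password.
curl -X PUT http://localhost:8083/connectors/my-connector/config \
  -H 'Content-Type: application/json' \
  -d '{
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.password": "${file:/opt/connect/secrets.properties:db.password}"
      }'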
I have configured my service fabric services to use Azure Key Vault for configuration. If, after the app is deployed, I change the config in Key Vault, how do I then restart the affected service so it can pick up the new config value?
Or is there another way altogether?
The best way to handle configuration on SF is to use your application parameters file for this. If you use a continuous deployment pipeline like VSTS, you can use release variables to set these values for you, deploy a new version of your configuration file, and let SF do the rest.
But in case you still need to use Key Vault:
If you are using ASP.NET Core, using Azure Key Vault to store secrets is like loading configuration files: the values are cached until you reload them.
You can use IConfigurationRoot.Reload() to reload the secrets with the new values from your Key Vault. Check it here.
The trick is to make this happen automatically. You have to:
Enable Key Vault logging to track the changes; this will emit logs whenever you update the Key Vault. Check it here and here.
And then:
Create an endpoint in your API that refreshes the secrets when called. Make it secure to avoid abuse.
Create an Azure Function to process these logs and trigger the endpoint.
Or:
Create a message queue to receive the command, and have the system read the message and refresh the settings.
Or:
Set up a timer to refresh at specific intervals (I would not recommend this approach because you might end up with outdated config, but it is easy and useful for quick test scenarios, not production).
Or, if you prefer a more custom-designed solution, you could create your own ConfigurationProvider based on Key Vault and implement the cache logic according to your app's architecture, so you don't have to bother with the rest. Please refer to the ASP.NET source here for this.
The documented way to provide configuration to your services is by using the 'configuration' part of your application package.
As this is versioned, it can be upgraded without requiring your services to be upgraded or even restarted.
More info here and here.
Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline, I wondered whether it is still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is by installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
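A hedged CLI example of storing a secret there (the parameter name and value are placeholders):
# SecureString values are encrypted at rest with a KMS key.
aws ssm put-parameter \
    --name /myapp/db-password \
    --type SecureString \
    --value 'example-password'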
In the end, if you end up using Parameter Store or KMS, you will need some sort of credentials stored somewhere in order to grab an AWS STS token to call the underlying AWS services. If working outside of AWS EC2, you will need the AWS access key and secret key of an IAM user. If you are in EC2, the IAM instance role will magically provide the credentials and use that role to call those AWS services; the AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is to keep them in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing and easily run it in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for what variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
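A hedged sketch of that setup (the file name and parameter are hypothetical):
# aws.env -- untracked (listed in .gitignore); source it before local runs.
export AWS_ACCESS_KEY_ID="AKIA..."     # placeholder
export AWS_SECRET_ACCESS_KEY="..."     # placeholder
export AWS_DEFAULT_REGION="us-east-1"
# With these set (or an instance role in EC2), the same call works anywhere:
aws ssm get-parameter \
    --name /myapp/db-password \
    --with-decryption \
    --query Parameter.Value \
    --output text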
We have an application that is deployed into CloudFoundry/Bluemix. The application reads its database connections from the VCAP_SERVICES environment variable. The db password stored in the environment variable is encrypted and we decrypt it when the application boots up.
We are looking at Spring Cloud Service Connectors. Do the cloud connectors provide any hook, so that we can decrypt the password from VCAP_SERVICES before the DataSource instance is created?
Why do you want to do this? Where does the app get its decryption key from? If it's hard-coded in the app, that's an antipattern that will make it hard to rotate the key. If it's through an environment variable, then it's no more secure than storing the database credentials unencrypted as services in Cloud Foundry - services in CF are nothing more than domain-specific groups of environment variables. I can't see that encrypting them adds any security.
To answer the question: Not out-of-the-box, but you could probably intercept the flow of Spring components that act on the environment variables that Cloud Foundry provides to your app.
The abstract class that creates ServiceInfo instances is CloudFoundryServiceInfoCreator, so you could look at providing a custom implementation of this. There is a blog post describing how Spring Cloud Connectors works. You might be able to extend CloudFoundryConnector too.