Where should I store the Unseal Key and Root Token for HashiCorp Vault?
The Vault will be used by various members on the team.
In best-practice usage, you wouldn't be storing a root token - once done with it, it should be revoked. Root tokens are useful in development but should be extremely carefully guarded in production. In fact, the Vault team recommends that root tokens are only used for just enough initial setup (usually, setting up auth methods and policies necessary to allow administrators to acquire more limited tokens) or in emergencies, and are revoked immediately after they are no longer needed. If a new root token is needed, the operator generate-root command and associated API endpoint can be used to generate one on-the-fly.
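For reference, the generate-root flow requires a quorum of unseal key holders, so no single admin can mint a root token alone. A minimal sketch with the current CLI (the encoded token and OTP values are placeholders printed by the earlier steps):
# Start a root token generation attempt; this prints a one-time password (OTP)
vault operator generate-root -init
# Each unseal key holder submits their share (repeat until quorum is reached)
vault operator generate-root
# Decode the resulting encoded token using the OTP from the -init step
vault operator generate-root -decode=<encoded_token> -otp=<otp>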
Unseal keys should be distributed amongst trusted people, with nobody having access to more than one of them. This then requires more than one person to restart Vault or to gain root access to it.
I couldn't find any suggested storage locations for the individual unseal keys in the documentation; I'd suggest wherever you normally store passwords, i.e. a password manager.
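The share count and threshold that enforce this split are fixed when the Vault is first initialized; a sketch, with illustrative values:
# Split the master key into 5 shares, any 3 of which are needed to unseal
vault operator init -key-shares=5 -key-threshold=3
# Each key holder runs this separately and enters their own share when prompted
vault operator unseal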
For day-to-day usage, users can log in using the user/pass or ldap auth backends.
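As a sketch of that day-to-day flow (the username is illustrative):
# Interactive login with the userpass auth method; prompts for the password
vault login -method=userpass username=alice
# Or against an LDAP backend
vault login -method=ldap username=alice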
Many of my (confidential) apps are talking to each other via the client credentials flow.
They request a token from the Azure Identity platform and use this token to authenticate against another app.
A while ago I used client secrets to do so, but later I read that this is not recommended for production environments.
For this reason I changed to self-signed certificates that are valid for a longer time. Those certificates are generated by myself with Azure Key Vault. However, this too is not recommended: Microsoft states that in production environments you should use certificates that are signed by an official CA. If I now use Let's Encrypt, the certificates will expire every three months, which is also not a nice solution.
My questions:
Why is the client secret not recommended in production environments?
Why is the self-signed certificate a problem? I understand this in the context of HTTPS, but where is the security breach if it's used for the client credentials flow? In my case I am the owner of both the app and the app registration.
Do I need to buy a certificate that is valid for one year to do it "the right way"?
Do you have any source of best practices here?
• Client secrets include application credentials, SSH keys, API keys, database passwords, encryption keys, connection strings and so on, used to connect to various resources and access the data or functionality that serves the designated purpose of the application. If these are breached, they can put your application at great risk of compromise. Also, a client secret generated in Azure AD and used in an API to connect to Azure AD for authentication and authorization is listed in unencrypted form in the API code itself. You do have the option to store that secret in a key vault and refer to it through a managed identity or RBAC assignments, but those credentials too can fall into the wrong hands and leave the application vulnerable if the access scope of the secret is not well defined for the specific need. Thus, a client secret is not recommended for use in a production API. A request sketch follows this list.
• In the client credentials flow, applications are directly granted permissions by an administrator, via certificate or federated credentials, to perform a certain action against the API called through them. So when a self-signed certificate is used in a client credentials grant scenario, the administrator has granted the daemon app requesting access all the required privileges regarding accessibility of code, API, permissions, data, etc., which can result in poor validation and misuse, as it is very easy to generate a certificate key pair without reasonable entropy. Also, a self-signed certificate offers no assurance that the private key of the key pair is appropriately protected and strongly validated, which is why it is not recommended in the client credentials flow.
• For best practices regarding web app service deployment, please refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/app-service/security-recommendations#general
It explains the best security recommendations for deploying a web app service.
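To make the first bullet concrete, this is roughly what a client credentials token request with a shared secret looks like (tenant ID, client ID, secret and scope are placeholders). The secret travels as an ordinary form parameter on every token request, so anything that can read the app's configuration or the request body can replay it:
# Client credentials grant against the Azure AD token endpoint
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://graph.microsoft.com/.default"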
I am quite new to Kubernetes, and I am looking at certificate-based authentication and token-based authentication for calling Kubernetes APIs. To my understanding, the token-based approach (OpenID + OAuth2) seems better, since the id_token gets refreshed by the refresh_token at a certain interval, and it also works well with a browser login page, which is not the case with the certificate-based approach. Any more thoughts on this? I am working with minikube and Kubernetes. Can anyone share their thoughts here?
Prefer OpenID Connect or X509 Client Certificate-based authentication strategies over the others when authenticating users
X509 client certs: decent authentication strategy, but you'd have to address renewing and redistributing client certs on a regular basis
Static Tokens: avoid them due to their non-ephemeral nature
Bootstrap Tokens: same as static tokens above
Basic Authentication: avoid it due to credentials being transmitted over the network in cleartext
Service Account Tokens: should not be used for end-users trying to interact with Kubernetes clusters, but they are the preferred authentication strategy for applications & workloads running on Kubernetes
OpenID Connect (OIDC) Tokens: best authentication strategy for end users, as OIDC integrates with your identity provider (e.g. AD, AWS IAM, GCP IAM, etc.); a configuration sketch follows this list
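As a sketch of what that integration looks like, these are the kube-apiserver flags that wire in an external OIDC issuer (the issuer URL and client ID below are placeholders):
# OIDC flags on the kube-apiserver; usernames and groups come from token claims
kube-apiserver \
  --oidc-issuer-url=https://dex.example.com/dex \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups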
I advise you to use OpenID Connect. OpenID Connect is based on OAuth 2.0; however, it is designed with more of an authentication focus in mind. The explicit purpose of OIDC is to generate what is known as an id_token. The normal process of generating these tokens is much the same as it is in OAuth 2.0.
OIDC brings us a step closer to providing a user-friendly login experience, and it also allows us to start restricting access using RBAC.
Also take a look at Dex, which acts as a middleman in the authentication chain. It becomes the Identity Provider and issuer of ID tokens for Kubernetes, but does not itself have any sense of identity. Instead, it allows you to configure an upstream Identity Provider to provide the users' identity.
As well as any OIDC provider, Dex supports sourcing user information from GitHub, GitLab, SAML, LDAP and Microsoft. Its provider plugins greatly increase the potential for integrating with your existing user management system.
Another advantage that Dex brings is the ability to control the issuance of ID tokens, for example by specifying their lifetime. It also makes it possible to force your organization to re-authenticate. With Dex, you can easily revoke all tokens, but there is no way to revoke a single token.
Dex also handles refresh tokens for users. When a user logs in to Dex they may be granted an id-token and a refresh token. Programs such as kubectl can use these refresh tokens to re-authenticate the user when the id-token expires. Since these tokens are issued by Dex, this allows you to stop a particular user refreshing by revoking their refresh token. This is really useful in the case of a lost laptop or phone.
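As an illustration of how kubectl consumes those tokens, older kubectl versions accepted an oidc auth-provider directly in the kubeconfig (newer clients use an exec credential plugin instead; all values below are placeholders):
# Legacy oidc auth-provider configuration for a kubeconfig user
kubectl config set-credentials alice \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dex.example.com/dex \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=id-token=<id_token> \
  --auth-provider-arg=refresh-token=<refresh_token>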
Furthermore, by having a central authentication system such as Dex, you need only configure the upstream provider once.
An advantage of this setup is that if any user wants to add a new service to the SSO system, they only need to open a PR against the Dex configuration. This setup also provides users with a one-button "revoke access" in the upstream identity provider to revoke their access from all of our internal services. Again, this comes in very useful in the event of a security breach or a lost laptop.
You can find more information here: kubernetes-single-sign-one-less-identity/, kubernetes-security-best-practices.
We have installed and configured Hashicorp Vault AppRole authentication for one server, by storing the role_id and secret_id in a local file on the server, and we're able to have code on the server read the values from that file, authenticate to Vault, receive a token and then read the secrets it needs from Vault. So far so good. However, the secret_id expires after 31 days, and so the process fails.
I've read up on the concepts of using AppRoles, and they seem like the perfect fit for our use case, but for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl it should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the Hashicorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_ids in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and orchestrator systems themselves are authenticated to Vault. Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment you will have to write some logic in your installer, and have a service manager of some sort to start your services. In many cloud environments, you may already have the equivalent entities (Terraform, Cloud Formation, etc.) and you should leverage their secrets management capabilities where needed.
For custom installations, here is a workflow that I have used.
1. Have an installation manager process that can be invoked to perform installation / upgrade. Make sure installation / upgrade of services is always through this process.
2. Have a service manager process that is responsible for starting individual services and monitoring / restarting them. Make sure service start-ups are always via this service manager.
3. During installation, generate self-signed certificates for Vault, the installation manager and the service manager. The Vault certificate configuration should trust the certs for the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs.
4. These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secrets for the named roles created by the installation manager, and delete nothing.
5. During installation / upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role ids for individual services in per-service config files that the services read on start-up.
6. During each service's start-up, the service manager should connect to Vault and create secret ids corresponding to each service's role. It should set the secret id in an environment variable and start the service. The secret ids should have time-bound validity (by setting TTLs) so that they cannot be used for much beyond the creation of the auth token (see #7).
7. Each service should read the role id from the config file and the secret id from the environment variable. It should then generate the auth token using these two, and use that token to authenticate itself with Vault for its lifetime (a minimal sketch follows this list).
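A minimal sketch of step 7, assuming an illustrative config file path and that the service manager exported SECRET_ID into the environment:
# Role id written to a per-service config file by the installation manager
ROLE_ID=$(cat /etc/myservice/role_id)
# Log in with the role id + secret id pair; the returned token is what the
# service uses for the rest of its lifetime
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"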
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault approle authentication method is already installed at approle/ and that you are logged in to Vault, have root or admin privileges on the Vault server and have a valid, non-expired token.
Note: For the values supplied for the fields below, the maximum value that Vault seems to accept is 999,999,999. For the TTL fields, that is a number of seconds, which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method
# NOTE: In this command, the field names, default-lease-ttl
# and max-lease-ttl contain dashes ('-'), NOT
# underscores ('_'), and are preceded by a single
# dash ('-').
vault auth tune \
-default-lease-ttl=999999999 \
-max-lease-ttl=999999999 approle/
# Override defaults on the approle
# NOTE: In this command, the field names, secret_id_ttl and
# secret_id_num contain underscores ('_'), NOT
# dashes ('-'), and are NOT preceded by a single
# dash ('-').
vault write auth/approle/role/$VAULT_ROLE \
    secret_id_ttl=999999999 \
    secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/$VAULT_ROLE/secret-id
Update the server config file to use the new secret_id and you are ready to go.
As the OP has noted, the Hashicorp Vault documentation assumes that the application is able to authenticate, somehow, to Vault, retrieve the secret id (possibly wrapped), and then use that to authenticate and fetch the token used to actually work with secrets. The answers here pose alternative approaches to obtaining that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well thought out approach:
Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else.
Create a long lived, periodic/renewable token based on that policy.
Store the long-lived token securely, e.g. as a Kubernetes secret.
At runtime, use the long-lived token to:
acquire the secret-id and role-id,
authenticate to Vault using these and acquire a short-lived token,
use the current short-lived token to work with secrets (a minimal sketch of the setup follows below).
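A minimal sketch of the first two steps, reusing the my-approle role name from earlier in this thread (the policy name is made up):
# Policy that can only read the role-id and mint secret-ids for one role
vault policy write approle-login - <<EOF
path "auth/approle/role/my-approle/role-id" {
  capabilities = ["read"]
}
path "auth/approle/role/my-approle/secret-id" {
  capabilities = ["create", "update"]
}
EOF
# Periodic token bound to that policy; it can be renewed indefinitely as long
# as it is renewed within each 24-hour period
vault token create -policy=approle-login -period=24h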
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and the approle authentication name, e.g. chef-ro in the blog case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for Production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but I found it empty, so I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low, 30 minutes max if your application is stateful, or maybe even less if it's a stateless application. The secret key of the Vault approle should also be rotated every 90 days. Please note that by default, the Vault approle backend has a TTL of 31 days, so if you want to set it to 90 days, you need to increase the TTL of the approle backend as well.
However (in the same question):
You can generate a secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows, don't know about Linux (a cert-auth sketch follows this list).
GitHub Personal Access Tokens, but this is not org. friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).
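For the TLS certificate idea, Vault's cert auth method lets a long-lived server authenticate with a client certificate instead of a secret_id; the names and file paths below are illustrative:
# Enable the TLS certificate auth method and register a trusted client cert
vault auth enable cert
vault write auth/cert/certs/web-server \
    display_name=web-server \
    policies=web-policy \
    certificate=@client-cert.pem
# The server logs in by presenting its certificate and key
vault login -method=cert -client-cert=client-cert.pem -client-key=client-key.pem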
I'm looking into Vault for securing DB credentials used by various web applications. I've looked over a few YouTube videos and slide shares, and even downloaded Vault to experiment with, but I can't quite wrap my head around it. How does Vault protect credentials for something like a web application, which uses a token to authenticate to Vault? I'm assuming the Apache process would have to own the Vault token (a user token, not the root token) so it can access secrets for the applications it's running. This would, it seems, expose any secrets the Apache process has access to in the event of an application compromise. I don't see a big win here, so I must be missing a lot.
In a nutshell, Vault supports authentication backends which then allow you to generate tokens. Tokens should be seen as temporary access and are not the same as a key.
In particular, Vault supports authentication with many different systems to generate dynamic secrets and credentials as needed. This is well documented here.
In terms of security, the idea is to have an authentication backend as the primary mechanism, with the token being generated as a consequence. You are correct in saying that hard-coding tokens is a security risk. Once generated on the fly, they should have strict permissions and short TTLs. Vault makes this easy, as you can define the scope of the token with an ACL.
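As an illustration of the dynamic-secrets point, with the database secrets engine each read mints fresh, short-lived credentials (the role name below is made up):
# Returns a new DB username/password pair with its own lease and TTL
vault read database/creds/webapp-role
# When the lease expires or is revoked, Vault deletes the credentials in the DB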
kube-apiserver does not seem to provide an option to use a certificate revocation list (CRL).
Is there a way to revoke a client certificate if it's lost or not used anymore?
As far as I know there isn't a way to directly revoke certificates via a CRL. However, what does work, and what we are currently using, are ABAC policies that identify users (set via the Common Name of a certificate) and define whether they have access to a given resource on Kubernetes.
As an example, say you have a user called "random". You would generate a Client Certificate for them from your given Certificate Authority, with a Common Name of "random".
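For example, generating such a client certificate with openssl against your cluster CA might look like this (file names are placeholders):
# Key and CSR for user "random"; the CN becomes the Kubernetes username
openssl genrsa -out random.key 2048
openssl req -new -key random.key -out random.csr -subj "/CN=random"
# Sign the CSR with the cluster's Certificate Authority
openssl x509 -req -in random.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out random.crt -days 365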
From there, you can have an ABAC policy file (a file with one JSON policy object per line) with permissions set for user "random" that provide them with a certain level of access to the Kubernetes API. You can give them access to everything, to certain namespaces, or to other API parameters. If you need to revoke permissions, you simply delete that user from the ABAC policy file. We've tested this, and it works well. The unfortunate thing, I will say, is that you have to restart the Kubernetes API service for those changes to take effect, so there may be a few seconds of downtime. Obviously in a development environment this isn't a big deal, but in production you may need to schedule time for users to be added.
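A policy line for that user, plus the flags that point the API server at the file, might look like this (paths are illustrative):
# One JSON object per line in /etc/kubernetes/abac-policy.jsonl, e.g.:
# {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "random", "namespace": "*", "resource": "*", "apiGroup": "*"}}
kube-apiserver \
  --authorization-mode=ABAC \
  --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl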
Hopefully in the future a simple "kube-apiserver reload" will allow for a re-read of that ABAC policy file.
One final thing to note: when using Client Certificates for ABAC authentication, you will need to set permissions for users INDIVIDUALLY. Unlike with auth tokens under ABAC, you cannot put Client Certificate users in "groups". This caused us headaches, so I figured it was worth passing on. :)
Hope this helps!