How can I use HashiCorp Vault Agent with multiple approles - hashicorp-vault

I'm using HashiCorp Vault to store secrets. I have some servers that need to pull secrets from two different backends. I have a role set up for each backend, with corresponding AppRole credentials. I generally use Vault Agent to maintain tokens, but I can't find a way to set up auto-auth with two sets of AppRole credentials: the Vault Agent config doesn't allow two auto-auth sections. I've tried running two instances of Vault Agent, but I get port-in-use errors.
Is there a way to use Vault Agent with two sets of AppRole credentials on the same server? Or is it better practice to create a single role with access to both backends?
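For context, here is roughly what I do today without the agent - a minimal sketch using the Python hvac client, where the role IDs, secret IDs, and mount/secret paths are placeholders for my actual setup:

```python
# Rough sketch of my current workaround: one AppRole login per backend.
# Role IDs, secret IDs, and mount/secret paths below are placeholders.
import hvac

VAULT_ADDR = "https://vault.example.com:8200"

def read_with_approle(role_id, secret_id, mount_point, path):
    client = hvac.Client(url=VAULT_ADDR)
    # Each login produces a token scoped to that role's policy.
    client.auth.approle.login(role_id=role_id, secret_id=secret_id)
    return client.secrets.kv.v2.read_secret_version(path=path, mount_point=mount_point)

# One login (and one token) per backend - this is what I'd like Vault Agent's
# auto-auth to manage for me instead.
app_secret = read_with_approle("role-id-a", "secret-id-a", "kv-app", "config")
db_secret = read_with_approle("role-id-b", "secret-id-b", "kv-db", "config")
```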
Thanks!

Related

app in its own namespace with a service account available in any namespace

I have a very specific scenario I'm trying to solve for:
Using Kubernetes (single cluster)
Installing Vault on that cluster
Sending GitLab containers to the same cluster.
I need to install Vault in such a way that:
Vault lives in its own namespace (easy/solved)
Vault's service account (vault-auth) is available to all other namespaces (unsolved)
GitLab's default behavior is to put all apps/services into their own namespaces with the project ID, e.g. repo_name+project_id. It's predictable, but the two options are:
When the app is in its own namespace it cannot access the Vault service account in the 'vault' namespace. It requires you to create a Vault service account in each application namespace; hot garbage, or...
Put ALL apps + Vault in the default namespace and applications can easily find the 'vault-auth' service account. Messy but totally works.
To use GitLab the way it is intended (and I don't disagree), each app stays in its own namespace. The question then becomes:
How would one create the Kubernetes service account for Vault (vault-auth) so that the Vault application lives in its own namespace but the service account itself is available to ALL namespaces?
Then, no matter the namespace that GitLab creates, the containers have equal access to the 'vault-auth' service account.
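For reference, the effect I'm after on the Vault side is a single Kubernetes auth role that accepts service accounts from whatever namespace GitLab happens to create. A rough sketch with the Python hvac client follows; the role name, policy, and wildcard namespace binding are just illustrative, not something I have working:

```python
# Sketch of the Vault-side Kubernetes auth configuration I'm hoping for.
# Assumes the Kubernetes auth method is already enabled and that vault-auth
# (in the vault namespace) is the token-reviewer service account.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="<admin token>")

# One role that accepts the default service account from any namespace, so a
# new GitLab-created namespace works without copying vault-auth into it.
client.auth.kubernetes.create_role(
    name="gitlab-apps",
    bound_service_account_names=["default"],
    bound_service_account_namespaces=["*"],  # any namespace GitLab creates
    policies=["gitlab-app-read"],
    ttl="1h",
)
```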

Does Vault support two CA intermediates to sign two different environment certs?

There are multiple non-production environments, such as DEV, TEST, QA, SIT, and PKG. If we want to manage a separate CA for each environment to sign certs, should we set up a separate Vault cluster for each environment?
Is there any way that we can manage all these CAs in the same Vault cluster?
You can create as many PKI secrets engines as you like within a single Vault cluster.
Rather than separate clusters (which would be a lot of overhead), create and mount PKI engines at separate mount points, e.g. pki/dev, pki/test, etc. Each engine will hold the CA for the corresponding environment.
This applies to all secrets engines: Vault is happy to have more than one mounted, and all operations are on a specific path. You can of course apply separate policies to those engines.
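As a rough illustration (Python hvac client; the environment list, mount paths, and CA common names are just examples), mounting one PKI engine per environment looks something like this:

```python
# Sketch: one PKI secrets engine per environment inside a single Vault cluster.
# Mount paths and common names are examples only.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="<admin token>")

for env in ["dev", "test", "qa", "sit", "pkg"]:
    path = f"pki/{env}"
    # Each mount is an independent secrets engine with its own CA and policies.
    client.sys.enable_secrets_engine(backend_type="pki", path=path)
    # Generate a separate (internal) CA per environment; you could instead
    # generate a CSR here and have an offline root sign an intermediate.
    client.secrets.pki.generate_root(
        type="internal",
        common_name=f"{env}.example.com CA",
        mount_point=path,
    )
```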

Importance of password security within kubernetes namespaces?

While setting up automated deployments with Kubernetes (and Helm), I came across the following question several times:
How important is the secrecy of a service's password (MySQL, for example) inside a single namespace?
My thoughts: it's not important at all. Why? All related pods include the password anyway, and the services are not available outside of the specific namespace. If someone were to gain access to a pod in that namespace, printenv would give them everything they need.
My specific case (Helm): if I set up my MySQL server as a requirement (requirements.yaml), I don't have to use any secrets or make any effort to share the MySQL password, and can simply provide the password in values.yaml.
While Kubernetes secrets aren't that secret, they are more secret than Helm values. Fundamentally I'd suggest this question is more about how much you trust humans with the database password than any particular process. Three approaches come to mind:
You pass the database password via Helm values. Helm isn't especially access-controlled, so anyone who can helm install or helm rollback can also helm get values and find out the password. If you don't care whether these humans have the password (all deployments are run via an automated system; all deployments are run by the devops team who has all the passwords anyways; you're a 10-person startup) then this works.
The database password is in an RBAC-protected Secret. You can use Kubernetes role-based access control so that ordinary users can't directly read the contents of Secrets. Some administrator creates the Secret, and the Pod mounts it or injects it as an environment variable. Now you don't need the password yourself to be able to deploy, and you can't trivially extract it (but it's not that much work to dump it out, if you can launch an arbitrary container).
The application gets the database password from some external source at startup time. Hashicorp's Vault is the solution I've worked with here: the Pod runs with a Kubernetes service account, which it uses to get a token from Vault, and then it uses that to get the database password. The advanced version of this hands out single-use credentials that can be traced back to a specific Pod and service account. This is the most complex path, but also the most secure.
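As a very rough sketch of that third approach (Python hvac client; the Vault role, mount, and secret path are assumptions, and the service account token path is the standard in-pod location):

```python
# Sketch: a pod exchanges its Kubernetes service account token for a Vault
# token, then reads the database password. Role/mount/path names are examples.
import hvac

# The service account token every pod gets mounted by default.
SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

with open(SA_TOKEN_PATH) as f:
    jwt = f.read()

client = hvac.Client(url="https://vault.example.com:8200")

# Vault validates the JWT against the Kubernetes API and returns a short-lived token.
client.auth.kubernetes.login(role="my-app", jwt=jwt)

# Read the database password from a KV v2 mount.
secret = client.secrets.kv.v2.read_secret_version(path="my-app/db", mount_point="secret")
db_password = secret["data"]["data"]["password"]
```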

How to intercept Hashicorp Vault update requests?

I am planning to use HashiCorp Vault for secrets management. The secrets/tokens used by my Java application can be read by multiple services, and those services may store the tokens in memory.
One service is designed to update the secrets in Vault.
Once a secret is updated in Vault, I want the Java application to get notified about the change.
Does Vault provide any built-in solution for this?
For example: Servlet Filters in Java, where all requests can be intercepted using filters.

Kubernetes secrets and service accounts

I've been working with Kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Data access to this must be audited. Since access to this data is very sensitive, we are reluctant to put both service accounts in the same namespace; if it were compromised in any way, the attacker could gain access to both the data and the keys without being audited.
For now we have one key in a secret and the other we're going to manually post to the single pod.
This is horrible as it requires that a single person be trusted with this key, and limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
Data access to this must be audited
If you enable audit logging, every API call made with this service account will be logged. This may not help you if your service isn't ever called via the API, but given that you have a service account in use, it sounds like it would be.
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have the secret pushed down into the pod as an environment variable automatically (rough sketch below). This is a little more involved than your process, but is considerably more secure.
You can also use Vault alongside Google Cloud KMS, which is detailed in this article.
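Whatever tool you use, the general shape is an entrypoint that fetches the secret and hands it to the real process as an environment variable. A rough sketch with the Python hvac client; the paths, variable names, and how the Vault token gets into the pod are all assumptions:

```python
#!/usr/bin/env python3
# Sketch of an entrypoint wrapper: read a secret from Vault, put it in the
# environment, then exec the real service. Paths and names are examples.
import os
import sys

import hvac

# However the pod authenticates (Kubernetes auth, an injected token, ...),
# assume we already have a Vault token available here.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

secret = client.secrets.kv.v2.read_secret_version(path="kms-service/keys", mount_point="secret")
os.environ["KMS_KEY"] = secret["data"]["data"]["key"]

# Replace this process with the real service so it inherits the variable
# without the secret ever touching disk.
os.execvp(sys.argv[1], sys.argv[1:])
```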
What you're describing is pretty common - using a key / service account / identity in Kubernetes secrets to access an external secret store.
I'm a bit confused by the double-key concept - what are you gaining by keeping one key in Secrets and the other in the pod? If Secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down Secrets, using audit logs, and making sure the key is easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit-policy), and Cloud KMS audit logs for use of the key.
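As a small sketch of the single-secret route (Python kubernetes client; the Secret and namespace names are examples), the consuming pod or operator would read it roughly like this, and each such API read is what shows up in the Kubernetes audit log if your audit policy covers Secrets:

```python
# Sketch: read the single Kubernetes Secret that holds the externally managed key.
# Secret/namespace names are examples; the API read is what gets audit-logged.
import base64

from kubernetes import client, config

config.load_incluster_config()  # or config.load_kube_config() outside the cluster
v1 = client.CoreV1Api()

secret = v1.read_namespaced_secret(name="kms-key", namespace="sensitive-data")
key_material = base64.b64decode(secret.data["key"])
```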