I am planning to use HashiCorp Vault for secrets management. The secrets/tokens used by my Java application can be read by multiple services, and those services may hold the tokens in memory.
One service is designed to update the secrets in Vault.
Once a secret is updated in Vault, I want the Java application to get notified about the change.
Does Vault provide any built-in solution for this?
For example: Servlet Filters in Java. All requests can be intercepted using filters.
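As far as I know, Vault has no built-in push notification for KV secret changes; clients are expected to poll. A minimal sketch of that approach, assuming the KV v2 engine mounted at secret/ and a secret named myapp (the address, token, path, and polling interval below are all placeholders), using only the JDK's HTTP client to watch the current_version field of the metadata endpoint:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VaultSecretWatcher {
    // Placeholders: point these at your Vault server and secret.
    private static final String VAULT_ADDR = "http://127.0.0.1:8200";
    private static final String VAULT_TOKEN = System.getenv("VAULT_TOKEN");
    // KV v2 metadata endpoint for a secret named "myapp" under the "secret/" mount
    private static final String METADATA_URL = VAULT_ADDR + "/v1/secret/metadata/myapp";
    private static final Pattern VERSION = Pattern.compile("\"current_version\"\\s*:\\s*(\\d+)");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        long lastVersion = -1;
        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(METADATA_URL))
                    .header("X-Vault-Token", VAULT_TOKEN)
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            Matcher m = VERSION.matcher(response.body());
            if (m.find()) {
                long version = Long.parseLong(m.group(1));
                if (lastVersion >= 0 && version != lastVersion) {
                    // A new version was written: re-read the secret here and
                    // refresh any copies the services hold in memory.
                    System.out.println("Secret changed, now at version " + version);
                }
                lastVersion = version;
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}
```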
I am using HashiCorp Vault to store multiple secrets in the KV secrets engine, one of which is the database connection string - username, password, host IP and port. I have multiple microservices which need to use this DB secret to connect to the DB.
Please clarify which of these integration patterns is valid:
Direct integration with Vault: each of the microservices will have a direct connection to Vault to get the secrets needed for its operation. All the microservices will have the Vault token configured (in K8s Secrets) for accessing Vault.
Retrieving secrets via another microservice: there will be an abstraction layer, i.e. a separate microservice for Vault interaction, and all the other microservices will call the APIs of this vault-microservice to get the secrets they want. The Vault token (in K8s Secrets) will be accessed by only one microservice.
The other microservice is an abstraction layer. It is extra work that might allow you to change secrets providers in the future.
Unless you can justify writing and maintaining that abstraction layer (because you want to use Vault in some deployments and AWS Secrets Manager in others), don't bother.
The other issue is that although Vault's KV store is quite common and there are several other implementations, what if you want to use Transit, PKI or SSH CA? These services exist elsewhere (in AWS, for example), but they don't have feature parity. You probably don't want to be on the hook to support those differences in your abstraction layer.
A lower-cost alternative that allows you to decouple the implementation from your code would be to wrap the Vault API class in a simple KVSecrets class in your code, a software design pattern known as the facade. But remember that unless you test your class with two services, you can't guarantee it will be possible to migrate to another service one day.
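A minimal sketch of such a facade, assuming the BetterCloud vault-java-driver (the class name KVSecrets and the method shape are illustrative, not a prescribed API):

```java
import com.bettercloud.vault.Vault;
import com.bettercloud.vault.VaultConfig;
import com.bettercloud.vault.VaultException;

/**
 * Thin facade over the Vault client: the rest of the code base only ever
 * asks for "the value of key K at path P" and never sees the Vault API.
 */
public class KVSecrets {
    private final Vault vault;

    public KVSecrets(String address, String token) throws VaultException {
        // Depending on your KV engine version, extra configuration may be needed.
        this.vault = new Vault(new VaultConfig().address(address).token(token).build());
    }

    /** Reads a single value from the KV engine. */
    public String get(String path, String key) throws VaultException {
        return vault.logical().read(path).getData().get(key);
    }
}
```

Callers depend only on KVSecrets, so swapping the body for another provider's SDK is a local change - but, as noted above, you still can't guarantee the migration works until you test against both services.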
So considering all this, just call the API directly or use the Vault library for your programming language.
We have a whole bunch of secrets on our HashiCorp Vault server. We have started testing out Spinnaker for deploying on Kubernetes, but I do not see any documentation on how to create a secret on Kubernetes by reading from HashiCorp Vault.
Can someone point me in the right direction for this? Is it even advisable to create secrets using Spinnaker or should we just use it strictly for deployments?
The problem with creating a secret via Spinnaker is: where do you keep the content of the secret in the first place so that you can create a secret from it? Wherever you keep it, it introduces a risk of compromise. So I would suggest creating the secret dynamically at runtime using a sidecar injector.
The HashiCorp Vault sidecar injector agent is a tool that can be used for this purpose. The injector is a Kubernetes mutating webhook controller: it intercepts pod events and applies mutations to the pod if the relevant annotations exist within the request.
Since the secret gets injected directly into the pod as a volume mount from the Vault server, the chance of compromise is lower compared to creating a secret via Spinnaker.
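For reference, the injection is driven purely by pod annotations along these lines (a sketch - the role and secret path are placeholders for whatever you have configured):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # Enable the Vault agent injector for this pod
    vault.hashicorp.com/agent-inject: "true"
    # Vault role the pod authenticates as (Kubernetes auth method)
    vault.hashicorp.com/role: "my-app-role"
    # Render the secret at /vault/secrets/db-creds inside the pod
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/db-creds"
spec:
  containers:
    - name: my-app
      image: my-app:1.0
```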
I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and YouTube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster - thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe that I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to using Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
In beta for three weeks now, and not yet officially announced (it should be in a couple of days), you can have a look at Secret Manager. It's a serverless secret manager with, I think, all the basic requirements that you need.
The main reason it has not yet been announced is that the client libraries in several languages aren't yet released/finished.
The awesome guy on your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses Cloud KMS for encrypting the secrets and Google Cloud Storage for storing them. I also recommend it.
I built a Python library to make it easy to use Berglas secrets from Python.
Hope this secret management tool meets your expectations. In any case, it's serverless and quite cheap!
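If you want to try Secret Manager before the client libraries are out, the gcloud CLI already exposes it (at the time of writing under the beta track; my-secret and the value are placeholders):

```sh
# Create a secret with a first version read from stdin
echo -n "s3cr3t" | gcloud beta secrets create my-secret \
    --replication-policy=automatic --data-file=-

# Read the latest version back
gcloud beta secrets versions access latest --secret=my-secret
```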
We are using HashiCorp Vault with Consul as the storage backend, and we want to implement a robust backup and recovery strategy for Vault.
We are particularly looking to back up all the Vault data and use that backup file to stand up a new Vault server.
I have done a fair amount of research but have not been able to find a convincing solution :(
Please provide any suggestions.
This is what we followed in our production environment for the high availability of the Vault server.
As you're using Consul as the backend, make sure Consul/the backend is highly available, as all the data/secrets are stored in it.
Just to check the behaviour, try running two Vault server instances pointing to the same Consul backend. Observe that both instances, when the UI is opened in a browser, show the same data, since the backend is the same.
When Vault is backed by persistent, highly available storage, Vault itself can be considered just a front-end/UI service that displays the data/secrets/policies.
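For reference, both instances would point at the same backend via a server configuration along these lines (addresses are placeholders; a sketch for testing, not a hardened production config):

```hcl
# Vault server config shared by both instances
storage "consul" {
  address = "127.0.0.1:8500"  # local Consul agent
  path    = "vault/"          # keyspace in Consul's KV store
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1  # for a quick test only; use TLS in production
}
```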
Vault High Availability with Consul - that is what Here_2_learn was talking about.
Also, if you are using Consul as the storage backend for Vault, you can use consul snapshot for backing up your data.
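A minimal sketch of that workflow (the file name is arbitrary):

```sh
# Take a point-in-time snapshot of Consul's data, which includes Vault's data
consul snapshot save vault-backup.snap

# Sanity-check the snapshot
consul snapshot inspect vault-backup.snap

# Restore it into the Consul cluster backing the new Vault server
consul snapshot restore vault-backup.snap
```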
I've been working with Kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Data access to this must be audited. Since access to this data is very sensitive, we are reluctant to put both service accounts in the same namespace, as, if either were compromised in any way, the attacker could gain access to the data and the keys without it being audited.
For now we have one key in a secret and the other we're going to manually post to the single pod.
This is horrible as it requires that a single person be trusted with this key, and limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
Data access to this must be audited
If you enable audit logging, every API call made via this service account will be logged. This may not help you if your service isn't ever called via the API, but considering you have a service account in use, it sounds like it would be.
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have the secret pushed down into the pod as an environment variable automatically. This is a little more involved than your process, but is considerably more secure.
You can also use Vault alongside Google Cloud KMS, which is detailed in this article.
What you're describing is pretty common - using a key/service account/identity in Kubernetes Secrets to access an external secret store.
I'm a bit confused by the double-key concept - what are you gaining by having one key in a Secret and the other in the pod? If Secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down Secrets, using audit logs, and making the key easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit-policy), and Cloud KMS audit logs for use of the key.
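As a sketch, an audit policy rule that records secret access without logging the secret payload itself might look like this:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log who read or changed which Secret, at Metadata level so the
  # secret values themselves never end up in the audit log
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
```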