HashiCorp Vault sealing questions

I've started playing with HashiCorp's Vault to manage secrets and have some questions about the day-to-day handling of Vault sealing. My workflow has two auth backends: specific users have write access so they can add new secrets, and servers have read-only access to the secrets they need.
1) Under normal circumstances, does the Vault stay in an unsealed state? I believe it would, as a dynamically provisioned server should not have to coordinate an unseal.
2) Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?
3) What's the best practice for ensuring the vault process is always running, since if it dies the Vault will seal? Also, in a highly available configuration, if one Vault node's process dies, does it seal the Vault for everyone?

I asked this question on the Vault Google Group and this was the best response:
1) Under normal circumstances, does the Vault stay in an unsealed state? I believe it would, as a dynamically provisioned server should not have to coordinate an unseal.
Yes. Once Vault is initialized and unsealed, it 'normally' stays in an unsealed state.
2) Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?
Sealing Vault provides a turnkey mechanism to stop all of Vault's services. Making Vault operational again requires a specific number of unseal key holders.
3) What's the best practice for ensuring the vault process is always running, since if it dies the Vault will seal? Also, in a highly available configuration, if one Vault node's process dies, does it seal the Vault for everyone?
There is no official best-practice recommendation for this, but running Vault on a dedicated instance/cluster with very limited or no access to its memory is advisable, as is running Vault in HA mode using a backend which supports it. If any of the cluster nodes goes down or if the Vault process is restarted, that node will be in a sealed state and will require the unseal operation to be performed before it is operational again.
Best,
Vishal
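To see that last point for yourself, each node's seal status can be probed via the unauthenticated seal-status endpoint. Below is a minimal sketch using the hvac Python client (one assumption; any Vault client works), with hypothetical node addresses:

```python
import hvac

# Hypothetical HA cluster node addresses; replace with your own.
NODES = [
    "https://vault-0.example.com:8200",
    "https://vault-1.example.com:8200",
]

for addr in NODES:
    client = hvac.Client(url=addr)
    # GET /v1/sys/seal-status works even on a sealed node.
    status = client.sys.read_seal_status()
    print(f"{addr}: sealed={status['sealed']}")
```

A node that was restarted will report sealed=True until enough key shards are submitted to it, while the other unsealed nodes keep serving requests.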

Taken from https://www.vaultproject.io/docs/concepts/seal.html:
"Under normal circumstances, does the Vault stay in an unsealed state?" -
When a Vault server is started, it starts in a sealed state. In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it.
Unsealing is the process of constructing the master key necessary to
read the decryption key to decrypt the data, allowing access to the
Vault.
Prior to unsealing, almost no operations are possible with Vault. For
example authentication, managing the mount tables, etc. are all not possible. The only possible operations are to unseal the Vault and check the status of the unseal.
"Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?" -
This way, if there is a detected intrusion, the Vault data can be locked
quickly to try to minimize damages. It can't be accessed again without access to the master key shards.
"since if it dies the Vault will seal?" - yes.

Related

Vault direct integration or via a microservice

I am using Hashicorp Vault to store multiple secrets in the KV secrets engine, one of which is the database connection string: username, password, host IP and port. I have multiple microservices which need to use this DB secret to connect to the database.
Please clarify which of these integration patterns is valid:
Direct Integration with Vault: Each of the microservices will have direct connection with Vault to get the secrets needed for the operation. All the microservices will have the vault token configured (in K8s secrets) for accessing the vault.
Retrieving secrets via another microservice: Should there be an abstract layer i.e. a separate microservice for Vault interaction and all the other microservices will call the APIs of this vault-microservice to get the secrets they want. The vault token (in K8s Secrets) will be accessed by only one microservice.
The other microservice is an abstraction layer. It is extra work that might allow you to change secrets provider in the future.
Unless you can justify writing and maintaining that abstraction layer (because you want to use Vault in some deployments and AWS Secrets Manager in others), then don't bother.
The other issue is that although Vault's KV store is quite common and there are several other implementations, what if you want to use Transit, PKI or SSH CA? These services exist elsewhere (in AWS, for example), but they don't have feature parity. You probably don't want to be on the hook to support those differences in your abstraction layer.
A lower-cost alternative that lets you decouple the implementation from your code is to wrap the Vault API class with a simple KVSecrets class in your code, a software design pattern known as the facade (see the sketch below). But remember that unless you test your class with two services, you can't guarantee it will be possible to migrate to another service one day.
So considering all this, just call the API directly or use the Vault library for your programming language.
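For reference, here is a minimal sketch of that facade, assuming the hvac Python library and the KV v2 secrets engine mounted at "secret/"; the class and method names are illustrative, not a standard API:

```python
import hvac

class KVSecrets:
    """Thin facade so application code never imports hvac directly."""

    def __init__(self, url: str, token: str, mount_point: str = "secret"):
        self._client = hvac.Client(url=url, token=token)
        self._mount_point = mount_point

    def get(self, path: str) -> dict:
        # KV v2 nests the payload under data.data in the response.
        response = self._client.secrets.kv.v2.read_secret_version(
            path=path, mount_point=self._mount_point
        )
        return response["data"]["data"]

# Hypothetical usage: swapping providers later means rewriting only this class.
# secrets = KVSecrets("https://vault.example.com:8200", token="s.xxxx")
# db = secrets.get("myapp/database")
# dsn = f"postgres://{db['username']}:{db['password']}@{db['host']}:{db['port']}"
```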

Openshift secure delete using "oc delete secret"

I have been asked by security auditors to explain the underlying process of "oc delete secret". What they want to establish is that, once deleted, the secret is not recoverable by forensics tools, for example.
Thanks in advance for any input.
There are a lot of layers you would have to peel in order to do a thorough assessment/audit. Start by looking at the calls performed by the oc CLI, which underneath should be sending a request to the DELETE secret endpoint on the API server.
Ultimately, in Kubernetes, the delete operation comes down to deleting a key from the etcd datastore, as seen here. You can further dig into the etcd Go API and etcd internals to determine how the deletion is performed across cluster nodes on commit, and whether a forensics tool would be able to pry into the storage blocks on disk.
Also, ensure Pods that mount the secrets are not writing the data to disk or to logs.
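For reference, this is a minimal sketch of the same DELETE call that "oc delete secret" ultimately issues against the API server, using the official kubernetes Python client; the secret and namespace names are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Sends DELETE /api/v1/namespaces/my-project/secrets/my-secret; the API
# server then removes the corresponding key from etcd.
v1.delete_namespaced_secret(name="my-secret", namespace="my-project")
```

Note that etcd removes the key from its keyspace, but like most databases it does not necessarily scrub the old value from the underlying storage blocks, which is exactly what the forensics question comes down to.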

Back up Hashicorp Vault server and use the backup to build a new server

We are using Hashicorp Vault with Consul as storage and want to implement a robust backup and recovery strategy for Vault.
In particular, we are looking to back up all the Vault data and use that file as storage when building a new Vault server.
I've done a fair amount of research but wasn't able to find a convincing solution :(
Please provide any suggestions.
This is what we followed in our production environment for high availability of the Vault server.
As you're using Consul as the backend, make sure Consul/the backend is highly available, since all the data/secrets are stored in it.
Just to check the behavior, try running two Vault server instances pointing at the same Consul backend. Observe that both instances, when the UI is opened in a browser, show the same data, because the backend is the same.
When Vault is backed by persistent, highly available storage, Vault itself can be considered just a front-end/UI service that displays data/secrets/policies.
Vault High Availability with Consul is what Here_2_learn was talking about.
Also, if you are using Consul as the storage backend for Vault, you can use consul snapshot for backing up your data.
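A minimal sketch of that backup/restore flow, wrapping the real consul snapshot commands from Python; the file name is illustrative:

```python
import subprocess

# Take an atomic, point-in-time snapshot of the Consul data (which includes
# everything Vault has stored) from the current cluster leader.
subprocess.run(["consul", "snapshot", "save", "vault-backup.snap"], check=True)

# Later, on the new cluster: restore the snapshot, then start Vault against it.
subprocess.run(["consul", "snapshot", "restore", "vault-backup.snap"], check=True)
```

Note that the restored Vault comes up sealed and still requires the original unseal keys, since the data is encrypted with the same master key.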

Is the information stored inside GKE "etcd" encrypted?

I am using GKE (Google Kubernetes Engine) 1.13.6-gke.6 and I need to provide etcd encryption evidence for PCI purposes. I have used the --data-encryption-key flag and used a KMS key to encrypt secrets, following this documentation.
I need to give a set of commands which will prove that the information stored in etcd on the master node is encrypted.
We know how to verify that secrets stored inside a normal Kubernetes cluster (not GKE) are encrypted. As we know, GKE is a managed service and the master node is managed by GCP. Is there a way to access GKE "etcd" to see the stored secrets and data at rest?
Why do you have to prove that the information is encrypted? GKE is covered by Google Cloud's PCI DSS certification, and since the master is part of the "cluster as a service", it should be out of scope for what you need to show, since you don't (and can't) control the way in which the storage is implemented.
One thing you can do is use Application-layer Secrets Encryption to encrypt your secrets with your own key stored in Cloud KMS. For those secrets, you would then be able to run commands to prove the additional level of encryption.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security#etcd_security
In Google Cloud, customer content is encrypted at the filesystem layer by default. So disks that host etcd storage for GKE clusters are encrypted at the filesystem layer. For more information, see Encryption at Rest.
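For contrast, this is the kind of check you can run on a self-managed cluster where etcd is accessible (not possible on GKE); the secret name and etcd endpoint are illustrative:

```python
import subprocess

# Read the raw stored value of a secret straight out of etcd.
raw = subprocess.run(
    ["etcdctl", "get", "/registry/secrets/default/my-secret",
     "--endpoints=https://127.0.0.1:2379"],
    capture_output=True, check=True,
).stdout

# With envelope encryption via KMS enabled, the stored value begins with an
# encryption-provider prefix such as k8s:enc:kms:v1: instead of plaintext.
print(b"k8s:enc:" in raw)
```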

Kubernetes secrets and service accounts

I've been working with kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Access to this data must be audited. Since the data is very sensitive, we are reluctant to put both service accounts in the same namespace: if it were compromised in any way, an attacker could gain access to both the data and the keys without being audited.
For now we have one key in a secret and the other we're going to manually post to the single pod.
This is horrible, as it requires that a single person be trusted with this key, and it limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
Data access to this must be audited
If you enable audit logging, every API call done via this service account will be logged. This may not help you if your service isn't ever called via the API, but considering you have a service account being used, it sounds like it would be.
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have it pushed down into the pod as an environment variable automatically. This is a little more involved than your process, but it is considerably more secure.
You can also use Vault alongside Google Cloud KMS, which is detailed in this article.
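For illustration, a minimal sketch of a pod authenticating to Vault with its service account token and reading a secret, assuming hvac and Vault's Kubernetes auth method; the role and secret path are hypothetical:

```python
import hvac

# The service account token Kubernetes mounts into every pod.
JWT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

client = hvac.Client(url="https://vault.example.com:8200")
with open(JWT_PATH) as f:
    client.auth.kubernetes.login(role="my-app", jwt=f.read())

# Read the sensitive material; every login and read is auditable in Vault.
secret = client.secrets.kv.v2.read_secret_version(path="my-app/kms-key")
print(secret["data"]["data"])
```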
What you're describing is pretty common - using a key/service account/identity in Kubernetes secrets to access an external secret store.
I'm a bit confused by the double-key concept - what are you gaining by having a key in both secrets and in the pod? If secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down secrets, using audit logs, and making the key easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit-policy), and Cloud KMS audit logs for use of the key.
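To illustrate that second option, here is a minimal sketch of envelope-encrypting a value with Cloud KMS before the ciphertext goes into a single Kubernetes secret, using the google-cloud-kms library; the project and key names are illustrative:

```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "my-ring", "my-key")

# Encrypt the sensitive value with the KMS key; this call (and every later
# decrypt) appears in Cloud KMS audit logs.
ciphertext = client.encrypt(
    request={"name": key_name, "plaintext": b"sensitive-datastore-credential"}
).ciphertext

# Store only `ciphertext` in the Kubernetes secret; at startup the pod calls
# client.decrypt(request={"name": key_name, "ciphertext": ciphertext}),
# so data access is audited on both the Kubernetes and KMS sides.
```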