Backup a HashiCorp Vault server and use the backup to build a new server

We are using HashiCorp Vault with Consul as the storage backend, and we want to implement a robust backup and recovery strategy for Vault.
In particular, we want to back up all of the Vault data and use that backup as the storage when building a new Vault server.
I have done plenty of research but have not been able to find a convincing solution :(
Please provide any suggestions.

This is what we follow in our production environment for high availability of the Vault server.
Since you are using Consul as the backend, make sure Consul (the backend) is highly available, as all of the data/secrets are stored in it.
Just to check the behaviour, try running two Vault server instances that point to the same Consul backend. Open the UI of each instance in a browser and observe that both show the same data, because the backend is the same.
When Vault is backed by persistent, highly available storage, Vault itself can be treated as just a front-end/UI service that exposes the data/secrets/policies.
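If you want to see this for yourself, a small check like the following (a sketch only; the node addresses are placeholders) queries each instance's unauthenticated sys/leader endpoint and shows that both report the same HA leader because they share the same Consul backend:

```python
# Sketch only: the two node addresses are placeholders for your own Vault instances.
import requests

VAULT_NODES = ["http://vault-1.example.com:8200", "http://vault-2.example.com:8200"]

for node in VAULT_NODES:
    # /v1/sys/leader is unauthenticated and reports the HA status of each node.
    status = requests.get(f"{node}/v1/sys/leader", timeout=5).json()
    print(node,
          "ha_enabled:", status.get("ha_enabled"),
          "is_self (active node):", status.get("is_self"),
          "leader_address:", status.get("leader_address"))

# With a shared Consul backend, both nodes report the same leader_address:
# one is the active node (is_self: true) and the other runs as a standby.
```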

Vault High Availability with Consul is what Here_2_learn is talking about.
Also, if you are using Consul as the storage backend for Vault, you can use consul snapshot to back up your data.
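As a hedged sketch of that approach, the snippet below uses Consul's snapshot HTTP API (the CLI equivalents are `consul snapshot save` and `consul snapshot restore`); the Consul address and ACL token are placeholders:

```python
# Sketch only: the Consul address and ACL token are placeholders.
import requests

CONSUL_ADDR = "http://127.0.0.1:8500"
HEADERS = {"X-Consul-Token": "<consul-acl-token>"}

# Take a snapshot of the Consul data; this includes everything Vault has stored,
# still encrypted by Vault's barrier.
resp = requests.get(f"{CONSUL_ADDR}/v1/snapshot", headers=HEADERS)
resp.raise_for_status()
with open("vault-backup.snap", "wb") as f:
    f.write(resp.content)

# Later, restore the snapshot into a fresh Consul cluster, point the new Vault
# server(s) at it, and unseal with the original unseal/recovery keys.
with open("vault-backup.snap", "rb") as f:
    restore = requests.put(f"{CONSUL_ADDR}/v1/snapshot", headers=HEADERS, data=f)
restore.raise_for_status()
```

Because the snapshot is the storage backend's data, restoring it into a fresh Consul cluster and pointing a new Vault server at that cluster gives you back the same secrets once the server is unsealed with the original keys.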

Related

How to set up a GKE cluster whose pods communicate with Cloud SQL, with the Cloud SQL password stored in Google Cloud Secret Manager

I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch the credentials from Secret Manager, and if the Secret Manager credentials are updated, how will the pods get the new secret?
How do I set up the above requirement? Can someone please help with this?
Thanks,
Anand
You can make your deployed application fetch the secret (password) programmatically from Google Cloud Secret Manager. You can find examples in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first make sure that your GKE setup, and more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
You can find information regarding that particular solution in this doc.
There are also good examples on medium here and here.
To answer your question regarding updating the secrets:
Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or for the pods to stick around for very long) you can adjust the code to update the secrets on every execution.
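For illustration, a minimal sketch based on the official sample linked above (project and secret names are placeholders, and the pod must already be able to authenticate, e.g. via Workload Identity):

```python
# Sketch only, based on the official sample linked above; names are placeholders.
from google.cloud import secretmanager

def get_db_password(project_id: str = "my-project",
                    secret_id: str = "cloudsql-password") -> str:
    client = secretmanager.SecretManagerServiceClient()
    # "latest" always resolves to the newest enabled version, so calling this
    # again after the secret is rotated returns the new value.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/latest"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("UTF-8")

# Call at container start-up, or on every connection attempt if you expect the
# credentials to rotate while the pod is running.
password = get_db_password()
```

Because the code resolves the "latest" version at call time, re-running it after a rotation returns the new password without redeploying the pod.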

Vault: data synchronisation between a cloud instance and an on-premise instance?

Can HashiCorp Vault be configured so that there is one instance in the cloud and one instance on an on-premise machine? Both Vaults should regularly synchronise their data/secrets.
What are the mechanisms for this?
The use case is that the on-premise machine is not constantly connected to the internet. The applications on the internal network should be able to connect to other applications using the Vault secrets stored on the on-premise server.
HashiCorp Vault provides replication in Vault Enterprise, and multi-datacenter deployment is one of its usage scenarios. The setup you are looking for is primary (cloud) / secondary (on-prem). More info: https://www.vaultproject.io/docs/enterprise/replication
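As a rough, hedged sketch of what that setup looks like through the HTTP API (these sys/replication endpoints require a Vault Enterprise license on both sides; addresses, tokens and the secondary id are placeholders):

```python
# Sketch only: addresses, tokens and the secondary id are placeholders, and the
# sys/replication endpoints are only available in Vault Enterprise.
import requests

PRIMARY = "https://vault.cloud.example.com:8200"
SECONDARY = "https://vault.onprem.example.com:8200"
PRIMARY_AUTH = {"X-Vault-Token": "<primary-admin-token>"}
SECONDARY_AUTH = {"X-Vault-Token": "<secondary-admin-token>"}

# 1. Enable the cloud instance as the replication primary.
requests.post(f"{PRIMARY}/v1/sys/replication/performance/primary/enable",
              headers=PRIMARY_AUTH).raise_for_status()

# 2. Generate an activation token for the on-prem secondary.
resp = requests.post(f"{PRIMARY}/v1/sys/replication/performance/primary/secondary-token",
                     headers=PRIMARY_AUTH, json={"id": "onprem-dc"})
resp.raise_for_status()
activation_token = resp.json()["wrap_info"]["token"]

# 3. Activate the on-prem instance as a secondary; it syncs from the primary
#    whenever connectivity is available.
requests.post(f"{SECONDARY}/v1/sys/replication/performance/secondary/enable",
              headers=SECONDARY_AUTH, json={"token": activation_token}).raise_for_status()
```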

GKE with Hashicorp Vault - Possible to use Google Cloud Run?

I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and YouTube videos that cover this type of setup mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster, thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe that I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
In beta for three weeks, and not yet officially announced (it should be in a couple of days), you can have a look at Secret Manager. It's a serverless secret manager with, I think, all the basic features that you need.
The main reason it has not yet been announced is that the client libraries in several languages aren't released/finished yet.
The awesome guy in your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses KMS for encrypting the secrets and Google Cloud Storage for storing them. I also recommend it.
I built a Python library to make it easy to use Berglas secrets in Python.
I hope this secret management tool will meet your expectations. In any case, it's serverless and quite cheap!

Is the information stored inside GKE "etcd" encrypted?

I am using GKE (Google Kubernetes Engine) 1.13.6-gke.6 and I need to provide evidence of etcd encryption for PCI purposes. I have used the --data-encryption-key flag with a KMS key to encrypt secrets, following this documentation.
I need to provide a set of commands that will prove that the information stored in etcd on the master node is encrypted.
Here is how we verify that the secrets stored inside a normal Kubernetes cluster (not GKE) are encrypted. As we know, GKE is a managed service and the master node is managed by GCP. Is there a way to access GKE etcd to see the stored secrets and data at rest?
Why do you have to prove that the information is encrypted? GKE is covered by Google Cloud's PCI DSS certification, and since the master is part of the "cluster as a service", it should be out of scope for what you need to show, because you don't (and can't) control the way in which the storage is implemented.
One thing you can do is use Application-layer Secrets Encryption to encrypt your secrets with your own key stored in Cloud KMS. For those secrets you would be able to run commands that prove the additional level of encryption.
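A hedged sketch of how to show that from the API side (it reads the same databaseEncryption field that gcloud container clusters describe reports; project, location and cluster names are placeholders):

```python
# Sketch only: project, location and cluster names are placeholders. This reads the
# same databaseEncryption field that `gcloud container clusters describe` shows.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster(
    name="projects/my-project/locations/us-central1/clusters/my-cluster"
)

enc = cluster.database_encryption
print("state:", enc.state)      # ENCRYPTED when application-layer encryption is on
print("key:", enc.key_name)     # the Cloud KMS key used to encrypt secrets in etcd
```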
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security#etcd_security
In Google Cloud, customer content is encrypted at the filesystem layer by default. So disks that host etcd storage for GKE clusters are encrypted at the filesystem layer. For more information, see Encryption at Rest.

Kubernetes secrets and service accounts

I've been working with Kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another one which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Data access to this must be audited. Since access to this data is very sensitive, we are reluctant to put both service accounts in the same namespace, as if it were compromised in any way an attacker could gain access to the data and the keys without being audited.
For now we have one key in a secret, and the other we're going to manually post to the single pod.
This is horrible, as it requires that a single person be trusted with this key, and it limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
Cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
Data access to this must be audited
If you enable audit logging, every API call done via this service account will be logged. This may not help you if your service isn't ever called via the API, but considering you have a service account being used, it sounds like it would be.
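As a hedged illustration, assuming these are Google Cloud service accounts, Cloud Audit Logs can be queried for every call they make (the project and service account names are placeholders):

```python
# Sketch only: the project and service account names are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")
log_filter = (
    'protoPayload.authenticationInfo.principalEmail='
    '"data-access@my-project.iam.gserviceaccount.com" '
    'AND logName:"cloudaudit.googleapis.com"'
)

# Most recent audit entries generated by calls made with that service account.
for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    print(entry.timestamp, entry.log_name)
```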
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have the secret pushed down into the pod as an environment variable automatically. This is a little more involved than your current process, but it is considerably more secure.
You can also use Vault alongside Google Cloud KMS, which is detailed in this article.
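As a minimal sketch of that idea (hvac is a third-party Python client for Vault; the address, secret path and auth details are placeholders), an entrypoint inside the pod could pull the key itself instead of a person posting it manually:

```python
# Sketch only: hvac is a third-party Vault client; the address, secret path and
# auth method are placeholders.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ["VAULT_TOKEN"],  # e.g. obtained via Vault's Kubernetes auth method
)

# Read the KMS key material from a KV v2 secret instead of baking it into the
# namespace's Kubernetes secrets.
secret = client.secrets.kv.v2.read_secret_version(path="myservice/kms-key")
kms_key = secret["data"]["data"]["key"]

# Expose it only to this process (or write it to a tmpfs file the app reads).
os.environ["KMS_KEY"] = kms_key
```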
What you're describing is pretty common - using a key/service account/identity stored in Kubernetes secrets to access an external secret store.
I'm a bit confused by the double-key concept - what are you gaining by having one key in secrets and one in the pod? If secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down secrets, using audit logs, and making the key easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider using just a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit policy), and Cloud KMS audit logs for use of the key.
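For completeness, a hedged sketch of reading that single secret from inside the pod with the official Python client (names are placeholders; in practice you would usually mount it as a volume or environment variable, and either way the API server's audit log records the access):

```python
# Sketch only: secret and namespace names are placeholders.
from kubernetes import client, config

config.load_incluster_config()   # authenticate as the pod's service account
v1 = client.CoreV1Api()

secret = v1.read_namespaced_secret(name="datastore-credentials", namespace="myservice")
print(sorted(secret.data.keys()))   # values are base64-encoded
```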