Vault: Data synchronization between a cloud and an on-premise instance? - hashicorp-vault

Can HashiCorp Vault be configured so that there is an instance in the cloud and an instance on an on-premise computer? Both vaults should regularly synchronize their data/secrets.
What are the mechanisms here?
The use case is that the on-premise computer is not constantly connected to the internet. The applications on the internal network should be able to connect to other applications using the secrets stored in the on-premise Vault server.

HashiCorp Vault provides replication in Vault Enterprise, and multi-datacenter deployment is one of its intended scenarios. In your case, the setup you are looking for is primary (cloud) / secondary (on-prem). More info: https://www.vaultproject.io/docs/enterprise/replication
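For reference, a rough sketch of what enabling performance replication looks like with the Vault CLI, assuming Vault Enterprise on both sides (the secondary id and the wrapping token are placeholders):

    # On the cloud (primary) instance:
    vault write -f sys/replication/performance/primary/enable
    vault write sys/replication/performance/primary/secondary-token id=onprem

    # On the on-premise (secondary) instance, activate replication with the
    # wrapping token returned by the previous command:
    vault write sys/replication/performance/secondary/enable token=<wrapping_token>

A performance secondary serves reads from its local replicated copy and forwards writes to the primary, which matches your use case: while the on-premise machine is offline, applications can still read the secrets that have already been replicated; changes only flow again once the link to the primary is restored.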

Related

How to run an application (like nextcloud) on-premises that has a failover to Azure Kubernetes Service

I want to run an application (like nextcloud) in Kubernetes on-premises that can fail over to Azure Kubernetes Service with the same data on it.
I already have an application on-premises and in the cloud, but the data needs to be synced.

Can we configure AWS Secrets Manager to integrate with an on-premises k8s cluster

I set up an EKS cluster and integrated AWS Secrets Manager in it following the steps mentioned in https://github.com/aws/secrets-store-csi-driver-provider-aws and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps as they seem to be written explicitly for AWS EKS-based clusters.
I googled around a bit and found you can call Secrets Manager programmatically using one of the ways in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but this approach won't work for us.
Is there a k8s way to directly connect to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.
You can set up external OIDC providers with AWS and also configure Kubernetes to use OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which lets you use host-based certificates to authenticate, but you will still have to call the Secrets Manager APIs.
If you are willing to retrieve secrets through etcd (which may store the secrets base64-encoded on the cluster), you can look at using the open-source External Secrets solution.
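As a rough sketch of the External Secrets approach, assuming the External Secrets Operator is installed on the on-premises cluster and a Kubernetes Secret named awssm-credentials holds static IAM keys (all names, the region, and the remote key below are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: external-secrets.io/v1beta1
    kind: SecretStore
    metadata:
      name: aws-secretsmanager
    spec:
      provider:
        aws:
          service: SecretsManager
          region: eu-central-1
          auth:
            secretRef:
              accessKeyIDSecretRef:
                name: awssm-credentials
                key: access-key-id
              secretAccessKeySecretRef:
                name: awssm-credentials
                key: secret-access-key
    ---
    apiVersion: external-secrets.io/v1beta1
    kind: ExternalSecret
    metadata:
      name: db-credentials
    spec:
      refreshInterval: 1h
      secretStoreRef:
        name: aws-secretsmanager
        kind: SecretStore
      target:
        name: db-credentials        # the Kubernetes Secret that gets created
      data:
        - secretKey: password
          remoteRef:
            key: prod/db/password   # name of the secret in AWS Secrets Manager
    EOF

The operator then keeps the Kubernetes Secret in sync with Secrets Manager, and workloads consume it like any other Secret; the trade-off, as noted above, is that the value also lives base64-encoded in etcd.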

Azure DevOps environment with private EKS cluster

I am currently using an EKS private cluster with a public API server endpoint in order to use Azure DevOps environments (with a Kubernetes service connection).
I have a requirement to make everything private in EKS.
Once EKS becomes private, it breaks everything in Azure DevOps as it is not able to reach the API server.
Any suggestion on how to let Azure DevOps communicate with a private Kubernetes API server would be appreciated.
If you're trying to target the cluster for deployment, you need a self-hosted agent that has a network route to your cluster.
The other capabilities exposed by the environment feature of Azure DevOps (i.e. monitoring the state of the cluster via the environment view) will not work -- they require a public-facing Kubernetes API to work.
If you don't mind the additional cost, a VPN can be used to establish a connection to the private EKS cluster.
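A minimal sketch of setting up such a self-hosted agent on a VM that has a route to the private API server; the agent package is downloaded from the Agent pools page in Azure DevOps, and the organization URL, PAT, and pool name below are placeholders:

    # Unpack the agent package downloaded from the Agent pools page:
    mkdir myagent && cd myagent
    tar zxvf ../vsts-agent-linux-x64-*.tar.gz

    # Register the agent against your organization and a private pool:
    ./config.sh --unattended \
      --url https://dev.azure.com/your-org \
      --auth pat --token "$AZP_TOKEN" \
      --pool PrivatePool

    # Run it interactively (or install it as a service with ./svc.sh install):
    ./run.sh

Pipelines that target that pool then run from inside your network and can reach the private API server directly.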

Back up a HashiCorp Vault server and use the backup to build a new server

We are using HashiCorp Vault with Consul as storage, and we want to implement a robust backup and recovery strategy for Vault.
We are particularly looking to back up all the Vault data and use that backup as the storage when building a new Vault server.
I have done a fair amount of research but have not been able to find a convincing solution :(
Please provide any suggestions.
This is what we followed in our production environment for the high availability of the Vault server.
As you're using Consul as the backend, make sure Consul/the backend is highly available, as all the data/secrets are stored in it.
Just to check the behavior, try running two Vault server instances pointing to the same Consul backend. When you open the UI of either instance in the browser, you will see the same data, because the backend is the same.
When Vault is backed by persistent, highly available storage, Vault itself can be considered just a front-end/UI service that displays the data/secrets/policies.
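To reproduce that experiment, here is a minimal sketch of a Vault server configuration pointing at a shared Consul backend (addresses are placeholders, and TLS is disabled only to keep the example short):

    cat > vault1.hcl <<'EOF'
    ui = true

    # Both Vault instances point at the same Consul cluster,
    # so they serve exactly the same data.
    storage "consul" {
      address = "127.0.0.1:8500"
      path    = "vault/"
    }

    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = 1
    }
    EOF

    vault server -config=vault1.hcl

Start a second instance with the same storage stanza (and a different listener port if it runs on the same host), and both UIs will show the same secrets.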
Vault High Availability with Consul is what Here_2_learn was talking about.
Also, if you are using Consul as the storage backend for Vault, you can use consul snapshot to back up the data.
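For example (the file name is a placeholder; note that Vault's data inside the snapshot stays encrypted, so you must also keep the unseal/recovery keys, because the snapshot alone is not enough to bring a new server up):

    # On the existing cluster: take an atomic, point-in-time snapshot.
    consul snapshot save vault-backup.snap

    # On the new cluster: restore the snapshot, then start Vault
    # against it and unseal with the original keys.
    consul snapshot restore vault-backup.snap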

Is the information stored inside GKE "etcd" encrypted?

I am using GKE (Google Kubernetes Engine) 1.13.6-gke.6 and I need to provide etcd encryption evidence for PCI purposes. I have used the --database-encryption-key flag with a KMS key to encrypt secrets, following this documentation.
I need to give a set of commands which will prove that the information stored in etcd of the master node is encrypted.
Here is how we verify that the secrets stored inside a normal Kubernetes cluster (not GKE) are encrypted (sketched below). As we know, GKE is a managed service and the master node is managed by GCP. Is there a way to access GKE "etcd" to see the stored secrets and data at rest?
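For context, this is roughly the check we run on a self-managed cluster (certificate paths, namespace, and secret name are placeholders, and aescbc is assumed to be the configured provider):

    # Create a test secret, then read it straight from etcd:
    kubectl create secret generic secret1 -n default --from-literal=mykey=mydata

    ETCDCTL_API=3 etcdctl \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get /registry/secrets/default/secret1 | hexdump -C

    # Encrypted at rest: the stored value starts with "k8s:enc:aescbc:v1:".
    # Unencrypted: the plaintext of the secret is visible in the dump.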
Why do you have to prove that the information is encrypted? GKE is covered by Google Cloud's PCI DSS certification, and since the master is part of the "cluster as a service", it should be out of scope for what you need to show: you don't (and can't) control the way the storage is implemented.
One thing you can do is use Application-layer Secrets Encryption to encrypt your secrets with your own key stored in Cloud KMS. For those secrets you would then be able to run commands that demonstrate the additional level of encryption.
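A sketch of enabling it on an existing cluster, assuming a Cloud KMS key already exists (cluster, project, location, and key names are placeholders):

    gcloud container clusters update my-cluster \
      --region us-central1 \
      --database-encryption-key \
        projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key

After that, newly written secrets are encrypted with your KMS key before they reach etcd, which gives you a concrete, customer-controlled layer of encryption to show as evidence.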
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security#etcd_security
In Google Cloud, customer content is encrypted at the filesystem layer by default. So disks that host etcd storage for GKE clusters are encrypted at the filesystem layer. For more information, see Encryption at Rest.