Is it necessary to remount a backend when the vault seals? - hashicorp-vault

I would like to use Hashicorp's Vault with the AWS backend. I've automated the process for unsealing the vault. If the vault were to ever seal, do I have to mount the AWS backend again?
Basically, do mounts get unmounted when the vault seals?
I'm just trying to figure out if I need to add the mount command to my unseal automation.

Found the answer myself. I'll post here for anyone else looking.
Backends are not automatically unmounted. In fact, the documentation states that unmounting a backend destroys all data:
When a secret backend is unmounted, all of its secrets are revoked (if they support it), and all of the data stored for that backend in the physical storage layer is deleted.
It would be pretty bad if sealing the vault also destroyed all your data. Heh heh.
I was able to test this for myself:
[vagrant@localhost ~]$ vault mount aws
Successfully mounted 'aws' at 'aws'!
[vagrant@localhost ~]$ vault mounts
Path     Type     Default TTL  Max TTL  Force No Cache  Replication Behavior  Description
aws/     aws      system       system   false           replicated
secret/  generic  system       system   false           replicated            generic secret storage
sys/     system   n/a          n/a      false           replicated            system endpoints used for control, policy and debugging
[vagrant@localhost ~]$ vault seal
Vault is now sealed.
[vagrant@localhost ~]$ vault unseal
Key (will be hidden):
[vagrant@localhost ~]$ vault mounts
Path     Type     Default TTL  Max TTL  Force No Cache  Replication Behavior  Description
aws/     aws      system       system   false           replicated
secret/  generic  system       system   false           replicated            generic secret storage
sys/     system   n/a          n/a      false           replicated            system endpoints used for control, policy and debugging
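So the unseal automation only needs to unseal; there is no mount step to add. For completeness, a minimal sketch of what that automation might look like (the key variables and the status check are placeholders, and the exact vault status output varies between Vault versions):

#!/bin/sh
# Hypothetical unseal helper -- backends stay mounted, so no mount step is needed.
# UNSEAL_KEY_1..3 stand in for however you retrieve your unseal keys.
if vault status 2>/dev/null | grep -q 'Sealed: true'; then
  vault unseal "$UNSEAL_KEY_1"
  vault unseal "$UNSEAL_KEY_2"
  vault unseal "$UNSEAL_KEY_3"
fi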

Related

AWS ECS Fargate: enforce readonlyRootFilesystem

I need to enforce 'readonlyRootFilesystem' on my ECS Fargate services to reduce Security Hub vulnerabilities.
I thought it would be an easy task of just setting it to true in the task definition.
But it backfired: the service does not deploy, because the commands in the Dockerfile cannot write to the folders they need, and the setting is also incompatible with SSM execute-command, so I can no longer get inside the container.
I managed to set readonlyRootFilesystem to true and get my service back up by mounting volumes: a tmp volume that the container uses to install dependencies at start-up, and a data volume to store data (updates).
So now, according to the documentation, the Security Hub finding should be fixed, since the rule only requires that the parameter not be false, but Security Hub is still flagging the task as non-compliant.
---More update---
The task definition of my service also spins up a Datadog image for monitoring. That container also needs a read-only filesystem to satisfy Security Hub.
Here I cannot apply the same fix as above, because the Datadog agent needs access to the /etc/ folder, and if I mount a volume there I will lose files and the service won't start.
Is there a way out of this?
Any ideas?
In case someone stumbles into this.
The solution (or workaround, call it as you please) was to set readonlyRootFilesystem to true for both the container and the sidecar (Datadog in this case) and use bind mounts.
The rules for monitoring ECS using Datadog can be found here
The bind mounts you need to add for your service depend on how you have set up your Dockerfile.
In my case it was about adding a volume for downloading data.
Moreover, since ECS Exec (SSM) does not work with a read-only filesystem, if you want it you also have to add mounts: I added two bind mounts, at /var/lib/amazon and /var/log/amazon. This allows SSM to work (essentially a docker exec into your container).
As for Datadog, I just needed to fix the mounts so that the agent could work. In my case, since it was again a custom image, I mounted a volume on /etc/datadog-agent.
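For reference, a rough sketch of the relevant pieces of such a task definition; the container and volume names here are made up, and the exact mount paths depend on your images:

{
  "volumes": [
    { "name": "tmp" },
    { "name": "var-lib-amazon" },
    { "name": "var-log-amazon" },
    { "name": "datadog-etc" }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "readonlyRootFilesystem": true,
      "mountPoints": [
        { "sourceVolume": "tmp", "containerPath": "/tmp" },
        { "sourceVolume": "var-lib-amazon", "containerPath": "/var/lib/amazon" },
        { "sourceVolume": "var-log-amazon", "containerPath": "/var/log/amazon" }
      ]
    },
    {
      "name": "datadog-agent",
      "readonlyRootFilesystem": true,
      "mountPoints": [
        { "sourceVolume": "datadog-etc", "containerPath": "/etc/datadog-agent" }
      ]
    }
  ]
}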
happy days!

Kasten K10 does not support backup for Ceph RGW storage provisioner

As the title says (and also here https://docs.kasten.io/latest/restrictions.html).
My company is using K10 latest (v5.0.2) as a backup tool for our Openshift cluster.
We are required to use a S3 compatible storage provisioner.
We moved from MinIO to Ceph because of some issues with MinIO (excessive memory usage, MinIO pod handling, ...), only to find out that Ceph RGW is not supported by K10, and this seems to be what makes our backups fail: the Kasten console shows that only the ObjectBucketClaim manifest is backed up, not the data contained in the bucket.
Also, when restoring, the ObjectBucketClaims remain in "pending" status.
I am stuck and I don't know what to suggest to my storage department: I told them to give up on MinIO and start using Ceph, but its RGW is not supported by K10.
Any suggestions on how I can handle this situation?
Thanks in advance.

Hashicorp Vault: Is it possible to make edits to pre-existing server configuration file?

I have a Kubernetes cluster which utilizes Vault secrets. I am attempting to modify the conf.hcl that was used to establish Vault. I went into the pod which contains Vault, and appended:
max_lease_ttl = "999h"
default_lease_ttl = "999h"
I did attempt to apply the changes using the only server option available according to the documentation, but it failed because a Vault server is already running and the listener port is already bound:
vault server -config conf.hcl
Error initializing listener of type tcp: listen tcp4 0.0.0.0:8200: bind: address already in use
You can't start a second server inside the pod, since the port is already bound in the container (Vault is already running there).
You need to restart the pod/deployment with the new config. It depends on how your Vault deployment is configured, but the config could be baked into the container image, stored in a mounted volume, or kept in a ConfigMap.
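If the config does come from a ConfigMap, the workflow might look roughly like this (the namespace, ConfigMap and workload names here are hypothetical; adjust them to your deployment):

# Find and edit the config (names are assumptions)
kubectl -n vault get configmap vault-config -o yaml
kubectl -n vault edit configmap vault-config   # add max_lease_ttl / default_lease_ttl

# Restart the workload so Vault re-reads its config at startup
kubectl -n vault rollout restart statefulset vault

# Vault comes back sealed after the restart, so unseal it again
kubectl -n vault exec -it vault-0 -- vault operator unseal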

Kubernetes - Where does it store secrets and how does it use those secrets on multiple nodes?

Not really a programming question, but I'm quite curious how Kubernetes or Minikube manages secrets and uses them on multiple nodes/pods.
Let's say I create a secret to pull an image with kubectl as below -
$ kubectl create secret docker-registry regsecret --docker-server=https://index.docker.io/v1/ --docker-username=$USERNM --docker-password=$PASSWD --docker-email=vivekyad4v@gmail.com
What processes will occur in the backend and how will k8s or Minikube use those on multiple nodes/pods?
All data in Kubernetes is managed by the API Server component, which performs CRUD operations on the data store (currently the only option is etcd).
When you submit a secret with kubectl to the API Server, it stores the resource and data in etcd. It is recommended to enable encryption for secrets in the API Server (by setting the right flags) so that the data is encrypted at rest; otherwise anyone with access to etcd will be able to read your secrets in plain text.
When the secret is needed, either for mounting in a Pod or, as in your example, for pulling a Docker image from a private registry, it is requested from the API Server by the node-local kubelet and kept in tmpfs, so it never touches any hard disk unencrypted.
Here another security recommendation comes into play, which is called Node Authorization (again set up by setting the right flags and distributing certificates to API Server and Kubelets). With Node Authorization enabled you can make sure that a kubelet can only request resources (incl. secrets) that are meant to be run on that specific node, so a hacked node just exposes the resources on that single node and not everything.
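To see why encryption at rest matters, you can read a secret straight out of etcd. A sketch, assuming direct etcdctl access to the cluster's etcd, the default /registry key prefix, and kubeadm-style certificate paths (all of which may differ on your cluster):

# Without encryption at rest the secret is readable here (only base64-encoded)
ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/regsecret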
Secrets are stored in the only datastore a Kubernetes cluster has: etcd.
As with all other resources, they're retrieved when needed by the kubelet executable (which runs on every node) by querying the k8s API server.
If you are wondering how to actually access the secrets (the stored files),
kubectl -n kube-system exec -it <etcd-pod-name> -- ls -l /etc/kubernetes/pki/etcd
You will get a list of all the keys (the system default keys). You can simply view them using the cat command (if they are encrypted you won't see much).
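From the API side you can also read the secret back with kubectl and decode it, which shows that by default the data is only base64-encoded rather than encrypted. A sketch using the regsecret example from the question (the data key is .dockerconfigjson on recent versions; older clusters may use a different key name):

# The data fields are only base64-encoded, e.g.:
kubectl get secret regsecret -o yaml
# Decode a field (key name depends on the secret type/version)
kubectl get secret regsecret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d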

Get kubeconfig by ssh into cluster

If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node under /etc/kubernetes/admin.conf (on v1.8+).
On some versions of Kubernetes, this can also be found under ~/.kube
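If you just want to run commands from the master itself, a quick sketch (assuming a kubeadm-style layout and root access; paths may differ on your cluster):

# Use the admin kubeconfig directly on the master
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

# Or copy it to your workstation to use it remotely
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config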
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to the Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
So the answer to your question is yes: if you SSH into a node, you can then jump into a Pod and collect information to compose your own kubeconfig file. (See this question on how to disable this. I think there are also ways to disable it by default by forcing RBAC and disabling ABAC, but I might be wrong.)
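To make that last step concrete, here is a rough sketch of composing a kubeconfig from the ca.crt and token found inside a Pod (the API server address and the cluster/user/context names are placeholders, and the service account's permissions may of course be limited by RBAC):

# Run inside the container, or after copying ca.crt and token out of it
APISERVER=https://<master-ip>:6443
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

kubectl config set-cluster mycluster \
  --server="$APISERVER" \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --embed-certs=true
kubectl config set-credentials sa-user --token="$TOKEN"
kubectl config set-context mycontext --cluster=mycluster --user=sa-user
kubectl config use-context mycontext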