I've been working with Kubernetes for the past 6 months and we've deployed a few services.
We're just about to deploy another which stores encrypted data and puts the keys in KMS. This requires two service accounts, one for the data and one for the keys.
Access to this data must be audited. Since the data is very sensitive, we are reluctant to put both service accounts in the same namespace: if the namespace were compromised in any way, the attacker could gain access to both the data and the keys without it being audited.
For now we have one key in a secret and the other we're going to manually post to the single pod.
This is horrible as it requires that a single person be trusted with this key, and limits scalability. Luckily this service will be very low volume.
Has anyone else come up against the same problem?
How have you gotten around it?
cheers
Requirements
No single person ever has access to both keys (datastore and KMS)
All access to the data must be audited
If you enable audit logging, every API call made via this service account will be logged. This may not help you if your service is never called via the API, but since you have a service account in use, it sounds like it would be.
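To make that concrete, here is a minimal sketch of an audit policy rule for this situation. The namespaces and service account names below are made-up placeholders; adjust them to match your own accounts.

    # audit-policy.yaml - passed to the API server via --audit-policy-file.
    # Namespace and service account names are placeholders.
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      # Record full request and response bodies for anything these two
      # service accounts do through the Kubernetes API.
      - level: RequestResponse
        users:
          - "system:serviceaccount:secure-data:data-sa"
          - "system:serviceaccount:secure-keys:kms-sa"
      # Log everything else at Metadata level so the log stays manageable.
      - level: Metadata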
For now we have one key in a secret and the other we're going to manually post to the single pod.
You might consider using Vault for this. If you store the secret in Vault, you can use something like this to have the secret pushed down into the pod automatically. This is a little more involved than your process, but is considerably more secure.
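For what it's worth, a common way to do this today is the Vault Agent Sidecar Injector (which may or may not be the tool the link above refers to); note it renders the secret into a file inside the pod rather than an environment variable. A rough sketch, with a made-up Vault role name and secret path:

    # Deployment template annotations for the Vault Agent injector.
    # The "myapp" role and "secret/data/myapp/db" path are hypothetical.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
          annotations:
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/role: "myapp"
            # Renders the secret to /vault/secrets/db-creds inside the pod.
            vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/myapp/db"
        spec:
          serviceAccountName: myapp
          containers:
            - name: myapp
              image: registry.example.com/myapp:latest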
You can also use Vault alongside Google Cloud KMS, which is detailed in this article.
What you're describing is pretty common - using a key / service account / identity in Kubernetes secrets to access an external secret store.
I'm a bit confused by the double key concept - what are you gaining by having a key in both secrets and in the pod? If secrets are compromised, then etcd is compromised and you have bigger problems. I would suggest you focus instead on locking down secrets, using audit logs, and making the key easy to rotate in case of compromise.
A few items to consider:
If you're mostly using Kubernetes, consider storing (encrypted) secrets in Kubernetes secrets.
If you're storing secrets centrally outside of Kubernetes, like you're describing, consider just using a single Kubernetes secret - you will get Kubernetes audit logs for access to the secret (see the recommended audit-policy), and Cloud KMS audit logs for use of the key.
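As a sketch of that second item (all names are placeholders, and the ciphertext is whatever your KMS encrypt call produced): a single Secret holds only the KMS-wrapped data key, the pod mounts it read-only, and the application calls Cloud KMS at startup to unwrap it. Reads of the Secret then show up in the Kubernetes audit log, and the unwrap shows up in the Cloud KMS audit log.

    # Single Secret holding a Cloud KMS-wrapped data key (placeholder value).
    apiVersion: v1
    kind: Secret
    metadata:
      name: datastore-wrapped-key
      namespace: secure-data
    type: Opaque
    data:
      wrapped_key: "UExBQ0VIT0xERVI="   # base64 placeholder, not a real ciphertext
    ---
    # The pod mounts it read-only; the app calls Cloud KMS Decrypt at startup.
    apiVersion: v1
    kind: Pod
    metadata:
      name: datastore-app
      namespace: secure-data
    spec:
      serviceAccountName: data-sa
      containers:
        - name: app
          image: registry.example.com/datastore-app:latest
          volumeMounts:
            - name: wrapped-key
              mountPath: /etc/keys
              readOnly: true
      volumes:
        - name: wrapped-key
          secret:
            secretName: datastore-wrapped-key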
Related
I am using Hashicorp Vault to store multiple secrets in the KV Secrets engine, one of which is the database connection string - username, password, host IP and port. I have multiple microservices, which need to use this db secret to connect to the db.
Please clarify which of these integration patterns is valid:
Direct integration with Vault: each of the microservices will have a direct connection to Vault to get the secrets needed for its operation. All the microservices will have the Vault token configured (in K8s Secrets) for accessing Vault.
Retrieving secrets via another microservice: there would be an abstraction layer, i.e. a separate microservice for Vault interaction, and all the other microservices would call the APIs of this vault-microservice to get the secrets they want. The Vault token (in K8s Secrets) would be accessed by only one microservice.
The other microservice is an abstraction layer. It is extra work that might allow you to change secrets provider in the future.
Unless you can justify writing and maintaining that abstraction layer (because you want to use Vault in some deployments and AWS Secrets Manager in others), don't bother.
The other issue is that although Vault's KV store is quite common and there are several other implementations, what if you want to use Transit, PKI, or SSH CA? These services exist elsewhere (in AWS, for example), but they don't have feature parity. You probably don't want to be on the hook to support those differences in your abstraction layer.
A lower-cost alternative that lets you decouple the implementation from your code would be to wrap the Vault API class with a simple KVSecrets class of your own, a software design pattern known as the facade. But remember that unless you test your class against two services, you can't guarantee it will be possible to migrate to another service one day.
So considering all this, just call the API directly or use the Vault library for your programming language.
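For what it's worth, the direct pattern from the question usually ends up looking something like this (names and addresses are made up); each service gets its own token and policy rather than one shared token:

    # Direct-integration sketch: the microservice reads its own Vault token
    # from a Kubernetes Secret and talks to Vault itself.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: orders-service
      template:
        metadata:
          labels:
            app: orders-service
        spec:
          containers:
            - name: orders-service
              image: registry.example.com/orders-service:latest
              env:
                - name: VAULT_ADDR
                  value: "https://vault.example.com:8200"
                - name: VAULT_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: orders-service-vault-token
                      key: token
              # The service then uses Vault's KV API (or the Vault client
              # library for its language) to fetch the db connection string.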
Is it possible to configure k8s so that empty secrets are not possible?
I had a problem in a service where somehow the secret got overwritten with an empty one (zero bytes), and my service malfunctioned as a result. I see no advantage in having an empty secret at any time and would like to prevent empty secrets altogether.
Thanks for your help!
While it's not a simple answer to implement, as best I can tell what you are looking for is an Admission Controller, with a very popular one being OPA Gatekeeper.
The theory is that Kubernetes, as a platform, does not understand your business requirement to keep mistakes from overwriting Secrets. But OPA, as a policy rules engine, allows you to specify those things without requiring upstream Kubernetes to adopt those policies for everyone.
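To give an idea of what that looks like in practice, here is a rough, untested Gatekeeper sketch that rejects Secrets with no data; treat the exact schema and Rego as an approximation and check the Gatekeeper docs before relying on it:

    # ConstraintTemplate: defines the policy logic in Rego.
    apiVersion: templates.gatekeeper.sh/v1
    kind: ConstraintTemplate
    metadata:
      name: k8sdenyemptysecret
    spec:
      crd:
        spec:
          names:
            kind: K8sDenyEmptySecret
      targets:
        - target: admission.k8s.gatekeeper.sh
          rego: |
            package k8sdenyemptysecret

            violation[{"msg": msg}] {
              # Reject Secrets whose data and stringData are both missing or empty.
              count(object.get(input.review.object, "data", {})) == 0
              count(object.get(input.review.object, "stringData", {})) == 0
              msg := "Secrets must not be empty"
            }
    ---
    # Constraint: applies the template to Secret objects.
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sDenyEmptySecret
    metadata:
      name: deny-empty-secrets
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Secret"]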
An alternative is to turn on audit logging and track down the responsible party for re-education.
A further alternative is to scope RBAC Roles correctly so that only credentials known to be trusted can write to Secrets.
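RBAC has no explicit deny, so in practice this means not granting the write verbs on Secrets to anyone except the identities you trust. A minimal sketch with made-up names, where only a CI service account can modify Secrets in the namespace:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: secret-writer
      namespace: myapp
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["create", "update", "patch", "delete"]
    ---
    # Bind the write role only to the trusted deployer; everyone else gets
    # at most read access through their own (separate) roles.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: secret-writer-ci
      namespace: myapp
    subjects:
      - kind: ServiceAccount
        name: ci-deployer
        namespace: myapp
    roleRef:
      kind: Role
      name: secret-writer
      apiGroup: rbac.authorization.k8s.io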
While setting up automated deployments with Kubernetes (and Helm), I came across the following question several times:
How important is the safety of a service's password (MySQL, for example) inside a single namespace?
My thoughts: it's not important at all. Why? All related pods include the password anyway, and the services are not available outside of the specific namespace. Even if someone gained access to a pod in that namespace, printenv would give them everything they need.
My specific case (Helm): if I set up my MySQL server as a requirement (requirements.yaml), I don't have to use any secrets or make an effort to share the MySQL password, and can simply provide the password in values.yaml.
While Kubernetes secrets aren't that secret, they are more secret than Helm values. Fundamentally I'd suggest this question is more about how much you trust humans with the database password than any particular process. Three approaches come to mind:
You pass the database password via Helm values. Helm isn't especially access-controlled, so anyone who can helm install or helm rollback can also helm get values and find out the password. If you don't care whether these humans have the password (all deployments are run via an automated system; all deployments are run by the devops team who has all the passwords anyways; you're a 10-person startup) then this works.
The database password is in an RBAC-protected Secret. You can use Kubernetes role-based access control so that ordinary users can't directly read the contents of Secrets. Some administrator creates the Secret, and the Pod mounts it or injects it as an environment variable. Now you don't need the password yourself to be able to deploy, and you can't trivially extract it (but it's not that much work to dump it out, if you can launch an arbitrary container). A minimal sketch of this approach follows below.
The application gets the database password from some external source at startup time. Hashicorp's Vault is the solution I've worked with here: the Pod runs with a Kubernetes service account, which it uses to get a token from Vault, and then it uses that to get the database password. The advanced version of this hands out single-use credentials that can be traced back to a specific Pod and service account. This is the most complex path, but also the most secure.
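As a concrete sketch of the second approach (all names invented): an administrator creates the Secret once, outside of Helm, and the chart only references it by name, so the password never has to appear in values.yaml.

    # Created once by an administrator, not managed by Helm.
    apiVersion: v1
    kind: Secret
    metadata:
      name: mysql-credentials
    type: Opaque
    stringData:
      MYSQL_PASSWORD: "change-me"   # placeholder
    ---
    # The application only references the Secret by name.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:latest
              env:
                - name: MYSQL_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql-credentials
                      key: MYSQL_PASSWORD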
I was doing some research, but could not really find an answer in the K8s documentation. Is it possible to orchestrate that certain pods in a Kubernetes cluster have access to other certain resources outside of the cluster without giving the permissions to the whole cluster?
For example: a pod accesses data from Google storage. To avoid hard-coding credentials, I want it to be able to access the storage via RBAC/IAM, but on the other hand I do not want another pod in the cluster to be able to access the same storage.
This is necessary because users interact with those pods and the data in the storage is subject to privacy restrictions.
The only way I see so far is to create a service account for that resource and pass the credentials of the service account to the pod. So far I am not really satisfied with this solution, as passing around credentials seems insecure to me.
Unfortunately, there is only one way to do this, and you already wrote that it looks insecure to you. I found an example in the documentation, and it uses exactly this approach: the service account's credential is stored in a Secret, and the pod then consumes it from that Secret.
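The documentation example boils down to something like the sketch below (names and paths are placeholders): the Google service account's key file lives in a Kubernetes Secret that only this workload can read, and the pod points GOOGLE_APPLICATION_CREDENTIALS at the mounted file. Scoping which pods can use it is then a matter of namespaces and RBAC on that Secret.

    # kubectl create secret generic storage-reader-key \
    #   --from-file=key.json=<downloaded-service-account-key>.json
    apiVersion: v1
    kind: Pod
    metadata:
      name: storage-reader
      namespace: team-a
    spec:
      containers:
        - name: app
          image: registry.example.com/storage-reader:latest
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
              readOnly: true
      volumes:
        - name: google-cloud-key
          secret:
            secretName: storage-reader-key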
I've started playing with Hashicorp's Vault to manage secrets and had some questions about the day-to-day of Vault sealing. My workflow has two auth backends; specific users access Vault with write access to add new secrets, servers have readonly access for the secrets they need.
1) Under normal circumstances, does the Vault stay in an unsealed state? I believe it would as a dynamically provisioned server should not have to coordinate an unseal.
2) Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?
3) What's the best practice for ensuring the vault process is always running, since if it dies the Vault will seal? Also, in a highly available configuration, if one Vault node's process dies, does it seal the Vault for everyone?
I asked this question on the Vault Google Group and this was the best response:
1) Under normal circumstances, does the Vault stay in an unsealed state? I believe it would as a dynamically provisioned server should not have to coordinate an unseal.

Yes. Once Vault is initialized and unsealed, it 'normally' stays in an unsealed state.

2) Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?

Sealing Vault provides a turnkey mechanism to stop all of Vault's services. It would require a specific number of unseal key holders to make Vault operational again.

3) What's the best practice for ensuring the vault process is always running, since if it dies the Vault will seal? Also, in a highly available configuration, if one Vault node's process dies, does it seal the Vault for everyone?

There is no official best-practice recommendation for this, but running Vault in a dedicated instance/cluster with very limited or no access to its memory is a good idea, as is running Vault in HA mode using a backend which supports it. If any of the cluster nodes goes down or the Vault process is restarted, that node will be in a sealed state and will require the unseal operation to be performed to make it operational again.

Best,
Vishal
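On question 3: in an HA deployment the standby nodes are already unsealed, so one node's process dying doesn't seal Vault for everyone; only the restarted node comes back sealed and has to be unsealed again. If you happen to run Vault on Kubernetes, the official hashicorp/vault Helm chart can set this up with integrated Raft storage; a rough sketch of the values (check the chart docs for the current schema):

    # values.yaml for the hashicorp/vault Helm chart (sketch only).
    server:
      ha:
        enabled: true
        replicas: 3
        raft:
          enabled: true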
Taken from https://www.vaultproject.io/docs/concepts/seal.html:
"Under normal circumstances, does the Vault stay in an unsealed state?" -
When a Vault server is started, it starts in a sealed state. In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it.
Unsealing is the process of constructing the master key necessary to read the decryption key to decrypt the data, allowing access to the Vault.

Prior to unsealing, almost no operations are possible with Vault. For example authentication, managing the mount tables, etc. are all not possible. The only possible operations are to unseal the Vault and check the status of the unseal.
"Is the purpose of sealing to off-board staff to rotate keys and in case of an intrusion?" -
This way, if there is a detected intrusion, the Vault data can be locked quickly to try to minimize damages. It can't be accessed again without access to the master key shards.
"since if it dies the Vault will seal?" - yes.