I set up a managed Kubernetes cluster via Amazon EKS. Now I'm wondering whether Kubernetes Secrets are stored securely (at rest and in transit) by default, or whether additional configuration is necessary.
The relevant Kubernetes docs are not really helpful here; all they say is that secret encryption depends on the cloud provider.
Any help, links or samples are greatly appreciated!
It appears the answer is yes, they are encrypted at rest, but this is hearsay. There's an open GitHub issue on this topic, and nobody else can find definitive docs stating this, either.
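For what it's worth, EKS also lets you turn on KMS envelope encryption for Secrets explicitly, so you don't have to rely on whatever the default happens to be. Below is a minimal sketch using boto3; the region, cluster name, and KMS key ARN are placeholders, not values from this thread.

```python
# Hedged sketch: enable KMS envelope encryption for Kubernetes Secrets
# on an existing EKS cluster. Region, cluster name, and key ARN are placeholders.
import boto3

eks = boto3.client("eks", region_name="eu-central-1")  # placeholder region

response = eks.associate_encryption_config(
    clusterName="my-cluster",           # placeholder cluster name
    encryptionConfig=[{
        "resources": ["secrets"],       # Secrets are the resource type that supports envelope encryption
        "provider": {"keyArn": "arn:aws:kms:eu-central-1:111122223333:key/EXAMPLE"},  # placeholder key
    }],
)

# AssociateEncryptionConfig kicks off an asynchronous cluster update.
print(response["update"]["status"])
```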
Can anyone suggest how I should manage my Kubernetes secrets? Until now I have used kubectl or Helm to apply secrets from my local system, but I guess this is not the right way to do it. I have also referred to a few docs about managing Kubernetes secrets; some of them are mentioned below:
HashiCorp Vault, which can also be used to manage secrets: https://www.vaultproject.io/use-cases/kubernetes
AWS Secrets Manager: https://aws.amazon.com/secrets-manager/
Still, I am looking for other available options to manage secrets. Please suggest the best and most secure way to store and manage secrets in Kubernetes. For your information, I am using an AWS EKS cluster, so please help me out.
I set up an EKS cluster and integrated AWS Secrets Manager with it following the steps in https://github.com/aws/secrets-store-csi-driver-provider-aws, and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps, as they seem to be explicitly for AWS EKS-based clusters.
I googled around a bit and found that you can call Secrets Manager programmatically using one of the ways in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but this approach won't work for us.
Is there a k8s way to connect directly to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.
You can set up external OIDC providers with AWS and also set up K8s with OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which will let you use host-based certificates to authenticate, but you will still have to call the Secrets Manager APIs.
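For completeness, calling the Secrets Manager API from an on-premises pod can be as small as the sketch below. It assumes the pod already has AWS credentials available (for example via IAM Roles Anywhere, environment variables, or a credentials file); the region and secret name are placeholders.

```python
# Hedged sketch: read a secret value directly from AWS Secrets Manager.
# Assumes AWS credentials are already available to the process
# (e.g. via IAM Roles Anywhere, env vars, or a shared credentials file).
import boto3

client = boto3.client("secretsmanager", region_name="eu-west-1")   # placeholder region

resp = client.get_secret_value(SecretId="my-app/db-credentials")   # placeholder secret name
secret_string = resp["SecretString"]                                # JSON or plain text, as stored
print(secret_string)
```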
If you are willing to have the secrets synced into the cluster as Kubernetes Secrets (which etcd may store only base64-encoded), you can look at using the open-source External Secrets solution.
I am trying to set up Google Kubernetes Engine, and its pods have to communicate with a Cloud SQL database. The Cloud SQL database credentials are stored in Google Cloud Secret Manager. How will the pods fetch credentials from Secret Manager, and if the credentials in Secret Manager are updated, how will the pods pick up the new secret?
How do I set up the above requirement? Can someone please help with this?
Thanks,
Anand
You can make your deployed application fetch the secret (password) programmatically from Google Cloud Secret Manager. You can find examples in many languages at the following link: https://cloud.google.com/secret-manager/docs/samples/secretmanager-access-secret-version
But first, make sure that your GKE setup, and more specifically your application, is able to authenticate to Google Cloud Secret Manager. The following links can help you choose the appropriate approach:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
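As a concrete illustration of the first link, a minimal Python read might look like the sketch below; the project and secret names are placeholders, and the pod is assumed to already be authenticated (for example through Workload Identity).

```python
# Hedged sketch: read the latest version of a secret from Google Cloud Secret Manager.
# Project and secret names are placeholders; authentication (e.g. Workload Identity)
# is assumed to be configured already.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/cloudsql-password/versions/latest"  # placeholders

response = client.access_secret_version(request={"name": name})
password = response.payload.data.decode("UTF-8")
```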
You can find information regarding that particular solution in this doc.
There are also good examples on Medium here and here.
To answer your question regarding updating the secrets:
Usually secrets are pulled when the container is being created, but if you expect the credentials to change often (or the pods to stick around for very long), you can adjust the code to re-fetch the secrets on every execution.
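To make that concrete, one hedged pattern is to re-resolve the `latest` version every time you open a database connection instead of caching the value at startup; the secret name is a placeholder and `connect_to_cloudsql` is a hypothetical helper standing in for your actual DB client code.

```python
# Hedged sketch: re-read the secret on every connection attempt so a rotated
# password is picked up without restarting the pod.
from google.cloud import secretmanager

SECRET_NAME = "projects/my-project/secrets/cloudsql-password/versions/latest"  # placeholder
client = secretmanager.SecretManagerServiceClient()

def current_db_password() -> str:
    # Always resolves the newest enabled version rather than a cached value.
    resp = client.access_secret_version(request={"name": SECRET_NAME})
    return resp.payload.data.decode("UTF-8")

def get_connection():
    # Fetch the freshest credential each time instead of caching it at startup.
    return connect_to_cloudsql(password=current_db_password())  # hypothetical helper
```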
I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by HashiCorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and Youtube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster - thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe that I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to using Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
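On the "pods just need a stable endpoint" point: as a rough sketch (and not an endorsement of running Vault on Cloud Run), a pod would typically reach Vault through a URL injected via configuration, for example with the hvac client below. The address, token handling, and secret path are all placeholders, and this ignores production concerns such as auth methods, TLS verification, and Vault's storage backend.

```python
# Hedged sketch: a pod talking to a Vault server at a fixed HTTPS endpoint.
# VAULT_ADDR / VAULT_TOKEN are assumed to be injected into the pod's environment;
# the KV path and key are placeholders.
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],      # e.g. the stable HTTPS URL of the Vault service
    token=os.environ["VAULT_TOKEN"],
)

# Read a key/value secret from the KV v2 engine mounted at the default "secret/" path.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")  # placeholder path
db_password = secret["data"]["data"]["db_password"]                      # placeholder key
```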
Secret Manager has been in beta for three weeks now and has not yet been officially announced (it should be in a couple of days), but you can have a look at it. It's a serverless secret manager with, I think, all the basic features that you need.
The main reason it has not yet been announced is that the client libraries in several languages aren't yet released/finished.
The awesome guy on your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses KMS to encrypt the secrets, and uses Google Cloud Storage to store them. I also recommend it.
I built a Python library to make it easy to use Berglas secrets in Python.
Hope that this secret management tool will meet your expectations. In any case, it's serverless and quite cheap!
I was doing some research but could not really find an answer in the K8s documentation. Is it possible to arrange for certain pods in a Kubernetes cluster to have access to certain resources outside of the cluster without granting those permissions to the whole cluster?
For example: a pod accesses data from Google Cloud Storage. To avoid hard-coding credentials, I want it to access the storage via RBAC/IAM, but on the other hand I do not want other pods in the cluster to be able to access the same storage.
This is necessary because users interact with those pods, and the data in the storage has privacy restrictions.
The only way I see so far is to create a service account for that resource and pass the service account's credentials to the pod. So far I am not really satisfied with this solution, as passing credentials around seems insecure to me.
Unfortunately, there is only one way to do this, and as you wrote, it looks insecure to you. I found an example in the documentation: the approach is to store the service account's credentials in a Secret and then have the pod use them from that Secret.
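To illustrate that approach: the service account key is stored in a Kubernetes Secret (created, for example, with kubectl create secret generic gcs-key --from-file=key.json, where the Secret name is a placeholder), mounted only into the pod that needs it, and the application picks it up through GOOGLE_APPLICATION_CREDENTIALS. A minimal, hedged pod-side sketch, with bucket and object names as placeholders:

```python
# Hedged sketch: application code inside the pod reading from Google Cloud Storage.
# It assumes the service-account key from the Kubernetes Secret is mounted into the
# container and GOOGLE_APPLICATION_CREDENTIALS points at that file, so only pods
# that mount this Secret can reach the bucket. Names are placeholders.
from google.cloud import storage

# The client picks up credentials from GOOGLE_APPLICATION_CREDENTIALS automatically.
client = storage.Client()

bucket = client.bucket("my-private-bucket")     # placeholder bucket
blob = bucket.blob("restricted/data.csv")       # placeholder object
data = blob.download_as_bytes()
```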