Dynamic token generation before deployment in Kubernetes

I am fairly new to Kubernetes and learning Kubernetes deployments from scratch. For a microservice-based project that I am working on, each microservice has to authenticate with its own client-id and client-secret to the auth server before requesting any information (JWT). These ids and secrets are required for each service and need to be in its environment variables. Initially the auth service will generate those ids and secrets via database seeds. What is the best way in the world of Kubernetes to automatically set these values in the environment of a pod deployment before pod creation?

Depends on how automatic you want it to be. A simple approach would be an initContainer that provisions a new token and puts it in a file on a shared volume, plus an entrypoint script in the main container which reads the file and sets the env var.
The problem with that is that authenticating the initContainer is hard. The big-hammer solution would be to write a custom operator to manage this, but if you're new to Kubernetes that's going to be super hard and probably overkill anyway.
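A minimal sketch of that initContainer approach, assuming a hypothetical provisioning endpoint on the auth service that returns CLIENT_ID=... and CLIENT_SECRET=... as sourceable shell lines, and hypothetical image and service names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service                  # hypothetical microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      volumes:
        - name: creds
          emptyDir: {}                  # scratch volume shared between containers
      initContainers:
        - name: fetch-credentials
          image: curlimages/curl:8.8.0
          command: ["sh", "-c"]
          # Hypothetical endpoint; replace with however your auth service
          # actually hands out the seeded credentials.
          args:
            - curl -sf http://auth-service/provision > /creds/client.env
          volumeMounts:
            - name: creds
              mountPath: /creds
      containers:
        - name: app
          image: registry.example.com/orders-service:latest   # hypothetical
          # Entrypoint wrapper: export the provisioned values, then start the app.
          command: ["sh", "-c", ". /creds/client.env && exec /orders-service"]
          volumeMounts:
            - name: creds
              mountPath: /creds
              readOnly: true

The emptyDir volume only lives as long as the pod, so the credentials never land in anything persistent; the hard part, as noted above, is deciding how the initContainer itself authenticates to the auth service.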

Related

How can a sidecar restart the app container or its own pod?

I want to have a sidecar manage secret rotation, which requires the app container to restart in order to force it to pick up the updated credentials.
How can a sidecar force a container within the same pod to restart or the whole pod to restart?
Detailed explanation:
Services of different tech stacks need to start using secrets. Secrets can either be injected via CI/CD or fetched at runtime from AWS Secrets Manager.
Secrets need to be rotated every 3 months for security compliance reasons.
Secrets are only used once at startup to create the related client. Since they are not used continuously, they are not naturally refreshed if updated at source storage.
To minimise per-service development time, custom logic within each service to refresh the secrets should be avoided or minimised.
There is a pattern for rotating secrets without needing to restart containers if you are running apps in EKS or ECS:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/rotate-credentials-without-restarting-containers.html
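If you do still want the restart behaviour rather than avoiding restarts, one building block is process namespace sharing: with spec.shareProcessNamespace: true the sidecar can see and signal the app's process, and once that process exits the kubelet restarts the container according to the pod's restartPolicy. A rough sketch, with hypothetical image, secret, and process names and a deliberately naive change-detection loop:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-rotator                 # hypothetical
spec:
  shareProcessNamespace: true            # containers can see each other's processes
  volumes:
    - name: db-creds
      secret:
        secretName: db-credentials       # hypothetical Secret that gets rotated
  containers:
    - name: app
      image: registry.example.com/my-app:latest   # hypothetical
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets
          readOnly: true
    - name: restarter
      image: busybox:1.36
      command: ["sh", "-c"]
      # Naive watcher: when the mounted secret changes (the kubelet refreshes
      # secret volumes automatically), signal the app's main process so the
      # container exits and is restarted with the new value. Signalling across
      # containers requires running as the same user (or having CAP_KILL).
      args:
        - |
          last=$(md5sum /etc/secrets/password | cut -d' ' -f1)
          while true; do
            sleep 30
            now=$(md5sum /etc/secrets/password | cut -d' ' -f1)
            if [ "$now" != "$last" ]; then
              last=$now
              pkill -TERM -f my-app || true   # hypothetical process name
            fi
          done
      volumeMounts:
        - name: db-creds
          mountPath: /etc/secrets
          readOnly: true

Note that this restarts only the app container, not the whole pod; if you need the entire pod recreated, you would have to delete the pod via the API instead.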

Using a Google service account keyfile in a Kubernetes serviceaccount as a testing environment replacement for GKE workload identity

I have a GKE app that uses Kubernetes serviceaccounts linked to Google service accounts for API authorization in-app.
Up until now, to test these locally, I had two versions of my images: one with and one without a test-keyfile.json copied into them for authorization. (The production images used the serviceaccount for authorization; the test environment would ignore the serviceaccounts and instead look for a keyfile which gets copied in during the image build.)
I was wondering if there was a way to merge the images into one, and have both prod/test use the Kubernetes serviceaccount for authorization. On production, use GKE's workload identity, and in testing, use a keyfile(s) linked with or injected into a Kubernetes serviceaccount.
Is such a thing possible? Is there a better method for emulating GKE workload identity on a local test environment?
I do not know of a way of emulating Workload Identity on a non-Google Kubernetes cluster, but you could change your app to read the auth credentials from a volume/file or from the metadata server, depending on an environment setting. See this article (and particularly the code linked there) for an example of how to authenticate using local credentials or a Google service account depending on environment variables. The article also shows how to use pod overlays to keep the prod vs. dev changes separate from the bulk of the configuration.
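A rough sketch of that kind of overlay, with hypothetical names: production keeps the plain Deployment and relies on Workload Identity through the annotated Kubernetes service account, while a test-only patch mounts a keyfile Secret and points GOOGLE_APPLICATION_CREDENTIALS at it, so the same image picks up Application Default Credentials either way:

# Test-environment patch only (e.g. applied as a kustomize overlay);
# the production manifest omits all of this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                           # hypothetical
spec:
  template:
    spec:
      volumes:
        - name: gcp-key
          secret:
            secretName: test-keyfile     # Secret created from test-keyfile.json
      containers:
        - name: my-app
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/test-keyfile.json
          volumeMounts:
            - name: gcp-key
              mountPath: /var/secrets/google
              readOnly: true

The Google client libraries check GOOGLE_APPLICATION_CREDENTIALS first and fall back to the metadata server, which is what Workload Identity provides in production.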

Is Kubernetes' ETCD exposed for us to use?

We are working on provisioning our service using Kubernetes, and the service needs to register/unregister some data for scaling purposes. Let's say the service handles long-held transactions, so when it starts/scales out, it needs to store the starting and ending transaction ids somewhere. When it scales out further, it will need to find the next transaction id and save it with the ending transaction id that is covered. When it scales in, it needs to delete the transaction ids, etc. etcd seems to make the cut: it is used by Kubernetes to store deployment data, and not only is it close to Kubernetes, it is actually inside and maintained by Kubernetes; thus we'd like to find out whether it is open for our use. I'd like to ask the question for EKS, AKS, and self-installed clusters. Any advice welcome. Thanks.
Do not use the Kubernetes etcd directly for an application.
Access to read/write data in the Kubernetes etcd store is root access to every node in your cluster. Even if you are well versed in etcd v3's role-based security model, avoid sharing that specific etcd instance so you don't increase your cluster's attack surface.
For EKS and GKE, the etcd cluster is hidden in the provided cluster service so you can't break things. I would assume AKS takes a similar approach, unless they expose to you the instances that run the management nodes.
If the data is small and not heavily updated, you might be able to reuse the Kubernetes etcd store via the Kubernetes API. Create a ConfigMap or a custom resource definition for your data and edit it via the easily securable and namespaced functionality of the Kubernetes API.
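A rough sketch of that ConfigMap idea, with hypothetical names; each replica reads and patches this object through the Kubernetes API instead of touching etcd itself:

apiVersion: v1
kind: ConfigMap
metadata:
  name: txn-ranges                 # hypothetical
  namespace: my-service            # hypothetical
data:
  # one entry per replica: the transaction id range it currently covers
  replica-0: "1000-1999"
  replica-1: "2000-2999"

Updates go through normal RBAC and namespacing, and the API's resourceVersion gives you optimistic concurrency if two replicas try to update the object at once.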
For most application uses, run your own etcd cluster (or whatever service) to keep Kubernetes free to do its workload scheduling. The CoreOS etcd operator will let you define and create new etcd clusters easily.

Importance of password security within kubernetes namespaces?

While setting up automated deployments with Kubernetes (and Helm), I came across the following question several times:
How important is the safety of a service's password (MySQL, for example) inside a single namespace?
My thoughts: it's not important at all. Why? All related pods include the password anyway, and the services are not available outside of the specific namespace. If someone were to gain access to a pod in that specific namespace, printenv would give them all they need.
My specific case (Helm): if I set up my MySQL server as a requirement (requirements.yaml), I don't have to use any secrets or make an effort to share the MySQL password, and can provide the password in values.yaml.
While Kubernetes secrets aren't that secret, they are more secret than Helm values. Fundamentally I'd suggest this question is more about how much you trust humans with the database password than any particular process. Three approaches come to mind:
You pass the database password via Helm values. Helm isn't especially access-controlled, so anyone who can helm install or helm rollback can also helm get values and find out the password. If you don't care whether these humans have the password (all deployments are run via an automated system; all deployments are run by the devops team who has all the passwords anyways; you're a 10-person startup) then this works.
The database password is in an RBAC-protected Secret. You can use Kubernetes role-based access control so that ordinary users can't directly read the contents of Secrets. Some administrator creates the Secret, and the Pod mounts it or injects it as an environment variable. Now you don't need the password yourself to be able to deploy, and you can't trivially extract it (but it's not that much work to dump it out, if you can launch an arbitrary container).
The application gets the database password from some external source at startup time. Hashicorp's Vault is the solution I've worked with here: the Pod runs with a Kubernetes service account, which it uses to get a token from Vault, and then it uses that to get the database password. The advanced version of this hands out single-use credentials that can be traced back to a specific Pod and service account. This is the most complex path, but also the most secure.
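A minimal sketch of the second approach, with hypothetical names: an administrator creates the Secret once (and RBAC keeps get on secrets out of the ordinary developer role), while the Deployment only references the Secret by name:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials          # created once by an administrator
type: Opaque
stringData:
  MYSQL_PASSWORD: change-me        # real value supplied out of band
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                     # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # hypothetical
          env:
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: MYSQL_PASSWORD

In this setup helm install and helm get values never see the password, because the chart only carries the Secret's name.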

Give pod in Kubernetes cluster rights to access Google storage (RBAC/IAM)

I was doing some research but could not really find an answer in the K8s documentation. Is it possible to arrange for certain pods in a Kubernetes cluster to have access to certain resources outside of the cluster, without giving those permissions to the whole cluster?
For example: a pod accesses data in Google storage. To avoid hard-coding credentials, I want it to access the storage via RBAC/IAM, but on the other hand I do not want another pod in the cluster to be able to access the same storage.
This is necessary as users interact with those pods and the data in the storage has privacy restrictions.
The only way I see so far is to create a service account for that resource and pass the credentials of the service account to the pod. So far I am not really satisfied with this solution, as passing around credentials seems insecure to me.
Unfortunately, there is only one way to do this, and as you note, it can feel insecure. The documentation has an example of this approach, where you store the service account's credentials in a Secret and the pod then uses them from that Secret.
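A minimal sketch of that documented pattern, with hypothetical names: create a Secret from the downloaded service-account key in the pod's namespace, mount it only into the pods that should have access, and point GOOGLE_APPLICATION_CREDENTIALS at the mounted file:

# First: kubectl create secret generic gcs-reader-key --from-file=key.json
apiVersion: v1
kind: Pod
metadata:
  name: storage-consumer                 # hypothetical
spec:
  volumes:
    - name: gcs-key
      secret:
        secretName: gcs-reader-key
  containers:
    - name: app
      image: registry.example.com/storage-consumer:latest   # hypothetical
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
      volumeMounts:
        - name: gcs-key
          mountPath: /var/secrets/google
          readOnly: true

Only pods whose spec references that Secret get the key, and Kubernetes RBAC controls who can read the Secret itself.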