Kubernetes global variables (for all namespaces)

I need to create and maintain some global variables that are accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.
For example:
- APM endpoint
- APM user/pass
- RabbitMQ endpoint
- MongoDB endpoint
Whenever I change or migrate any global variable, I want to make the change once for all running applications in the cluster (only a pod restart should be needed). But if I create a "global" ConfigMap and read it with envFrom, I have to change/update the ConfigMap in every namespace.
Does anyone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but then I would need to adapt all applications to use Vault; maybe there is a better idea.
Thanks

There is no built-in solution in Kubernetes for this beyond creating a ConfigMap and using envFrom to expose all of the ConfigMap's data as Pod environment variables, which will indeed require updating it separately in each namespace. So using HashiCorp Vault is a better solution here; one more option is to customize env handling with a Kubernetes add-on.
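For reference, this is the per-namespace ConfigMap pattern being discussed; all names and values here are only illustrative:

# A "global" ConfigMap that has to be copied into every namespace
apiVersion: v1
kind: ConfigMap
metadata:
  name: global-env        # hypothetical name
  namespace: team-a       # must be repeated per namespace
data:
  APM_ENDPOINT: "https://apm.example.internal"
  RABBITMQ_ENDPOINT: "amqp://rabbitmq.example.internal:5672"
  MONGODB_ENDPOINT: "mongodb://mongo.example.internal:27017"
---
# Each Deployment then imports the whole map as environment variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical app
  namespace: team-a
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # hypothetical image
          envFrom:
            - configMapRef:
                name: global-env

Changing a value then means editing the ConfigMap in every namespace and restarting the pods, which is exactly the duplication the question is trying to avoid.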

Related

K8s RBAC needed when no API calls?

My pod is running with the default service account. It uses Secrets through mounted files and ConfigMaps, but these are defined in YAML and the pod does not contain kubectl or a similar component.
Is there any point to using RBAC if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
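As a sketch, the RBAC objects for that hypothetical orchestrator might look like this (all names are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: create-jobs
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-create-jobs
subjects:
  - kind: ServiceAccount
    name: orchestrator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: create-jobs

The orchestrator's Deployment would then set serviceAccountName: orchestrator in its Pod spec, so the application authenticates to the API with exactly those permissions.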
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are effectively there to read; they are base64 encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
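For illustration, a minimal sketch of such a Pod (the Pod and Secret names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper          # hypothetical
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      # Print every file in the mounted Secret to the Pod logs
      command: ["sh", "-c", "cat /secrets/*"]
      volumeMounts:
        - name: target
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: target
      secret:
        secretName: my-secret  # any Secret in the Pod's namespace

Anyone who can create Pods in the namespace and read their logs can recover the Secret values this way, which is why RBAC on Secrets alone is only a partial control.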

How to use Vault dynamic secrets and inject them as Environment Variables to Kubernetes deployment?

We run a Vault cluster (deployed by Helm) and some microservices, all on k8s.
Our MongoDB Atlas connection string is configured as an ENV on the microservices' deployment.
We want to keep using ENV without changing the code to read the Vault config file, so we tried the examples from here:
https://www.vaultproject.io/docs/platform/k8s/injector/examples
The injection to ENV works, but when Vault rotates the credentials we need to recreate the pod so that it injects into the ENV again.
I would like to know how we can use Vault's dynamic-secrets functionality with ENV on k8s, if you have any suggestions.
Thanks
If you are using an environment variable to inject a secret, you will need to recreate the pod whenever the secret changes (as you've found), because environment variables are only set at pod startup; it is not possible to change an environment variable for a running application. If you want your application to support credentials that change while it is running, I'm afraid you will need to add support for that to your application (and change from using an env var to reading the details from a file when required).
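As a sketch of the file-based approach with the Vault agent injector, using the standard injector annotations (the role name, secret path, and app names are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-service }
  template:
    metadata:
      labels: { app: my-service }
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-service"   # assumed Vault Kubernetes-auth role
        # The agent sidecar renders this secret to /vault/secrets/mongo-uri
        # and re-renders the file when Vault rotates the credentials; the
        # application re-reads the file instead of reading an env var once.
        vault.hashicorp.com/agent-inject-secret-mongo-uri: "database/creds/my-service"
    spec:
      containers:
        - name: my-service
          image: my-service:latest   # hypothetical image

The trade-off is exactly the one described above: the file stays current across rotations, but the application code has to read it on demand rather than relying on a startup-time environment variable.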

Best practice for shared K8s Secrets in Helm 3?

I have a couple of charts which all need access to the same Kubernetes Secret. My initial plan was to create a chart just for those Secrets, but it seems Helm doesn't like that. I'm thinking this must be a common problem and am wondering what folks generally do to solve it?
Thanks!
Best practice is: don't store any sensitive secrets in Kubernetes clusters. A Kubernetes Secret is encoded, not encrypted.
You can reference the secret via AWS SSM/Secrets Manager, HashiCorp Vault, or other similar tools.
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets
Most charts that follow the common chart development practices allow you to use an existing Secret instead of creating one for you. This way, you can create your common Secrets normally (without Helm) and refer to them from the charts that need them, via a reference like an existingSecret config key.
Take the minio Helm chart for example: it accepts an existingSecret key as an alternative to passing an accessKey and a secretKey.
As you can see in the main charts repo, this is a pretty common practice.
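A sketch of how that looks with minio (the Secret name is hypothetical, and the key names follow the older stable/minio chart convention; verify them against the chart version you use):

# Created once, outside of Helm, and shared by any chart that needs it
apiVersion: v1
kind: Secret
metadata:
  name: shared-minio-creds        # hypothetical name
type: Opaque
stringData:
  accesskey: example-access-key   # key names assumed; check the chart's docs
  secretkey: example-secret-key

The chart is then pointed at it instead of being given the credentials directly:

helm install my-minio stable/minio --set existingSecret=shared-minio-creds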

Change the Spring Boot Admin registry unique ID

I have a requirement where my client applications have almost the same properties, and even the URL is the same since they run behind a load balancer; the only difference between them is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading Environment Variables from a Kubernetes Secret.
The second would be using Helm (https://helm.sh/):
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you used the Secret option, you would probably create two separate Secrets with the env variables you need and load them based on the app name; if the apps are set up in different namespaces, you would copy the Secret into each one, since those resources do not work across namespaces.
If you used Helm, you would have to write your chart and put the env variables into values.yaml, or mix the two approaches and load the Secret from inside Kubernetes.
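A minimal sketch of the Secret option (the names and the distinguishing property are made up for illustration):

apiVersion: v1
kind: Secret
metadata:
  name: client-a-env        # one Secret per client application
type: Opaque
stringData:
  CLIENT_LABEL: client-a    # the env property that differs between the apps
---
apiVersion: v1
kind: Secret
metadata:
  name: client-b-env
type: Opaque
stringData:
  CLIENT_LABEL: client-b

Each Deployment then loads its own Secret, e.g. with envFrom: [ { secretRef: { name: client-a-env } } ], so the applications share everything except that one set of variables.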
This will work on Kubernetes, I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.

How to execute shell commands from within a Kubernetes ConfigMap?

I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key in order to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image, and I want to be able to mount this config file at runtime. A ConfigMap seems like a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve sdk from aws secrets manager
I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr: You can't execute a ConfigMap; it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as files or environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data, Kubernetes has a specific resource, called Secrets, to use for secret data. It can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description, it sounds like your best option is to use an init container to retrieve the credentials and write them to a shared emptyDir volume that is also mounted into the application container that will use them.
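A minimal sketch of that pattern, assuming the AWS CLI image and a Secrets Manager secret id of app/sdk-key (both assumptions, as are the container and file names):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sdk-key        # hypothetical
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: fetch-sdk-key
      image: amazon/aws-cli     # image assumed to provide the AWS CLI
      command:
        - sh
        - -c
        # Fetch the secret and render the config file the app expects;
        # the secret id and file layout are illustrative only
        - >
          aws secretsmanager get-secret-value
          --secret-id app/sdk-key
          --query SecretString --output text > /config/app.conf
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest      # hypothetical application image
      volumeMounts:
        - name: config
          mountPath: /etc/app   # the app reads /etc/app/app.conf

In a real cluster the init container also needs AWS credentials (for example via IAM Roles for Service Accounts); that part is omitted here.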