Loading secrets as ENV from init container - kubernetes

We are storing secrets in GCP Secret Manager. During an app deployment we use an init container which fetches the secrets and places them in a volume (path). Going forward, the requirement is to load the secrets as environment variables in the main container that needs them, instead of reading them from paths. How can this be achieved? Any workaround?
Thank you!

You can copy the secrets from GSM into a Kubernetes Secret and then use that with a normal envFrom, or you can have the init container write a file into a shared emptyDir volume and then change the command of the main container to something like command: [bash, -c, "source /shared/env && exec original command"]. The latter requires you to rewrite the command fully, though, which is annoying.
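For the first option, a minimal sketch of the consuming side (the Secret name and image are illustrative; the Secret itself would be populated out of band, e.g. by a CI step or sync job calling gcloud secrets versions access):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      envFrom:
        - secretRef:
            name: app-secrets         # Kubernetes Secret synced from GCP Secret Manager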

Related

set environment variables within a mountPath in Kubernetes

We have deployed an application in Kubernetes as a Deployment and store its logs in a folder named /podlogs. Whenever the pod restarts, it creates a new folder, named after the latest pod name, inside the app-log folder and stores the actual log files there. For example, this new folder could be /podlogs/POD_Name.
Previously we have mounted /podlogs to ELK and Azure blob containers.
By using subPath, we would also like to mount /podlogs/POD_Name as a second mount.
How can we pass an environment variable in the mount path along with the subPath?
See the Downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
It makes it possible to expose the pod name as an environment variable or as a mounted file.
It is conceivable you could then mount or expose this to the ELK container.
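For the subPath part specifically, a minimal sketch (image and names are illustrative) is to expose the pod name via the downward API and reference it with subPathExpr, which expands container environment variables in the mount path:

apiVersion: v1
kind: Pod
metadata:
  name: log-writer
spec:
  containers:
    - name: app
      image: my-app:latest                  # placeholder image
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name      # downward API: pod name as an env variable
      volumeMounts:
        - name: podlogs
          mountPath: /podlogs
        - name: podlogs
          mountPath: /podlogs-current       # second mount pointing at /podlogs/<pod name>
          subPathExpr: $(POD_NAME)
  volumes:
    - name: podlogs
      emptyDir: {}                          # or whatever actually backs /podlogs in your setup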

How can I use an initContainer to set an environment variable or modify the launch command of the main container?

I have an application that needs to know its assigned NodePort. Unfortunately it is not possible to write a file to a mountable volume and then read that file in the main container. Therefore, I'm looking for a way to either have the initContainer set an environment variable that gets passed to the main container, or to modify the main pod's launch command to add an additional CLI argument. I can't seem to find any examples or documentation that would lead me to this answer. TIA.
There's no direct way, so you have to get creative. For example, you can make a shared emptyDir mount that both containers can access, have the initContainer write export FOO=bar to a file on it, and then change the main container's command to something like [bash, -c, "source /thatfile && exec originalcommand"].
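A minimal sketch of that pattern (container names, images, and the file path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: shared
      emptyDir: {}
  initContainers:
    - name: write-env
      image: busybox                          # placeholder; anything that can write the file
      command: [sh, -c, "echo 'export FOO=bar' > /shared/env"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: my-app:latest                    # placeholder image
      command: [bash, -c, "source /shared/env && exec originalcommand"]
      volumeMounts:
        - name: shared
          mountPath: /shared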

How to execute shell commands from within a Kubernetes ConfigMap?

I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image. I want to be able to mount this config file at runtime. A ConfigMap seems to be a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve sdk from aws secrets manager
I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr: You can't execute a ConfigMap; it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as a file or as environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data; Kubernetes has a specific resource for that, called a Secret. It can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description, it sounds like your best option would be to use an init container to retrieve the credentials and write them to a shared emptyDir volume mounted into the container with the application that will use the credentials.
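A minimal sketch of that approach, assuming the init container image ships the AWS CLI and has credentials available (the secret id, paths, and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sdk-app
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: fetch-sdk-key
      image: amazon/aws-cli                   # assumes credentials are provided, e.g. via IRSA
      command:
        - sh
        - -c
        - >
          aws secretsmanager get-secret-value
          --secret-id my-sdk-key
          --query SecretString --output text > /config/app.conf
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest                    # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app                 # the app reads /etc/app/app.conf at startup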

Append/Extend LD_LIBRARY_PATH using Kubernetes Source Code

When a pod is being scheduled, I dynamically (and transparently) mount a shared-libraries folder into the client containers through Kubernetes device plugins. Now, inside the container, I want to append these dynamically mounted library paths to the LD_LIBRARY_PATH environment variable.
Inside the container: this can be achieved by running the following in bash:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/some/new/directory
From the host: I can add the export command to the pod YAML under the pod spec's command and args.
But I want to do this transparently, without the client/admin specifying it in the YAML file, using Kubernetes device plugins or extended schedulers.
I am looking for a method/hack by which I can append/extend LD_LIBRARY_PATH inside the container using only the Kubernetes source code.
Thanks.
You can just bake it into your Dockerfile and create an image that you use in Kubernetes. No need to hack the Kubernetes source code.
In your Dockerfile, add a line like:
ENV LD_LIBRARY_PATH /extra/path:$LD_LIBRARY_PATH
Then:
docker build -t <your-image-tag> .
docker push <your-image-tag>
Then, update your pod or deployment definition and deploy to Kubernetes.
Hope it helps.
If I understand your issue, all you need is to transparently add LD_LIBRARY_PATH to the pod as it is scheduled. Maybe you can try to use a mutating admission webhook, which allows you to send a patch to Kubernetes to modify the manifest. There's good documentation from Banzai Cloud; I have not tried it myself.
https://banzaicloud.com/blog/k8s-admission-webhooks/
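A rough sketch of the patch such a webhook could return (assuming the first container already declares an env list; the directory value is illustrative):

# JSONPatch, returned base64-encoded in the AdmissionReview response
- op: add
  path: /spec/containers/0/env/-
  value:
    name: LD_LIBRARY_PATH
    value: /some/new/directory    # note: this sets the variable, it does not append to an existing value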

How do I tell if my container is running inside a Kubernetes cluster?

How can I tell whether or not I am running inside a Kubernetes cluster? With Docker I can check if /.dockerinit exists. Is there an equivalent?
You can check for the KUBERNETES_SERVICE_HOST environment variable.
This variable is always set in the environment of containers running in a Kubernetes cluster.
Refer to https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
You can pass environment variables to your containers in the pod spec. You can even expose some pod information to the containers via environment variables using the downward API.
With the default configuration, Kubernetes will mount the serviceaccount secrets into pods. Simply check for the existence of this folder: /var/run/secrets/kubernetes.io.
No need to set environment variables. In Ruby I would do the following:
if File.exist?('/.dockerenv')
  puts "I'm running in a docker container"
end
if File.exist?('/var/run/secrets/kubernetes.io')
  puts "I'm also running in a Kubernetes pod"
end
One option is to check the /etc/hosts file - by default it contains a comment saying the file is maintained by Kubernetes.
Anyway, the best way is to define your own environment variable in the deployment; use a templating tool like Helm to generate the deployment from a common template.