kubernetes envFrom: how to load variables from a file located in the pod filesystem - kubernetes

I need to populate environment variables from an env file located in the pod filesystem, which is created beforehand by an init container.
I looked up the envFrom documentation but couldn't figure out how to use it for this, and I haven't been able to find any relevant examples on the internet.
Suppose an init container creates a file at /etc/secrets/secrets.env; the pod container spec should then read /etc/secrets/secrets.env in order to populate its env variables.

You will not be able to reference any filesystem component to populate an environment variable using the PodSpec, because it creates a chicken-and-egg problem: Kubernetes cannot create the filesystem without a complete PodSpec, but it cannot resolve variables in the PodSpec without access to the Pod's filesystem.
If /etc/secrets is a volume that the initContainer and the normal container share, then you can override the command: of your container to source that file into its environment before running the actual command, but that is as close as you're going to get:
containers:
- name: my-app
  command:
  - bash
  - -ec
  - |
    . /etc/secrets/secrets.env
    ./bin/run-your-server-command-here
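Putting the whole pattern together as a sketch (the volume name, images, and the generated variable below are assumptions, not part of the question): the init container writes the env file to a shared emptyDir, and the main container sources it before exec'ing the server.

```yaml
# Sketch only: volume name, images, and MY_SECRET are hypothetical.
spec:
  initContainers:
  - name: write-secrets
    image: busybox
    command:
    - sh
    - -ec
    # generate the env file however you need to
    - echo "export MY_SECRET=s3cr3t" > /etc/secrets/secrets.env
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  containers:
  - name: my-app
    image: my-app:latest
    command:
    - bash
    - -ec
    - |
      . /etc/secrets/secrets.env
      ./bin/run-your-server-command-here
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: secrets
    emptyDir: {}
```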

Related

Kubernetes: make pod directory writable

I have a docker image with a config folder in it.
logback.xml is located there.
I want the ability to change logback.xml in the pod (to raise the log level dynamically, for example).
First, I thought of using an emptyDir volume, but that overwrites the directory and it becomes empty.
So is there some simple method to make a directory writable in the pod?
Hello, hope you are enjoying your Kubernetes journey. If I understand correctly, you want to have a file in a pod and be able to modify it when needed.
What you need here is to create a ConfigMap based on your logback.xml file (you can do it with imperative or declarative Kubernetes configuration; here is the imperative one):
kubectl create configmap logback --from-file logback.xml
After this, mount that very file to your directory location by using a volume and a volumeMount subPath in your deployment/pod YAML manifest:
...
volumeMounts:
- name: "logback"
  mountPath: "/CONFIG_FOLDER/logback.xml"
  subPath: "logback.xml"
volumes:
- name: "logback"
  configMap:
    name: "logback"
...
After this, you will be able to modify your logback.xml config by editing or recreating the ConfigMap and restarting your pod.
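A minimal update cycle could look like this (the deployment name is a placeholder; the ConfigMap name matches the earlier kubectl create configmap example):

```shell
# Regenerate the ConfigMap from the edited file and apply it in place
kubectl create configmap logback --from-file logback.xml \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the pods so the new file is picked up
kubectl rollout restart deployment/your-deployment-name
```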
But, keep in mind:
1: Pod files are not supposed to be modified on the fly; this goes against the container philosophy (cf. Pets vs. Cattle).
2: However, depending on your container image's user rights, pod directories may be writable...

Schema initialization in the Bitnami PostgreSQL image for a Kubernetes cluster

I am using the Bitnami PostgreSQL image to deploy a StatefulSet on my cluster node. I am not sure how to initialize the schema for the PostgreSQL pod without building on top of the Bitnami image. I have looked around on the internet and someone said to use init containers, but I am also not sure how exactly I would do that.
From the Github Readme of the Bitnami Docker image:
When the container is executed for the first time, it will execute the
files with extensions .sh, .sql and .sql.gz located at
/docker-entrypoint-initdb.d.
In order to have your custom files inside the docker image you can
mount them as a volume.
You can just mount such scripts under that directory using a ConfigMap volume. An example could be the following:
First, create the ConfigMap with the scripts, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: p-init-sql
  labels:
    app: the-app-name
data:
  01_init_db.sql: |-
    -- content of the script goes here
  02_second_init_db.sql: |-
    -- more content for another script goes here
Second, under spec.template.spec.volumes, you can add:
volumes:
- name: p-init-sql
  configMap:
    name: p-init-sql
Then, under spec.template.spec.containers[0].volumeMounts, you can mount this volume with:
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
  name: p-init-sql
With this said, you may find it easier to use Helm charts.
Bitnami provides Helm charts for all its images, which simplify their usage by a lot (everything is ready to be installed and configured from a simple values.yaml file).
For example, there is such a chart for PostgreSQL, which you can find here; it can serve as inspiration for configuring the docker image even if you decide to write your own Kubernetes resources around that image.
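For instance, installing the Bitnami PostgreSQL chart could look like this (the release name and values file are placeholders):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql -f values.yaml
```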

How can I mount a dynamically generated file into a pod without creating a ConfigMap?

In my application, I have a control plane component which spawns Jobs on my k8s cluster. I'd like to be able to pass in a dynamically generated (but read-only) config file to each Job. The config file will be different for each Job.
One way to do that would be to create, for each new Job, a ConfigMap containing the desired contents of the config file, and then set the ConfigMap as a VolumeMount in the Job spec when launching the Job. But now I have two entities in the cluster which are semantically tied together but don't share a lifetime, i.e. if the Job ends, the ConfigMap won't automatically go away.
Is there a way to directly "mount a string" into the Job's Pod, without separately creating some backing entity like a ConfigMap to store it? I could pass it in as an environment variable, I guess, but that seems fragile due to length restrictions.
The way that is traditionally done is via an initContainer and an emptyDir volumeMount that allows the two containers to "communicate" over a private shared piece of disk:
spec:
  initContainers:
  - name: config-gen
    image: docker.io/library/busybox:latest
    command:
    - /bin/sh
    - -ec
    # now you can use whatever magick you wish to generate the config
    - |
      echo "my-config: is-generated" > /generated/sample.yaml
      echo "some-env: ${SOME_CONFIG}" >> /generated/sample.yaml
    env:
    - name: SOME_CONFIG
      value: subject to injection like any other kubernetes env var
    volumeMounts:
    - name: shared-space
      mountPath: /generated
  containers:
  - name: main
    image: docker.example.com:1234
    # now you can do whatever you want with the config file
    command:
    - /bin/cat
    - /config/sample.yaml
    volumeMounts:
    - name: shared-space
      mountPath: /config
  volumes:
  - name: shared-space
    emptyDir: {}
If you want to avoid using ConfigMaps and environment variables for passing configuration, the options you are left with are command-line arguments and configuration files.
You can pass the config as a command-line argument to each of your Jobs.
You can also mount the config as a file into your Job pod. But since the controller which generates the config and the Job which consumes it might be running on different nodes, you need a way to pass the config to the Job pod. If you have network-attached storage which is accessible from all nodes, your controller can write to a location on the shared storage and the Job can read from it. Otherwise, if you have a service (database, cache, etc.) that can act as a data store, your controller can write to the datastore and the Job can read from there.
If you do not want to modify your Job to read the config from various sources, you can have an initContainer which does the job of reading the configuration from a certain source and writes it to a local pod volume (emptyDir), and the Job can just read from the local file.
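The command-line-argument option could look like this in the Job spec (the image and flag names below are hypothetical); the controller renders the generated config into args each time it creates a Job:

```yaml
# Sketch only: image name and flags are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  template:
    spec:
      containers:
      - name: worker
        image: my-worker:latest
        # the controller fills these in per Job
        args: ["--log-level=debug", "--batch-size=100"]
      restartPolicy: Never
```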

How to install the JProfiler agent in a Kubernetes container?

What do I have to put into a container to get the agent to run? Just libjprofilerti.so on its own doesn't work, I get
Could not find agent.jar. The agentpath parameter must point to
libjprofilerti.so in an unmodified JProfiler installation.
which sounds like obvious nonsense to me - surely I can't have to install over 137.5 MB of files, 99% of which will be irrelevant, in each container in which I want to profile something?
-agentpath:/path/to/libjprofilerti.so=nowait
An approach is to use an Init Container.
The idea is to have an image for JProfiler separate from the application's image. Use the JProfiler image for an Init Container; the Init Container copies the JProfiler installation to a volume shared between that Init Container and the other Containers that will be started in the Pod. This way, the JVM can reference at startup time the JProfiler agent from the shared volume.
It goes something like this (more details are in this blog article):
Define a new volume:
volumes:
- name: jprofiler
  emptyDir: {}
Add an Init Container:
initContainers:
- name: jprofiler-init
  image: <JPROFILER_IMAGE:TAG>
  command: ["/bin/sh", "-c", "cp -R /jprofiler/. /tmp/jprofiler/"]
  volumeMounts:
  - name: jprofiler
    mountPath: "/tmp/jprofiler"
Replace /jprofiler/ above with the correct path to the installation directory in the JProfiler image. The copy command copies the contents of that installation directory into /tmp/jprofiler, which is the mount path of the shared volume.
Define volume mount:
volumeMounts:
- name: jprofiler
  mountPath: /jprofiler
Add to the JVM startup arguments JProfiler as an agent:
-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849
Notice that there is no "nowait" argument: the JVM will therefore block at startup and wait for a JProfiler GUI to connect. The reason is that with this configuration the profiling agent receives its profiling settings from the JProfiler GUI.
Change the application deployment to start with only one replica. Alternatively, start with zero replicas and scale to one when ready to start profiling.
To connect from the JProfiler's GUI to the remote JVM:
Find out the name of the pod (e.g. kubectl -n <namespace> get pods) and set up port forwarding to it:
kubectl -n <namespace> port-forward <pod-name> 8849:8849
Start JProfiler up locally and point it to 127.0.0.1, port 8849.
Change the local port 8849 (the number to the left of :) if it isn't available; then, point JProfiler to that different port.
Looks like you are missing the general concept here.
It's nicely explained why to use containers in the official documentation.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Of course you don't need to install the libraries in each container separately.
Kubernetes uses Volumes to share files between Containers.
So you can create a local type of Volume with the JProfiler libs inside.
A local volume represents a mounted local storage device such as a disk, partition or directory.
You also need to keep in mind that if you share the Volume between Pods, those Pods will not know about the JProfiler libs being attached. You will need to configure the Pod with the correct environment variables/files through the use of Secrets or ConfigMaps.
You can configure your Pod to pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: jp-pod
  name: jp-pod
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: jp
    envFrom:
    - secretRef:
        name: jp-secret
jp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: jp-secret
type: Opaque
stringData:
  JPAGENT_PATH: "-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
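The same Secret can also be created imperatively, which avoids writing the manifest by hand:

```shell
kubectl create secret generic jp-secret \
  --from-literal=JPAGENT_PATH="-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
```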
I hope this helps you.

Kubernetes - setting custom permissions/file ownership per volume (and not per pod)

Is there any way to set per-volume permissions/ownership in Kubernetes declaratively?
Usecase:
a pod is composed of two containers, running as two distinct users/groups, both of them non-root, and are unable to sudo
the containers mount a volume each, and need to create files in these volumes (e.g. both of them want to write logs)
We know that we can use fsGroup, however that is a pod-level declaration. So even if we pick fsGroup equal to user in first container, then we are going to have permission issues in the other one. (ref: Kubernetes: how to set VolumeMount user group and file permissions)
One solution is to use an init container to change the permissions of the mounted directories.
The init container would need to mount both volumes (from both containers) and do the needed chown/chmod operations.
Drawbacks:
extra container that needs to be aware of the other containers' specifics (i.e. uid/gid)
init container needs to run as root to perform chown
It can be done with adding one init container with root access.
initContainers:
- name: changeowner
  image: busybox
  command: ["sh", "-c", "chown -R 200:200 /<volume>"]
  volumeMounts:
  - name: <your volume>
    mountPath: /<volume>
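Since the init container must run as root for chown to succeed (one of the drawbacks noted above), it may be worth making that explicit with a securityContext, even when the pod otherwise defaults to a non-root user. A sketch of the same init container with that addition:

```yaml
initContainers:
- name: changeowner
  image: busybox
  # run this init container as root even if the pod defaults to non-root
  securityContext:
    runAsUser: 0
  command: ["sh", "-c", "chown -R 200:200 /<volume>"]
  volumeMounts:
  - name: <your volume>
    mountPath: /<volume>
```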