Is there any way to set per-volume permissions/ownership in Kubernetes declaratively?
Use case:
a pod is composed of two containers, running as two distinct non-root users/groups, neither of which is able to sudo
the containers mount a volume each, and need to create files in these volumes (e.g. both of them want to write logs)
We know that we can use fsGroup, however that is a pod-level declaration. So even if we set fsGroup to match the user in the first container, we will still have permission issues in the other one. (ref: Kubernetes: how to set VolumeMount user group and file permissions)
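For context, this is roughly the shape of the declaration; fsGroup can only be given once, at the pod level, so there is no per-volume equivalent (names and ids below are placeholders):

spec:
  securityContext:
    fsGroup: 1000          # pod-wide: applies to every fsGroup-aware volume and every container
  containers:
  - name: first
    image: first-app       # runs as uid/gid 1000
    volumeMounts:
    - name: logs-first
      mountPath: /var/log/first
  - name: second
    image: second-app      # runs as uid/gid 2000; there is no per-volume fsGroup for logs-second
    volumeMounts:
    - name: logs-second
      mountPath: /var/log/second
  volumes:
  - name: logs-first
    emptyDir: {}
  - name: logs-second
    emptyDir: {}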
One solution is to use an init container to change the permissions of the mounted directories.
The init-container would need to mount both volumes (from both containers), and do the needed chown/chmod operations.
Drawbacks:
an extra container that needs to be aware of the other containers' specifics (i.e. uid/gid)
the init container needs to run as root to perform the chown
It can be done by adding one init container with root access.
initContainers:
- name: changeowner
image: busybox
command: ["sh", "-c", "chown -R 200:200 /<volume>"]
volumeMounts:
- name: <your volume>
mountPath: /<volume>
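If both containers' volumes need fixing, the same init container can mount both and chown each one to the uid/gid of the container that will use it (volume names and ids below are placeholders):

initContainers:
- name: changeowner
  image: busybox
  command:
  - sh
  - -c
  - |
    chown -R 200:200 /volume-a
    chown -R 300:300 /volume-b
  volumeMounts:
  - name: volume-a
    mountPath: /volume-a
  - name: volume-b
    mountPath: /volume-b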
Related
I've been searching and every answer seems to be the same example (https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). In a pod you can create an empty volume, then mount it into two containers, and any content written in that mount will be seen in each container. While this is fine, my use case is slightly different.
Container A
/opt/content
Container B
/data
Container A has an install of about 4G of data. What I would like to do is mount /opt/content into Container B at /content. This way the 4G of data is accessible to Container B at runtime and I don't have to copy content or specially build Container B.
My question is: is this possible? If it is, what would be the proper pod syntax?
apiVersion: v1
kind: Pod
metadata:
name: two-containers
spec:
restartPolicy: Never
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: nginx-container
image: nginx
volumeMounts:
- name: shared-data
mountPath: /opt/content
- name: debian-container
image: debian
volumeMounts:
- name: shared-data
mountPath: /content
From my research and testing, the best I can tell is that within a Pod two containers cannot see each other's filesystem. The volume mount will give each container a mount at the specified path (as the example shows), and any items written to it after that point will be seen by both. This works great for logs and the like.
In my context this proves not to be workable: creating this mount and then having Container A copy the 4G directory into it is too time consuming to make this an option.
Best I can tell, the only way to do this is to create a Persistent Volume (or something similar) and mount that in Container B. This way Container A's contents are stored in the Persistent Volume and it can easily be mounted when needed. The only issue is that the Persistent Volume will have to be set up in every Kubernetes cluster, which is the pain point.
If any of this is wrong and I just didn't find the right document, please correct me. I would love to be able to do this.
The code example in your question should work. Both containers use the same volume, and you mount it at different locations in each container.
nginx-container will have the shared-data content in /opt/content and debian-container will have it in /content.
With mountPath you specify where the volume should be mounted in the container.
When a container is started, first the container image (or more precisely the layers of an image) are mounted. Afterwards, your custom volumes are mounted, hiding any data from the image at and below the mount path. So sharing data from an image among several containers without copying them is not possible.
The typical solution is still to use an init container which downloads or copies the actual data into an ephemeral volume, which is then shared by one or more other containers (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container).
initContainers:
- name: init
image: <image-containing-the-data-based-on-some-basic-image>
command: ["sh", "-c", "cp -ar /opt/content/* /mnt/target/"]
volumeMounts:
- name: shared-data
mountPath: /mnt/target
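The snippet above is only the initContainers part; the rest of the pod spec would wire the same volume into the consuming container, roughly like this (the image name is a placeholder):

containers:
- name: main
  image: <image-that-needs-the-data>
  volumeMounts:
  - name: shared-data
    mountPath: /opt/content
volumes:
- name: shared-data
  emptyDir: {}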
What you would actually need is a kind of container storage interface (CSI) driver which supports creating volumes from container images. I found two projects which would do exactly that, but neither of them claims to be production-ready.
https://github.com/kubernetes-csi/csi-driver-image-populator
https://github.com/warm-metal/csi-driver-image
I am using Kubernetes yaml to mount a volume.
I know I can set the mount folder to be for a specific group using this configuration:
securityContext:
fsGroup: 999
but nowhere can I find a way to also set user ownership, not just the group.
When I access the container folder to check ownership, it is root.
Any way to do so via Kubernetes YAML?
I would expect fsUser: 999 for example, but there is no such thing. :/
There is no way to set the UID in the Pod definition, but you can use an initContainer with the same volumeMount as the main container to set the required permissions.
It is handy in cases like yours where user ownership needs to be set to a non-root value.
Here is a sample configuration (change it as per your need):
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo
spec:
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: busybox
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
allowPrivilegeEscalation: false
initContainers:
- name: volume-mount-hack
image: busybox
command: ["sh", "-c", "chown -R 999:999 /data/demo"]
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
No, there is no such option. To check every option available in securityContext, you may use
kubectl explain deployment.spec.template.spec.securityContext
As per the docs:
fsGroup <integer>
A special supplemental group that applies to all containers in a pod. Some
volume types allow the Kubelet to change the ownership of that volume to be
owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit
is set (new files created in the volume will be owned by FSGroup) 3. The
permission bits are OR'd with rw-rw---- If unset, the Kubelet will not
modify the ownership and permissions of any volume.
It's usually a good idea to handle access to files via group ownership, because in restricted Kubernetes configurations you can't actually control the user id or group id, for example in Red Hat OpenShift.
What you can do is use runAsUser, if your Kubernetes provider allows it:
runAsUser <integer>
The UID to run the entrypoint of the container process. Defaults to user
specified in image metadata if unspecified. May also be set in
SecurityContext. If set in both SecurityContext and PodSecurityContext, the
value specified in SecurityContext takes precedence for that container.
Your application will run with the uid you want, and will naturally create and access files as that user. As noted earlier, it's usually not the best idea to do it that way, because it makes distributing your application across different platforms harder.
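A minimal sketch combining runAsUser and fsGroup (the ids are examples); files created under /data/demo end up owned by 999:999:

apiVersion: v1
kind: Pod
metadata:
  name: runasuser-demo
spec:
  securityContext:
    runAsUser: 999
    fsGroup: 999
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && touch /data/demo/test && ls -ln /data/demo"]
    volumeMounts:
    - name: data
      mountPath: /data/demo
  volumes:
  - name: data
    emptyDir: {}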
In my application, I have a control plane component which spawns Jobs on my k8s cluster. I'd like to be able to pass in a dynamically generated (but read-only) config file to each Job. The config file will be different for each Job.
One way to do that would be to create, for each new Job, a ConfigMap containing the desired contents of the config file, and then set the ConfigMap as a VolumeMount in the Job spec when launching the Job. But now I have two entities in the cluster which are semantically tied together but don't share a lifetime, i.e. if the Job ends, the ConfigMap won't automatically go away.
Is there a way to directly "mount a string" into the Job's Pod, without separately creating some backing entity like a ConfigMap to store it? I could pass it in as an environment variable, I guess, but that seems fragile due to length restrictions.
The way that is traditionally done is via an initContainer and an emptyDir volumeMount that allows the two containers to "communicate" over a private shared piece of disk:
spec:
initContainers:
- name: config-gen
image: docker.io/library/busybox:latest
command:
- /bin/sh
- -ec
# now you can use whatever magick you wish to generate the config
- |
echo "my-config: is-generated" > /generated/sample.yaml
echo "some-env: ${SOME_CONFIG}" >> /generated/sample.yaml
env:
- name: SOME_CONFIG
value: subject to injection like any other kubernetes env var
volumeMounts:
- name: shared-space
mountPath: /generated
containers:
- name: main
image: docker.example.com:1234
# now you can do whatever you want with the config file
command:
- /bin/cat
- /config/sample.yaml
volumeMounts:
- name: shared-space
mountPath: /config
volumes:
- name: shared-space
emptyDir: {}
If you want to avoid using ConfigMaps and environment variables for passing configuration, the options you are left with are command line arguments and configuration files.
You can pass the config as a command line argument to each of your Jobs.
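For example, the controller could render the configuration straight into the Job's args when it creates the Job (the image, flag and payload below are made up for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-inline-config
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: docker.example.com/worker:1.0
        args:
        - "--config-json"
        - '{"generated": "by the control plane, different for every Job"}'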
You can also mount the config as a file in your Job's pod. But since your controller which generates the config and the Job which consumes it might be running on different nodes, you need a way to get the config to the Job's pod. If you have network-attached storage which is accessible from all nodes, your controller can write to a location on the shared storage and the Job can read from it. Otherwise, if you have a service (database, cache, etc.) that can act as a data store, your controller can write to the data store and the Job can read from there.
If you do not want to modify your Job to read the config from various sources, you can have an initContainer which does the job of reading the configuration from a certain source and writing it to a local pod volume (emptyDir), and the Job can just read from that local file.
What do I have to put into a container to get the agent to run? Just libjprofilerti.so on its own doesn't work; I get
Could not find agent.jar. The agentpath parameter must point to
libjprofilerti.so in an unmodified JProfiler installation.
which sounds like obvious nonsense to me - surely I can't have to install over 137.5 MB of files, 99% of which will be irrelevant, in each container in which I want to profile something?
-agentpath:/path/to/libjprofilerti.so=nowait
An approach is to use an Init Container.
The idea is to have an image for JProfiler separate from the application's image. Use the JProfiler image for an Init Container; the Init Container copies the JProfiler installation to a volume shared between that Init Container and the other Containers that will be started in the Pod. This way, the JVM can reference the JProfiler agent from the shared volume at startup time.
It goes something like this (more details are in this blog article):
Define a new volume:
volumes:
- name: jprofiler
emptyDir: {}
Add an Init Container:
initContainers:
- name: jprofiler-init
image: <JPROFILER_IMAGE:TAG>
command: ["/bin/sh", "-c", "cp -R /jprofiler/ /tmp/"]
volumeMounts:
- name: jprofiler
mountPath: "/tmp/jprofiler"
Replace /jprofiler/ above with the correct path to the installation directory in the JProfiler image. Notice that the copy command will create the /tmp/jprofiler directory, under which the JProfiler installation will go; that is used as the mount path.
Define volume mount:
volumeMounts:
- name: jprofiler
mountPath: /jprofiler
Add to the JVM startup arguments JProfiler as an agent:
-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849
Notice that there isn't a "nowait" argument. That will cause the JVM to block at startup and wait for a JProfiler GUI to connect. The reason is that with this configuration the profiling agent will receive its profiling settings from the JProfiler GUI.
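If editing the image's startup command is inconvenient, one way (an assumption, not part of the original setup) to inject the flag is the JAVA_TOOL_OPTIONS environment variable, which the JVM picks up automatically:

env:
- name: JAVA_TOOL_OPTIONS
  value: "-agentpath:/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849"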
Change the application deployment to start with only one replica. Alternatively, start with zero replicas and scale to one when ready to start profiling.
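For example:

kubectl -n <namespace> scale deployment <deployment-name> --replicas=1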
To connect from the JProfiler's GUI to the remote JVM:
Find out the name of the pod (e.g. kubectl -n <namespace> get pods) and set up port forwarding to it:
kubectl -n <namespace> port-forward <pod-name> 8849:8849
Start JProfiler up locally and point it to 127.0.0.1, port 8849.
Change the local port 8849 (the number to the left of :) if it isn't available; then, point JProfiler to that different port.
Looks like you are missing the general concept here.
It's nicely explained why to use containers in the official documentation.
The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization. These containers are isolated from each other and from the host: they have their own filesystems, they can’t see each others’ processes, and their computational resource usage can be bounded. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions.
Of course you don't need to install the libraries in each container separately.
Kubernetes uses Volumes to share files between Containers.
So you can create a local type of Volume with the JProfiler libs inside.
A local volume represents a mounted local storage device such as a disk, partition or directory.
You also need to keep in mind that if you share the Volume between Pods, those Pods will not know about the JProfiler libs being attached. You will need to configure the Pod with the correct environment variables/files through the use of Secrets or ConfigMaps.
You can configure your Pod to pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
labels:
context: docker-k8s-lab
name: jp-pod
name: jp-pod
spec:
containers:
- image: k8s.gcr.io/busybox
name: jp
    envFrom:
    - secretRef:
        name: jp-secret
jp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: jp-secret
type: Opaque
stringData:   # plain-text values; with "data:" the value would have to be base64-encoded
  JPAGENT_PATH: "-agentpath:/usr/local/jprofiler10/bin/linux-x64/libjprofilerti.so=nowait"
I hope this helps you.
I need to populate environment variables from an env file located in the pod filesystem, which was written previously by an init container.
I looked up the envFrom documentation, but I haven't been able to figure out how to use it, and I haven't been able to find any relevant examples on the internet.
Suppose an init container creates a file at /etc/secrets/secrets.env; the pod's container spec then has to read /etc/secrets/secrets.env in order to populate its env variables.
You will not be able to reference any filesystem component to populate an environment variable using the PodSpec, because it creates a chicken-and-egg problem: Kubernetes cannot create the filesystem without a complete PodSpec, but it cannot resolve variables in the PodSpec without access to the Pod's filesystem.
If /etc/secrets is a volume that the initContainer and the normal container share, then you can supersede the command: of your container to source that file into its environment before running the actual command, but that is as close as you're going to get:
containers:
- name: my-app
command:
- bash
- -ec
- |
. /etc/secrets/secrets.env
./bin/run-your-server-command-here
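For this to work, /etc/secrets has to be the volume shared with the init container, e.g. (a sketch assuming an emptyDir; the init container mounts the same volume and writes secrets.env into it):

  volumeMounts:
  - name: secrets
    mountPath: /etc/secrets
volumes:
- name: secrets
  emptyDir: {}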