Volume Write Permissions - Kubernetes

How can I give the non-root user full access to the mounted volume path in Kubernetes (pod)?
I'm using a volume on the host (the /workspace/projects path) and mounting it into the container as shown below.
volumeMounts:
  - name: workspace
    mountPath: /workspace/projects
Since I'm copying the git repository content into the /workspace/projects directory, git sets the permissions to 755 by default. I want to set them to 775, because otherwise I'm unable to write to the directory.
Could you please let me know the best way to do this? I've seen initContainers, but I'm not sure whether there is a better solution. Appreciate any help! Thanks in advance!

When you run a process inside a container as a non-root user and mount a volume into the pod, the volume is owned by root:root by default.
Giving access to a specific user via an initContainer is one way, like the following:
initContainers:
  - name: volume-mount-permission
    image: busybox
    command: ["sh", "-c", "chmod 775 /workspace/projects && chown -R <user> /workspace/projects"]
    volumeMounts:
      - name: workspace
        mountPath: /workspace/projects
You can also use a security context. Create a user and group and add the user to the group in your Dockerfile (a minimal sketch follows after the spec below), then set the following in the pod spec:
spec:
  securityContext:
    runAsUser: <UID>
    runAsGroup: <GID>
    fsGroup: <GID>
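For completeness, a minimal Dockerfile sketch of the user/group setup described above; appuser/appgroup and UID/GID 1000 are placeholders, not values from the question:

# Placeholder user/group; pick the UID/GID you reference in the pod spec.
FROM debian:bookworm-slim
RUN groupadd --gid 1000 appgroup && \
    useradd --uid 1000 --gid appgroup --create-home appuser
# Run as the non-root user by default.
USER appuser

The UID/GID chosen here must match runAsUser/runAsGroup/fsGroup in the pod spec so that files the kubelet chgrps via fsGroup end up writable by the container user.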

Related

How can I give my non-root user write permissions on a kubernetes volume mount?

From my understanding (based on this guide https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/), if I have the following security context specified for some Kubernetes pod:
securityContext:
  # Enforce to be run as non-root user
  runAsNonRoot: true
  # Random values should be fine
  runAsUser: 1001
  runAsGroup: 1001
  # Automatically convert mounts to user group
  fsGroup: 1001
  # For whatever reasons this is not working
  fsGroupChangePolicy: "Always"
I expect this pod to be run as user 1001 with the group 1001. This is working as expected, because running id in the container results in: uid=1001 gid=1001 groups=1001.
The file system of all mounts should automatically be accessible by user group 1001, because we specified fsGroup and fsGroupChangePolicy. I guess that this also works because when running ls -l in one of the mounted folders, I can see that the access rights for the files look like this: -rw-r--r-- 1 50004 50004. Ownership is still with the uid and gid it was initialised with but I can see that the file is now readable for others.
The question now is: how can I add the write permissions for my fsGroup that still seem to be missing?
You need to add an additional initContainer to your pod/deployment/{supported_kinds} with commands that change the permissions/ownership of the volume mounted in the pod to the user ID the container runs as.
initContainers:
  - name: volumepermissions
    image: busybox ## any image having linux utilities mkdir, echo, chown will work
    imagePullPolicy: IfNotPresent
    env:
      - name: "VOLUME_DATA_DIR"
        value: mountpath_for_the_volume
    command:
      - sh
      - -c
      - |
        mkdir -p $VOLUME_DATA_DIR
        chown -R 1001:1001 $VOLUME_DATA_DIR
        echo 'Volume permissions OK ✓'
    volumeMounts:
      - name: data
        mountPath: mountpath_for_the_volume
This is necessary when a container in a pod runs as a user other than root and needs write permissions on a mounted volume.
If this is a Helm chart, the init container can be defined as a named template and reused in all the pods/deployments/{supported kinds} that need to change volume permissions; a sketch follows below.
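A minimal sketch of that Helm idea; the chart name, template name, and parameter names (mychart.volumePermissionsInit, volumeName, mountPath, uid, gid) are made up:

{{/* templates/_helpers.tpl */}}
{{- define "mychart.volumePermissionsInit" -}}
- name: volumepermissions
  image: busybox
  imagePullPolicy: IfNotPresent
  command:
    - sh
    - -c
    - |
      # Create the mount path if needed and hand it to the runtime UID/GID.
      mkdir -p {{ .mountPath | quote }}
      chown -R {{ .uid }}:{{ .gid }} {{ .mountPath | quote }}
  volumeMounts:
    - name: {{ .volumeName }}
      mountPath: {{ .mountPath }}
{{- end -}}

It could then be included in any pod template, for example:

initContainers:
  {{- include "mychart.volumePermissionsInit" (dict "volumeName" "data" "mountPath" "/data" "uid" 1001 "gid" 1001) | nindent 8 }}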
Update: As mentioned by @The Fool, this should work with the current setup if you are using Kubernetes v1.23 or greater; in v1.23 the securityContext.fsGroup and securityContext.fsGroupChangePolicy features went GA/stable.

Jenkins Kubernetes build user permission problem

I am trying to do some ssh + git submodule builds, and initially I was getting this error when trying to pull code:
Submodule 'xxx' (git@host:thing/subdir/repo.git) registered for path 'repo'
Submodule 'yyy' (git@host:thing/subdir/yyy.git) registered for path 'yyy'
Cloning into '/home/jenkins/agent/workspace/thing/AAA/releaser/xxx'...
No user exists for uid 1000
fatal: Could not read from remote repository.
Which led me to this solution:
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - all
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 65534
Now, this worked, but I was seeing some non-critical errors about not being able to save the host key to /.ssh. It still checked out the code, so I didn't care, since every run is like the first time anyway.
However, I am now running into a different problem that really requires me to solve this user issue permanently:
+ mc alias set jenkins-user ****
mc: <ERROR> Unable to save new mc config. mkdir /.mc: permission denied.
I need to run the MinIO mc client, and when I give it credentials it tries to save them to /.mc; it can't, so the alias command fails and takes the whole build with it.
So, I need to figure out a better way to solve the initial user problem, or tell it to use a non-privileged directory like /tmp when running in this pod.
So, I am open to a solution to either issue if someone can assist.
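For what it's worth, a minimal sketch of the "use a non-privileged directory" idea (not the fix that ended up being used below), assuming mc resolves its config directory from $HOME; the HOME value and volume name here are made up:

containers:
  - name: releaser
    image: <your-build-image>   # placeholder
    env:
      # Hypothetical: give the non-root user a writable HOME so mc writes $HOME/.mc
      - name: HOME
        value: /home/scratch
    volumeMounts:
      - name: scratch-home
        mountPath: /home/scratch
volumes:
  - name: scratch-home
    emptyDir: {}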
OK, I ended up solving it with a mix of Kubernetes changes and Dockerfile changes.
First, I had to add the Jenkins user with the right user ID to a custom Docker image that contained what I needed:
FROM minio/mc
RUN \
microdnf update --nodocs && \
microdnf install git zip findutils --nodocs && \
microdnf clean all
RUN adduser --home-dir /home/jenkins/ --shell /bin/sh --uid 1000 jenkins
USER jenkins
WORKDIR /home/jenkins
ENTRYPOINT ["sh"]
Then, I had to update the pod definition to run under the same user that it was complaining about before:
apiVersion: v1
kind: Pod
metadata:
  annotations: {}
spec:
  securityContext:
    allowPrivilegeEscalation: false
    runAsUser: 1000
  containers:
    - name: releaser
      image: docker.io/chb0docker/releaser:latest
      imagePullPolicy: Always
      command:
        - cat
      tty: true

What is "/usr/bin/nsenter -m/proc/1/ns/mnt" in a Kubernetes DaemonSet?

I have read some tutorials on how to mount a volume in a container and run a script on the host/node directly. This is the example given.
DaemonSet pod spec:
hostPID: true
nodeSelector:
  cloud.google.com/gke-local-ssd: "true"
volumes:
  - name: setup-script
    configMap:
      name: local-ssds-setup
  - name: host-mount
    hostPath:
      path: /tmp/setup
initContainers:
  - name: local-ssds-init
    image: marketplace.gcr.io/google/ubuntu1804
    securityContext:
      privileged: true
    volumeMounts:
      - name: setup-script
        mountPath: /tmp
      - name: host-mount
        mountPath: /host
    command:
      - /bin/bash
      - -c
      - |
        set -e
        set -x
        # Copy setup script to the host
        cp /tmp/setup.sh /host
        # Copy wait script to the host
        cp /tmp/wait.sh /host
        # Wait for updates to complete
        /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/wait.sh
        # Give execute priv to script
        /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/setup.sh
        # Wait for Node updates to complete
        /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/wait.sh
        # If the /tmp folder is mounted on the host then it can run the script
        /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/setup.sh
containers:
  - image: "gcr.io/google-containers/pause:2.0"
    name: pause
(There is a ConfigMap that provides the .sh files; I'm skipping that here.)
What does "/usr/bin/nsenter -m/proc/1/ns/mnt" mean? Is this a command to run something on the host? What is "/proc/1/ns/mnt"?
Let's start with namespaces to understand this in detail.
Namespaces isolate resources between processes: the kernel uses them to control which resources a process can see, which is what gives different containers running on the same system their isolation.
That said, these access restrictions also make things more complicated, and this is where the nsenter command comes in. It runs a program inside (one or more of) another process's namespaces, somewhat like how sudo runs a program as another user.
nsenter can enter the mount, UTS, IPC, network, PID, user, cgroup, and time namespaces.
The -m in your example is --mount, which enters the mount namespace referred to by that file. Because the DaemonSet sets hostPID: true, /proc/1/ns/mnt is the mount namespace of the host's PID 1, so /usr/bin/nsenter -m/proc/1/ns/mnt -- <command> runs <command> with the host's view of the filesystem. That is how the script ends up executing /tmp/setup/setup.sh on the node itself.
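A minimal sketch (a hypothetical pod, not from the question) that demonstrates the same trick: reading a host file from inside a container by entering the host's mount namespace.

apiVersion: v1
kind: Pod
metadata:
  name: nsenter-demo           # made-up name
spec:
  hostPID: true                # so /proc/1 inside the container is the host's init process
  containers:
    - name: shell
      image: ubuntu:22.04      # ships nsenter (util-linux)
      securityContext:
        privileged: true       # entering host namespaces requires privilege
      command:
        - /bin/bash
        - -c
        - |
          # This prints the host's /etc/os-release, not the container image's,
          # because the command runs inside the host's mount namespace.
          nsenter -m/proc/1/ns/mnt -- cat /etc/os-release
          sleep 3600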

How to change permission of mapped volume in Kubernetes

I am trying to mount a folder owned by a non-root user (xxxuser) in Kubernetes, and I use hostPath for the mount. But whenever the container starts, the folder is owned by user 1001, not xxxuser. It always starts as user 1001. How can I mount this folder as xxxuser?
There are many types of volumes, but I use hostPath. Before starting, I changed the folder's user and group with the chown and chgrp commands, and then mounted the folder as a volume. The container started and I checked the owner of the folder, but it is always user 1001. Such as:
drwxr-x---. 2 1001 1001 70 May 3 14:15 configutil/
volumeMounts:
  - name: configs
    mountPath: /opt/KOBIL/SSMS/home/configutil
volumes:
  - name: configs
    hostPath:
      path: /home/ssmsuser/configutil
      type: Directory
What I want is:
drwxr-x---. 2 xxxuser xxxuser 70 May 3 14:15 configutil/
You may specify the desired group for mounted volumes using the following syntax:
spec:
  securityContext:
    fsGroup: 2000
I tried what you recommended, but my problem continues. I added the lines below to my YAML file:
spec:
  securityContext:
    runAsUser: 999
    runAsGroup: 999
    fsGroup: 999
I use 999 because I use 999 inside my Dockerfile, such as:
RUN groupadd -g 999 ssmsuser && \
useradd -r -u 999 -g ssmsuser ssmsuser
USER ssmsuser
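A likely reason the securityContext alone does not help here is that fsGroup ownership management is generally not applied to hostPath volumes. A sketch of the init-container workaround from the answers above, adapted to the UID/GID 999 used in this Dockerfile (the init container name is made up):

initContainers:
  - name: fix-configutil-owner
    image: busybox
    command: ["sh", "-c", "chown -R 999:999 /opt/KOBIL/SSMS/home/configutil"]
    volumeMounts:
      - name: configs
        mountPath: /opt/KOBIL/SSMS/home/configutil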

chown: changing ownership of '/data/db': Operation not permitted

Can we use the NFS volume plugin to maintain high availability and disaster recovery (HA/DR) across the Kubernetes cluster?
I am running a pod with MongoDB and getting the error:
chown: changing ownership of '/data/db': Operation not permitted
Could anybody please suggest how to resolve this error? (Or)
Is there an alternative volume plugin that is suggested for achieving HA/DR in a Kubernetes cluster?
chown: changing ownership of '/data/db': Operation not permitted .
You'll want to either launch the mongo container as root so that you can chown the directory, or, if the image prohibits it (some images already have a USER mongo clause that prevents the container from escalating privileges back up to root), do one of two things: override the user with a securityContext stanza in containers:, or use an initContainer: to preemptively change the target folder's owner to the mongo UID.
Approach #1:
containers:
  - name: mongo
    image: mongo:something
    securityContext:
      runAsUser: 0
(which may require altering your cluster's config to permit such a thing to appear in a PodSpec)
Approach #2 (which is the one I use with Elasticsearch images):
initContainers:
  - name: chmod-er
    image: busybox:latest
    command:
      - /bin/chown
      - -R
      - "1000"  # or whatever the mongo UID is; use the string "1000", not 1000, due to yaml
      - /data/db
    volumeMounts:
      - name: mongo-data # or whatever
        mountPath: /data/db
containers:
  - name: mongo # then run your container as before
/data/db is a mountpoint even if you don't explicitly mount a volume there; the data is then persisted to an overlay specific to the pod.
Kubernetes mounts all volumes as 0755 root:root, regardless of what the permissions of the directory were initially.
Of course mongo cannot chown that.
If you mount the volume somewhere below /data/db, you will get the same error.
And if you mount the volume higher up, at /data, the data will not be stored on the NFS share, because the mountpoint at /data/db will write to the overlay instead. But you won't get that error anymore.
By adding command: ["mongod"] to your Deployment manifest, you override the default entrypoint script and prevent it from executing the chown:
...
spec:
  containers:
    - name: mongodb
      image: mongo:4.4.0-bionic
      command: ["mongod"]
...
Instead of mounting /data/db, we could mount /data. Internally, mongo will create /data/db. During its entrypoint, mongo tries to chown this directory, but if we mount a volume directly onto that mount point, then as the mongo container user it will not be able to chown it. That's the cause of the issue.
Here is a sample of a working mongo deployment YAML:
...
spec:
  containers:
    - name: mongo
      image: mongo:latest
      volumeMounts:
        - mountPath: /data
          name: mongo-db-volume
  volumes:
    - hostPath:
        path: /Users/name/mongo-data
        type: Directory
      name: mongo-db-volume
...
...