How to change permissions of a mapped volume in Kubernetes

I am trying to mount a folder owned by a non-root user (xxxuser) in Kubernetes, using hostPath for the mount. But whenever the container is started, the folder is mounted as user (1001), not xxxuser. It is always started with user (1001). How can I mount this folder as xxxuser?
There are many types of volumes, but I use hostPath. Before starting, I changed the folder's user and group with the chown and chgrp commands, and then mounted this folder as a volume. The container started, and I checked the owner of the folder, but it is always user (1001):
drwxr-x---. 2 1001 1001 70 May 3 14:15 configutil/
volumeMounts:
- name: configs
  mountPath: /opt/KOBIL/SSMS/home/configutil
volumes:
- name: configs
  hostPath:
    path: /home/ssmsuser/configutil
    type: Directory
On the host, the folder is owned by xxxuser as expected:
drwxr-x---. 2 xxxuser xxxuser 70 May 3 14:15 configutil/

You may specify the desired group ownership of mounted volumes using the following syntax:
spec:
  securityContext:
    fsGroup: 2000
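For example, applied to the hostPath volume from the question, a minimal pod might look like the sketch below (the pod name and image are illustrative, and this assumes the volume type supports fsGroup-based ownership management):
apiVersion: v1
kind: Pod
metadata:
  name: configutil-pod   # illustrative name
spec:
  securityContext:
    # supported volume mounts are recursively chowned to this group at mount time
    fsGroup: 2000
  containers:
  - name: app
    image: my-app:latest   # illustrative image
    volumeMounts:
    - name: configs
      mountPath: /opt/KOBIL/SSMS/home/configutil
  volumes:
  - name: configs
    hostPath:
      path: /home/ssmsuser/configutil
      type: Directory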

I tried what you recommended, but my problem continues. I added the lines below to my YAML file:
spec:
  securityContext:
    runAsUser: 999
    runAsGroup: 999
    fsGroup: 999
I use 999 because that is the UID/GID I create inside my Dockerfile:
RUN groupadd -g 999 ssmsuser && \
    useradd -r -u 999 -g ssmsuser ssmsuser
USER ssmsuser

Related

How can I give my non-root user write permissions on a kubernetes volume mount?

From my understanding (based on this guide: https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/), if I have the following security context specified for a Kubernetes pod:
securityContext:
  # Enforce running as a non-root user
  runAsNonRoot: true
  # Random values should be fine
  runAsUser: 1001
  runAsGroup: 1001
  # Automatically adjust mount ownership to this group
  fsGroup: 1001
  # For whatever reason this is not working
  fsGroupChangePolicy: "Always"
I expect this pod to be run as user 1001 with the group 1001. This is working as expected, because running id in the container results in: uid=1001 gid=1001 groups=1001.
The file system of all mounts should automatically be accessible by user group 1001, because we specified fsGroup and fsGroupChangePolicy. I guess that this also works because when running ls -l in one of the mounted folders, I can see that the access rights for the files look like this: -rw-r--r-- 1 50004 50004. Ownership is still with the uid and gid it was initialised with but I can see that the file is now readable for others.
The question now is: how can I add the write permissions for my fsGroup that still seem to be missing?
You need to add an init container to your pod/deployment/{supported kinds} with commands that change the permissions on the volume mounted to the pod so that they match the user ID the container runs as.
initContainers:
- name: volumepermissions
  image: busybox # any image having the linux utilities mkdir, echo, chown will work
  imagePullPolicy: IfNotPresent
  env:
  - name: "VOLUME_DATA_DIR"
    value: mountpath_for_the_volume
  command:
  - sh
  - -c
  - |
    mkdir -p $VOLUME_DATA_DIR
    chown -R 1001:1001 $VOLUME_DATA_DIR
    echo 'Volume permissions OK ✓'
  volumeMounts:
  - name: data
    mountPath: mountpath_for_the_volume
This is necessary when a container in a pod is running as a user other than root and needs write permissions on a mounted volume.
If this is a Helm chart, this init container can be created as a named template and reused across all the pods/deployments/{supported kinds} that need to change volume permissions, as sketched below.
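A minimal sketch of that Helm approach (the chart name, helper name, and values are illustrative):
{{- /* templates/_helpers.tpl */}}
{{- define "mychart.volumePermissionsInit" -}}
- name: volumepermissions
  image: busybox
  imagePullPolicy: IfNotPresent
  command: ["sh", "-c", "mkdir -p {{ .mountPath }} && chown -R {{ .uid }}:{{ .gid }} {{ .mountPath }}"]
  volumeMounts:
  - name: {{ .volumeName }}
    mountPath: {{ .mountPath }}
{{- end }}
Then, in any pod or deployment template that needs it:
      initContainers:
      {{- include "mychart.volumePermissionsInit" (dict "mountPath" "/data" "uid" 1001 "gid" 1001 "volumeName" "data") | nindent 6 }}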
Update: As mentioned by @The Fool, this should work with the current setup if you are using Kubernetes v1.23 or greater. As of v1.23, securityContext.fsGroup and securityContext.fsGroupChangePolicy are GA/stable.

How to make the chown command work in an NFS share folder

I made an NFS file share and am using it in Kubernetes pods, but when I start the pods, I get this error:
2020-05-31 03:00:06+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.30-1debian10 started.
chown: changing ownership of '/var/lib/mysql/': Operation not permitted
Searching the internet, I understand that NFS by default maps remote root logins to the nfsnobody account, and this error can happen if the privileges are not correct. But I followed the suggested steps and still have not solved it. These are the ways I have tried:
1. Added the insecure option no_root_squash in /etc/exports:
/mnt/data/apollodb/apollopv *(rw,sync,no_subtree_check,no_root_squash)
2. Removed the PVC and PV and used NFS directly in the pod, like this:
volumes:
- name: apollo-mysql-persistent-storage
  nfs:
    server: 192.168.64.237
    path: /mnt/data/apollodb/apollopv
containers:
- name: mysql
  image: 'mysql:5.7'
  ports:
  - name: mysql
    containerPort: 3306
    protocol: TCP
  env:
  - name: MYSQL_ROOT_PASSWORD
    value: gfwge4LucnXwfefewegLwAd29QqJn4
  resources: {}
  volumeMounts:
  - name: apollo-mysql-persistent-storage
    mountPath: /var/lib/mysql
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
schedulerName: default-scheduler
This tells me the problem is not in the pod definition but in the NFS config itself.
3. Gave every privilege using this command:
chmod 777 /mnt/data/apollodb/apollopv
4. Changed ownership to nfsnobody, and to 999:999, like this:
sudo chown nfsnobody:nfsnobody -R apollodb/
sudo chown 999:999 -R apollodb
But the problem is still not solved, so what should I try to make it work?
You wouldn't set this via chown; you would use the fsGroup security setting instead.
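A minimal sketch for the MySQL pod above (this assumes the NFS volume plugin honours fsGroup-based ownership management, and that 999 matches the mysql user's GID inside the mysql:5.7 image):
spec:
  securityContext:
    fsGroup: 999   # assumption: GID of the mysql user inside mysql:5.7
  containers:
  - name: mysql
    image: 'mysql:5.7'
    volumeMounts:
    - name: apollo-mysql-persistent-storage
      mountPath: /var/lib/mysql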

Volume Write Permissions

How can I give the non-root user full access to the mounted volume path in Kubernetes (pod)?
I'm using a volume on the host (/workspace/projects path) and writing to the directory as below.
volumeMounts:
- name: workspace
  mountPath: /workspace/projects
Since I'm copying the git repository content to the /projects directory, git sets the permissions to 755 by default. I want to set the permissions to 775, as I am unable to write to the /projects directory.
Could you please let me know the best way to do this? I saw init containers but am not sure whether there is any better solution. Appreciate any help! Thanks in advance!
This happens when you have to run the process inside the container as a non-root user and you mount a volume to the pod, but the volume has root:root ownership.
To give access to a specific user, an init container is one way, like the following:
initContainers:
- name: volume-mount-permission
  image: busybox
  command: ["sh", "-c", "chmod 775 /workspace/projects && chown -R <user> /workspace/projects"]
  volumeMounts:
  - name: workspace
    mountPath: /workspace/projects
You can also use a security context. Create a user and group, add the user to the group in the Dockerfile, and set the following in the spec:
spec:
  securityContext:
    runAsUser: <UID>
    runAsGroup: <GID>
    fsGroup: <GID>
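For instance, pairing illustrative UID/GID values with the Dockerfile they come from (a sketch; the user, group, and IDs are assumptions, not from the question):
# Dockerfile: create the user/group the container will run as
RUN groupadd -g 1000 appgroup && useradd -r -u 1000 -g appgroup appuser
USER appuser
And the matching pod spec:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000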

Write permissions on volume mount with OpenShift

Using OpenShift 3.11, I've mounted an NFS persistent volume, but the application cannot copy into the new volume, saying:
oc logs my-project-77858bc694-6kbm6
cp: cannot create regular file '/config/dbdata/resdb.lock.db': Permission denied
...
I've tried to change the ownership of the folder by doing a chown in an init container, but it tells me the operation is not permitted.
initContainers:
- name: chowner
  image: alpine:latest
  command: ["/bin/sh", "-c"]
  args:
  - ls -alt /config/dbdata; chown 1001:1001 /config/dbdata;
  volumeMounts:
  - name: my-volume
    mountPath: /config/dbdata/
oc logs my-project-77858bc694-6kbm6 -c chowner
total 12
drwxr-xr-x 3 root root 4096 Nov 7 03:06 ..
drwxr-xr-x 2 99 99 4096 Nov 7 02:26 .
chown: /config/dbdata: Operation not permitted
I expect to be able to write to the mounted volume.
You can give your Pods permission to write into a volume by using fsGroup: GROUP_ID in a Security Context. fsGroup makes your volumes writable by GROUP_ID and makes all processes inside your container part of that group.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: POD_NAME
spec:
  securityContext:
    fsGroup: GROUP_ID
  ...
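One OpenShift-specific caveat: the fsGroup value has to fall within the group range the project's SecurityContextConstraints allow, otherwise the pod is rejected at admission. A quick way to see the range assigned to a project (the project name is illustrative):
oc get project my-project -o yaml
# among the annotations, look for the allowed supplemental group range, e.g.:
#   openshift.io/sa.scc.supplemental-groups: 1000000000/10000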

The server must be started by the user that owns the data directory

I am trying to get some persistent storage for a Docker instance of PostgreSQL running on Kubernetes. However, the pod fails with:
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
This is the NFS configuration:
% exportfs -v
/srv/nfs/postgresql/postgres-registry
kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)
$ ls -ldn /srv/nfs/postgresql/postgres-registry
drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry
$ ls -ln /srv/nfs/postgresql/postgres-registry
total 4
drwx------. 2 999 999 4096 Jul 25 08:36 pgdata
The full log from the pod:
2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T07:32:50.618113000Z This user must also own the server process.
2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english".
2019-07-25T07:32:50.620826000Z Data page checksums are disabled.
2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T07:32:50.647445000Z creating subdirectories ... ok
2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20
2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB
2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC
2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix
2019-07-25T07:32:51.218995000Z creating configuration files ... ok
2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory.
2019-07-25T07:32:51.262238000Z child process exited with exit code 1
2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T07:32:51.380205000Z running bootstrap script ...
The deployment has the following in:
securityContext:
  runAsUser: 999
  supplementalGroups: [999,1000]
  fsGroup: 999
What am I doing wrong?
Edit: Added storage.yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.3.7
    path: /srv/nfs/postgresql/postgres-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Edit: And the full deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
      - name: postgres-registry
        image: postgres:latest
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: postgresdb
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: Sekret
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          subPath: "pgdata"
          name: postgredb-registry-persistent-storage
      volumes:
      - name: postgredb-registry-persistent-storage
        persistentVolumeClaim:
          claimName: postgres-registry-pv-claim
Even more debugging, adding:
command: ["/bin/bash", "-c"]
args: ["id -u; ls -ldn /var/lib/postgresql/data"]
Which returned:
999
drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data
Clearly, the UID/GID are wrong. Why?
Even with the workaround suggested by Jakub Bujny, I get this:
2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T09:32:08.735335000Z This user must also own the server process.
2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english".
2019-07-25T09:32:08.738754000Z Data page checksums are disabled.
2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T09:32:08.766606000Z creating subdirectories ... ok
2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20
2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB
2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC
2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix
2019-07-25T09:32:09.168161000Z creating configuration files ... ok
2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory.
2019-07-25T09:32:09.208849000Z child process exited with exit code 1
2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T09:32:09.274741000Z running bootstrap script ... 999
2019-07-25T09:32:09.278124000Z drwx------. 2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data
Using your setup and ensuring the nfs mount is owned by 999:999 it worked just fine.
You're also missing an 's' in your name: postgredb-registry-persistent-storage
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath for this.
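One common alternative to the subPath approach (a sketch, not verified against this exact setup) is to keep the mount at the volume root and point the postgres image's PGDATA environment variable at a subdirectory, so that initdb works in a directory it can own rather than on the mount point itself:
env:
- name: PGDATA
  value: /var/lib/postgresql/data/pgdata
volumeMounts:
- mountPath: /var/lib/postgresql/data
  name: postgresdb-registry-persistent-storage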
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------ 2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created
$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created
$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
drwx------ 3 postgres postgres 4096 Jul 25 14:46 base
drwx------ 2 postgres postgres 4096 Jul 25 14:46 global
drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
I noticed using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: to the mounted nfs volume behaved normally.
So after a few minutes:
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections
done
server started
$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=#
Checking the uid/gid
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
nfspv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
postgres.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
      - name: postgres-registry
        image: postgres:latest
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: postgresdb
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: Sekret
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgresdb-registry-persistent-storage
      volumes:
      - name: postgresdb-registry-persistent-storage
        persistentVolumeClaim:
          claimName: postgres-registry-pv-claim
I cannot explain why those two IDs are different, but as a workaround I would try to override postgres's entrypoint with:
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]
This type of error is quite common when you link an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access control. The only way to make it work is to link a directory from an ext3 drive into your container.
I got a bit desperate when I played around with Apache/PHP containers, linking the www folder. After I linked files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on youtube, may it helps to understand this problem: https://www.youtube.com/watch?v=eS9O05TTFjM