Unable to write file. Volume mounted as root - kubernetes

I am spinning up a Pod (it comes up with a non-root user) that needs to write data to a volume. The volume comes from a PVC.
The Pod definition is simple:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  containers:
  - name: task-pv-container
    image: jnlp/jenkins-slave:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 500"]
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
When I exec into the Pod and try to write into /usr/share/nginx/html, I get:
jenkins@task-pv-pod:/usr/share/nginx/html$ touch test
touch: cannot touch ‘test’: Permission denied
Looking at the permissions of the directory:
jenkins@task-pv-pod:~$ ls -ld /usr/share/nginx/html
drwxr-xr-x 3 root root 4096 Mar 29 15:52 /usr/share/nginx/html
It's clear that ONLY the root user can write to /usr/share/nginx/html, but that's not what I want.
Is there a way to change the permissions for mounted volumes?

You can consider using an initContainer to mount the volume and change its permissions. Init containers run before the main container(s) start, so the usual pattern is to have a small busybox image (only a few MB) mount the volume and run a chown or chmod on the directory. By the time your pod's primary container runs, the volume(s) will have the correct ownership and access permissions.
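For example, here is a minimal sketch based on the Pod above; the UID/GID value 1000 is an assumption, so replace it with whatever the jenkins user in your image actually uses (check with id inside the container):
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  initContainers:
  - name: volume-permissions
    image: busybox
    # change ownership of the mounted directory before the main container starts
    command: ["sh", "-c", "chown -R 1000:1000 /usr/share/nginx/html"]
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
  containers:
  - name: task-pv-container
    image: jnlp/jenkins-slave:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 500"]
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage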
Alternatively, you can use the init container to inject the proper files, as shown in this example.
Hope this helps!

A security context defines privilege and access control settings for a Pod or Container. Just try securityContext:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  securityContext:
    fsGroup: $jenkins_uid
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  ...
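For example, assuming the jenkins user in the image has UID/GID 1000 (an assumption; verify with id inside the container), the Pod-level security context could look like the sketch below. runAsUser is optional if the image already runs as that user:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  securityContext:
    # run as the jenkins user; volumes that support ownership management
    # are made group-owned (and group-writable) by the fsGroup GID
    runAsUser: 1000
    fsGroup: 1000
  ...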

Related

s3fs mount to shared volume between init container and default container

I want to mount an S3 bucket into a container and then access the mounted files from inside the pod.
I can successfully mount the files in the init container, but the mounted files are not accessible to the main/default container through the shared volume. I tried the same approach with a simpler example using a plain text file, with no s3fs mounting involved, and it worked fine. However, the s3fs-mounted files could not be shared between the init and main container this way.
Below is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  ......
spec:
  ..
  template:
    ..
    spec:
      containers:
      - image: docker-image-of-application
        name: main-demo-container
        volumeMounts:
        - name: shared-volume-location
          mountPath: /mount/script/
        securityContext:
          privileged: true
      initContainers:
      - image: docker-image-of-application
        name: init-demo-container
        command: ['sh', '-c', '/scripts/somefile.sh']
        volumeMounts:
        - name: init-script
          mountPath: /scripts
        - name: shared-volume-location
          mountPath: /mount/files/
        securityContext:
          privileged: true
      volumes:
      - name: init-script
        configMap:
          name: init-script-configmap
          defaultMode: 0755
      - name: shared-volume-location
        emptyDir: {}
Here init-script-configmap is made from somefile.sh, which mounts the bucket and is executed inside the init container. I do not want to use a persistent volume either. I have confirmed that the bucket is mounted successfully inside the init container.

How to mount the same directory to multiple containers in a pod

I'm running multiple containers in a pod. I have a persistent volume and am mounting the same directory into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A where application writes data to app.log
mount /opt/app/logs/app.log to container B to read data back from app.log
- container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/   # container A writes data here to app.log
    name: data
- container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/   # container B reads data from app.log
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ into container-B, I don't see the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files at the given path (/opt/app/logs), say the app.log file. Then from container-B you'll find the app.log file at the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what is creating the issue.
Update-1:
Here is a full yaml from a working example. You can try it yourself to see how things work:
kubectl exec -ti test-pd -c test-container sh
go to /test-path1
create some file using touch command. say "touch a.txt"
exit from test-container
kubectl exec -ti test-pd -c test sh
go to /test-path2
you will find a.txt file here.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
From your post it seems you're using two separate paths: container B is mounted at /opt/app/logs/logs.
Use different file names for each of your containers and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

unable to understand mounting postgres data path onto minikube kubernetes deployment with permission errors

I'm getting started with Kubernetes, and I want to create a simple app with a single webserver and a Postgres database. The problem I'm running into is that the Postgres deployment is giving me permission errors. The following are discussions around this:
https://github.com/docker-library/postgres/issues/116
https://github.com/docker-library/postgres/issues/103
https://github.com/docker-library/postgres/issues/696
Can't get either Postgres permissions or PVC working in AKS
Kubernetes - Pod which encapsulates DB is crashing
Mount local directory into pod in minikube
https://serverfault.com/questions/981459/minikube-using-a-storageclass-to-provision-data-outside-of-tmp
EDIT
spec:
OSX - 10.15.4
minikube - v1.9.2
kubernetes - v1.18.2
minikube setup
minikube start --driver=virtualbox --cpus=2 --memory=5120 --kubernetes-version=v1.18.2 --container-runtime=docker --mount=true --mount-string=/Users/holmes/kubernetes/pgdata:/data/pgdata
The permission error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
I am trying to mount a local OS directory into minikube to be used with the postgres deployment/pod/container volume mount.
After I run the above setup I ssh into minikube (minikube ssh) and check the permissions
# minikube: /
drwxr-xr-x 3 root root 4096 May 13 19:31 data
# minikube: /data
drwx------ 1 docker docker 96 May 13 19:27 pgdata
Running the manifest below surfaces the chmod permission error. If I change the mount string to --mount-string=/Users/holmes/kubernetes/pgdata:/data (leaving out /pgdata) and then minikube ssh to create the pgdata directory:
mkdir -p /data/pgdata
chmod 777 /data/pgdata
I get a different set of permissions before deployment
# minikube: /
drwx------ 1 docker docker 96 May 13 20:10 data
# minikube: /data
drwxrwxrwx 1 docker docker 64 May 13 20:25 pgdata
and after deployment
# minikube: /
drwx------ 1 docker docker 128 May 13 20:25 data
# minikube: /data
drwx------ 1 docker docker 64 May 13 20:25 pgdata
I'm not sure why this changes, and the chmod permission error persists. The reference links above bounce around different methods on different machines and VMs, which I don't understand, and I can't get any of them to work. Can someone walk me through getting this to work? I'm super confused after going through all of the above discussions.
postgres.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: data-block
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: data-block
  labels:
    type: starter
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: docker
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  namespace: data-block
  labels:
    app: postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pgdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: data-block
  labels:
    app: postgres
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: data-block
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12.2
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - name: postgres-vol
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-vol
        persistentVolumeClaim:
          claimName: postgres-pv-claim
UPDATE
I went ahead and replaced the Deployment with a simple Pod. The goal is to map the Postgres /var/lib/postgresql/data directory to my local directory /Users/<my-path>/database/data to persist the data.
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  namespace: data-block
  labels:
    name: postgres-pod
spec:
  containers:
  - name: postgres
    image: postgres:12.3
    imagePullPolicy: IfNotPresent
    ports:
    - name: postgres-port
      containerPort: 5432
    envFrom:
    - configMapRef:
        name: postgres-env-config
    - secretRef:
        name: postgres-secret
    volumeMounts:
    - name: postgres-vol
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: postgres-vol
    hostPath:
      path: /Users/<my-path>/database/data
  restartPolicy: Never
The error: initdb: error: could not access directory "/var/lib/postgresql/data": Permission denied
How to go about mounting the local file directory?
You are declaring the PGDATA field, and that may be the cause of the issue. I faced the same error; it happens because there is already a lost+found folder in that directory, while the container wants it to be an empty dir. Setting the subPath field solves this, and you then don't need any PGDATA field at all. Try omitting it from your ConfigMap and adding subPath pointing to some folder. Please go through the following manifests:
https://github.com/mendix/kubernetes-howto/blob/master/postgres-deployment.yaml
https://www.bmc.com/blogs/kubernetes-postgresql/
Also, when it comes to database deployments you should usually go with a StatefulSet rather than a Deployment.
- name: postgredb
  mountPath: /var/lib/postgresql/data
  # setting subPath will fix your issue; it can be pgdata, postgres,
  # or any other folder name of your choice
  subPath: postgres
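Applied to the manifests from the question, a sketch of the Pod might look like this; the subPath value pgdata is an arbitrary folder name, and it simply makes initdb work in an empty sub-directory of the volume instead of its root:
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  namespace: data-block
spec:
  containers:
  - name: postgres
    image: postgres:12.3
    ports:
    - containerPort: 5432
    envFrom:
    - configMapRef:
        name: postgres-config
    volumeMounts:
    - name: postgres-vol
      mountPath: /var/lib/postgresql/data
      # data lands in <volume>/pgdata rather than the volume root,
      # which may contain lost+found
      subPath: pgdata
  volumes:
  - name: postgres-vol
    persistentVolumeClaim:
      claimName: postgres-pv-claim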

Local Persistent Volume in its own directory

I have got the local persistent volumes to work, using local directories as mount points, storage class, PVC etc, all using standard documentation.
However, when I use this PVC in a Pod, all the files are created at the base of the mount point, i.e. if /data is my mount point, all my application files are stored directly in the /data folder. I can see this creating conflicts in the future, with more than one application writing to the same folder.
Looking for any suggestions or advice to make each PVC or even application files of a Pod into separate directories in the PV.
If you store your data in different directories on your volume, you can use subPath to mount each of those directories at its own mount point, keeping the data separated.
E.g.
apiVersion: v1
kind: Pod
metadata:
  name: podname
spec:
  containers:
  - name: containername
    image: imagename
    volumeMounts:
    - mountPath: /path/to/mount/point
      name: volumename
      subPath: volume_subpath
    - mountPath: /path/to/mount/point2
      name: volumename
      subPath: volume_subpath2
  volumes:
  - name: volumename
    persistentVolumeClaim:
      claimName: pvcname
Another approach is using subPathExpr.
Note:
The subPath and subPathExpr properties are mutually exclusive
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  containers:
  - name: pod3
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    persistentVolumeClaim:
      claimName: pvc1
As described here.
In addition please follow Fixing the Subpath Volume Vulnerability in Kubernetes here and here
Alternatively, you can simply change the mount paths and give each application its own mount path, so that each Pod's files end up in separate directories.

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data on the host can't be accessed once the volume is mounted.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so that the redirection actually writes the file
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath:
      # hostPath requires a path; /tmp matches the directory used in the question
      path: /tmp
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519