Read-only file system error in Kubernetes

I am getting an error while adding NFS in the Kubernetes cluster. I was able to mount the NFS share, but I am not able to create a file or directory in the mount location.
This is my YAML file:
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 10.01.26.81
        path: /nfs_data/nfs_share_home_test/testuser
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /home/kube/testuser
Then I ran the following commands to create the pod and open a shell in it:
kubectl apply -f session.yaml
kubectl exec -it pod-using-nfs sh
After I exec into the pod:
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
touch: hello: Read-only file system
The expected output is:
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
/home/kube/testuser# ls
hello
Do I need to add a securityContext to the YAML to fix this?
Any help would be appreciated!
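For reference (not from the original thread): neither the nfs volume nor the volumeMount above is marked read-only, and both readOnly fields default to false, so a "Read-only file system" error usually points at the export options on the NFS server (e.g. rw and no_root_squash in /etc/exports) rather than at a missing securityContext. A sketch that merely makes the read-write intent explicit on the client side:

spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 10.01.26.81
        path: /nfs_data/nfs_share_home_test/testuser
        readOnly: false          # default; shown only to make the intent explicit
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /home/kube/testuser
          readOnly: false        # default; shown only to make the intent explicit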

Related

OpenShift-Job to copy data from sftp to persistent volume

I would like to deploy a job which copies multiple files from sftp to a persistent volume and then completes.
My current version of this job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
        - name: init-pv
          image: w0wka91/ubuntu-sshpass
          command: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
          volumeMounts:
            - mountPath: /mnt/myvolume
              name: myvolume
          envFrom:
            - secretRef:
                name: ftp-secrets
      restartPolicy: Never
      volumes:
        - name: myvolume
          persistentVolumeClaim:
            claimName: myvolume
  backoffLimit: 3
When I deploy the job, the pod starts but it always fails to create the container:
sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume: no such file or directory
It seems like the command gets executed before the volume is mounted, but I couldn't find any documentation about it.
When I debug the pod and execute the command manually, it all works fine, so the command is definitely working.
Any ideas how to overcome this issue?
The volume mount is incorrect, change it to:
volumeMounts:
  - mountPath: /mnt/myvolume
    name: myvolume
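Independent of the volume mount, the "... /mnt/myvolume: no such file or directory" message is also what you get when the whole string in command: is treated as the path of a single executable; exec-form commands are not run through a shell, so $PASSWORD and the * glob are never expanded either. A sketch of the same container with the command wrapped in a shell (assuming the image ships /bin/sh):

containers:
  - name: init-pv
    image: w0wka91/ubuntu-sshpass
    command: ["/bin/sh", "-c"]
    args:
      - sshpass -p "$PASSWORD" scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume
    envFrom:
      - secretRef:
          name: ftp-secrets
    volumeMounts:
      - mountPath: /mnt/myvolume
        name: myvolume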

How to share netns between different containers in k8s?

I want to share the netns between two containers which belong to the same pod.
I know all containers in the same pod share the network namespace by default. I tried mounting the same host path at /var/run/netns in both containers. I can create a netns in the first container, but when I access that netns from the second container, it says "Invalid argument". Running "file /var/run/netns/n1" reports a normal empty file. What can I do? Is it possible?
But the manifest file below works perfectly in my case. You can verify it with the following steps (the exact commands are repeated after the manifest).
Step 1: kubectl apply -f (the manifest file below)
Step 2: kubectl exec -ti test-pd -c test1 -- /bin/bash
Step 3: go to the test-pd directory, create a .txt file using the touch command, and exit.
Step 4: kubectl exec -ti test-pd -c test2 -- sh
Step 5: go to the test-pd directory and check for the created file. I found the created file there.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: nginx
      name: test1
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
    - image: pkbhowmick/go-rest-api:2.0.1
      name: test2
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: DirectoryOrCreate
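The steps above, spelled out as commands (the manifest file name and the .txt file name are illustrative):

kubectl apply -f test-pd.yaml                    # Step 1
kubectl exec -ti test-pd -c test1 -- /bin/bash   # Step 2
#   cd /test-pd && touch hello.txt && exit       # Step 3 (run inside test1)
kubectl exec -ti test-pd -c test2 -- sh          # Step 4
#   ls /test-pd                                  # Step 5 (run inside test2) -> hello.txt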

minikube - Create a file from Job on local machine

I want to create a file from a job on my local machine, where I use minikube. I want to do that in /tmp directory.
Here's my CronJob definition
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  generateName: test-
spec:
  schedule: 1 2 * * *
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
            - name: test-volume
              hostPath:
                path: /tmp
          containers:
            - name: test-job
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - touch /data/ok.txt
              volumeMounts:
                - mountPath: /data
                  name: test-volume
          restartPolicy: OnFailure
I mounted /tmp from my machine to /data on minikube. I did that using minikube mount "/tmp:data", and then I checked via minikube ssh if it works fine.
The problem is that with this configuration I cannot see the ok.txt file being created in /tmp, and I can't even see it in /data on minikube. I added a sleep command and got into the container to check whether the file had been created. I listed all the files in /data from the Pod, and the file was there.
How can I create this ok.txt file on my machine, in the /tmp?
Thanks in advance for help!!
Root cause
The root cause of this issue is related to the paths in the YAML.
In the Minikube documentation for the mount command, you can find:
minikube mount [flags] <source directory>:<target directory>
where:
source directory is the directory path on your machine
target directory is the directory path mounted inside minikube
To achieve what you want, you should use `minikube mount "/tmp:/data"`.
Side Note
The Minikube driver was set to Docker.
Please keep in mind that the mount must stay alive for the whole time:
📁 Mounting host path /tmp into VM as /data ...
...
🚀 Userspace file server: ufs starting
✅ Successfully mounted /tmp to /data
📌 NOTE: This process must stay alive for the mount to be accessible ...
Solution
Minikube Mount:
$ minikube mount "/tmp:/data"
CronJob YAML:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *" # I have changed the schedule time for testing
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test-job
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - touch /data/ok.txt
              volumeMounts:
                - mountPath: /data
                  name: test-volume
          volumes:
            - name: test-volume
              hostPath:
                path: /data
          restartPolicy: OnFailure
Output
On Host:
$ pwd
/home/user
user@minikube:~$ cd /tmp
user@minikube:/tmp$ ls | grep ok.txt
ok.txt
On Minikube:
user@minikube:~$ minikube ssh
docker@minikube:~$ pwd
/home/docker
docker@minikube:~$ ls
docker@minikube:~$ cd /data
docker@minikube:/data$ ls | grep ok
ok.txt
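As a side note, while testing you don't have to wait for the schedule; a one-off Job can be created from the CronJob (using the CronJob name test from the manifest above):

$ kubectl create job test-manual --from=cronjob/test
$ ls /tmp | grep ok.txt     # on the host, once the Job has completed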
Please let me know if it worked for you.

Mount / copy a file from host to Pod in kubernetes using minikube

I'm writing a kubectl configuration to start an image and copy a file into the container.
I need the file Config.yaml in /, so /Config.yaml needs to be a valid file.
I need that file in the Pod before it starts, so kubectl cp does not work.
I have Config2.yaml in my local folder, and I'm starting the pod like this:
kubectl apply -f pod.yml
Here is my pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
    - name: python
      image: mypython
      volumeMounts:
        - name: config
          mountPath: /Config.yaml
  volumes:
    - name: config
      hostPath:
        path: Config2.yaml
        type: File
If I try it like this, it also fails:
- name: config-yaml
  mountPath: /
  subPath: Config.yaml
  #readOnly: true
If you just need the information contained in the config.yaml to be present in the pod from the time it is created, use a configMap instead.
Create a configMap that contains all the data stored in the config.yaml and mount it into the correct path in the pod. This would not work for read/write, but it works wonderfully for read-only data.
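For example, a minimal sketch of that approach (the ConfigMap name app-config and the create command are illustrative, not part of the original answer):

# kubectl create configmap app-config --from-file=Config.yaml
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
    - name: python
      image: mypython
      volumeMounts:
        - name: config
          mountPath: /Config.yaml   # the file shows up at /Config.yaml
          subPath: Config.yaml      # key inside the ConfigMap
  volumes:
    - name: config
      configMap:
        name: app-config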
You can try a postStart lifecycle handler here to validate the file before the Pod starts.
Please refer to the Kubernetes documentation on container lifecycle hooks.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      resources: {}
      volumeMounts:
        - mountPath: /config.yaml
          name: config
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "apt update && apt install yamllint -y && yamllint /config.yaml"]
  volumes:
    - name: config
      hostPath:
        path: /tmp/config.yaml
        type: File
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
If config.yaml is invalid, the Pod won't start.
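If the hook fails, a quick way to confirm what happened is to look at the Pod's events; the kubelet records a FailedPostStartHook event:

kubectl get pod nginx
kubectl describe pod nginx    # look for a FailedPostStartHook event in the Events section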

Hosting local directory to Kubernetes Pod

I have a single node Kubernetes cluster. I want the pod I make to have access to /mnt/galahad on my local computer (which is the host for the cluster).
Here is my Kubernetes config yaml:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
    - name: place-for-stuff
      hostPath:
        path: /mnt/galahad
  containers:
    - name: galahad-test-distributor
      image: vergilkilla/distributor:v9
      volumeMounts:
        - name: place-for-stuff
          mountPath: /mnt
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
I start my pod like such:
kubectl apply -f ./create-distributor.yaml -n galahad-test
I get a terminal into my newly-made pod:
kubectl exec -it galahad-test-distributor -n galahad-test -- /bin/bash
I go to /mnt in my pod and it doesn't contain anything from /mnt/galahad. I make a new file in the host's /mnt/galahad folder, and it doesn't show up in the pod. How do I get the host path's files etc. to be reflected in the pod? Is this possible in the somewhat straightforward way I am trying here (defining it per pod definition, without creating separate PersistentVolumes and PersistentVolumeClaims)?
Your yaml file looks good.
Using this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
    - name: place-for-stuff
      hostPath:
        path: /mnt/galahad
  containers:
    - name: galahad-test-distributor
      image: busybox
      args: [/bin/sh, -c,
             'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
      volumeMounts:
        - name: place-for-stuff
          mountPath: /mnt
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
I ran this and everything worked as expected:
>>> kubectl apply -f create-distributor.yaml # side note: you don't need
                                             # to specify the namespace here
                                             # since it's inside the yaml file
pod/galahad-test-distributor created
>>> touch /mnt/galahad/file
>>> kubectl -n galahad-test exec galahad-test-distributor ls /mnt
file
Are you sure you are adding your files in the right place? For instance, if you are running your cluster inside a VM (e.g. minikube), make sure you are adding the files inside the VM, not on the machine hosting the VM.
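For instance, with minikube you can either expose the directory to the VM with the mount command (it has to stay running in a separate terminal) or create the test file from inside the VM (the paths are the ones from the question):

minikube mount /mnt/galahad:/mnt/galahad   # keep this process alive while testing
# or
minikube ssh
touch /mnt/galahad/file                    # run inside the VM; assumes the directory exists there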