minikube - Create a file from Job on local machine - kubernetes

I want to create a file from a job on my local machine, where I use minikube. I want to do that in the /tmp directory.
Here's my CronJob definition:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  generateName: test-
spec:
  schedule: 1 2 * * *
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
            - name: test-volume
              hostPath:
                path: /tmp
          containers:
            - name: test-job
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - touch /data/ok.txt
              volumeMounts:
                - mountPath: /data
                  name: test-volume
          restartPolicy: OnFailure
I mounted /tmp from my machine to /data in minikube. I did that using minikube mount "/tmp:data", and then checked via minikube ssh that it works.
The problem is that with this configuration I cannot see the file ok.txt being created in /tmp; I can't even see it in /data on my minikube. I added a sleep command so I could get into the container and check whether the file had been created. I listed all files in /data from the Pod, and the file was there.
How can I create this ok.txt file on my machine, in /tmp?
Thanks in advance for your help!

Root cause
The root cause of this issue is the paths used in the mount command and in the YAML.
In the Minikube documentation for the mount command, you can find:
minikube mount [flags] <source directory>:<target directory>
where:
source directory is the directory path on your machine
target directory is the directory path mounted in minikube
To achieve what you want, you can use `minikube mount "/tmp:/data"`.
Side Note
The Minikube driver was set to Docker.
Please keep in mind that the mount process must stay alive the whole time:
📁 Mounting host path /tmp into VM as /data ...
...
🚀 Userspace file server: ufs starting
✅ Successfully mounted /tmp to /data
📌 NOTE: This process must stay alive for the mount to be accessible ...
Solution
Minikube Mount:
$ minikube mount "/tmp:/data"
CronJob YAML:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *" # I have changed schedule time for testing
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test-job
              image: busybox
              args:
                - /bin/sh
                - '-c'
                - touch /data/ok.txt
              volumeMounts:
                - mountPath: /data
                  name: test-volume
          volumes:
            - name: test-volume
              hostPath:
                path: /data
          restartPolicy: OnFailure
Output
On Host:
$ pwd
/home/user
user@minikube:~$ cd /tmp
user@minikube:/tmp$ ls | grep ok.txt
ok.txt
On Minikube
user@minikube:~$ minikube ssh
docker@minikube:~$ pwd
/home/docker
docker@minikube:~$ ls
docker@minikube:~$ cd /data
docker@minikube:/data$ ls | grep ok
ok.txt
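If you don't want to wait for the schedule while testing, a one-off Job can be created directly from the CronJob (the job name test-manual is just an example):

```shell
# Trigger the CronJob immediately by creating a Job from it
kubectl create job test-manual --from=cronjob/test
# Watch until the job completes
kubectl get jobs --watch
# The file should then appear on the host
ls /tmp | grep ok.txt
```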
Please let me know if it worked for you.

Related

OpenShift-Job to copy data from sftp to persistent volume

I would like to deploy a job which copies multiple files from sftp to a persistent volume and then completes.
My current version of this job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
        - name: init-pv
          image: w0wka91/ubuntu-sshpass
          command: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
          volumeMounts:
            - mountPath: /mnt/myvolume
              name: myvolume
          envFrom:
            - secretRef:
                name: ftp-secrets
      restartPolicy: Never
      volumes:
        - name: myvolume
          persistentVolumeClaim:
            claimName: myvolume
  backoffLimit: 3
When I deploy the job, the pod starts but it always fails to create the container:
sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume: no such file or directory
It seems like the command gets executed before the volume is mounted, but I couldn't find any documentation about it.
When I debug the pod and execute the command manually, it all works fine, so the command is definitely working.
Any ideas how to overcome this issue?
The volume mount is incorrect, change it to:
volumeMounts:
  - mountPath: /mnt/myvolume
    name: myvolume
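Note also that the error message quotes the entire command string as a single path, which suggests the one-element `command` list is being treated as the executable name rather than a shell command. A possible fix, sketched here as an untested assumption (same image and secret as above), is to run the string through a shell:

```yaml
command: ["/bin/sh", "-c"]
args: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user@sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
```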

How to share netns between different containers in k8s?

I want to share the netns between two containers which belong to the same pod.
I know all containers in the same pod share the network by default. I tried to mount the same host path to /var/run/netns for both containers. I can create a netns in the first container, but when I access that netns in the second container, it says "Invalid argument". Running file /var/run/netns/n1 reports a normal empty file. What can I do? Is it possible?
The manifest file below works perfectly in my case. You can check it yourself by following these steps:
Step 1: kubectl apply -f (below manifest file)
Step 2: kubectl exec -ti test-pd -c test1 /bin/bash
Step 3: go to the test-pd directory, create a .txt file using the touch command, and exit.
Step 4: kubectl exec -ti test-pd -c test2 sh
Step 5: go to the test-pd directory and check for the created file. I found the created file there.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: nginx
      name: test1
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
    - image: pkbhowmick/go-rest-api:2.0.1
      name: test2
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: DirectoryOrCreate
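The verification steps above can be sketched as shell commands (assuming the manifest is saved as test-pd.yaml):

```shell
kubectl apply -f test-pd.yaml
# create a file from the first container...
kubectl exec -ti test-pd -c test1 -- touch /test-pd/hello.txt
# ...and confirm it is visible from the second container
kubectl exec -ti test-pd -c test2 -- ls /test-pd
```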

how do scripts/files get mounted to kubernetes pods

I'd like to create a CronJob that runs a Python script mounted from a PVC, but I don't understand how to get test.py into the container from my local file system.
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: update-fingerprints
              image: python:3.6.2-slim
              command: ["/bin/bash"]
              args: ["-c", "python /client/test.py"]
              volumeMounts:
                - name: application-code
                  mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
            - name: application-code
              persistentVolumeClaim:
                claimName: application-code-pv-claim
You have a volume called application-code, and in it lies the test.py file. You mount the volume, but the mountPath does not match your shell command.
The argument is python /client/test.py, so you expect the file to be in the /client directory. You just have to mount the volume at that path:
volumeMounts:
  - name: application-code
    mountPath: /client
Update
If you don't need the file outside the cluster, it would be much easier to integrate it into your Docker image. Here is an example Dockerfile:
FROM python:3.6.2-slim
WORKDIR /data
COPY test.py .
ENTRYPOINT ["/bin/bash", "-c", "python /data/test.py"]
Push the image to your Docker registry and reference it from your YAML:
containers:
  - name: update-fingerprints
    image: <your-container-registry>:<image-name>
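Building and pushing the image could look like this (the registry name and tag below are placeholders, not from the original question):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t registry.example.com/update-fingerprints:latest .
# Push it so the cluster can pull it
docker push registry.example.com/update-fingerprints:latest
```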

Read-only file system error in Kubernetes

I am getting an error while adding NFS to the Kubernetes cluster. I was able to mount the NFS share but am not able to add a file or directory at the mount location.
This is my yaml file
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 10.01.26.81
        path: /nfs_data/nfs_share_home_test/testuser
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /home/kube/testuser
Then I ran the following commands to create the pod:
kubectl apply -f session.yaml
kubectl exec -it pod-using-nfs sh
After I exec to the pod,
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
touch: hello: Read-only file system
Expected output is
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
/home/kube/testuser# ls
hello
Do I need to add a securityContext to the YAML to fix this?
Any help would be appreciated!

What is the VM folder when using Linux as OS and kvm as driver in kubernetes?

The Kubernetes docs provide, for each OS and each driver, the VM folder to use when mounting a volume of type hostPath.
Nevertheless, this case is missing:
OS: linux
Driver: kvm
Host folder: /home
VM folder: ???
This is the deployment I would like to use in order to avoid recreating the image after each change to the code.
This is only for the development env. In the production env, the code will be directly in the image.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: /hosthome/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube/src/
Thanks...
Based on this doc, host folder sharing is not implemented in the KVM driver yet, and this is the driver I am actually using.
To overcome this, there are 2 solutions:
Use the virtualbox driver, so that you can mount your hostPath volume by changing the path on your localhost from /home/THE_USR/... to /hosthome/THE_USR/...
Mount your volume to the minikube VM with the command $ minikube mount /home/THE_USR/.... The command will return the path of your mounted volume on the minikube VM. An example is given below.
Example
(a) mounting a volume on the minikube VM
the minikube mount command returned the path /mount-9p
$ minikube mount -v 3 /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube
Mounting /home/amine/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube into /mount-9p on the minikubeVM
This daemon process needs to stay alive for the mount to still be accessible...
2017/03/31 06:42:27 connected
2017/03/31 06:42:27 >>> 192.168.42.241:34012 Tversion tag 65535 msize 8192 version '9P2000.L'
2017/03/31 06:42:27 <<< 192.168.42.241:34012 Rversion tag 65535 msize 8192 version '9P2000'
(b) Specification of the path on the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: php-hostpath
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: php-hostpath
    spec:
      containers:
        - name: php-hostpath
          image: php:7.0-apache
          ports:
            - containerPort: 80
          volumeMounts:
            - name: vol-php-hostpath
              mountPath: /var/www/html
      volumes:
        - name: vol-php-hostpath
          hostPath:
            path: /mount-9p
(c) Checking that the volume was mounted correctly
amine@amine-Inspiron-N5110:~/DockerProjects/gcloud-kubernetes/application/06-hostPath-volume-example-minikube$ kubectl exec -ti php-hostpath-3498998593-6mxsn bash
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo "This is my first docker project";
root@php-hostpath-3498998593-6mxsn:/var/www/html# cat index.php
<?php
echo 'This is my first hostPath on kubernetes';
root@php-hostpath-3498998593-6mxsn:/var/www/html#
NB: this kind of volume mounting is only for the development environment. In a production environment, the code would not be mounted: it would be in the image.
Hope it helps others.