I'd like to create a CronJob that runs a Python script from a volume mounted as a PVC, but I don't understand how to get test.py into the container from my local file system.
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: python:3.6.2-slim
            command: ["/bin/bash"]
            args: ["-c", "python /client/test.py"]
            volumeMounts:
            - name: application-code
              mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
          - name: application-code
            persistentVolumeClaim:
              claimName: application-code-pv-claim
You have a volume called application-code, and in it lies the test.py file. You mount the volume, but you are not setting the mountPath to match your shell command.
The argument is python /client/test.py, so the script is expected to be in the /client directory. You just have to mount the volume at that path:
volumeMounts:
- name: application-code
  mountPath: /client
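To get test.py from your local file system onto that PVC in the first place, one option (a sketch, not the only way) is to start a throwaway pod that mounts the same claim and copy the file in with kubectl cp; the pod name pvc-loader here is just a placeholder:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-loader   # temporary helper pod, delete it when done
spec:
  containers:
  - name: loader
    image: busybox
    command: ["sleep", "3600"]   # keep the pod alive long enough to copy the file
    volumeMounts:
    - name: application-code
      mountPath: /client
  volumes:
  - name: application-code
    persistentVolumeClaim:
      claimName: application-code-pv-claim

Then copy the file and clean up:

kubectl cp ./test.py pvc-loader:/client/test.py
kubectl delete pod pvc-loader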
Update
If you don't need the file outside the cluster, it would be much easier to bake it into your Docker image. Here is an example Dockerfile:
FROM python:3.6.2-slim
WORKDIR /data
COPY test.py .
ENTRYPOINT ["/bin/bash", "-c", "python /data/test.py"]
Push the image to your Docker registry and reference it from your YAML:
containers:
- name: update-fingerprints
  image: <your-container-registry>/<image-name>:<tag>
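For example (the registry host, image name, and tag below are placeholders, adjust them to your setup):

docker build -t registry.example.com/update-fingerprints:1.0 .
docker push registry.example.com/update-fingerprints:1.0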
Related
Having an issue with a Kubernetes CronJob: when it runs, it says that /usr/share/nginx/html is not a directory ("no such file or directory"), yet it definitely is. It's baked into the image; if I load the image straight into Docker, the folder is definitely there.
Here is the yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: php-cron
spec:
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 1800
      completions: 2
      parallelism: 2
      template:
        spec:
          containers:
          - name: php-cron-video
            image: my-image-here
            command:
            - "cd /usr/share/nginx/html"
            - "php bin/console processvideo"
            volumeMounts:
            - mountPath: /usr/share/nginx/html/uploads
              name: uploads-volume
            - mountPath: /usr/share/nginx/html/private_uploads
              name: private-uploads-volume
          restartPolicy: Never
          volumes:
          - name: uploads-volume
            hostPath:
              path: /data/website/uploads
              type: DirectoryOrCreate
          - name: private-uploads-volume
            hostPath:
              path: /data/website/private_uploads
              type: DirectoryOrCreate
  schedule: "* * * * *"
docker run -it --rm my-image-here bash
Loads up straight into the /usr/share/nginx/html folder
What's going on here? The same image works fine as a normal Deployment.
Assuming your image truly has /usr/share/nginx/html/bin baked in, try changing the command to:
...
command: ["sh","-c","cd /usr/share/nginx/html && php bin/console processvideo"]
...
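As a side note, instead of cd-ing in a shell you could set the container's workingDir field and invoke php directly; a minimal sketch based on the same spec:

...
- name: php-cron-video
  image: my-image-here
  workingDir: /usr/share/nginx/html
  command: ["php", "bin/console", "processvideo"]
...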
We have built a Docker image that contains an existing file, /opt/app/agent/oneagent. When I test the Docker image on its own, I can see the file inside that directory.
However, when I use it as an initContainer to mount that directory into the main container, the main container does not see the directory.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /data
  # These containers are run during pod initialization
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh"]
    args: ["ls -la /opt/app/agent/oneagent"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The init container logs show:
/bin/sh: can't open 'ls -la /opt/app/agent/oneagent': No such file or directory
What I really want to do is mount /opt/app/agent/oneagent from the init container into the main container.
Is it possible to mount existing files, or can init containers only work by downloading the files at startup?
You can copy the files from /opt/app/agent/oneagent in your init container to the workdir volume. When your main container mounts workdir, it will see the copied files. The error in your question is not volume related; just adding -c to your command will do:
...
command: ["/bin/sh","-c"]
args: ["ls -la /opt/app/agent/oneagent"]
...
I got it working using "-c" inside the command key.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c"]
    args: ["cp -R /opt/app/agent /data"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /opt/app
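To confirm the copy actually landed where the main container expects it, a quick exec into the running pod should list the agent files (pod and container names as in the spec above):

kubectl exec test -c nginx -- ls -la /opt/app/agent/oneagent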
I'm running multiple containers in a pod. I have a persistent volume and am mounting the same directory into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read the data back from app.log
- container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/ => container A is writing data here to **app.log** file
    name: data
- container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/ => container B read data from **app.log**
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ into container-B, I'm not seeing the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files under the given path (/opt/app/logs), say the app.log file. From container-B you will then find app.log under the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what's causing the issue.
Update-1:
Here is a full yaml from a working example. You can try it yourself to see how things work.
1. kubectl exec -ti test-pd -c test-container sh
2. Go to /test-path1.
3. Create a file using the touch command, say touch a.txt.
4. Exit from test-container.
5. kubectl exec -ti test-pd -c test sh
6. Go to /test-path2.
7. You will find the a.txt file there.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
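Once both manifests are applied, the verification steps above can also be run non-interactively, roughly like this (same pod and container names as in pod.yaml, assuming both images provide a shell and ls):

kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl exec test-pd -c test-container -- touch /test-path1/a.txt
kubectl exec test-pd -c test -- ls /test-path2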
From your post it seems you're using two separate paths: container B is mounted to /opt/app/logs/logs.
Use different file names for each of your containers, and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
I'm testing out whether I can mount data from S3 using an initContainer. What I intended and expected was the same volume being mounted to both the initContainer and the Container. Data from S3 gets downloaded by the initContainer to a mountPath called /s3-data, and since the Container runs after the initContainer, it can read from the path the volume was mounted to.
However, the Container doesn't show me any logs and just says 'stream closed'. The initContainer's logs show that the data was successfully downloaded from S3.
What am I doing wrong? Thanks in advance.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-job
spec:
  template:
    spec:
      initContainers:
      - name: data-download
        image: <My AWS-CLI Image>
        command: ["/bin/sh", "-c"]
        args:
        - aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
      - name: check-proper-data-mount
        image: <My Image>
        command: ["/bin/sh", "-c"]
        args:
        - cd /s3-data
        - echo "Just s3-data dir"
        - ls
        - echo "After making a sample file"
        - touch sample.txt
        - ls
        volumeMounts:
        - mountPath: /s3-data
          name: s3-data
      volumes:
      - name: s3-data
        emptyDir: {}
      restartPolicy: OnFailure
  backoffLimit: 6
You can try something like the below for the args part:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    purpose: demonstrate-command
  name: command-demo
spec:
  containers:
    - args:
        - cd /s3-data;
          echo "Just s3-data dir";
          ls;
          echo "After making a sample file";
          touch sample.txt;
          ls;
      command:
        - /bin/sh
        - -c
      image: "<My Image>"
      name: containername
For reference:
How to set multiple commands in one yaml file with Kubernetes?
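A small variation, not from the answer above: joining the commands with && instead of ; makes the shell exit non-zero as soon as any step fails, which is usually what you want in a Job:

      args:
        - cd /s3-data &&
          echo "Just s3-data dir" &&
          ls &&
          echo "After making a sample file" &&
          touch sample.txt &&
          ls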
I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a K8s Job. This happens even with the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirection into /data/config actually happens
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath: {}
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
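As a quick sanity check of the pattern (assuming the manifest above is saved as my-app.yaml and the image provides cat), the main container should see the file written by the init container:

kubectl apply -f my-app.yaml
kubectl exec my-app -c my-app -- cat /data/config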