The data is not being shared across containers - kubernetes

I am trying to create two containers within a pod, with one container being an init container. The job of the init container is to download a jar and make it available to the app container. I am able to create everything and the logs look good, but when I check, I do not see the jar in my app container. Below is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service-test
  labels:
    app: web-service-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-service-test
  template:
    metadata:
      labels:
        app: web-service-test
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - name: web-service-test
          image: some image
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: shared-data
              mountPath: /tmp/jar
      initContainers:
        - name: init-container
          image: busybox
          volumeMounts:
            - name: shared-data
              mountPath: /jdbc-jar
          command:
            - wget
            - "https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"

You need to save the jar in the /jdbc-jar folder (where the shared volume is mounted in your init container). Try updating your YAML to the following:
command: ["/bin/sh"]
args: ["-c", "wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]

Add the following block of code to your init container section:
command: ["/bin/sh","-c"]
args: ["wget -O /jdbc-jar/ojdbc8-19.3.0.0.jar https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/19.3.0.0/ojdbc8-19.3.0.0.jar"]
The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands. In the wget command I have added the -O flag to download the jar from the specified url and save it as /jdbc-jar/ojdbc8-19.3.0.0.jar .
To check whether the jar is present in the app container, simply execute:
$ kubectl exec -it <web-service-test-pod> -- /bin/bash
Then go to the folder where the shared volume is mounted in that container ($ cd /tmp/jar in your deployment) and list the files in it ($ ls -al). You should see your jar there.
For further examples, see the Kubernetes documentation on defining commands for containers and on init containers.

Related

why does this curl fetch from tmp inside container in k8s pod?

This deployment creates one pod with an init container in it. The init container mounts a volume at /tmp/web-content and writes a single line, 'check this out', to index.html:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
        - name: web-content
          emptyDir: {}
      initContainers:
        - name: init-con
          image: busybox:1.31.0
          command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
          volumeMounts:
            - name: web-content
              mountPath: /tmp/web-content/
      containers:
        - image: nginx:1.17.3-alpine
          name: nginx
          volumeMounts:
            - name: web-content
              mountPath: /usr/share/nginx/html
          ports:
            - containerPort: 80
I fire up a temporary pod to check whether I can see the line 'check this out' using curl:
k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
It does show the line. However, how does curl know which directory to look in? I don't explicitly specify that it should go to /tmp/web-content.
It's because of the mountPath. The root directive indicates the actual path on your hard drive where this virtual host's assets (HTML, images, CSS, and so on) are located, i.e. /usr/share/nginx/html. The index setting tells Nginx what file or files to serve when it's asked to display a directory. So when you curl the default port 80, nginx serves you back the content of that html directory.
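If you want to see this mapping for yourself, you can print the default server block shipped in the nginx image (the config path below is assumed from the stock nginx images; use kubectl get pods to find the actual pod name):

# dump nginx's default vhost config; its 'root' directive points at /usr/share/nginx/html
$ kubectl exec <test-init-container-pod> -c nginx -- cat /etc/nginx/conf.d/default.conf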
Just an elaboration of P....'s answer.
The following creates a common storage space "tagged" with the name web-content:
volumes:
  - name: web-content
    emptyDir: {}
web-content is mounted at /tmp/web-content/ in the init container named init-con, which writes 'check this out' into index.html:
initContainers:
  - name: init-con
    image: busybox:1.31.0
    command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
    volumeMounts:
      - name: web-content
        mountPath: /tmp/web-content/
The same web-content volume, which now holds index.html, is mounted as the directory /usr/share/nginx/html in the nginx container, so the file shows up as /usr/share/nginx/html/index.html (the default landing page for nginx). That is why you see 'check this out' when you curl the pod:
containers:
  - image: nginx:1.17.3-alpine
    name: nginx
    volumeMounts:
      - name: web-content
        mountPath: /usr/share/nginx/html
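A quick check that both containers really do see the same file (the pod name is a placeholder):

# the init container wrote to /tmp/web-content; the nginx container reads the same
# emptyDir volume at /usr/share/nginx/html
$ kubectl exec <test-init-container-pod> -c nginx -- cat /usr/share/nginx/html/index.html
check this out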

Kubernetes initContainers to copy file and execute as part of Lifecycle Hook PostStart

I am trying to execute some scripts as part of a StatefulSet deployment. I have added the script as a ConfigMap and mount it as a volume inside the pod definition. I use the lifecycle postStart exec command to execute this script, but it fails with a permission issue.
Based on certain articles, I found that we should copy this file in an initContainer and then use that (I am not sure why we should do this or what difference it will make).
Still, I tried it, and it also gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It's done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
        - name: "postgres-ghost"
          image: alpine
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      containers:
        - name: postgres
          image: postgres
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "/scripts/poststart.sh"]
          ports:
            - containerPort: 5432
              name: dbport
          ....
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-configmap-initscripts
            items:
              - key: poststart.sh
                path: poststart.sh
The error I am getting is a permission error when the postStart hook tries to run the script.
A postStart hook will be called at least once, but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have the execute bit set, hence the permission error.
It is better to run the script in initContainers. Here's a quick example that does a simple chmod; in your case you can execute the script instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
    - name: scripts
      configMap:
        name: busybox
        items:
          - key: test.sh
            path: test.sh
    - name: runnable
      emptyDir: {}
  initContainers:
    - name: prepare
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["while :; do . /runnable/test.sh; sleep 1; done"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
EOF
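Once that pod is running, you can verify that the copied, chmod-ed script executes by tailing the main container's logs (names as in the example above):

$ kubectl logs -f busybox -c busybox
It's done
It's done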

OpenShift-Job to copy data from sftp to persistent volume

I would like to deploy a job which copies multiple files from sftp to a persistent volume and then completes.
My current version of this job looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: job
spec:
  template:
    spec:
      containers:
        - name: init-pv
          image: w0wka91/ubuntu-sshpass
          command: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
          volumeMounts:
            - mountPath: /mnt/myvolume
              name: myvolume
          envFrom:
            - secretRef:
                name: ftp-secrets
      restartPolicy: Never
      volumes:
        - name: myvolume
          persistentVolumeClaim:
            claimName: myvolume
  backoffLimit: 3
When I deploy the job, the pod starts but it always fails to create the container:
sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume: no such file or directory
It seems like the command gets executed before the volume is mounted, but I couldn't find any documentation about it.
When I debug the pod and execute the command manually, it all works fine, so the command is definitely working.
Any ideas how to overcome this issue?
The volume mount is incorrect, change it to:
volumeMounts:
  - mountPath: /mnt/myvolume
    name: myvolume
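As a side note, the "no such file or directory" error in the question is also what you get when the whole command string is passed as a single array element: Kubernetes then tries to exec a binary literally named "sshpass -p ...", which does not exist. A hedged sketch of the container section using the same shell-wrapping pattern shown earlier in this thread:

containers:
  - name: init-pv
    image: w0wka91/ubuntu-sshpass
    # run through a shell so $PASSWORD is expanded and the glob is evaluated
    command: ["/bin/sh", "-c"]
    args: ["sshpass -p $PASSWORD scp -o StrictHostKeyChecking=no -P 22 -r user#sftp.mydomain.com:/RESSOURCES/* /mnt/myvolume"]
    envFrom:
      - secretRef:
          name: ftp-secrets
    volumeMounts:
      - mountPath: /mnt/myvolume
        name: myvolume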

Creating a Docker container that runs forever using bash

I'm trying to create a Pod, for testing purposes, with a container in it that runs forever, using the K8s API. I have the following YAML spec for the Pod, which runs a container that exits straight away:
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
    - name: ubuntu
      image: ubuntu:trusty
      command: ["echo"]
      args: ["Hello World"]
I can't find much documentation around the command: field, but ideally I'd like to put a while loop in there somewhere, printing out numbers forever.
If you want to keep printing Hello every few seconds you can use:
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    app: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      ports:
        - containerPort: 80
      command: ["/bin/sh", "-c", "while :; do echo 'Hello'; sleep 5; done"]
You can see the output using kubectl logs <pod-name>
Another option to keep a container running without printing anything is to use the sleep command on its own, for example:
command: ["/bin/sh", "-ec", "sleep 10000"]

how do scripts/files get mounted to kubernetes pods

I'd like to create a CronJob that runs a Python script from a volume mounted via a PVC, but I don't understand how to get test.py into the container from my local file system:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: update-fingerprints
              image: python:3.6.2-slim
              command: ["/bin/bash"]
              args: ["-c", "python /client/test.py"]
              volumeMounts:
                - name: application-code
                  mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
            - name: application-code
              persistentVolumeClaim:
                claimName: application-code-pv-claim
You have a volume called application-code. In there lies the test.py file. Now you mount the volume, but you are not setting the mountPath according to your shell command.
The argument is python /client/test.py, so you expect the file to be placed in the /client directory. You just have to mount the volume at this path:
volumeMounts:
  - name: application-code
    mountPath: /client
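To actually get test.py from your local file system onto that PVC in the first place, one option (a hedged sketch; the pvc-loader pod name and busybox image are assumptions, not part of the original setup) is a temporary helper pod that mounts the same claim so you can kubectl cp the file in:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-loader            # hypothetical helper pod
spec:
  containers:
    - name: loader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # stay alive long enough to copy the file
      volumeMounts:
        - name: application-code
          mountPath: /client
  volumes:
    - name: application-code
      persistentVolumeClaim:
        claimName: application-code-pv-claim

Once it is running, kubectl cp test.py pvc-loader:/client/test.py copies the file onto the volume, after which the helper pod can be deleted.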
Update
If you don't need the file outside the cluster, it would be much easier to integrate it into your Docker image. Here is an example Dockerfile:
FROM python:3.6.2-slim
WORKDIR /data
COPY test.py .
ENTRYPOINT ["/bin/bash", "-c", "python /data/test.py"]
Push the image to your Docker registry and reference it from your YAML:
containers:
  - name: update-fingerprints
    image: <your-container-registry>:<image-name>
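The corresponding build-and-push step would look roughly like this (registry and tag names are placeholders):

$ docker build -t <your-container-registry>/update-fingerprints:latest .
$ docker push <your-container-registry>/update-fingerprints:latest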