Docker init container mounting existing files issue - Kubernetes

We have built a Docker image that contains an existing file at /opt/app/agent/oneagent. When I test the Docker image individually, I can see the file inside the directory.
However, when I use the image as an init container to mount that directory into the main container, the main container does not see the directory.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /data
  # These containers are run during pod initialization
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh"]
    args: ["ls -la /opt/app/agent/oneagent"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The init container logs show:
/bin/sh: can't open 'ls -la /opt/app/agent/oneagent': No such file or directory
What I really want to do is mount /opt/app/agent/oneagent from the init container into the main container.
Is it possible to mount existing files, or can init containers only work by downloading files at startup?

You can copy files from /opt/app/agent/oneagent in your init container to the workdir volume. When your main container mounts workdir, it will see the copied files. The error in your question is not volume related; just adding -c to your command will fix it.
...
command: ["/bin/sh","-c"]
args: ["ls -la /opt/app/agent/oneagent"]
...
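Without -c, /bin/sh treats its first argument as the path of a script file to run, which is exactly why it reports "can't open". A minimal sketch you can try in any shell to see the difference:

# sh interprets the whole string as a script filename:
/bin/sh "ls -la /opt/app/agent/oneagent"
# => /bin/sh: can't open 'ls -la /opt/app/agent/oneagent': No such file or directory

# With -c, sh interprets the string as a command line:
/bin/sh -c "ls -la /opt/app/agent/oneagent"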

I got it working using "-c" inside the command key.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c"]
    args: ["cp -R /opt/app/agent /data"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /opt/app
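Since the init container copies /opt/app/agent into the volume and the main container mounts that volume at /opt/app, the agent files should be visible in nginx. A quick check (assuming the pod is named test, as above):

kubectl exec test -c nginx -- ls -la /opt/app/agent/oneagent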

Related

s3fs mount to shared volume between init container and default container

I want to mount an S3 bucket into a container and then access the mounted files inside the pod.
I can successfully mount the files in the init container; however, the mounted files are not accessible to the main/default container through the shared volume. I tried a simpler example with a plain text file and no s3fs mounting, and it worked fine. However, the s3fs-mounted files could not be shared between the init and main containers using the same approach.
Below is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  ......
spec:
  ..
  template:
    ..
    spec:
      containers:
      - image: docker-image-of-application
        name: main-demo-container
        volumeMounts:
        - name: shared-volume-location
          mountPath: /mount/script/
        securityContext:
          privileged: true
      initContainers:
      - image: docker-image-of-application
        name: init-demo-container
        command: ['sh', '-c', '/scripts/somefile.sh']
        volumeMounts:
        - name: init-script
          mountPath: /scripts
        - name: shared-volume-location
          mountPath: /mount/files/
        securityContext:
          privileged: true
      volumes:
      - name: init-script
        configMap:
          name: init-script-configmap
          defaultMode: 0755
      - name: shared-volume-location
        emptyDir: {}
Here init-script-configmap is created from somefile.sh, which mounts the bucket and is executed inside the init container. I do not want to use a persistent volume either, and I have confirmed that the bucket is mounted successfully inside the init container.
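For reference, a ConfigMap like init-script-configmap is typically created from the script file like this (a sketch; the file name matches the mount above, so it appears as /scripts/somefile.sh in the init container):

kubectl create configmap init-script-configmap --from-file=somefile.sh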

why does this curl fetch from tmp inside container in k8s pod?

This deployment creates one pod with an init container. The init container mounts a volume at /tmp/web-content and writes a single line, 'check this out', to index.html:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      initContainers:
      - name: init-con
        image: busybox:1.31.0
        command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
        volumeMounts:
        - name: web-content
          mountPath: /tmp/web-content/
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
I fire up a temporary pod to check whether I can see the line 'check this out' using curl:
k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
It does show the line. However, how does curl know which directory to look in?
I don't explicitly specify that it should go to /tmp/web-content.
It's because of the mountPath. As the nginx documentation puts it:

"The root directive indicates the actual path on your hard drive where this virtual host's assets (HTML, images, CSS, and so on) are located, i.e. /usr/share/nginx/html. The index setting tells Nginx what file or files to serve when it's asked to display a directory."

When you curl the default port 80, nginx serves you back the content of that html directory.
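You can see those directives in the stock config shipped with the nginx image (a hedged example; substitute your actual pod name, and note the exact file contents may vary slightly by image version):

kubectl exec <nginx-pod> -- cat /etc/nginx/conf.d/default.conf
# ...
#     root   /usr/share/nginx/html;
#     index  index.html index.htm;
# ...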
Just an elaboration of P....'s answer.
The following creates a common storage space "tagged" with the name web-content:

volumes:
- name: web-content
  emptyDir: {}
web-content is mounted at /tmp/web-content/ in the init container named init-con, which writes 'check this out' into index.html:
initContainers:
- name: init-con
  image: busybox:1.31.0
  command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
  volumeMounts:
  - name: web-content
    mountPath: /tmp/web-content/
The same web-content volume, which now holds index.html, is mounted as the directory /usr/share/nginx/html in the nginx container, so the file appears at /usr/share/nginx/html/index.html (the default landing page for nginx). That is why it shows 'check this out' when you curl it:
containers:
- image: nginx:1.17.3-alpine
  name: nginx
  volumeMounts:
  - name: web-content
    mountPath: /usr/share/nginx/html
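To confirm the chain end to end, you can read the file straight from the running nginx container (a sketch; <nginx-pod> is a placeholder for the generated pod name):

kubectl exec <nginx-pod> -c nginx -- cat /usr/share/nginx/html/index.html
# check this out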

How to share a file from initContainer to base container in Kubernetes

I have created a custom alpine image (alpine-audit) which includes a jar file in the /tmp directory. What I need is to use that alpine-audit image as the initContainers image and copy the jar file I've included to a location the Pod's main container can access.
My yaml file is as below:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', 'auditlogger-1.0.0.jar']
    volumeMounts:
    - name: workdir
      mountPath: "/tmp"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
I think there is some mistake in the command line.
I assumed the initContainer would copy the jar file to the emptyDir, and then the nginx-based container could access it via the mountPath.
But the Pod is not even created. Can someone point out where it has gone wrong?
When you mount a volume onto a directory in a pod, that directory contains only the content of the volume. If you mount the emptyDir at /tmp in your alpine-audit:0.1.0 container, the /tmp directory becomes empty, hiding the jar baked into the image. I would mount that volume on some other directory, like /app, then copy the .jar from /tmp to /app.
The Pod is probably not starting because the initContainer fails to run the command.
Try this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', '/app/auditlogger-1.0.0.jar'] # <--- copy from /tmp to the newly mounted emptyDir
    volumeMounts:
    - name: workdir
      mountPath: "/app" # <-- change the mount path so it does not overwrite your .jar
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
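With this layout, the jar copied into /app lands in the workdir volume, which the nginx container mounts at /usr/share/nginx/html. A quick check once the pod is running (a sketch using the pod name from the spec above):

kubectl exec init-demo -c nginx -- ls -la /usr/share/nginx/html
# auditlogger-1.0.0.jar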

how do scripts/files get mounted to kubernetes pods

I'd like to create a CronJob that runs a Python script mounted from a PVC, but I don't understand how to get test.py into the container from my local file system:
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: python:3.6.2-slim
            command: ["/bin/bash"]
            args: ["-c", "python /client/test.py"]
            volumeMounts:
            - name: application-code
              mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
          - name: application-code
            persistentVolumeClaim:
              claimName: application-code-pv-claim
You have a volume called application-code. In it lies the test.py file. Now you mount the volume, but you are not setting the mountPath according to your shell command.
The argument is python /client/test.py, so you expect the file to be placed in the /client directory. You just have to mount the volume at that path:
volumeMounts:
- name: application-code
  mountPath: /client
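To get test.py from your local file system into the claim in the first place, one option is kubectl cp into any pod that already has the PVC mounted (a sketch; <pod-with-pvc> is a placeholder, assuming the volume is mounted at /client there):

kubectl cp test.py <pod-with-pvc>:/client/test.py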
Update
If you don't need the file outside the cluster, it would be much easier to bake it into your Docker image. Here is an example Dockerfile:
FROM python:3.6.2-slim
WORKDIR /data
COPY test.py .
ENTRYPOINT ["/bin/bash", "-c", "python /data/test.py"]
Push the image to your docker registry and reference it from your yml.
containers:
- name: update-fingerprints
  image: <your-container-registry>:<image-name>
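Building and pushing could look like this (a sketch; the registry path and tag are placeholders):

docker build -t <your-container-registry>/update-fingerprints:latest .
docker push <your-container-registry>/update-fingerprints:latest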

Kubernetes: Perform command shell only container

I'm using a spec that contains one init container and two containers.
The init container creates a file at /etc/secrets/secrets.env that the first container has to source: source /etc/secrets/secrets.env.
I'm trying to do that with this spec:
spec:
  containers:
  - name: source-envs
    image: ????
    command: ["/bin/sh", "-c", "source /etc/secrets/secrets.env"]
  volumes:
  - name: sidekick-backend-volume
    emptyDir: {}
I can't quite figure out how to do that.
Any ideas?
It should work by sharing a volume between the init container and the first container, mounted on /etc/secrets:
spec:
  initContainers:
  - name: create-envs
    image: ????
    command: ["/bin/sh", "-c", "touch /etc/secrets/secrets.env"]
    volumeMounts:
    - mountPath: /etc/secrets/
      name: sidekick-backend-volume
  containers:
  - name: source-envs
    image: ????
    command: ["/bin/sh", "-c", "source /etc/secrets/secrets.env"]
    volumeMounts:
    - mountPath: /etc/secrets/
      name: sidekick-backend-volume
  volumes:
  - name: sidekick-backend-volume
    emptyDir: {}
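One caveat worth noting: source only affects the shell that runs it, and that shell exits as soon as the command finishes. If the goal is for your application to see those variables, source the file and start the app in the same shell, for example (a sketch; your-app is a placeholder for the real entrypoint):

command: ["/bin/sh", "-c", "source /etc/secrets/secrets.env && exec your-app"]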