Why does this curl fetch from /tmp inside the container in a k8s pod? - kubernetes

This deployment creates one pod with an init container. The init container mounts a volume at /tmp/web-content and writes a single line, 'check this out', to index.html:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      initContainers:
      - name: init-con
        image: busybox:1.31.0
        command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
        volumeMounts:
        - name: web-content
          mountPath: /tmp/web-content/
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
I fire up a temporary pod to check whether I can see the line 'check this out' using curl:
k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
It does show the line. However, how does curl know which directory to go to?
I don't explicitly specify that it should go to /tmp/web-content.

It's because of the mountPath:
"The root directive indicates the actual path on your hard drive where this virtual host's assets (HTML, images, CSS, and so on) are located, i.e. /usr/share/nginx/html. The index setting tells Nginx what file or files to serve when it's asked to display a directory."
When you curl the default port 80, nginx serves you back the index.html from that html directory.
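For context, the default server block that ships in the nginx image (in /etc/nginx/conf.d/default.conf; details may vary slightly between versions) looks roughly like this, which is where /usr/share/nginx/html and index.html come from:
server {
    listen       80;
    server_name  localhost;

    location / {
        # serve files from this directory
        root   /usr/share/nginx/html;
        # when a directory is requested, serve one of these files
        index  index.html index.htm;
    }
}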

Just an elaboration of #P....'s answer.
The following creates a common storage space "tagged" with the name web-content:
volumes:
- name: web-content
  emptyDir: {}
web-content is mounted at /tmp/web-content/ in the init container named init-con, which writes 'check this out' into index.html:
initContainers:
- name: init-con
  image: busybox:1.31.0
  command: ['sh', '-c', "echo 'check this out' > /tmp/web-content/index.html"]
  volumeMounts:
  - name: web-content
    mountPath: /tmp/web-content/
The same web-content volume, which now holds index.html, is mounted as the directory /usr/share/nginx/html in the nginx container, so the file appears at /usr/share/nginx/html/index.html (the default landing page for nginx). That is why it shows 'check this out' when you curl it.
containers:
- image: nginx:1.17.3-alpine
  name: nginx
  volumeMounts:
  - name: web-content
    mountPath: /usr/share/nginx/html
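A quick way to verify this, assuming the deployment above is running in the mars namespace (kubectl exec can target a deployment directly):
kubectl -n mars exec deploy/test-init-container -c nginx -- cat /usr/share/nginx/html/index.html
# check this out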

Related

Docker init container mounting existing files issue

We had built a docker image that contains an existing file inside: /opt/app/agent/oneagent. When I test the docker image individually, I can see the file inside the directory.
However, when I use it as an initContainer to mount that directory to the main container, it does not see the directory.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /data
  # These containers are run during pod initialization
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh"]
    args: ["ls -la /opt/app/agent/oneagent"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The init container logs show:
/bin/sh: can't open 'ls -la /opt/app/agent/oneagent': No such file or directory
What I really want to do is mount /opt/app/agent/oneagent from the init container into the main container.
Is it possible to mount existing files, or can init containers only work by downloading the file at startup?
You can copy files from /opt/app/agent/oneagent in your init container to the workdir volume. When your main container mounts workdir, it will see the copied files. The error in your question is not volume related: without -c, /bin/sh treats the whole args string as the name of a script file to open, hence the "can't open" message. Just adding -c to your command will do:
...
command: ["/bin/sh","-c"]
args: ["ls -la /opt/app/agent/oneagent"]
...
I got it working by using "-c" inside the command key:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  initContainers:
  - name: dynainit
    image: localhost:5000/dyna1
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c"]
    args: ["cp -R /opt/app/agent /data"]
    volumeMounts:
    - name: workdir
      mountPath: "/data"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /opt/app
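To confirm the copy worked (a hypothetical check, since localhost:5000/dyna1 is a private image), the files copied into the volume should now be visible at the main container's mount path:
kubectl exec test -c nginx -- ls -la /opt/app/agent/oneagent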

How to share a file from initContainer to base container in Kubernetes

I have created a custom alpine image (alpine-audit) which includes a jar file in the /tmp directory. What I need is to use that alpine-audit image as the initContainer's base image and copy the jar file I've included to a location that the Pod's container can access.
My yaml file is below:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', 'auditlogger-1.0.0.jar']
    volumeMounts:
    - name: workdir
      mountPath: "/tmp"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
I think there is some mistake in the command line.
I assumed the initContainer copies the jar file to the emptyDir, and the nginx-based container can then access it via the mountPath.
But it does not even create the Pod. Can someone point out where it has gone wrong?
When you mount a volume to a directory in a pod, that directory contains only the content of the volume. If you mount the emptyDir at /tmp in your alpine-audit:0.1.0 container, the /tmp directory becomes empty: the mount shadows the jar that was baked into the image. Mount the volume on some other directory, like /app, then copy the .jar from /tmp to /app.
The Pod is probably not starting because the initContainer fails when running the command.
Try this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', '/app/auditlogger-1.0.0.jar'] # <--- copy from /tmp to the newly mounted emptyDir
    volumeMounts:
    - name: workdir
      mountPath: "/app" # <-- change the mount path so it does not shadow your .jar
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
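Once the Pod comes up, you should be able to confirm that the jar landed in the nginx document root (the volume is mounted there in the main container):
kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html
# auditlogger-1.0.0.jar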

How to mount the same directory to multiple containers in a pod

I'm running multiple containers in a pod. I have a persistent volume and am mounting the same directory into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read the data back from app.log
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/ # container A writes data to **app.log** here
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/ # container B reads data from **app.log**
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ to container-B, I'm not seeing the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files under the given path (/opt/app/logs), say the app.log file. From container-B you'll then find app.log in the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what is creating the issue.
Update-1:
Here is a full yaml from a working example. You can run through it yourself to see how things work:
kubectl exec -ti test-pd -c test-container sh
go to /test-path1 and create a file, say touch a.txt
exit from test-container
kubectl exec -ti test-pd -c test sh
go to /test-path2
you will find the a.txt file there
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
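To try it yourself, apply both manifests and then run the exec steps above:
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pod test-pd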
From your post it seems you're using two separate paths: container B is mounted at /opt/app/logs/logs.
Use different file names for each of your containers and fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

How to cp data from one container to another using kubernetes

Say we have a simple deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ikg-api-demo
  name: ikg-api-demo
spec:
  selector:
    matchLabels:
      app: ikg-api-demo
  replicas: 3
  template:
    metadata:
      labels:
        app: ikg-api-demo
    spec:
      containers:
      - name: ikg-api-demo
        imagePullPolicy: Always
        image: example.com/main_api:private_key
        ports:
        - containerPort: 80
The problem is that this image/container depends on another image/container: it needs to cp data from the other image, or use some shared volume.
How can I tell Kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?
It looks like this article explains how, but it's not 100% clear how it works. It looks like you create a shared volume and launch the two containers using that shared volume?
So, according to that link, I added this to my deployment.yml:
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/nltk_data:latest
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/main_api:private_key
    ports:
    - containerPort: 80
My primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.
So I assume what I need to do is mount it at some other location, and then set the ENTRYPOINT of the source data container to:
ENTRYPOINT ["cp", "-r", "/nltk_data_source", "/nltk_data"]
so that it writes to the shared volume once the container is launched.
So I have two questions:
How do I run one container to completion, like a job, before another container starts, using Kubernetes?
How do I write to a shared volume without having that shared volume overwrite what's in the image? In other words, if I have /xyz in the image/container, I don't want to have to copy /xyz to /shared_volume_mount_location if I don't have to.
How to run one container to completion before another container starts, using Kubernetes?
Use initContainers. I have updated your deployment.yml below, assuming example.com/nltk_data:latest is your data image.
How to write to a shared volume without having that shared volume overwrite what's in the image?
Since you know what is in your image, you need to select an appropriate mount path. I would use /mnt/nltk_data.
Updated deployment.yml with init containers:
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  initContainers:
  - name: init-ikg-api-demo
    imagePullPolicy: Always
    # You can use command if you don't want to change the ENTRYPOINT
    command: ['sh', '-c', 'cp -r /nltk_data_source /mnt/nltk_data']
    volumeMounts:
    - name: shared-data
      mountPath: /mnt/nltk_data
    image: example.com/nltk_data:latest
  containers:
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/main_api:private_key
    ports:
    - containerPort: 80
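Assuming both images exist and the rollout succeeds, you can check that the data made it into the main container (the shared volume is mounted at /nltk_data there):
kubectl -n ikg-api-demo exec deploy/ikg-api-demo -- ls /nltk_data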

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens even with the simplest utility, so I have included a stripped-down example of my yaml config. The local file, hello.txt, referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod launched by the Kubernetes Job terminates with Status=Error and generates the log: ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed when the volume is mounted. Note that a hostPath volume exposes the filesystem of the node the pod is scheduled on, so in a multi-node cluster the file must exist in /tmp on that particular node.
You can instead make use of an init container to pre-populate the data in the volume:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the > redirection actually writes the file
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {} # hostPath requires a path field; emptyDir fits this pre-populate pattern
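After the pod starts, the pre-populated file should be visible from the main container (my-app:latest is a placeholder image from the example):
kubectl exec my-app -c my-app -- cat /data/config
# {'address':'10.0.1.192:2379/db'}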
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519