s3fs mount to shared volume between init container and default container - kubernetes

I want to mount an s3 bucket into a container and then access the mounted files elsewhere inside the pod.
I can successfully mount the bucket in the init container, but the mounted files are not accessible to the main/default container through the shared volume. I tried a simpler example with a plain text file, where no s3fs mounting was involved, and it worked fine. However, the s3fs-mounted files could not be shared between the init and main containers using the same approach.
Below is the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  ......
spec:
  ..
  template:
    ..
    spec:
      containers:
      - image: docker-image-of-application
        name: main-demo-container
        volumeMounts:
        - name: shared-volume-location
          mountPath: /mount/script/
        securityContext:
          privileged: true
      initContainers:
      - image: docker-image-of-application
        name: init-demo-container
        command: ['sh', '-c', '/scripts/somefile.sh']
        volumeMounts:
        - name: init-script
          mountPath: /scripts
        - name: shared-volume-location
          mountPath: /mount/files/
        securityContext:
          privileged: true
      volumes:
      - name: init-script
        configMap:
          name: init-script-configmap
          defaultMode: 0755
      - name: shared-volume-location
        emptyDir: {}
where the init-script-configmap is made from somefile.sh, which mounts the bucket and is executed inside the init container. I do not want to use a persistent volume either. I have confirmed that the bucket is mounted successfully inside the init container.
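For context, the mount script in the ConfigMap boils down to a single s3fs call. A minimal sketch of what somefile.sh might look like, assuming placeholder values for the bucket name, credential file, and endpoint (none of these are from the original):

#!/bin/sh
# Sketch only: my-bucket, /etc/passwd-s3fs and the endpoint URL are placeholders.
# allow_other makes the mount readable by non-root users; it requires
# user_allow_other to be enabled in /etc/fuse.conf inside the image.
s3fs my-bucket /mount/files \
    -o passwd_file=/etc/passwd-s3fs \
    -o url=https://s3.amazonaws.com \
    -o allow_other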

Related

How to share a file from initContainer to base container in Kubernetes

I have created a custom alpine image (alpine-audit) which includes a jar file in the /tmp directory. What I need is to use that alpine-audit image as the initContainers base image and copy the jar file I've included to a location where the Pod's main container can access it.
My YAML file is below:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', 'auditlogger-1.0.0.jar']
    volumeMounts:
    - name: workdir
      mountPath: "/tmp"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
I think there is some mistake in the command line.
I assumed the initContainer would copy the jar file to the emptyDir volume, and the nginx-based container could then access it via the mountPath.
But it does not even create the Pod. Can someone point out where it has gone wrong?
When you mount a volume to a directory in a pod, that directory contains only the contents of the volume. If you mount the emptyDir onto /tmp of your alpine-audit:0.1.0 image, the /tmp directory becomes empty, hiding the jar that was baked into the image. I would mount that volume on some other directory, like /app, and then copy the .jar from /tmp to /app.
The Pod is probably not starting because the initContainer fails running the command.
Try this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: my.private-artifactory.com/audit/alpine-audit:0.1.0
    command: ['cp', '/tmp/auditlogger-1.0.0.jar', '/app/auditlogger-1.0.0.jar'] # <--- copy from /tmp to the newly mounted emptyDir
    volumeMounts:
    - name: workdir
      mountPath: "/app" # <--- changed mount path so it does not hide your .jar
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
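If the init container succeeds, the jar should then be visible to the main container through the shared volume. A quick check, assuming the Pod name init-demo from the manifest above:

kubectl exec init-demo -c nginx -- ls -l /usr/share/nginx/html/auditlogger-1.0.0.jar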

two container pod creation in kubernetes

Create a pod that runs two containers and ensure that the pod has a shared volume that both containers can use to communicate with each other: write an HTML file in one container and try accessing it from the other container.
Can anyone tell me how to do it?
Example pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Official document: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
The above example uses emptyDir, so if the Pod is deleted and recreated you will lose the data.
If you have any requirement to keep the data, I would suggest using a PVC instead of emptyDir. I would recommend using NFS if you can.
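To confirm the file is actually shared, assuming the Pod name two-containers from the example above:

kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html
# expected output: Hello from the debian container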

How to mount the same directory to multiple containers in a pod

I'm running multiple containers in a pod. I have a persistent volume and I am mounting the same directory into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read the data back from app.log
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/   # container A writes data to app.log here
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/   # container B reads data from app.log
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ into container-B, I'm not seeing the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files at the given path (/opt/app/logs), say the app.log file. From container-B you will then find app.log at the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what is causing the issue.
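As an aside: if you genuinely need to expose just the single file rather than the whole directory, a subPath mount is the usual pattern. This is a sketch, not part of the original answer, and it assumes app.log already exists in the volume:

volumeMounts:
- mountPath: /opt/app/logs/app.log
  name: data
  subPath: app.log   # bind-mounts only this file from the volume

One caveat: if the application rotates app.log by renaming and recreating it, the subPath mount keeps pointing at the old file, so mounting the whole directory as shown above is generally the safer choice.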
Update-1:
Here is a full YAML from a working example. You can try it yourself to see how things work:
1. kubectl exec -ti test-pd -c test-container sh
2. Go to /test-path1.
3. Create a file using the touch command, e.g. touch a.txt.
4. Exit test-container.
5. kubectl exec -ti test-pd -c test sh
6. Go to /test-path2.
7. You will find the a.txt file there.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
From your post it seems you're using two separate paths: container B is mounted at /opt/app/logs/logs.
Use different file names for each of your containers and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod launched by the Kubernetes Job terminates with Status=Error and logs ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed when the volume is mounted.
You will need to make use of an init container to pre-populate the data in the volume:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the > redirection actually writes the file
    command:
    - sh
    - -c
    - echo -n "{'address':'10.0.1.192:2379/db'}" > /data/config
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    # hostPath requires a path field; an emptyDir fits this pre-population pattern
    emptyDir: {}
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
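Once the Pod is running, you can confirm the init container populated the volume, assuming the Pod name my-app from the manifest above:

kubectl exec my-app -- cat /data/config
# expected output: {'address':'10.0.1.192:2379/db'}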

Directories creation inside the Kubernetes Persistent Volume

How would we create a directory inside a Kubernetes persistent volume so it can be used as a subPath mount in the container? E.g. a mysql directory should be created while claiming the persistent volume.
I would probably put an init container into my podspec that simply mounts the volume, runs mkdir -p to create the directory, and then exits. You could also do this in the target container itself with some kind of script.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
This is how I implemented the wise solution of @brett-wagner with an initContainer and mkdir -p. I create two sub-directories, my-app-data and my-app-media, in my NFS server volume /exports:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nfs-server-deploy
  labels:
    app: my-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nfs-server
  template:
    metadata:
      labels:
        app: my-nfs-server   # must match spec.selector.matchLabels
    spec:
      containers:
      - name: my-nfs-server-cntr
        image: k8s.gcr.io/volume-nfs:0.8
        volumeMounts:
        - name: my-nfs-server-exports
          mountPath: "/exports"
      initContainers:
      - name: volume-dirs-init-cntr
        image: busybox:1.35
        command:
        - "/bin/mkdir"
        args:
        - "-p"
        - "/exports/my-app-data"
        - "/exports/my-app-media"
        volumeMounts:
        - name: my-nfs-server-exports
          mountPath: "/exports"
      volumes:
      - name: my-nfs-server-exports
        persistentVolumeClaim:
          claimName: my-nfs-server-pvc
I think you could use a readinessProbe with an execAction to create the subfolder. That will make sure your folder is ready before the container accepts requests.
Otherwise you could use the container's command to create it, but that will be executed only after the container starts.
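For completeness, a consumer Pod could then reference the pre-created directory through subPath, roughly like this (a sketch: the image, password, and reuse of the claim name from the answer above are placeholder assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: mysql-demo
spec:
  containers:
  - name: mysql
    image: mysql:8
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme   # placeholder only
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
      subPath: mysql    # the directory created inside the volume
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-nfs-server-pvc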