Creating directories inside a Kubernetes Persistent Volume - kubernetes

How would we create a directory inside a Kubernetes persistent volume so it can be mounted in the container as a subPath? E.g. a mysql directory should be created while claiming the persistent volume.

I would probably put an init container into my pod spec that simply mounts the volume, runs a mkdir -p to create the directory, and then exits. You could also do this in the target container itself with some kind of script.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
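For the mysql example from the question, a minimal sketch of that init-container approach might look like the snippet below; the PVC name data-pvc, the image tags, and the mount paths are assumptions for illustration, not something given in the question:
spec:
  initContainers:
    - name: make-subdir
      image: busybox
      # pre-create the directory the main container will mount as a subPath
      command: ["mkdir", "-p", "/work-dir/mysql"]
      volumeMounts:
        - name: data
          mountPath: /work-dir
  containers:
    - name: mysql
      image: mysql:8
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc   # assumed PVC name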

This is how I implemented the wise solution of @brett-wagner with an initContainer and mkdir -p. I create two sub-directories, my-app-data and my-app-media, in my NFS server volume /exports:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nfs-server-deploy
  labels:
    app: my-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nfs-server
  template:
    metadata:
      labels:
        app: my-nfs-server
    spec:
      containers:
        - name: my-nfs-server-cntr
          image: k8s.gcr.io/volume-nfs:0.8
          volumeMounts:
            - name: my-nfs-server-exports
              mountPath: "/exports"
      initContainers:
        - name: volume-dirs-init-cntr
          image: busybox:1.35
          command:
            - "/bin/mkdir"
          args:
            - "-p"
            - "/exports/my-app-data"
            - "/exports/my-app-media"
          volumeMounts:
            - name: my-nfs-server-exports
              mountPath: "/exports"
      volumes:
        - name: my-nfs-server-exports
          persistentVolumeClaim:
            claimName: my-nfs-server-pvc

I think you could use a readinessProbe with an execAction to create the sub-folder. It will make sure the folder is ready before the container starts accepting requests.
Otherwise you could use the container's command to create it, but that will only be executed after the container starts.
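For illustration, a rough sketch of the readinessProbe variant; the /data/mysql path is only an assumed example:
readinessProbe:
  exec:
    # create the sub-folder (idempotent) and report ready once it exists
    command: ["sh", "-c", "mkdir -p /data/mysql && test -d /data/mysql"]
  initialDelaySeconds: 2
  periodSeconds: 5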

Related

container level securityContext fsGroup

I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try I used hostPath as the storage backend. With this, I needed to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
And basically I don't want to change a host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
        - name: lh-directus-uploads-volume
          persistentVolumeClaim:
            claimName: lh-directus-uploads-pvc
        - name: lh-directus-dbdata-volume
          persistentVolumeClaim:
            claimName: lh-directus-dbdata-pvc
      containers:
        # Redis Cache
        - name: redis
          image: redis:6
        # Database
        - name: database
          image: postgres:12
          volumeMounts:
            - name: lh-directus-dbdata-volume
              mountPath: /var/lib/postgresql/data
        # Directus
        - name: directus
          image: directus/directus:latest
          securityContext:
            fsGroup: 1000
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I read about initContainers....
But kindly tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
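For context, the validation error comes from fsGroup being a pod-level field (part of PodSecurityContext) rather than a per-container one; a sketch of the pod-level placement, which validates but may or may not solve the Longhorn ownership question, would be:
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000   # pod-level; applies to volumes whose driver supports fsGroup
      containers:
        - name: directus
          image: directus/directus:latest
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads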

Kubernetes Persistent Volume overwrites image data

I have a pod that reads from an image that contains data within /var/www/html. I want this data to be stored in a persistent volume. This is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      containers:
        - name: app
          image: my/toolkit-app:working
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html
              name: toolkit-volume
              subPath: html
      volumes:
        - name: toolkit-volume
          persistentVolumeClaim:
            claimName: azurefile
      imagePullSecrets:
        - name: my-cred
However when I look into the pod I can see the directory is empty:
If I comment out the persistent volume:
#volumeMounts:
#  - mountPath: /var/www/html
#    name: toolkit-volume
#    subPath: html
I can see that the image data is there:
So it seems like the persistent volume is overwriting the existing directory - is there a way round this? Ideally I want /var/www/html to be stored in a separate volume and for any existing files within the image to be stored there too.
This is more a problem of visibility: if you mount an empty volume at a specific path, you won't be able to see what was placed there in the container image.
From your question I assume that you want to be able to roll out updates by means of a new container image, but at the same time retain variable data that your application created in the same directory.
You could achieve this with the following method:
Use an init container with the same image and mount your persistent volume at a different path, for example /data.
As the command for the init container, copy the contents of /var/www/html to /data.
In the regular container use the mount you already have; it will contain your variable data plus the updated data from the init container.
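A minimal sketch of that init container, reusing the image, volume, and subPath names from the question and assuming the image ships a shell (note that the copy runs on every pod start, so it will also refresh files already present on the volume):
initContainers:
  - name: seed-html
    image: my/toolkit-app:working
    # copy the image's /var/www/html into the volume's html subPath
    command: ["sh", "-c", "cp -a /var/www/html/. /data/"]
    volumeMounts:
      - name: toolkit-volume
        mountPath: /data
        subPath: html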

How to cp data from one container to another using kubernetes

Say we have a simple deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ikg-api-demo
  name: ikg-api-demo
spec:
  selector:
    matchLabels:
      app: ikg-api-demo
  replicas: 3
  template:
    metadata:
      labels:
        app: ikg-api-demo
    spec:
      containers:
        - name: ikg-api-demo
          imagePullPolicy: Always
          image: example.com/main_api:private_key
          ports:
            - containerPort: 80
The problem is that this image/container depends on another image/container: it needs to cp data from the other image, or use some shared volume.
How can I tell Kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?
It looks like this article explains how,
but it's not 100% clear how it works. It looks like you create a shared volume and launch the two containers using that shared volume?
So, according to that link, I added this to my deployment.yml:
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/nltk_data:latest
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80
My primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.
So I assume what I need to do is mount it at some other location, and then make the ENTRYPOINT for the source-data container:
ENTRYPOINT ["cp", "-r", "/nltk_data_source", "/nltk_data"]
so that it writes the data to the shared volume once the container is launched.
So I have two questions:
How to run one container and finish a job, before another container starts using kubernetes?
How to write to a shared volume without having that shared volume overwrite what's in your image? In other words, if I have /xyz in the image/container, I don't want to have to copy /xyz to /shared_volume_mount_location if I don't have to.
How to run one container and finish a job before another container starts, using Kubernetes?
Use initContainers - I updated your deployment.yml, assuming example.com/nltk_data:latest is your data image.
How to write to a shared volume without having that shared volume overwrite what's in your image?
Since you know what is in your image, you need to select an appropriate mount path. I would use /mnt/nltk_data.
Updated deployment.yml with the init container:
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: init-ikg-api-demo
      imagePullPolicy: Always
      # You can use command, if you don't want to change the ENTRYPOINT
      command: ['sh', '-c', 'cp -r /nltk_data_source /mnt/nltk_data']
      volumeMounts:
        - name: shared-data
          mountPath: /mnt/nltk_data
      image: example.com/nltk_data:latest
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
        - name: secret-volume
          secret:
            secretName: string-data-secret
      containers:
        - name: k8s-webapp-test
          image: dockerstore/k8s-webapp-test:1.0.4
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret-volume
              mountPath: "/secrets"
              readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, which is located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the file as readOnly: false, I cannot write to it. How can I modify the file?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to the desired location, where you might be able to modify it.
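A rough sketch of that idea; the writable-secrets volume name and paths are placeholders, and the busybox image assumes a Linux node (on the Windows nodes from the question you would use a Windows-based image and copy command instead):
initContainers:
  - name: copy-secret
    image: busybox
    # copy the read-only Secret file into a writable emptyDir volume
    command: ["sh", "-c", "cp /mnt/secret/config.yaml /writable-secrets/"]
    volumeMounts:
      - name: secret-volume
        mountPath: /mnt/secret
        readOnly: true
      - name: writable-secrets
        mountPath: /writable-secrets
volumes:
  - name: writable-secrets
    emptyDir: {}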
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created.
lifecycle:
  postStart:
    exec:
      command:
        - "/bin/sh"
        - "-c"
        - >
          cp -r /secrets ~/secrets;
You can create Secrets from within a pod, but it seems you need to use the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
        - name: kio-ingester
          image: busybox
          volumeMounts:
            - name: test-volume
              mountPath: /testing
          imagePullPolicy: IfNotPresent
          command: ["ls"]
          args: ["-l", "/testing/hello.txt"]
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /tmp
            # this field is optional
            # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed when the volume is mounted.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: config-data
      image: busybox
      # shell redirection needs a shell; a bare exec array would pass ">" as a literal argument
      command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      hostPath: {}   # hostPath requires a path on the node, e.g. path: /tmp
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519