Kubernetes. Is it possible to mount volume to path containing pod id? - kubernetes

I want to define a PVC and create a volume in order to expose some internal files from the container to the outside (I am using Helm chart definitions). Is there any way to use the pod ID in the mountPath that I define in deployment.yaml?
In the end I want to get this folder structure on my node:
/dockerdata-nfs//path
volumeMounts:
- name: volumeName
  mountPath: /abc/path
volumes:
- name: volumeName
  hostPath:
    path: /dockerdata-nfs/podID/

You can create a mountPath based on the pod UID using subPathExpr. YAML below:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    image: busybox
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(UID)
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
This feature was introduced in Kubernetes version 1.14+.
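As a quick check (a sketch, assuming the busybox container is kept running, e.g. by giving it a sleep command), anything written to /logs inside the container should appear on the node under the pod's UID directory:
kubectl exec pod1 -- sh -c 'echo hello > /logs/test.txt'
kubectl get pod pod1 -o 'jsonpath={.metadata.uid}'
# then on the node: cat /var/log/pods/<pod-uid>/test.txt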

A Pod gets a new UID every time it is recreated, so why would you want to hard-code this value?
Pods are considered to be relatively ephemeral (rather than durable) entities. As discussed in pod lifecycle, Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion, after a timeout period. A given Pod (as defined by a UID) is not “rescheduled” to a new node; instead, it can be replaced by an identical Pod, with even the same name if desired, but with a new UID (see replication controller for more details).

Related

Kubernetes: Restart pods when config map values change

I have a pod with the following specs
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the configmap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
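For example, a minimal sketch of wiring Reloader up to the pod above (the Deployment name is hypothetical; the reloader.stakater.com/auto annotation is the one described in the Reloader README):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-watcher                  # hypothetical name
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader rolls the Deployment when referenced ConfigMaps/Secrets change
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-watcher
  template:
    metadata:
      labels:
        app: busybox-watcher
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: watch-namespace-config
              key: WATCH_NAMESPACE
Note that Reloader restarts Deployments/StatefulSets/DaemonSets, not bare Pods, so the busybox Pod from the question would need to be managed by one of those controllers.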
As you correctly mentioned, once you update a ConfigMap or Secret the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomize.
Kustomize generates a unique name every time you update the ConfigMap/Secret, appending a hash of the contents, for example: ConfigMap-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the changes with the new config map values.
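A minimal kustomization.yaml sketch for the ConfigMap from this question (file names are assumptions; adjust to your layout):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev

resources:
- pod.yaml          # hypothetical manifest that references watch-namespace-config
Kustomize emits the ConfigMap as watch-namespace-config-<hash> and rewrites references in the listed resources, so changing the literal and re-applying rolls the consumers onto the new name.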
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization

Share a volume in KubernetesPodOperator?

I'm using an ffmpeg Docker image from a KubernetesPodOperator() inside Airflow to extract frames from a video.
It works fine, but I am not able to retrieve the frames that get stored: how can I store the frames generated by the Pod directly on my file system (host machine)?
Update:
From https://airflow.apache.org/kubernetes.html# I think I figured out that I need to work on the volume_mount, volume_config and volume parameters, but still no luck.
Error message:
"message":"Not found: \"test-volume\"","field":"spec.containers[0].volumeMounts[0].name"
PV and PVC:
The command kubectl get pv,pvc test-volume gives:
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
persistentvolume/test-volume   10Gi       RWO            Retain           Bound    default/test-volume   manual                  3m

NAME                                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-volume   Bound    test-volume   10Gi       RWO            manual         3m
Code:
volume_mount = VolumeMount('test-volume',
                           mount_path='/',
                           sub_path=None,
                           read_only=False)

volume_config = {
    'persistentVolumeClaim': {
        'claimName': 'test-volume'  # uses the persistentVolumeClaim given in the Kube yaml
    }
}
volume = Volume(name="test-volume", configs=volume_config)

with DAG('test_kubernetes',
         default_args=default_args,
         schedule_interval=schedule_interval,
         ) as dag:

    extract_frames = KubernetesPodOperator(
        namespace='default',
        image="jrottenberg/ffmpeg:3.4-scratch",
        arguments=[
            "-i", "http://www.jell.yfish.us/media/jellyfish-20-mbps-hd-hevc-10bit.mkv",
            "test_%04d.jpg"
        ],
        name="extract-frames",
        task_id="extract_frames",
        volume=[volume],
        volume_mounts=[volume_mount],
        get_logs=True
    )
Here's some speculation as to what may be wrong:
(Where your error is most likely coming from:) KubernetesPodOperator expects the parameter "volumes", not "volume".
In general, it's bad practice to mount onto "/", since you will hide everything that ships with the image you're running; you should probably change "mount_path" in your VolumeMount object to something else, like "/stored_frames".
You should create a test pod to verify your k8s objects (volumes, pod, configmap, secrets, etc.) before wrapping that pod creation in the DAG with KubernetesPodOperator. Based on your code above, it could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: "extract-frames-pod"
  namespace: "default"
spec:
  containers:
  - name: "extract-frames"
    image: "jrottenberg/ffmpeg:3.4-scratch"
    args: ["-i", "http://www.jell.yfish.us/media/jellyfish-20-mbps-hd-hevc-10bit.mkv", "test_%04d.jpg"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: "test-volume"
      # do not use "/" for mountPath.
      mountPath: "/images"
  restartPolicy: Never
  volumes:
  - name: "test-volume"
    persistentVolumeClaim:
      claimName: "test-volume"
  serviceAccountName: default
I expect you will get the same error that you had: "message":"Not found: \"test-volume\"","field":"spec.containers[0].volumeMounts[0].name"
which I think points to an issue with your PersistentVolume manifest file.
Did you set the path for test-volume? Something like:
path: /test-volume
And does that path exist on the target volume? If not, create that directory/folder; that might solve your problem.
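For reference, a minimal sketch of a hostPath-backed PersistentVolume and matching claim consistent with the kubectl output above (the /test-volume host path is an assumption; adjust to your cluster):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /test-volume        # must exist on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume
  namespace: default
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi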

Creating a pod/container in kubernetes - how to copy a bunch of files into it

Sorry if this is a noob question:
I am creating a pod in a kubernetes cluster using a pod definition yaml file.
This pod defines just one container. I'd like to ... copy a few files to a particular directory in the container.
Sort of like in docker-compose:
volumes:
  - ./testHelpers/certs:/var/private/ssl/certs
Is it possible to do that at this point (at the point of defining the pod)?
If not, what could my alternatives be?
PS - I understand that the sample from docker-compose is very different, since it maps a local directory to a directory in the container.
It's better to use volumes in the pod definition, or to initialize the pod before the application container runs (i.e. with an init container); a sketch of that is shown below.
Apart from this, you can also use a ConfigMap to store certs and other config files you need, and then access them in the container as volumes.
More details here
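A minimal sketch of the init-container approach (names, image and the copy step are hypothetical; in practice the init container would fetch or generate your certs):
apiVersion: v1
kind: Pod
metadata:
  name: copy-files-pod                  # hypothetical name
spec:
  initContainers:
  - name: seed-certs
    image: busybox
    # hypothetical: populate the shared volume before the main container starts
    command: ["sh", "-c", "echo placeholder > /work/example.crt"]
    volumeMounts:
    - name: certs
      mountPath: /work
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: certs
      mountPath: /var/private/ssl/certs
  volumes:
  - name: certs
    emptyDir: {}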
You should create a ConfigMap; you can do it from files or from a directory:
kubectl create configmap my-config --from-file=configuration/
And then mount the ConfigMap as a directory:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
    - name: my-config
      mountPath: /etc/config
  volumes:
  - name: my-config
    configMap:
      name: my-config
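Each key in the ConfigMap (here, each file from the configuration/ directory) shows up as a file under the mount path; if the container is kept running (e.g. with a sleep command) you can verify with:
kubectl exec configmap-pod -- ls /etc/config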

StatefulSet - Get starting pod during volumemount

I have a StatefulSet that starts a MySQL cluster. The only downside at the moment is that for every replica I need to create a PersistentVolume and a PersistentVolumeClaim with a selector that matches the label and pod index.
This means I cannot dynamically add replicas without manual intervention.
For this reason I'm searching for a solution that lets me have only 1 Volume and 1 Claim, where during pod creation the pod knows its own name for the subPath of the mount (an initContainer would be used to check and create the directories on the volume before the application container starts).
So I'm looking for something like:
volumeMounts:
- name: mysql-datadir
  mountPath: /var/lib/mysql
  subPath: "${PODNAME}/datadir"
You can get the pod name from the metadata (the downward API) by setting an env var:
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
But you cannot use env vars in plain volume declarations (as far as I know), so everything else has to be reached via workarounds. One of the workarounds is described here
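That said, on Kubernetes 1.14+ the subPathExpr field shown in the first answer on this page can expand an env var inside the subPath. A minimal sketch adapted to this question (assuming the mysql-datadir volume itself is defined elsewhere in the spec):
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
volumeMounts:
- name: mysql-datadir
  mountPath: /var/lib/mysql
  # expands to <pod-name>/datadir on the shared volume
  subPathExpr: $(MY_POD_NAME)/datadir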

Does Kubernetes mount an emptyDir volume on the host?

Kubernetes features quite a few types of volumes, including emptyDir:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
...
By default, emptyDir volumes are stored on whatever medium is backing the node.
Is the emptyDir actually mounted on the node, and accessible to a container outside the pod, or to the node FS itself?
Yes it is also accessible on the node. It is bind mounted into the container (sort of). The source directories are under /var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME
You can find the location on the host like this:
sudo ls -l /var/lib/kubelet/pods/`kubectl get pod -n mynamespace mypod -o 'jsonpath={.metadata.uid}'`/volumes/kubernetes.io~empty-dir
You can list all emptyDir volumes mounted on the host using this command:
df
To view only the mounts for a specific volume:
df | grep -i cache-volume
where cache-volume is the volume name in your pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
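Tying this back to the first answer: for the pod above, the backing directory on the node can be found with something like (a sketch, assuming the pod runs in the default namespace):
sudo ls -l /var/lib/kubelet/pods/`kubectl get pod test-pd -o 'jsonpath={.metadata.uid}'`/volumes/kubernetes.io~empty-dir/cache-volume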