Create an empty file inside a volume in a Kubernetes Pod

I have a legacy app which keeps checking an empty file inside a directory and performs a certain action if the file's timestamp changes.
I am migrating this app to Kubernetes, so I want to create an empty file inside the pod. I tried subPath as shown below, but it doesn't create any file.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume-name
      mountPath: '/volume-name-path'
      subPath: emptyFile
  volumes:
  - name: volume-name
    emptyDir: {}
kubectl describe pod shows:
Containers:
  demo:
    Container ID:   containerd://0b824265e96d75c5f77918326195d6029e22d17478ac54329deb47866bf8192d
    Image:          alpine
    Image ID:       docker.io/library/alpine@sha256:08d6ca16c60fe7490c03d10dc339d9fd8ea67c6466dea8d558526b1330a85930
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Wed, 10 Feb 2021 12:23:43 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4gp4x (ro)
      /volume-name-path from volume-name (rw,path="emptyFile")
ls on the volume also shows nothing.
kubectl exec -it demo-pod -c demo -- ls /volume-name-path
Any suggestions?
PS: I don't want to use a ConfigMap; I simply want to create an empty file.
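For reference, once such a file exists, the legacy "bump the timestamp to trigger the app" workflow maps to a simple kubectl exec; a sketch, reusing the pod name and path from the manifest above:

# update the file's mtime from outside the pod so the app notices a change
# (assumes the empty file has already been created at this path)
kubectl exec demo-pod -c demo -- touch /volume-name-path/emptyFile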

If the objective is to create an empty file when the Pod starts, then the easiest way is to either use the entrypoint of the Docker image or an init container.
With an initContainer, you could go with something like the following (or with a more complex init image which you build, executing a whole bash script or something similar):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  initContainers:
  - name: create-empty-file
    image: alpine
    command: ["touch", "/path/to/the/directory/empty_file"]
    volumeMounts:
    - name: volume-name
      mountPath: /path/to/the/directory
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volume-name
      mountPath: /path/to/the/directory
  volumes:
  - name: volume-name
    emptyDir: {}
Basically, the init container gets executed first, runs its command and, if it is successful, terminates, after which the main container starts running. They share the same volumes (and they can also mount them at different paths), so in the example the init container mounts the emptyDir volume, creates an empty file and then completes. When the main container starts, the file is already there.
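A quick way to verify this, assuming the manifest above is saved as demo-pod.yaml, is to wait for the Pod to become Ready and then list the mounted directory from the main container:

kubectl apply -f demo-pod.yaml
kubectl wait --for=condition=Ready pod/demo-pod
# the empty file created by the init container should be listed here
kubectl exec demo-pod -c demo -- ls -l /path/to/the/directory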
Regarding your legacy application which is being ported to Kubernetes:
If you have control of the Dockerfile, you could simply change it to create an empty file at the path where you expect it to be, so that when the app starts, the file is already there, empty, from the beginning; just as you add the application to the image, you can also add other files.
For more info on init containers, please check the documentation: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

I think you may be interested in Container Lifecycle Hooks.
In this case, the PostStart hook may help create an empty file as soon as the container is started:
This hook is executed immediately after a container is created.
In the example below, I will show you how you can use the PostStart hook to create an empty file named file-test.
First I created a simple manifest file:
# demo-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: demo-pod
  name: demo-pod
spec:
  containers:
  - image: alpine
    name: demo-pod
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["touch", "/mnt/file-test"]
After creating the Pod, we can check whether the demo-pod container has the empty file-test file:
$ kubectl apply -f demo-pod.yml
pod/demo-pod created
$ kubectl exec -it demo-pod -- sh
/ # ls -l /mnt/file-test
-rw-r--r-- 1 root root 0 Feb 11 09:08 /mnt/file-test
/ # cat /mnt/file-test
/ #
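If the file should live on the emptyDir volume from the original manifest rather than in the container filesystem, the same hook can simply target the mount path (the subPath isn't needed in this variant). A sketch, reusing the volume and path names from the question:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          # create the empty file on the mounted volume right after the container starts
          command: ["touch", "/volume-name-path/emptyFile"]
    volumeMounts:
    - name: volume-name
      mountPath: /volume-name-path
  volumes:
  - name: volume-name
    emptyDir: {}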

Related

kubectl copy logs from pod when terminating

We are trying to get the logs of pods after multiple restarts, but we don't want to use any external solution like EFK.
I tried the config below but it's not working. Does the command below run on the pod or will it run at node level?
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "kubectl logs appworks-0 > /container-stoped.txt"]
Does the command below run on the pod or will it run at node level?
It will run at the Pod level, not at the node level.
You can use a hostPath volume in the Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: alpine
    name: test-container
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /host
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /
      type: Directory
hostPath mounts a directory from the node directly, so you can save the logs there at node level. If you don't want this solution, you can also add your lifecycle hook solution; however, when you can write the app logs directly to the host, there is no need for an extra lifecycle hook.
Note: be aware that if your node goes down, you will lose any logs stored in hostPath or emptyDir volumes.
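If you still want the hook, a preStop that copies the application's log files onto the hostPath mount could look roughly like this (the /var/log/app path is an assumption; adjust it to wherever your app writes its logs):

lifecycle:
  preStop:
    exec:
      # copy whatever the app wrote to the hostPath mount before the container stops
      command: ["/bin/sh", "-c", "cp /var/log/app/*.log /host/ || true"]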

Is there any way, in K8s, to source an env file dynamically generated during an initContainer?

I'm planning to have an initContainer that will handle some crypto stuff and then generate a source file to be sourced by a container.
The source file will be dynamically generated and the VARS will be dynamic, which means I will never know the VAR names or their contents. This also means I cannot use k8s env.
The file name will always be the same.
I know I can change the Dockerfile of my applications and include an entrypoint that executes a script to source the file before running the workload, but, still, is this the only option?
Is there no way to achieve this in k8s?
My container can mount the dir where the file was created by the initContainer, but can't it somehow source the file?
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
  namespace: default
spec:
  nodeSelector:
    env: sm
  initContainers:
  - name: genenvfile
    image: busybox
    imagePullPolicy: Always
    command: ["/bin/sh"]
    # just an example, there will be a software here that will translate some encrypted stuff into VARS and then append them to a file
    args: ["-c", "echo MYVAR=func > /tmp/sm/filetobesourced"]
    volumeMounts:
    - mountPath: /tmp/sm
      name: tmpdir
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    imagePullPolicy: IfNotPresent
    name: mypod-cm
    tty: true
    volumeMounts:
    - mountPath: /tmp/sm
      readOnly: true
      name: tmpdir
  volumes:
  - name: tmpdir
    emptyDir:
      medium: Memory
The step-by-step that I'm thinking of would be:
1. the initContainer mounts /tmp/sm and generates a file called /tmp/sm/filetobesourced
2. the container mounts /tmp/sm
3. the container sources /tmp/sm/filetobesourced
4. the workload runs using all the vars sourced in the previous step
Am I missing something to get the third step done?
Change the command and/or args on the main container to be more like bash -c 'source /tmp/sm/filetobesourced && exec whatevertheoriginalcommandwas'.
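Applied to the Pod spec above, that could look something like the following sketch (sleep infinity is only a placeholder for the original workload command):

containers:
- name: mypod-cm
  image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
  imagePullPolicy: IfNotPresent
  command: ["/bin/bash", "-c"]
  # source the file generated by the init container, then exec the real workload
  args: ["source /tmp/sm/filetobesourced && exec sleep infinity"]
  volumeMounts:
  - mountPath: /tmp/sm
    readOnly: true
    name: tmpdir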

Can we create a Pod from two existing YAMLs, each having their own container?

My project has 2 YAMLs, and each of them creates a Pod.
Can we create a single Pod with 2 containers from these YAMLs, without merging them?
Thanks
Yes, you can run multiple containers inside a single pod. In a single YAML manifest, you can add both container specs and run it.
However, without merging the YAMLs you cannot run multiple containers inside one pod.
A single-file example:
apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done
For more details you can also refer to the official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
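A minimal way to check that both containers came up in the same Pod, assuming the manifest above is saved as mc1.yaml:

kubectl apply -f mc1.yaml
# the READY column should eventually show 2/2
kubectl get pod mc1
# the nginx container serves the file the debian container keeps appending to
kubectl exec mc1 -c 1st -- cat /usr/share/nginx/html/index.html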
If you don't want to merge the container definitions into the same file and the same containers block, then no, you can't.

Why does the path not get mounted?

I've created a manifest file that looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  volumes:
  - name: "kuard-data"
    hostPath:
      path: "/home/developer/kubernetes/exercises"
  containers:
  - image: gcr.io/kuar-demo/kuard-amd64:1
    name: kuard
    volumeMounts:
    - mountPath: "/data"
      name: "kuard-data"
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
As you can see, the hostpath is:
path: "/home/developer/kubernetes/exercises"
and the mountPath is:
mountPath: "/data"
I've created a hello.txt file in the folder /home/developer/kubernetes/exercises, and when I enter the pod via kubectl exec -it kuard ash I cannot find the file hello.txt.
Where is the file?
kind uses Docker containers to simulate Kubernetes nodes. So when you create files on your host (your Ubuntu machine), the containers will not automatically have access to them.
(This gets even more complicated when using macOS or Windows, where Docker runs in a separate virtual machine...)
I assume that there are some shared folders visible inside the kind Docker nodes, but I could not find this documented.
You can verify the filesystem content of the Docker node from inside the container using docker exec -it kind-control-plane /bin/sh and then work with the usual tools.
If you need to make content from your development machine available you might want to have a look at ksync: https://github.com/vapor-ware/ksync
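Another option, if recreating the cluster is acceptable, is kind's extraMounts setting, which mounts a host directory into the node container at cluster creation time, so that a hostPath volume like the one in your manifest can find it. A sketch, reusing the path from the question:

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # make the host directory visible inside the kind node at the same path
  - hostPath: /home/developer/kubernetes/exercises
    containerPath: /home/developer/kubernetes/exercises

Create the cluster with kind create cluster --config kind-config.yaml and the hostPath volume should then resolve inside the node.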

Replication Controller replica ID in an environment variable?

I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer if I didn't have to add the special case into the script running inside the container, due to compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: INSTANCE_ID
          value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, and it can be applied to your case as well. It is all about the content of the initContainers spec.
You can use an init container to get the hash from the container's hostname and put it into a file on a shared volume that is mounted into the main container.
In this example the init container puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
  - name: init-test
    image: ubuntu
    args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: init-init
    image: busybox
    command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
Create the pod using the following command:
kubectl create -f init.yaml
Check that Pod initialization is done and the Pod is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
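Since you only want the random suffix (nffj1 rather than multiverse-nffj1), the init container's command can strip everything up to the last dash before writing the file. A sketch of the modified command, assuming pod names always have the form <name>-<suffix>:

# writes e.g. INSTANCE_ID=nffj1 instead of the full hostname
command: ["sh", "-c", "echo -n INSTANCE_ID=$(hostname | sed 's/.*-//') > /data/config"]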