Kubernetes NFS volume with dynamic path - kubernetes

I am trying to mount my application's logs directory onto NFS, with the node name included in the path dynamically.
No success so far.
I tried the following:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      subPath: /$(NODE_NAME)
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: ip_adress_here
      path: /mnt/events

I think instead of subPath you should use subPathExpr, as mentioned in the documentation.
Use the subPathExpr field to construct subPath directory names from Downward API environment variables. This feature requires the VolumeSubpathEnvExpansion feature gate to be enabled. It is enabled by default starting with Kubernetes 1.15. The subPath and subPathExpr properties are mutually exclusive.
In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
Hope that's it.
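Applied to the original NFS manifest, the change looks like this (a sketch; the NFS server address is a placeholder from the question, and it assumes Kubernetes 1.15+ where subPathExpr is enabled by default):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
      # subPathExpr expands the env var; note there is no leading slash.
      subPathExpr: $(NODE_NAME)
    command: ["/bin/sh"]
    args: ["-c", "sleep 500000"]
  volumes:
  - name: nfs-volume
    nfs:
      server: ip_adress_here
      path: /mnt/events
```

Each pod then mounts /mnt/events/&lt;node-name&gt; at /var/nfs.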

Related

How to mount hostpath on a pod with podname as a subdirectory

I have a use case where I need to mount a hostPath into a pod, with a subdirectory named after the pod. I know something like this can be done with StatefulSets, but due to some constraints I need a hostPath mount in a Deployment, where the root directory contains a subdirectory per pod name.
Can this be achieved?
You can use the Kubernetes Downward API to expose pod fields as environment variables:
- name: pod_name
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
Feel free to refer to my GitHub YAML: https://github.com/harsh4870/OCI-public-logging-uma-agent/blob/main/deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: app
    image: <App image>
    env:
    - name: pod_name
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    volumeMounts:
    - mountPath: /app_logs
      name: app-logs
      subPathExpr: $(pod_name)
  volumes:
  - name: app-logs
    hostPath:
      path: /var/log/app_logs
      type: DirectoryOrCreate
Here is the full article where I used the Downward API for logging: https://medium.com/@harsh.manvar111/oke-logging-with-uma-agent-in-oci-d6f55a8bcc02

Show Pod IP Address using environment variable

I want to display the pod IP address in an nginx pod. Currently I am using an init container to initialize the pod by writing to a volume.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - echo
    - $(POD_IP) >> /work-dir/index.html
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
This should work in theory, but the file redirect doesn't work and the mounted file in the nginx container is blank. There's probably an easier way to do this, but I'm curious why this doesn't work.
Nothing is changed, except how command is passed in the init container. See this for an explanation.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - 'sh'
    - '-c'
    - 'echo $(POD_IP) > /work-dir/index.html'
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
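The underlying behavior can be reproduced locally. When the runtime execs the command directly, `>>` is just another argument to `echo`; only a shell interprets it as a redirection. A minimal shell analogy (in the pod, `$(POD_IP)` is expanded by Kubernetes itself; here a shell variable stands in for it):

```shell
# Without a shell, the redirection operator is a literal argument:
echo "10.1.2.3" ">>" "/work-dir/index.html"
# prints: 10.1.2.3 >> /work-dir/index.html  (nothing is written to the file)

# Wrapping the command in `sh -c` makes the shell perform the redirection:
POD_IP="10.1.2.3" sh -c 'echo "$POD_IP" > /tmp/index.html'
cat /tmp/index.html
# prints: 10.1.2.3
```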

subPathExpr for parallel pods

We have parallel jobs on EKS, and we would like the jobs to write to a hostPath.
We are using subPathExpr with an environment variable, as described in the documentation. However, after the run, the hostPath contains only one folder, probably due to a race condition between the parallel jobs: whichever job got hold of the hostPath first won.
We are on Kubernetes 1.17. Is subPathExpr meant for this use case of allowing parallel jobs to write to the same hostPath? What are other options for allowing parallel jobs to write to a host volume?
apiVersion: batch/v1
kind: Job
metadata:
  name: gatling-job
spec:
  ttlSecondsAfterFinished: 300 # delete after 5 minutes
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: gatling
        image: GATLING_IMAGE_NAME
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: perftest-results
          mountPath: /opt/gatling/results
          subPathExpr: $(POD_NAME)
      volumes:
      - name: perftest-results
        hostPath:
          path: /data/perftest-results
Tested with a simple job template as below; files were created in the respective folders and it worked as expected.
Will investigate the actual project. Closing for now.
apiVersion: batch/v1
kind: Job
metadata:
  name: subpath-jobs
  labels:
    name: subpath-jobs
spec:
  completions: 5
  parallelism: 5
  backoffLimit: 0
  template:
    spec:
      restartPolicy: "Never"
      containers:
      - name: busybox
        image: busybox
        workingDir: /outputs
        command: [ "touch" ]
        args: [ "a_file.txt" ]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: job-output
          mountPath: /outputs
          subPathExpr: $(POD_NAME)
      volumes:
      - name: job-output
        hostPath:
          path: /data/outputs
          type: DirectoryOrCreate
# ls -R /data
/data:
outputs
/data/outputs:
subpath-jobs-6968q subpath-jobs-6zp4x subpath-jobs-nhh96 subpath-jobs-tl8fx subpath-jobs-w2h9f
/data/outputs/subpath-jobs-6968q:
a_file.txt
/data/outputs/subpath-jobs-6zp4x:
a_file.txt
/data/outputs/subpath-jobs-nhh96:
a_file.txt
/data/outputs/subpath-jobs-tl8fx:
a_file.txt
/data/outputs/subpath-jobs-w2h9f:
a_file.txt

How to run consul via kubernetes?

I tried running the pod from the Consul docs (https://www.consul.io/docs/platform/k8s/run.html) with:
kubectl create -f consul-pod.yml
It failed with: containers with unready status: [consul]
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
  - name: example
    image: "consul:latest"
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command:
    - "/bin/sh"
    - "-ec"
    - |
      export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
      consul kv put hello world
  restartPolicy: Never

Is there a way to get UID in pod spec

What I want to do is provide pods with a unified log store, currently persisted to a hostPath. I also want this path to include the pod's UID, so I can easily locate the logs after the pod is destroyed.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    volumeMounts:
    - mountPath: /logs
      name: log-dir
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/{metadata.uid}
      type: DirectoryOrCreate
metadata.uid is what I want to fill in, but I do not know how to do it.
For logging, it's better to use another strategy.
I suggest you look at this link.
Your logs are best managed if they are streamed to stdout and collected by an agent.
Don't persist your logs on the filesystem; gather them with an agent and aggregate them for further analysis.
Fluentd is very popular and deserves to be known.
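As a sketch of that approach (the pod name here is illustrative), the same server can write to stdout/stderr and rely on `kubectl logs` or a node-level agent such as Fluentd, with no hostPath at all:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-stdout-logging
spec:
  containers:
  - image: python:2.7
    name: web-server
    # SimpleHTTPServer logs requests to stderr; the container runtime
    # captures it, so `kubectl logs pod-with-stdout-logging` shows them.
    command: ["python", "-m", "SimpleHTTPServer"]
```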
After searching the Kubernetes docs, I finally found a solution for my specific problem. This feature is exactly what I wanted.
So I can create the pod with:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    volumeMounts:
    - mountPath: /logs
      name: log-dir
      subPathExpr: $(POD_UID)
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/
      type: DirectoryOrCreate