How to copy all data from an emptyDir volume to my local machine?

I have a container with an emptyDir volume:
volumes:
- name: someName
  emptyDir: {}
I would like to copy all data to my machine using kubectl cp.
I do not know where the someName volume is located. How can I find out and how can I copy the data from the volume to my local machine?

You have to check where the volume is mounted in your pod. Look in the containers section for a volumeMount with the name someName, e.g.:
containers:
- volumeMounts:
  - name: someName
    mountPath: "/mnt/path"
So you know that the emptyDir is mounted at the given mountPath.
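If you do not have the manifest at hand, you can also read the mounts straight from the API server, e.g. (pod and namespace names are placeholders):
kubectl get pod my-pod -n my-namespace -o jsonpath='{.spec.containers[*].volumeMounts}'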
Afterwards you can copy the files via
kubectl cp my-namespace/my-pod:/mnt/path /tmp/local/path
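If the pod runs more than one container, kubectl cp may not pick the container you want by default; you can select it explicitly with -c (the container name is a placeholder):
kubectl cp my-namespace/my-pod:/mnt/path /tmp/local/path -c my-container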

Related

How to mount a volume without using docker

In order to mount a directory to a container I used bind mounts (https://docs.docker.com/storage/bind-mounts/).
Now I'm trying to find a way to replace the docker run -v command.
If you are using Kubernetes, as your tag suggests, you can mount a volume as a hostPath.
In Pod spec:
volumeMounts:
- name: config
  mountPath: <PATH IN CONTAINER>
volumes:
- name: config
  hostPath:
    path: <YOUR LOCAL DIR PATH>
Check out https://kubernetes.io/docs/concepts/storage/volumes/ for more details.
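Putting it together, a minimal Pod sketch could look like this (image, names and paths are placeholders; note that hostPath mounts a directory of the node the pod runs on, which is your local machine only on a single-node setup such as minikube or Docker Desktop):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/config # path in the container
  volumes:
  - name: config
    hostPath:
      path: /data/config # directory on the node
      type: Directory # fail if the directory does not exist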

Kubernetes copy image data to volume mounts

I need to share a directory between two containers, myapp and monitoring. To achieve this I created an emptyDir: {} volume and added a volumeMount for it to both containers.
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: myapp
    volumeMounts:
    - name: shared-data
      mountPath: /etc/myapp/
  - name: monitoring
    volumeMounts:
    - name: shared-data
      mountPath: /var/read
This works fine, as the data I write to the shared-data directory is visible in both containers. However, the config file that the image creates at /etc/myapp/myapp.config is hidden, since the shared-data volume is mounted over the /etc/myapp path (overlap).
How can I make the container mount the volume at /etc/myapp first and then have the Docker image place the myapp.config file under its default path /etc/myapp, so that the file ends up on the mounted volume and becomes accessible to the monitoring container under /var/read?
Summary: let the monitoring container read the /etc/myapp/myapp.config file sitting in the myapp container.
Can anyone advise, please?
Can you mount shared-data at /var/read in an init container and copy the config file from /etc/myapp/myapp.config to /var/read?
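A minimal sketch of that init-container idea (the init container itself and its image are assumptions; it presumes the myapp image ships the config at /etc/myapp/myapp.config):
initContainers:
- name: copy-config # hypothetical init container
  image: myapp # assumed image that contains /etc/myapp/myapp.config
  command: ["/bin/sh", "-c"]
  args: ["cp /etc/myapp/myapp.config /var/read/"]
  volumeMounts:
  - name: shared-data
    mountPath: /var/read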
Consider using ConfigMaps with SubPaths.
A ConfigMap is an API object used to store non-confidential data in
key-value pairs. Pods can consume ConfigMaps as environment variables,
command-line arguments, or as configuration files in a volume.
Sometimes, it is useful to share one volume for multiple uses in a
single pod. The volumeMounts.subPath property specifies a sub-path
inside the referenced volume instead of its root.
ConfigMaps can be used as volumes. The volumeMounts inside the template.spec are the same as for any other volume. However, the volumes section is different: instead of specifying a persistentVolumeClaim or another volume type, you reference the configMap by name. Then you can add the subPath property, which would look something like this:
volumeMounts:
- name: shared-data
  mountPath: /etc/myapp/myapp.config # mount just the file so the rest of /etc/myapp stays intact
  subPath: myapp.config
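The matching volumes entry then references the ConfigMap by name; a sketch (the ConfigMap name myapp-config is an assumption):
volumes:
- name: shared-data
  configMap:
    name: myapp-config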
Here are the resources that would show you how to set it up:
Configure a Pod to Use a ConfigMap: official docs
Using ConfigMap SubPaths to Mount Files: step by step guide
Mount a file in your Pod using a ConfigMap: supplement

What is the best way to exec into another container and access its directory?

I have a container running inside a pod and I want to be able to monitor its content every week. I want to write a Kube cronjob for it. Is there a best way to do this?
At the moment I am doing this by running a script in my local machine that does kubectl exec my-container and monitors the content of the directory in that container.
kubectl exec my-container sounds perfectly fine to me. You might want to look at this if you want to run kubectl in a pod (Kubernetes CronJob).
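A rough sketch of such a CronJob, assuming an image that ships kubectl (bitnami/kubectl here) and a service account with RBAC permission for pods/exec (all names are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-monitor
spec:
  schedule: "0 0 * * 0" # once a week
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: monitor-sa # needs RBAC access to pods/exec
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command: ["/bin/sh", "-c"]
            args:
            - kubectl exec my-pod -c my-container -- ls -la /some/dir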
There are other ways, but depending on what you are trying to do in the long term they might be overkill. For example:
You can set up a Fluentd or tail/grep sidecar (or ls, if you are using a binary file?) to send the content or part of the content of that file to an Elasticsearch cluster.
You can set up Prometheus in Kubernetes to scrape metrics on the pod mounted filesystems. You will probably have to use a custom exporter in the pod or something else that exports files in mount points in the pod. This is a similar example.
You can run your script in another sidecar container of your pod:
- Define an emptyDir volume.
- Mount this volume as your content directory.
- Also mount this volume in the sidecar, so that it can access and monitor the content.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: monitor-by-sidecar
spec:
  restartPolicy: Never
  volumes: # empty directory volume
  - name: shared-data
    emptyDir: {}
  containers:
  - name: container-which-produce-content # main application container; generates content, here in the /usr/share/nginx/html directory
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
    command: ["/bin/bash", "-c"]
    args:
    - while true;
      do
        echo "hello world";
        echo "----------------" > /usr/share/nginx/html/index.html;
        cat /usr/share/nginx/html/index.html;
        sleep 10;
      done
  - name: container-which-run-script-to-monitor # sidecar; mounts the main application's volume at /pod-data and runs the monitoring scripts
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c"]
    args:
    - while true;
      do
        echo "hello";
        sleep 10;
        ls -la /pod-data/;
        cat /pod-data/index.html;
      done
Example Description
The first container (named container-which-produce-content) is the main application, which mounts an emptyDir volume at /usr/share/nginx/html. In this directory the main application generates its data.
The second container (named container-which-run-script-to-monitor) mounts the same emptyDir volume (named shared-data, which the main application mounts at /usr/share/nginx/html) at /pod-data. This /pod-data directory therefore contains all the data the main application generated in /usr/share/nginx/html, and you can run your monitoring scripts against it.
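To try the example, apply the manifest and follow the sidecar's output (standard kubectl commands; the file name is a placeholder):
kubectl apply -f monitor-by-sidecar.yaml
kubectl logs monitor-by-sidecar -c container-which-run-script-to-monitor -f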

Can I share a single file between containers in a pod?

My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /var/run # or /var/run/utmp, which crashes
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /tmp/main-run
  volumes:
  - name: main-run
    emptyDir: {}
If you can move the file to be shared into an empty subdirectory, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp directory with an emptyDir.
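In the manifest above that would mean mounting the volume one level deeper, assuming the application can be pointed at the new file location:
volumeMounts:
- name: main-run
  mountPath: /var/run/utmp # mount only the subdirectory, not all of /var/run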

emptyDir in minikube

Very simple question: where is the emptyDir located in my minikube VM? Since the emptyDir volume is pod-dependent, it should exist on the VM, otherwise it would die together with an exiting container. When I do minikube ssh I cannot locate the volume. I need to inspect it and see if my containers are behaving how I want them to, i.e. copying some files to the volume mounted on them. Trying find / -type d -name cached results in many permission-denied errors, and the volume is not in the remaining directories. My YAML has the following part:
...
volumes:
- name: cached
  emptyDir: {}
and also commands in a container where the container copies some files to the volume:
containers:
- name: plum
  image: plumsempy/plum
  command: ["/bin/sh", "-c"]
  args: ["mkdir -p /plum/cached && cp /plum/prune/cert.crt /plum/cached/"]
  volumeMounts:
  - mountPath: /plum/cached
    name: cached
The container naturally exits after doing its job.
A better way to see if your containers are behaving is to log into the container using kubectl exec.
That said: the emptyDir should be located in /var/lib/kubelet/pods/{podid}/volumes/kubernetes.io~empty-dir/ on the node where your pod is running.
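For example, to inspect the cached volume from the question (the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.metadata.uid}' # find the pod UID
minikube ssh
sudo ls -la /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/cached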