Unable to access volume content using initContainers - kubernetes

I have a simple image (mdw:1.0.0) with some content in it:
FROM alpine:3.9
COPY /role /mdw
WORKDIR /mdw
I was expecting that my container 'nginx' would see the content of the /mdw folder, but there is no file.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: install
    image: mdw:1.0.0
    imagePullPolicy: Never
    volumeMounts:
    - name: workdir
      mountPath: "/mdw"
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /mdw
    command: ["ls", "-l", "/mdw"]
  volumes:
  - name: workdir
    emptyDir: {}
Do you know what the reason is and how to fix it?
Thank you very much.

When a volume is mounted over a directory that already exists in the image, the existing content is hidden. This is intentional, and there is no real fix.
The only way around it is to populate the directory after the mount is done.
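For example, a minimal sketch of that idea against the manifest in the question (assuming the image's content lives in /mdw): mount the emptyDir at a different path in the init container, so the image's own /mdw stays visible, then copy across:

initContainers:
- name: install
  image: mdw:1.0.0
  imagePullPolicy: Never
  # /mdw still shows the image content because the volume is mounted elsewhere
  command: ["sh", "-c", "cp -r /mdw/. /work-dir/"]
  volumeMounts:
  - name: workdir
    mountPath: /work-dir

The main container can then mount the same workdir volume at /mdw and will see the copied files.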

Your init container doesn't do anything: the Dockerfile doesn't have a CMD and the Kubernetes deployment spec doesn't set a command: either. It starts and immediately exits. (The base Linux distribution images generally have a default command to launch an interactive shell, but absent a tty this will also immediately exit.)
Meanwhile, your Kubernetes setup is also mounting an empty directory over the only content you've put into the image, which prevents the init container from having an effect.
You can build a custom nginx image that directly copies the content in:
FROM nginx
COPY /role /usr/share/nginx/html
Don't use initContainers:, and use that image as the main containers: image.
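A sketch of the resulting Pod, assuming the image above is tagged nginx-with-content:1.0 (a hypothetical name; imagePullPolicy: Never for a locally built image, as in the question):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx-with-content:1.0
    imagePullPolicy: Never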
There is a Docker-specific feature, using Docker named volumes, that can populate a named volume on first use; you're probably thinking of it. It comes with a couple of important caveats: it only takes effect the very first time you run a container, it ignores updates to the image, and it doesn't work with bind mounts. It is a plain-Docker feature: Kubernetes will never auto-populate a volume for you.
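For illustration, the plain-Docker behavior being described looks like this (the volume name mdw-data is arbitrary):

# First run: Docker copies the image's /mdw content into the new named volume.
docker run --rm -v mdw-data:/mdw mdw:1.0.0 ls -l /mdw
# Later runs reuse the existing volume content, even if the image is updated.

Nothing equivalent happens with a Kubernetes emptyDir.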

Related

Copy file inside Kubernetes pod from another container

I need to copy a file inside my pod during the time of creation. I don't want to use ConfigMap and Secrets. I am trying to create a volumeMounts and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
  - name: init-myservice
    image: bitnami/kubectl
    command: ['kubectl','cp','./test.json','init-myservice:./data']
    volumeMounts:
    - name: my-storage
      mountPath: data
  - name: init-myservices
    image: nginx
    volumeMounts:
    - name: my-storage
      mountPath: data
  volumes:
  - name: my-storage
    emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Your local machine is the one where the file exists, and you want to copy the file from it to another machine. That is what kubectl cp does. But here you are running kubectl cp inside the pod itself, where ./test.json does not exist, so there is nothing to copy.
One thing you can do is build your own Docker image for the init container, copying the file you want to store into it before the image is built. The init container can then copy that file into the shared volume where you want it stored.
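A minimal sketch of that approach (the image tag file-init:1.0 and the paths are assumptions for illustration):

FROM busybox
COPY test.json /test.json

Then use it as a real init container that copies the baked-in file into the shared volume:

initContainers:
- name: copy-file
  image: file-init:1.0
  command: ['sh', '-c', 'cp /test.json /data/']
  volumeMounts:
  - name: my-storage
    mountPath: /data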
I do agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside of the container. I also think there are some resources that could be added to show you how to tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://info.cern.ch
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod you will need to have the required permissions to access the Kubernetes API. You can do that by using a serviceAccount with appropriate permissions; a minimal Role sketch follows these points. More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
Your bitnami/kubectl container runs into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that, the Pod reports status Completed and is restarted, resulting in the aforementioned CrashLoopBackOff. To avoid that, you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
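As a minimal sketch of the permissions point: kubectl cp works by running tar through pods/exec, so the ServiceAccount would need a Role along these lines (all names and the namespace are placeholders), bound to it with a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]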
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
It would also be important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.
I want to use a shared volume to place the javascript bundle after it's built.
So far, I have the following deployment file (with irrelevant pieces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
To the best of my ability, I have followed these guides:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
One other thing to point out is that I'm trying to run this locally at the moment using minikube.
EDIT: The Dockerfile I used to build this image is:
FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
I realize that I do not need to run the build when the image actually runs, but my next goal is to insert pod instance information as environment variables, and with JavaScript I unfortunately can only build once that information is available to me.
Problem
The logs from the personal-site container reveal:
- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
I'm not sure why the build is trying to remove /dist, but I also have a feeling that this is irrelevant. I could be wrong?
I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".
Question
What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?
The best way is to customize your image's entrypoint as follows:
Once the /var/app/dist folder has been built, copy (or move) it to another, empty path (e.g. /opt/dist):
cp -r /var/app/dist/* /opt/dist
Pay attention: this step must be done in the ENTRYPOINT script, not in a RUN layer, because the volume is only mounted at container start, after the image layers have been built.
Now use /opt/dist instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /opt/dist # <--- make it consistent with the image's entrypoint logic
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
Good luck!
If it's not clear how to customize the entrypoint, share the image's entrypoint with us and we will implement it.
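In the meantime, a minimal sketch of such an entrypoint, based on the Dockerfile in the question (the script name docker-entrypoint.sh is an assumption; it would be COPYed into the image and set as the ENTRYPOINT):

#!/bin/sh
# Hypothetical docker-entrypoint.sh: runs at container start, after the
# emptyDir volume is already mounted at /opt/dist.
set -e
npm run build                      # writes the bundle to /var/app/dist
cp -r /var/app/dist/* /opt/dist/   # publish it into the shared volume
exec "$@"                          # hand off to the image's CMD, if any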

How to place configuration files inside pods?

For example I want to place an application configuration file inside:
/opt/webserver/my_application/config/my_config_file.xml
I create a ConfigMap from file and then place it in a volume like:
/opt/persistentData/
The idea is to afterwards run a script that does something like:
cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/
But it could be any startup.sh script that does the needed actions.
How do I run this command/script? (during Pod initialization before Tomcat startup).
I would first try whether this works:
spec:
  containers:
  - volumeMounts:
    - mountPath: /opt/webserver/my_application/config/my_config_file.xml
      name: config
      subPath: my_config_file.xml
  volumes:
  - configMap:
      items:
      - key: KEY_OF_THE_CONFIG
        path: my_config_file.xml
      name: YOUR_CONFIGMAP_NAME
    name: config
If not, add an init container to copy the file.
spec:
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
How about mounting the ConfigMap where you actually want it instead of copying over?
Update:
The init container @ccshih mentioned should do, but one can try other options too:
Build a custom image modifying the base one, using a Docker recipe. The example below takes a java+tomcat7 OpenShift image and adds an additional folder to the app classpath, so you can mount your ConfigMap to /mnt/config without overwriting anything, keeping both folders available.
FROM openshift/webserver31-tomcat7-openshift:1.2-6
# add classpaths to config
RUN sed -i 's/shared.loader=/shared.loader=\/mnt\/config/' \
    /opt/webserver/conf/catalina.properties
Change the ENTRYPOINT of the application, either by modifying the image or via the DeploymentConfig hooks; see: https://docs.okd.io/latest/dev_guide/deployments/deployment_strategies.html#pod-based-lifecycle-hook
With the hooks, one just needs to remember to call the original entrypoint or launch script after all the custom steps are done.
spec:
  containers:
  - name: my-app
    image: 'image'
    command:
    - /bin/sh
    args:
    - '-c'
    - cp /wherever/you/have/your-config.xml /wherever/you/want/it/ && /opt/webserver/bin/launch.sh

Can I share a single file between containers in a pod?

My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /var/run # or /var/run/utmp, which crashes
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /tmp/main-run
  volumes:
  - name: main-run
    emptyDir: {}
If you can move the file to be shared into an empty subdirectory, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp folder with an emptyDir.
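Adapting the demo manifest above, a sketch of that layout (whether the main process can be pointed at the new utmp location is an assumption that depends on your software):

containers:
- name: main
  image: debian
  command: ["/bin/bash"]
  args: ["-c", "sleep infinity"]
  volumeMounts:
  - name: utmp-dir
    mountPath: /var/run/utmp    # now a directory that contains the utmp file
- name: helper
  image: debian
  command: ["/bin/bash"]
  args: ["-c", "sleep infinity"]
  volumeMounts:
  - name: utmp-dir
    mountPath: /tmp/main-run    # the helper reads /tmp/main-run/utmp
volumes:
- name: utmp-dir
  emptyDir: {}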

How to pass a configuration file through YAML on Kubernetes to create a new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. the way we use the ADD command in a Dockerfile.
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way you are already familiar with, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
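For instance, a minimal sketch of the secret-based option (the secret name nginx-config, the file name my-config.conf, and the mount path are placeholder assumptions; kubectl create secret handles the base64 encoding for you):

# kubectl create secret generic nginx-config --from-file=my-config.conf
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /etc/nginx/conf.d
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: nginx-config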
I believe you can also download the config during container initialization.
See the example below; you could download your config instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}