Bash script mounted as configmap with 777 permissions cannot be run - kubernetes

This might be simple, but I can't seem to figure out why a bash script mounted as a configmap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that as with Docker, the root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?

See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
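For context, a fuller Pod sketch using that volume could look like the following; the Pod name, image, and mount path here are illustrative, not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: test-script-pod
spec:
  containers:
    - name: app
      image: my-app:latest                 # placeholder; the image must provide the script's interpreter
      command: ["/opt/scripts/fileName"]   # the mounted script is now executable
      volumeMounts:
        - name: test-script
          mountPath: /opt/scripts
  volumes:
    - name: test-script
      configMap:
        name: test-script
        defaultMode: 0777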

Alright, so I don't have links to the documentation, however ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the contents of the file into another file in a location where the local root can write (/usr/local in my case), and this way the file can be run (see the sketch below).
If anyone comes up with a more clever solution I'll mark it as the correct answer.
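For reference, a minimal sketch of that copy-and-run workaround, with the paths taken from the question above:
cp /path/fileName /usr/local/bin/fileName   # copy out of the read-only ConfigMap mount
chmod +x /usr/local/bin/fileName            # permissions are now under your control
/usr/local/bin/fileName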

It's no surprise that you cannot run a script which is mounted as a ConfigMap. The name of the resource itself (ConfigMap) should have suggested not to use it for this.
As a workaround you can put your script in some git repo, mount an EmptyDir into an InitContainer that clones the repo using git, and then mount the same EmptyDir into the Pod's main container. The InitContainer will download the latest version every time the container is created, roughly as sketched below.
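A rough sketch of that approach; the repository URL, images, and paths are placeholders:
spec:
  initContainers:
    - name: fetch-scripts
      image: alpine/git                    # any image with git will do
      command: ["git", "clone", "--depth=1", "https://example.com/my/scripts.git", "/scripts"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: scripts
          mountPath: /opt/scripts
  volumes:
    - name: scripts
      emptyDir: {}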

Related

kubectl cp bitnami apache helm chart: cannot copy to exact location of pod filesystem

I'm trying to deploy a react app on my local machine with docker-desktop and its kubernetes cluster with bitnami apache helm chart.
I'm following this tutorial.
The tutorial makes you publish the image on a public repo (step 2) and I don't want to do that. It is indeed possible to pass the app files through a persistent volume claim.
This is described in the following tutorial.
Step 2 of this second tutorial lets you create a pod pointing to a PVC and then asks you to copy the app files there by using the command
kubectl cp /myapp/* apache-data-pod:/data/
My issues:
I cannot use the * wildcard or else I get an error. To avoid this I just run
kubectl cp . apache-data-pod:/data/
This instruction copies the files into the pod, but it creates another data folder inside the already existing data folder in the pod filesystem.
After this command my pod filesystem looks like this
I tried executing
kubectl cp . apache-data-pod:/
But this copies the files into the root of the pod filesystem, at the same location where the first data folder is.
I need to copy the data directly in <my_pod>:/data/.
How can I achieve such behaviour?
Regards
Use the full path in the command, as mentioned below, to copy local files to the POD.
If there are multiple containers in the POD, use the below syntax to copy a file from local to the pod:
kubectl cp /<path-to-your-file>/<file-name> <pod-name>:<fully-qualified-file-name> -c <container-name>
Points to remember:
When referring to a file path on the POD, it is always relative to the WORKDIR you have defined in your image.
Unlike Linux, the base directory does not always start from /; the WORKDIR is the base directory.
When you have multiple containers in the POD, you need to specify the container to use for the copy operation with the -c parameter.
Quick example of kubectl cp: Here is the command to copy the index.html file from the POD's /var/www/html to the local /tmp directory.
There is no need to mention the full path when the doc root is the WORKDIR or the default directory of the image.
kubectl cp apache-pod:index.html /tmp
To make it less confusing, you can always write the full path like this
kubectl cp apache-pod:/var/www/html/index.html /tmp
Also refer to this Stack Overflow question for more information.
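Applied to the question above, copying a single local file into the pod's /data directory would then look roughly like this (the file name is just an example; add -c <container-name> if the pod has more than one container):
kubectl cp /myapp/index.html apache-data-pod:/data/index.html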

Helm chart MongoDB cannot create directory permissions

I'm trying to deploy MongoDB with Helm and it gives this error:
mkdir: cannot create directory /bitnami/mongodb/data : permission denied
I also tried this solution:
sudo chown -R 1001 /tmp/mongo
but it says there is no such directory.
You have permission denied on /bitnami/mongodb/data and you are trying to modify another path: /tmp/mongo. It is possible that you do not have such a directory at all.
You need to change the owner of the resource for which you don't have permissions, not random (non-related) paths :)
You've probably seen this github issue and this answer:
You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.
As you can see in our changelog, the container was migrated to the non-root user approach, that means that the user 1001 needs read/write permissions in the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions in your local folder and try to launch the container again?
sudo chown -R 1001 /tmp/mongo
This method will work if you are going to mount the /tmp/mongo folder, which is actually not a very common approach. Have a look at another answer:
Please note that mounting host path volumes is not the usual way to work with these containers. If using docker-compose, it would be using docker volumes (which already handle the permission issue), the same would apply with Kubernetes and the MongoDB helm chart, which would use the securityContext section to ensure the proper permissions.
In your situation, you'll just have to change the owner of the path /bitnami/mongodb/data, or use a Security Context in your Helm chart, and everything should work out for you.
The most interesting part, with an example securityContext, is probably this:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
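With the Bitnami MongoDB chart, the same thing is set through the chart values rather than directly on the Pod. The key names below are an assumption based on typical Bitnami charts, so check the values.yaml of your chart version:
# values.yaml (sketch; key names assumed)
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001
Then install with the values file, for example:
helm install my-mongo bitnami/mongodb -f values.yaml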

K8s PersistentVolume - smart way to view data

Using Google Cloud & Kubernetes engine:
Is there a smart way to view or mount a PersistentVolume (physical storage; in the case of Google, a Persistent Disk) on a local drive/remote computer/macOS, or anything else able to view the data on the volume, in order to back up or just view files?
Maybe using something like FUSE and in my case osxfuse.
Obviously I can mount a container and exec,
but maybe there are other ways?
Tried to ssh into the node and cd to /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet
But I get cd: pods: Permission denied
Regarding sharing a PersistentDisk between VMs, it was discussed here. If you want to use the same PD on many nodes, it will work only in read-only mode.
The easiest way to check what's inside the PD is to SSH to the node (like you mentioned), but it will require superuser (sudo) privileges.
- SSH to node
$ sudo su
$ cd /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts
$ ls
Now you will get a few records, depending on how many PVCs you have. The folder name is the same as the name you get from kubectl get pv.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-53091548-57af-11ea-a629-42010a840131 1Gi RWO Delete Bound default/pvc-postgres standard 42m
Enter it using cd:
$ cd <pvc_name>
in my case:
$ cd gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131
now you can list all files inside this PersistentDisk
...gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131 # ls
lost+found text.txt
$ cat text.txt
This is test
It's not empty
There is a tutorial on GitHub where a user used sshfs, but on macOS.
===
An alternative way to mount the PD on your local machine is to use NFS. However, you would need to configure it yourself. You could then specify the mount in your Deployment and on your local machine.
More details can be found here.
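A minimal sketch of what the NFS mount could look like in the Deployment's Pod spec; the server address and export path are placeholders:
containers:
  - name: app
    image: my-app:latest
    volumeMounts:
      - name: shared-data
        mountPath: /data
volumes:
  - name: shared-data
    nfs:
      server: nfs-server.example.com
      path: /exports/data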
===
To create backups, you can consider Persistent Disk snapshots.

Why lost + found directory inside OpenEBS volume?

Every time I create a new OpenEBS volume and mount it on the host/application, a lost+found directory is created.
Is there some way to avoid this, and what is the need for it?
The lost+found directory is created by ext4.
It can be deleted manually, but it will get created again on the next mount/fsck. In your application YAML, use the following parameter to ignore it:
image: <image_name>
args:
  - "--ignore-db-dir"
  - "lost+found"

Docker-compose named mounted volume

In order to keep track of the volumes used by docker-compose, I'd like to use named volumes. This works great for 'normal' volumes like
version: '2'
services:
  example-app:
    volumes:
      - named_vol:/dir/in/container/volume
volumes:
  named_vol:
But I can't figure out how to make it work when mounting a path from the local host.
I'm looking for something like:
version: '2'
services:
  example-app:
    volumes:
      - named_homedir:/dir/in/container/volume
volumes:
  named_homedir: /c/Users/
or
version: '2'
services:
  example-app:
    volumes:
      - /c/Users/:/home/dir/in/container/ --name named_homedir
is this in any way possible or am I stuck with anonymous volumes for mounted ones?
As you can read in this GitHub issue, mounting named volumes is now a thing (since 1.11 or 1.12). Driver-specific options are documented. Some notes from the GitHub thread:
docker volume create --opt type=none --opt device=<host path> --opt o=bind
If the host path does not exist, it will not be created.
Options are passed in literally to the mount syscall. We may add special cases for certain "types" because they are awkward to use... like the nfs example [referenced above].
– @cpuguy83
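For example, creating such a bind-backed named volume and then using it from the CLI might look like this; the volume name, host path, and image are illustrative:
docker volume create --opt type=none --opt device=/home/user/project --opt o=bind project_vol
docker run -v project_vol:/app my-image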
To address your specific question about how to use that in compose, you write under your volumes section:
my-named-volume:
  driver_opts:
    type: none
    device: /home/full/path  # NOTE: needs the full path (~ doesn't work)
    o: bind
This is because as cpuguy83 wrote in the github thread linked, the options are (under the hood) passed directly to the mount command.
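Putting that together, a complete compose file using such a volume might look roughly like this (the service name, image, and host path are illustrative):
version: '2'
services:
  example-app:
    image: my-image
    volumes:
      - my-named-volume:/dir/in/container
volumes:
  my-named-volume:
    driver_opts:
      type: none
      device: /home/full/path
      o: bind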
EDIT: As commented by…
…@villasv, you can use ${PWD} for relative paths.
…@mikeyjk, you might need to delete preexisting volumes:
docker volume rm $(docker volume ls -q)
OR
docker volume prune
…@Camron Hudson, in case you have "no such file or directory" errors showing up, you might want to read this SO question/answer, as Docker does not follow symlinks and there might be permission issues with your local file system.
OP appears to be using full paths already, but if like most people you're interested in mounting a project folder inside the container this might help.
This is how to do it with driver_opts, as @kaiser said and @linuxbandit exemplified. But you can try to use the usually available environment variable ${PWD} to avoid specifying full paths for directories in the docker-compose context:
logs-directory:
  driver_opts:
    type: none
    device: ${PWD}/logs
    o: bind
I've been trying the (almost) same thing and it seems to work with something like:
version: '2'
services:
  example-app:
    volumes:
      - named_vol:/dir/in/container/volume
      - /c/Users/:/dir/in/container/volume
volumes:
  named_vol:
Seems to work for me (I didn't dig into it, just tested it).
I was looking for an answer to the same question recently and stumbled on this plugin: https://github.com/CWSpear/local-persist
Looks like it allows just what the topic starter wants to do.
Haven't tried it myself yet, but thought it might be useful for somebody.
Host volumes are different from named volumes or anonymous volumes. Their "name" is the path on the host.
There is no way to use the volumes section for host volumes.