Kubernetes changing ownership of the volume parent directory

I am running a non-root K8s pod that uses a PV and the following security context:
# security context
securityContext:
  runAsUser: 1000
  runAsGroup: 2000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
# volume
volumeMounts:
  - name: app
    mountPath: /home/user/app
The files and folders created inside the volume are indeed owned by UID 1000 and GID 2000:
-rw-r--r-- 1 1000 2000 2113 Jun 7 12:34 README.md
-rw-r--r-- 1 1000 2000 1001 Jun 7 12:34 package.json
but the parent directory app itself is owned by root instead of UID 1000 and GID 2000:
drwxrwxrwx 5 0 0 8 Jun 7 12:34 app
I tried creating the app folder beforehand with the right ownership and permissions, but it gets overridden because the volume is provisioned by the K8s CSI driver.
The documentation actually states that the parent directory should also be owned by GID 2000:
The owner for volume /data/demo and any files created in that volume
will be Group ID 2000.
How can I force Kubernetes to respect the ownership of the parent directory? Is that handled by the CSI driver?
I am using Rook as the storage class.

When volumes are mounted, pre-existing files and directories are overwritten by the CSI driver.
I'm not sure where the permissions on the mounted directories come from; my guess is that it's simply the UID of the FS provisioner, but this is pure speculation on my part.
One option is to provision the directories you want yourself: you could use an init container with the same securityContext to set them up, or add some code to the main container that checks for and conditionally provisions the directory.
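A minimal sketch of the init container approach, assuming the same volume and mount path as the question (the container name, image, and mode bits are illustrative):
initContainers:
  - name: fix-app-ownership            # hypothetical name
    image: busybox:1.36                # any small image with a shell works
    command: ["sh", "-c", "chown 1000:2000 /home/user/app && chmod 775 /home/user/app"]
    securityContext:
      runAsUser: 0                     # chown needs root; only this init container runs as root
    volumeMounts:
      - name: app
        mountPath: /home/user/app
The init container only touches the volume root; the main container keeps runAsUser: 1000 and sees /home/user/app owned by 1000:2000.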

Related

Docker Permissions Assistance

Hoping you can help. I'm not able to select my externally mounted drives within the media path in Sonarr. It doesn't even show any folders past the "Media" folder on the main root drive. I'm sure it's a permissions issue, but I've granted access everywhere. This setup is a docker-compose Portainer stack on Ubuntu 22.
The env file has the correct values for all apps, matching the output of id $USER; my user is plex.
`PUID=1000
PGID=1000`
I have correctly mounted my internal drives, giving them the proper read/write permissions and assigning them to the correct group (plex), as you can see below.
`plex@plex:~$ ls -l /media
total 4
drwxrwxrwx+ 4 plex plex 4096 Feb 10 17:11 plex
plex@plex:~$ ls -l /media/plex
total 8
drwx------ 4 plex plex 4096 Feb 9 14:29 sdb1
drwxrwxrwx 4 plex plex 4096 Feb 9 16:35 sdc1`
The drives also show as mounted in the GNOME Disks utility.
`Ext4 (version 1.0) — Mounted at /media/plex/sdb1`
Please let me know if there's any other information needed from my end or if I'm missing a step. I've attached the entire docker-compose file and env file below. Thank you for the help.
Docker-Compose File:
https://pastebin.com/HLR1pdJd
Env File:
`# Your timezone, https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/New_York
# UNIX PUID and PGID, find with: id $USER
PUID=1000
PGID=1000
# The directory where data and configuration will be stored.
ROOT=/media/plex/sdb1/data`
My view in the /media directory within sonarr -
https://imgur.com/6hjTgc6
I tried granting permissions on the folders via chown, and set the PUID and PGID, along with read/write on all folders for the plex group.
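For reference, the relevant part of a Sonarr service in docker-compose would look roughly like this; the real file is behind the Pastebin link, so the image tag and paths here are illustrative:
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000          # must match `id plex` on the host
      - PGID=1000
      - TZ=${TZ}
    volumes:
      - ${ROOT}/config/sonarr:/config
      - /media/plex:/media        # the host drives must be mapped in; Sonarr can
                                  # only browse paths that exist inside the container
Whatever Sonarr shows under /media is the container's view of the filesystem: a host directory that is not listed under volumes (or that the PUID/PGID user cannot read) will simply not appear.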

Helm chart MongoDB cannot create directory: permission denied

I am trying to deploy MongoDB with Helm and it gives this error:
mkdir: cannot create directory '/bitnami/mongodb/data': Permission denied
I also tried this solution:
sudo chown -R 1001 /tmp/mongo
but it says there is no such directory.
You have permission denied on /bitnami/mongodb/data, yet you are trying to modify a different path: /tmp/mongo. It is possible that you do not have such a directory at all.
You need to change the owner of the resource you lack permissions for, not a random (unrelated) path :)
You've probably seen this GitHub issue and this answer:
You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.
As you can see in our changelog, the container was migrated to the non-root user approach, that means that the user 1001 needs read/write permissions in the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions in your local folder and try to launch the container again?
sudo chown -R 1001 /tmp/mongo
This method will work if you are going to mount the /tmp/mongo folder, which is actually not very common. Look at the other answer:
Please note that mounting host path volumes is not the usual way to work with these containers. If using docker-compose, it would be using docker volumes (which already handle the permission issue), the same would apply with Kubernetes and the MongoDB helm chart, which would use the securityContext section to ensure the proper permissions.
In your situation, you'll just have to change the owner of the path /bitnami/mongodb/data, or use a Security Context in your Helm chart, and everything should work out for you.
The most relevant part is probably here, with an example securityContext:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
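If you are using the Bitnami MongoDB chart, this is normally set through the chart's values rather than by editing the rendered manifests; a rough sketch (parameter names differ between chart versions, so check the chart's values.yaml):
# values.yaml, passed with: helm install my-mongo bitnami/mongodb -f values.yaml
podSecurityContext:
  enabled: true
  fsGroup: 1001              # the Bitnami image runs as the non-root user 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001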

Configure gsutil to use Kubernetes service account credentials inside a pod

I have a Kubernetes CronJob that performs some backup jobs, and the backup files need to be uploaded to a bucket. The pod has the service account credentials mounted at /var/run/secrets/kubernetes.io/serviceaccount, but how can I instruct gsutil to use those credentials?
lrwxrwxrwx 1 root root 12 Oct 8 20:56 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 8 20:56 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 8 20:56 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 31 Oct 8 20:56 ..data -> ..2018_10_08_20_56_04.686748281
drwxr-xr-x 2 root root 100 Oct 8 20:56 ..2018_10_08_20_56_04.686748281
drwxrwxrwt 3 root root 140 Oct 8 20:56 .
drwxr-xr-x 3 root root 4096 Oct 8 20:57 ..
The short answer is that the token there is not in a format that gsutil knows how to use, so you can't use it. You'll need a JSON keyfile, as mentioned in the tutorial here (except that you won't be able to use the GOOGLE_APPLICATION_CREDENTIALS environment variable):
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Rather than reading from the GOOGLE_APPLICATION_CREDENTIALS environment variable, Gsutil uses Boto configuration files to load credentials. The common places that it knows to look for these Boto config files are /etc/boto.cfg and $HOME/.boto. Note that the latter value changes depending on the user running the command ($HOME expands to different values for different users); since cron jobs usually run as a different user than the one who set up the config file, I wouldn't recommend relying on this path.
So, on your pod, you'll need to first create a Boto config file that references the keyfile:
# This option is only necessary if you're running an installation of
# gsutil that came bundled with gcloud. It tells gcloud that you'll be
# managing credentials manually via your own Boto config files.
$ gcloud config set pass_credentials_to_gsutil False
# Set up your boto file at /path/to/my/boto.cfg - the setup will prompt
# you to supply the /path/to/your/keyfile.json. Alternatively, to avoid
# interactive setup prompts, you could set up this config file beforehand
# and copy it to the pod.
$ gsutil config -e -o '/path/to/my/boto.cfg'
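The generated Boto config essentially just points gsutil at the keyfile; stripped down, it looks roughly like this (the keyfile path is whatever you supplied during setup):
[Credentials]
gs_service_key_file = /path/to/your/keyfile.json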
And finally, whenever you run gsutil, you need to tell it where to find that Boto config file which references your JSON keyfile (and also make sure that the user running the command has permission to read both the Boto config file and the JSON keyfile). If you wrote your Boto config file to one of the well-known paths I mentioned above, gsutil will attempt to find it automatically; if not, you can tell gsutil where to find the Boto config file by exporting the BOTO_CONFIG environment variable in the commands you supply for your cron job:
export BOTO_CONFIG=/path/to/my/boto.cfg; /path/to/gsutil cp <src> <dst>
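Putting this together for the CronJob in the question, one way to wire it up is to ship the keyfile plus the Boto config in a Secret and point BOTO_CONFIG at the mounted file; a sketch with hypothetical names, paths, and schedule:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-upload
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: google/cloud-sdk:slim          # any image that ships gsutil
              env:
                - name: BOTO_CONFIG
                  value: /etc/gsutil/boto.cfg
              command: ["sh", "-c", "gsutil cp /backups/*.tar.gz gs://my-backup-bucket/"]  # backup files assumed under /backups
              volumeMounts:
                - name: gsutil-creds
                  mountPath: /etc/gsutil            # holds boto.cfg and keyfile.json
                  readOnly: true
          volumes:
            - name: gsutil-creds
              secret:
                secretName: gsutil-creds
The boto.cfg stored in the Secret should reference the keyfile at its mounted path, e.g. gs_service_key_file = /etc/gsutil/keyfile.json.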
Edit:
Note that GCE VM images come with a pre-populated file at /etc/boto.cfg. This config file tells gsutil to load a plugin that allows gsutil to contact the GCE metadata server and fetch auth tokens (corresponding to the default robot service account for that VM) that way. If your pod is able to read the host VM's /etc/boto.cfg file, you're able to contact the GCE metadata server, and you're fine with operations being performed by the VM's default service account, this solution should work out-of-the-box.
Note that your Kubernetes Service Account is different from your Google Cloud Storage service account.
gsutil uses the Boto config, so you can mount a Kubernetes Secret at /etc/boto.cfg or ~/.boto.
You can authenticate with GCP using a token or a service account. You can generate a token using gsutil config -f, or generate service account credentials using gsutil config -e. Either will generate a ~/.boto file, which you can then mount as a Kubernetes Secret on your pods.
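A sketch of that approach, with an illustrative Secret name:
# create the Secret from the generated .boto file
kubectl create secret generic boto-config --from-file=boto=.boto
and in the pod spec mount that single key over /etc/boto.cfg:
volumeMounts:
  - name: boto-config
    mountPath: /etc/boto.cfg
    subPath: boto
volumes:
  - name: boto-config
    secret:
      secretName: boto-config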
More information here.

Why lost + found directory inside OpenEBS volume?

Every time I create a new OpenEBS volume and mount it on the host/application, a lost+found directory is created.
Is there some way to avoid this, and what is it needed for?
The lost+found directory is created by ext4.
It can be deleted manually, but it will be recreated on the next mount/fsck. In your application YAML, use the following parameter to ignore it:
image: <image_name>
args:
  - "--ignore-db-dir"
  - "lost+found"

Bash script mounted as ConfigMap with 777 permissions cannot be run

This might be simple, but I can't seem to figure out why a bash script mounted as a ConfigMap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that as with Docker, the root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
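For completeness, the container side that goes with this volumes block might look like the following; the mount path and file name reuse the question's placeholders, and the image is illustrative:
containers:
  - name: nodejs-app
    image: my-nodejs-app:latest
    command: ["/bin/bash", "/path/fileName"]     # the ConfigMap key is projected as /path/fileName
    volumeMounts:
      - name: test-script
        mountPath: /path
With defaultMode: 0777 the projected file carries the execute bit, so it can also be invoked directly as /path/fileName.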
Alright, so I don't have links to the documentation, however ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the content of the file into another file in a location where the local root can write (/usr/local in my case), and that way the file can be run.
If anyone comes up with a more clever solution, I'll mark it as the correct answer.
It's no surprise that you cannot run a script that is mounted as a ConfigMap. The name of the resource itself (ConfigMap) should suggest it isn't meant to be used that way.
As a workaround, you can put your script in a git repo, mount an emptyDir into an init container that clones the repo, and then mount the same emptyDir into the Pod's main container. The init container will download the latest version of the script every time the Pod is created.
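A minimal sketch of that workaround; the repository URL, image tags, and paths are illustrative:
initContainers:
  - name: fetch-scripts
    image: alpine/git:latest                     # small image whose entrypoint is git
    args: ["clone", "--depth=1", "https://example.com/my/scripts.git", "/scripts"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
containers:
  - name: nodejs-app
    image: my-nodejs-app:latest
    command: ["/bin/bash", "/scripts/fileName"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
volumes:
  - name: scripts
    emptyDir: {}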