Docker Permissions Assistance - docker-compose

Hoping you can help. I'm not able to select my externally mounted drives within the media path in Sonarr. It doesn't even show any folders past the "Media" folder on the main root drive. I'm sure it's a permissions issue, but I've granted access everywhere. This setup is a docker-compose Portainer stack on Ubuntu 22.
The ENV file has the correct values for all apps, which align with the output of id $USER for my user, plex:
`PUID=1000
PGID=1000`
I have correctly mounted my internal drives giving them the proper read/write permissions as well as assigning them to the correct groups (plex) as you can see below.
`plex@plex:~$ ls -l /media
total 4
drwxrwxrwx+ 4 plex plex 4096 Feb 10 17:11 plex
plex@plex:~$ ls -l /media/plex
total 8
drwx------ 4 plex plex 4096 Feb 9 14:29 sdb1
drwxrwxrwx 4 plex plex 4096 Feb 9 16:35 sdc1`
The drives show mounted as well within the gnome disk utility.
`Ext4 (version 1.0) — Mounted at /media/plex/sdb1`
Please let me know if there's any other information needed from my end or if I'm missing a step. I've attached the entire docker-compose file and ENV file below. Thank you for the help.
Docker-Compose File:
https://pastebin.com/HLR1pdJd
Env File:
`# Your timezone, https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/New_York
# UNIX PUID and PGID, find with: id $USER
PUID=1000
PGID=1000
# The directory where data and configuration will be stored.
ROOT=/media/plex/sdb1/data`
My view of the /media directory within Sonarr:
https://imgur.com/6hjTgc6
Tried granting permissions on the folders via chown. Set the PUID and PGID, and set all folders to read/write for the plex group.
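Roughly, the commands I ran looked like this (paths approximate):
`sudo chown -R plex:plex /media/plex
sudo chmod -R g+rw /media/plex
# confirm the user/group IDs match the PUID/PGID in the env file
id plex`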

Related

PostgreSQL path and LVM path issues while mapping to a directory

I have created an EC2 instance, attached three EBS volumes (gp3 = 3, io1 = 4 GB, io2 = 4) and mounted them.
I have installed PostgreSQL 8.4.18 from source on it and created a database with 2 million sample entries.
The PostgreSQL data directory is /usr/local/pgsql/data.
Now my root volume is 95% full, so I created an LVM volume from these three EBS volumes with pvcreate, vgcreate and lvcreate, formatted it as ext4, and mounted it at /usr/local/pgsql.
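The LVM setup looked roughly like this (device names and volume group names here are illustrative, not the exact ones I used):
# device and volume group names are illustrative
sudo pvcreate /dev/xvdf /dev/xvdg /dev/xvdh
sudo vgcreate pg_vg /dev/xvdf /dev/xvdg /dev/xvdh
sudo lvcreate -l 100%FREE -n pg_lv pg_vg
sudo mkfs.ext4 /dev/pg_vg/pg_lv
sudo mount /dev/pg_vg/pg_lv /usr/local/pgsql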
Now when I try to start PostgreSQL by doing su - postgres and then /usr/local/pgsql/data/bin/pg_ctl -D usr/local/pgsql/data -l logfile start, it does not start and I get the error bash: /usr/local/pgsql/data/bin/pg_ctl: No such file or directory.
And if I do cd /usr/local/pgsql/data, all the data is gone (e.g. the pg config files, logs, hba files, etc.). Also, when I open /home/postgres/usr/local/pgsql/data/postgresql.conf I see a blank page.
If I mount the LVM volume to other directories it works and I can see the conf files, etc. I want to mount it to the same directory so that once my root volume is full, any further sample tables I create are stored on the LVM volume.
Tried checking the conf files and uninstalling PostgreSQL, but that didn't help. I tried the same on PostgreSQL v12 and hit the same error; I then uninstalled Postgres and re-installed it with its default directory and it worked there, but not on v8.4.18.

Kubernetes changing ownership of the volume parent directory

I am running a non-root K8s pod, which is using a PV and the following security context:
# security context
securityContext:
  runAsUser: 1000
  runAsGroup: 2000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
# volume
volumeMounts:
  - name: app
    mountPath: /home/user/app
The files and folders created inside the volume are indeed owned by 1000 and 2000
-rw-r--r-- 1 1000 2000 2113 Jun 7 12:34 README.md
-rw-r--r-- 1 1000 2000 1001 Jun 7 12:34 package.json
but the parent directory /app is owned by root instead of UID 1000
drwxrwxrwx 5 0 0 8 Jun 7 12:34 app
I tried creating the app folder beforehand with the right ownership and permissions, but it's getting overridden, as the volume is created by the K8s CSI driver.
Actually, the documentation states that the parent directory should also be owned by GID 2000:
The owner for volume /data/demo and any files created in that volume
will be Group ID 2000.
How can I force Kubernetes to respect the ownership of the parent directory? Is that handled by the CSI?
I am using Rook as the storage class.
When mounting volumes the pre-existing files and directories will be overwritten by the CSI.
I'm not sure where the permissions on the mounted directories are coming from; my guess is that it's simply the UID of the FS provisioner, but this is pure speculation on my part.
Perhaps a solution is to provision the directories you want yourself; you could use an initContainer with the same securityContext to set it up, or add some code in the main pod that checks for and conditionally provisions the directory.
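A rough sketch of one way to do that with an initContainer; note this variant runs as root so it can chown the mount point itself, unlike the main container (names, image and paths are placeholders):
initContainers:
  - name: volume-permissions        # placeholder name
    image: busybox                  # placeholder image
    # chown the mount point itself before the app container starts
    command: ["sh", "-c", "chown 1000:2000 /home/user/app"]
    securityContext:
      runAsUser: 0                  # root, unlike the app container
    volumeMounts:
      - name: app
        mountPath: /home/user/app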

Configure gsutil to use kubernetes service account credentials inside of pod

I have a Kubernetes CronJob that performs some backup jobs, and the backup files need to be uploaded to a bucket. The pod has the service account credentials mounted at /var/run/secrets/kubernetes.io/serviceaccount, but how can I instruct gsutil to use the credentials in /var/run/secrets/kubernetes.io/serviceaccount?
lrwxrwxrwx 1 root root 12 Oct 8 20:56 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 8 20:56 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 8 20:56 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 31 Oct 8 20:56 ..data -> ..2018_10_08_20_56_04.686748281
drwxr-xr-x 2 root root 100 Oct 8 20:56 ..2018_10_08_20_56_04.686748281
drwxrwxrwt 3 root root 140 Oct 8 20:56 .
drwxr-xr-x 3 root root 4096 Oct 8 20:57 ..
The short answer is that the token there is not in a format that gsutil knows how to use, so you can't use it. You'll need a JSON keyfile, as mentioned in the tutorial here (except that you won't be able to use the GOOGLE_APPLICATION_CREDENTIALS environment variable):
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Rather than reading from the GOOGLE_APPLICATION_CREDENTIALS environment variable, Gsutil uses Boto configuration files to load credentials. The common places that it knows to look for these Boto config files are /etc/boto.cfg and $HOME/.boto. Note that the latter value changes depending on the user running the command ($HOME expands to different values for different users); since cron jobs usually run as a different user than the one who set up the config file, I wouldn't recommend relying on this path.
So, on your pod, you'll need to first create a Boto config file that references the keyfile:
# This option is only necessary if you're running an installation of
# gsutil that came bundled with gcloud. It tells gcloud that you'll be
# managing credentials manually via your own Boto config files.
$ gcloud config set pass_credentials_to_gsutil False
# Set up your boto file at /path/to/my/boto.cfg - the setup will prompt
# you to supply the /path/to/your/keyfile.json. Alternatively, to avoid
# interactive setup prompts, you could set up this config file beforehand
# and copy it to the pod.
$ gsutil config -e -o '/path/to/my/boto.cfg'
And finally, whenever you run gsutil, you need to tell it where to find that Boto config file which references your JSON keyfile (and also make sure that the user running the command has permission to read both the Boto config file and the JSON keyfile). If you wrote your Boto config file to one of the well-known paths I mentioned above, gsutil will attempt to find it automatically; if not, you can tell gsutil where to find the Boto config file by exporting the BOTO_CONFIG environment variable in the commands you supply for your cron job:
export BOTO_CONFIG=/path/to/my/boto.cfg; /path/to/gsutil cp <src> <dst>
Edit:
Note that GCE VM images come with a pre-populated file at /etc/boto.cfg. This config file tells gsutil to load a plugin that allows gsutil to contact the GCE metadata server and fetch auth tokens (corresponding to the default robot service account for that VM) that way. If your pod is able to read the host VM's /etc/boto.cfg file, you're able to contact the GCE metadata server, and you're fine with operations being performed by the VM's default service account, this solution should work out-of-the-box.
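If you go that route and your cluster allows hostPath volumes, letting the pod read the host VM's /etc/boto.cfg might look roughly like this (the container name is a placeholder):
volumes:
  - name: host-boto
    hostPath:
      path: /etc/boto.cfg
      type: File
containers:
  - name: backup                    # placeholder container name
    volumeMounts:
      - name: host-boto
        mountPath: /etc/boto.cfg
        readOnly: true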
Note that your Kubernetes Service Account is different from your Google Cloud Storage service account.
gsutil uses the Boto config, so you can mount a Kubernetes secret at /etc/boto.cfg or ~/.boto.
You can authenticate with GCP using a token or a service account. You can generate a token with gsutil config -f, or generate service account credentials with gsutil config -e. Either way it will generate a ~/.boto file, which you can then mount as a Kubernetes secret on your pods.
More information here.
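For example, a rough sketch of creating such a secret from a generated .boto file and mounting it (names and paths are placeholders):
# kubectl create secret generic boto-config --from-file=.boto=/path/to/.boto
volumes:
  - name: boto-config
    secret:
      secretName: boto-config
containers:
  - name: backup                    # placeholder container name
    env:
      - name: BOTO_CONFIG           # point gsutil at the mounted config
        value: /etc/boto/.boto
    volumeMounts:
      - name: boto-config
        mountPath: /etc/boto
        readOnly: true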

CentOS 7 pg_ctl: could not access directory "/var/lib/pgsql/data": Permission denied

PostgreSQL 10.6 and CentOS 7
pg_ctl status
pg_ctl: could not access directory "/var/lib/pgsql/data": Permission denied
Wouldn't pg_ctl have access to this, given /var/lib/pgsql/data has ownership postgres:postgres?
drwx------ 3 postgres postgres 94 Nov 14 06:43 pgsql
How can I fix this without creating a vulnerability? Why is this throwing an error?
Additional info (edit):
su - postgres
cd /var/lib
/var/lib/pgsql: drwx------ 3 postgres postgres 94 Nov 14 06:43 pgsql
/var/lib/pgsql/10: drwx------ 4 postgres postgres 33 Nov 14 06:38 10
/var/lib/pgsql/10/data: drwx------ 20 postgres postgres 4096 Nov 15 03:47 data
In UNIX, each process runs with the permissions of the user that starts the executable, not the owner of the executable (unless the SETUID flag is set).
So it doesn't matter who owns pg_ctl, but you have to be user postgres when you run it.
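For example (the pg_ctl path below assumes the PGDG packages; the data directory is taken from your listing):
# run pg_ctl as the postgres user, pointing at the actual data directory
sudo -u postgres /usr/pgsql-10/bin/pg_ctl -D /var/lib/pgsql/10/data status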
This needs a few troubleshooting steps to pinpoint the real issue.
Find out the owner and file permissions for those locations in Linux:
ls -al /var/lib/pgsql/data/
ls -al /var/lib/pgsql/
Try to change to the postgres user and access the directories from step 1:
# su - postgres
The following links should fill in the blanks for a few of the steps. In the second link you aren't actually moving the data directory, but it shows the steps to ensure the directory is ready and accessible:
https://wiki.postgresql.org/wiki/First_steps
https://www.digitalocean.com/community/tutorials/how-to-move-a-postgresql-data-directory-to-a-new-location-on-ubuntu-16-04
Update
From the comments, it looks like pg_ctl is being run as user x... and lacks sufficient permissions.
Without knowing much about your environment, it may be better to let postgres be the user who runs pg_ctl, since it is already doing the related work.

Bash script mounted as ConfigMap with 777 permissions cannot be run

This might be simple, but I can't seem to figure out why a bash script mounted as a ConfigMap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that as with Docker, the root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
Alright, so I don't have links to the documentation, however ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the content of the file into another file in a location where the local root can write (/usr/local in my case), and this way the file can be run.
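Something along these lines (paths are just examples):
# copy the read-only ConfigMap script to a writable location, then run it
cat /path/fileName > /usr/local/fileName
chmod +x /usr/local/fileName
/usr/local/fileName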
If anyone comes up with a more clever solution I'll mark it as the correct answer.
It's no surprise you cannot run a script which is mounted as a ConfigMap. The name of the resource itself (ConfigMap) should have hinted that it isn't meant for this.
As a workaround you can put your script in some git repo, mount an emptyDir into an initContainer that clones the repo using git, then mount the emptyDir into the Pod's container. The initContainer will download the latest version every time the container is created.
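A rough sketch of that pattern (the repo URL, image and names are placeholders):
volumes:
  - name: scripts
    emptyDir: {}
initContainers:
  - name: fetch-scripts
    image: alpine/git               # placeholder image that has git
    # clone the repo into the shared emptyDir on every pod start
    args: ["clone", "https://example.com/your/scripts.git", "/scripts"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
containers:
  - name: app                       # your main container
    volumeMounts:
      - name: scripts
        mountPath: /opt/scripts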