Hoping you can help. I'm not able to select my externally mounted drives within the media path in Sonarr. It doesn't even show any folders past the "Media" folder within the main root drive. I'm sure it's a permissions issue, but I've granted access everywhere. This setup is a docker-compose stack in Portainer on Ubuntu 22.
The env file sets the correct environment variables for all apps, matching the output of `id $USER` for my user, plex:
`PUID=1000
PGID=1000`
I have mounted my internal drives, given them the proper read/write permissions, and assigned them to the correct group (plex), as you can see below.
`plex@plex:~$ ls -l /media
total 4
drwxrwxrwx+ 4 plex plex 4096 Feb 10 17:11 plex
plex@plex:~$ ls -l /media/plex
total 8
drwx------ 4 plex plex 4096 Feb 9 14:29 sdb1
drwxrwxrwx 4 plex plex 4096 Feb 9 16:35 sdc1`
The drives show mounted as well within the gnome disk utility.
`Ext4 (version 1.0) — Mounted at /media/plex/sdb1`
Please let me know if there's any other information needed from my end or if I'm missing a step. I've attached the entire docker-compose file and env file below. Thank you for the help.
Docker-Compose File:
https://pastebin.com/HLR1pdJd
Env File:
`# Your timezone, https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/New_York
# UNIX PUID and PGID, find with: id $USER
PUID=1000
PGID=1000
# The directory where data and configuration will be stored.
ROOT=/media/plex/sdb1/data`
My view of the /media directory within Sonarr:
https://imgur.com/6hjTgc6
Tried granting permissions on the folders via chown. Set the PUID and PGID, and gave the plex group read/write on all folders.
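For reference, a container can only browse host paths that are bind-mounted into it, so the drives have to appear in the service's volumes. A minimal sketch of that kind of mapping (service layout and paths are illustrative, not the actual compose file from the pastebin):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - ${ROOT}/config/sonarr:/config
      # Without a mapping like this, Sonarr cannot see anything under /media/plex:
      - /media/plex:/media
```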
Evening!
I'm wondering if anyone could share the steps for updating the certificates on an OpenShift 4.6 (Kubernetes) cluster? I've checked using the command below and some are expired.
find /etc/kubernetes/ -type f -name "*.crt" -print|egrep -v 'ca.crt$'|xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
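The same expiry check can be tried on any single certificate. Here's a self-contained demo that generates a throwaway self-signed cert and reads its notAfter date, exactly as the find/openssl one-liner does per file (the real certs live under /etc/kubernetes):

```shell
# Generate a throwaway self-signed cert in a temp dir, then print
# its expiry date with the same openssl x509 invocation.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -days 1 -subj "/CN=demo" 2>/dev/null
openssl x509 -noout -enddate -in "$tmp/cert.pem"   # prints: notAfter=<date>
```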
I'm not able to find steps relevant to my UPI install. The following certificates are expired as well:
81789506 lrwxrwxrwx. 1 root root 59 Jan 9 00:32 kubelet-server-current.pem -> /var/lib/kubelet/pki/kubelet-server-2021-06-18-20-35-33.pem
81800208 lrwxrwxrwx. 1 root root 59 Jan 9 00:32 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-06-19-13-16-00.pem
Since the API server is offline, I'm not able to renew the certificates via oc commands; every oc command returns an error because the API server (port 6443) is down. This cluster is installed on VMware using the UPI method. There was a failure some time back that took the cluster offline, and when it was brought back up the certs had already expired and could not be renewed, since the services needed for renewal were themselves offline, I think.
Has anyone managed to recover from this scenario and would be able to help?
Did you check the official doc on that subject? It may help you:
https://docs.openshift.com/container-platform/4.6/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.html
But if you can't login to your cluster, it may be quite difficult...
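For reference, if you do get the API server answering again by following that doc, the recovery flow typically ends with approving the pending CSRs so the kubelets receive fresh certificates. Roughly (this needs working oc credentials, so it's only a sketch):

```shell
# List pending certificate signing requests, then approve them all.
# Only run once you can authenticate against the API server again.
oc get csr
oc get csr -o name | xargs oc adm certificate approve
```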
I have an AKS cluster (version 1.19) deployed on Azure. As part of the deployment there are 2 azure-cni-networkmonitor pods in the kube-system namespace. When I open a bash shell in one of the pods using:
kubectl exec -t -i -n kube-system azure-cni-networkmonitor-th6pv -- bash
I've noticed that although I'm running as root in the container:
uid=0(root) gid=0(root) groups=0(root)
There are some files that I can't open for reading; read commands result in a permission denied error, for example:
cat: /run/containerd/io.containerd.runtime.v1.linux/k8s.io/c3bd2dfc2ad242e1a706eb3f42be67710630d314cfeb4b96ec35f35869264830/rootfs/sys/module/zswap/uevent: Permission denied
File stat:
Access: (0200/--w-------) Uid: ( 0/ root) Gid: ( 0/ root)
Linux distribution running on container:
Common Base Linux Delridge
Although the file is non-readable, as root I shouldn't have a problem reading it, right?
Any idea why this would happen? I don't see SELinux enabled there.
/proc and /sys are special filesystems created and maintained by the kernel to provide interfaces into settings and events in the system. The uevent files are used to access information about the devices or send events.
If a given subsystem implements functionality to expose information via that interface, you can cat the file:
[root@home sys]# cat /sys/devices/system/cpu/cpu0/uevent
DRIVER=processor
MODALIAS=cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0017,0018,0019,001A,001B,001C,002B,0034,003A,003B,003D,0068,006F,0070,0072,0074,0075,0076,0079,0080,0081,0089,008C,008D,0091,0093,0094,0096,0097,0099,009A,009B,009C,009D,009E,009F,00C0,00C5,00E7,00EB,00EC,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0120,0123,0125,0127,0128,0129,012A,012D,0140,0165,024A,025A,025B,025C,025D,025F
But if that subsystem doesn't expose that interface, you just get permission denied - even root can't call kernel code that's not there.
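To see the plain permission-bits part of this in isolation: mode 0200 means write-only, which is exactly the mode on the uevent file in the question. (This demo uses a regular file; sysfs adds the extra twist that reads fail even for root when no read handler is implemented.)

```shell
# A 0200 file accepts writes but denies reads for non-root users;
# on sysfs, a missing read handler denies the read even for root.
f=$(mktemp)
chmod 0200 "$f"
stat -c '%a' "$f"      # prints: 200
echo trigger > "$f"    # writing succeeds
```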
This might be simple, but I can't seem to figure out why a bash script mounted as a configmap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that, as with Docker, root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set defaultMode on the ConfigMap volume to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
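For completeness, the volume still has to be mounted into the container; a sketch (container name and mountPath are assumptions):

```yaml
containers:
  - name: app
    volumeMounts:
      - name: test-script
        mountPath: /path
```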
Alright, so I don't have links to the documentation, however ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the content of the file into another file in a location where the container's root can write (/usr/local in my case), and this way the file can be run.
If anyone comes up with a more clever solution, I'll mark it as the correct answer.
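A rough sketch of that workaround (paths are placeholders; in the pod the source would be the read-only ConfigMap mount and the target somewhere like /usr/local/bin):

```shell
# Simulate the read-only ConfigMap mount with a throwaway file.
src=$(mktemp)                  # stands in for /path/fileName
printf '#!/bin/sh\necho hello\n' > "$src"
chmod 444 "$src"               # read-only, like a ConfigMap mount
dst=$(mktemp -d)/fileName
cat "$src" > "$dst"            # copy the contents somewhere writable
chmod +x "$dst"                # now the exec bit can be set
"$dst"                         # prints: hello
```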
It's no surprise you cannot run a script that is mounted as a ConfigMap; the name of the resource itself (ConfigMap) suggests it isn't meant for executables.
As a workaround you can put your script in a git repo, mount an emptyDir into an initContainer that clones the repo, and then mount the same emptyDir into the Pod's main container. The initContainer will download the latest version every time the container is created.
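A sketch of that layout (image names, repo URL, and paths are placeholders):

```yaml
volumes:
  - name: scripts
    emptyDir: {}
initContainers:
  - name: fetch-scripts
    image: alpine/git
    args: ["clone", "--depth=1", "https://example.com/your/repo.git", "/scripts"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
containers:
  - name: app
    image: your-app-image
    command: ["/bin/bash", "/opt/scripts/fileName"]
    volumeMounts:
      - name: scripts
        mountPath: /opt/scripts
```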
I am running a Google Compute Engine instance which must be able to read and write to a bucket that is mounted locally.
At the moment, while SSH-ed into the machine, I have permission to read all the files in the directory but not to write to them.
Here some more details:
gcloud init
account: PROJECT_NUMBER-compute@developer.gserviceaccount.com
When looking at the IAM roles on the Google Cloud console, this account has the proprietary role, so it should be able to access all the resources in the project.
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 -o nonempty BUCKET LOCAL_DIR
Now looking at permissions, all files have (as expected):
ls -lh LOCAL_DIR/
drwxrwxrwx 1 ubuntu ubuntu 0 Jul 2 11:51 folder
However, when running a very simple Python script that saves a pickle into one of these directories, I get the following error:
OSError: [Errno 5] Input/output error: FILENAME
If I run gcsfuse with the --foreground flag, the error it produces is:
fuse: 2018/07/02 12:31:05.353525 *fuseops.GetXattrOp error: function not implemented
fuse: 2018/07/02 12:31:05.362076 *fuseops.SetInodeAttributesOp error: SetMtime: \
UpdateObject: googleapi: Error 403: Insufficient Permission, insufficientPermissions
Which is weird, as the account on the VM has the proprietary role.
Any guess on how to overcome this?
Your instance requires the appropriate scopes to access GCS buckets. You can view the scopes through the console, or with: gcloud compute instances describe [instance_name] | grep scopes -A 10
You must have Storage read/write, i.e. the https://www.googleapis.com/auth/devstorage.read_write scope.
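If the scope is missing, it can be changed while the instance is stopped; a sketch with assumed instance and zone names:

```shell
# Scopes can only be changed on a stopped instance.
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-service-account my-instance \
  --zone us-central1-a --scopes storage-rw
gcloud compute instances start my-instance --zone us-central1-a
```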