Permission denied writing artifacts to an NFS-mounted PVC - kubernetes

I'm attempting to write MLflow artifacts to an NFS-mounted PVC. It's a new PVC mounted at /opt/mlflow, but MLflow doesn't seem to have permission to write to it. The specific error I'm getting is
PermissionError: [Errno 13] Permission denied: '/opt/mlflow'
I ran the same deployment with an S3-backed artifact store, and that worked just fine. That was on my home computer though, and I don't have the ability to do that at work. The MLflow documentation seems to indicate that I don't need any special syntax for NFS mounts.

Independent of MLflow, you can approach this as a standard file-permission problem.
Exec into your pod and view the permissions at that file path:
kubectl exec -it <pod> -- sh
ls -l /opt/mlflow
Within your pod/container, check which user you are running as:
whoami
If your user doesn't have access to that filepath, you could adjust the file permissions by mounting the PVC into a different pod that runs as a user with permission to change them. Or you could use fsGroup to control the group ownership of the mounted files; see the Kubernetes documentation on pod security contexts, and the sketch below.
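As a minimal sketch (assuming your MLflow image runs as a non-root user; UID 1000 is an assumption you would verify with the whoami/id check above, and the image and claim names are placeholders):
# Sketch only: adjust runAsUser/fsGroup to the user your MLflow image actually runs as.
apiVersion: v1
kind: Pod
metadata:
  name: mlflow
spec:
  securityContext:
    runAsUser: 1000   # user the container process runs as (assumed)
    fsGroup: 1000     # group the kubelet applies to mounted volumes
  containers:
    - name: mlflow
      image: <your-mlflow-image>        # placeholder
      volumeMounts:
        - name: artifacts
          mountPath: /opt/mlflow
  volumes:
    - name: artifacts
      persistentVolumeClaim:
        claimName: <your-pvc>           # placeholder
Note that fsGroup is not always honored on NFS-backed volumes, since the NFS server's export options and directory ownership still apply; you may also need to fix ownership on the export itself.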

Related

Reset/remove kubeconfig file user from Kubernetes

Whenever you start a Kubernetes cluster at one of the big clouds (EKS at AWS, GKE at GCP, AKS at Azure, or Kubernetes at DigitalOcean), you can generate a kubeconfig file from them, which grants you full access.
They are very convenient to work with, but I am always concerned about what happens if someone manages to steal the kubeconfig. What can I do then?
I have never found a button at any of the big clouds to revoke access for a stolen kubeconfig and regenerate a new one. Is there anything I can do to make this more secure? If you have documentation at hand, that would be appreciated.
On GKE (GCP), the kubeconfig file generated during cluster creation is stored at $HOME/.kube/config by default, where $HOME refers to your home directory (e.g. /home/<user>).
1. If you want to remove a user from the kubeconfig file, use the following command (companion commands for the matching cluster and context entries are sketched after this list):
$ kubectl --kubeconfig=<kubeconfig-name> config unset users.<name>
2. If you want to regenerate the kubeconfig entries for the cluster, re-authorize against it with:
$ gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
3. If you want to restrict access to the kubeconfig file, tighten its permissions with one of the following commands:
$ chmod 644 <kubeconfig-file> - the owner can read and write the file; everyone else on the system can only read it.
$ chmod 640 <kubeconfig-file> - the owner has read and write permissions, the group has read permission, and all other users have no access.
$ chmod 600 <kubeconfig-file> - only the owner of the file has read and write access; no one else can access the file.
NOTE: Revoking the credentials contained in a kubeconfig file after the file has been deleted is not possible; you can only regenerate the kubeconfig's contents by re-authorizing against the cluster.
Refer to the documentation for more information.
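If you also want to drop the cluster and context entries that reference the removed user (the names below are placeholders), the same config subcommands cover those:
$ kubectl --kubeconfig=<kubeconfig-name> config unset clusters.<cluster-name>
$ kubectl --kubeconfig=<kubeconfig-name> config unset contexts.<context-name>
$ kubectl --kubeconfig=<kubeconfig-name> config get-contexts   # verify what is left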

Helm chart MongoDB cannot create directory: permission denied

I'm trying to deploy MongoDB with Helm and it gives this error:
mkdir: cannot create directory /bitnami/mongodb/data : permission denied
I also tried this solution:
sudo chown -R 1001 /tmp/mongo
but it says there is no such directory.
You are getting permission denied on /bitnami/mongodb/data, but you are trying to modify a different path: /tmp/mongo. It is possible that you do not have such a directory at all.
You need to change the owner of the resource for which you don't have permissions, not random (unrelated) paths :)
You've probably seen this github issue and this answer:
You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.
As you can see in our changelog, the container was migrated to the non-root user approach, that means that the user 1001 needs read/write permissions in the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions in your local folder and try to launch the container again?
sudo chown -R 1001 /tmp/mongo
This method will work if you are going to mount the /tmp/mongo folder, which is actually not a common approach. Look at the other answer:
Please note that mounting host path volumes is not the usual way to work with these containers. If using docker-compose, it would be using docker volumes (which already handle the permission issue), the same would apply with Kubernetes and the MongoDB helm chart, which would use the securityContext section to ensure the proper permissions.
In your situation, you'll just have to change the owner of the path /bitnami/mongodb/data, or use a Security Context in your Helm chart, and everything should work out for you.
The most relevant part is the example securityContext:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
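As a usage sketch against the Bitnami MongoDB chart (the podSecurityContext/containerSecurityContext value names reflect recent chart versions; verify them against the chart version you deploy):
$ helm install my-mongo bitnami/mongodb \
    --set podSecurityContext.fsGroup=1001 \
    --set containerSecurityContext.runAsUser=1001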

Permission denied while reading file as root in Azure AKS container

I have an AKS cluster deployed (version 1.19) on Azure. As part of the deployment, in the kube-system namespace there are 2 azure-cni-networkmonitor pods. When opening a bash in one of the pods using:
kubectl exec -t -i -n kube-system azure-cni-networkmonitor-th6pv -- bash
I've noticed that although I'm running as root in the container:
uid=0(root) gid=0(root) groups=0(root)
There are some files that I can't open for reading; read commands result in a permission denied error, for example:
cat: /run/containerd/io.containerd.runtime.v1.linux/k8s.io/c3bd2dfc2ad242e1a706eb3f42be67710630d314cfeb4b96ec35f35869264830/rootfs/sys/module/zswap/uevent: Permission denied
File stat:
Access: (0200/--w-------) Uid: ( 0/ root) Gid: ( 0/ root)
Linux distribution running on container:
Common Base Linux Delridge
Although the file is non-readable, I shouldn't have a problem reading it as root, right?
Any idea why this would happen? I don't see SELinux enabled there.
/proc and /sys are special filesystems created and maintained by the kernel to provide interfaces into settings and events in the system. The uevent files are used to access information about the devices or send events.
If a given subsystem implements functionality to expose information via that interface, you can cat the file:
[root@home sys]# cat /sys/devices/system/cpu/cpu0/uevent
DRIVER=processor
MODALIAS=cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0017,0018,0019,001A,001B,001C,002B,0034,003A,003B,003D,0068,006F,0070,0072,0074,0075,0076,0079,0080,0081,0089,008C,008D,0091,0093,0094,0096,0097,0099,009A,009B,009C,009D,009E,009F,00C0,00C5,00E7,00EB,00EC,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0120,0123,0125,0127,0128,0129,012A,012D,0140,0165,024A,025A,025B,025C,025D,025F
But if that subsystem doesn't expose that interface, you just get permission denied - even root can't call kernel code that's not there.
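As a quick illustration on a host where /sys/module/zswap exists (an assumption; the exact modules present vary), a 0200 uevent file still accepts writes, which trigger a synthetic event, while reads fail:
$ stat -c '%A' /sys/module/zswap/uevent                # --w------- : write-only
$ sudo sh -c 'echo add > /sys/module/zswap/uevent'     # triggers a synthetic "add" uevent
$ cat /sys/module/zswap/uevent                         # Permission denied: no read interface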

K8s PersistentVolume - smart way to view data

Using Google Cloud & Kubernetes engine:
Is there a smart way to view or mount a PersistentVolume (physical storage, in the case of Google a PD) on a local drive/remote computer/macOS, or anything else able to view data on the volume, so that I can back up or just view files?
Maybe using something like FUSE, in my case osxfuse.
Obviously I can mount it in a container and exec in, but maybe there are other ways?
I tried to SSH into the node and cd to /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet
But I get cd: pods: Permission denied
Sharing a PersistentDisk between VMs was discussed here. If you want to use the same PD on many nodes, it will work only in read-only mode.
The easiest way to check what's inside the PD is to SSH to the node (as you mentioned), but it requires superuser (sudo) privileges.
- SSH to node
$ sudo su
$ cd /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts
$ ls
Now you will see a few entries, depending on how many PVCs you have. The folder name is the same as the name you get from kubectl get pv.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-53091548-57af-11ea-a629-42010a840131 1Gi RWO Delete Bound default/pvc-postgres standard 42m
Enter it using cd:
$ cd <pvc_name>
in my case:
$ cd gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131
now you can list all files inside this PersistentDisk
...gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131 # ls
lost+found text.txt
$ cat text.txt
This is test
It's not empty
There is a tutorial on GitHub where a user used sshfs, albeit on macOS.
===
An alternative way to mount a PD on your local machine is to use NFS. However, you would need to configure the NFS server yourself; you could then reference the mount in your Deployment and mount the same export on your local machine (a sketch follows below).
More details can be found here.
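A minimal sketch of such an NFS-backed PersistentVolume (the server address and export path are placeholders for your own NFS setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-via-nfs              # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany             # NFS supports multiple readers/writers
  nfs:
    server: <nfs-server-ip>     # placeholder
    path: <exported-directory>  # placeholder
The same export can then be mounted locally, e.g. with mount -t nfs <nfs-server-ip>:<exported-directory> /mnt/pd.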
===
To create backups, you can consider persistent disk snapshots.
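For example, with gcloud (the disk name and zone are placeholders; the disk name is what backs your PV):
$ gcloud compute disks snapshot <disk-name> --zone=<zone> --snapshot-names=<snapshot-name>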

Permission Denied for Kubernetes Secrets with SELinux Enabled

I followed the Kubernetes documentation to manage my applications' secrets.
http://kubernetes.io/v1.1/docs/user-guide/secrets.html
When the pod starts, Kubernetes mounts the secret in the right place, but the application is unable to read the secret data as described in the documentation.
root@quoter-controller-whw7k:/etc/quoter# whoami
root
root@quoter-controller-whw7k:/etc/quoter# ls -l
ls: cannot access local.py: Permission denied
total 0
-????????? ? ? ? ? ? local.py
root@quoter-controller-whw7k:/etc/quoter# cat local.py
cat: local.py: Permission denied
What is wrong with that?
SELinux configured with enforcing mode
SELINUX=enforcing
Docker started with the following command
/usr/bin/docker daemon --registry-mirror=http://mirror.internal:5000 --selinux-enabled --insecure-registry registry.internal:5555 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/atomicos-docker--pool --bip=10.16.16.1/24 --mtu=8951
There is a known issue with SELinux and Kubernetes Secrets as per the Atomic issue tracker, see ISSUE-117.
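To confirm SELinux is what's blocking the read, you can look for AVC denials on the node; this is standard SELinux tooling, not specific to this issue:
$ sudo ausearch -m avc -ts recent                      # requires the audit package
$ sudo grep denied /var/log/audit/audit.log | tail
$ sudo setenforce 0                                    # temporarily permissive, for testing only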