Permission denied while reading file as root in Azure AKS container - kubernetes

I have an AKS cluster (version 1.19) deployed on Azure. Part of the deployment, in the kube-system namespace, is two azure-cni-networkmonitor pods. When opening a bash shell in one of the pods using:
kubectl exec -t -i -n kube-system azure-cni-networkmonitor-th6pv -- bash
I've noticed that although I'm running as root in the container:
uid=0(root) gid=0(root) groups=0(root)
there are some files that I can't open for reading; read commands result in a permission denied error, for example:
cat: /run/containerd/io.containerd.runtime.v1.linux/k8s.io/c3bd2dfc2ad242e1a706eb3f42be67710630d314cfeb4b96ec35f35869264830/rootfs/sys/module/zswap/uevent: Permission denied
File stat:
Access: (0200/--w-------) Uid: ( 0/ root) Gid: ( 0/ root)
Linux distribution running on container:
Common Base Linux Delridge
Although the file is non-readable, I shouldn't have a problem reading it as root, right?
Any idea why this would happen? I don't see SELinux enabled there.

/proc and /sys are special filesystems created and maintained by the kernel to provide interfaces into settings and events in the system. The uevent files are used to read information about devices or to send them events.
If a given subsystem implements functionality to expose information via that interface, you can cat the file:
[root@home sys]# cat /sys/devices/system/cpu/cpu0/uevent
DRIVER=processor
MODALIAS=cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0017,0018,0019,001A,001B,001C,002B,0034,003A,003B,003D,0068,006F,0070,0072,0074,0075,0076,0079,0080,0081,0089,008C,008D,0091,0093,0094,0096,0097,0099,009A,009B,009C,009D,009E,009F,00C0,00C5,00E7,00EB,00EC,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0120,0123,0125,0127,0128,0129,012A,012D,0140,0165,024A,025A,025B,025C,025D,025F
But if that subsystem doesn't expose that interface, you just get permission denied - even root can't call kernel code that's not there.
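As a quick pre-check you can inspect the permission bits before trying to read. This is a minimal sketch with a hypothetical `mode_has_owner_read` helper (GNU `stat` assumed): a 0200 (write-only) uevent file will fail on read even for root, because the subsystem implements no read handler.

```shell
# Hypothetical helper: does the file's mode grant its owner read access?
# On sysfs, mode 0200 (write-only) means the subsystem implements no
# read handler, so read() fails with "Permission denied" even for root.
mode_has_owner_read() {
  perms=$(stat -c '%a' "$1")                            # e.g. "200", "644"
  owner=$(printf '%s' "$perms" | rev | cut -c3 | rev)   # owner digit
  [ $(( owner & 4 )) -ne 0 ]                            # read bit is 4
}

# Usage inside the container:
#   mode_has_owner_read /sys/module/zswap/uevent || echo "write-only, no read interface"
```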

Related

Permission denied writing artifacts to an NFS-mounted PVC

I'm attempting to write MLflow artifacts to an NFS-mounted PVC. It's a new PVC mounted at /opt/mlflow, but MLflow doesn't seem to have permission to write to it. The specific error I'm getting is
PermissionError: [Errno 13] Permission denied: '/opt/mlflow'
I ran the same deployment with an S3-backed artifact store, and that worked just fine. That was on my home computer though, and I don't have the ability to do that at work. The MLflow documentation seems to indicate that I don't need any special syntax for NFS mounts.
Independent of MLflow you can approach this in a standard file permission way.
Exec into your pod and view the permissions at that file path
kubectl exec -it <pod> -- sh
ls -l /opt/mlflow
Within your pod/container see what user you are running as
whoami
If your user doesn't have access to that filepath, then you could adjust the file permissions by mounting the PVC into a different pod that runs under a user with permission to adjust them. Or you could try using fsGroup to control the permissions of the mounted files, which you can read more about in the Kubernetes documentation.
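A minimal sketch of the fsGroup approach, assuming a Deployment named mlflow and group ID 2000 (both placeholders). Note that the kubelet's fsGroup-based ownership change is applied per volume plugin and generally does not apply to NFS volumes, so you may still need to fix permissions on the NFS export itself.

```shell
# Placeholders: deployment name "mlflow", group ID 2000.
# fsGroup makes the kubelet set group ownership on supported volumes at
# mount time; for NFS this is typically not applied, so fixing the
# export's permissions server-side may still be required.
kubectl patch deployment mlflow --type merge -p \
  '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":2000}}}}}'
```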

K8s PersistentVolume - smart way to view data

Using Google Cloud & Kubernetes engine:
Is there a smart way to view or mount a
PersistentVolume (physical storage, in the case of Google a PD) on a local drive/remote computer/macOS, or anything able to view the data on the volume, in order to back up or just view files?
Maybe using something like FUSE, in my case osxfuse.
Obviously I can exec into a container that mounts it,
but maybe there are other ways?
Tried to ssh into the node and cd to /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet
But I get cd: pods: Permission denied
Sharing a PersistentDisk between VMs was discussed here. If you want to use the same PD on many nodes, it will work only in read-only mode.
The easiest way to check what's inside the PD is to SSH to the node (like you mentioned), but it requires superuser (sudo) privileges.
SSH to the node, then:
$ sudo su
$ cd /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts
$ ls
You will see a few entries, depending on how many PVs you have. Each folder name matches a name you get from kubectl get pv.
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pvc-53091548-57af-11ea-a629-42010a840131   1Gi        RWO            Delete           Bound    default/pvc-postgres   standard                42m
Enter it using cd:
$ cd <pvc_name>
in my case:
$ cd gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131
Now you can list all the files inside this PersistentDisk:
...gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131 # ls
lost+found text.txt
$ cat text.txt
This is test
It's not empty
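The listing steps above can be sketched as a small helper. `list_pd_mounts` is hypothetical; the default base path is the GKE kubelet mount dir from the steps, it must be run as root on the node, and the path can be overridden for trying it elsewhere.

```shell
# Hypothetical helper: print each GCE-PD mount directory under the GKE
# kubelet plugin path and list its contents.
list_pd_mounts() {
  base=${1:-/home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts}
  for d in "$base"/*/; do
    [ -d "$d" ] || continue     # skip if the glob matched nothing
    printf '== %s\n' "$d"
    ls "$d"
  done
}

# On the node (as root): list_pd_mounts
```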
There is a tutorial on GitHub where a user used sshfs, but on macOS.
===
An alternative way to mount a PD on your local machine is to use NFS. However, you would need to configure it; afterwards you could specify the mount in your Deployment and on your local machine.
More details can be found here.
===
To create backup's you can consider Persistent disk snapshots.

Nginx ingress controller at kubernetes not allowing installation of some package

I am looking to execute
apt install tcpdump
but I'm facing a permission denial. When trying to switch to the root user, it asks me for a password, and I don't know where to get that password from.
I installed nginx helm chart from stable/nginx repository with no RBAC
Please see the snapshot for details on the error, from when I tried installing tcpdump in the pod after getting a shell into it.
In Using GDB with Nginx you can find a troubleshooting section.
In short:
- find the node where your pod is running: kubectl get pods -o wide
- SSH into the node
- find the docker_ID for this image: docker ps | grep pod_name
- run docker exec -it --user=0 --privileged docker_ID bash
Note: Runtime privilege and Linux capabilities
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
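The last two steps can be sketched like this; `cid_for` is a hypothetical helper that pulls the container ID (the first column of `docker ps` output) for a given pod name.

```shell
# Hypothetical helper: read `docker ps` output on stdin and print the
# first container ID whose line mentions the given pod name.
cid_for() {
  grep "$1" | awk '{print $1}' | head -n 1
}

# On the node:
#   CID=$(docker ps | cid_for nginx-ingress)
#   docker exec -it --user=0 --privileged "$CID" bash
```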
Additional resources:
ROOT IN CONTAINER, ROOT ON HOST
Hope this helps.

Google VM not enough permissions to write on mounted bucket

I am running a Google Compute Instance which must be able to connect to read and write to a bucket that is mounted locally.
At the moment, while SSH-ed into the machine, I have permission to read all the files in the directory but not to write to them.
Here some more details:
gcloud init
account: PROJECT_NUMBER-compute@developer.gserviceaccount.com
Looking at the IAM settings on the Google Cloud console, this account has the proprietary role, so it should be able to access all the resources in the project.
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 -o nonempty BUCKET LOCAL_DIR
Now, looking at permissions, all files have (as expected):
ls -lh LOCAL_DIR/
drwxrwxrwx 1 ubuntu ubuntu 0 Jul 2 11:51 folder
However, when running a very simple Python script that saves a pickle into one of these directories, I get the following error:
OSError: [Errno 5] Input/output error: FILENAME
If I run gcsfuse with the --foreground flag, the error it produces is:
fuse: 2018/07/02 12:31:05.353525 *fuseops.GetXattrOp error: function not implemented
fuse: 2018/07/02 12:31:05.362076 *fuseops.SetInodeAttributesOp error: SetMtime: \
UpdateObject: googleapi: Error 403: Insufficient Permission, insufficientPermissions
Which is weird, as the account on the VM has the proprietary role.
Any guess on how to overcome this?
Your instance requires the appropriate scopes to access GCS buckets. You can view the scopes through the console or using gcloud compute instances describe [instance_name] | grep scopes -A 10
You must have the Storage read/write scope, i.e. https://www.googleapis.com/auth/devstorage.read_write
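As a quick check from inside the VM, the metadata server lists the instance's scopes. `has_rw_scope` is a hypothetical helper that greps for a read/write-capable storage scope (devstorage.read_write, devstorage.full_control, or cloud-platform).

```shell
# Hypothetical helper: succeed if stdin contains a GCS read/write-capable scope.
has_rw_scope() {
  grep -Eq 'devstorage\.(read_write|full_control)|cloud-platform'
}

# On the VM (no gcloud needed):
#   curl -s -H 'Metadata-Flavor: Google' \
#     'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes' \
#     | has_rw_scope && echo "storage scope OK" || echo "missing read/write scope"
```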

Permission Denied for Kubernetes Secrets with SELINUX Enabled

I followed kubernetes documentation to manage secrets of my applications.
http://kubernetes.io/v1.1/docs/user-guide/secrets.html
When the pod starts, Kubernetes mounts the secret in the right place, but the application is unable to read the secret data as described in the documentation.
root@quoter-controller-whw7k:/etc/quoter# whoami
root
root@quoter-controller-whw7k:/etc/quoter# ls -l
ls: cannot access local.py: Permission denied
total 0
-????????? ? ? ? ? ? local.py
root@quoter-controller-whw7k:/etc/quoter# cat local.py
cat: local.py: Permission denied
What is wrong with that?
SELinux configured with enforcing mode
SELINUX=enforcing
Docker started with the following command
/usr/bin/docker daemon --registry-mirror=http://mirror.internal:5000 --selinux-enabled --insecure-registry registry.internal:5555 --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/atomicos-docker--pool --bip=10.16.16.1/24 --mtu=8951
There is a known issue with SELinux and Kubernetes Secrets as per the Atomic issue tracker, see ISSUE-117.
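To confirm on the node that SELinux is the culprit, here is a small sketch. `selinux_mode` is a hypothetical helper whose path defaults to the standard config file; `getenforce` and `ausearch -m avc` show the runtime mode and the actual denials.

```shell
# Hypothetical helper: print the configured SELinux mode
# (the SELINUX= line in /etc/selinux/config by default).
selinux_mode() {
  sed -n 's/^SELINUX=//p' "${1:-/etc/selinux/config}"
}

# On an affected node:
#   selinux_mode                 # configured mode, e.g. "enforcing"
#   getenforce                   # runtime mode
#   ausearch -m avc -ts recent   # recent AVC denials (needs auditd)
```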