Configure gsutil to use kubernetes service account credentials inside of pod - kubernetes

I have a Kubernetes CronJob that performs some backup jobs, and the backup files need to be uploaded to a bucket. The pod has the service account credentials mounted at /var/run/secrets/kubernetes.io/serviceaccount, but how can I instruct gsutil to use the credentials in /var/run/secrets/kubernetes.io/serviceaccount?
lrwxrwxrwx 1 root root 12 Oct 8 20:56 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 8 20:56 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 8 20:56 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 31 Oct 8 20:56 ..data -> ..2018_10_08_20_56_04.686748281
drwxr-xr-x 2 root root 100 Oct 8 20:56 ..2018_10_08_20_56_04.686748281
drwxrwxrwt 3 root root 140 Oct 8 20:56 .
drwxr-xr-x 3 root root 4096 Oct 8 20:57 ..

The short answer is that the token there is not in a format that gsutil knows how to use, so you can't use it. You'll need a JSON keyfile, as mentioned in the tutorial here (except that you won't be able to use the GOOGLE_APPLICATION_CREDENTIALS environment variable):
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Rather than reading from the GOOGLE_APPLICATION_CREDENTIALS environment variable, Gsutil uses Boto configuration files to load credentials. The common places that it knows to look for these Boto config files are /etc/boto.cfg and $HOME/.boto. Note that the latter value changes depending on the user running the command ($HOME expands to different values for different users); since cron jobs usually run as a different user than the one who set up the config file, I wouldn't recommend relying on this path.
So, on your pod, you'll need to first create a Boto config file that references the keyfile:
# This option is only necessary if you're running an installation of
# gsutil that came bundled with gcloud. It tells gcloud that you'll be
# managing credentials manually via your own Boto config files.
$ gcloud config set pass_credentials_to_gsutil False
# Set up your boto file at /path/to/my/boto.cfg - the setup will prompt
# you to supply the /path/to/your/keyfile.json. Alternatively, to avoid
# interactive setup prompts, you could set up this config file beforehand
# and copy it to the pod.
$ gsutil config -e -o '/path/to/my/boto.cfg'
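For reference, the resulting Boto config only needs to point gsutil at the keyfile; a minimal boto.cfg (the keyfile path is a placeholder) looks roughly like this:
[Credentials]
gs_service_key_file = /path/to/your/keyfile.json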
And finally, whenever you run gsutil, you need to tell it where to find that Boto config file which references your JSON keyfile (and also make sure that the user running the command has permission to read both the Boto config file and the JSON keyfile). If you wrote your Boto config file to one of the well-known paths I mentioned above, gsutil will attempt to find it automatically; if not, you can tell gsutil where to find the Boto config file by exporting the BOTO_CONFIG environment variable in the commands you supply for your cron job:
export BOTO_CONFIG=/path/to/my/boto.cfg; /path/to/gsutil cp <src> <dst>
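Putting the pieces together in the CronJob itself, a rough sketch might look like the following. The Secret name (gsutil-credentials), mount path, image, bucket and schedule are all placeholders, the mounted boto.cfg is assumed to reference the keyfile at its mounted path (e.g. /etc/gsutil/keyfile.json), and on clusters older than 1.21 the apiVersion would be batch/v1beta1:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-upload
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: google/cloud-sdk:slim
              command: ["/bin/sh", "-c", "gsutil cp /backups/*.tar.gz gs://YOUR_BUCKET/"]
              env:
                - name: BOTO_CONFIG          # tell gsutil where the Boto config lives
                  value: /etc/gsutil/boto.cfg
              volumeMounts:
                - name: gsutil-credentials   # Secret holding boto.cfg + keyfile.json
                  mountPath: /etc/gsutil
                  readOnly: true
          volumes:
            - name: gsutil-credentials
              secret:
                secretName: gsutil-credentials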
Edit:
Note that GCE VM images come with a pre-populated file at /etc/boto.cfg. This config file tells gsutil to load a plugin that allows gsutil to contact the GCE metadata server and fetch auth tokens (corresponding to the default robot service account for that VM) that way. If your pod is able to read the host VM's /etc/boto.cfg file, you're able to contact the GCE metadata server, and you're fine with operations being performed by the VM's default service account, this solution should work out-of-the-box.
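For context, that pre-populated /etc/boto.cfg is tiny; on a typical GCE image it contains roughly the following (exact contents vary by image, so treat this as an illustration rather than a verbatim copy):
[GoogleCompute]
service_account = default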

Note that your Kubernetes Service Account is different from your Google Cloud service account.
gsutil uses the Boto config, so you can mount a Kubernetes Secret at /etc/boto.cfg or ~/.boto.
You can authenticate with GCP using either a user token or a service account: generate a token with gsutil config -f, or service account credentials with gsutil config -e. Either will generate a ~/.boto file, which you can then mount as a Kubernetes Secret on your pods.
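A minimal sketch of that approach, assuming the .boto file was generated locally and the Secret is called gsutil-boto (both the name and the mount path are placeholders):
# Create a Secret from the locally generated Boto config
kubectl create secret generic gsutil-boto --from-file=boto=$HOME/.boto
# In the pod spec, mount the gsutil-boto Secret (e.g. at /etc/gsutil) and
# point gsutil at it with BOTO_CONFIG=/etc/gsutil/boto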

Related

Docker Permissions Assistance

Hoping you can help. I'm not able to select my external mounted drives within the media path in Sonarr. It doesn't even show any folders past the "Media" folder within the main root drive. I'm sure it's a permissions issue, but I've granted access everywhere. This setup is in a docker-compose Portainer stack on Ubuntu 22.
The ENV file has the correct values for all apps, which align with the output of id $USER for my user, plex:
PUID=1000
PGID=1000
I have correctly mounted my internal drives, giving them the proper read/write permissions and assigning them to the correct group (plex), as you can see below.
plex@plex:~$ ls -l /media
total 4
drwxrwxrwx+ 4 plex plex 4096 Feb 10 17:11 plex
plex@plex:~$ ls -l /media/plex
total 8
drwx------ 4 plex plex 4096 Feb 9 14:29 sdb1
drwxrwxrwx 4 plex plex 4096 Feb 9 16:35 sdc1
The drives show as mounted within the GNOME Disks utility as well.
`Ext4 (version 1.0) — Mounted at /media/plex/sdb1`
Please let me know if there's any other information needed from my end or if I'm missing a step. I've attached the entire docker-compose file and ENV file below. Thank you for the help.
Docker-Compose File:
https://pastebin.com/HLR1pdJd
Env File:
# Your timezone, https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=America/New_York
# UNIX PUID and PGID, find with: id $USER
PUID=1000
PGID=1000
# The directory where data and configuration will be stored.
ROOT=/media/plex/sdb1/data
My view in the /media directory within sonarr -
https://imgur.com/6hjTgc6
Tried providing permissions to the folders via chown. Set the PUID and PGID, along with all folders, to read/write for the plex group.
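For the drives to show up inside Sonarr at all, the mount point also has to be passed into the container as a volume; a hedged sketch of the relevant compose fragment (the image and paths are placeholders, since the real compose file is only in the pastebin link):
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=${TZ}
    volumes:
      - /media/plex:/media/plex   # host path : container path
# Note: sdb1 above is drwx------, so only its owning UID can traverse it;
# if that UID is not the container's PUID (1000), a chmod/chown may be needed.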

Updating Certificates on OpenShift + Kubernetes 4.6+

Evening!
I'm wondering if anyone could share the steps for updating the certificates on OpenShift + Kubernetes 4.6? I've checked using the below command and some are expired.
find /etc/kubernetes/ -type f -name "*.crt" -print|egrep -v 'ca.crt$'|xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
I'm not able to find steps relevant to my UPI install. The following certificates are expired as well.
81789506 lrwxrwxrwx. 1 root root 59 Jan 9 00:32 kubelet-server-current.pem -> /var/lib/kubelet/pki/kubelet-server-2021-06-18-20-35-33.pem
81800208 lrwxrwxrwx. 1 root root 59 Jan 9 00:32 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2021-06-19-13-16-00.pem
Since the API server (port 6443) is offline, I'm not able to renew the certificates via oc commands; all oc commands return an error. This cluster is installed on VMware using the UPI method. There was a failure some time back that took the cluster offline, and when it was brought back up the certs were already expired and couldn't be renewed, since the services needed for that were offline, I think.
Wondering if anyone managed to recover from this scenario and would be able to help?
Did you check the official doc on that subject? It may help you:
https://docs.openshift.com/container-platform/4.6/backup_and_restore/control_plane_backup_and_restore/disaster_recovery/scenario-3-expired-certs.html
But if you can't log in to your cluster, it may be quite difficult...
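If you do get the control plane responding again by following that procedure, part of the documented recovery is approving the pending CSRs so the kubelet client/server certificates can be reissued, roughly:
# Once the API server is reachable again
oc get csr
oc get csr -o name | xargs oc adm certificate approve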

Permission denied while reading file as root in Azure AKS container

I have an AKS cluster (version 1.19) deployed on Azure. As part of the deployment, there are 2 azure-cni-networkmonitor pods in the kube-system namespace. When opening a bash shell in one of the pods using:
kubectl exec -t -i -n kube-system azure-cni-networkmonitor-th6pv -- bash
I've noticed that although I'm running as root in the container:
uid=0(root) gid=0(root) groups=0(root)
There are some files that I can't open for reading; read commands result in a permission denied error, for example:
cat: /run/containerd/io.containerd.runtime.v1.linux/k8s.io/c3bd2dfc2ad242e1a706eb3f42be67710630d314cfeb4b96ec35f35869264830/rootfs/sys/module/zswap/uevent: Permission denied
File stat:
Access: (0200/--w-------) Uid: ( 0/ root) Gid: ( 0/ root)
Linux distribution running on container:
Common Base Linux Delridge
Although the file is non-readable, I shouldn't have a problem reading it as root, right?
Any idea why this would happen? I don't see SELinux enabled there.
/proc and /sys are special filesystems created and maintained by the kernel to provide interfaces into settings and events in the system. The uevent files are used to access information about the devices or send events.
If a given subsystem implements functionality to expose information via that interface, you can cat the file:
[root@home sys]# cat /sys/devices/system/cpu/cpu0/uevent
DRIVER=processor
MODALIAS=cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0017,0018,0019,001A,001B,001C,002B,0034,003A,003B,003D,0068,006F,0070,0072,0074,0075,0076,0079,0080,0081,0089,008C,008D,0091,0093,0094,0096,0097,0099,009A,009B,009C,009D,009E,009F,00C0,00C5,00E7,00EB,00EC,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0120,0123,0125,0127,0128,0129,012A,012D,0140,0165,024A,025A,025B,025C,025D,025F
But if that subsystem doesn't expose that interface, you just get permission denied - even root can't call kernel code that's not there.
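As a concrete illustration, a 0200 uevent file is a write-only trigger rather than readable data: writing an action name to it asks the kernel to re-emit that event (run on the host as root; the zswap path is just the one from the question):
stat -c '%a %A %n' /sys/module/zswap/uevent   # 200 --w------- ...
echo add > /sys/module/zswap/uevent           # re-emits a synthetic "add" uevent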

Bash script mounted as configmap with 777 permissions cannot be run

This might be simple, but I can't seem to figure out why a bash script mounted as a configmap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that as with Docker, the root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be ran this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
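For completeness, a rough sketch of the container side of the same Pod spec (image, names and mount path are placeholders); with defaultMode set, the mounted script can be executed directly:
containers:
  - name: nodejs-app
    image: node:18-slim
    command: ["/bin/bash", "/path/fileName"]
    volumeMounts:
      - name: test-script
        mountPath: /path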
Alright, so I don't have links to the documentation, but ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the content of the file into another file in a location where the local root can write (/usr/local in my case), and this way the file can be run.
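A hedged one-liner version of that workaround (the target directory is just an example of somewhere writable inside the container):
cp /path/fileName /usr/local/bin/fileName && chmod +x /usr/local/bin/fileName && /usr/local/bin/fileName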
If anyone comes up with a more clever solution I'll mark it as the correct answer.
It's no surprise that you cannot run a script which is mounted as a ConfigMap. The name of the resource itself (ConfigMap) should have suggested it isn't meant to be used that way.
As a workaround you can put your script in some git repo, mount an emptyDir volume into an init container that clones the repo using git, and then mount the same emptyDir into the Pod's main container. The init container will download the latest version of the script every time the Pod's containers are created.
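A rough sketch of that workaround (repo URL, images, script name and paths are placeholders):
initContainers:
  - name: fetch-scripts
    image: alpine/git
    command: ["git", "clone", "--depth", "1", "https://example.com/your/scripts.git", "/scripts"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
containers:
  - name: app
    image: node:18-slim
    command: ["/bin/bash", "/scripts/fileName"]
    volumeMounts:
      - name: scripts
        mountPath: /scripts
volumes:
  - name: scripts
    emptyDir: {}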

Google VM not enough permissions to write on mounted bucket

I am running a Google Compute Engine instance which must be able to read from and write to a bucket that is mounted locally.
At the moment, while SSH-ed into the machine, I have permission to read all the files in the directory but not to write to them.
Here some more details:
gcloud init
account: PROJECT_NUMBER-compute@developer.gserviceaccount.com
When looking at the IAM settings in the Google Cloud console, this account has a proprietary role, so it should be able to access all the resources in the project.
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 --o nonempty BUCKET LOCAL_DIR
Now looking at permissions, all files have (as expected):
ls -lh LOCAL_DIR/
drwxrwxrwx 1 ubuntu ubuntu 0 Jul 2 11:51 folder
However, when running a very simple Python script that saves a pickle into one of these directories, I get the following error:
OSError: [Errno 5] Input/output error: FILENAME
If I run gcsfuse with the --foreground flag, the error it produces is:
fuse: 2018/07/02 12:31:05.353525 *fuseops.GetXattrOp error: function not implemented
fuse: 2018/07/02 12:31:05.362076 *fuseops.SetInodeAttributesOp error: SetMtime: \
UpdateObject: googleapi: Error 403: Insufficient Permission, insufficientPermissions
Which is weird, as the account on the VM has a proprietary role.
Any guess on how to overcome this?
Your instance requires the appropriate scopes to access GCS buckets. You can view the scopes through the console or by running gcloud compute instances describe [instance_name] | grep scopes -A 10
You must have Storage read/write, i.e. https://www.googleapis.com/auth/devstorage.read_write
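If that scope is missing, it can only be changed while the instance is stopped; roughly (instance name and zone are placeholders):
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute instances set-service-account INSTANCE_NAME --zone=ZONE \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write
gcloud compute instances start INSTANCE_NAME --zone=ZONE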