Google VM not enough permissions to write on mounted bucket - google-cloud-storage

I am running a Google Compute Engine instance which must be able to read from and write to a bucket that is mounted locally.
At the moment, while SSH-ed into the machine, I have permission to read all the files in the directory but not to write to them.
Here are some more details:
gcloud init
account: PROJECT_NUMBER-compute@developer.gserviceaccount.com
Looking at IAM in the Google Cloud console, this account has the Owner role, so it should be able to access all the resources in the project.
gcsfuse -o allow_other --file-mode 777 --dir-mode 777 -o nonempty BUCKET LOCAL_DIR
Now, looking at the permissions, all files have (as expected):
ls -lh LOCAL_DIR/
drwxrwxrwx 1 ubuntu ubuntu 0 Jul 2 11:51 folder
However, when running a very simple Python script that saves a pickle into one of these directories, I get the following error:
OSError: [Errno 5] Input/output error: FILENAME
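For reference, a minimal repro along these lines (the path is a placeholder, not my exact script) is enough to trigger it:
python3 -c 'import pickle; pickle.dump({"a": 1}, open("LOCAL_DIR/folder/test.pkl", "wb"))'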
If I run gcsfuse with the --foreground flag, the error it produces is:
fuse: 2018/07/02 12:31:05.353525 *fuseops.GetXattrOp error: function not implemented
fuse: 2018/07/02 12:31:05.362076 *fuseops.SetInodeAttributesOp error: SetMtime: \
UpdateObject: googleapi: Error 403: Insufficient Permission, insufficientPermissions
Which is weird, as the account on the VM has the Owner role.
Any guess on how to overcome this?

Your instance requires the appropriate access scopes to read and write GCS buckets. You can view the scopes through the console or by running gcloud compute instances describe [instance_name] | grep scopes -A 10
You must have the Storage read/write scope, i.e. https://www.googleapis.com/auth/devstorage.read_write
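If the scope is missing, here is a rough sketch of how to add it with gcloud, assuming an instance named my-instance in zone us-east1-b (both placeholders) that uses the default compute service account; note that the instance must be stopped before its access scopes can be changed:
# Inspect the scopes currently attached to the instance
gcloud compute instances describe my-instance --zone us-east1-b --format="value(serviceAccounts[].scopes)"
# Stop the instance, grant the GCS read/write scope, then start it again
gcloud compute instances stop my-instance --zone us-east1-b
gcloud compute instances set-service-account my-instance --zone us-east1-b --service-account PROJECT_NUMBER-compute@developer.gserviceaccount.com --scopes https://www.googleapis.com/auth/devstorage.read_write
gcloud compute instances start my-instance --zone us-east1-b
After the instance is back up, remount the bucket with gcsfuse and retry the write.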

Related

Permission denied while reading file as root in Azure AKS container

I have an AKS cluster (version 1.19) deployed on Azure. As part of the deployment, there are 2 azure-cni-networkmonitor pods in the kube-system namespace. When opening a bash shell in one of the pods using:
kubectl exec -t -i -n kube-system azure-cni-networkmonitor-th6pv -- bash
I've noticed that although I'm running as root in the container:
uid=0(root) gid=0(root) groups=0(root)
There are some files that I can't open for reading; read commands result in a permission denied error, for example:
cat: /run/containerd/io.containerd.runtime.v1.linux/k8s.io/c3bd2dfc2ad242e1a706eb3f42be67710630d314cfeb4b96ec35f35869264830/rootfs/sys/module/zswap/uevent: Permission denied
File stat:
Access: (0200/--w-------) Uid: ( 0/ root) Gid: ( 0/ root)
Linux distribution running on container:
Common Base Linux Delridge
Although the file is marked non-readable, I shouldn't have a problem reading it as root, right?
Any idea why this would happen? I don't see SELinux enabled there.
/proc and /sys are special filesystems created and maintained by the kernel to provide interfaces into settings and events in the system. The uevent files are used to access information about devices or to send events.
If a given subsystem implements functionality to expose information via that interface, you can cat the file:
[root@home sys]# cat /sys/devices/system/cpu/cpu0/uevent
DRIVER=processor
MODALIAS=cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0017,0018,0019,001A,001B,001C,002B,0034,003A,003B,003D,0068,006F,0070,0072,0074,0075,0076,0079,0080,0081,0089,008C,008D,0091,0093,0094,0096,0097,0099,009A,009B,009C,009D,009E,009F,00C0,00C5,00E7,00EB,00EC,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0120,0123,0125,0127,0128,0129,012A,012D,0140,0165,024A,025A,025B,025C,025D,025F
But if that subsystem doesn't expose that interface, you just get permission denied - even root can't call kernel code that's not there.
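As an illustration, the same interface can be exercised on a subsystem that does implement it; for example (eth0 is just a placeholder device, and this needs root), writing an action to a uevent file asks the kernel to re-broadcast that event:
echo add | sudo tee /sys/class/net/eth0/uevent   # re-emits an "add" uevent for eth0
Whether a given uevent file is readable, writable, or neither depends entirely on what the underlying subsystem implements.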

gcloud SDK: Unable to write file

I installed the gcloud SDK with brew cask install google-cloud-sdk and then ran:
$ gcloud container clusters get-credentials my-gke-cluster --region europe-west4-c
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
Unable to write file [/Users/xxxxx/my-repo]: [Errno 21] Is a directory: '/Users/xxxxx/my-repo'
Now all permissions on the folder and the files under it are restricted to 600 (drw-------). I tried reinstalling gcloud, but that had no effect on its behavior.
I assume you're using macOS and I'm unfamiliar with it.
The gcloud container clusters get-credentials command should write to a file called ${HOME}/.kube/config.
The error suggests that it's trying to write the credentials to /Users/xxxxx/my-repo, and this is determined by the value of ${KUBECONFIG}. Have you changed either ${KUBECONFIG} or your ${HOME} environment variable? You should be able to run printf "HOME=${HOME}\nKUBECONFIG=${KUBECONFIG}" to inspect these.
You may be able to choose a different destination by adjusting the value of KUBECONFIG. Perhaps set it to a file path such as ${HOME}/.kube/config and try the command again.
Ultimately, this is just some sugar to update the local configuration file. It should be possible to create the file manually if need be. If the above doesn't work, I can update this answer with more details.
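For example, a minimal sketch of that workaround, assuming the default kubeconfig location is acceptable:
# Point KUBECONFIG at a file (not a directory), make sure its parent exists, then retry
export KUBECONFIG="${HOME}/.kube/config"
mkdir -p "$(dirname "${KUBECONFIG}")"
gcloud container clusters get-credentials my-gke-cluster --region europe-west4-c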

How do I elevate my gcloud scp and ssh commands?

I want to be able to fire commands at my instance with gcloud because it handles auth for me. This works well, but how do I run them with sudo/root access?
For example, I can copy files to my account's folder:
gcloud compute scp --recurse myinst:/home/me/zzz /test --zone us-east1-b
But I can't copy to /tmp:
gcloud compute scp --recurse myinst:/tmp /test --zone us-east1-b
pscp: unable to open directory /tmp/.pki: permission denied
19.32.38.265147.log | 0 kB | 0.4 kB/s | ETA: 00:00:00 | 100%
pscp: unable to open /tmp/ks-script-uqygub: permission denied
What is the right way to run "gcloud compute scp" with sudo? Just to be clear, I can of course ssh into the instance and run sudo interactively.
Edit: for now I'm just editing the permissions on the remote host.
Just so I'm understanding correctly, are you trying to copy FROM the remote /tmp folder, or TO it? This question sounds like you're trying to copy to it, but the code says you're trying to copy from it.
This has worked for me in the past for copying from my local drive to a remote drive, though I have some concern over running sudo remotely:
gcloud compute scp myfile.txt [gce_user]@myinst:~/myfile.txt --project=[project_name];
gcloud compute ssh [gce_user]@myinst --command 'sudo cp ~/myfile.txt /tmp/' --project=[project_name];
You would reverse the process (and obviously rewrite the direction and sequence of the commands) if you needed to remotely access the contents of /tmp and then copy them down to your local drive.
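For example, a rough sketch of that reverse direction (the file name is taken from your error output; staging through the home directory is just one way to do it):
# Stage the root-owned file somewhere your SSH user can read, then copy it down
gcloud compute ssh [gce_user]@myinst --command 'sudo cp /tmp/ks-script-uqygub ~/ && sudo chown $(whoami) ~/ks-script-uqygub' --project=[project_name];
gcloud compute scp [gce_user]@myinst:~/ks-script-uqygub . --project=[project_name];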
Hope this helps!

MailKit dotnet core on Ubuntu 16.04

I am running a Kestrel server managed by systemd under the www-data user. When trying to send an email using MailKit with TLS enabled, I get the following error message:
System.UnauthorizedAccessException: Access to the path '/var/www/.dotnet/corefx/cryptography/crls' is denied. ---> System.IO.IOException: Permission denied
One possible solution is to set a home directory for www-data, but that seems counterintuitive.
The call stack indicates that the code (MailKit or one of its dependencies) is trying to build and access a certificate cache.
You can manually create the directory and grant the necessary permissions.
Don't change the permissions on the top-level /var directory itself; that would be a bad idea.
First, you need to recursively create the directory:
mkdir -p /var/www/.dotnet/corefx/cryptography/crls
Then give rights to the www-data group (if this is the group that runs your service):
sudo chgrp www-data /var/www/.dotnet/corefx/cryptography/crls
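Depending on how the directories were created, chgrp alone may not be enough; the group also needs write access. One way to grant it (applying it to the whole .dotnet tree is an assumption, adjust to taste):
sudo chgrp -R www-data /var/www/.dotnet
sudo chmod -R g+rwX /var/www/.dotnet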

Google Dataproc Agent reports failure when using initialization script

I am trying to set up a cluster with an initialization script, but I get the following error:
[BAD JSON: JSON Parse error: Unexpected identifier "Google"]
The init script's output log is absent from the log folder.
This seems rather strange, as it worked last week, and the error message does not seem related to the init script but rather to the input arguments for the cluster creation. I used the following command:
gcloud beta dataproc clusters create <clustername> --bucket <bucket> --zone <zone> --master-machine-type n1-standard-1 --master-boot-disk-size 10 --num-workers 2 --worker-machine-type n1-standard-1 --worker-boot-disk-size 10 --project <projectname> --initialization-actions <gcs-uri of script>
Apparently changing
#!/bin/sh
to
#!/bin/bash
and removing all "sudo" occurrences did the trick.
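For reference, this is the shape of the change (the package install is a hypothetical example of a script body; Dataproc initialization actions already run as root, so sudo isn't needed):
#!/bin/bash
# runs as root on each cluster node, no sudo required
apt-get install -y some-package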
This particular error occurs most often when the initialization script is in a Cloud Storage (GCS) bucket to which the project running the cluster does not have access.
I would recommend double-checking that the project being used for the cluster has read access to the bucket.
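A quick way to sanity-check that (bucket and object names are placeholders; note this tests whichever account gsutil is using locally, which may differ from the cluster's service account):
gsutil ls gs://my-bucket/init-script.sh
gsutil cat gs://my-bucket/init-script.sh | head -n 1   # should print the shebang line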