I have a Google Cloud Storage bucket which is mounted on 3 virtual machines using fstab.
When I upload a file from another machine to the bucket using the gsutil command, the uploaded file is visible from only 2 of the VMs (Set A). The remaining VM (Set B) doesn't show the newly uploaded file.
The fstab entry used for mounting is as follows.
bucket_name mounted_path gcsfuse rw,uid=1002,gid=1003,user,allow_other,key_file=key_file_path
The content of the /etc/mtab file on the Set A VMs is as follows.
bucket_name mounted_path fuse rw,nosuid,nodev,relatime,user_id=1002,group_id=1003,default_permissions 0 0
The content of the /etc/mtab file on the Set B VM is as follows.
bucket_name mounted_path fuse fuse rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
Here is how I mount Storage buckets on VMs:
Create a VM instance and give it the Storage "full" access scope. If you already have a VM, edit it to use this scope; note that you need to stop the VM to edit its access scope.
Install gcsfuse on your instance if you haven't already.
Create a directory where you want to mount your bucket: mkdir /path/to/bucket
Go to Cloud Storage and edit your bucket permissions by adding the Compute Engine default service account as a Storage Admin. You can find this service account under IAM & Admin --> Service Accounts; it has the form 1213242443-compute@developer.gserviceaccount.com
Use gcsfuse bucket_name /path/to/bucket to mount your bucket. Here gcsfuse uses the default service account to authenticate and make the connection; this is the easiest way, as it involves only a few steps (see the sketch after these steps).
Now any file you upload to your bucket will appear in the VM's mount directory /path/to/bucket.
Read more about this process here
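For reference, a minimal sketch of the commands above, with the bucket name and mount path as placeholders; the fstab line is only needed if the mount should survive reboots, and allow_other additionally requires user_allow_other to be enabled in /etc/fuse.conf:
# one-off mount using the VM's default service account
mkdir -p /path/to/bucket
gcsfuse my-bucket /path/to/bucket
# optional /etc/fstab entry for a persistent mount; key_file is only needed when
# authenticating with a service account key instead of the default credentials
my-bucket /path/to/bucket gcsfuse rw,_netdev,allow_other,uid=1002,gid=1003,key_file=/path/to/key.json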
I have a Google Cloud Composer 1 environment (Airflow 2.1.2) where I want to run an Airflow DAG that utilizes the KubernetesPodOperator.
Cloud Composer makes available to all DAGs a shared file directory for storing application data. The files in the directory reside in a Google Cloud Storage bucket managed by Composer. Composer uses FUSE to map the directory to the path /home/airflow/gcs/data on all of its Airflow worker pods.
In my DAG I run several Kubernetes pods like so:
from airflow.contrib.operators import kubernetes_pod_operator
# ...
splitter = kubernetes_pod_operator.KubernetesPodOperator(
    task_id='splitter',
    name='splitter',
    namespace='default',
    image='europe-west1-docker.pkg.dev/redacted/splitter:2.3',
    cmds=["dotnet", "splitter.dll"],
)
The application code in all the pods that I run needs to read from and write to the /home/airflow/gcs/data directory. But when I run the DAG my application code is unable to access the directory. Likely this is because Composer has mapped the directory into the worker pods but does not extend this courtesy to my pods.
What do I need to do to give my pods r/w access to the /home/airflow/gcs/data directory?
Cloud Composer uses FUSE to mount certain directories from Cloud Storage into the Airflow worker pods running in Kubernetes. It mounts these with default permissions that cannot be overwritten, because that metadata is not tracked by Google Cloud Storage. A possible solution is to use a bash operator that runs at the beginning of your DAG to copy files to a new directory. Another possible solution is to use a path that is not backed by Cloud Storage, such as a /pod path.
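A rough sketch of the copy idea from inside the pod itself, assuming the container image ships gsutil, the pod's service account is allowed to read and write the environment's bucket, and COMPOSER_BUCKET is a placeholder for the bucket that backs /home/airflow/gcs:
# pull inputs from the bucket's data folder (hypothetical paths, for illustration)
gsutil -m cp -r gs://COMPOSER_BUCKET/data/input /tmp/input
# run the application against the local copy
dotnet splitter.dll
# push results back to the bucket
gsutil -m cp -r /tmp/output gs://COMPOSER_BUCKET/data/output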
I'm using k8s on Google Cloud and I'm trying to use Google Cloud's built-in snapshotting for backups, but I would also like to use the snapshots for retrieving a DB to work with locally. I've come to the conclusion that I need to first create an image from the snapshot, then export that image to a bucket before downloading. Something like this:
gcloud compute disks snapshot mydrive --snapshot-names=mydrive-snapshot
gcloud compute images create mydrive-image --source-snapshot mydrive-snapshot
gcloud compute images export --destination-uri gs://my-bucket/mydrive-image.tar.gz --image mydrive-image
gsutil cp gs://my-bucket/mydrive-image.tar.gz mydrive-image.tar.gz
tar xvf mydrive-image.tar.gz
This gives me a file, disk.raw. I'm not sure how to mount this locally, though.
Are there any other simple solutions to this? I would be fine using a native k8s workflow instead, as long as it's at the volume level and doesn't involve actually running anything in a pod.
Why not just mount locally the GCS bucket into which you exported the disk data?
You can use gcsfuse for doing this.
Follow these instructions for installing Cloud Storage FUSE and its dependencies
Set up credentials for Cloud Storage FUSE (follow the above instructions to do this)
Create a directory (or use an existing one) in which to mount the bucket
Use Cloud Storage FUSE to mount the bucket (e.g. my-bucket).
gcsfuse my-bucket /path/to/mount
Now you can see the content inside the bucket:
ls /path/to/mount
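From there, one way to get at the exported filesystem itself is a loop mount; a sketch, assuming a Linux host and that the image contains a single data partition:
tar xvf /path/to/mount/mydrive-image.tar.gz      # unpacks disk.raw
sudo losetup --find --show --partscan disk.raw   # prints a loop device, e.g. /dev/loop0
sudo mkdir -p /mnt/mydrive
sudo mount -o ro /dev/loop0p1 /mnt/mydrive       # mount the first partition read-only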
I am new to k8s in Azure (I have used k8s on a non-cloud server) and I can't find information on how to copy local files/directories to AKS's premium or standard storage.
This link (PVC) shows how to create a PVC and how to mount it in a pod, but I cannot find how to copy files from my local PC to AKS's premium or standard storage.
To copy files you can use azcopy for Azure Files and Azure Storage Explorer for Azure managed disks.
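A minimal azcopy sketch, assuming the PVC is backed by an Azure Files share; the storage account, share name, and SAS token are placeholders:
# upload a local directory recursively into the file share behind the PVC
azcopy copy "./localdir" "https://mystorageaccount.file.core.windows.net/myshare/?<SAS-token>" --recursive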
I am not able to get write access to a GCS bucket from within a GKE pod.
I have a GKE pod running. I have not changed any k8s configuration regarding service accounts. I have docker exec'd into the pod and installed gcloud/gsutil. gcloud auth list shows a 1234-compute@developer.gserviceaccount.com entry. From within GCS I have added that same account as Storage Admin, Storage Legacy Bucket Owner, and Storage Object Creator (i.e., I just tried a bunch of stuff). I am able to run gsutil ls gs://bucket. However, when running gsutil cp file gs://bucket, it prints:
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
gsutil acl get gs://bucket prints:
AccessDeniedException: Access denied. Please ensure you have OWNER permission on gs://bucket
Other things I have tried are adding the allUsers and allAuthenticatedUsers as creators and owners of the bucket, with no change. I am able to write to the bucket from my dev machine just fine.
When I run gsutil acl get gs://bucket from another machine, it lists as an OWNER the same address that gcloud auth list prints from within the pod.
What is the special sauce I need to allow the pod to write to the bucket?
You need to set the access scopes for the cluster (or better, for the particular node pool, in the case of Terraform):
oauth_scopes = [
  "https://www.googleapis.com/auth/devstorage.read_write", // 'ere we go!
  "https://www.googleapis.com/auth/logging.write",
  "https://www.googleapis.com/auth/monitoring",
  "https://www.googleapis.com/auth/service.management.readonly",
  "https://www.googleapis.com/auth/servicecontrol",
  "https://www.googleapis.com/auth/trace.append",
  "https://www.googleapis.com/auth/compute",
]
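If the node pool is not managed with Terraform, the same scopes can be granted when creating a new node pool with gcloud (scopes cannot be changed on an existing pool); a sketch with placeholder cluster, pool, and zone names:
gcloud container node-pools create storage-rw-pool \
  --cluster my-cluster --zone us-central1-a \
  --scopes storage-rw,logging-write,monitoring,service-management,service-control,trace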
The GKE cluster was created with default permissions, which only include read scope for GCS. Solutions:
Apply advice from Changing Permissions of Google Container Engine Cluster
Set GOOGLE_APPLICATION_CREDENTIALS as described in https://developers.google.com/identity/protocols/application-default-credentials
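For the second option, a sketch of wiring a service account key into the pod; the service account email, secret name, and paths are placeholders, and the secret still has to be mounted in the pod spec with GOOGLE_APPLICATION_CREDENTIALS pointing at it:
# create a key for a service account that has write access to the bucket
gcloud iam service-accounts keys create key.json \
  --iam-account bucket-writer@my-project.iam.gserviceaccount.com
# store the key in the cluster as a secret
kubectl create secret generic bucket-writer-key --from-file=key.json
# in the pod spec: mount the secret and set
#   GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json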
Had the same issue; I had to recreate the node pool with a custom security config in order to get that access.
Also, in my pod I mounted the SA provided in a secret (default-token-XXXXX).
Then, once gcloud is installed in the pod (via the Dockerfile), it works like a charm.
The key is the node pool config and mounting the SA.
I'm trying to create and mount a Google Storage bucket on an Ubuntu Linux instance using gsutil.
sudo gsutil mb -c STANDARD -l us-central1-a gs://test-bucket
Here's what I'm getting:
Creating gs://test-bucket/...
AccessDeniedException: 403 Insufficient Permission
I've been searching around for a solution with no success. Can anyone help?
Check which account is managing your VM instance from the GCloud dashboard. It should be the Compute Engine service account (or App Engine account) that is created automatically.
In the instance's initial configuration settings you should see the Cloud API access list, which states whether or not that account has Cloud Storage capability.
If not, you will have to recreate your VM instance:
Create a GCP snapshot of your VM
Delete the VM instance
Create a new instance from the existing snapshot (this lets you start where you left off in the new VM)
When creating the VM, under the API access settings give it full access (or at least full Cloud Storage access), which will allow it to write to Cloud Storage using the gsutil/gcsfuse commands.
Then permissions on the Cloud Storage side are still a concern, but your root user should be able to write.
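Alternatively, the access scopes of an existing VM can be changed in place without recreating it, as long as the instance is stopped first; a sketch with placeholder instance name and zone:
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances set-service-account my-instance --zone us-central1-a --scopes storage-full
gcloud compute instances start my-instance --zone us-central1-a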