I'm trying to create and mount a Google Storage bucket on an Ubuntu Linux instance using gsutil.
sudo gsutil mb -c STANDARD -l us-central1-a gs://test-bucket
Here's what I'm getting:
Creating gs://test-bucket/...
AccessDeniedException: 403 Insufficient Permission
I've been searching around for a solution with no success. Can anyone help?
Check which account is managing your VM instance in the GCloud Dashboard. It should be the Compute Engine or App Engine service account that is created automatically.
In the instance's initial configuration settings you should see the Cloud API access list, which states whether or not that account has Cloud Storage capability.
If not, you will have to recreate your VM instance:
Create a GCP snapshot of your VM
Delete VM instance
Create a new instance using the existing snapshot (this allows you to start where you left off in the new VM).
When creating the VM, under Cloud API access give the account full access, which will allow it to write to Cloud Storage using the gsutil/gcsfuse commands.
Permissions on the Cloud Storage side will then still be a concern, but your root user should be able to write.
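For reference, a rough sketch of that workflow with gcloud (the instance, disk, snapshot, and zone names are placeholders; the same can be done from the console):
# snapshot the VM's boot disk (often named after the VM)
gcloud compute disks snapshot my-vm --snapshot-names my-vm-snapshot --zone us-central1-a
# delete the old instance, then recreate a disk from the snapshot
gcloud compute instances delete my-vm --zone us-central1-a
gcloud compute disks create my-vm-disk --source-snapshot my-vm-snapshot --zone us-central1-a
# create the new instance with the full Cloud Storage access scope
gcloud compute instances create my-vm-new --disk name=my-vm-disk,boot=yes --scopes storage-full --zone us-central1-a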
I have a Google Cloud Composer 1 environment (Airflow 2.1.2) where I want to run an Airflow DAG that utilizes the KubernetesPodOperator.
Cloud Composer makes available to all DAGs a shared file directory for storing application data. The files in the directory reside in a Google Cloud Storage bucket managed by Composer. Composer uses FUSE to map the directory to the path /home/airflow/gcs/data on all of its Airflow worker pods.
In my DAG I run several Kubernetes pods like so:
from airflow.contrib.operators import kubernetes_pod_operator
# ...
splitter = kubernetes_pod_operator.KubernetesPodOperator(
    task_id='splitter',
    name='splitter',
    namespace='default',
    image='europe-west1-docker.pkg.dev/redacted/splitter:2.3',
    cmds=["dotnet", "splitter.dll"],
)
The application code in all the pods that I run needs to read from and write to the /home/airflow/gcs/data directory. But when I run the DAG my application code is unable to access the directory. Likely this is because Composer has mapped the directory into the worker pods but does not extend this courtesy to my pods.
What do I need to do to give my pods r/w access to the /home/airflow/gcs/data directory?
Cloud Composer uses FUSE to mount certain directories from Cloud Storage into Airflow worker pods running in Kubernetes. It mounts these with default permissions that cannot be overwritten, because that metadata is not tracked by Google Cloud Storage. A possible solution is to use a bash operator that runs at the beginning of your DAG to copy files to a new directory. Another possible solution is to use a non-Google Cloud Storage path, such as a /pod path.
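A minimal sketch of the bash-operator approach (not from the original answer; the staging bucket and the inputs path are placeholders, and it assumes gsutil is available on the Composer workers):
from airflow.operators.bash import BashOperator

# Copy inputs out of the Composer-managed FUSE directory into a bucket path
# that the KubernetesPodOperator pods can read directly (placeholder paths).
stage_data = BashOperator(
    task_id='stage_data',
    bash_command='gsutil -m cp -r /home/airflow/gcs/data/inputs gs://my-staging-bucket/inputs',
)

stage_data >> splitter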
I'm using k8s on Google Cloud and I'm trying to use Google Cloud's built-in snapshotting for backups, but I would also like to use it for retrieving a db to work with locally. I've come to the conclusion that I need to first create an image of the snapshot, then export that image to a bucket before downloading it. Something like this:
gcloud compute disks snapshot mydrive --snapshot-names=mydrive-snapshot
gcloud compute images create mydrive-image --source-snapshot mydrive-snapshot
gcloud compute images export --destination-uri gs://my-bucket/mydrive-image.tar.gz --image mydrive-image
gsutil cp gs://my-bucket/mydrive-image.tar.gz file://mydrive-image.tar.gz
tar xvf mydrive-image.tar.gz
This gives me a file disk.raw, but I'm not sure how to mount it locally.
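My rough idea for mounting the raw image (untested; the loop device and partition names are guesses) is something like:
# attach disk.raw as a loop device and scan its partition table
sudo losetup --find --show --partscan disk.raw   # prints e.g. /dev/loop0
# mount the first partition
sudo mkdir -p /mnt/mydrive
sudo mount /dev/loop0p1 /mnt/mydrive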
Are there any other simple solutions to this? I would be fine using a native k8s workflow instead, as long as it's at the volume level and doesn't involve actually running anything in a pod.
Why not just locally mount the GCS bucket to which you exported the disk data?
You can use gcsfuse for doing this.
Follow these instructions for installing Cloud Storage FUSE and its dependencies
Set up credentials for Cloud Storage FUSE (follow the above instructions to do this)
Create a directory (or use an already existing one) in which to mount the bucket
Use Cloud Storage FUSE to mount the bucket (e.g. my-bucket).
gcsfuse my-bucket /path/to/mount
Now you can see the content inside the bucket:
ls /path/to/mount
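If credentials are not picked up automatically in step 2, two common options (a sketch; the key file path is a placeholder) are:
# use application-default credentials
gcloud auth application-default login
# or point gcsfuse at a service-account key file
gcsfuse --key-file /path/to/key.json my-bucket /path/to/mount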
I have a Google Cloud Storage Bucket which is mounted to 3 virtual machines using fstab.
When I upload a file from another machine to the Google bucket using the gsutil command, the uploaded file is accessible from only 2 VMs (Set A). The other VM (Set B) doesn't show the newly uploaded file.
The fstab entry used for mounting is as follows.
bucket_name mounted_path gcsfuse rw,uid=1002,gid=1003,user,allow_other,key_file=key_file_path
Content of /etc/mtab file from Set A is as follows.
bucket_name mounted_path fuse rw,nosuid,nodev,relatime,user_id=1002,group_id=1003,default_permissions 0 0
Content of /etc/mtab file from Set B is as follows.
bucket_name mounted_path fuse fuse rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
Here is how I mount Storage Buckets on VMs:
Create a VM instance and give it the Storage "full" access scope. If you already have a VM, edit it to use this scope; note that you need to stop the VM to edit its access scope.
Install gcsfuse on your instance if you haven't already.
Create a directory where you want to mount your bucket: mkdir /path/to/bucket
Go to Cloud Storage and edit your bucket permissions by adding the Compute Engine default service account as a Storage Admin. You can find this service account in IAM & admin --> Service accounts; it has the form 1213242443-compute@developer.gserviceaccount.com
Use gcsfuse bucket_name /path/to/bucket to mount your bucket. Here gcsfuse will use the default service account to verify access and make the connection; this is the easiest way, as it takes only a few steps.
Now any file you upload to your bucket will appear in the VM's mount folder /path/to/bucket
Read more about this process here
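To make the mount persist across reboots, an fstab entry along these lines is one option (a sketch; the uid/gid values and paths are placeholders, and allow_other also requires user_allow_other to be enabled in /etc/fuse.conf):
bucket_name /path/to/bucket gcsfuse rw,_netdev,allow_other,uid=1001,gid=1001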
I am not able to get write access to a GCS bucket from within a GKE pod.
I have a GKE pod running. I have not changed any k8s configuration regarding service accounts. I have docker exec'd into the pod and installed gcloud/gsutil. gcloud auth list shows a 1234-compute#developer.gserviceaccount.com entry. From within GCS I have added that same account as storage admin, storage legacy bucket owner, storage object creator (i.e., I just tried a bunch of stuff). I am able to run gsutil ls gs://bucket. However when running gsutil cp file gs://bucket, it prints:
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
gsutil acl get gs://bucket prints:
AccessDeniedException: Access denied. Please ensure you have OWNER permission on gs://bucket
Other things I have tried are adding the allUsers and allAuthenticatedUsers as creators and owners of the bucket, with no change. I am able to write to the bucket from my dev machine just fine.
When I run gsutil acl get gs://bucket from another machine, it prints the same address as an OWNER as the output from gcloud auth list from within the pod.
What is the special sauce I need to allow the pod to write to the bucket?
You need to set permissions for the cluster (or better, for the particular node pool, in the case of Terraform):
oauth_scopes = [
  "https://www.googleapis.com/auth/devstorage.read_write", // 'ere we go!
  "https://www.googleapis.com/auth/logging.write",
  "https://www.googleapis.com/auth/monitoring",
  "https://www.googleapis.com/auth/service.management.readonly",
  "https://www.googleapis.com/auth/servicecontrol",
  "https://www.googleapis.com/auth/trace.append",
  "https://www.googleapis.com/auth/compute",
]
The GKE cluster was created with default permissions, which only include read scope for GCS. Solutions:
Apply advice from Changing Permissions of Google Container Engine Cluster
Set GOOGLE_APPLICATION_CREDENTIALS as described in https://developers.google.com/identity/protocols/application-default-credentials
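A sketch of the node-pool route (the cluster name, pool name, and zone are placeholders):
gcloud container node-pools create storage-rw-pool --cluster my-cluster --zone us-central1-a --scopes storage-rw
# then schedule the workload onto the new pool, e.g. via a nodeSelector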
I had the same issue; I had to recreate the node pool with a custom security config in order to get that access.
Also, in my pod I mounted the SA token provided in a secret (default-token-XXXXX).
Then, once gcloud is installed in the pod (via the Dockerfile), it works like a charm.
The key is the node-pool config and mounting the SA.
I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud Console.
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to the classic App Engine sandboxed environment or to Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine being a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many other topics.
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta, though...
How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and the Google Cloud SDK should be installed
Step:1
Create a folder “personal-website” with index.html and your frontend files on your local computer
Step:2
Inside the “personal-website” folder, create a Dockerfile with these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
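To sanity-check the image locally before pushing (an optional step, not part of the original instructions; the tag is a placeholder):
docker build -t personal-website-test .
docker run --rm -p 8080:80 personal-website-test
# then open http://localhost:8080/personal-website/ in a browser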
Step:3
Build the image with Docker and push it to Google Container Registry
You should have the Google Cloud SDK installed, a project selected, and Docker authorized
Select the project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
Step:4
Create a VM instance with the container running in it
Run this command:
1. gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
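One step the tutorial does not show: if HTTP traffic is not already allowed, open port 80 so the site is reachable (the firewall rule name is a placeholder, and this assumes the httpd container listens on port 80, which it does by default):
gcloud compute firewall-rules create allow-http --allow tcp:80 --target-tags http-server
gcloud compute instances add-tags apache-vm2 --tags http-server --zone us-central1-b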