regional vs multi-regional buckets - google-cloud-storage

In the project I see multi-regional buckets that someone created by mistake, and they are being used for the data pipelines. They should have been regional buckets. What is the recommended way to change them to regional buckets?

You can try the following command:
gsutil defstorageclass set regional gs://[BUCKET_NAME]
Reference: changing default storage class
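Note that changing the default storage class only affects objects written after the change; existing objects keep their current class and would need to be rewritten (for example with gsutil rewrite -s regional). To verify the new default afterwards, the matching get subcommand can be used, with [BUCKET_NAME] again as a placeholder:
gsutil defstorageclass get gs://[BUCKET_NAME]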

Related

Without retention policy or lifecycle rules, would Google Cloud Storage automatically delete files?

My app uses Google Cloud Storage through Firebase with Java, Angular & Flutter. It stores pictures and such there. Now, a lot of older files recently disappeared from Google Cloud Storage. A test version of my app is probably the culprit. But I want to make sure that I got the storage bucket configured correctly.
Please note that I don't have object versioning enabled. From what I know, it would keep a copy of deleted files around. That's why I plan to enable it in the future. But it doesn't help me with files deleted in the past.
Right now, my storage bucket is configured as follows:
Default storage class: Standard
Object versioning: Off
Retention policy: None
Lifecycle rules: None
So with that configuration, would Google Cloud Storage automatically delete files? Like, say, after a year or so?
No. If you don't ask Cloud Storage to delete your files, your files will stay around forever. There's no expiration of any sort by default. Cloud Storage is a popular tool for long-term storage/backup/retention.
If you want to be especially careful not to delete certain objects, retention policies and object holds can be used to make it harder to delete objects by accident. For example, if you wanted to temporarily ensure that your scripts would not delete your most important object, you could run:
gsutil retention temp set gs://my_bucket_name/my_important_file.txt
This would set a "temporary object hold" on the object, which would make it so that my_important_file.txt could not be deleted with the delete command until you released the hold.
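When you no longer need that protection, the matching release subcommand lifts the hold (same example object as above):
gsutil retention temp release gs://my_bucket_name/my_important_file.txt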

Is there a way to figure out in which region a Google Cloud Storage bucket is hosted?

NCBI (the National Center for Biotech Info) generously provided their data for 3rd parties to consume. The data is located in cloud buckets such as gs://sra-pub-run-1/. I would like to read this data without incurring additional costs, which I believe can be achieved by reading it from the same region as where the bucket is hosted. Unfortunately, I can't figure out in which region the bucket is hosted (NCBI mentions in their docs that it's in the US, but not where in the US). So my questions are:
Is there a way to figure out in which region a bucket that I don't own, like gs://sra-pub-run-1/, is hosted?
Is my understanding correct that reading the data from instances in the same region is free of charge? What if the GCS bucket is multi-region?
Doing a simple gsutil ls -b -L either provides no information (when listing a specific directory within sra-pub-run-1), or gives a permission-denied error if I try to list info on gs://sra-pub-run-1/ directly using:
gsutil -u metagraph ls -b gs://sra-pub-run-1/
You cannot specify a specific Compute Engine zone as a bucket location, but all Compute Engine VM instances in zones within a given region have similar performance when accessing buckets in that region.
Billing-wise, egressing data from Cloud Storage into a Compute Engine instance in the same location/region (for example, US-EAST1 to US-EAST1) is free, regardless of zone.
So, check the "Location constraint" of the GCS bucket (gsutil ls -Lb gs://bucketname); if it says "US-EAST1" and your GCE instance is also in US-EAST1, downloading data from that GCS bucket will not incur an egress fee.
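Once you know the region and have a VM there, the read itself can be billed to your own project with the same -u (billing project) flag used in the question; a rough sketch, with the project ID and object path as placeholders:
gsutil -u your-project-id cp gs://sra-pub-run-1/path/to/object .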

With AWS PowerShell cmdlets, how do I specify a different endpoint for S3 buckets

I have AWS instances in several regions (us-east-1, us-west-2). I use CodeDeploy to take .zip files stored in S3 and deploy them to AutoScale groups. However, since the S3 bucket only exists in us-east-1 and I am attempting to deploy to us-west-2, specifying a region in my PowerShell cmdlet (New-CDDeployment) doesn't work.
I need to specify a region (us-west-2) but pull the files from the S3 bucket in us-east-1 by using a custom endpoint (s3-us-east-1.amazonaws.com); however, I cannot find any way of doing this within the PowerShell cmdlet.
Use cross-region replication to replicate your bucket from us-east-1 into us-west-2, and reference the replica bucket in your PowerShell cmdlet, since it will be in the same region.
Even if you didn't have this issue, this would be a good general practice so that you don't lose access to your code on S3 during us-east-1 S3 outages.
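As a rough sketch of what the deployment call could look like once the replica bucket exists in us-west-2 (the application, group, bucket, and key names are placeholders, and the flattened S3Location_* parameter names are assumed to follow the usual AWS Tools for PowerShell conventions):
# Deploy using the replicated bundle that lives in the same region as the target group
New-CDDeployment -ApplicationName "MyApp" `
    -DeploymentGroupName "MyDeploymentGroup" `
    -S3Location_Bucket "my-replica-bucket-us-west-2" `
    -S3Location_Key "release.zip" `
    -S3Location_BundleType "zip" `
    -Region us-west-2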

Setting the Durable Reduced Availability (DRA) attribute for a bucket using Storage Console

When manually creating a new cloud storage bucket using the web-based storage console (https://console.developers.google.com/), is there a way to specify the DRA attribute? From the documentation, it appears that the only way to create buckets with that attribute is to use either curl, gsutil, or some other script, but not the console.
There is currently no way to do this.
At present, the storage console provides only a subset of the Cloud Storage API, so you'll need to use one of the tools you mentioned to create a DRA bucket.
For completeness, it's pretty easy to do this using gsutil (documentation at https://developers.google.com/storage/docs/gsutil/commands/mb):
gsutil mb -c DRA gs://some-bucket

benefits of using directoryperdb in MongoDB

I have found out that there is an option directoryperdb, but what are the benefits of using it instead of the default file organization?
cheers,
/Marcin
The main benefit is being able to mount different volumes per database.
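For reference, a minimal sketch of enabling it, either on the command line or in the YAML config file (the dbPath is just a placeholder; note that enabling it on an existing deployment requires a dump/restore or a resync, since existing data files are not rearranged automatically):
mongod --dbpath /data/db --directoryperdb
or, equivalently, in mongod.conf:
storage:
  dbPath: /data/db
  directoryPerDB: true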