Setting the Durable Reduced Availability (DRA) attribute for a bucket using Storage Console - google-cloud-storage

When manually creating a new Cloud Storage bucket using the web-based Storage Console (https://console.developers.google.com/), is there a way to specify the DRA attribute? From the documentation, it appears that the only way to create buckets with that attribute is to use curl, gsutil, or some other scripted approach, but not the console.

There is currently no way to do this.
At present, the Storage Console exposes only a subset of the Cloud Storage API's functionality, so you'll need to use one of the tools you mentioned to create a DRA bucket.
For completeness, it's pretty easy to do this using gsutil (documentation at https://developers.google.com/storage/docs/gsutil/commands/mb):
gsutil mb -c DRA gs://some-bucket
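If you'd rather hit the JSON API directly from a script, the storage class can be set at bucket creation time. A rough sketch using curl, where PROJECT_ID and the bucket name are placeholders and an OAuth access token is assumed:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"name": "some-bucket", "storageClass": "DURABLE_REDUCED_AVAILABILITY"}' \
  "https://www.googleapis.com/storage/v1/b?project=PROJECT_ID"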

Related

RMAN backup into Google Cloud Storage

I want to take an Oracle database backup using RMAN directly into Google Cloud Storage.
I am unable to find a plugin for taking RMAN backups into Cloud Storage. There is such a plugin for Amazon S3, and I am looking for an equivalent for Google Cloud Storage.
I don't believe there's an official way of doing this, although I did file a Feature Request for the Cloud Storage engineering team to look into, which you can find here.
I recommend starring the Feature Request for easy visibility and access; that lets you follow its status updates, and the Cloud Storage team might ask questions there too.
You can use gcsfuse to mount a GCS bucket as a file system on your machine and have RMAN create backups there.
You can find more information about gcsfuse on its GitHub page. Here are the basic steps to mount a bucket and run RMAN:
Create a bucket oracle_bucket. Check that it doesn't have a retention policy defined on it (it looks like gcsfuse has some issues with retention policies).
Have a look at mounting.md in the gcsfuse repo, which describes how to set up credentials for GCS. For example, I created a service account with the Storage Admin role and generated a JSON key for it.
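If you go the service-account route, that setup can be scripted; a rough sketch with gcloud, where the account name, project, and key path are placeholders:

gcloud iam service-accounts create gcsfuse-rman
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member "serviceAccount:gcsfuse-rman@MY_PROJECT.iam.gserviceaccount.com" \
    --role "roles/storage.admin"
gcloud iam service-accounts keys create /home/oracle/key.json \
    --iam-account gcsfuse-rman@MY_PROJECT.iam.gserviceaccount.com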
Next, set up credentials for gcsfuse on your machine. In my case, I set GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON key from the previous step. Then run:
sudo su - oracle
mkdir ./mnt_bucket
gcsfuse --dir-mode 755 --file-mode 777 --implicit-dirs --debug_fuse oracle_bucket ./mnt_bucket
From the gcsfuse docs:
Important: You should run gcsfuse as the user who will be using the file system, not as root. Do not use sudo.
Configure RMAN to create a backup in mnt_bucket. For example:
configure controlfile autobackup format for device type disk to '/home/oracle/mnt_bucket/%F';
configure channel device type disk format '/home/oracle/mnt_bucket/%U';
After you run backup database, you'll see the backup files created in your GCS bucket.
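You can confirm from another shell that the backup pieces landed in the bucket (assuming the oracle_bucket name from above):

gsutil ls -l gs://oracle_bucket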

When creating a new Cloud Composer environment, is it possible to set the bucket to a preexisting one?

So I already have an empty storage bucket created for this, and I don't want Composer to create its own bucket for the DAGs - I'd like to use the one already created.
It's not ideal to have it just create a random bucket and then run
gcloud composer environments run test-environment --location europe-west1 variables -- --set gcs_bucket gs://my-bucket
I've dug around the docs, but it seems you cannot stop it from creating a brand-new bucket every time?
Currently, it is not possible.
In the environment's configuration in the Cloud Composer API, the dagGcsPrefix parameter is output only; you cannot set it. The documentation also mentions that a Cloud Storage bucket is always created along with the Composer environment, and that the bucket's name is based on the environment's region, name, and a random ID.
You may want to "star" the Feature Request for this functionality to receive notifications whenever an update is published. You can also review or subscribe to the Cloud Composer release notes to stay informed about recently added features.
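For reference, once the environment exists you can read the name of the bucket Composer created via dagGcsPrefix; a quick sketch with gcloud, reusing the environment name and location from the question:

gcloud composer environments describe test-environment \
    --location europe-west1 \
    --format="get(config.dagGcsPrefix)"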
You are right, this is currently not supported in Composer.

How can we do an automatic backup of a Compute Engine disk every day in Google Cloud?

I have created an instance in Compute Engine with Windows Server 2012. I can't see any option to take an automatic backup of the instance's database disk every day. There is a snapshot option, but we need to run it manually. Please suggest a way to back up automatically that can be restored with a single click. If there is another possibility using Cloud SQL, Cloud Storage, or anything else, please recommend it.
Thanks
There's an API to take snapshots; see the API section here:
https://cloud.google.com/compute/docs/disks/create-snapshots#create_your_snapshot
You can write a simple app, triggered from cron or something similar, to take a snapshot periodically.
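You don't even need a full app for the simple case; a cron entry calling gcloud can do it. A minimal sketch, assuming a disk named my-disk in zone us-central1-a (note that % must be escaped in a crontab):

# crontab -e: take a snapshot every day at 02:00
0 2 * * * gcloud compute disks snapshot my-disk --zone us-central1-a --snapshot-names "my-disk-$(date +\%Y\%m\%d)"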
There is no built-in provision for automatically backing up a Compute Engine disk, but you can do a manual disk backup by creating a snapshot.
The best alternative is to create a bucket and move your files there; Google Cloud Storage buckets have automated backup facilities available.
Cloud Storage and Cloud SQL are your options for automated backups in Google Cloud.
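For the bucket approach, copying files up is straightforward with gsutil (the local path and bucket name below are illustrative):

gsutil -m cp -r /path/to/backups gs://my-backup-bucket/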

What is the fastest way to duplicate google storage bucket?

I have one 10 TB bucket and need to create a copy of it as quickly as possible. What is the fastest and most effective way of doing this?
Assuming you want to copy the bucket to another bucket in the same location and storage class, you could run gsutil rsync on a GCE instance:
gsutil -m rsync -r -d -p gs://source-bucket gs://dest-bucket
If you want to copy across locations or storage classes the above command will still work, but it will take longer because in that case the data (not just metadata) need to be copied.
Either way, you should check the result status and re-run the rsync command if any errors occurred. (The rsync command will avoid re-copying objects that have already been copied.) You should repeat the rsync command until the bucket has successfully been fully copied.
One simple way is to use Google's Cloud Storage Transfer Service. It may also be the fastest, though I have not confirmed this.
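If you try the Transfer Service, newer gcloud releases include a transfer command group; a rough sketch, assuming the Storage Transfer API is enabled and with placeholder bucket names:

gcloud transfer jobs create gs://source-bucket gs://dest-bucket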
You can achieve this easily with gsutil.
gsutil -m cp -r gs://source-bucket gs://duplicate-bucket
Are you copying within Google Cloud Storage to a bucket with the same location and storage class? If so, this operation should be very fast. If the buckets have different locations and/or storage classes, the operation will be slower (and more expensive), but this will still be the fastest way.

how to rotate file while doing a streaming transfer to google cloud storage

We are working on a POC where we want to stream our web logs to Google Cloud Storage. We learnt that objects on Google Cloud Storage are immutable and cannot be appended to from the Java API. However, we can do streaming transfers using gsutil, according to this link: https://cloud.google.com/storage/docs/concepts-techniques?hl=en#streaming
Now we would like to write hourly files. Is there a way to change the file name every hour, like logrotate?
gsutil doesn't offer any logrotate-style features for object naming.
With a gsutil streaming transfer, the resulting cloud object is named according to the destination object in your gsutil cp command. To achieve rotation, the job that produces the stream could close the stream on an hourly basis, pick a new object name, and issue a new streaming gsutil cp command, as sketched below.
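A minimal bash sketch of that pattern, where the bucket name and log source are illustrative (note that a log record can be split at the hour boundary when the stream is cut):

while true; do
  # stream stdin to a new, hour-stamped object for at most one hour, then rotate
  timeout 3600 cat | gsutil cp - "gs://my-log-bucket/weblogs/$(date +%Y%m%d%H).log"
done < <(tail -F /var/log/app/access.log)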