I want to take an Oracle database backup using RMAN directly into Google Cloud Storage.
I am unable to find a plugin for taking RMAN backups into Cloud Storage. There is a plugin for Amazon S3, and I am looking for an equivalent for Google Cloud Storage.
I don't believe there's an official way of doing this, although I did file a Feature Request for the Cloud Storage engineering team to look into, which you can find here.
I recommend starring the Feature Request for easy visibility and access; that lets you follow its status updates, and the Cloud Storage team might ask questions there too.
You can use gcsfuse to mount a GCS bucket as a file system on your machine and point RMAN at it to create backups.
You can find more information about gcsfuse on its GitHub page. Here are the basic steps to mount a bucket and run RMAN:
Create a bucket oracle_bucket. Check that it doesn't have a retention policy defined on it (gcsfuse appears to have some issues with retention policies).
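For example, such a bucket can be created with gsutil (pick your own globally unique name):
gsutil mb gs://oracle_bucket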
Please have a look at mounting.md, which describes credentials for GCS. For example, I created a service account with the Storage Admin role and created a JSON key for it.
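If you prefer the command line, here is a sketch of creating such a service account and key with gcloud (my-project and rman-backup are placeholder names; substitute your own):
gcloud iam service-accounts create rman-backup
gcloud projects add-iam-policy-binding my-project --member "serviceAccount:rman-backup@my-project.iam.gserviceaccount.com" --role "roles/storage.admin"
gcloud iam service-accounts keys create gcs-sa-key.json --iam-account rman-backup@my-project.iam.gserviceaccount.com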
Next, set up credentials for gcsfuse on your machine. In my case, I set GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON key from the previous step. Run:
sudo su - oracle
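# Assumption: the JSON key from the previous step was saved at this path; adjust it to yours.
export GOOGLE_APPLICATION_CREDENTIALS=/home/oracle/gcs-sa-key.json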
mkdir ./mnt_bucket
gcsfuse --dir-mode 755 --file-mode 777 --implicit-dirs --debug_fuse oracle_bucket ./mnt_bucket
From the gcsfuse docs:
Important: You should run gcsfuse as the user who will be using the file system, not as root. Do not use sudo.
Configure RMAN to create a backup in mnt_bucket. For example:
configure controlfile autobackup format for device type disk to '/home/oracle/mnt_bucket/%F';
configure channel device type disk format '/home/oracle/mnt_bucket/%U';
After you run backup database, you'll see the backup files created in your GCS bucket.
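For reference, a minimal session that triggers such a backup might look like this (connecting to the local target database; adjust to your environment):
rman target /
backup database;
exit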
When I use the below command in Azure Databricks
display(dbutils.fs.ls("/mnt/MLRExtract/excel_v1.xlsx"))
My output comes back as wasbs://paycnt@sdvstr01.blob.core.windows.net/mnt/MLRExtract/excel_v1.xlsx
rather than the expected dbfs://mnt/MLRExtract/excel_v1.xlsx
Please suggest.
Mounting a storage account to the Databricks File System (DBFS) lets users access it any number of times without re-supplying credentials. Any files or directories in the container can then be accessed from Databricks clusters through the mount point. The procedure you used mounts a blob storage container to DBFS.
So, you can access your blob storage container from DBFS using the mount point. The method dbutils.fs.ls(<mount_point>) lists all the files and directories available under that mount point. It is not necessary to provide the path of a single file; instead, simply use:
display(dbutils.fs.ls("/mnt/MLRExtract/"))
The above command returns all the files available in the mount point (which is your blob storage container). You can perform all the required operations and then write back to this DBFS path, and the changes will be reflected in your blob storage container too.
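If you need the file contents rather than a listing, you can also read through the driver-local /dbfs path; a minimal sketch with pandas (assuming an Excel engine such as openpyxl is installed on the cluster):
import pandas as pd

# Mount points under dbfs:/mnt/... are exposed to driver-local code under /dbfs/mnt/...
df = pd.read_excel("/dbfs/mnt/MLRExtract/excel_v1.xlsx")
display(df)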
Refer to the following link to learn more about the Databricks File System:
https://docs.databricks.com/data/databricks-file-system.html
I have set up a Postgres database inside a Kubernetes cluster, and now I would like to back up the database but don't know how.
Can anyone help me get it done?
Thanks
Sure, you can back up your database. You can set up a CronJob to periodically run pg_dump and upload the dumped data into a cloud bucket; check this blog post for more details. A shell sketch of the core idea is shown below.
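Here, the namespace db, pod postgres-0, database mydb, and the bucket name are all placeholders; a CronJob would wrap the same commands in a container image:
DUMP="mydb-$(date +%F).sql.gz"
kubectl exec -n db postgres-0 -- pg_dump -U postgres mydb | gzip > "$DUMP"
gsutil cp "$DUMP" gs://my-backup-bucket/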
However, I recommend using a Kubernetes-native disaster recovery tool such as Velero, Stash, or Portworx PX-Backup.
If you use an operator to manage your database, such as zalando/postgres-operator, CrunchyData, or KubeDB, you can use its native database backup functionality.
Disclosure: I am one of the developers of the Stash tool.
I am currently using the Google Cloud Shell, and I wish to access the persistent disk of another user (not using a local shell).
More info on topic of inquiry: https://medium.com/google-cloud/no-localhost-no-problem-using-google-cloud-shell-as-my-full-time-development-environment-22d5a1942439
Cloud Shell is a micro VM dedicated to you, free of charge, with a personal persistent disk mounted.
EDITED: Thanks to @John Hanley's comment, you can access someone else's Cloud Shell files with the code he provided. However, you need the credentials of the target Cloud Shell environment, so this is neither very secure nor recommended.
However, you can mount a FUSE directory, and the other user can do the same. With FUSE, you navigate a bucket as if it were a directory tree. But be careful: a storage bucket is not a file system, so performance and usage patterns aren't the same. Moreover, FUSE doesn't guarantee data integrity in case of simultaneous file use, especially concurrent writes. Use it with caution.
But you can have a common workspace if that's your requirement.
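For example, each user can mount the same bucket from their own Cloud Shell session (the bucket name here is hypothetical):
mkdir -p ~/shared
gcsfuse shared-team-bucket ~/shared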
If you use Cloud Shell as a dev environment, like a computer or a VM, the same best practices apply. The dev environment has to be considered ephemeral (a computer can have an outage or be lost/stolen; people can leave a company and you no longer have access to their Cloud Shell), and therefore you have to save your sources frequently to a safe place (a Git repository, or Cloud Storage via FUSE).
I would like to know if there is a way to mount a Google Cloud Storage bucket as a folder so that,
the first time we read a file, it is cached locally (so subsequent reads won't use money/bandwidth).
gcsfuse has two types of caching available: stat caching and type caching. You can refer to this document, which provides detailed information on both types of caching and their trade-offs.
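For example, the TTLs of both caches can be raised at mount time; a sketch with placeholder bucket name and values, using the flags as documented for gcsfuse:
gcsfuse --stat-cache-ttl 1h --type-cache-ttl 1h my-bucket ~/mnt_bucket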
When manually creating a new cloud storage bucket using the web-based storage console (https://console.developers.google.com/), is there a way to specify the DRA attribute? From the documentation, it appears that the only way to create buckets with that attribute is to either use Curl, gsutil or some other script, but not the console.
There is currently no way to do this.
At present, the storage console provides only a subset of the Cloud Storage API, so you'll need to use one of the tools you mentioned to create a DRA bucket.
For completeness, it's pretty easy to do this using gsutil (documentation at https://developers.google.com/storage/docs/gsutil/commands/mb):
gsutil mb -c DRA gs://some-bucket