Google Cloud SQL - Restoring a backup older than 7 days

Cloud SQL retains up to 7 automated backups for each instance.
Is it possible to restore a backup after 7 days?

Answer is "No" as per the documentation. Either take on-demand backups and delete them at your desired retention, or export data to flat file.
Documentation

Actually, it is possible if you take an on-demand backup; on-demand backups are not automatically removed.
https://cloud.google.com/sql/docs/mysql/backup-recovery/backing-up#gcloud_7
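For reference, a minimal sketch of taking an on-demand backup with gcloud (the instance, bucket, and database names are placeholders):
# Create an on-demand backup; it is kept until you delete it.
gcloud sql backups create --instance=my-instance --description="manual backup"
# Alternatively, export the data to a flat file in Cloud Storage.
gcloud sql export sql my-instance gs://my-bucket/dump.sql --database=mydb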

Related

Rundeck MariaDB hot backup

The Rundeck backup guide notes that it is mandatory to stop Rundeck to take a full backup when using the data file. However, that guide doesn't show any safe method to back up a full Rundeck instance (Rundeck server + database) when using MariaDB, PostgreSQL, or any other supported database as a backend.
In a real production scenario, it does not seem feasible to stop Rundeck on a daily basis.
Can anybody share best practices for taking a hot full backup of a Rundeck installation without stopping Rundeck?
Is there any safe and supported way to achieve a fully consistent backup of Rundeck projects, job definitions, and the database on a daily basis?
In this post, the answer is not clear, because the question doesn't describe what kind of backend is used.
The documentation suggests shutting down the instance because an execution could be active, which would mean a transaction in flight in the middle of the "hot backup" and therefore potential data corruption in your backup. Shutting down is the safest way to back up your database.
If you want to do a "hot" backup, you can export your projects (with all content, including jobs) and keys.

GCS: How to backup and retain versions with a least privilege service account

I want to set up a service account that can save away backups of a file into Google Cloud Storage on a daily basis.
I was going to do it using object versioning and a life cycle policy that maintains the most recent 30 versions of the file.
However, I've discovered that gsutil requires the delete privilege to create a new version of the same file.
It seems a bit nuts to me to give a backup process delete privileges, and it's not really in step with the principle of least privilege, since my understanding is that this gives the service account the ability to run gsutil rm -a and nuke all versions of the backup in one go.
What, then, is the best, least privilege way to achieve this?
I could append a timestamp to the filename each time, but then I can't use lifecycle management and would have to write my own script to determine which are the recent 30 and delete the rest.
Is there a better/easier way to do this?
The best way I can think of to solve this is to have two service accounts -- one that can only create objects (creating your backups using timestamps), and one that can list and delete them.
Account 1 would create your backups, using timestamped filenames so that it never overwrites an existing object and therefore never needs the storage.objects.delete permission.
The credentials for Account 2 would be used by a script that lists your backup objects and deletes all but the most recent 30 -- you could run this script as a cron job on a VM somewhere, or only run it when a new backup is uploaded by using Cloud Pub/Sub to trigger a Cloud Function.
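As a minimal sketch of how the two accounts might be used (bucket name, prefix, and filenames are placeholders; the cleanup relies on GNU head and xargs):
# Account 1 (create-only): upload a timestamped backup, never overwriting an existing object.
gsutil cp db-dump.sql gs://my-backups/backup-$(date +%Y%m%d-%H%M%S).sql
# Account 2 (list + delete): keep only the 30 most recent backups.
gsutil ls gs://my-backups/backup-* | sort | head -n -30 | xargs -r gsutil rm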
We've ended up going with just saving to a different filename (e.g. backup-YYYYMMDD) and using a lifecycle rule to delete each file after 30 days (a sketch of the rule follows below).
It's not watertight -- if the backup fails for 30 days straight, all copies will be gone -- but we think we've put enough in place that someone would notice before then.
We didn't like leaving it up to a script to do the deleting because:
It's more error prone
It means we still end up with a service account with the ability to delete files, and we were really aiming to limit that privilege.
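For reference, a minimal sketch of that lifecycle rule (the bucket name is a placeholder); it deletes any object more than 30 days old:
# Write the lifecycle config and apply it to the bucket.
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 30}}]}
EOF
gsutil lifecycle set lifecycle.json gs://my-backups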

About deleting binary log files on Cloud SQL

I have a question about binary logs on Google Cloud SQL.
Since the storage used by my Cloud SQL instance is constantly increasing, I want to delete the binary log files. I have read the documentation about it, but it is not clear whether, when I disable binary logging, the files are deleted immediately or whether I have to wait up to 7 days for them to be deleted. Thank you.
https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#disk-usage
According to the official documentation:
Disk usage and point-in-time recovery
The binary logs are automatically deleted with their associated automatic backup, which generally happens after about 7 days.
Diagnosing issues with Cloud SQL instances
Binary logs cannot be manually deleted. Binary logs are automatically deleted with their associated automatic backup, which generally happens after about seven days.
Therefore, you have to wait about 7 days for the binary logs and their associated automatic backup to be deleted.
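For reference, a minimal sketch of disabling binary logging with gcloud (the instance name is a placeholder); note that this also disables point-in-time recovery and, as far as I know, restarts the instance:
# Turn off binary logging for the instance.
gcloud sql instances patch my-instance --no-enable-bin-log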

How can we do an automatic backup of a Compute Engine disk every day in Google Cloud?

I have created an instance in Compute Engine with Windows Server 2012. I can't see any option to take an automatic backup of the instance disk (database) every day. There is a snapshot option, but we have to run it manually. Please suggest a way to back up automatically that can be restored with a single click. If there is any other possibility using Cloud SQL, Cloud Storage, or any other storage, please recommend it.
Thanks.
There's an API to take snapshots; see the API section here:
https://cloud.google.com/compute/docs/disks/create-snapshots#create_your_snapshot
You can write a simple app, triggered from cron or something similar, to take a snapshot periodically. For example:
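A minimal sketch of a cron entry that does this with gcloud (disk name, zone, and schedule are placeholders; % must be escaped in crontab):
# Snapshot the disk every night at 02:00.
0 2 * * * gcloud compute disks snapshot my-disk --zone=us-central1-a --snapshot-names=my-disk-$(date +\%Y\%m\%d)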
There is no built-in provision for automatic backups of a Compute Engine disk, but you can do a manual disk backup by creating a snapshot.
The best alternative is to create a Cloud Storage bucket and move your files there; Cloud Storage has automated backup facilities available.
Cloud Storage and Cloud SQL are your options for automated backups in Google Cloud.
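For example, a minimal sketch of copying files into a bucket on a schedule (bucket name and local path are placeholders):
# Mirror a local backup folder into a Cloud Storage bucket.
gsutil -m rsync -r /var/backups gs://my-backup-bucket/backups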

Can I schedule backups using the Heroku PG Backup add-on?

I have been using the PG Backups add-on recently and everything has worked fine; however, this morning the backup process triggered at 10:00 A.M., generating some blocking and timeouts in my application.
Is there a way to specify the schedule of the backups made with this add-on? I've been searching and haven't found anything specific.
Use Cron for Manual Backup Scheduling
Heroku gives you two types of backups: automated and user-initiated. Each plan has a different number of daily, weekly, and manual backups that are retained. You can't control when the automated backups occur with PG Backups Auto, but you can use cron to trigger a "manual" backup at any time.
For example:
# Trigger a "manual" backup every four hours.
0 */4 * * * source $HOME/database_credentials; heroku pgbackups:capture
See Creating a Backup for more information about using the pgbackups command.
No, there is no way to do it currently, aside from using an external process to fire the calls.
An email to support might reveal more.
While the original question is old, Heroku does have a schedule option for PGBackups now:
https://devcenter.heroku.com/articles/heroku-postgres-backups#scheduling-backups
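For reference, a minimal sketch of scheduling with the current CLI (the app name and timezone are placeholders):
# Schedule a daily backup of the primary database at 02:00.
heroku pg:backups:schedule DATABASE_URL --at '02:00 America/New_York' --app my-app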