Can I schedule backups using the Heroku PG Backups add-on?

I have been using the PG Backups add-on recently and everything has worked fine. However, this morning the backup process triggered at 10:00 A.M., causing some blocking and timeouts in my application.
Is there a way to specify the schedule of the backups made with this add-on? I've been searching and haven't found anything specific.

Use Cron for Manual Backup Scheduling
Heroku gives you two types of backups: automated and user-initiated. Each plan retains a different number of daily, weekly, and manual backups. You can't control when the automated backups from PG Backups Auto occur, but you can use cron to trigger a "manual" backup at any time.
For example:
# Trigger a "manual" backup every four hours.
0 */4 * * * source $HOME/database_credentials; heroku pgbackups:capture
See Creating a Backup for more information about using the pgbackups command.

No, there is currently no way to do it, aside from using an external process to fire off the calls.
An email to support might reveal more.

While the original question is old, Heroku does have a schedule option for PGBackups now:
https://devcenter.heroku.com/articles/heroku-postgres-backups#scheduling-backups
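For example, a minimal sketch using the current CLI (the app name and time are assumptions):
# Schedule a daily backup at 02:00 US Pacific time.
heroku pg:backups:schedule DATABASE_URL --at "02:00 America/Los_Angeles" --app myapp
# List the configured schedules to verify.
heroku pg:backups:schedules --app myapp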

Related

Rundeck MariaDB hot backup

On the Rundeck backup guide, it is noted that it is mandatory to stop Rundeck to take a full backup when using the data file. That guide doesn't show any secure method to back up a full Rundeck instance (Rundeck server + database) when using MariaDB, PostgreSQL, or any other supported database as a backend.
In a real production scenario, it doesn't seem possible to stop Rundeck on a daily basis.
Can anybody share best practices for taking a hot full backup of a Rundeck installation without stopping Rundeck?
Is there any secure and supported way to achieve a fully consistent backup of Rundeck projects, job definitions, and the database on a daily basis?
In this post, the answer is not clear, because the question doesn't describe what kind of backend is used.
The documentation suggests shutting down the instance because an execution could be active, which would mean a potentially open transaction in the middle of the hot backup process and therefore potential data corruption in your backup. Stopping the instance is the safest way to back up your database.
If you want to do a "hot" backup you can export your projects (with all content, including jobs) and keys.
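As a sketch of that export approach using the rd CLI (the project name and output path are assumptions):
# Export a full project archive (jobs, configs, executions) while Rundeck is running.
rd projects archives export --project MyProject --file /var/backups/MyProject.zip
Note that this captures project and job definitions, not the underlying database, so it complements rather than replaces a database-level backup.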

GCS: How to back up and retain versions with a least-privilege service account

I want to set up a service account that can save away backups of a file into Google Cloud Storage on a daily basis.
I was going to do it using object versioning and a lifecycle policy that maintains the most recent 30 versions of the file.
However, I've discovered that gsutil requires the delete privilege to create a new version of the same file.
It seems a bit nuts to me to give a backup process delete privileges, and it's not really in step with the principle of least privilege, since my understanding is that this gives the service account the ability to run gsutil rm -a and nuke all versions of the backup in one go.
What, then, is the best, least privilege way to achieve this?
I could append a timestamp to the filename each time, but then I can't use lifecycle management and would have to write my own script to determine which are the most recent 30 and delete the rest.
Is there a better/easier way to do this?
The best way I can think of to solve this is to have two service accounts -- one that can only create objects (creating your backups using timestamps), and one that can list and delete them.
Account 1 would create your backups, using timestamped filenames to avoid overwriting an existing object (which would require the storage.objects.delete permission).
The credentials for Account 2 would be used for running a script that lists your backup objects and deletes all but the most recent 30 -- you could run this script as a cronjob on a VM somewhere, or only run it when a new backup is uploaded by utilizing Cloud Pub/Sub to trigger a Cloud Function.
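A minimal sketch of that cleanup script, assuming GNU coreutils, timestamped object names that sort chronologically, and a hypothetical bucket name; it would run under Account 2's credentials:
# List the backups oldest-first, keep the newest 30, delete the rest.
gsutil ls 'gs://my-backup-bucket/backup-*' | sort | head -n -30 | xargs -r gsutil -m rm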
We've ended up going with just saving to a different filename (e.g. backup-YYYYMMDD) and using a lifecycle rule to delete each file after 30 days.
It's not watertight: if the backup fails for 30 days then all copies will be deleted, but we think we've put enough in place that someone would notice before then.
We didn't like leaving it up to a script to do the deleting because:
It's more error-prone
It means we still end up with a service account with the ability to delete files, and we were really aiming to limit that privilege.
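For reference, a hedged sketch of that setup (the bucket name is an assumption):
# lifecycle.json: delete any object more than 30 days old.
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "age": 30 } }
  ]
}
# Apply the rule, then upload each backup under a datestamped name.
gsutil lifecycle set lifecycle.json gs://my-backup-bucket
gsutil cp mydb.dump gs://my-backup-bucket/backup-$(date +%Y%m%d)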

AWS RDS PostgreSQL Minor Update

I am a bit confused about how to perform a minor PostgreSQL version update on AWS RDS.
I read multiple articles from AWS documentation:
https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-enhances-auto-minor-version-upgrades/
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.Upgrading.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html
None of them pointed me to the exact command or set of instructions necessary to perform the minor update released in early August 2019.
I fully understand that major updates can be performed from the AWS Console -> Modify section of the RDS DB Instance or from the AWS CLI.
I even did a search on the available engine versions for Postgres:
aws rds describe-db-engine-versions --engine postgres
This command only outputs major engine versions; the latest one is "PostgreSQL 11.4-R1", which is the one I use.
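To double-check, I also listed the raw version strings (the --query path below is my guess at the response shape), and nothing newer than 11.4 shows up:
aws rds describe-db-engine-versions --engine postgres \
    --query "DBEngineVersions[].EngineVersion" --output text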
I am aware that minor updates can be applied automatically during the maintenance window, but I did not see any minor updates applied.
The latest August release is crucial for our DB instance because it solves a couple of bugs we have reported regarding PG 11 partitioning.
Is there a way to perform a manual version update on RDS for Postgres? Locally I updated the PG engine and all works fine.
Thank you and have a great day!
In the RDS console, when you go to the database details and view the "Maintenance & backups" tab, there is a section that displays whether there are pending maintenance tasks. [Screenshot: a database with a pending maintenance task]
If there are no pending maintenance tasks, that section will say "none" instead of "available", and your database should already be running the latest version. If there are pending tasks, you can manually initiate them at any time, which should update your database to the latest minor version if it isn't already on it.
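The same check can be scripted with the AWS CLI; a minimal sketch (the ARN below is a placeholder, and I'm assuming "db-upgrade" is the pending action type for an engine version update):
# See whether any RDS resources have pending maintenance actions.
aws rds describe-pending-maintenance-actions
# Apply a pending action immediately instead of waiting for the maintenance window.
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydb \
    --apply-action db-upgrade \
    --opt-in-type immediate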
I don't have a PostgreSQL RDS instance to test this on, but you could try running SELECT version(); on the database to get the current version, which might indicate the minor release version.
I don't see any other way to get to the minor version unfortunately, so you may have to open an AWS support ticket to get them to tell you what version the DB instance is running.
You will need to change the maintenance window to the earliest time.
AWS doesn't allow us to manually trigger the minor update process.

Include a job in the database backup, to be activated when the backup is restored

When I take a backup of a database (SQL Server), is there any way that I can include a scheduled job in the backup?
I have a database with stored procedures and a maintenance job that runs some of the stored procedures nightly.
I would like to minimize the effort needed to get the job scheduled again when the .bak file is restored onto a server as a database.
I don't have a quick button to click for your problem, but I think (I'm not sitting in front of it right now) you can right-click a job and get a script for its creation, including the scheduling specifics. I don't know of a way to include a job in a backup and restore it along with the database, though; I think restoring a job would require running a script with the CREATE statement for the job.
You can also back up the msdb database, which is one of the system databases and is where all the jobs live, and then restore your database plus msdb.
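A minimal sketch of that msdb approach from the command line (the server name and paths are assumptions):
# Back up the user database and msdb (where SQL Server Agent jobs live).
sqlcmd -S myserver -Q "BACKUP DATABASE MyAppDb TO DISK = N'C:\backups\MyAppDb.bak'"
sqlcmd -S myserver -Q "BACKUP DATABASE msdb TO DISK = N'C:\backups\msdb.bak'"
Keep in mind that restoring msdb replaces every job on the target server, so this fits best when the target is a dedicated restore environment.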

MongoDB partial backups

We have a 5-node replica set on our development server. We are looking for a way to allow developers to back up a subset of data in a MongoDB database and restore it to their local development environments.
We have looked into the clonedb and mongodump utilities, but both only allow for a backup/dump of the complete database. Due to the possible size of the database, we need an option that allows us to limit the data being backed up or restored.
Does anyone know of a utility or way to achieve this?
I just stumbled upon this question again and decided to add a description of the backup strategy we opted for:
Our current backup strategy for this MongoDB server consists of two setups: backup via a delayed passive secondary node, and a daily backup using mongodump (which takes journalling and the oplog into play).
Besides our normal production nodes, we have set up another secondary node with a priority of 0 (this can either be on its own server or piggyback on another mongo server using a separate port), hidden set to true, and a delay of 7200 seconds (2 hours). This secondary is there for "butter fingers": when someone accidentally drops a database or clears a collection, we have 2 hours before those changes replicate to the passive secondary. The passive secondary can NOT be used for reading or writing; its role is simply to be a backup node. We also use this node for the nightly backup to prevent unnecessary overhead on any of the other nodes.
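A hedged sketch of how such a member might be configured from the mongo shell (the member index is an assumption, and newer MongoDB versions call the delay setting secondaryDelaySecs):
# Reconfigure member 5 as a hidden, priority-0 secondary delayed by 2 hours.
mongo --eval '
  var cfg = rs.conf();
  cfg.members[5].priority = 0;
  cfg.members[5].hidden = true;
  cfg.members[5].slaveDelay = 7200;
  rs.reconfig(cfg);
'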
The nightly backup is set to run every night at 23:00 via a crontab entry. The command simply executes a script set up in /opt/auto-mongo-backup. This script can be found at https://github.com/jaconel/automongobackup (originally found at https://github.com/micahwedemeyer/automongobackup). It allows a single nightly cron job to also cover weekly and monthly backups. Backups are saved to /var/backups/mongodb.
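The crontab entry would look something like this (the exact script filename inside /opt/auto-mongo-backup is an assumption):
# Run the automongobackup script against the delayed secondary every night at 23:00.
0 23 * * * /opt/auto-mongo-backup/automongobackup.sh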
Hope this helps someone out.