Setting up a backup strategy for a PostgreSQL database on Cloud Foundry - postgresql

We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service, and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this service at a cost, but we are not ready for that yet.

PostgreSQL is an experimental service and lacks a dashboard and the advanced features (daily backups, for example) that you can find in the other services you mentioned. If you want a backup, you could write an ad-hoc script that exports all the tables you need and run it every day.
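A minimal sketch of such a script, assuming the service's connection URI is exposed in a DATABASE_URL environment variable (hypothetical; take the real credentials from your service binding) and that you run it once a day from a scheduler:

#!/bin/bash
set -euo pipefail

# Timestamped dump in pg_dump's custom format, so single tables can
# later be restored selectively with pg_restore.
STAMP=$(date +%Y-%m-%d)
pg_dump "$DATABASE_URL" --format=custom --file="backup-$STAMP.dump"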
If you need managed PostgreSQL, you can create a PostgreSQL by Compose service ($17.50/mo for the first GB and $12 for each extra GB).

We used PostgreSQL Studio and deployed it on IBM Bluemix. The database service was connected to the pgstudio interface (this restricts access to only the connected databases). We also had to make minor changes to pgstudio so that we could use pg_dump with the interface.
The result: we could manually dump the data. This solution works well, as we could take regular dumps (though manually).

In the free tier you are right in saying that you can't get backups. Those features are available only in the Compose for PostgreSQL service, but that's a paid service.

Related

How to downsize an AWS RDS instance to free tier

I want to create a free tier clone of a production AWS RDS PostgreSQL instance. As per my understanding, the following are the different ways:
1. Create a snapshot of the production DB and restore it on a t2.micro
2. Create a read replica of the production DB using a t2.micro and then detach it as an independent database
3. Create a free tier database and restore a database dump of the production DB
Option 3 is my last preference.
The problem is that while creating a read replica or restoring from a snapshot, AWS doesn't explicitly allow you to choose the free tier template. I just want to know whether restoring to a t2.micro without any advanced features like autoscaling, performance monitoring etc. is equivalent to the free tier or not. I read here that the key thing with an AWS production DB is that AWS provisions a secondary database to fall back on in the event of a failure of the primary database or of the Availability Zone in which the database is running.
AWS Free Tier doesn't actually care about the kind of service you use. Per their website you just get 750 instance hours per month for a db.t2.micro.
You can use these in any service you see fit and the discount will be applied automatically for the first 12 months.
Looking at the pricing page for RDS Postgres I can see that these instances aren't listed anymore, which seems weird. The t2 instance family is fairly old, so they're probably trying to phase it out, but typically you can provision older instance types using the API directly if they're not available in the Console.
So what you want to do is create your db.t2.micro instance using one of the SDKs or the AWS CLI and restore from a snapshot. Alternatively, you can create a read replica from the CLI and set the class to db.t2.micro; later detaching it from the main cluster (promoting it to a standalone instance) should work.
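A sketch with hypothetical identifiers ("prod-db", "prod-snapshot", "clone-db"):

# Restore a snapshot onto a db.t2.micro, single-AZ:
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier clone-db \
  --db-snapshot-identifier prod-snapshot \
  --db-instance-class db.t2.micro \
  --no-multi-az

# Or create a read replica on a db.t2.micro and later promote it to a
# standalone instance:
aws rds create-db-instance-read-replica \
  --db-instance-identifier clone-db \
  --source-db-instance-identifier prod-db \
  --db-instance-class db.t2.micro
aws rds promote-read-replica --db-instance-identifier clone-db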
The production-ready stuff refers to the Multi-AZ deployment, which is good for production use, but for anything production-related a t2.micro seems like a bad choice, so I'm going to assume you're not planning to do that.

Cloud PostgreSQL: cleaning large objects with vacuumlo

We are using GCP Cloud SQL for our PostgreSQL database.
At the moment one of our applications uses large objects, and I was wondering how to perform a vacuumlo operation on such platforms (the question might be valid for AWS RDS or any other cloud PostgreSQL provider).
Is writing custom queries/procedures to perform the same task the only solution?
Since vacuumlo is a client tool, it should work just fine with hosted databases.
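For example, with hypothetical connection details (the -n flag does a dry run that only reports orphaned large objects):

vacuumlo -n -h 10.1.2.3 -U myuser -v mydb   # dry run: report orphans only
vacuumlo -h 10.1.2.3 -U myuser -v mydb      # actually remove them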

MariaDB Backup from the command line

The Backup feature in the developer console for creating backups is great. I would however like the possibility to automate this. Is there a way to do so from the cf command line app?
Thanks
It's not possible from the cf cli, but there's an API endpoint for triggering backups.
API Docs | Custom Extensions | Swisscom Application Cloud filter for the Cloud Foundry (CF) Cloud Controller (CC) API; implements Swisscom proprietary extensions.
POST /custom/service_instances/{service-instance-id}/backups
Creates a backup for a given service instance
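A sketch of calling that endpoint with curl; the API host here is an assumption (check cf api for your installation) and "my-mariadb" is a hypothetical service instance name:

GUID=$(cf service my-mariadb --guid)   # look up the service instance id
curl -X POST \
  -H "Authorization: $(cf oauth-token)" \
  "https://api.lyra-836.appcloud.swisscom.com/custom/service_instances/$GUID/backups"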
For more info, see Service Backup and Restore in docs.developer.swisscom.com:
Create Backup
To create a backup, navigate to the service instance in the web console and then to the “Backups” tab. There you can click the “Create” button to trigger a manual backup.
Note: Backups have to be triggered manually from the web console.
Be aware that you can only keep a set number of backups per service instance. The actual number depends on the service type and service plan. In case you already have the maximum number, you cannot create any new backups before deleting one of the existing ones.
It may take several minutes to back up your service (depending on the size of your service instance).
Restore Backup
You can restore any backup at any time. The current state of your service will be overwritten and replaced with the state saved to the backup. You are advised to create a backup of the current state before restoring an old state.
Limitations
You can only perform one backup or restore action per service instance at a time. If an action is still ongoing, you cannot trigger another one. You cannot exceed the maximum number of backups per service instance.
We did this by developing a small Node.js application which runs in the cloud in the same space and backs up our MariaDB and MongoDB every night automatically.
EDIT:
You can download the code from here:
https://github.com/theonlyandone/cf-backup-app
Fresh off the press: the Swisscom Application Cloud cf CLI plugin can also automate backup and restore.
The official cf CLI plugin for the Swisscom Application Cloud gives
you access to all the additional features of the App Cloud.
cf install-plugin -r CF-Community "Swisscom Application Cloud"
From the 0.1.0 release notes:
Service Instance Backups
Add cf backups command (list all backups of a service instance)
Add cf create-backup command (create a new backup of a service instance)
Add cf restore-backup command (restore an existing backup of a service instance)
Add cf delete-backup command (delete an existing backup of a service instance)
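With these commands a nightly job becomes trivial. A minimal sketch, assuming you are already logged in and that "my-mariadb" is a hypothetical service instance name (the release notes suggest the commands take the instance name as an argument):

cf create-backup my-mariadb   # trigger a new backup
cf backups my-mariadb         # list backups to verify it was created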
Despite the answer from Matthias Winzeler saying it's not possible, it is in fact totally possible to automate MariaDB backups through the command line.
I developed a plugin for the CF CLI:
https://github.com/gsmachado/cf-mariadb-backup-plugin
In the future, such a plugin could be extended to back up any kind of service that is supported by the Cloud Foundry provider's API (in this case, the Swisscom AppCloud API).

MariaDB Backup on Swisscom Cloud

I'm interested in the MariaDB Service from the Swisscom Cloud.
https://docs.developer.swisscom.com/service-offerings/mariadb.html
What backup capabilities are offered by the Swisscom Cloud?
Is there something similar to what's available on Pivotal Cloud Foundry?
https://docs.pivotal.io/p-mysql/backup.html
Perfect timing for asking: we have just released a major update to our platform, and instant backup/restore for MariaDB is a new feature that is available from today!
You can take backups and restore them from the administration console (GUI)
Michal Maczka
Product Manager Application Cloud
There's a MariaDB backup plugin for the CF CLI:
https://github.com/gsmachado/cf-mariadb-backup-plugin
Then you are able to automate backup creation; e.g., you can come up with a bash script that creates a backup every day, or every two days, as sketched below.
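For example, a hypothetical crontab entry (backup-mariadb.sh is a placeholder wrapper around the plugin's backup command; check the plugin's README for the exact command name):

# Run the backup script every two days at 02:00.
0 2 */2 * * /home/deploy/backup-mariadb.sh >> /var/log/mariadb-backup.log 2>&1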

How to replicate MySQL database to Cloud SQL Database

I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL, so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in CloudSQL.
I'm using DBSync to do it, and it's working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
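On the Cloud SQL side, authorizing an external network can be done with gcloud; a sketch with a hypothetical instance name and CIDR range:

gcloud sql instances patch my-instance --authorized-networks=203.0.113.0/24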
This is a rather old question, but it might be worth noting that this now seems possible by Configuring External Masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket
Create a master instance in Cloud SQL
Set up a replica of that instance, using the external master IP, username and password. Also provide the dump file location
Set up additional replicas if needed
Voilà!
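A sketch of the first step, assuming a hypothetical database "inventory" and bucket "my-replication-bucket". Cloud SQL expects a consistent dump that includes the binlog coordinates:

# Take a consistent dump with binlog coordinates for replication.
mysqldump --host=ON_PREM_HOST --user=repl_user -p \
  --databases inventory \
  --single-transaction \
  --master-data=1 \
  --hex-blob \
  | gzip > inventory.sql.gz

# Upload it to Cloud Storage so the Cloud SQL replica can import it.
gsutil cp inventory.sql.gz gs://my-replication-bucket/

The remaining steps (creating the Cloud SQL master and its replicas) go through the Cloud SQL console or Admin API as described in the Configuring External Masters documentation.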