How to back up a Postgres database inside a K8s cluster

I have set up a PostgreSQL database inside a Kubernetes cluster and would now like to back up the database, but I don't know how to do it.
Can anyone help me get it done?
Thanks

Sure, you can back up your database. You can set up a CronJob that periodically runs pg_dump and uploads the dump to a cloud bucket. Check this blog post for more details.
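A minimal sketch of such a CronJob, assuming an in-cluster service named postgres, a credentials Secret, and a PVC to hold the dumps (all of these names are placeholders to adapt):

```sh
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"            # run nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: pg-dump
            image: postgres:16     # match your server's major version
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials   # hypothetical Secret
                  key: password
            command: ["/bin/sh", "-c"]
            args:
            - pg_dump -h postgres -U postgres -d mydb -F c -f /backup/mydb-$(date +%F).dump
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: postgres-backup-pvc   # hypothetical PVC
EOF
```

From there you would add a step after the pg_dump (e.g. aws s3 cp or gsutil cp) to ship the dump to a bucket, as the blog post describes.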
However, I recommend using a Kubernetes-native disaster recovery tool such as Velero, Stash, or Portworx PX-Backup.
If you use an operator to manage your database, such as zalando/postgres-operator, CrunchyData, or KubeDB, you can use its native database backup functionality.
Disclosure: I am one of the developers of the Stash tool.

Related

Is it a good idea to back up/restore Neo4j databases with Kubernetes VolumeSnapshot?

I have a Neo4j database running on Kubernetes. I want to make scheduled backups of the database. I know that Neo4j provides a set of tools for backup and restore. However, Kubernetes VolumeSnapshot also looks viable for backup and restore.
I wonder if it's a good idea to use Kubernetes VolumeSnapshot to back up/restore Neo4j databases? Will it cause errors like an inconsistent database state or faulty disk problems? Thanks.
Generally, if it is not supported by the database, then it is a bad idea.
Think of your database as being stored across:
Database files on disk
Page cache (in volatile memory)
Write ahead transaction logs on disk
A volume snapshot would not save enough information to get a consistent state of your database (unless the database is gracefully shut down).
Use the set of tools Neo4j provides for backup/restore instead.
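For reference, a sketch of what that looks like with Neo4j's own tooling, assuming Neo4j 4.x Enterprise (online backup is not available in Community edition) and a placeholder host name:

```sh
# Take an online backup over the backup protocol (default port 6362);
# "neo4j-server" stands in for your actual service/host name.
neo4j-admin backup --from=neo4j-server:6362 --backup-dir=/backups --database=neo4j

# Restore into a *stopped* instance:
neo4j-admin restore --from=/backups/neo4j --database=neo4j --force
```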

Migrate data from Citus to RDS

Since Citus is not going to be available as a managed service on AWS, I am trying to move the database to RDS (not the whole history, but only the transactional portion as an OLTP). The migration from Citus is not clear-cut because the data does not reside on a single node. I want to check the options we might have to move data from Citus to RDS.
Amazon DMS: This option is good for the supported databases (PostgreSQL), but we do not know how it will behave with Citus given the distributed nature of the engine. Has someone migrated the data to S3, to another DB, or something along these lines?
I saw this AWS whitepaper on how to ingest data from different sources: https://d1.awsstatic.com/whitepapers/aws-cloud-data-ingestion-patterns-practices.pdf?did=wp_card&trk=wp_card. DMS seems like a good option, but I do not know the internals of Citus well enough to tell whether we would get all the data and capture the CDC correctly.
A custom migration: Via a support ticket, we can access the S3 buckets that Citus uses for disaster recovery, where the WAL logs are available, and we could use something like WAL-G to take those logs and replay them into a Postgres instance. The issue here is that this is a very custom migration and the development time might be too high.
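For what it's worth, a rough sketch of that WAL-G route for a single node; the bucket layout and credentials are pure assumptions, and each Citus node has its own WAL stream, which is part of what makes this so custom:

```sh
# Point WAL-G at the DR bucket for one node (hypothetical layout)
export WALG_S3_PREFIX='s3://citus-dr-bucket/node-0'
export AWS_REGION='us-east-1'

# Restore the latest base backup into an empty data directory
wal-g backup-fetch /var/lib/postgresql/data LATEST

# Have Postgres replay archived WAL through wal-g during recovery (PG 12+)
echo "restore_command = 'wal-g wal-fetch %f %p'" >> /var/lib/postgresql/data/postgresql.conf
touch /var/lib/postgresql/data/recovery.signal
```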
Is there any other option to move data from Citus to RDS or Aurora in AWS, and what looks like a good path for the database migration? All the documents describe moving data the other way around, from Aurora or RDS to Citus.
Sumedh from Citus Cloud here. Please go ahead and open a support ticket with us to investigate solutions further. We can evaluate whether DMS is a viable approach for your use case.

How to configure Prometheus to use AWS RDS PostgreSQL/MySQL as backing store?

I am using prometheus-operator on Kubernetes with EBS volumes as the backing store through VolumeClaimTemplates. I would like to use Amazon RDS PostgreSQL as the backing store instead, so that I wouldn't have to worry about running out of storage, monitoring storage, etc.
I came across remote storage adapters for InfluxDB, Graphite and OpenTSDB here, but they don't have an adapter for PostgreSQL or MySQL.
Does anyone have any experience making Prometheus write samples to PostgreSQL/MySQL in production environments?
I came across prometheus-postgresql-adapter here, but am not sure how it will work with Amazon RDS. If you have any pointers to make it work with RDS, that too will be much appreciated.
Short answer: you can't. Amazon RDS doesn't support it yet. This GitHub issue says it all: https://github.com/timescale/prometheus-postgresql-adapter/issues/10.
Currently, if you want to use the prometheus-postgresql-adapter, you would need to run TimescaleDB yourself, which RDS does not offer.
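For completeness, pointing Prometheus at the adapter is just a remote read/write entry in prometheus.yml; the service name below is a placeholder, and 9201 is the adapter's default port:

```sh
# Append remote read/write endpoints pointing at the adapter
cat >> prometheus.yml <<'EOF'
remote_write:
  - url: "http://prometheus-postgresql-adapter:9201/write"
remote_read:
  - url: "http://prometheus-postgresql-adapter:9201/read"
EOF
```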

Setting up a backup strategy for a PostgreSQL database on Cloud Foundry

We have set up a community PostgreSQL service on Cloud Foundry (IBM Bluemix). This is a free service, and no automated backup and recovery is supported out of the box.
Is there a way to set up a standby server or a regular backup in case there is any data corruption/failure?
IBM Compose and ElephantSQL can provide this service at a cost, but we are not ready for that yet.
PostgreSQL is an experimental service, and there is no dashboard or other advanced features (daily backup, for example) that you can find in the other services you mentioned. If you want a backup, you could write an ad-hoc script that saves/exports all tables as you want, and run it every day.
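A minimal sketch of such a script; the connection URI is a placeholder for the credentials from your service binding (VCAP_SERVICES):

```sh
#!/bin/sh
# Dump the bound database to a dated file
PG_URI='postgres://user:pass@host:5432/mydb'   # placeholder credentials
pg_dump "$PG_URI" -F c -f "backup-$(date +%F).dump"
# ...then copy the file somewhere durable (object storage, another host, etc.)
```

Run it from cron or a scheduled task and you have a rudimentary daily backup.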
If you need PostgreSQL, you can create a Compose for PostgreSQL service ($17.50/mo for the first GB and $12 per extra GB).
We used PostgreSQL Studio and deployed it on IBM Bluemix. The database service was connected to the pgstudio interface (this restricts access to only the connected databases). We also had to make minor changes to pgstudio so that we could use pg_dump through the interface.
The result: we could manually dump the data. This solution works well, as we could take regular dumps (though manually).
In the free tier you are right in saying that you can't get backups. Those features are available only in the Compose for PostgreSQL service, but that's a paid service.

How to replicate MySQL database to Cloud SQL Database

I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in CloudSQL.
I'm using DBSync to do it, and it's working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by Configuring External Masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket (see the sketch below)
Create a master instance in Cloud SQL
Set up a replica of that instance using the external master IP, username and password; also provide the dump file location
Set up additional replicas if needed
Voilà!
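As a sketch of the first step, with host, user, database and bucket names all as placeholders:

```sh
# Dump the external master in a replication-friendly way
mysqldump --databases inventory \
  --single-transaction --hex-blob --set-gtid-purged=ON \
  -h onprem-mysql -u repl_user -p > inventory.sql

# Upload the dump to a storage bucket so the Cloud SQL replica can be seeded from it
gsutil cp inventory.sql gs://my-migration-bucket/inventory.sql
```

The replica creation itself then happens through the Cloud SQL external master configuration described above.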