Is there a way to recover an orphaned Confluence pgsql database? - postgresql

We are experimenting with Kubernetes and Confluence in the cloud and have deployed Confluence connected to a pgsql database. When applying an update, something happened that caused the pgsql pod to tank and lose the persistent volume connections.
Thankfully the volume's reclaim policy was set to Retain, so we still have the volume, and I have since been able to point a new pgsql instance at it, but I can't find a way to get Confluence to see this existing database. Confluence just proceeds to the initial fresh-install screens. I've tried installing it against a temporary database and then modifying the confluence.cfg.xml file to point to the old data once completed, but Confluence will not restart when I try this.
Any help is appreciated.

The web installer should have a step where you can select "My own database". From there you can configure the database credentials and host. Go ahead and let the installer run; it will overwrite the default settings but will retain your existing data.
Also, you may want to get on the psql shell via the console and check that your data actually exists and you haven't ended up with an empty database.
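For example, something along these lines (host, user, and database names here are placeholders for your own setup; Confluence's schema normally includes a CONTENT table you can count):

    # Connect to the recovered database (connection details are placeholders)
    psql -h db-host -U confluence -d confluence

    -- Inside psql: list tables and sanity-check that Confluence data exists
    \dt
    SELECT COUNT(*) FROM content;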
If you're still stuck, reach out here and we can check out the next steps.

In my case the original solution posted here is accurate.
However, I had to do this in a non-containerized environment. I installed Confluence on a VM using a blank database, then modified the confluence.cfg.xml file to point to the pgsql database in the Kubernetes cluster and restarted Confluence. I was able to see my data, so I then used Confluence's XML export feature to grab the dataset. I then blew away the Kubernetes environment, re-created it from scratch, and imported the backed-up XML into the new instance. Not a super clean way of doing it, but it got me where I needed to be.
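For reference, the database connection settings live in the hibernate.connection.* properties of confluence.cfg.xml. A rough sketch of what to edit, with the host, database name, and credentials as placeholders for your own values:

    <!-- confluence.cfg.xml: point Confluence at the existing database.
         Host, port, database, and credentials below are placeholders. -->
    <property name="hibernate.connection.url">jdbc:postgresql://postgres.default.svc.cluster.local:5432/confluence</property>
    <property name="hibernate.connection.username">confluence</property>
    <property name="hibernate.connection.password">changeme</property>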

Related

How to change default location for storing Rundeck Projects

I am using the Rundeck Docker container. It was running well for 2 months and suddenly it crashed. I lost all the data for a project that I had created using the CLI. Is there a way to change the default path for storing all project-related data, including job definitions, resources, etc.?
Out of the box, Rundeck stores project/job data in its internal H2 database. That database is intended only for testing purposes and will probably fail with a lot of data (storing projects on the filesystem is deprecated now), so the best approach is to use a "real" database like MySQL, PostgreSQL, or Oracle. That way Rundeck stores all project/job data on a robust backend.
Check these MySQL, PostgreSQL, and Oracle Docker environment examples.
Of course, having a backup policy for your instance would be ideal to keep all your instance data safe.
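For example, a minimal PostgreSQL datasource configuration in rundeck-config.properties might look like this (host, database name, and credentials are placeholders; you also need the PostgreSQL JDBC driver available to Rundeck):

    # rundeck-config.properties: switch Rundeck from H2 to PostgreSQL
    dataSource.driverClassName = org.postgresql.Driver
    dataSource.url = jdbc:postgresql://db-host:5432/rundeck
    dataSource.username = rundeck
    dataSource.password = changeme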

How to move Postgres DB from one Kubernetes cluster to another

I have a big (300 GB) Postgres DB running on a GKE cluster (StatefulSet, SSD volume). I need to move this DB to another GKE cluster.
What is the easiest way to accomplish it?
I tried to do it by piping pg_dump into pg_restore, but it takes forever and, for some reason, not all constraints/triggers were recreated.
Is there any proper way to gracefully "shut down" a Postgres server running in Kubernetes and copy the /pgdata folder directly (from one volume to another)?
Other ideas?
Thanks.
I have a few ideas (listed from the most to the least likely) about how you could approach this:
Remember to use the proper format with pg_dump. The default plain format cannot be fed to pg_restore; either specify a different format with pg_dump (custom or tar) or restore a plain-format dump with psql -f instead of pg_restore (see the example commands below). Remember that it might take a while.
You can use a tool to assist you with that, for example pghoard.
You can make a tarred backup of your DB and copy it as an object via Google Cloud Storage.
You can try to create PVCs manually, attach pods to those PVCs, and then copy your dataset onto those pods.
Finally, you may try to create an Init container and use it later for your new cluster.
I suggest starting from point 1, as I think it is the most likely solution. If that is not enough, try the later points from the list.
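As a rough sketch of point 1 (host and database names are made up), a custom-format dump keeps constraints and triggers and can be restored in parallel with pg_restore:

    # Custom-format dump (-Fc); includes constraints/triggers in the TOC
    pg_dump -h old-db-host -U postgres -Fc -f mydb.dump mydb

    # Restore into the instance in the new cluster; -j 4 runs parallel jobs
    pg_restore -h new-db-host -U postgres -d mydb -j 4 mydb.dump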
Please let me know if that helped.

Instance of MongoDB server shut down when creating an AMI image

I was trying to clone an instance of a MongoDB server on EC2. When I selected the instance and did 'create image', it shut down our MongoDB server completely. The IP has not changed, but we are unable to connect to it. I tried to reboot the server, as well as stopping and starting it. The clone of the AMI has not been touched. How can I get the server back up?
When we try to start the server, the terminal just says 'failed.'
We need some more information about this to give an exact solution. Please describe how you are taking the AMI. But we can assume what might have happened.
As per AWS, a snapshot or AMI is not consistent unless the instance is rebooted, so by default the instance is rebooted while the AMI is taken. If you take the AMI from the AWS console, there is a "No reboot" checkbox to prevent this.
If you are using any Lambda or CLI tooling, just disable the reboot there as well.
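For example, with the AWS CLI the relevant flag is --no-reboot (the instance ID and image name below are placeholders). Keep in mind that skipping the reboot trades away filesystem consistency:

    # Create the AMI without rebooting (and without shutting down) the instance
    aws ec2 create-image \
      --instance-id i-0123456789abcdef0 \
      --name "mongodb-server-backup" \
      --no-reboot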

How to replicate MySQL database to Cloud SQL Database

I have read that you can replicate a Cloud SQL database to MySQL. Instead, I want to replicate from a MySQL database (that the business uses to keep inventory) to Cloud SQL so it can have up-to-date inventory levels for use on a web site.
Is it possible to replicate MySQL to Cloud SQL? If so, how do I configure that?
This is something that is not yet possible in CloudSQL.
I'm using DBSync to do it, and it is working fine.
http://dbconvert.com/mysql.php
The Sync version does what you want.
It works well with App Engine and Cloud SQL. You must authorize external connections first.
This is a rather old question, but it might be worth noting that this now seems possible by Configuring External Masters.
The high-level steps are:
Create a dump of the data from the master and upload the file to a storage bucket (see the sketch after these steps)
Create a master instance in CloudSQL
Set up a replica of that instance using the external master IP, username, and password. Also provide the dump file location
Set up additional replicas if needed
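As a sketch of the first step (host, user, database, and bucket names are placeholders; check the Configuring External Masters docs for the exact dump flags your MySQL version needs), the dump should come from a consistent snapshot of the external master:

    # Dump the external master with a consistent snapshot, then upload
    # the file to a Cloud Storage bucket for the replica setup to use
    mysqldump -h onprem-mysql -u repl_user -p \
      --databases inventory --single-transaction > inventory.sql
    gsutil cp inventory.sql gs://my-sql-dumps/inventory.sql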
Voilà!

Using vagrant & puppet, how to create and restore a database on fresh postgresql-server instance?

I have freshly provisioned instances of Apache and Postgres all set to go. I would like to restore a dump or mount a logical volume with data for the Postgres instance. Likewise, I'd like to ensure that the dump is written out, or the volume unmounted, when I bring the instance down.
Can I use a logical volume this way? How should I approach this?
I see this:
How to handle data such as Mysql, web sites sources with Vagrant?
The other answer had the following suggestions. Below I will discuss their implications for PostgreSQL.
In the current version of Vagrant (1.0.3), you have two main options:
Use shared folders. You can put your MySQL data directory into a shared folder so that the data comes back onto your host machine. The con of this is that shared folders are actually quite slow compared to the native VM filesystem in VirtualBox, and you can run into weird permission issues as well.
Setup a task (rake, make, etc.) to copy your MySQL data to your shared folder on demand. Then, before you decide to destroy your VM, you can run the task to export your data to your shared folder, then you can reimport the data when you bring your VM back up.
The shared folders approach may work, but if you do this you need to be extremely careful with file permissions. PostgreSQL tends to be very paranoid about this, so you may have to be cautious about group permissions.
I would recommend something based on the second approach with a base backup (using pg_basebackup), since you get a copy of your database. You can also archive your WAL segments to that directory to have something that can be restored on demand to near-present conditions.
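A minimal sketch of that idea, assuming /vagrant is the shared folder and that local authentication is already set up (paths are placeholders):

    # Take a full base backup into the shared folder
    # (-Fp plain layout, -Xs stream WAL, -P show progress)
    pg_basebackup -h localhost -U postgres -D /vagrant/pg_backup -Fp -Xs -P

    # To keep it restorable to near-present, archive WAL segments to the
    # shared folder as well; in postgresql.conf:
    #   archive_mode = on
    #   archive_command = 'cp %p /vagrant/wal_archive/%f'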