Recently we have been seeing a rare issue with Cloud SQL 5.7 that clears the tables in our DB. This has happened to a couple of our DB instances. We noticed the incidents started on Thursday 2/13/2020; no changes were made manually to delete the tables, yet the tables have disappeared from the database.
Any thoughts?
We are experiencing a problem on a Postgres instance where one long-running transaction in one database prevents the vacuum process from removing dead tuples in the tables of another database (on the same instance).
It seems crazy to me that xmin is shared across databases.
So here are my questions:
Is this normal behavior, or did we misconfigure something?
Why is it so?
Is there a workaround?
Thanks, folks.
Postgres version: 12.4
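For what it's worth, the session holding back the horizon is visible from any database with a query along these lines (backend_xmin in pg_stat_activity is cluster-wide, which is exactly the behavior that surprised us):

-- list backends holding an xmin, oldest first
SELECT pid, datname, usename, state, xact_start,
       age(backend_xmin) AS xmin_age
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;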
I have a Postgres database instance in GCP running on version 9.6.
I want to upgrade Postgres to a newer version, and I'm using GCP's "Database Migration" service for that purpose.
I have 2 databases in the instance, taking up around 800 GB in total.
The problem is that the migration gets stuck. There are no errors in the migration log.
[Screenshot: copy of the migration monitoring]
In short, my question is:
How can I check which phase the migration is in, and what the issue is?
Thanks.
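In case it helps anyone answer: the only progress indicator I've found so far is the replication stats on the 9.6 source, assuming the migration service drives ordinary replication under the hood (note the columns are *_location on 9.6; they were renamed *_lsn in PG 10):

SELECT application_name, client_addr, state, sent_location, replay_location
FROM pg_stat_replication;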
I've recently joined a company with a mixed set of databases that include a Redshift cluster and some SQL databases. I'd like to use a single IDE to access both for analytical reporting, so I don't have to switch between tools. I'm currently using workbench, which works, but it's not clicking with me.
I do like Azure Data Studio, but it's SQL Server and Postgres only. Given the similarities between Redshift and Postgres, I thought I'd see if I could connect using the Postgres driver.
I've installed the Postgres extension and can "connect" to the database. However, when I try to explore the database using the tree view, I get the error message 'Cannot Expand Node'. When I run a simple query that works in workbench, e.g.
Select * from [server].[database].[table]
I get the following error message:
Started executing query at Line 1
cursors can only be used within the transaction that created them.
Total execution time: 00:00:00.019
I know I'm trying to do something that shouldn't be done. And if I can't, I can't. But has anyone here managed to get a Redshift connection going in Azure Data Studio?
FWIW, I've come across a GitHub repository that may be a Redshift driver for Data Studio - but it looks like a clone of the Postgres driver, with no activity since March (not even renaming the 'Postgres' titles to Redshift)... and therefore I'm dubious.
I have a PostgreSQL database on my CentOS server.
Unfortunately, since yesterday, all the tables in all existing schemas are gone.
I checked the log file and saw there was an unexpected reboot in recent days, probably an OS crash.
The Postgres server now starts correctly, and I can still see triggers and sequences; there are no other problems.
Can I do anything to recover these tables?
Thanks.
I have two databases on Amazon RDS, both Postgres: Database 1 and Database 2.
I need to restore an instance from a snapshot of Database 1 for my staging environment. (Database 2 is my current staging DB.)
However, I want the data from a few of the tables in Database 2 to overwrite the corresponding tables in the newly restored snapshot. What is the best way to do this?
When restoring RDS from a snapshot, a new database instance is created. If you only wish to copy a portion of the snapshot:
Restore the snapshot to a new (temporary) database
Connect to the new database and dump the desired tables using pg_dump
Connect to your staging server and restore the tables using pg_restore (most likely dropping any matching existing tables first)
Delete the temporary database
pg_dump in its plain-text format outputs the SQL commands that recreate the tables and reload the data (plain dumps are replayed with psql, while pg_restore handles the custom and directory archive formats). Look at the contents of a dump to understand how the restore process actually works.
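For illustration, a plain-text dump of a small hypothetical table (the names here are made up) looks roughly like this:

CREATE TABLE public.users (
    id integer NOT NULL,
    email text
);

COPY public.users (id, email) FROM stdin;
1	alice@example.com
2	bob@example.com
\.

ALTER TABLE ONLY public.users
    ADD CONSTRAINT users_pkey PRIMARY KEY (id);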
I hope this still helps someone else.
My team and I faced a similar issue. We also had 2 Postgres databases, and we also just needed to back up some tables from db1 to db2.
What we did was use an AWS Lambda function written in Python that connected to both databases and checked whether db1.table1 had the same data as db2.table1; if not, the Lambda function wrote the missing rows from db1.table1 into db2.table1. We chose Lambda because we wanted to automate the process, since the main DB (let's say db1) is constantly being updated. It also allowed us to back up only the tables we wanted (say, 3 tables out of 10) instead of the whole database.
Note: you may want to do these writes through temporary tables to avoid issues with any constraints you have on your tables.
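As a rough sketch of that staging step (table and column names are made up, and it assumes table1 has a unique key on id):

-- Stage the rows fetched from db1 in a temp table first
CREATE TEMP TABLE staging_table1 (LIKE table1 INCLUDING DEFAULTS);

-- (the Lambda bulk-loads the rows from db1 into staging_table1 here)

-- Then insert only the rows db2 is missing, skipping ones that already exist
INSERT INTO table1
SELECT * FROM staging_table1
ON CONFLICT (id) DO NOTHING;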