GCP Cloud SQL point-in-time recovery - PostgreSQL

I am working on DR for Cloud SQL. I found that we can enable point-in-time recovery for Cloud SQL and restore the data to a particular point in time in case of any data corruption.
In the documentation, I found that we have to create a clone after enabling point-in-time recovery.
Creating a clone gives the cloned database a new IP address. Will the admin credentials change when we create a clone of the database, or will they stay the same?

As mentioned in the link:
Point-in-time recovery allows you to recover an instance to a specific point in time. For example, if an operator ‘fat finger’ error causes a loss of data, you can
recover a database to the state it was in just before the error occurred. It’s also great for testing your application and diagnosing issues, since you can clone your live data to a testing database. See the point-in-time-recovery docs for more information.
Admin credentials will be the same for both databases. For more information, refer to the link above, where recovery with Google Cloud SQL (PostgreSQL) is explained.
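For example, a point-in-time clone can be created with gcloud along these lines (a sketch; the instance names and timestamp are placeholders, and the timestamp must be RFC 3339 and fall within your retained transaction log window):

# clone the instance to its state at the given point in time
gcloud sql instances clone my-source-instance my-recovered-instance \
    --point-in-time '2019-03-13T10:00:00Z'

The clone comes up as a new instance with its own IP address, but it keeps the same users and passwords as the source.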

Related

pgAudit not logging anything in GCP Cloud SQL

I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=on (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it was successful. I've also enabled the Data Access audit logs as suggested in the Google documentation (it didn't specify which IAM permissions were needed, so I erred on the side of everything). I've also tried setting pgaudit.log=all to see if anything at all would be captured, with the same result: nothing is logged.
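For reference, the flag setup above was done roughly like this (instance name is a placeholder; note that --database-flags replaces the full existing set of flags, so they all have to be listed together):

# set the pgAudit flags; this triggers an instance restart
gcloud sql instances patch my-instance \
    --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=ddl,pgaudit.log_client=on,pgaudit.log_relation=on

# then, after the restart, in psql:
# CREATE EXTENSION pgaudit;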
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Does anyone have any insight as to what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally tested by creating and dropping a table in an effort to get logs for those operations, i.e.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including setting the flags additionally at the database level. So far, nothing seems to be getting logged.
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
Setting log_statement=ddl as a database flag allows DDL statements to be logged without using pgAudit, so the majority of the setup above was unnecessary. I set this flag and the operations I needed are now logged.
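For anyone reproducing this, the flag can be set with gcloud along these lines (instance name is a placeholder; as above, --database-flags replaces any flags already set, so include them all):

gcloud sql instances patch my-instance --database-flags=log_statement=ddl

The logged DDL statements then show up in Cloud Logging under the instance's postgres.log.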

Debezium on AWS RDS Postgres with rds.force_ssl not working well

Has anyone managed to get Debezium to work over AWS RDS Postgres with rds.force_ssl turned on in the parameter group?
The connector seems to work for a while, and then we begin to receive errors like "Database connection failed when writing to copy" and "Exception thrown while calling task.commit()".
I have scoured the web for this issue; many people have encountered it, and many Jira issues have been opened about it.
The response is generally "check your network configuration" or "disable your SSL".
I just can't get it to work for some reason, and obviously disabling encryption in transit is not an option in production use cases (at least in ours).
I would appreciate any kind of help or insight into how to solve this!
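One thing worth ruling out first is whether the connector itself is configured for SSL, since rds.force_ssl rejects non-SSL connections and, depending on the Debezium version, database.sslmode may default to disable. A minimal sketch of updating the connector through the Kafka Connect REST API (the connector name, host, database names, and credentials are all placeholders):

curl -X PUT http://localhost:8083/connectors/my-rds-connector/config \
    -H 'Content-Type: application/json' \
    -d '{
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "REPLACE_ME",
        "database.dbname": "mydb",
        "database.server.name": "mydb",
        "plugin.name": "pgoutput",
        "database.sslmode": "require"
    }'

With SSL left off on the connector side, RDS can drop the session once force_ssl is enforced, which could surface as the intermittent errors described above.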

How to take backup of Tableau Server Repository (PostgreSQL)

We are using version 2018.3 of Tableau Server. Server stats such as user logins are logged into a PostgreSQL DB, and that data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up the data somewhere like HDFS, or anywhere on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e., enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
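A minimal sketch of that flow (the host, password, and file names are placeholders; by default the repository listens on port 8060 and the database is named workgroup):

# enable external access to the repository; note this restarts Tableau Server
tsm data-access repository-access enable --repository-username readonly --repository-password REPLACE_ME

# dump the repository with any standard PostgreSQL client, e.g. pg_dump
pg_dump -h my-tableau-server -p 8060 -U readonly -F c -f workgroup.dump workgroup

# disable access again when done
tsm data-access repository-access disable

From there the dump file can be copied to HDFS or anywhere else on a Linux server.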
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the open-source LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who clones the whole Postgres repository database periodically so he can analyze stats offline. I'm not sure what approach he uses to clone it. So you have several options.

Google SQL instance stuck after operation "restore from backup"

After I proceeded to restore the database from an automated backup file generated on Mar 13, 2019, the SQL instance has been stuck in this state forever: "Restoring from backup. This may take a few minutes. While this operation is running, you may continue to view information about the instance."
The database size is very small, less than 1 MB.
For future users who experience problems like this, here is how you can handle it:
If you have a Google Cloud support package, file a support ticket directly with support for the quickest response.
Otherwise please file a private GCP issue describing the problem, remembering to include the project id and instance name.
However, Cloud SQL instances are monitored for stuck states like this, so the issue will often resolve itself within a few hours.
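You can also check from the command line whether the restore operation is still actually pending; a quick sketch with gcloud (instance name is a placeholder):

# list recent operations for the instance and their status
gcloud sql operations list --instance=my-instance --limit=5

If the operation shows as RUNNING for many hours, those are the operation and instance details to include, along with the project id, in the support ticket or issue.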

Enable local development access to PostgreSQL DB on Amazon RDS

I'm in the early stages of a web project which requires a database. Until now, I've managed to get away with using an SQLite database locally for development and a PostgreSQL database running on AWS RDS in "production" (mainly just for alpha testers). I haven't really had any state in the database that I couldn't just blow away and re-seed whenever necessary.
However, I'm now at the point in my project where I'll have state in the production database that I can't easily reproduce via seeding in my local SQLite database. So I've decided to create a separate development database via a script which takes the latest snapshot of my production database and creates a development database from it. I've managed to get this script running with some degree of success...
But I'm having difficulty connecting to this development database from my local development environment. Each time I try to connect, I time out. Most of the resources on Amazon seem to indicate that this is likely a security group issue. The security group corresponding to my database currently has inbound settings whose source is the RDS security group itself (security group ID redacted).
Is there something obviously wrong here? How do I set up my security groups such that I can connect to this development database from my local machine?
The source shouldn't be set to the same security group, but rather to whatever source you'll be connecting from. You can use 0.0.0.0/0 to allow traffic from any source, though restricting the rule to your own IP is safer.
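As a sketch, a rule restricted to your current public IP can be added with the AWS CLI (the security group id is a placeholder, and the RDS instance must also be publicly accessible for this to work from outside the VPC):

# look up your current public IP
MY_IP=$(curl -s https://checkip.amazonaws.com)

# allow inbound PostgreSQL traffic from that address only
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5432 \
    --cidr ${MY_IP}/32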