pgAudit not logging anything in GCP Cloud SQL - postgresql

I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=yes (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it was successful. I've also enabled the Data Access audit logs as suggested in the Google documentation (it didn't specify which IAM permissions were needed, so I erred on the side of enabling everything). I've also tried setting pgaudit.log=all to see if ANYTHING could be captured, with the same result: nothing is being logged.
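As a sanity check, something like the following can be run from any SQL session to confirm what the instance actually sees (this is plain PostgreSQL, nothing Cloud SQL specific):
-- Confirm the extension is installed in the database being tested
SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';
-- Confirm the settings the current session is running with; an
-- "unrecognized configuration parameter" error here would suggest
-- the pgaudit library isn't actually loaded
SHOW pgaudit.log;
SHOW pgaudit.log_client;
SHOW pgaudit.log_relation;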
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Does anyone have any insight as to what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally attempted testing by creating and dropping a table in an effort to get the log entries for those operations, e.g.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including setting the flags additionally at the database level. So far, nothing seems to be getting logged.
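For reference, setting the flags at the database level looks roughly like this (the database name is a placeholder, and the pgaudit.* settings may require elevated privileges):
-- Per-database override; new sessions pick it up on connect
ALTER DATABASE mydatabase SET pgaudit.log = 'ddl';
ALTER DATABASE mydatabase SET pgaudit.log_relation = 'on';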
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
(Screenshots: Database Flags, Cloud Logging API Data Access Log, Cloud SQL Data Access Log)

Setting the log_statement=ddl flag allows DDL statements to be logged without using pgAudit, so the majority of the setup above was unnecessary. Set this flag and the operations I needed are now logged.
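If it helps anyone else, a quick way to confirm the flag took effect and to generate a test entry (the table is a throwaway):
-- Should return "ddl" once the flag has been applied
SHOW log_statement;
-- Any DDL should now show up in the instance's PostgreSQL log in Cloud Logging
create table dstest_table (columnone varchar(150));
drop table dstest_table;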

Related

How can I set log_warnings=2 on Google Cloud SQL?

I'm trying to see the cause of Aborted_connects and Aborted_clients on our MySQL instance. When I try to run SET GLOBAL log_warnings=2 I receive the error: Access denied; you need (at least one of) the SUPER privilege(s) for this operation. Since users aren't created with SUPER privileges in Google Cloud's MySQL offering, and log_warnings doesn't appear as a supported database flag that I could set and apply by restarting the instance, I don't know how to make the logs more verbose.

How to log SQL queries on a Google Cloud SQL PostgreSQL 11 instance?

I have to log all DDL and DML queries executed on a Google Cloud SQL PostgreSQL instance.
I checked a lot of websites, but there is no clear information available. I tried using the pgAudit extension, but that is not supported by Cloud SQL.
Can someone please suggest the extension to be used or any other way of logging SQL queries?
Also, if user logins can be logged, that would be helpful too.
Short Answer - Database Flags
The solution provided in the other answer can be used if PostgreSQL is installed locally or if we have access to the server container. In Google Cloud SQL, however, the postgresql.conf file cannot be accessed directly on the instance.
I found that this can be achieved on a Google Cloud SQL instance by setting the relevant parameters from this link - PostgreSQL configuration parameters - as database flags.
Note: not all of the parameters are supported, so verify against the official Google documentation linked below.
Google Cloud Database Flags for PostgreSQL
Add in postgresql.conf:
log_statement=mod
https://www.postgresql.org/docs/12/runtime-config-logging.html says it "logs all ddl statements, plus data-modifying statements such as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type."
To log connections and disconnections, add in postgresql.conf:
log_connections=on
log_disconnections=on
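To confirm the values actually in effect (whether they came from postgresql.conf or from database flags), something like this can be run:
-- Shows the live values of the logging parameters mentioned above
SELECT name, setting
FROM pg_settings
WHERE name IN ('log_statement', 'log_connections', 'log_disconnections');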
On October 12, 2020, Google Cloud SQL for PostgreSQL added support for pgAudit. Please check these docs for more information.

How to take backup of Tableau Server Repository (PostgreSQL)

We are using the 2018.3 version of Tableau Server. Server stats like user logins and other data are getting logged into the PostgreSQL DB, and the same are being cleared regularly after 1 week.
Is there any API available in Tableau to connect to the DB and take a backup of the data somewhere like HDFS or any other place on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the (whitelisted) machines that need it, create or use an existing read-only account to access the repository, and ideally to disable access when your admin programs are complete (i.e. enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
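As a sketch, once repository access is enabled, any PostgreSQL client can connect with the read-only account and explore what's available before committing to specific reports (the table names vary between Tableau versions, so listing them first is safer than hard-coding them):
-- List the repository tables visible to the read-only account
SELECT schemaname, tablename
FROM pg_catalog.pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;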
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, using the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.

MongoDB logging and authentication

I am trying to get a MongoDB instance working with authentication, using both the Java and PHP drivers. I've added user roles to the MongoDB but haven't yet turned on authentication (so clients can log in with usernames and passwords, but they don't have to, and user roles are not yet enforced).
To check that everything is working, before actually turning authentication on, I've been looking at the mongod.log file. I see things like:
2015-11-17T08:47:19.052+0000 I NETWORK [initandlisten] connection accepted from ###:### #158126 (46 connections now open)
2015-11-17T08:47:19.960+0000 I ACCESS [conn158126] Successfully authenticated as ### on ###
But.... I also see quite a few connections without the "ACCESS" line. However, when cross referencing with logs of the clients, it seems they are trying to connect with authentication.
What can be going on?
Is it perhaps the case that the ACCESS log only occurs if some database action is taken? So, e.g. if a client connects but doesn't try to read or write, would I not see the 2nd line?
Is it perhaps the case that the ACCESS log only occurs if some database action is taken?
At least for the Java driver I'm using, "yes" is the answer. I ran a test - connecting, requesting a DB and collection, but doing nothing more - and no authentication check was triggered. It's only when you try to read or write that the authentication happens.

Postgres: "ERROR: cached plan must not change result type"

This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
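The situation is easy to reproduce directly in SQL as well; the table and column names below simply mirror the statement from the error report and are illustrative only:
-- Session 1: prepare a statement whose result includes the column
PREPARE getcode (text) AS select code, is_deprecated from country where code = $1;
-- Session 2: change the type of one of the returned columns
ALTER TABLE country ALTER COLUMN code TYPE text;
-- Session 1 again: the cached plan's result type no longer matches
EXECUTE getcode('US');  -- ERROR: cached plan must not change result type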
I'm adding this answer for anyone landing here by googling ERROR: cached plan must not change result type when trying to solve the problem in the context of a Java / JDBC application.
I was able to reliably reproduce the error by running schema upgrades (i.e. DDL statements) while my back-end app that used the DB was running. If the app was querying a table that had been changed by the schema upgrade (i.e. the app ran queries before and after the upgrade on a changed table), the postgres driver would return this error because apparently it caches some schema details.
You can avoid the problem by configuring your pgjdbc driver with autosave=conservative. With this option, the driver will be able to flush whatever details it is caching and you shouldn't have to bounce your server or flush your connection pool or whatever workaround you may have come up with.
Reproduced on Postgres 9.6 (AWS RDS) and my initial testing seems to indicate the problem is completely resolved with this option.
Documentation: https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
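For example, the parameter can be supplied directly on the connection URL (the host and database names here are placeholders); it can equally be set through the Properties passed to the driver or through your connection pool's driver properties:
jdbc:postgresql://dbhost:5432/mydb?autosave=conservative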
You can look at the pgjdbc Github issue 451 for more details and history of the issue.
JRuby ActiveRecords users see this: https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/postgresql/connection_methods.rb#L60
Note on performance:
As per the reported performance issues in the above link - you should do some performance / load / soak testing of your application before switching this on blindly.
On doing performance testing on my own app running on an AWS RDS Postgres 10 instance, enabling the conservative setting does result in extra CPU usage on the database server. It wasn't much though, I could only even see the autosave functionality show up as using a measurable amount of CPU after I'd tuned every single query my load test was using and started pushing the load test hard.
For us, we were facing a similar issue. Our application works across multiple schemas, and whenever we made schema changes this issue started occurring.
Setting the prepareThreshold=0 parameter in the JDBC connection parameters disables the use of server-side prepared statements, and with it the statement caching that triggers this error. This solved it for us.
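For reference, this is just another pgjdbc connection parameter, so it can go on the URL the same way (host and database names are placeholders):
jdbc:postgresql://dbhost:5432/mydb?prepareThreshold=0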
I got this error; I manually ran the failing select query, and that fixed the error.