How can I set log_warnings=2 on Google Cloud SQL?

I'm trying to see the cause of Aborted_connects and Aborted_clients on our MySQL instance. When I try to run SET GLOBAL log_warnings=2 I receive the error: Access denied; you need (at least one of) the SUPER privilege(s) for this operation. Since users aren't created with SUPER privileges in Google Cloud's MySQL offering, and log_warnings doesn't appear in the list of supported database flags (so I can't set it that way and restart the instance), I don't know how to make the logs more verbose.
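For reference, the supported way to change a server variable on Cloud SQL is through a database flag, e.g. (a sketch with a placeholder instance name; since log_warnings isn't in the supported-flags list, this particular patch is rejected):

    gcloud sql instances patch my-instance \
        --database-flags=log_warnings=2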

Related

Is it possible to limit user connection IP range with SQL instead of editing pg_hba.conf?

We are using AWS PostgreSQL RDS and we would like to restrict certain accounts so they can only connect from a specific set of CIDR ranges. Since RDS is a DBMS managed by AWS, we do not have access to pg_hba.conf.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
Judging by the CREATE ROLE and CREATE USER DDL in PostgreSQL, this does not seem to be an option.
https://www.postgresql.org/docs/current/sql-createrole.html
https://www.postgresql.org/docs/current/sql-createuser.html
You can try to write your own rules via a checking function/procedure, using SELECT inet_server_addr() (just keep in mind that it works only for non-localhost connections).
Some other useful functions (like local/remote IP/port) are listed here: https://www.postgresql.org/docs/9.4/functions-info.html
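Building on that suggestion, here is a minimal sketch of such a check. The function name enforce_client_cidr is made up, and it uses inet_client_addr() (the connecting client's address) rather than inet_server_addr(), since the goal is to filter by the client's CIDR; like inet_server_addr(), it returns NULL for local (Unix-socket) connections:

    -- Hypothetical helper: abort unless the client connects from an
    -- allowed CIDR. inet_client_addr() is NULL for Unix-socket
    -- connections, so those are rejected here as well.
    CREATE OR REPLACE FUNCTION enforce_client_cidr(allowed cidr)
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
        IF inet_client_addr() IS NULL
           OR NOT (inet_client_addr() << allowed) THEN
            RAISE EXCEPTION 'connection from % not permitted', inet_client_addr();
        END IF;
    END;
    $$;

    -- Called by the application at session start:
    SELECT enforce_client_cidr('10.1.0.0/16');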

pgAudit not logging anything in GCP Cloud SQL

I'm hoping for some insight into a problem I'm having with using pgAudit for a PostgreSQL 12 managed instance in GCP Cloud SQL.
Thus far, I've done the following to set this up:
Database flags:
cloudsql.enable_pgaudit=on
pgaudit.log=ddl
pgaudit.log_client=yes (turned this one on for debugging purposes)
pgaudit.log_relation=on
After enabling the cloudsql.enable_pgaudit flag and restarting the instance, I issued a CREATE EXTENSION pgaudit command and confirmed that it succeeded. I've also enabled the Data Access audit logs as suggested in the Google documentation (they didn't specify which IAM permissions were needed, so I erred on the side of enabling everything). I've also tried setting pgaudit.log=all to see if ANYTHING could be captured, with the same result: nothing is logged.
With pgaudit.log_client=on, I would expect to see the audit log information returned when viewing the Server Output in DBeaver, but nothing appears there.
Anyone have any insight as to what I might be missing? My goal, ultimately, is to capture DDL operations with session logging. I've generally tested by creating and dropping a table in an effort to get logs for those operations, i.e.
create table dstest_table (columnone varchar(150));
drop table dstest_table;
I've tried a few more things to get this to work, including setting the flags additionally at the database level. So far, nothing seems to be getting logged.
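One sanity check worth running when debugging a setup like this (both are plain PostgreSQL queries) is to confirm from a live session that the extension is installed and that the flags actually took effect:

    SELECT extname, extversion FROM pg_extension WHERE extname = 'pgaudit';
    SELECT name, setting FROM pg_settings WHERE name LIKE 'pgaudit.%';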
Update: I never did get pgAudit to work properly; however, I found that DDL operations can be logged outside of pgAudit via the log_statement=ddl flag on the server. I set this, and I'm now getting what I need.
[Screenshots: Database Flags, Cloud Logging API Data Access Log, Cloud SQL Data Access Log]
Setting the log_statement=ddl flag allows DDL statements to be logged without using pgAudit, so most of the setup above turned out to be unnecessary. With this flag set, the operations I needed are now logged.
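For reference, the flag can also be set from the CLI (a sketch assuming an instance named my-instance; note that --database-flags replaces the whole flag list, so any flags already set, such as cloudsql.enable_pgaudit, must be repeated):

    gcloud sql instances patch my-instance \
        --database-flags=cloudsql.enable_pgaudit=on,log_statement=ddl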

CloudRun Suddenly got `Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}"`

We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server/SQL changes around that time. Currently there is no impact on the service, so we are not sure whether this is a serious issue.
The error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or whatever SQL client Cloud Run uses, is producing this error. We use the --add-cloudsql-instances flag when deploying with the gcloud run deploy CLI command.
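For context, our deploy command looks roughly like this (service, image, and connection names are placeholders):

    gcloud run deploy my-service \
        --image gcr.io/my-project/my-image \
        --add-cloudsql-instances my-project:us-central1:my-instance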
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.

Super Privilege required for running scheduled events?

I want to run scheduled events on Cloud SQL, but that requires SET GLOBAL event_scheduler = ON;, which in turn requires the SUPER privilege. As far as I know Cloud SQL doesn't grant SUPER, and understandably I see this when I run the above:
ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation
My question is whether there is an alternative here, or am I completely screwed relying on Cloud SQL?
Use the Cloud Console to edit your instance and enable the event_scheduler flag under Advanced Options.
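Once the flag is enabled and the instance has restarted, events can be created with ordinary DDL; a minimal sketch (the table and schedule are made up for illustration):

    CREATE EVENT purge_old_sessions
        ON SCHEDULE EVERY 1 DAY
        DO DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL 30 DAY;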

Get the list of allowed hosts in host-based authentication

I am aware that I have to add the IP addresses of remote hosts in pg_hba.conf file and restart the PostgreSQL server for changes to take effect.
But I would like to get a list of hosts currently allowed for the host-based authentication, directly from the server that is already running.
Similar to how I can get the max_connections setting using show max_connections;, I would hypothetically imagine it to be something like show hosts; or select pg_hosts(); (neither really exists).
Is this possible?
EDIT: I understand exposing the hosts would present a security risk. But how about the psql utility invoked directly in the database server's terminal? Does it have a special command to get the list?
The psql utility at the terminal has no permission to get that list; only the PostgreSQL server itself does.
The best way to do this (if you really must) is to create a PL/PerlU function which reads pg_hba.conf, parses it, and returns the information in the way you want it. You could even build a management system for pg_hba.conf with such functions (reloading the db might get interesting, but you could handle that with a LISTEN/NOTIFY approach).
Note, however, that if you do this, your functions have a security footprint. You would probably want to revoke permission to run the functions from PUBLIC, grant access to nobody, and thus require users to be superusers in order to run them. I would personally avoid exposing such critical information to the db unless there was a compelling reason, but I can imagine cases where it might be helpful on balance. It is certainly dangerous territory, however.
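For what it's worth, on PostgreSQL 10 and later the built-in pg_hba_file_rules view already exposes this information (to superusers), so a hand-rolled function is mainly for older versions. A minimal sketch along the lines described above; the file path is an assumption (find yours with SHOW hba_file;), and it requires CREATE EXTENSION plperlu:

    -- Hypothetical reader for pg_hba.conf; PL/PerlU (untrusted) is
    -- needed because the function touches the filesystem.
    CREATE OR REPLACE FUNCTION pg_hba_rules() RETURNS SETOF text
    LANGUAGE plperlu AS $$
        my $hba = '/etc/postgresql/12/main/pg_hba.conf';  # assumed path
        open(my $fh, '<', $hba) or elog(ERROR, "cannot open $hba: $!");
        while (my $line = <$fh>) {
            chomp $line;
            next if $line =~ /^\s*(#|$)/;   # skip comments and blank lines
            return_next($line);
        }
        close($fh);
        return;
    $$;

    -- Following the advice above, lock the function down:
    REVOKE ALL ON FUNCTION pg_hba_rules() FROM PUBLIC;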