How do you enable logging of all queries on a PostgreSQL AWS RDS instance? - postgresql

I want to monitor all queries to my PostgreSQL instance. Following these steps, I created a custom DB parameter group, set log_statement to all and log_min_duration_statement to 1, applied the parameter group to my instance, and rebooted it. Then I fired a POST request at the instance, but no record of the query was to be found in the Recent Events & Logs tab of my instance. Running SELECT * FROM table in psql, however, shows that the resource was created and the POST request worked. What am I missing in order to see the logs?

Setting log_min_duration_statement to 1 tells Postgres to log only queries that take longer than 1 ms. If you set it to 0 instead, all queries will be logged (the default, -1, disables duration-based logging).

You followed the right steps; all that is left is to make sure the parameter group is properly applied to your Postgres instance. Look at the Parameter Group entry in the Configuration Details tab of your instance, and make sure it shows the right group name followed by "in-sync".
A reboot is usually required when changing parameter groups on an instance; this may be what is (was?) missing in your case.
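If you want a quick sanity check from the database side, you can ask Postgres which values are actually in effect after the reboot. A minimal check from any psql session against the instance:
-- both values should match what you configured in the custom parameter group
SHOW log_statement;                -- expect 'all'
SHOW log_min_duration_statement;   -- expect '0' (or whatever threshold you chose)
Also note that the logged statements show up in the postgresql log files listed on that tab, not in the list of recent events.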

Related

EXECUTE AS USER in DB2

We are trying to debug a very old web application that uses DB2.
I would like to run a trace to see what happens when I click on a button but as soon as I try I receive this error:
create event monitor ........ for statement where AUTH_ID='.......' write to table
"USER" does not have privilege to perform operation "CREATE EVENT MONITOR".. SQLCODE=-552, SQLSTATE=42502,
It is evident to me that our user doesn't have enough privileges to run a trace.
In T-SQL there is a way to impersonate another user:
USE AdventureWorks2019
GO
EXECUTE AS USER = 'Test';
SELECT * FROM Customer;
REVERT;
I would like to know if there is the same command in DB2.
The goal is to try to run something like SQL Server Profiler for DB2 and sniff the queries.
Yes, I already tried to run GRANT DBADM ON DATABASE TO USER E.....O and of course the system replied:
"E.....O" does not have the privilege to perform operation "GRANT".. SQLCODE=-552, SQLSTATE=42502, DRIVER=3.69.56
We are stuck and we cannot move forward because we cannot see how the queries work. Asking for more privileges for our user is not an option, as we are migrating a customer from a competitor to our side.
What I'm trying to do is a sort of privilege escalation without committing any crime.
I also thought about connecting to the DB2 database from SQL Server and using PolyBase, but as far as I know that feature only allows me to query, and I cannot sniff the parameters.
Db2 has a couple of ways to "impersonate", but all within the security architecture and fully audited.
I would recommend checking out "Trusted Context", basically adding privileges or switching roles based on predefined connection properties.
Another option is to look into SET SESSION AUTHORIZATION (also known as SET SESSION_USER). It switches the SESSION_USER to a different user ID.
As said, that only works with the proper privileges and with the security administrator involved.
Depending on what you want to inspect, db2trc and other commands could be of use, too.
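As a rough sketch of the SET SESSION AUTHORIZATION route (the authorization IDs below are placeholders, and the SETSESSIONUSER privilege has to be granted by the security administrator first):
-- done once by the SECADM: allow APP_USER to switch to TRACE_ADMIN
-- ('app_user' and 'trace_admin' are hypothetical names)
GRANT SETSESSIONUSER ON USER trace_admin TO USER app_user;
-- afterwards, inside APP_USER's session:
SET SESSION AUTHORIZATION = 'TRACE_ADMIN';
-- subsequent statements run under TRACE_ADMIN's authorization,
-- e.g. the CREATE EVENT MONITOR that failed before
From then on the session runs as the other authorization ID, still within Db2's security architecture and fully audited.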

How to change default value for parameter globally for redshift cluster

I am using the new SUPER data type and I found that you can't access camel case fields unless you set downcase_delimited_identifier to False.
It's True by default.
I want to set it to false globally on the cluster (i.e. persistently).
But it seems this is not possible?
This page indicates you can use parameter groups for this purpose.
But that does not appear to be the case. There are 12 parameters set by default, and you can modify their values. But you can't add any new parameters.
I tried modifying the group using aws cli, but this didn't work either:
$ aws redshift modify-cluster-parameter-group --parameter-group-name my-redshift-parameter-group --parameters ParameterName=downcase_delimited_identifier,ParameterValue=false
An error occurred (InvalidParameterValue) when calling the ModifyClusterParameterGroup operation: Could not find parameter with name: downcase_delimited_identifier
Is it really true that you can't change a default value for a parameter for a cluster?
The parameters in the parameter group are cluster parameters. The parameter you are trying to set is a connection / session parameter (i.e. one you set with SET during your session). If you want a parameter set every time you log in, you can configure this with ALTER USER - https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html. If you want this to affect all users, I believe you have to run ALTER USER for each user individually. Scripting this is fairly easy to do. Once the user has been altered, the parameter will be set every time that user connects.
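For example (the user name below is hypothetical), something along these lines should cover both the one-off and the persistent case:
-- just for the current session
SET downcase_delimited_identifier TO false;
-- persist it as the default for every future session of this user
-- ('analytics_user' is a made-up name)
ALTER USER analytics_user SET downcase_delimited_identifier TO false;
Repeating the ALTER USER for every user (or scripting it over the user list) is what stands in for a cluster-wide default.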

Setting up MongoDB environment requirements for Parse Server

I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since it states that the default is set to true:
failIndexKeyTooLong
Setting the 'failIndexKeyTooLong' parameter is a three-step process:
1. Go to the command console in the Tools menu item for the admin database of your database instance. This command will only work on the admin database.
2. Once there, pick any command from the list and it will give you a short JSON text for that command.
3. Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
Note if you are using a free plan at MongoLab: this will NOT work; it only works with paid plans. If you have the free plan, you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggest:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
I have written an article that helps you set up Parse Server and all its dependencies on your own server:
https://medium.com/#jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205

Can I log the script that invokes DELETE query?

I have to investigate who or what caused table rows to disappear.
So, I am thinking about creating a "before delete" trigger that logs the script that invokes the deletion. Is this possible? Can I get the db client name or, even better, the script that invokes the delete query, and log it to another temporarily created log table?
I am open to other solutions, too.
Thanks in advance!
You can't get "the script" which issued the delete statement, but you can get various other information:
current_user will return the current Postgres user that initiated the delete statement
inet_client_addr() will return the IP address of the client's computer
current_query() will return the complete statement that caused the trigger to fire
More details about that kind of function are available in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
The Postgres Wiki contains two examples of such an audit trigger:
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
https://wiki.postgresql.org/wiki/Audit_trigger (somewhat outdated)
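Putting those pieces together, a minimal version of such an audit trigger could look roughly like this (the table and trigger names are made up, and my_table stands for the table whose rows disappear):
-- the log table
CREATE TABLE delete_audit (
    logged_at   timestamptz NOT NULL DEFAULT now(),
    db_user     text        NOT NULL DEFAULT current_user,
    client_addr inet                 DEFAULT inet_client_addr(),
    query_text  text,
    deleted_row text
);
-- the trigger function records the statement and the row being removed
CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO delete_audit (query_text, deleted_row)
    VALUES (current_query(), OLD::text);
    RETURN OLD;   -- returning OLD lets the DELETE go ahead
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_table_audit_delete
    BEFORE DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE log_delete();
The wiki triggers linked above do the same thing in a more complete, generic way (covering INSERT and UPDATE as well).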

Default value of Statement QueryTimeout on DB2 V9.7.7?

I am trying to find out the default value of the Statement QueryTimeout in DB2 V9.7.7.
I googled a lot but did not find this specific information. I found that the value is 0 in DB2 V9.7.4. I am having a hard time finding it for version 9.7.7.
Please help me out.
There is no default query timeout within the DB2 Server or DB2 Client – DB2 will allow a query to run until completion.
Applications can (optionally) set the QueryTimeout attribute to control this. If you are using .NET, you might also check to see if your application sets the CommandTimeout property.