I am using AWS RDS for PostgreSQL as my database. I want to exclude specific columns from RDS audit logging - postgresql

Let's say I have a Salary table containing the columns id, created_on, company, and value. Since value is the sensitive information, I do not want it to be audited. How can I do this?
I understand we can disable logging at the table level, but I want to understand how we can do that at the column level.
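One thing worth testing is pgaudit's object audit logging, which is driven by the privileges of a dedicated audit role: with column-level grants, a statement is only logged if it references at least one of the granted columns. A sketch, assuming the pgaudit extension is enabled and the pgaudit.role parameter points at the audit role (rds_pgaudit is the usual name on RDS):

```sql
-- Audit role; on RDS this is conventionally named rds_pgaudit and must be
-- referenced by the pgaudit.role parameter in the DB parameter group.
CREATE ROLE rds_pgaudit;

-- Grant the audit role SELECT only on the non-sensitive columns.
-- A statement referencing id, created_on, or company is audited;
-- one touching only the value column is not.
GRANT SELECT (id, created_on, company) ON salary TO rds_pgaudit;
```

Caveat: pgaudit logs the full statement text, so a query that reads both company and value is still logged in its entirety. Column-level grants filter which statements get logged; they do not redact individual column values from the log.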

Related

Return name of the Postgres database containing value in a row in a table

I am looking for some implementation to essentially search the entire Postgres instance, all of its databases, to find the database containing a specific row in a specific table.
So I have a Postgres instance with 170 databases; every database has the exact same schema and table layout. In each database there is a specific table "SM_Project" with a column "ProjectName". We have instances where we know a ProjectName, or at least a partial match (can use LIKE with %), but we have no idea which database it lives in.
I am wanting to script, simplify the ability to enter a ProjectName and search the entire Postgres instance (all databases in there) and return the name of the db that contains that record.
I foolishly thought this would be a simple task. With my lack of experience I've tried to do this with several SELECT statements, and while I can explicitly connect to a database and search for the record from there, I can't find a way to return the parent database name. I was thinking to clunkily script it in bash to iterate through the databases until we get a true return on an EXISTS in a SELECT statement. But I feel like I'm overlooking something fundamental.
So my setup is like this:
Postgres
    db1
        SM_Project
            ProjectName
    db2
        SM_Project
            ProjectName
    db3
        SM_Project
            ProjectName
In short I'm looking to return the name of the database that contains a record of ProjectName equal to a string.
Any thoughts would be very welcomed!
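For what it's worth, the loop described above can be sketched in bash (this assumes psql can connect to every database without a password prompt, e.g. via ~/.pgpass; the table and column names are taken from the question):

```shell
#!/usr/bin/env bash
# Usage: ./find_project.sh '%partial-name%'
PATTERN="$1"

# List every non-template database, then probe each one for a match.
for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
  found=$(psql -At -d "$db" -c \
    "SELECT EXISTS (SELECT 1 FROM \"SM_Project\" WHERE \"ProjectName\" LIKE '$PATTERN')" \
    2>/dev/null)
  [ "$found" = "t" ] && echo "$db"
done
```

A pure-SQL alternative is the dblink extension, which lets a single session query the other databases, but the bash loop is simpler to reason about.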

How to find relationships of a removed column in pgAdmin 4

I have a PostgreSQL AWS RDS database which is open in my pgAdmin 4 client.
Recently I removed one column from a table in this database.
But now in my app I see an error from the backend: column student.scale does not exist
I know it's because I deleted this column.
But how do I find out what is calling this column?
There is nothing in the backend code that could reference it, so it must be something in the database, like a relationship or key.
Is there an easy way to find this out?
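Since a plain ALTER TABLE ... DROP COLUMN refuses to drop a column that a view depends on (unless CASCADE was used), the likely culprit is a trigger function or something assembling SQL at runtime. One way to hunt for it is to grep the catalogs for objects whose source still mentions the column; a sketch, assuming the error text means the column was student.scale:

```sql
-- Views whose definition still mentions the column
SELECT schemaname, viewname
FROM pg_views
WHERE definition ILIKE '%scale%';

-- Functions (including trigger functions) whose body mentions it
SELECT n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.prosrc ILIKE '%scale%'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

"scale" is a common word, so expect false positives; narrowing the pattern to '%student.scale%' or '%"scale"%' helps.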

Cloudant/Db2 - How to determine if a database table row was read from?

I have two databases: Cloudant and IBM Db2. In each of them I have a table holding static data that is only read from and never updated. These were created a long time ago, and I'm not sure if they are used today, so I wish to do a clean-up.
I want to determine if these tables or rows from these tables, are still being read from.
Is there a way to record the read timestamp (or at least know if it is simply accessed like a dirty bit) on a row of the table when it is read from?
OR
Record the read timestamp of the entire table (if any record from it is accessed)?
Db2 has the SYSCAT.TABLES.LASTUSED system catalog column, which records when a DML statement last touched the whole table.
There is no built-in way to track read access to individual rows.
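The LASTUSED column mentioned above can be queried directly; a sketch, with a placeholder schema name:

```sql
-- Db2: the date (not a full timestamp) a table was last referenced by DML.
-- The value is maintained asynchronously, so treat it as approximate.
SELECT TABSCHEMA, TABNAME, LASTUSED
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'MYSCHEMA'        -- placeholder: your schema here
ORDER BY LASTUSED DESC;
```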

How to get a history of latest postgres table writes regardless of which table it is

Assume I don't know which tables have been written to (not queried - I mean writes). Can I find out the names of the tables that were most recently written to?
Don't want constant reporting. Just a query I can run after testing newly added code.
Once I know the names, I'm set; I can just query them using normal SQL and see the records. But I need to know which tables, in a 200-table database. Something like:
select names of last 10 tables that have been written to
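Postgres keeps no per-table "last written" timestamp, but the cumulative write counters in pg_stat_user_tables can approximate the answer: snapshot them before the test, run the new code, then diff. A sketch:

```sql
-- Snapshot the write counters before testing
CREATE TEMP TABLE write_snapshot AS
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables;

-- ... run the newly added code ...

-- Tables whose counters moved, i.e. that were written to
SELECT s.relname,
       s.n_tup_ins - b.n_tup_ins AS inserts,
       s.n_tup_upd - b.n_tup_upd AS updates,
       s.n_tup_del - b.n_tup_del AS deletes
FROM pg_stat_user_tables s
JOIN write_snapshot b USING (relname)
WHERE (s.n_tup_ins, s.n_tup_upd, s.n_tup_del)
   <> (b.n_tup_ins, b.n_tup_upd, b.n_tup_del);
```

Two caveats: the statistics collector updates these counters with a small delay, and the snapshot is a temp table, so both queries must run in the same session.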

"row is too big (...) maximum size 8160" when running "grant connect on database"

I'm facing a weird issue with PostgreSQL 11.
I'm creating a bunch of users, assigning them some roles, and also letting them connect to certain databases.
After successfully creating 2478 roles, when I try to add a new user I get this error:
db=# create user foo;
CREATE ROLE
db=# grant connect on database db to foo;
ERROR: row is too big: size 8168, maximum size 8160
Same error shows up in db log.
I checked whether the DB volume is running out of space; there is still 1 TB to spare...
I can't imagine Postgres trying to insert more than 8 KB when running a simple GRANT...?
edit:
It seems a similar question was asked already (usage privileges on schema):
ERROR: row is too big: size 8168, maximum size 8164
So the solution would be to create one role, say connect_to_my_db, grant it CONNECT on the database, and then instead of running GRANT CONNECT for each user, run GRANT connect_to_my_db TO that user.
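Spelled out in SQL, with the database and role names from the question:

```sql
-- One group role holds the CONNECT privilege...
CREATE ROLE connect_to_my_db;
GRANT CONNECT ON DATABASE db TO connect_to_my_db;

-- ...and each user is merely made a member of it. Membership is stored
-- in pg_auth_members (one row per grant), not in the pg_database row's
-- ACL column, so the catalog row no longer grows with each new user.
CREATE USER foo;
GRANT connect_to_my_db TO foo;
```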
You found the solution yourself; let me add an explanation of the cause of the error:
Each table row is stored in one of the 8KB blocks of the table, so that is its size limit.
Normal tables have a TOAST table, where long attributes can be stored out-of-line. This allows PostgreSQL to store very long rows.
System catalog tables, however, do not have TOAST tables, so their rows are limited to 8KB.
An object's access control list is stored in its system catalog row, so granting many individual permissions on one object can exceed that limit.
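To see how large the ACL has grown, the datacl column of the pg_database row can be inspected; a sketch using the database name from the question:

```sql
-- Number of entries in the database's access control list
SELECT datname, array_length(datacl, 1) AS acl_entries
FROM pg_database
WHERE datname = 'db';
```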
Be glad: if you had to manage permissions for thousands of users individually, you'd end up in DBA hell anyway.