Postgres 13 permission issues with REVOKE

I am a relatively new user of Postgres 13. Let me first mention that the database is hosted on AWS Aurora. I have a user that owns a schema, and there is a specific table on which this user should only be able to SELECT, INSERT rows, and execute triggers.
I have REVOKEd ALL on this table for this user and GRANTed SELECT, INSERT, TRIGGER on the table to the user. INSERT, SELECT, and TRIGGER work as expected. However, when I execute a SQL UPDATE on that table, it still lets me update a row! I should also mention that I REVOKEd ALL and performed the same GRANTs for rds_superuser on this table, since this user is a member of rds_superuser.
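For reference, the statements I ran were roughly the following (table and role names taken from the \d and \z output below):
REVOKE ALL ON TABLE tryon.rx_cryptographic_signature FROM td_administrator;
GRANT SELECT, INSERT, TRIGGER ON TABLE tryon.rx_cryptographic_signature TO td_administrator;
-- In the \z output below, the letters in "td_administrator=art" confirm the grants
-- took effect: a = INSERT (append), r = SELECT (read), t = TRIGGER.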
Any help would be greatly appreciated!
Following are the results of \d:
Column | Type | Collation | Nullable | Default
-----------------------+--------------------------+-----------+----------+------------------------
id | uuid | | not null | uuid_generate_v4()
patient_medication_id | bigint | | not null |
raw_xml | character varying | | not null |
digital_signature | character varying | | not null |
create_date | timestamp with time zone | | not null | CURRENT_TIMESTAMP
created_by | character varying | | |
update_date | timestamp with time zone | | |
updated_by | character varying | | |
deleted_at | timestamp with time zone | | |
deleted_by | character varying | | |
status | character varying(1) | | | 'A'::character varying
message_type | character varying | | not null |
Indexes:
"rx_cryptographic_signature_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"rx_cryptographic_signature_fk" FOREIGN KEY (patient_medication_id) REFERENCES tryon.patient_medication(id)
Triggers:
audit_trigger_row AFTER INSERT ON tryon.rx_cryptographic_signature FOR EACH ROW EXECUTE FUNCTION td_audit.if_changed_fn('id', '{}')
audit_trigger_stm AFTER TRUNCATE ON tryon.rx_cryptographic_signature FOR EACH STATEMENT EXECUTE FUNCTION td_audit.if_changed_fn()
tr_log_delete_attempt BEFORE DELETE ON tryon.rx_cryptographic_signature FOR EACH STATEMENT EXECUTE FUNCTION tryon.fn_log_update_delete_attempt()
tr_log_update_attempt BEFORE UPDATE ON tryon.rx_cryptographic_signature FOR EACH STATEMENT EXECUTE FUNCTION tryon.fn_log_update_delete_attempt()
Following are the results of \z:
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+----------------------------+-------+---------------------------------------+-------------------+----------
tryon | rx_cryptographic_signature | table | TD_Administrator=art/TD_Administrator+| |
| | | td_administrator=art/TD_Administrator+| |
| | | rds_pgaudit=art/TD_Administrator +| |
| | | rds_superuser=art/TD_Administrator | |
(1 row)
Thanks so much for your help!!

Related

Any way to find and delete almost similar records with SQL?

I have a table in a Postgres DB that has a lot of almost identical rows. For example:
1. 00Zicky_-_San_Pedro_Danilo_Vigorito_Remix
2. 00Zicky_-_San_Pedro__Danilo_Vigorito_Remix__
3. 0101_-_Try_To_Say__Strictlyjaz_Unit_Future_Rmx__
4. 0101_-_Try_To_Say__Strictlyjaz_Unit_Future_Rmx_
5. 01_-_Digital_Excitation_-_Brothers_Gonna_Work_it_Out__Piano_Mix__
6. 01_-_Digital_Excitation_-_Brothers_Gonna_Work_it_Out__Piano_Mix__
I am thinking about writing a little Golang script to remove the duplicates, but maybe SQL can do it?
Table definition:
\d+ songs
Table "public.songs"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
---------------+-----------------------------+-----------+----------+----------------------------------------+----------+-------------+--------------+-------------
song_id | integer | | not null | nextval('songs_song_id_seq'::regclass) | plain | | |
song_name | character varying(250) | | not null | | extended | | |
fingerprinted | smallint | | | 0 | plain | | |
file_sha1 | bytea | | | | extended | | |
total_hashes | integer | | not null | 0 | plain | | |
date_created | timestamp without time zone | | not null | now() | plain | | |
date_modified | timestamp without time zone | | not null | now() | plain | | |
Indexes:
"pk_songs_song_id" PRIMARY KEY, btree (song_id)
Referenced by:
TABLE "fingerprints" CONSTRAINT "fk_fingerprints_song_id" FOREIGN KEY (song_id) REFERENCES songs(song_id) ON DELETE CASCADE
Access method: heap
I tried several methods to find duplicates, but those methods only search for exact matches.
There is no operator that essentially does A almost = B. (Well, there is full text search, but that seems a little excessive here.) If the only difference is the number of - and _ characters, then just remove them and compare the resulting strings. If they are equal, one is a duplicate. You can use the replace() function to remove them. So, something like this (see demo):
delete
from songs s2
where exists ( select null
               from songs s1
               where s1.song_id < s2.song_id
                 and replace(replace(s1.song_name, '_', ''), '-', '') =
                     replace(replace(s2.song_name, '_', ''), '-', '')
             );
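To see why rows 1 and 2 in the example collapse to the same key, you can apply the same normalization by hand (a quick sanity check, using the sample names from the question):
select replace(replace('00Zicky_-_San_Pedro_Danilo_Vigorito_Remix', '_', ''), '-', '') as key1,
       replace(replace('00Zicky_-_San_Pedro__Danilo_Vigorito_Remix__', '_', ''), '-', '') as key2;
-- both return '00ZickySanPedroDaniloVigoritoRemix', so the row with the higher song_id is deleted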
If your table is large this will not be fast, but an index on the same expression may help:
create index song_name_idx on songs
    (replace(replace(song_name, '_', ''), '-', ''));

Bulk update datatype of a column in all relevant tables

Here is an example of some tables with the column I want to change:
+--------------------------------------+------------------+------+
| ?column? | column_name | data_type |
|--------------------------------------+------------------+------|
| x.articles | article_id | bigint |
| x.supplier_articles | article_id | bigint |
| x.purchase_order_details | article_id | bigint |
| y.scheme_articles | article_id | integer |
....
There are some 50 tables that have the column.
I want to change the article_id column from a numeric data type to a textual data type. It is found across several tables. Is there any way to update them all at once? The information schema is read-only, so I cannot do an UPDATE on it. Other than writing individual ALTER statements for all the tables, is there a better way to do it?
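One common approach is to generate the ALTER statements dynamically from the catalog. A minimal sketch (it assumes the column is named article_id everywhere, and that no foreign keys reference these columns; any that do would have to be dropped and re-created around the type change):
do $$
declare
    rec record;
begin
    for rec in
        select c.table_schema, c.table_name
        from information_schema.columns c
        join information_schema.tables t
          on t.table_schema = c.table_schema and t.table_name = c.table_name
        where c.column_name = 'article_id'
          and t.table_type = 'BASE TABLE'   -- skip views
    loop
        execute format(
            'alter table %I.%I alter column article_id type text using article_id::text',
            rec.table_schema, rec.table_name);
    end loop;
end
$$;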

Transaction confirmation received, but table remains empty

In my application, I write transactions to the Postgres schema prod.
In order to debug, I have been using the psql command line client on OS X.
In my table, the only fields I have to fill are the message field (a JSON blob) and the status field (text).
Here is what the schema looks like:
Table "prod.suggestions"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
------------------+--------------------------+-----------+----------+--------------------+----------+--------------+-------------
id | uuid | | not null | uuid_generate_v4() | plain | |
message | jsonb | | not null | | extended | |
status | text | | not null | | extended | |
transaction_hash | text | | | | extended | |
created_at | timestamp with time zone | | | CURRENT_TIMESTAMP | plain | |
updated_at | timestamp with time zone | | | CURRENT_TIMESTAMP | plain | |
Indexes:
"suggestions_pkey" PRIMARY KEY, btree (id)
Triggers:
update_updated_at_on_prod_suggestions BEFORE UPDATE ON prod.suggestions FOR EACH ROW EXECUTE PROCEDURE update_updated_at()
Here is the function the trigger executes:
create function update_updated_at()
returns trigger
as
$body$
begin
    -- stamp the row being updated with the current time
    new.updated_at = current_timestamp;
    return new;
end;
$body$
language plpgsql;
Here is the query that writes the message:
INSERT INTO prod.suggestions (message, status) VALUES ('{"name": "Paint house", "tags": ["Improvements", "Office"], "finished": true}' , 'rcvd');
It returns INSERT 0 1, which I assume is a success.
However, when I query the table, it doesn't return anything:
select * from prod.suggestions;
I would appreciate any pointers on this.
It turned out this had nothing to do with Postgres: I had another worker thread that was deleting all the data from the table.

How to migrate tables with defaults, constraints and sequences with AWS DMS for postgres to postgres migration?

I recently did a migration from RDS PostgreSQL to Aurora PostgreSQL. The tables were migrated successfully, but they are missing their defaults, constraints, and references. It also did not migrate any sequences.
Table in source database:
Table "public.addons_snack"
Column | Type | Collation | Nullable | Default
---------------+--------------------------+-----------+----------+------------------------------------------
id | integer | | not null | nextval('addons_snack_id_seq'::regclass)
name | character varying(100) | | not null |
snack_type | character varying(2) | | not null |
price | integer | | not null |
created | timestamp with time zone | | not null |
modified | timestamp with time zone | | not null |
date | date | | |
Indexes:
"addons_snack_pkey" PRIMARY KEY, btree (id)
Check constraints:
"addons_snack_price_check" CHECK (price >= 0)
Referenced by:
TABLE "addons_snackreservation" CONSTRAINT "addons_snackreservation_snack_id_373507cf_fk_addons_snack_id" FOREIGN KEY (snack_id) REFERENCES addons_snack(id) DEFERRABLE INITIALLY DEFERRED
Table in target database:
Table "public.addons_snack"
Column | Type | Collation | Nullable | Default
---------------+-----------------------------+-----------+----------+---------
id | integer | | not null |
name | character varying(100) | | not null |
snack_type | character varying(2) | | not null |
price | integer | | not null |
created | timestamp(6) with time zone | | not null |
modified | timestamp(6) with time zone | | not null |
date | date | | |
Indexes:
"addons_snack_pkey" PRIMARY KEY, btree (id)
Did I do something wrong, or is DMS not capable of doing this?
This SQL snippet should give you a clear answer.
You can restore the indexes and constraints with pg_dump and pg_restore; the snippet consists of executing them.
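A minimal sketch of that approach (host names, user, and database are placeholders; it assumes a plain-SQL dump is acceptable): dump only the post-data section of the schema (indexes, constraints, foreign keys) from the source and replay it on the target after DMS has copied the rows:
pg_dump -h source-host -U myuser -d mydb --schema-only --section=post-data -f post_data.sql
psql -h target-host -U myuser -d mydb -f post_data.sql
Column defaults and sequences live in the pre-data section, so they need separate handling; since DMS has already created the tables, you would extract just those statements from a --section=pre-data dump rather than replaying it wholesale.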

GIST index creation too slow on PostgreSQL

I have a database in PostgreSQL with the following structure:
Column | Type | Collation | Nullable | Default
-------------+-----------------------+-----------+----------+------------------------------------------------
vessel_hash | integer | | not null | nextval('samplecol_vessel_hash_seq'::regclass)
status | character varying(50) | | |
station | character varying(50) | | |
speed | character varying(10) | | |
longitude | numeric(12,8) | | |
latitude | numeric(12,8) | | |
course | character varying(50) | | |
heading | character varying(50) | | |
timestamp | character varying(50) | | |
the_geom | geometry | | |
Check constraints:
"enforce_dims_the_geom" CHECK (st_ndims(the_geom) = 2)
"enforce_geotype_geom" CHECK (geometrytype(the_geom) = 'POINT'::text OR the_geom IS NULL)
"enforce_srid_the_geom" CHECK (st_srid(the_geom) = 4326)
The database contains ~146,000,000 records, and the size of the table that contains the data is:
public | samplecol | table | postgres | 31 GB |
I try to create a GIST index on the geometry field the_geom with this command:
create index samplecol_the_geom_gist on samplecol using gist (the_geom);
but it takes too long; it has already been running for 2 hours.
Based on the question Slow indexing of 300GB Postgis table, I executed this in the psql console before index creation:
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM
SELECT pg_reload_conf();
pg_reload_conf
----------------
t
(1 row)
But index creation still takes too long. Does anyone know why, and how to fix this?
I am afraid you'll have to sit it out.
Apart from high maintenance_work_mem, there is not really a tuning option.
Increasing max_wal_size will help somewhat, since you will get fewer checkpoints.
If you can't live with an ACCESS EXCLUSIVE lock for that long, try CREATE INDEX CONCURRENTLY, which will be even slower, but won't block concurrent database activity.
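Put together, the session for the non-blocking variant would look something like this (a sketch; the 1GB value assumes the instance has memory to spare, and note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block):
-- raise the memory available to the index build for this session only
SET maintenance_work_mem = '1GB';
-- build the index without taking an ACCESS EXCLUSIVE lock;
-- slower overall, but concurrent reads and writes keep working
CREATE INDEX CONCURRENTLY samplecol_the_geom_gist
    ON samplecol USING gist (the_geom);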