Is it possible, when I save data in a PostgreSQL table, to automatically save the record's id in another table without a trigger? - postgresql

I have 2 tables. The first table is called Person. The second table is called PersonRule. The Person table has many columns, but PersonRule has just 2 columns. In the Person table there is a column called ruleid, and the same column also exists in the PersonRule table. Is it possible, when I insert data into the Person table, to automatically create a record in the PersonRule table without a trigger?
And how can I do this in PostgreSQL?
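One trigger-free option is a data-modifying CTE, so both inserts happen in a single statement. A minimal sketch, assuming PersonRule.ruleid is a generated (serial/identity) column and using placeholder columns name and rulename, since the real ones are not given:
WITH new_rule AS (
    INSERT INTO PersonRule (rulename)     -- "rulename" is a placeholder column
    VALUES ('default rule')
    RETURNING ruleid                      -- assumes ruleid is generated here
)
INSERT INTO Person (name, ruleid)         -- "name" is a placeholder column
SELECT 'John Doe', ruleid FROM new_rule;
If Person generates the ruleid instead, swap the two inserts; either way both rows are written atomically by one statement.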

Related

Turning cells from row into a column and marking them as a primary key? (Postgresql)

So my table looks like this:
Tower_ID|...|Insulator_ID_01|Insulator_ID_02|...|Insulator_ID_12|
Tower_01|...|01_Unique_ID_01|01_Unique_ID_02|...|01_Unique_ID_12|
Tower_02|...|02_Unique_ID_01|02_Unique_ID_02|...|02_Unique_ID_12|
Then the idea is to have a single table for every insulator that belongs to the towers in this specific line (the towers in this line are the table). But the only way I know how is to have a table for each column of insulators. Is it possible to create a single table, with relationships, which would store Insulator_ID_01 to Insulator_ID_12 in a column before going on to the next row and doing the same?
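Yes: the usual design is one row per insulator with a foreign key back to the tower plus a position column. A minimal sketch under assumed names (tower_wide stands for the existing wide table, with Tower_ID assumed to be its primary key; the insulator table and its columns are my own):
CREATE TABLE insulator (
    insulator_id text PRIMARY KEY,
    tower_id     text NOT NULL REFERENCES tower_wide (tower_id),
    position     int  NOT NULL CHECK (position BETWEEN 1 AND 12),
    UNIQUE (tower_id, position)            -- one insulator per slot per tower
);

-- Unpivot the wide table: one output row per (tower, insulator) pair
INSERT INTO insulator (insulator_id, tower_id, position)
SELECT u.insulator_id, t.tower_id, u.position
FROM tower_wide t
CROSS JOIN LATERAL unnest(
    ARRAY[t.insulator_id_01, t.insulator_id_02, t.insulator_id_12]  -- list all 12 columns here
) WITH ORDINALITY AS u(insulator_id, position);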

How to copy a Specific Partitioned Table

I would like to shift data from a specific partitioned table of the parent to a separate table. Can someone suggest the best way?
If I create a table like this (the types of a and b don't matter here; c is of type DATE):
CREATE TABLE parent (a int, b int, c date) PARTITION BY RANGE (c);
CREATE TABLE dec_partition PARTITION OF parent FOR VALUES FROM ('2021-02-12') TO ('2021-03-12');
Now I want to copy the table dec_partition to a separate table in a single command, in the least time.
NOTE: The table has around 4 million rows with 20 columns (one of the columns is jsonb).
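A minimal sketch of one way to do it (the target name dec_partition_copy is my choice): since a partition is itself a table, CREATE TABLE ... AS can copy it in a single command.
-- copies all rows in one statement; indexes and constraints are not copied
CREATE TABLE dec_partition_copy AS TABLE dec_partition;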

How to back up a whole table into a single field item?

I have a few very small tables (a total of ~1000 rows) that I want to back up regularly into the same DB, into a single table. I know it sounds weird, but hear me out.
Let's say that the tables I want to back up are named linux_commands and windows_commands. These two tables have roughly: id (pkey), name, definition, config (jsonb), commands.
I want to back these up every day into a table called commands_backup, and I want this new table to have a date field, a field for windows_commands, and another one for linux_commands, so three columns in total. Each day, a script would run, write the current date to the date field, then fetch the whole linux_commands table and write it to the related field in a single row, then do the same for windows_commands.
How would you set up something like this? Also, what is the best data type for storing a whole data set in a single item?
In the target table, windows_commands and linux_commands should be of type jsonb.
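For example, a possible definition of the target table (backup_date is my name for the date column; the column order matches the INSERT below):
CREATE TABLE commands_backup (
    backup_date      date,
    linux_commands   jsonb,
    windows_commands jsonb
);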
Then you can use:
INSERT INTO commands_backup VALUES (
    current_date,
    (SELECT jsonb_agg(to_jsonb(linux_commands)) FROM linux_commands),
    (SELECT jsonb_agg(to_jsonb(windows_commands)) FROM windows_commands)
);

"ON UPDATE" equivalent for Amazon Redshift

I want to create a table that has a column updated_date that is set to SYSDATE every time any field in that row is updated. How should I do this in Redshift?
You should create the table definition like the one below; that will make sure that whenever you insert a record, it populates SYSDATE.
create table test(
    id integer not null,
    update_at timestamp DEFAULT SYSDATE);
What about updates to a field?
Remember, Redshift is a data warehouse solution, not a simple OLTP database, so updates should be avoided or minimized.
UPDATE = DELETE + INSERT
Ideally, instead of updating a record, you should delete and re-insert it; the re-insert takes care of populating update_at, and an update is ultimately DELETE + INSERT anyway.
Also, in most ETLs you may be using a staging table such as stg_sales to populate your data; the above solution works there too, where you could do something like below.
DELETE FROM SALES WHERE id IN (SELECT id FROM stg_sales);
INSERT INTO SALES SELECT id FROM stg_sales;  -- update_at falls back to its SYSDATE default
Hope this answers your question.
Redshift doesn't support UPSERTs, so you should load your data to a temporary/staging table first and check for IDs in the main tables, which also exist in the staging table (i.e. which need to be updated).
Delete those records, and INSERT the data from the staging table, which will have the new updated_date.
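A minimal sketch of that staged "upsert", assuming a sales table with an updated_date column and a staging table stg_sales holding the new rows (names are illustrative):
BEGIN;
-- remove the rows that are about to be replaced
DELETE FROM sales USING stg_sales WHERE sales.id = stg_sales.id;
-- re-insert them with a fresh updated_date (other columns omitted for brevity)
INSERT INTO sales (id, updated_date)
SELECT id, SYSDATE FROM stg_sales;
COMMIT;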
Also, don't forget to run VACUUM on your tables every once in a while, because your use case involves a lot of DELETEs and UPDATEs.
Refer to this for additional info.

Query time after partitioning in Postgres

I have a table location in a Postgres database with more than 50,000,000 rows, and I decided to partition it!
The parent table has columns id, place, and I want to partition on the place column. Using PHP and MySQL, I get all distinct places (about 300), and for each one I run
CREATE TABLE child_place_name (CHECK (place = 'place_name')) INHERITS (location);
and after that, for each child table,
INSERT INTO child_place_name SELECT * FROM location WHERE place = 'place_name';
and that works perfectly!
After that I delete the rows from the parent table with
DELETE FROM location WHERE tableoid = ('location'::regclass)::oid;
and that affected all the rows stored in the parent table!
Then I ran some queries, measured the times, and realized that queries now take 3 or more times longer than before.
I also have problems that may affect speed: first, I can't set a primary key on the id column in the child tables, but I did set an index on place (an index is also set on the place column in the parent table); and I also can't set a unique key on the id and place columns together; I got an error like "multiple parameters are not allowed" (or something like that).
All I want is the SELECT side of the table; I don't need rules or triggers for inserts into the parent table, because that is another problem. I only want to know what is wrong with this approach! Maybe 300+ tables is too much?
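One thing worth checking with inheritance-based partitioning like this: the planner only skips child tables when constraint exclusion is enabled and the query's WHERE clause matches the children's CHECK constraints. A quick sketch (the place value is illustrative):
SET constraint_exclusion = partition;   -- the default; prunes children via their CHECK constraints
EXPLAIN SELECT * FROM location WHERE place = 'some_place';
-- the plan should scan only the one matching child table, not all ~300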