Created date, last modified date fields in PostgreSQL - postgresql

In PostgreSQL, is there a way to add columns that will automatically record the creation date and latest updated date of a row?

For the table creation date, look into event triggers.
For the row creation date, use a DEFAULT value on a timestamptz column (this only works if the INSERT does not explicitly supply a value); see the sketch after this list.
For the last modification date, use a FOR EACH ROW trigger BEFORE UPDATE.
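A minimal sketch of the DEFAULT approach, assuming a hypothetical todos table:

-- created_at is filled automatically on INSERT, as long as the
-- INSERT does not supply an explicit value for the column
CREATE TABLE todos (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO todos (name) VALUES ('write docs');  -- created_at becomes now()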

The idea: a robust way of adding created and modified fields to the data we add to the database, through DB triggers.
Update modified_by and modified_on (or modified_at) for every DB write.
Copy created_on and created_by (or created_at) from the modified details whenever you insert a row, as sketched below.
For a trigger function, check this repo: https://github.com/charan4ks/created_fields.git
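A hedged sketch of that idea (the table, column, and function names here are assumptions for illustration, not taken from the linked repo):

CREATE OR REPLACE FUNCTION set_created_modified() RETURNS trigger AS $$
BEGIN
    NEW.modified_at := now();
    NEW.modified_by := current_user;
    IF TG_OP = 'INSERT' THEN
        -- pick the created_* values from the modified details on insert
        NEW.created_at := NEW.modified_at;
        NEW.created_by := NEW.modified_by;
    ELSE
        -- never let an UPDATE overwrite the creation details
        NEW.created_at := OLD.created_at;
        NEW.created_by := OLD.created_by;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_created_modified BEFORE INSERT OR UPDATE ON todos
FOR EACH ROW EXECUTE PROCEDURE set_created_modified();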

Related

Supabase: How to automatically update a timestamp field after updating a row?

What DB am I using?
The Supabase hosted version.
What do I need?
After I update a row with the .update({ name: 'Middle Earth' }) method, I also need a timestamp in my table to update automatically.
How can I update a timestamp automatically?
1) If you already have a table, use this script (provided by the Supabase devs themselves):
create extension if not exists moddatetime schema extensions;
-- assuming the table name is "todos", and a timestamp column "updated_at"
-- this trigger will set the "updated_at" column to the current timestamp for every update
create trigger handle_updated_at before update on todos
for each row execute procedure moddatetime (updated_at);
2) What if I don't want to use the moddatetime extension?
This Stack Overflow question will give you an answer; a sketch follows below.
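For reference, the hand-rolled equivalent of moddatetime is a small plpgsql trigger function; a sketch, assuming the same todos table and updated_at column:

create or replace function handle_updated_at() returns trigger as $$
begin
  -- stamp the row with the time of this update
  new.updated_at := now();
  return new;
end;
$$ language plpgsql;

create trigger handle_updated_at before update on todos
for each row execute procedure handle_updated_at();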

"ON UPDATE" equivalent for Amazon Redshift

I want to create a table that has a column updated_date that is updated to SYSDATE every time any field in that row is updated. How should I do this in Redshift?
You can create the table definition like below; that will make sure that whenever you insert a record, it populates update_at with SYSDATE.
create table test(
    id integer not null,
    update_at timestamp DEFAULT SYSDATE);
What about updating on every field change?
Remember, Redshift is a DW (data warehouse) solution, not a simple OLTP database, hence updates should be avoided or minimized.
UPDATE = DELETE + INSERT
Ideally, instead of updating a record, you should delete it and re-insert it; the re-insert populates update_at via the DEFAULT, so an "update" is effectively DELETE + INSERT.
Also, most ETLs work this way. If you use a stg_sales staging table for populating your data, the same approach works, where you could do something like below.
DELETE from SALES where id in (select id from stg_sales);
INSERT INTO SALES (id) select id from stg_sales;  -- update_at is filled by its DEFAULT
Hope this answers your question.
Redshift doesn't support UPSERTs, so you should load your data into a temporary/staging table first and check for IDs in the main table which also exist in the staging table (i.e. the rows which need to be updated).
Delete those records, and INSERT the data from the staging table, which will carry the new updated_date; a sketch follows below.
Also, don't forget to run VACUUM on your tables every once in a while, because this use case involves a lot of DELETEs.
Refer to this for additional info.
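Putting that together, a sketch of the staging-table pattern (sales and stg_sales as in the answer above; assuming id is the key and update_at has the DEFAULT SYSDATE shown earlier):

BEGIN;
-- remove the rows that are about to be re-inserted with fresh data
DELETE FROM sales USING stg_sales WHERE sales.id = stg_sales.id;
-- re-insert them; update_at is filled by its DEFAULT SYSDATE
INSERT INTO sales (id) SELECT id FROM stg_sales;
COMMIT;
-- reclaim the space left behind by the deleted rows
-- (VACUUM cannot run inside a transaction block)
VACUUM sales;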

Scala Slick auto update a column when the row is updated

I want to add an updated_at column to every table.
The column should update its value to the current time whenever the row is updated.
How can I update its value automatically and avoid writing verbose code?
Thanks
You can do that directly in the database; there is no need to modify your Slick code. If you define the column like this, it will automatically update whenever any value in the row is modified:
alter table xx add column updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
(this works in MySQL; not sure if it needs adjustment for other databases)

Importing csv into Postgres database with improper date value

I have a query which has a date field with values that look like this in the query results window:
2013-10-01 00:00:00
However, when I save the results to csv, it gets saved like this:
2013-10-01T00:00:00
This is causing a problem when I'm trying to COPY the csv into a table in Redshift, where it gives me an error stating that the value is not a valid timestamp (the field I'm importing to is a timestamp field).
How can I get it so that it either strips out the time component completely, leaving just the date, or at least that the "T" is removed from the results?
I'm exporting results to csv using Aginity SQL Workbench for Redshift.
According to this knowledgebase article:
After import, add new TIMESTAMP columns and use the CAST() function to populate them:
ALTER TABLE events ADD COLUMN received_at TIMESTAMP DEFAULT NULL;
UPDATE events SET received_at = CAST(received_at_raw as timestamp);
ALTER TABLE events ADD COLUMN generated_at TIMESTAMP DEFAULT NULL;
UPDATE events SET generated_at = CAST(generated_at_raw as timestamp);
Finally, if you foresee no more imports into this table, the raw VARCHAR timestamp columns may be removed. If you foresee importing more events from S3, do not remove these columns. To remove the columns, run:
ALTER TABLE events DROP COLUMN received_at_raw;
ALTER TABLE events DROP COLUMN generated_at_raw;
Hope that helps...
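For context, a sketch of the load step the article assumes, with hypothetical table, bucket, and IAM role names (the CAST steps above then follow):

-- land the ISO-8601 strings ("2013-10-01T00:00:00") in VARCHAR columns first
CREATE TABLE events (
    id               integer,
    received_at_raw  varchar(32),
    generated_at_raw varchar(32)
);

COPY events (id, received_at_raw, generated_at_raw)
FROM 's3://my-bucket/events.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy'
CSV;
-- note: COPY's TIMEFORMAT 'auto' option may also parse the 'T' form
-- directly into a TIMESTAMP column, avoiding the raw columns entirely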

update the value of a column every time an update is made to the row

I am using SQL Server 2012. I need to track all the rows that were updated (as part of any action) on a table.
I thought to add a new column (column name: "LastUpdated", of type datetime) to the table that I want to track. I would like to update the LastUpdated value every time an update is made to that row. Is there a specific way to achieve this task?
You could create a trigger. Like this:
CREATE TRIGGER dbo.Table1_Updated
ON dbo.Table1
FOR UPDATE /* Fire this trigger when a row is UPDATEd */
AS BEGIN
    UPDATE t
    SET t.LastUpdated = GETDATE()
    FROM dbo.Table1 AS t
    INNER JOIN inserted AS i ON i.id = t.id
END
However, this trigger will not store WHAT was updated. Also keep in mind that if two people update it, you will only have the date of the most recent update (not all updates).
To keep a history on all changes, you might want to create an audit table (nearly identical structure to your existing table) and use a trigger to copy the pre-update data into the audit table.
This SO article talks about a solid approach: Using table auditing in order to have "snapshots" of table
You would want to look into creating a trigger to fire on update
CREATE TRIGGER Trigger_Name ON Table_Name
FOR UPDATE
AS
BEGIN
    -- update the LastUpdated column of the affected rows with the current date and/or time
END
Alternatively, you could just pass the new value for LastUpdated when updating the other fields.
We have implemented a system similar to the following:
Add a new column LastUpdatedDate to your table, with a datatype of datetime.
If needed, add another column LastUpdatedBy, with a datatype of varchar(100) or whatever length you need.
This will store both the date/time and who performed the update.
Then when you update the row in the table, you will include those two columns in your update statement similar to this:
update yourtable
set yourCol = 'newValue',
LastUpdatedDate = getdate(),
LastUpdatedBy = 'yourUserIdentifier'