Delayed PostgreSQL backup table - postgresql

Is there an easy way to create a replica table on the same PostgreSQL server that is delayed by 24 hours?
I have tried creating a cron job that does an insert with a command like this:
INSERT INTO TABLE_BACKUP
SELECT * FROM TABLE_new WHERE time < (NOW() - INTERVAL '1 day');
I have found that with the above, some records end up missing.
Is there a better way?
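One more robust sketch, assuming both tables share a primary key (called id here) and TABLE_BACKUP has a unique constraint on it: make the copy idempotent with ON CONFLICT DO NOTHING, so reruns can neither duplicate rows nor permanently skip them.
-- sketch: assumes a unique constraint on TABLE_BACKUP (id)
INSERT INTO TABLE_BACKUP
SELECT * FROM TABLE_new
WHERE time < (NOW() - INTERVAL '1 day')
ON CONFLICT (id) DO NOTHING;  -- safe to re-run; a later run picks up anything an earlier run missed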

Related

Move rows older than x days to an archive table or partition table in Postgres 11

I would like to speed up the queries on my big table that contains lots of old data.
I have a table named post that has the date column created_at. The table has ~31 million rows, of which ~30 million are older than 30 days.
Actually, I want this:
move data older than 30 days into the post_archive table or create a partition table.
when the value in column created_at becomes older than 30 days then that row should be moved to the post_archive table or partition table.
Is there a detailed and concrete solution for PostgreSQL 11.15?
My ideas:
Solution 1. create a cron script in whatever language (e.g. JavaScript) and run it every day to copy data from the post table into post_archive and then delete data from the post table
Solution 2. create a Postgres function that should copy the data from the post table into the partition table, and create a cron job that will call the function every day
Thanks
This answer shows how to split your data into a post table and a post_archive table. It's a common approach, and I've done it (with SQL Server).
Before you do anything else, make sure you have an index on your created_at column on your post table. Important.
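For example (the index name is illustrative):
CREATE INDEX IF NOT EXISTS post_created_at_idx ON post (created_at);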
Next, you need one consistent expression meaning "thirty days ago". This is it:
(CURRENT_DATE - INTERVAL '30 DAY')::DATE
Next, back everything up. You knew that.
Then, here's your process to set up your two tables.
CREATE TABLE post_archive AS TABLE post;
This populates your archive table.
Do these two steps to repopulate your post table with the most recent thirty days. It will take forever to DELETE all those rows, so we'll truncate the table and repopulate it. That's also good because it's like starting from scratch with a much smaller table, which is what you want. This takes a modest amount of downtime.
TRUNCATE TABLE post;
INSERT INTO post SELECT * FROM post_archive
WHERE created_at > (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post_archive WHERE created_at > (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
This removes the most recent thirty days from your archive table.
Now, you have the two tables.
Your next step is the daily row-migration job. PostgreSQL lacks a built-in job scheduler like SQL Server's jobs or MySQL's EVENT, so your best bet is a cron job.
It's probably wise to do the migration daily if that fits with your business rules. Why? Many-row DELETEs and INSERTs cause big transactions, and that can make your RDBMS server thrash. Smaller numbers of rows are better.
The SQL you need is something like this:
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
You can package this up as a shell script. On UNIX-derived systems like Linux and FreeBSD the shell script file might look like this.
#!/bin/sh
psql postgres://username:password@hostname:5432/database << SQLSTATEMENTS
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
SQLSTATEMENTS
Then run the shell script from cron a few minutes after 3am each day.
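For example, a crontab entry like this (the script path is illustrative) runs it at 03:22 every day:
22 3 * * * /usr/local/bin/migrate_posts.sh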
Some notes:
3am? Why? In many places the daylight-saving switchover messes up the time between 02:00 and 03:00 twice a year. Choosing, say, 03:22 as the time to run the daily migration keeps you well away from that problem.
CURRENT_DATE gets you midnight of today. So, if you run the script more than once in any calendar day, no harm is done.
If you miss a day, the next day's migration will catch up.
You could package up the SQL as a stored procedure and put it into your RDBMS, then invoke it from your shell script. But then your migration procedure lives in two different places. You need the cronjob and shell script in any case in PostgreSQL.
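If you do go that route, here's a sketch (PostgreSQL 11 supports CREATE PROCEDURE; the procedure name is illustrative):
CREATE PROCEDURE migrate_old_posts()
LANGUAGE sql
AS $$
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
$$;
-- the shell script then just runs: CALL migrate_old_posts();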
Will your application go off the rails if it sees identical rows in both post and post_archive while the migration is in progress? If so, you'll need to wrap your SQL statements in a transaction. That way other users of the database won't see the duplicate rows. Do this.
#!/bin/sh
psql postgres://username:password@hostname:5432/database << SQLSTATEMENTS
START TRANSACTION;
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
COMMIT;
SQLSTATEMENTS
Cronjobs are quite reliable on Linux and FreeBSD.
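One more option if a single day's migration is still too big a transaction: chunk the DELETE. Postgres has no DELETE ... LIMIT, so a common workaround goes via ctid (a sketch; the batch size is illustrative, and you repeat until zero rows are affected):
DELETE FROM post
WHERE ctid IN (
    SELECT ctid FROM post
    WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE
    LIMIT 10000
);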

How to migrate "as of timestamp" query in PostgreSQL

I want to migrate or write an equivalent query in PostgreSQL that returns the data from a table as of one hour before the current time.
Oracle query:
select *
from T_DATA as of timestamp (systimestamp - interval '60' minute);
One suggested equivalent is a filter on a timestamp column:
select * from T_DATA where timestamp_column >= now() - interval '1 hour';
Note that this only returns rows whose timestamp_column value falls within the last hour; it does not reconstruct the table's past state the way a flashback query does.
Since flashback queries are not supported in PostgreSQL, one approach I tried is the temporal_tables extension.
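For reference, with temporal_tables the versioned table gets a sys_period tstzrange column plus a companion history table, so an "as of" query looks roughly like this (a sketch assuming a versioned t_data with history table t_data_history):
SELECT * FROM t_data
WHERE sys_period @> (now() - interval '1 hour')
UNION ALL
SELECT * FROM t_data_history
WHERE sys_period @> (now() - interval '1 hour');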

Yesterday - in Redshift & PostgreSQL - date addition compatibility

Attempting to write queries that will be compatible with both PostgreSQL and Amazon Redshift.
Reason: we sync data from PG to RS to perform complex queries, but in dev/QA environments the budget (and DB size) dictates staying with PG only.
Request: return yesterday's date
In PostgreSQL:
SELECT DATE((NOW() - '1 DAY'::INTERVAL));
In Redshift:
SELECT DATE(DATEADD(DAY, -1, GETDATE()));
Problem: neither works on the other platform.
Is there a compatible way to achieve requested action?
ORM is an option we'd like to avoid.
The following works in Postgres and Redshift:
ANSI standard SQL:
SELECT current_date - interval '1' day;
-- 2018-06-19 00:00:00
SELECT current_timestamp - interval '1' day;
-- 2018-06-19 13:40:06.509337+00
Postgres (and I believe Redshift as well) also supports the alternative (non-standard) interval syntax: interval '1 day'
Or more compact (not 100% ANSI SQL, but works in both):
SELECT current_date - 1;
-- 2018-06-19
Note that subtracting a bare integer like this only works from a date; in PostgreSQL, current_timestamp - 1 fails because there is no timestamp - integer operator.

Postgres, operations are slow

My Postgres queries on the records table are slow.
A simple query like this can take 15 seconds!
The result: 32k rows (out of 1.5 million).
SELECT COUNT(*)
FROM project.records
WHERE created_at > NOW() - INTERVAL '1 day'
I have an index on created_at (which is a timestamp)
What can I do about this? Is my table too big?
As suggested by @Andomar, I moved the large columns to another table.
I made sure to do a VACUUM ANALYZE to really clean the table.
Now the query takes 400 ms.
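For reference, the split looked roughly like this (the column names here are illustrative):
-- move the wide, rarely filtered columns into a side table keyed by id
CREATE TABLE project.record_details AS
SELECT id, large_col_1, large_col_2
FROM project.records;
ALTER TABLE project.records DROP COLUMN large_col_1, DROP COLUMN large_col_2;
-- DROP COLUMN only marks columns as dropped; VACUUM ANALYZE refreshes planner
-- statistics, and a VACUUM FULL would also rewrite the table to reclaim disk space
VACUUM ANALYZE project.records;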

Huge PostgreSQL table - Select, update very slow

I am using PostgreSQL 9.5. I have a table which is almost 20 GB. It has a primary key on the ID column, which is an auto-increment column; however, I am running my queries on another column, which is a timestamp. I am trying to select/update/delete on the basis of the timestamp column, but the queries are very slow. For example, a select on this table with a clause like `where timestamp_column::date > (current_date - INTERVAL '10 DAY')::date` takes more than 15 minutes or so.
Can you please help on what kind of Index should I add to this table (if needed) to make it perform faster?
Thanks
You can create an index on the expression from your WHERE clause:
CREATE INDEX ns_event_last_updated_idx ON ns_event (CAST(last_updated AT TIME ZONE 'UTC' AS DATE));
But keep in mind that you're using timestamp with time zone; casting this type to date can have undesirable side effects.
Also, you can remove all the casting in your SQL:
select * from ns_event where Last_Updated < (current_date - INTERVAL '25 DAY');
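Note that the expression index above only helps queries that repeat the same expression; for the uncast query, a plain index on the raw column is what the planner can use (the index name is illustrative):
CREATE INDEX ns_event_last_updated_raw_idx ON ns_event (last_updated);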