Move rows older than x days to archive table or partition table in Postgres 11 - postgresql

I would like to speed up the queries on my big table that contains lots of old data.
I have a table named post that has a date column created_at. The table has ~31 million rows, and ~30 million of them are older than 30 days.
Actually, I want this:
move data older than 30 days into the post_archive table or create a partition table.
when the value in the created_at column becomes older than 30 days, that row should be moved to the post_archive table or a partition.
Any detailed and concrete solution for PostgreSQL 11.15?
My ideas:
Solution 1. create a cron script in whatever language (e.g. JavaScript) and run it every day to copy data from the post table into post_archive and then delete data from the post table
Solution 2. create a Postgres function that should copy the data from the post table into the partition table, and create a cron job that will call the function every day
Thanks

This is to split your data into a post and post_archive table. It's a common approach, and I've done it (with SQL Server).
Before you do anything else, make sure you have an index on your created_at column on your post table. Important.
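If it is missing, creating it is a single statement (the index name here is just an example):
CREATE INDEX post_created_at_idx ON post (created_at);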
Next, you need to use a common expression to mean "thirty days ago". This is it.
(CURRENT_DATE - INTERVAL '30 DAY')::DATE
Next, back everything up. You knew that.
Then, here's your process to set up your two tables.
CREATE TABLE post_archive AS TABLE post; to populate your archive table.
Do these two steps to repopulate your post table with the most recent thirty days. It will take forever to DELETE all those rows, so we'll truncate the table and repopulate it. That's also good because it's like starting from scratch with a much smaller table, which is what you want. This takes a modest amount of downtime.
TRUNCATE TABLE post;
INSERT INTO post SELECT * FROM post_archive
WHERE created_at > (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post_archive WHERE created_at > (CURRENT_DATE - INTERVAL '30 DAY')::DATE; to remove the most recent thirty days from your archive table.
Now, you have the two tables.
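One caveat: CREATE TABLE ... AS copies the rows but not the indexes or constraints, so give post_archive its own index on created_at as well (again, the index name is just an example):
CREATE INDEX post_archive_created_at_idx ON post_archive (created_at);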
Your next step is the daily row-migration job. PostgreSQL lacks a built-in job scheduler like SQL Server's Job or MySQL's EVENT, so your best bet is a cronjob.
It's probably wise to do the migration daily if that fits with your business rules. Why? Many-row DELETEs and INSERTs cause big transactions, and that can make your RDBMS server thrash. Smaller numbers of rows are better.
The SQL you need is something like this:
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
You can package this up as a shell script. On UNIX-derived systems like Linux and FreeBSD the shell script file might look like this.
#!/bin/sh
psql postgres://username:password@hostname:5432/database << SQLSTATEMENTS
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
SQLSTATEMENTS
Then run the shell script from cron a few minutes after 3am each day.
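For example, this crontab entry (the script path is an assumption) runs the script at 03:22 every day:
22 3 * * * /path/to/archive_posts.sh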
Some notes:
3am? Why? In many places the daylight-saving switchover messes up the time between 02:00 and 03:00 twice a year. Choosing, say, 03:22 as the time to run the daily migration keeps you well away from that problem.
CURRENT_DATE gets you midnight of today. So, if you run the script more than once in any calendar day, no harm is done.
If you miss a day, the next day's migration will catch up.
You could package up the SQL as a stored procedure and put it into your RDBMS, then invoke it from your shell script. But then your migration procedure lives in two different places. You need the cronjob and shell script in any case in PostgreSQL.
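If you do take that route, a minimal sketch of such a procedure might look like this (PostgreSQL 11 supports CREATE PROCEDURE; the procedure name is an example):
CREATE PROCEDURE archive_old_posts()
LANGUAGE sql
AS $$
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
$$;
The shell script then boils down to CALL archive_old_posts(); which, in autocommit mode, also runs both statements in a single transaction.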
Will your application go off the rails if it sees identical rows in both post and post_archive while the migration is in progress? If so, you'll need to wrap your SQL statements in a transaction. That way other users of the database won't see the duplicate rows. Do this.
#!/bin/sh
psql postgres://username:password@hostname:5432/database << SQLSTATEMENTS
START TRANSACTION;
INSERT INTO post_archive SELECT * FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
DELETE FROM post
WHERE created_at <= (CURRENT_DATE - INTERVAL '30 DAY')::DATE;
COMMIT;
SQLSTATEMENTS
Cronjobs are quite reliable on Linux and FreeBSD.

Related

How to get the count of hourly inserts on a table in PostgreSQL

I need a setup where rows older than 60 days get removed from the table in PostgreSQL.
I have created a function and a trigger:
CREATE OR REPLACE FUNCTION delete_old_rows() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
-- table and function names are placeholders
DELETE FROM my_table
WHERE updateDate < NOW() - INTERVAL '60 days';
RETURN NULL;
END;
$$;
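The trigger that calls it looks something like this (PostgreSQL 11+ syntax; names are placeholders):
CREATE TRIGGER trim_old_rows
AFTER INSERT ON my_table
FOR EACH STATEMENT
EXECUTE FUNCTION delete_old_rows();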
But I believe if the insert frequency is high, this will have to scan the entire table quite often, which will cause high DB load.
I could run this function through a cron job or Lambda function every hour/day. I need to know the number of inserts per hour on that table to make that decision.
Is there a query or job that I can set up to collect those details?
Just to count the number of records per hour, you could run this query:
SELECT CAST(updateDate AS date) AS day
     , EXTRACT(HOUR FROM updateDate) AS hour
     , COUNT(*)
FROM _your_table
WHERE updateDate BETWEEN ? AND ?
GROUP BY 1, 2
ORDER BY 1, 2;
We do about 40 million INSERTs a day on a single table that is partitioned by month, and after 3 months we just drop the partition. That is way faster than a DELETE.
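For reference, a minimal sketch of that setup with declarative partitioning (available since PostgreSQL 10; all names and dates are illustrative):
CREATE TABLE events (
    id bigserial,
    updateDate timestamptz NOT NULL
) PARTITION BY RANGE (updateDate);
CREATE TABLE events_2023_01 PARTITION OF events
    FOR VALUES FROM ('2023-01-01') TO ('2023-02-01');
-- Once a partition ages out, dropping it is a fast metadata-only operation:
DROP TABLE events_2023_01;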

Delayed PostgreSQL backup table

Is there an easy way to create a replica table on the same PostgreSQL server that is delayed by 24 hours?
I have tried creating a cron job that does an insert with a command like this:
INSERT INTO TABLE_BACKUP
SELECT * FROM TABLE_new WHERE time < (NOW() - INTERVAL '1 day');
I have found that with the above approach, some records are missing.
Is there a better way?

Postgres operations are slow

My Postgres queries on the records table are slow.
A simple query like this can take 15 seconds!
The result: 32k rows (out of 1.5 million).
SELECT COUNT(*)
FROM project.records
WHERE created_at > NOW() - INTERVAL '1 day'
I have an index on created_at (which is a timestamp).
What can I do about this? Is my table too big?
As suggested by @Andomar, I moved the large columns to another table.
I made sure to do a VACUUM ANALYZE to really clean the table.
Now the query takes 400 ms.
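For reference, that kind of split looks roughly like this (a sketch; the column names are illustrative):
CREATE TABLE record_bodies AS
SELECT id, big_text_column FROM project.records;
ALTER TABLE project.records DROP COLUMN big_text_column;
-- VACUUM FULL rewrites the table and returns the freed space to the OS;
-- a plain VACUUM ANALYZE only marks the space reusable and refreshes statistics.
VACUUM FULL ANALYZE project.records;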

PostgreSQL table data movement

In PostgreSQL I want to create a script that deletes data older than 1 month from table A (which contains many rows) and inserts this data into a new archive table, and I want to execute this script every month.
For that I have created this script:
insert into B select * from A where date < (now() - '30 days'::interval);
delete from A where date < (now() - '30 days'::interval);
But some months have 30 days and some have 31, so how can I set this up in crontab so that it deletes exactly the right data and moves it to the archive table?
While Laurenz's answer makes much sense and seems to be a good guess of what the OP really wants (a monthly cron job probably means he wants to flush the previous month rather than rows "older than one month", so the date_trunc approach fits), here is the answer to the other reading (as I understand the original post):
begin;
insert into B select * from A where date < (now() - '1 month'::interval);
delete from A where date < (now() - '1 month'::interval);
end;
This will purge not a whole calendar month, but everything older than the same clock time one month ago, e.g.:
t=# select now()-'1 month'::interval;
?column?
-------------------------------
2017-05-12 07:31:31.785402+00
(1 row)
And with this logic you might want to schedule this data purge daily, not monthly, to keep the last month of data in the active table rather than "up to two months" until cron fires...
Run it on the first of every month and write it like this:
... WHERE date >= date_trunc('month', current_timestamp) - INTERVAL '1 month'
AND date < date_trunc('month', current_timestamp)
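Spelled out in full, the monthly job might look like this (a sketch, using the table names from the question):
BEGIN;
INSERT INTO B SELECT * FROM A
WHERE date >= date_trunc('month', current_timestamp) - INTERVAL '1 month'
AND date < date_trunc('month', current_timestamp);
DELETE FROM A
WHERE date >= date_trunc('month', current_timestamp) - INTERVAL '1 month'
AND date < date_trunc('month', current_timestamp);
COMMIT;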
If your table contains a lot of data, you might want to look into partitioning.

Huge PostgreSQL table - Select, update very slow

I am using PostgreSQL 9.5. I have a table which is almost 20 GB. It has a primary key on the ID column, which is an auto-increment column; however, I am running my queries on another column, which is a timestamp. I am trying to select/update/delete on the basis of the timestamp column, but the queries are very slow. For example, a select on this table with WHERE timestamp_column::date < (current_date - INTERVAL '10 DAY')::date is taking more than 15 minutes or so.
Can you please suggest what kind of index I should add to this table (if needed) to make it perform faster?
Thanks
You can create an index on the expression from your WHERE clause:
CREATE INDEX ns_event_last_updated_idx ON ns_event (CAST(last_updated AT TIME ZONE 'UTC' AS DATE));
But keep in mind that you're using timestamp with time zone; casting this type to date can have undesirable side effects.
Also, remove all the casting in your SQL:
select * from ns_event where Last_Updated < (current_date - INTERVAL '25 DAY');
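With the casts gone, a plain B-tree index on the raw timestamp column is enough for the planner to use an index scan; a minimal sketch (the index name is an example):
CREATE INDEX ns_event_last_updated_plain_idx ON ns_event (last_updated);
SELECT * FROM ns_event WHERE Last_Updated < (current_date - INTERVAL '25 DAY');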