Updates in a Postgres Timescale DB

I have a Postgres Timescale database with a table that contains about 2.5 million records.
The primary key is on two columns, device and time; there is also an index on these two columns, and another on the time column only.
The problems I'm experiencing are:
1) I need to do a one-off update of a new column, populating it via a calculation from existing columns, but tests so far indicate this will be a very slow process.
2) Performing an update on a single record identified by the device and time columns can be instant, or very slow, even if the record is only an hour or so old.
Tom
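For context, the kind of one-off backfill being described might look like the sketch below; the table name readings and the columns derived, value_a and value_b are hypothetical, and batching by time window is just one way to keep each transaction small on a TimescaleDB hypertable:
-- Backfill the new column one time window at a time so each transaction
-- stays small; the table, column names and dates are placeholders.
UPDATE readings
SET    derived = value_a * value_b
WHERE  time >= '2021-06-01 00:00'
AND    time <  '2021-06-02 00:00';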

Related

Bulk update Postgres table

I have a table with around 200 million records and I have added 2 new columns to it. Now the 2 columns need values from a different table. Nearly 80% of the rows will be updated.
I tried a plain UPDATE, but it takes more than 2 hours to complete.
The main table has a composite primary key of 4 columns. I dropped it, and also dropped an index that was present on one column, before updating. Now the update takes a little over an hour.
Is there any other way to speed up this update process (like batch processing)?
Edit: I used the other table (the one the values are matched from for the update) in the FROM clause of the UPDATE statement.
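For reference, an UPDATE that pulls values from another table via the FROM clause typically looks like the sketch below; all table and column names here are placeholders:
-- Pull the new values from the other table via the FROM clause.
UPDATE big_table b
SET    new_col1 = o.val1,
       new_col2 = o.val2
FROM   other_table o
WHERE  o.big_table_id = b.id;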
Not really. Make sure that max_wal_size is high enough that you don't get too many checkpoints.
After the update, the table will be bloated to about twice its original size.
That bloat can be avoided if you update in batches and VACUUM in between, but that will not make processing faster.
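If you want to raise max_wal_size for the duration of the bulk update, something like the following works on versions that have the setting (9.5+); the value shown is only an example, pick one that fits your disk and checkpoint behaviour:
-- Raise max_wal_size so the bulk update triggers fewer checkpoints,
-- then reload the configuration.
ALTER SYSTEM SET max_wal_size = '16GB';
SELECT pg_reload_conf();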
Do you need the whole update in a single transaction? I had quite a similar problem, with a table that was under heavy load and a column that needed a NOT NULL constraint. To deal with it I took these steps:
Add the columns without constraints like NOT NULL, but with defaults. That way it went really fast.
Update the columns in steps, say 1000 entries per transaction. In my case the DB load rose, so I had to add a small delay.
Add the NOT NULL constraints to the columns.
That way you don't block the table for a long time, but this is not a direct answer to your question.
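A rough sketch of the three steps above, assuming a hypothetical column new_col, an id primary key, and values coming from another table; the batch size is the one mentioned, everything else is a placeholder:
-- 1) Add the column with a default but without NOT NULL.
ALTER TABLE big_table ADD COLUMN new_col integer DEFAULT 0;

-- 2) Backfill in small ranges, one transaction per batch; loop this from a
--    script, advancing the id range each time (add a delay if load rises).
UPDATE big_table b
SET    new_col = o.val
FROM   other_table o
WHERE  o.big_table_id = b.id
AND    b.id BETWEEN 1 AND 1000;

-- 3) Once every row is populated, add the constraint.
ALTER TABLE big_table ALTER COLUMN new_col SET NOT NULL;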
First, to validate where you are, I would check iostat to see whether I/O is the limit. To speed things up, I would consider:
a higher free space map, to be sure the DB is aware of entries that can be reused; note that if pages are packed to the limit it will not bring much
maybe the foreign keys referring to the table can also be removed, to stop them locking the table
removing all indexes, since they slow the update down, and creating them again afterwards; that attacks the problem from the other direction, but it is an option, so it counts (a sketch of this pattern follows below)
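If you go down the drop-and-recreate route, the pattern is roughly the following; the constraint, index and table names are placeholders:
-- Drop the blockers before the bulk update ...
ALTER TABLE referencing_table DROP CONSTRAINT referencing_table_big_table_fkey;
DROP INDEX big_table_some_col_idx;

-- ... run the bulk update, then recreate them afterwards.
CREATE INDEX big_table_some_col_idx ON big_table (some_col);
ALTER TABLE referencing_table
  ADD CONSTRAINT referencing_table_big_table_fkey
  FOREIGN KEY (big_table_id) REFERENCES big_table (id);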
There are two types of solutions to your problem.
1) This approach works if your main table is not updated or inserted into during the process:
First, create a table with the same schema, but without the composite primary key and index, under a different name.
Then insert the data into the new table, joining in the data from the other table.
Apply all constraints and indexes to the new table after the insert.
Drop the old table and rename the new table to the old table's name. (A sketch of this approach follows after the two options.)
2) Or you can use a trigger to update those two columns on insert or update events. (This will make insert/update operations slightly slower.)
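A sketch of option 1), using placeholder table and column names (the real table has a 4-column composite key, so adjust the primary key and column list accordingly):
-- 1) Same schema, no primary key or indexes yet.
CREATE TABLE big_table_new (LIKE big_table INCLUDING DEFAULTS);

-- 2) Load it in one pass, joining in the values for the two new columns.
INSERT INTO big_table_new (id, payload, new_col1, new_col2)
SELECT b.id, b.payload, o.val1, o.val2
FROM   big_table b
JOIN   other_table o ON o.big_table_id = b.id;

-- 3) Apply constraints and indexes afterwards.
ALTER TABLE big_table_new ADD PRIMARY KEY (id);

-- 4) Swap the tables.
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;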

PostgreSQL: Which is faster, delete by an unindexed column or via another table?

In our PostgreSQL database, we have a table with around 1TB of data. The table consists of an id, name, value and a timestamp. The data isn't sorted, and we don't have an index over the timestamp. However, we want to delete everything WHERE timestamp < '2018-09-01 00:00'.
We also have a second table with an exact copy of the data we want to delete. The copy process took us around 300 Minutes.
So I wonder which one would be faster: deleting using the WHERE clause, or using the ids from the other table? And if the second one would be quicker, how should the query be written?
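For comparison, the two deletes would look roughly like the sketch below, where copy_table stands for the second table holding the rows to remove and the other names are placeholders:
-- Variant 1: filter directly on the timestamp (sequential scan, no index needed).
DELETE FROM big_table
WHERE  timestamp < '2018-09-01 00:00';

-- Variant 2: join against the copy that holds the rows to remove, by id.
DELETE FROM big_table b
USING  copy_table c
WHERE  b.id = c.id;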

Deleting rows in Postgres table using ctid

We have a table with nearly 2 billion events recorded. As per our data model, each event is uniquely identified by a combined primary key of 4 columns. Excluding the primary key, there are 5 B-tree indexes, each on a single, different column, so 6 B-tree indexes in total.
The recorded events span several years, and we now need to remove the data older than 1 year.
We have a time column with long values recorded for each event, and we use the following query:
delete from events where ctid = any ( array (select ctid from events where time < 1517423400000 limit 10000) )
Do the indexes get updated?
During testing, they didn't.
After insertion,
total_table_size - 27893760
table_size - 7659520
index_size - 20209664
After deletion,
total_table_size - 20226048
table_size - 0
index_size - 20209664
A REINDEX can be done:
Command: REINDEX
Description: rebuild indexes
Syntax:
REINDEX { INDEX | TABLE | DATABASE | SYSTEM } name [ FORCE ]
Considering all this, @a_horse_with_no_name's method is the good solution.
What we had:
Postgres version 9.4.
1 table with 2 billion rows and 21 columns (all bigint), a combined primary key of 5 columns, and 5 individual single-column indexes, with dates spanning 2 years.
It looks similar to time-series data, with a time column containing UNIX timestamps, except that it is an analytics project, so the time values do not increase in order. The table was insert- and select-only (most select queries use aggregate functions).
What we needed: our required data span is 6 months, and we need to remove the older data.
What we did (with little knowledge of Postgres internals):
Delete rows in batches of 10,000.
Initially the deletes were fast, taking milliseconds, but as the bloat increased each batch delete slowed to nearly 10 s. Then autovacuum got triggered, and it ran for almost 3 months. The insert rate was high, and each batch delete also increased the WAL size. Poor statistics on the table made the current queries so slow that they ran for minutes and hours.
So we decided to go for partitioning. Using table inheritance in 9.4, we implemented it.
Note: Postgres has declarative partitioning from version 10, which handles most of the manual work needed when partitioning via table inheritance.
Please go through the official docs, as they have a clear explanation.
Simplified, this is how we implemented it (a condensed sketch follows after these notes):
Create the parent table.
Create child tables inheriting from it, with check constraints. (We had monthly partitions, created by a scheduler.)
Indexes need to be created separately for each child table.
To drop old data, just drop the corresponding child table; no vacuum is needed and it is instant.
Make sure the Postgres setting constraint_exclusion is set to partition.
VACUUM ANALYZE the old partition after you have started inserting into the new one. (In our case, it helped the query planner use an index-only scan instead of a sequential scan.)
Using triggers, as described in the docs, may make inserts slower, so we deviated from that: since we partitioned on the time column, we calculated the table name at the application level from the time value before every insert, and it did not affect the insert rate for us.
Also read the other caveats mentioned there.
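A condensed sketch of the inheritance-based setup described above; the table names, columns and the millisecond epoch boundaries for one month are placeholders:
-- Parent table; it stays empty.
CREATE TABLE events (
    id      bigint,
    time    bigint,
    payload bigint
);

-- One child per month, with a CHECK constraint the planner can use
-- when constraint_exclusion = partition.
CREATE TABLE events_2018_01 (
    CHECK (time >= 1514764800000 AND time < 1517443200000)
) INHERITS (events);

-- Indexes have to be created on each child table separately.
CREATE INDEX ON events_2018_01 (time);

-- Dropping old data is instant: drop the oldest child table.
DROP TABLE events_2017_12;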

Update 100 million rows, minimize impact on concurrent operations

I wanted to update all rows in a table of about 1000 GB. I tried 2 methods:
1) First I tried to update 1 million rows, but after this I found that the size had increased by 30 GB. Since I didn't want to run a vacuum for this, I rejected this method.
2) Then I tried to create a clone table and insert all records from the current table into the clone. After inserting all the data I would swap the table names. Since I couldn't rename the table without service downtime, I rejected this method as well.
I need a way to update all the records without any downtime.
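For what it's worth, the rename in method 2) is itself only a catalog change; whether it counts as downtime depends on how long its brief ACCESS EXCLUSIVE lock has to wait behind concurrent queries. A minimal sketch with placeholder names:
-- Fill the clone first, then swap the names in one short transaction.
BEGIN;
ALTER TABLE big_table       RENAME TO big_table_old;
ALTER TABLE big_table_clone RENAME TO big_table;
COMMIT;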

Partitioning Oracle tables used for logging

I have an application that records activity in a table (Oracle 10g). The logging records should be kept for at least 30 days. I expect about 20 million rows to be added to this table every month.
The DBA suggested that the table be split in partitions containing one week of data. The weekly maintenance script would then delete the oldest partition (leaving only 4 weeks of data in the table).
What would be the best way of partitioning this logging table?
Partitioning a table isn't hard - it appears that you will be removing the data on a weekly basis, so the partition clauses will look like
PARTITION "P2009_45" VALUES LESS THAN
(TO_DATE(' 2009-11-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION "P2009_46" VALUES LESS THAN
(TO_DATE(' 2009-11-09 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
... etc
where your partitioning column is your date column of interest in the table.
Additional comments:
If you can upgrade to 11g you can take advantage of interval partitioning, which is similar to this range partitioning, but Oracle will manage creating the new partitions for you. (A sketch follows after these comments.)
If you're going to routinely drop off partitions, I would advise making all indexes on the table locally partitioned, to avoid the rebuilds that would be necessary with global indexes after partition operations.
If you have a good idea of the number of log entries per month, and it stays relatively constant, you might consider using a sequence (as a primary key) that is capped at this number and then recycles back to 0. Your logging statements would then have to become MERGE INTO ... statements that either create a new row or overwrite the row if it exists. This only guarantees that you'll retain the number of rows allowed by the sequence's max value, NOT a certain time interval, but it might be an alternative to partitioning (which, as DvE points out, is an extra-expense option).
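A sketch of the 11g interval partitioning mentioned in the first comment above, where Oracle creates each new weekly partition automatically as data arrives; the table and column names are placeholders:
CREATE TABLE activity_log (
    log_id    NUMBER,
    logged_at DATE,
    message   VARCHAR2(4000)
)
PARTITION BY RANGE (logged_at)
INTERVAL (NUMTODSINTERVAL(7, 'DAY'))
(
    -- Only the first partition has to be declared by hand.
    PARTITION p_initial VALUES LESS THAN
        (TO_DATE('2009-11-02 00:00:00', 'YYYY-MM-DD HH24:MI:SS'))
);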
The most likely partitioning scheme would be to range-partition your data on the creation date. Each week you would create a new partition and drop the oldest one. The impact will depend on how this table is used / indexed.
Since it is a logging table, perhaps it is not indexed; in that case dropping a partition will have little impact: referencing objects won't be invalidated, and the drop will just require a partition lock (and the oldest partition shouldn't be receiving inserts at that time).
If the table is indexed, you will have to decide whether your indexes will be global or partitioned. Global indexes will have to be rebuilt when you drop a partition (which takes time, although 20M rows is still manageable). You can use the UPDATE GLOBAL INDEXES clause to keep the indexes valid after the partition drop.
Local indexes will be partitioned like the table and may be less efficient than global indexes (index range scans will have to scan each local index instead of one common index if you do not query by date). These indexes won't have to be updated after a partition drop.
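The partition drop with global index maintenance mentioned above would look like this; the table and partition names are placeholders:
ALTER TABLE activity_log DROP PARTITION p2009_45 UPDATE GLOBAL INDEXES;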
20 million rows every month and you only have to keep 30 days of data? (That's about a month's worth.)
Even with 12 months worth of data it wouldn't be hard to query this table (as one big table) with the correct index.
Inserting is no problem either, whether there is 1 row in the logging table or 20 million.
Partitioning in Oracle is also a feature that needs to be paid for, if I'm correct, so it's costly too (if you don't already have a license).