When does PostgreSQL update its statistical information for the CBO? - postgresql

I know the statistical information is updated by VACUUM ANALYZE and CREATE INDEX, but I'm not sure about some other situations:
inserting new data into a table
leaving the database idle (and waiting for autovacuum?)
deleting some rows from a table
truncating a partition of a table

CREATE INDEX does not cause new statistics to be calculated.
The autovacuum daemon will run an ANALYZE process for all tables that have more than 10% of their data changed (this is the default configuration, controlled by autovacuum_analyze_scale_factor and autovacuum_analyze_threshold). These changes are INSERTs, UPDATEs, and DELETEs. TRUNCATE will clear the statistics for a table.
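If you want to see whether that threshold has been crossed for a specific table, a query against pg_stat_user_tables shows the change counter and the last (auto)analyze times; my_table below is just a placeholder name.

-- Sketch: inspect analyze activity and the change counter for one table (my_table is a placeholder)
SELECT relname,
       n_live_tup,
       n_mod_since_analyze,   -- rows inserted/updated/deleted since the last ANALYZE
       last_analyze,          -- last manual ANALYZE
       last_autoanalyze       -- last ANALYZE run by the autovacuum daemon
FROM pg_stat_user_tables
WHERE relname = 'my_table';
-- The thresholds that trigger an automatic ANALYZE:
SHOW autovacuum_analyze_scale_factor;   -- default 0.1 (10%)
SHOW autovacuum_analyze_threshold;      -- default 50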

Related

Temp table updates are slower than normal table updates in PostgreSQL

I have a situation where updates on my temp table are slow. Below is the scenario:
The temp table is created once per session; for the rest of the session, only insert, update, and delete operations are run against it.
First I insert the rows, and based on those rows I update other columns. These updates are slow compared to a normal table: when I replaced the temp table with a normal table, the run took around 50 to 60 seconds, whereas with the temp table it takes nearly 5 minutes.
I tried ANALYZE on the temp table and got better performance: with ANALYZE the updates complete in about 50 seconds.
I also tried user-defined types, but no luck.
The record count in the temp table is 480.
Can anyone help improve the performance of the temp table without ANALYZE, or suggest an alternative for bulk collect and bulk insert into user-defined types?
All of the above operations are done in PostgreSQL.
The lack of information in your question forces me to guess, but if all other things are equal, the difference is probably that you don't have accurate statistics on the temporary table. For normal tables, autovacuum takes care of that automatically, but autovacuum cannot access temporary tables, so you have to call ANALYZE explicitly to gather table statistics.
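A minimal sketch of that advice, with made-up table and column names: run ANALYZE once between the bulk load and the updates so the planner has accurate row counts for the temp table.

-- Sketch: analyze the temp table right after the bulk insert (names are illustrative)
CREATE TEMP TABLE session_data (id int PRIMARY KEY, val text, flag boolean);
INSERT INTO session_data (id, val)
SELECT g, 'x' FROM generate_series(1, 480) AS g;
ANALYZE session_data;   -- autovacuum never does this for temp tables
UPDATE session_data SET flag = true WHERE id <= 100;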

Huge delete on PostgreSQL table : Deleting 99,9% of the rows of the table

I have a table in my PostgreSQL database that became huge, filled with a lot of useless rows.
As these useless rows represent 99.9% of my table data (about 3.3M rows), I was wondering if deleting them could have a bad impact on my DB:
I know that this operation could take some time, and I will be able to block writes on the table during the maintenance window.
But I was wondering if this huge change in the data could also impact performance after the operation itself.
I found solutions like creating a new table or using TRUNCATE to drop all rows, but as this operation is specific and a one-off, I would like to choose the most suitable solution.
I know that PostgreSQL has a VACUUM mechanism, but I'm not a DBA expert: could anyone please confirm that this delete will not impact my table integrity / data structure, and that the freed space will be reclaimed if needed for new data?
PostgreSQL 11.12, with default settings, on AWS RDS. I don't have any index on the table, and the criterion for row deletion will not be based on the PK.
Deleting rows typically does not shrink a PostgreSQL table, so you would then have to run VACUUM (FULL) to compact it, during which the table is inaccessible.
If you are deleting many rows, both the DELETE and the VACUUM (FULL) will take a long time, and you would be much better off like this:
create a new table that is defined like the old one
INSERT INTO new_tab SELECT * FROM old_tab WHERE ... to copy over the rows you want to keep
drop foreign key constraints that point to the old table
create all indexes and constraints on the new table
drop the old table and rename the new one
By planning that carefully, you can get away with a short downtime.
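A rough sketch of those steps, with placeholder names (old_tab, new_tab, and keep_flag standing in for the real keep condition), assuming writes to old_tab are blocked while this runs:

BEGIN;
-- new table defined like the old one
CREATE TABLE new_tab (LIKE old_tab INCLUDING DEFAULTS);
-- copy over only the 0.1% of rows you want to keep
INSERT INTO new_tab SELECT * FROM old_tab WHERE keep_flag;
-- drop foreign keys that point at old_tab, then create indexes/constraints on new_tab here
DROP TABLE old_tab;
ALTER TABLE new_tab RENAME TO old_tab;
COMMIT;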

Performance of truncate and insert vs update

I have a table with more than 1 million records, and the table is growing every day. I need to update two columns of that table every day. What is the better approach: truncate the table and re-insert, or update the rows in place?
Example:
today
userid  activitycount
1       18
tomorrow
userid  activitycount
1       19
Make sure that the fillfactor of the table is less than 50 and that the updated columns are not indexed.
Then the updates will become HOT updates that don't need to modify any index, and autovacuum will make sure that tomorrow's update will find enough free space.
The disadvantage is the bloat you have with this method, but you don't need to create new tables and rename them, which may be problematic with concurrent transactions.
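As a sketch of that setup (activity is a placeholder table name): lowering fillfactor only affects newly written pages, so the table needs one rewrite before existing data benefits.

-- Sketch: leave free space on each page so updated row versions stay on the same page (HOT)
ALTER TABLE activity SET (fillfactor = 45);
VACUUM (FULL) activity;   -- one-time rewrite so existing pages respect the new fillfactor
-- the daily job then updates the two non-indexed columns in place
UPDATE activity SET activitycount = activitycount + 1;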
It is faster to truncate the table and copy the data in again. The Postgres docs on populating a database explain how to load big datasets efficiently:
This section contains some suggestions on how to make this process as efficient as possible.
Use COPY: use COPY to load all the rows in one command, instead of using a series of INSERT commands.
Remove indexes: if you need indexes, create them only after the data has been inserted.
Remove foreign key constraints: create the constraints only after the data has been inserted.
Tune the Postgres installation: maintenance_work_mem, max_wal_size, disable WAL archival and streaming replication, ...
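A minimal sketch of the truncate-and-reload approach, assuming the fresh data is available as a CSV file; activity and /tmp/activity.csv are placeholder names, and psql's \copy can be used instead if the server cannot read the file directly:

BEGIN;
TRUNCATE activity;
-- one bulk COPY instead of a million row-by-row statements
COPY activity (userid, activitycount) FROM '/tmp/activity.csv' WITH (FORMAT csv);
COMMIT;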

Move truncated records to another table in PostgreSQL 9.5

The problem is the following: remove all records from one table and insert them into another.
I have a table that is partitioned by a date criterion. To avoid partitioning each record one by one, I collect the data in one table and periodically move it to another table. The copied records then have to be removed from the first table. I'm using a DELETE query with RETURNING, but the side effect is that autovacuum has a lot of work to do to clean up the mess in the original table.
I'm trying to achieve the same effect (copy and remove records), but without creating additional work for the vacuum mechanism.
As I'm removing all rows (DELETE without a WHERE condition), I was thinking about TRUNCATE, but it does not support a RETURNING clause. Another idea was to somehow configure the table to remove a tuple from its page immediately on delete, without waiting for vacuum, but I could not find out whether that is possible.
Can you suggest something that I could use to solve my problem?
You need to use something like:
--Open your transaction
BEGIN;
--Prevent concurrent writes, but allow concurrent data access
LOCK TABLE table_a IN SHARE MODE;
--Copy the data from table_a to table_b; you could also use CREATE TABLE AS for this
INSERT INTO table_b SELECT * FROM table_a;
--Empty table_a
TRUNCATE TABLE table_a;
--Commit and release the lock
COMMIT;

Alter Table Set Statistics requires table lock

I have run into a case where Pg always prefers a sequential scan for a table that has around 70M rows. (An index scan is ideal for that query; I confirmed this by setting enable_seq_scan=off, and the speed improved by 200x.)
So, in order to help Pg understand my data better, I executed this:
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
Unfortunately, this requires a SHARE UPDATE EXCLUSIVE lock on the entire table, which is too much locking for my workload.
Is there a solution to avoid locking for this statement?
The table is sharded based on the primary key range, so I would also like Pg to understand my PK better, so that it knows which user has a lot of data. Would it also help to increase the statistics target of the primary key column?
From the very docs you linked
SET STATISTICS
This form sets the per-column statistics-gathering target for subsequent ANALYZE operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics target (default_statistics_target). For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.
SET STATISTICS acquires a SHARE UPDATE EXCLUSIVE lock.
And, on the docs for Explicit Locking
SHARE UPDATE EXCLUSIVE
Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent schema changes and VACUUM runs.
Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX CONCURRENTLY, and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see ALTER TABLE).
So you can't change the schema or run VACUUM while the statistics target is being changed. So what? That should happen very fast, almost instantly.
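For completeness, a sketch of the whole sequence: SET STATISTICS only records the new target, and the statistics are actually gathered by the next ANALYZE, which takes the same SHARE UPDATE EXCLUSIVE lock listed above.

-- Change the per-column target (brief lock, no data scan), then collect the new statistics
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
ANALYZE tablename (columnname);   -- scans a sample of the table and updates pg_statistic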