
How can I lock the statistics (ANALYZE) of tables in PostgreSQL?
There is a scheduled job that truncates a table and inserts data into it (size more than 1 GB).
Truncating the table causes the statistics to change. Then a query that uses this table as a source gets an undesirable execution plan and takes too much time.
If I analyze the table manually, the duration of the query drops to an acceptable time.

You cannot do that.
If the scheduled job performs mass data modification, it had better run an explicit ANALYZE when it is done (and a VACUUM wouldn't hurt either).
It does not make much sense to keep the statistics when you truncate a table and insert new data into it.
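A minimal sketch of what the end of the scheduled job could look like, assuming a hypothetical table named my_big_table:

-- run at the end of the scheduled job, after the TRUNCATE and the bulk INSERTs:
ANALYZE my_big_table;
-- or, to also refresh visibility information in the same pass:
VACUUM ANALYZE my_big_table;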

Related

Full Load in Redshift - DROP vs TRUNCATE

As part of a daily load in Redshift, I have a couple of tables that I drop and fully reload (the data size is small, less than 1 million rows).
My question is which of the below two strategies is better in terms of CPU utilization and memory in Redshift:
1) Truncate data
2) DROP and Recreate Table.
If I truncate the tables, should I run VACUUM on them every day? I have read that frequently dropping and recreating tables in a database causes page fragmentation.
I would also like to enable compression on one of the tables. Is there any downside to creating the DDL with encoding every day?
Please advise! Thank you!
If you drop the tables, you will lose the permissions assigned to them, and any views that reference them will become obsolete.
TRUNCATE is the better option: it does not require a VACUUM or ANALYZE afterwards, and it is built for use cases like this.
For further information, see the Redshift TRUNCATE documentation.
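A minimal sketch of a daily full load built around TRUNCATE (the table name, S3 path, and IAM role below are placeholders, not from the original question):

TRUNCATE TABLE daily_staging;
COPY daily_staging
FROM 's3://my-bucket/daily/'
IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-load-role>'
CSV;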

PostgreSQL vacuuming a frequently updating jsonb field

I have a Postgres table with a jsonb field. The field size is about 2-4 KB per row. My application updates 100k rows per day, 2000 times (changing 0.1-0.5% of the data in the field). Autovacuum is off, and VACUUM FULL runs every night.
The vacuum frees about 100-300 GB every day and takes a long time to run, causing application downtime.
The question is: can I solve this problem while keeping the jsonb field, or must I split that field out into separate, simpler tables?
If your concern is the long downtime, then yes: VACUUM FULL requires an exclusive lock on the table being vacuumed for the entire run.
I suggest you try the pg_repack or pg_squeeze extension, depending on your Postgres version. Unlike CLUSTER and VACUUM FULL, they work online, without holding an exclusive lock on the processed tables during processing. These extensions are easy to install and use in Postgres, can reduce your downtime significantly, and will also help reduce how often you need VACUUM FULL.
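A minimal sketch of how pg_repack is typically used (the database and table names are hypothetical; check the extension's documentation for your Postgres version):

-- install the extension in the target database (the server-side package must be installed first):
CREATE EXTENSION pg_repack;
-- then repack a single table online, from the shell rather than from psql:
-- pg_repack --dbname=mydb --table=public.mytable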

How does postgresql lock tables when inserting and selecting?

I'm migrating data from one table to another, about 80,000 rows in total, in an environment where long locks or downtime are not acceptable. Essentially, the query boils down to this simple case:
INSERT INTO table_2
SELECT * FROM table_1
JOIN table_3 ON table_1.id = table_3.id;
All three tables are being read from and could receive an insert at any time. I want to just run the query above, but I'm not sure how the locking works and whether the tables will be completely inaccessible during the operation. My understanding is that only the affected (newly inserted) rows will be locked. Table 1 is only being selected from, so no harm there, and concurrent inserts are safe, so table 2 should remain freely accessible.
Is this understanding correct, and can I run this query in a production environment without fear? If it's not safe, what is the standard way to accomplish this?
You're fine.
If you're interested in the details, you can read up on multiversion concurrency control, or on the details of the Postgres MVCC implementation, or how its various locking modes interact, but the implications for your case are nicely summarised in the docs:
reading never blocks writing and writing never blocks reading
In short, every record stored in the database has some version number attached to it, and every query knows which versions to consider and which to ignore.
This means that an INSERT can safely write to a table without locking it, as any concurrent queries will simply ignore the new rows until the inserting transaction decides to commit.
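A minimal sketch of what this looks like in practice, using two concurrent sessions and the tables from the question:

-- Session A: run the migration inside a transaction (no commit yet)
BEGIN;
INSERT INTO table_2
SELECT * FROM table_1
JOIN table_3 ON table_1.id = table_3.id;

-- Session B, at the same time: reads and ordinary inserts on all three tables
-- proceed without blocking; the migrated rows are simply not visible yet
SELECT count(*) FROM table_2;

-- Session A: make the new rows visible to everyone
COMMIT;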

Postgresql / INSERT performance fall when doing a single SELECT during a batch of INSERTs

Context:
Using PostgreSQL (9.6) for a custom synchronisation project, we have an agent that performs a lot of INSERTs between database_1 and database_2 when syncing data.
For example: DB2 is down for 5 minutes and 40,000 new rows accumulate in DB1, so when DB2 is up again, all 40,000 rows are immediately synced from DB1 to DB2.
All this works great.
Problem/Fact:
During the synchronisation, the INSERT rate is around 1,000 rows per second.
However, when we run a simple SELECT count(*) FROM table during the sync (in the middle of these thousands of INSERTs), we notice that the INSERT rate falls to a few dozen per second (instead of about 1,000 per second).
Question:
Is there any reason why a SELECT operation (run inside pgAdmin by a process other than the syncing process) would slow down the batch of INSERTs?
Any locking or internal reason that might explain this?
Or should I provide more information? How can I debug more?
Hints:
Logging is fully enabled, and the INSERTs always take around 0.700 ms each (the same before and during the slowdown); that figure doesn't change.
INSERTs are currently performed one row at a time.
(I'll be happy to provide more information)
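One way to dig deeper (a sketch, not something from the original exchange): while the slowdown is happening, look at what each backend is waiting on. In 9.6, pg_stat_activity exposes wait_event columns for exactly this:

SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state <> 'idle';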

postgresql 9.2 never vacuumed and analyze

I have been given a Postgres 9.2 database of around 20 GB in size.
I looked through the database and saw that VACUUM and/or ANALYZE have never been run on any of its tables.
Autovacuum is on, and the transaction ID wraparound limit is still very far away (only 1% of it has been used).
I know nothing about the data activity (the number of deletes, inserts, and updates), but I can see that it uses a lot of indexes and sequences.
My question is:
Does the lack of VACUUM and/or ANALYZE affect data integrity (for example, a SELECT not showing all the matching rows, whether it reads from the table or from an index)? The speed of queries and writes doesn't matter.
Is it possible that, after the VACUUM and/or ANALYZE, the same query gives a different answer than it would have given before the VACUUM/ANALYZE command?
I'm fairly new to PG, thank you for your help!!
Regards,
Figaro88
Running VACUUM and/or ANALYZE will not change the result set produced by any SELECT operation (unless there were a bug in PostgreSQL). They may affect the order of the results if you do not supply an ORDER BY clause.
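A minimal illustration, using a hypothetical table:

-- without ORDER BY, the row order is an implementation detail and may change after VACUUM/ANALYZE:
SELECT id, name FROM customers;
-- with an explicit ORDER BY, the order is guaranteed regardless of any maintenance run in between:
SELECT id, name FROM customers ORDER BY id;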