Run VACUUM on a schedule - PostgreSQL

I'm using Postgres version 9.6
Most of my tables are used for queries, updates and inserts.
Most of them are around 200K-700K rows; some are bigger (millions of rows) and some smaller.
Is it a good idea to perform a VACUUM (and ANALYZE?) operation once a day or once a week, regardless of autovacuum?
What are the advantages vs. disadvantages?

Autovacuum runs when needed: it removes dead rows and refreshes the statistics that are used when planning a query.
Basically you never need to do this manually, unless you have made vast changes to a table (filled it with data, for example) and want to use it in another query within a few milliseconds. In that scenario the old statistics will make the query planner come up with a very bad plan and lead to a significantly slower query.
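For example, after bulk-loading a hypothetical table big_import, a manual run refreshes the statistics immediately instead of waiting for autovacuum to notice the change:
-- refresh planner statistics right after the load (add VACUUM if the load also left dead rows behind)
ANALYZE big_import;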
What you might want to do once per day or once per week is to CLUSTER tables and rebuild degraded indexes on tables that are modified a lot. Research these topics to decide if / when / how to do it.
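A sketch of such a weekly job, with made-up table and index names:
-- rebuild a bloated index (blocks writes to the table while it runs)
REINDEX INDEX my_table_created_at_idx;
-- refresh the planner statistics afterwards
ANALYZE my_table;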

Related

Is there a shared query plan cache for Postgres?

I have a complex Postgres query that I've optimised with pg_hint_plan. Planning time is about 150ms while execution time is about 30ms. The plan should never change, so there is no point in re-planning it each and every time the query runs. The structural problem with the query is that it hits too many tables.
Tweaking join_collapse_limit and from_collapse_limit has limited effect.
Most 'enterprise' databases have a shared query cache, but as far as I can see Postgres does not.
What are ways around this? Prepared statements aren't really suitable as their lifetime is bound to the connection.
There is no way around this. The best solution I can think of is to use connection pooling, so that your connections live for a long time, and use a prepared statement.
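A minimal sketch of that combination at the SQL level, assuming the pooler keeps the session alive (statement name, table and parameter are made up):
-- prepared once per (pooled) connection
PREPARE fetch_asset (int) AS
    SELECT * FROM assets WHERE asset_id = $1;
-- after a few executions PostgreSQL can switch to a cached generic plan for this statement
EXECUTE fetch_asset(42);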
No, PostgreSQL does not have a shared plan cache like Microsoft SQL Server or Oracle...
It is one of the many differences compared with commercial RDBMSs like SQL Server... For the latter, a complete comparison can be read here.

Postgres Upsert - fragmentation issues

Summary
I am using Postgres UPSERTs in our ETLs and I'm experiencing issues with fragmentation and bloat on the tables I am writing to, which is slowing down all operations including reads.
Context
I have hourly batch ETLs upserting into tables (the tables have tens of millions of rows, each batch upserts tens of thousands), and we have autovacuum thresholds configured on AWS.
I have had to run VACUUM FULL to get the space back and prevent processes from hanging. This has been exacerbated now that the frequency of one of our ETLs has increased; it populates some core tables which are the source for a number of denormalised views.
It seems like what is happening is that tables don't have a chance to be vacuumed before the next ETL run, thus creating a spiral which eventually leads to a complete slow-down.
Question!
Does UPSERT fundamentally have a negative impact on fragmentation, and if so, what are other people using instead? I am keen to implement some materialised views and move most of our indexes to the new views, retaining only the PK index on the tables we are writing to, but I'm not confident that this will resolve the bloat I'm seeing.
I've done a bit of reading on the issue but nothing conclusive, for example --> https://www.targeted.org/articles/databases/fragmentation.html
Thanks for your help
It depends. If there are no constraint violations, INSERT ... ON CONFLICT won't cause any bloat. If it performs an update, it will produce a dead row.
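For illustration, a hypothetical upsert (table, columns and values are made up); when the id already exists, the DO UPDATE path creates a new row version and leaves a dead one behind:
-- assumes a primary key / unique constraint on id
INSERT INTO my_table (id, payload)
VALUES (1, 'new value')
ON CONFLICT (id) DO UPDATE
    SET payload = EXCLUDED.payload;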
The measures you can take (a sketch follows after the list):
set autovacuum_vacuum_cost_delay = 0 for faster autovacuum
use a fillfactor somewhat less than 100 and have no index on the updated columns, so that you can get HOT updates, which make autovacuum unnecessary
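A sketch of both measures applied per table (my_table and the values are only examples):
-- let autovacuum on this table run without cost-based throttling
ALTER TABLE my_table SET (autovacuum_vacuum_cost_delay = 0);
-- keep some free space in every page so updated rows can stay on the same page (HOT)
ALTER TABLE my_table SET (fillfactor = 90);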
It is not clear what you are actually seeing. Can you turn track_io_timing on, and then do an EXPLAIN (ANALYZE, BUFFERS) for the query that you think has been slowed down by bloat?
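For example (the query is a made-up stand-in for the one you suspect; track_io_timing is a superuser-level setting):
-- record actual read/write timings in the plan output
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_core_table WHERE customer_id = 42;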
Bloat and fragmentation aren't the same thing. Fragmentation is more an issue with indexes under some conditions, not the tables themselves.
It seems like what is happening is that tables don't have a chance to be vacuumed before the next ETL run
This one could be very easy to fix. Run a "manual" VACUUM (not VACUUM FULL) at the end or at the beginning of each ETL run. Since you have a well-defined workflow, there is no need to try to get autovacuum to do the right thing; it should be very easy to inject manual vacuums into your workflow. Or do you think that one VACUUM per ETL run is overkill?
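For example, at the end of the ETL job, for the tables it just touched (names are made up):
-- reclaims dead row versions without blocking normal reads and writes, and refreshes statistics
VACUUM (ANALYZE) core_table;
VACUUM (ANALYZE) core_table_denormalised;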

Implementing a high-scale scheduler on a database

We have a Postgres DB with a table of tens of millions of rows.
We also have a scheduler (app code) that runs over those rows, querying for specific assets. Usually what we need there is items that are 30 days old.
We started to scale, and the scheduler has become very slow.
What is the best approach to scale with maintaining a good performance? Using a different DB? Redis? ES? Partitioning the Postgres?
Thanks!
Usually what we need there is items that are 30 days old.
That's the part of your question that's actually relevant. PostgreSQL, when used appropriately, should have absolutely no trouble running a simple WHERE query over tens of millions of rows. The cost of index lookups grows logarithmically.
To take a stab in the dark: If you are performing date calculations for every row in your WHERE statement, performance will indeed be abysmal. For example:
SELECT * FROM my_data WHERE AGE(CREATED_AT) > INTERVAL '30 days';
...is a rather bad idea. Instead, calculate the date cutoff once, and statically use it in the comparison.
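For example (my_data and created_at as in the query above), something like this lets a plain index on created_at be used:
-- the cutoff is computed once; an index on created_at can now be used
SELECT * FROM my_data
WHERE created_at < now() - INTERVAL '30 days';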
If your query is really more complicated, you could also look into expression indexes. That's overkill for the example above, and it adds some overhead to all data-modifying operations, but it would make a query like the one above perform as well as the static variant.
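Note that AGE() itself cannot be used in an index (it is not immutable), but an equivalent immutable expression can be. A sketch, assuming created_at is a timestamp without time zone column:
-- assumes created_at is timestamp WITHOUT time zone, so the ::date cast is immutable
CREATE INDEX my_data_created_date_idx ON my_data ((created_at::date));
-- a query filtering on the same expression can use the index
SELECT * FROM my_data WHERE created_at::date < CURRENT_DATE - 30;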
In any case: EXPLAIN SELECT ... is your friend, and posting its output will make you even more friends here.

Why is count(*) taking extremely long in one PostgreSQL database but not another?

I have two Postgres databases. In one I have two tables, each with about 8,000,000 rows, and a count on either of them takes about a second. In the other database, also Postgres, there are tables with 1,000,000 rows where a count takes 10 seconds, and one table with about 6,000,000 rows where a count takes 3 minutes. What factors determine how long this takes? They are on different machines, but the database that takes longer is on a faster machine.
I've read that count in Postgres is slow in general, but this still seems odd to me. I can't really use a workaround, because I am using Django, and it does a count in the admin, which is taking forever and making it difficult to use.
Any information on this would be helpful.
Speed of counting depends not just on the number of rows in the table but on the time taken to read the data from disk. The time depends on many things:
Number of rows in the table - as you already mentioned.
The number of records per page (if each record takes more space you need to read more pages to read the same number of rows).
If pages are only partly full you have to read more pages.
If the table is already cached in memory (having more memory available helps here).
If the table is indexed with a small index (the index can be counted instead).
Hardware differences.
etc....
Indexes, caches, and disk speed, for starters, all have an impact.
Is the "slow table" properly vacuumed?
Do not use VACUUM FULL for this; it can leave you with index bloat. A plain VACUUM is absolutely enough, and VACUUM ANALYZE would be even better.
Also make sure autovacuum is turned on and properly configured.
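To check that, the statistics views show when a table was last (auto)vacuumed and how many dead rows it carries (the table name is made up):
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'my_slow_table';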

Why doesn't PostgreSQL performance get back to its maximum after a VACUUM FULL?

I have a table with a few million tuples.
I perform updates in most of them.
The first update takes about a minute. The second takes two minutes. The third takes four minutes.
After that, I execute a VACUUM FULL.
Then, I execute the update again, which takes two minutes.
If I dump the database and recreate it, the first update will take one minute.
Why doesn't PostgreSQL performance get back to its maximum after a VACUUM FULL?
VACUUM FULL does not compact the indexes. In fact, indexes can be in worse shape after performing a VACUUM FULL. After a VACUUM FULL, you should REINDEX the table.
However, VACUUM FULL + REINDEX is quite slow. You can achieve the same effect of compacting the table and the indexes with the CLUSTER command, which takes a fraction of the time. It has the added benefit of ordering the table by the index you choose to CLUSTER on, which can improve query performance. The downside of CLUSTER compared to VACUUM FULL + REINDEX is that it requires approximately twice the disk space while it runs. Also, be very careful with this command if you are running a version older than 8.3: it is not MVCC safe and you can lose data.
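A sketch with made-up table and index names:
-- rewrites the table in index order and rebuilds all its indexes; takes an exclusive lock
CLUSTER my_table USING my_table_pkey;
-- CLUSTER does not gather statistics, so analyze afterwards
ANALYZE my_table;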
Also, you can do a no-op ALTER TABLE ... ALTER COLUMN statement to get rid of the table and index bloat; this is the quickest solution.
Finally, any VACUUM FULL question should also address why you needed it in the first place. This is almost always caused by incorrect vacuuming. You should be running autovacuum and tuning it properly so that you never have to run a VACUUM FULL.
The order of the tuples might be different, which results in different query plans. If you want a fixed order, use CLUSTER. Lower the FILLFACTOR as well, and make sure autovacuum is turned on. And did you ANALYZE as well?
Use EXPLAIN to see how a query is executed.