I am running a vacuum on a very large table.
When I run it, it says:
bacula=# VACUUM FULL VERBOSE file_partition_19
bacula-# ;
INFO: vacuuming "public.file_partition_19"
INFO: "file_partition_19": found 16242451 removable, 21024161 nonremovable row versions in 900380 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 5.14s/14.42u sec elapsed 19.61 sec.
VACUUM
Time: 163784.767 ms
bacula=#
When it does this, it shows up to the CPU line fairly quickly, then waits a long time before it shows the final two lines (+ the prompt). This is reflected in the difference in time - "elapsed 19.61 sec", compared to the "Time:" of 163 seconds (shown because I set \timing on).
While I haven't timed them, both times are about right - start the command, wait 20 seconds, it then shows up to the "CPU" line, then waits about 3 minutes, then prints the rest.
Is this normal? Why is it happening?
It's mostly rebuilding all indexes on the table, which it has to do since VACUUM FULL essentially does a full rewrite of the table. If you remove all indexes from your table, there should be almost no delay after the "CPU" line.
AFAICT, the CPU usage line is printed by a generic routine that does most of the work for other (non-FULL) vacuum modes. It is meaningless in the "VACUUM FULL" case.
If you are concerned that it takes too long, I recommend that you have a look at the "When to use VACUUM FULL and when not to" from the PostgreSQL wiki. 9 times out of 10 when people are using VACUUM FULL they actually shouldn't.
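If most of the delay really is index rebuilding, a quick way to see how much index data VACUUM FULL has to recreate is to list the indexes on the table with their sizes (a sketch, using the table name from the question):
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'file_partition_19';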
Based on the tag "postgres-9.3" you used for your question, I am assuming that you are running Postgres 9.3.
You can refer to this link for background on "VACUUM" and "VACUUM FULL" in the pre-9.0 versions of Postgres:
VACUUM VS VACUUM FULL For Pre-9.0 versions of Postgres
Since you are on Postgres 9.3, the documentation says the following:
For clarity, 9.0 changes VACUUM FULL. As covered in the documentation, the VACUUM FULL implementation has been changed to one that's similar to using CLUSTER in older versions. This gives a slightly different set of trade-offs from the older VACUUM FULL described here. While the potential to make the database slower via index bloating had been removed by this change, it's still something you may want to avoid doing, due to the locking and general performance overhead of a VACUUM FULL.
As per the current documentation, a VACUUM FULL operation not only reclaims the space from records that are marked deleted, but also touches every valid record in the table and tries to reorganize them within the DB pages; that is how it frees up more space than a plain VACUUM. So in the VERBOSE output, when we see the line
CPU 5.14s/14.42u sec elapsed 19.61 sec
this is the time taken by the system process to go through the table, analyze it, and reclaim the space that is already marked as free. It then starts reorganizing the records into the page files, so depending on how fragmented the table pages are, this second phase takes time.
For example, suppose you have a new table and keep adding records incrementally/sequentially, so that new records are appended at the end (based on the primary key defined). Now you delete records in reverse order, so rows are only removed from the end. Let's say you delete half of the records from the table. In this situation there is hardly any page fragmentation (virtually zero), so when VACUUM FULL runs its second phase it will still try to reorganize the valid records, but because there is no fragmentation it will not actually have to move any records and will finish faster.
But the situation described above is not the way updates and deletes happen in the real world. Real-world updates and deletes on a table create lots of page fragmentation, so during the second phase the VACUUM FULL process actually has to move valid records into the free space at the beginning of each page, which takes more time.
Check the following sample output:
I ran it on a very small dummy table with only 7 rows. The vacuum phase (first phase) finishes in 0.03 sec (30 ms), but the total query is reported as 61 ms. That tells me that even when there is nothing to reorganize, the process still checks how much could be reorganized, and that takes time. If there were actually a lot of fragmentation and records had to be moved, the completion time would be much longer, depending on the degree of page fragmentation.
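For reference, here is a minimal sketch (the table name and row counts are made up) that reproduces the two phases on a toy table; run it in psql with \timing on:
CREATE TABLE dummy_table (id int PRIMARY KEY, payload text);
INSERT INTO dummy_table SELECT g, repeat('x', 100) FROM generate_series(1, 7) AS g;
DELETE FROM dummy_table WHERE id > 3;   -- leave some dead rows behind
VACUUM FULL VERBOSE dummy_table;        -- the CPU line ends the first phase; the rewrite and index rebuild follow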
Related
Is it possible to run PostgreSQL 11's VACUUM FULL for a short while and then get some benefit? Or does cancelling it midway cause all of its progress to be lost?
I've read about pg_repack (https://aws.amazon.com/blogs/database/remove-bloat-from-amazon-aurora-and-rds-for-postgresql-with-pg_repack/) but the way it works (creating new tables, copying data, etc.) sounds risky to me. Is that my paranoia or is it safe to use on a production database?
Backstory: I am working with a very large production database on AWS Aurora PostgreSQL 11. Many of the tables had tens of millions of records but have been pruned down significantly. The problem is that the table sizes on disk (and in the snapshots) have not decreased because DELETE and VACUUM (without FULL) do not shrink the files. These tables are in the hundreds of gigabytes range and I'm afraid running VACUUM FULL will take forever.
No. VACUUM FULL writes a new physical file for the table. Stopping it before it finishes voids the work done so far.
The manual:
VACUUM FULL rewrites the entire contents of the table into a new disk file with no extra space, allowing unused space to be returned to the operating system. This form is much slower and requires an ACCESS EXCLUSIVE lock on each table while it is being processed.
This is the main reason why community tools like pg_repack or pg_squeeze were created, which are more flexible, less blocking, and often faster, too. (I don't think pg_squeeze is available for Aurora, yet).
pg_repack might be a bit of overkill. You can instead just delete tuples from the end of the table and reinsert them towards the front of the table (reusing space already marked as free by an earlier VACUUM), at which point another ordinary VACUUM can truncate away the free space at the end of the table.
with d as (delete from mytable where ctid>='(50000,1)' returning *)
insert into mytable select * from d;
You can use pg_freespacemap to figure out a good starting point for the ctid criterion.
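For example, a minimal sketch using the pg_freespacemap extension (mytable as in the statement above) to see from which block onward free space accumulates:
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;
SELECT blkno, avail
FROM pg_freespace('mytable')
WHERE avail > 0
ORDER BY blkno
LIMIT 20;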
This row-shuffling approach might not behave well if you have triggers or FK constraints, and it might bloat the indexes such that they would need to be rebuilt (but they probably do anyway). It will also lock a large number of rows at a time, for the duration it takes the re-insert to run and commit.
Improvements made since v11 make the ctid scan more efficient than it is in v11.
I just want to check that my understanding of these two things is correct. If it's relevant, I am using Postgres 9.4.
I believe that one should vacuum a database when looking to reclaim space from the filesystem, e.g. periodically after deleting tables or large numbers of rows.
I believe that one should analyse a database after creating new indexes, or (periodically) after adding or deleting large numbers of rows from a table, so that the query planner can make good calls.
Does that sound right?
vacuum analyze;
collects statistics and should be run as often as the data changes (especially after bulk inserts). It does not take exclusive locks on objects. It does put load on the system, but it is worth it. It does not reduce the size of the table, but it marks scattered freed-up space (e.g. deleted rows) for reuse.
vacuum full;
reorganises the table by creating a copy of it and switching over to it. This form of vacuum requires additional disk space to run, but it reclaims all unused space in the object. It therefore requires an exclusive lock on the object (other sessions have to wait for it to complete). It should be run as often as data is changed (deletes, updates) and when you can afford to make others wait.
Both are very important on a dynamic database.
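To check how often these actually run on your tables, a small illustrative query against pg_stat_user_tables:
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;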
Correct.
I would add that you can change the value of the default_statistics_target parameter (it defaults to 100) in the postgresql.conf file to a higher number; after reloading the configuration, run ANALYZE to obtain more accurate statistics.
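A minimal sketch of that, assuming 9.4+ and sufficient privileges (the value 500 is just an example); note that a configuration reload is enough:
ALTER SYSTEM SET default_statistics_target = 500;  -- example value
SELECT pg_reload_conf();
ANALYZE;  -- or ANALYZE mytable; for a single table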
I process a table with ~10^7 rows the following way: take the last N rows, update them in some way, then delete them, then vacuum the table. At the end I query pg_total_relation_size. The loop repeats until the whole table has been processed. Each iteration lasts several seconds. There are no other queries against this table except those mentioned above. The problem is that I keep getting the same result for the table size; it only changes about once every several hours.
So the question is: does Postgres store the table size somewhere, or does it calculate it every time the function is invoked? I.e., does my table size really stay the same in spite of the processing?
Your table really does stay the same size on disk despite the DELETEs and VACUUMing you're doing. As per the documentation on VACUUM, ordinary VACUUM only releases space back to the OS if it can do so by truncating free space from the end of the file without rearranging live rows.
The space is still "free" in that PostgreSQL can re-use it for other new rows. It is much, much faster to re-use space that PostgreSQL hasn't given back to the OS than it is to extend a relation with new space, so this is often preferable.
The other reason Pg doesn't just give this space back is that it can only give space back to the OS when it's a contiguous chunk with no visible rows until the end of the file. This doesn't happen much so in practice Pg needs to move some rows around to compact the table and allow it to free space at the end, kind of like a defrag on a file system. This is an inefficient and slow process that can counter-intuitively make the table slower to access instead of faster, so it's not always a good idea.
If you have a relation that's mostly but not entirely empty it can be worth doing a VACUUM FULL (Pg 9.0 and above) or CLUSTER (all versions) to free the space. If you expect to refill the table this is usually counter-productive; it's actually better to leave it as-is.
(For what I mean by terms like "live" and "visible" see the documentation on MVCC which will help you understand Pg's table organisation.)
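To see this in numbers, a minimal sketch (mytable is a placeholder) comparing the on-disk size you are measuring with the dead-tuple count that VACUUM leaves behind for reuse:
SELECT pg_size_pretty(pg_total_relation_size('mytable')) AS total_size,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mytable';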
Personally I'd skip the manual VACUUM in your case. Turn autovacuum up if you need to. If you really need to, you could consider partitioning your table, processing it partition by partition, and TRUNCATEing each partition when you're done with it.
We're using Postgresql 9.1.4 as our db server. I've been trying to speed up my test suite, so I've started profiling the db a bit to see exactly what's going on. We are using database_cleaner to truncate tables at the end of tests. YES I know transactions are faster, I can't use them in certain circumstances so I'm not concerned with that.
What I AM concerned with, is why TRUNCATION takes so long (longer than using DELETE) and why it takes EVEN LONGER on my CI server.
Right now, locally (on a Macbook Air) a full test suite takes 28 minutes. Tailing the logs, each time we truncate tables... ie:
TRUNCATE TABLE table1, table2 -- ... etc
it takes over 1 second to perform the truncation. Tailing the logs on our CI server (Ubuntu 10.04 LTS), it takes a full 8 seconds to truncate the tables and a build takes 84 minutes.
When I switched over to the :deletion strategy, my local build took 20 minutes and the CI server went down to 44 minutes. This is a significant difference and I'm really blown away as to why this might be. I've tuned the DB on the CI server, it has 16gb system ram, 4gb shared_buffers... and an SSD. All the good stuff. How is it possible:
a. that it's SO much slower than my Macbook Air with 2gb of ram
b. that TRUNCATION is so much slower than DELETE when the postgresql docs state explicitly that it should be much faster.
Any thoughts?
This has come up a few times recently, both on SO and on the PostgreSQL mailing lists.
The TL;DR for your last two points:
(a) The bigger shared_buffers may be why TRUNCATE is slower on the CI server. Different fsync configuration or the use of rotational media instead of SSDs could also be at fault.
(b) TRUNCATE has a roughly fixed cost that is not necessarily lower than that of a DELETE, plus it does more work. See the detailed explanation that follows.
UPDATE: A significant discussion on pgsql-performance arose from this post. See this thread.
UPDATE 2: Improvements have been added to 9.2beta3 that should help with this, see this post.
Detailed explanation of TRUNCATE vs DELETE FROM:
While not an expert on the topic, my understanding is that TRUNCATE has a nearly fixed cost per table, while DELETE is at least O(n) for n rows; worse if there are any foreign keys referencing the table being deleted.
I always assumed that the fixed cost of a TRUNCATE was lower than the cost of a DELETE on a near-empty table, but this isn't true at all.
TRUNCATE table; does more than DELETE FROM table;
The state of the database after a TRUNCATE table is much the same as if you'd instead run:
DELETE FROM table;
VACUUM (FULL, ANALYZE) table; (9.0+ only, see footnote)
... though of course TRUNCATE doesn't actually achieve its effects with a DELETE and a VACUUM.
The point is that DELETE and TRUNCATE do different things, so you're not just comparing two commands with identical outcomes.
A DELETE FROM table; allows dead rows and bloat to remain, allows the indexes to carry dead entries, doesn't update the table statistics used by the query planner, etc.
A TRUNCATE gives you a completely new table and indexes, as if they had just been CREATEd. It's like you deleted all the records, reindexed the table and did a VACUUM FULL.
If you don't care if there's crud left in the table because you're about to go and fill it up again, you may be better off using DELETE FROM table;.
Because you aren't running VACUUM you will find that dead rows and index entries accumulate as bloat that must be scanned then ignored; this slows all your queries down. If your tests don't actually create and delete all that much data you may not notice or care, and you can always do a VACUUM or two part-way through your test run if you do. Better, let aggressive autovacuum settings ensure that autovacuum does it for you in the background.
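If you go that route, a minimal sketch of per-table autovacuum tuning (the table name and thresholds are illustrative):
ALTER TABLE mytable SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum once ~1% of rows are dead
    autovacuum_vacuum_threshold    = 50
);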
You can still TRUNCATE all your tables after the whole test suite runs to make sure no effects build up across many runs. On 9.0 and newer, VACUUM (FULL, ANALYZE); globally on the table is at least as good if not better, and it's a whole lot easier.
IIRC Pg has a few optimisations that mean it might notice when your transaction is the only one that can see the table and immediately mark the blocks as free anyway. In testing, when I've wanted to create bloat I've had to have more than one concurrent connection to do it. I wouldn't rely on this, though.
DELETE FROM table; is very cheap for small tables with no f/k refs
To DELETE all records from a table with no foreign key references to it, all Pg has to do is a sequential table scan that sets the xmax of the tuples it encounters. This is a very cheap operation - basically a linear read and a semi-linear write. AFAIK it doesn't have to touch the indexes; they continue to point to the dead tuples until they're cleaned up by a later VACUUM, which also marks blocks in the table containing only dead tuples as free.
DELETE only gets expensive if there are lots of records, if there are lots of foreign key references that must be checked, or if you count the subsequent VACUUM (FULL, ANALYZE) table; needed to match TRUNCATE's effects within the cost of your DELETE.
In my tests here, a DELETE FROM table; was typically 4x faster than TRUNCATE at 0.5ms vs 2ms. That's a test DB on an SSD, running with fsync=off because I don't care if I lose all this data. Of course, DELETE FROM table; isn't doing all the same work, and if I follow up with a VACUUM (FULL, ANALYZE) table; it's a much more expensive 21ms, so the DELETE is only a win if I don't actually need the table pristine.
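A minimal sketch of that comparison (t is a throwaway table; run it in psql with \timing on, and expect the numbers to vary with hardware and fsync settings):
CREATE TABLE t (id int);
INSERT INTO t SELECT generate_series(1, 10);
DELETE FROM t;               -- cheap: sequential scan that marks rows dead
VACUUM (FULL, ANALYZE) t;    -- the extra work needed to match TRUNCATE's effects
TRUNCATE t;                  -- fixed cost: new files, catalog work, fsync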
TRUNCATE table; does a lot more fixed-cost work and housekeeping than DELETE
By contrast, a TRUNCATE has to do a lot of work. It must allocate new files for the table, its TOAST table if any, and every index the table has. Headers must be written into those files and the system catalogs may need updating too (not sure on that point, haven't checked). It then has to replace the old files with the new ones or remove the old ones, and has to ensure the file system has caught up with the changes with a synchronization operation - fsync() or similar - that usually flushes all buffers to the disk. I'm not sure whether the sync is skipped if you're running with the (data-eating) option fsync=off.
I learned recently that TRUNCATE must also flush all PostgreSQL's buffers related to the old table. This can take a non-trivial amount of time with huge shared_buffers. I suspect this is why it's slower on your CI server.
The balance
Anyway, you can see that a TRUNCATE of a table that has an associated TOAST table (most do) and several indexes could take a few moments. Not long, but longer than a DELETE from a near-empty table.
Consequently, you might be better off doing a DELETE FROM table;.
--
Note: on DBs before 9.0, CLUSTER table_id_seq ON table; ANALYZE table; or VACUUM FULL ANALYZE table; REINDEX table; would be a closer equivalent to TRUNCATE. The VACUUM FULL impl changed to a much better one in 9.0.
Brad, just to let you know. I've looked fairly deeply into a very similar question.
Related question: 30 tables with few rows - TRUNCATE the fastest way to empty them and reset attached sequences?
Please also look at this issue and this pull request:
https://github.com/bmabey/database_cleaner/issues/126
https://github.com/bmabey/database_cleaner/pull/127
Also this thread: http://archives.postgresql.org/pgsql-performance/2012-07/msg00047.php
I am sorry for writing this as an answer, but I didn't find any comment links, maybe because there are too many comments there already.
I've encountered a similar issue lately, i.e.:
The time to run a test suite which used DatabaseCleaner varied widely between different systems with comparable hardware;
Changing the DatabaseCleaner strategy to :deletion provided a ~10x improvement.
The root cause of the slowness was a filesystem with journaling (ext4) used for database storage. During TRUNCATE operation the journaling daemon (jbd2) was using ~90% of disk IO capacity. I am not sure if this is a bug, an edge case or actually normal behaviour in these circumstances. This explains however why TRUNCATE was a lot slower than DELETE - it generated a lot more disk writes. As I did not want to actually use DELETE I resorted to setting fsync=off and it was enough to mitigate this issue (data safety was not important in this case).
A couple of alternate approaches to consider:
Create an empty database with static "fixture" data in it, and run the tests in that. When you are done, just drop the database, which should be fast.
Create a new table called "test_ids_to_delete" that contains columns for table names and primary key ids (see the sketch after this list). Update your deletion logic to insert the ids/table names into this table instead, which will be much faster than running deletes. Then, write a script to run "offline" to actually delete the data, either after an entire test run has finished, or overnight.
The former is a "clean room" approach, while the latter means some test data will persist in the database for longer. The "dirty" approach with offline deletes is what I'm using for a test suite with about 20,000 tests. Yes, there are sometimes problems due to having "extra" test data in the dev database, but at times this "dirtiness" has helped us find and fix bugs, because the "messiness" better simulated a real-world situation in a way that a clean-room approach never will.
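A minimal sketch of the deferred-deletion table from the second approach (all names are illustrative):
CREATE TABLE test_ids_to_delete (
    table_name text   NOT NULL,
    row_id     bigint NOT NULL
);
-- during tests, record what should be deleted instead of deleting it:
INSERT INTO test_ids_to_delete (table_name, row_id) VALUES ('users', 42);
-- an offline cleanup script later deletes the recorded rows table by table.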
I have a table with a few million tuples.
I perform updates in most of them.
The first update takes about a minute. The second takes two minutes. The third update takes four minutes.
After that, I execute a VACUUM FULL.
Then, I execute the update again, which takes two minutes.
If I dump the database and recreate it, the first update will take one minute.
Why doesn't PostgreSQL performance get back to its maximum after a VACUUM FULL?
VACUUM FULL does not compact the indexes. In fact, indexes can be in worse shape after performing a VACUUM FULL. After a VACUUM FULL, you should REINDEX the table.
However, VACUUM FULL+REINDEX is quite slow. You can achieve the same effect of compacting the table and the indexes using the CLUSTER command which takes a fraction of the time. It has the added benefit that it will order your table based on the index you choose to CLUSTER on. This can improve query performance. The downsides to CLUSTER over VACUUM FULL+REINDEX is that it requires approximately twice the disk space while running. Also, be very careful with this command if you are running a version older than 8.3. It is not MVCC safe and you can lose data.
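A minimal sketch of that (the table and index names are placeholders; the USING form needs 8.3+):
CLUSTER mytable USING mytable_pkey;  -- rewrites the table and rebuilds all of its indexes
ANALYZE mytable;                     -- refresh planner statistics afterwards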
Also, you can do a no-op ALTER TABLE ... ALTER COLUMN statement to get rid of the table and index bloat; this is the quickest solution.
Finally, any VACUUM FULL question should also address why you need to do this in the first place. This is almost always caused by incorrect vacuuming. You should be running autovacuum and tuning it properly so that you never have to run a VACUUM FULL.
The order of the tuples might be different, which results in different query plans. If you want a fixed order, use CLUSTER. Lower the FILLFACTOR as well and turn on autovacuum. And did you ANALYZE as well?
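If you try the FILLFACTOR suggestion, a minimal sketch (the table name and the value 70 are illustrative); the new setting only affects pages written after the change, so a rewrite such as CLUSTER is needed for existing data:
ALTER TABLE mytable SET (fillfactor = 70);                  -- leave ~30% of each page free for updates
SELECT reloptions FROM pg_class WHERE relname = 'mytable';  -- verify the setting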
Use EXPLAIN to see how a query is executed.