I am updating ~8M rows in a table. The column I am updating is client_id, and the table has a composite index on user_id and client_id. Is this going to affect the indexing in some way?
Doing a large update will be slower with indexes in place, since the indexes have to be updated as well. Whether that is an issue depends on many things.
After the update the indexes will still be correct, but a REINDEX might be in order to reclaim the space they waste. If these 8M rows are the majority of the table, a VACUUM FULL may be in order to reduce disk space usage, but if the table is heavily updated all the time, it might not be worth it.
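For illustration, a sketch with a hypothetical table mytable and its default-named composite index (both commands lock out writes to the table, so plan for a maintenance window):

REINDEX INDEX mytable_user_id_client_id_idx;  -- rewrites the index compactly
VACUUM FULL mytable;                          -- rewrites the whole table and its indexes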
So if you want, you can drop the index, run the update, and recreate the index afterwards, but it is impossible to say whether that would be faster than doing the update with the index in place.
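A sketch of that approach; the index name and the SET expression are only placeholders for your actual ones:

DROP INDEX mytable_user_id_client_id_idx;
UPDATE mytable SET client_id = client_id + 1000;  -- your actual 8M-row update
CREATE INDEX ON mytable (user_id, client_id);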
I need indexes on some large tables to support ON DELETE CASCADE, but the size of btree indexes would be a problem. I therefore tried with BRIN, since the performance isn't critical. However, it seems they are never used, or at least the deletes are about as slow as without indexes. Could someone confirm that the planner can use BRIN indexes for cascading deletes? Is there a way I could examine the plan?
Use auto_explain with these settings:
auto_explain.log_min_duration = '0ms'
auto_explain.log_nested_statements = on
and you will get the plan for the cascading delete.
And with this setting, it will get sent to your client and you won't have to go dig it out of the log file:
set client_min_messages = log;
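Putting it together for a one-off session (a sketch; LOAD may require superuser privileges, and the table name is a placeholder):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_nested_statements = on;
SET client_min_messages = log;
DELETE FROM parent_table WHERE id = 42;  -- the plans of the cascaded deletes show up as LOG messages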
BRIN indexes can be used. But if the table is not well suited to BRIN (the rows are not stored roughly in order of the key column), using one will not actually make things faster.
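For reference, a minimal sketch with hypothetical names, where child_table references its parent through parent_id:

CREATE INDEX ON child_table USING brin (parent_id);

BRIN stores only a summary entry per range of table pages, so the index stays tiny, but it is only selective if rows with the same parent_id are physically clustered together.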
In Postgres I can query the pg_stat_user_indexes view to verify whether an index has ever been scanned. I have quite a few indexes with 0 scans and a few with fewer than 5 scans. I am considering removing those indexes, but I want to know when they were last used. Is there a way to find out when the last index scan happened for an index?
No, you cannot find out when an index was last used. I recommend that you take the usage count now and again a month from now. Then see if the usage count has increased.
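For example, snapshot the usage counts with a query like this and compare the results a month later:

SELECT schemaname, relname, indexrelname, idx_scan
FROM   pg_stat_user_indexes
ORDER  BY idx_scan;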
Don't hesitate to drop indexes that are rarely used, unless you are dealing with a data warehouse. Even if the occasional query can take longer, all data modifications on the table will become faster, which is a net win.
I had a 200 GB table with a 49 GB index. Only insert and update operations happen on that table. I dropped the existing index and created a new one on the same columns. The new index is only 6 GB in size. I am using a Postgres database.
Can someone explain how the index size got reduced from 49 GB to 6 GB?
The newly created index is essentially optimally packed, sorted data. To put more data somewhere in the middle while still maintaining optimally packed, sorted data, you would have to rewrite half of the index with every insert, on average.
That is not acceptable, so the database uses a more complicated and clever format for indexes (based on the b-tree data structure) that allows changing the logical order of index blocks without moving them on disk. The consequence is that after inserting data in the middle, some index blocks are no longer 100% packed. The leftover space can be reused later, but only by values that sort into that particular block.
So, depending on your usage pattern, you can easily have index blocks only 10% packed on average.
This is compounded by the fact that when you update a row, both the old and the new version have to be present in the index at the same time. So if you bulk-update the whole table, the index has to expand to hold twice the number of rows, if only briefly. But it will not shrink back as easily, since that basically requires rewriting all of it.
The index size tends to grow at first and then stabilize with usage. But the stable size is often nowhere near the size of a freshly created index.
You might want to tune autovacuum to be more aggressive, so that space no longer needed in the table and its indexes is reclaimed faster and can therefore be reused sooner. This can make your index stabilize faster and at a smaller size. Also try to avoid very large bulk updates, or run VACUUM FULL tablename after a huge update.
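A sketch of per-table tuning; the table name and the values are only examples, adjust them to your workload:

ALTER TABLE mytable SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- start vacuuming after ~1% dead rows instead of the 20% default
    autovacuum_vacuum_cost_limit   = 2000   -- let each autovacuum round do more work before napping
);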
I have a table bsort:
CREATE TABLE bsort(a int, data text);
Here data may be incomplete. In other words, some tuples may not have a data value.
And then I build a b-tree index on the table:
CREATE INDEX ON bsort USING BTREE(a);
Now if I perform this query:
SELECT * FROM bsort ORDER BY a;
Does PostgreSQL sort the tuples with O(n log n) complexity, or does it get the order directly from the b-tree index?
For a simple query like this, Postgres will use an index scan and retrieve readily sorted tuples from the index, in order. Due to its MVCC model, Postgres always had to additionally visit the "heap" (data pages) to verify that entries are actually visible to the current transaction. Quoting the Postgres Wiki on index-only scans:
PostgreSQL indexes do not contain visibility information. That is, it is not directly possible to ascertain if any given tuple is visible to the current transaction, which is why it has taken so long for index-only scans to be implemented.
Which finally happened in version 9.2: index-only scans. The manual:
If the index stores the original indexed data values (and not some lossy representation of them), it is useful to support index-only scans, in which the index returns the actual data not just the TID of the heap tuple. This will only avoid I/O if the visibility map shows that the TID is on an all-visible page; else the heap tuple must be visited anyway to check MVCC visibility. But that is no concern of the access method's.
The visibility map decides whether index-only scans are possible. They are only an option if all involved column values are included in the index; otherwise, the heap has to be visited (additionally) in any case. Either way, the sort step is not needed.
That's why we sometimes append otherwise useless columns to indexes now. Like the data column in your example:
CREATE INDEX ON bsort (a, data); -- btree is the default index type
It makes the index bigger (how much depends on the data) and a bit more expensive to maintain and to use for other purposes. So only append the data column if you get index-only scans out of it. The order of columns in the index is important:
Working of indexes in PostgreSQL
Is a composite index also good for queries on the first field?
Since Postgres 11, there are also "covering indexes" with the INCLUDE keyword. Like:
CREATE INDEX ON bsort (a) INCLUDE (data);
See:
Does a query with a primary key and foreign keys run faster than a query with just primary keys?
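To verify that you actually get index-only scans, check the execution plan. With one of the two indexes above in place and the table vacuumed, you should see an "Index Only Scan" node with "Heap Fetches: 0":

EXPLAIN (ANALYZE, BUFFERS)
SELECT a, data FROM bsort ORDER BY a;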
The benefit of an index-only scan, per documentation:
If it's known that all tuples on the page are visible, the heap fetch can be skipped. This is most noticeable on large data sets where the visibility map can prevent disk accesses. The visibility map is vastly smaller than the heap, so it can easily be cached even when the heap is very large.
The visibility map is maintained by VACUUM which happens automatically if you have autovacuum running (the default setting in modern Postgres). Details:
Are regular VACUUM ANALYZE still recommended under 9.1?
But there is some delay between write operations to the table and the next VACUUM run. The gist of it:
Read-only tables stay ready for index-only scans once vacuumed.
Data pages that have been modified lose their "all-visible" flag in the visibility map until the next VACUUM (and until all older transactions have finished), so it depends on the ratio between write operations and VACUUM frequency.
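After a big write, a manual run can therefore restore index-only scans sooner; for the example table:

VACUUM bsort;  -- sets the all-visible bits in the visibility map where possible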
Partial index-only scans are still possible if some of the involved pages are marked all-visible. But if the heap has to be visited anyway, the access method "index scan" is a bit cheaper. So if too many pages are currently dirty, Postgres will switch to the cheaper index scan altogether. The Postgres Wiki again:
As the number of heap fetches (or "visits") that are projected to be needed by the planner goes up, the planner will eventually conclude that an index-only scan isn't desirable, as it isn't the cheapest possible plan according to its cost model. The value of index-only scans lies wholly in their potential to allow us to elide heap access (if only partially) and minimise I/O.
You would need to check the execution plan, but Postgres is quite capable of using the index to make the ORDER BY more efficient: it can read the rows directly from the index in sorted order. Since the query is SELECT *, the data pages still have to be visited for the data column, but the sort step is avoided; only an index covering all selected columns could skip the heap entirely.
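For example:

EXPLAIN SELECT * FROM bsort ORDER BY a;

An Index Scan node without a separate Sort node on top means the order comes from the index.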
I have a fragmentation problem in my production database. One of my main data tables is about 6 GB in size (3 GB of indexes, about 9M records) and has 94%(!) index fragmentation.
I know that reorganizing the indexes would solve this problem, BUT my database is on SQL Server 2008 R2 Express, which has a 10 GB database size limit, and my database is already 8 GB.
I have read a few blog posts about this issue, but none addressed my situation.
Question 1: How much of a size increase (in % or GB) can I expect after reorganizing the indexes on that table?
Question 2: Will dropping the index and building the same index again take less space? Time is not a factor for me at the moment.
Extra question: Any other suggestions for dealing with database fragmentation? I only know to avoid shrinking like the plague ;)
Having an index on key columns will improve joins and filters by removing the need for a table scan. A well-maintained index can drastically improve performance.
It is right that GUIDs make a poor choice for indexed columns, but by no means does that mean you should not create these indexes. Ideally, a data type of INT or BIGINT would be advisable.
For me, using NEWSEQUENTIALID() as the default (instead of the random NEWID()) has shown some improvement in counteracting index fragmentation, but if all alternatives fail you may have to run index maintenance (rebuild, reorganize) operations more often than for other indexes. Reorganizing needs some working space, but in your scenario, since time is not a concern, I would disable the index, shrink the database, and rebuild the index.
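A sketch in T-SQL; the table and index names are placeholders, and REBUILD needs more working space than REORGANIZE:

-- check fragmentation first
SELECT avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED');

ALTER INDEX IX_MyTable_Key ON dbo.MyTable REORGANIZE;  -- in place, minimal extra space
-- or, when space permits:
ALTER INDEX IX_MyTable_Key ON dbo.MyTable REBUILD;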