I need indexes on some large tables to support ON DELETE CASCADE, but the size of btree indexes would be a problem. I therefore tried with BRIN, since the performance isn't critical. However, it seems they are never used, or at least the deletes are about as slow as without indexes. Could someone confirm that the planner can use BRIN indexes for cascading deletes? Is there a way I could examine the plan?
Use auto_explain with these settings:
auto_explain.log_min_duration = '0ms'
auto_explain.log_nested_statements = on
and you will get the plan for the cascading delete.
And with this setting, it will get sent to your client and you won't have to go dig it out of the log file:
set client_min_messages = log;
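A minimal session-level sketch, assuming you are allowed to load the module; the parent table and key value are made up for illustration:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_nested_statements = on;
SET client_min_messages = log;
DELETE FROM parent_table WHERE id = 42;   -- plans of the cascaded deletes are emitted as LOG messages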
BRIN indexes can be used. But if the table is not suitable for a BRIN index (the rows are not stored roughly in order of the key column), using one will not actually be faster.
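For illustration, a BRIN index on the referencing column of a child table might look like this (table and column names are made up):
CREATE INDEX ON child_table USING brin (parent_id);
It will only pay off if child rows for the same parent are physically clustered together, as noted above.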
I have a table bsort:
CREATE TABLE bsort(a int, data text);
Here data may be incomplete; in other words, some tuples may not have a data value.
And then I build a b-tree index on the table:
CREATE INDEX ON bsort USING BTREE(a);
Now if I perform this query:
SELECT * FROM bsort ORDER BY a;
Does PostgreSQL sort the tuples with O(n log n) complexity, or does it get the order directly from the b-tree index?
For a simple query like this, Postgres will use an index scan and retrieve readily sorted tuples from the index, in order. Due to its MVCC model, Postgres always had to additionally visit the "heap" (data pages) to verify that entries are actually visible to the current transaction. Quoting the Postgres Wiki on index-only scans:
PostgreSQL indexes do not contain visibility information. That is, it
is not directly possible to ascertain if any given tuple is visible to
the current transaction, which is why it has taken so long for index-only
scans to be implemented.
Which finally happened in version 9.2: index-only scans. The manual:
If the index stores the original indexed data values (and not some
lossy representation of them), it is useful to support index-only scans, in which the index returns the actual data not just the TID of
the heap tuple. This will only avoid I/O if the visibility map shows
that the TID is on an all-visible page; else the heap tuple must be
visited anyway to check MVCC visibility. But that is no concern of the
access method's.
The visibility map decides whether index-only scans are possible. They are only an option if all involved column values are included in the index; else, the heap has to be visited (additionally) in any case. The sort step is still not needed either way.
That's why we sometimes append otherwise useless columns to indexes now. Like the data column in your example:
CREATE INDEX ON bsort (a, data); -- btree is the default index type
It makes the index bigger (how much depends on the data) and a bit more expensive to maintain and to use for other purposes. So only append the data column if you get index-only scans out of it. The order of columns in the index is important:
Working of indexes in PostgreSQL
Is a composite index also good for queries on the first field?
Since Postgres 11, there are also "covering indexes" with the INCLUDE keyword. Like:
CREATE INDEX ON bsort (a) INCLUDE (data);
See:
Does a query with a primary key and foreign keys run faster than a query with just primary keys?
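To check whether you actually get index-only scans, look at the execution plan, for instance (the index name here is just what Postgres would generate by default for the two-column index above):
EXPLAIN (ANALYZE, BUFFERS) SELECT a, data FROM bsort ORDER BY a;
-- look for "Index Only Scan using bsort_a_data_idx on bsort"
-- and "Heap Fetches: 0" (after a recent VACUUM)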
The benefit of an index-only scan, per documentation:
If it's known that all tuples on the page are visible, the heap fetch
can be skipped. This is most noticeable on large data sets where the
visibility map can prevent disk accesses. The visibility map is vastly
smaller than the heap, so it can easily be cached even when the heap
is very large.
The visibility map is maintained by VACUUM which happens automatically if you have autovacuum running (the default setting in modern Postgres). Details:
Are regular VACUUM ANALYZE still recommended under 9.1?
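For example, to make a freshly loaded or mostly static table immediately eligible for index-only scans, you can run it manually:
VACUUM ANALYZE bsort;   -- sets all-visible bits in the visibility map and updates statistics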
But there is some delay between write operations to the table and the next VACUUM run. The gist of it:
Read-only tables stay ready for index-only scans once vacuumed.
Data pages that have been modified lose their "all-visible" flag in the visibility map until the next VACUUM (and until all older transactions have finished), so it depends on the ratio between write operations and VACUUM frequency.
Partial index-only scans are still possible if some of the involved pages are marked all-visible. But if the heap has to be visited anyway, the access method "index scan" is a bit cheaper. So if too many pages are currently dirty, Postgres will switch to the cheaper index scan altogether. The Postgres Wiki again:
As the number of heap fetches (or "visits") that are projected to be
needed by the planner goes up, the planner will eventually conclude
that an index-only scan isn't desirable, as it isn't the cheapest
possible plan according to its cost model. The value of index-only
scans lies wholly in their potential to allow us to elide heap access
(if only partially) and minimise I/O.
You would need to check the execution plan. But Postgres is quite capable of using the index to make the ORDER BY more efficient: it reads the rows directly from the index, in order, so no separate sort step is needed. (With SELECT *, the data column still has to be fetched from the heap unless the index covers it.)
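A quick way to see this (the exact plan shape may differ depending on table size and statistics; the index name is the default one Postgres generates):
EXPLAIN SELECT * FROM bsort ORDER BY a;
-- e.g. Index Scan using bsort_a_idx on bsort
-- (no separate Sort node: rows are returned in index order)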
I am updating over ~8M rows in a table. The column I am updating is client_id, and the table has a composite index on user_id and client_id. Is it going to affect the indexing in some way ...?
Doing a large update will be slower with indexes, since the indexes have to be updated as well. Whether that is an issue or not depends on many things.
After the update the indexes will still be correct, but a REINDEX might be in order to improve their space usage. If these 8M rows are the majority of the table, VACUUM FULL may be in order to reduce disk space usage, but if the table is heavily updated all the time, it might not be worth it.
So if you want, you can drop the index, run the update, and recreate the index, but it is impossible to say whether that would be faster than doing the update with the index in place.
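If you go that route, a rough sketch (index, table, and column names are made up; substitute your actual bulk update):
DROP INDEX IF EXISTS users_user_id_client_id_idx;
UPDATE users SET client_id = 42;   -- your bulk update goes here
CREATE INDEX users_user_id_client_id_idx ON users (user_id, client_id);
-- or keep the index in place and just rebuild it afterwards:
REINDEX INDEX users_user_id_client_id_idx;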
I have a fragmentation problem on my production database. One of my main data tables is about 6 GB in size (3 GB of that is indexes, about 9M records) and has 94% (!) index fragmentation.
I know that reorganizing the indexes would solve this problem, BUT my database is on SQL Server 2008 R2 Express, which has a 10 GB database size limit, and my database is already 8 GB in size.
I have read a few blog posts about this issue, but none gave an answer for my situation.
My Question 1 is:
How much of a size increase (in % or in GB) can I expect after reorganizing the indexes on that table?
Question 2:
Would dropping the index and rebuilding the same index take less space? Time is not a factor for me at the moment.
Extra question:
Any other suggestions for the fragmentation? I only know that shrinking should be avoided like the plague ;)
Having an index on key columns will improve joins and filters by removing the need for a table scan. A well-maintained index can drastically improve performance.
It is true that GUIDs make a poor choice for indexed columns, but by no means does that mean you should not create these indexes. Ideally, a data type of INT or BIGINT would be advisable.
For me, adding NEWSEQUENTIALID() as a default has shown some improvement in counteracting index fragmentation, but if all alternatives fail you may have to do index maintenance (rebuild, reorganize) operations more often than for other indexes. Reorganizing needs some working space, but in your scenario, as time is not a concern, I would disable the index, shrink the database, and rebuild the index.
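A rough sketch of that sequence, with placeholder table, index, and database names (note that rebuilding a disabled index re-enables it, and that a clustered index should not be disabled this way, since that makes the table inaccessible):
ALTER INDEX IX_MainData_SomeKey ON dbo.MainData DISABLE;
DBCC SHRINKDATABASE (MyDatabase);
ALTER INDEX IX_MainData_SomeKey ON dbo.MainData REBUILD;   -- rebuild also re-enables the index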
I have a very large table with two indexes on it, but no PK (clustered) index.
Would the performance of the two indexes increase if there was also a clustered index on the table, even if I have to "contrive" one from an identity column?
A well-chosen clustered index can do wonders for your performance. Why? The clustered index defines how your data is physically stored on disk. Choosing a good clustered index will ensure you get sequential I/O instead of random I/O. That is a great performance gain, because the bottleneck in most database setups is the hard drives and the I/O they can deliver.
Try to create your clustered index on a column that is used a lot in joins.
If you just put it on your primary key, your performance will still improve, as the non-clustered indexes will use the clustered index for their lookups, which avoids table scans.
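For instance (hypothetical table and column names; pick a column your joins actually use):
CREATE CLUSTERED INDEX CIX_Orders_CustomerId ON dbo.Orders (CustomerId);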
Hope this answers your question
It's more like the opposite: non-clustered indexes suffer from clustered indexes:
http://use-the-index-luke.com/blog/2014-01/unreasonable-defaults-primary-key-clustering-key
However, if you manage to replace one of your non-clustered indexes with a clustered index, overall performance might increase... or decrease. It really depends on your workload.
This question is somewhat strange, but I bumped into it in a current implementation of mine:
I want to prioritize inserts over everything else in my application, and it occurred to me that the $hint command could also be used to make MongoDB NOT use an index.
Is that possible? Is that a sound question, considering what $hint is supposed to do?
Thanks
To force the query optimizer to not use indexes (do a table scan), use:
db.collection.find().hint({$natural:1})
Not sure if this achieves what you want (prioritizing inserts over other activity), though.
I don't think inserts work the way you think.
An insert has to add entries for its indexed fields to a B-tree for each index on the collection. As such, to prioritize inserts you would have to drop all indexes on the collection.
Using $natural order hinting will therefore make no difference to the order of reads and writes. Not to mention that $natural order is just the on-disk insertion order, an "index" you cannot effectively use in a query, so it will force a full collection scan.
However, that does not actually privilege anything, since maintaining the B-trees is part of inserting data, so there is no way, via indexes, to prioritize inserts.
Also, the write lock and the read lock are two completely different things, so again I am not sure your question makes sense.
Are you perhaps looking for an atomic lock to ensure that you update or insert data before it is read?