How PostgreSQL indexes deal with MVCC

In PostgreSQL, every update of a tuple creates a new tuple version. So for some period of time there can be many versions of the same tuple, and different transactions can see different versions of it (according to the visibility rules).
The index is updated before the transaction completes. How does this work with snapshot isolation (SI)?
So when a transaction updates a tuple, is the index entry updated to point to the new version of the tuple?

Since PostgreSQL implements MVCC by keeping multiple versions of one row in the table at the same time, it also keeps multiple index entries for the different versions of a single row (sometimes this can be avoided with heap-only tuples, if no indexed column is modified by the update and the updated row fits into the same table block as the original version).
The visibility information is not stored in the index, so to find the correct row version during an index scan, the table entries for all these index entries have to be checked (sometimes this can be avoided if the corresponding table block is known to contain only tuples that are visible to everybody, as recorded in the visibility map; this is an index-only scan).
Old index entries are removed along with old table entries during autovacuum.
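To see the effect, here is a minimal sketch (the table, index, and data are made up for illustration): after a VACUUM has marked all blocks as all-visible in the visibility map, an index-only scan can skip the heap visibility checks.

CREATE TABLE demo (id integer, payload text);
INSERT INTO demo SELECT g, 'row ' || g FROM generate_series(1, 100000) AS g;
CREATE INDEX demo_id_idx ON demo (id);

VACUUM ANALYZE demo;  -- sets the visibility map and updates statistics

EXPLAIN (ANALYZE) SELECT id FROM demo WHERE id < 100;
-- Expect an "Index Only Scan" node reporting "Heap Fetches: 0": no table
-- blocks had to be visited to check tuple visibility.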

Related

Overhead/downtime of partially indexing a column when no records meet the condition

I need to partially index a column, where the partial condition is on a single column (e.g. some_column = 'some_value'). I am worried about the customer impact of creating this new partial index and locking the table, and I am wondering how long that will take. In the databases where I am worried about the impact, there will be no records that meet the condition. Does this mean the overhead, and the time the table is locked, would be drastically less than if there were records to index at the time of the index creation? The column in the WHERE condition is indexed.
It will not use the index on the column in the WHERE clause to speed up creation of the empty partial index. It will still scan the full table, taking however long that takes. Not needing to sort any tuples or generate any index leaf blocks will speed it up, but probably not 'drastically'.
If you are afraid it will hold the lock too long, you can create the index CONCURRENTLY. This will take longer to do, but will hold a weaker lock while it does it. It will still need a strong lock at the beginning and at the end, but it will only be held momentarily.
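For example, a minimal sketch of the concurrent variant (table, column, and index names are hypothetical):

CREATE INDEX CONCURRENTLY some_table_partial_idx
    ON some_table (some_column)
    WHERE some_column = 'some_value';

Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and if the build fails it leaves an invalid index behind that must be dropped manually.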

PostgreSQL slow update with index

A very simple update to reset one column in a table with approximately 5 million rows:
UPDATE t_Daily
SET Price = NULL;
Price is not part of any of the indexes on that table.
Running this without indexes takes 45s.
Running this with one or more indexes takes at least 20 mins (I keep having to stop it).
I fully understand why maintaining indexes affects the performance of insert and update statements, but this update changes none of the indexed columns, so why does it have this terrible effect on performance?
Any ideas much appreciated.
That is normal and expected. Even though Price is not indexed, every updated row gets a new physical row version, and every index must get an entry pointing to that new location. Updating an index can be about ten times as expensive as updating the table itself: the table has no ordering, so the new row version can go into any free slot, but an index entry has to be inserted at its proper place in the index structure.
Since Price is not indexed, you can benefit from HOT updates, which avoid updating the indexes. To make use of that, the table has to be defined with a fillfactor under 100, so that updated rows can find room in the same block as the original rows.
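A sketch of that approach (the fillfactor value is illustrative, not a recommendation):

ALTER TABLE t_Daily SET (fillfactor = 70);
-- Applies only to blocks written from now on; rewriting the table
-- (e.g. VACUUM FULL t_Daily) applies it to existing data as well.

-- Whether updates are actually HOT can be checked afterwards
-- (unquoted identifiers fold to lower case):
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 't_daily';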
Found some further info (thanks to Laurenz-Albe for the HOT tip).
This link https://malisper.me/postgres-heap-only-tuples/ states that
Due to MVCC, an update in Postgres consists of finding the row being updated, and inserting a new version of the row back into the database. The main downside to doing this is the need to re-add the row to every index.
So it re-writes every index entry, despite the update only touching a column that is not in any index.

H2: Insert is slow because of index on column

I am using the H2 database to store data.
Each record has to be unique in the database (unique in the sense that the combination of timestamp, name, message, ... doesn't appear twice in the table). Therefore one column in the table is the hash of the record's data. To speed up checking whether a record already exists, I created an index on the hash column. Indeed, searching for a record with a given hash is very fast.
But here is the problem: while inserting 10k records is fast enough in the beginning (it takes about a second), it gets awfully slow once there are already a million records in the database (it takes a minute). This is probably because the new hashes need to be integrated into the existing index B-tree.
Is there any way to speed this up or is there a better way to ensure uniqueness of data records in the table?
Edit: To be more concrete:
Let's say my records are transactions which have the following fields:
timestamp, type, sender, recipient, amount, message
A transaction should only appear once in the table, so before inserting a new transaction I have to check whether the transaction is already in the table. Since the SHA-256 hash of all fields is unique, my idea was to add a column 'hash' to the table that holds the hash of the fields. Before inserting a new record, I calculate the hash of the fields and query the table for that hash.
An index has its own overhead. If you have a table that will receive lots of insertions, I would suggest avoiding an index on it, since maintaining the index adds overhead to every insert.
May I ask what you mean by 'one column in the table is the hash of the data in the record'?
You can create a unique key constraint (here it would be a composite key over the mentioned columns). Let me know the requirements; maybe we can find a simpler way of doing it :)
Danyal
Man, querying all the records, checking them for duplicates and then inserting the new row is probably not a good way to do this :). The overhead will only grow as the number of records increases.
Create a unique key constraint (check http://www.h2database.com/html/grammar.html ) on the combination of these fields; you don't need to compute the hash, the database will handle that. Just try to add the duplicate record: you will get an exception, so catch the exception and show an error message about the duplicate insertion.
Once you create the unique index, it won't allow you to insert any duplicate records. It is pretty secure and safe.
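A minimal sketch in H2 SQL (the table and column names are assumed from the transaction fields in the question):

CREATE TABLE transactions (
    ts        TIMESTAMP,
    type      VARCHAR(32),
    sender    VARCHAR(64),
    recipient VARCHAR(64),
    amount    DECIMAL(20, 2),
    message   VARCHAR(255),
    CONSTRAINT uq_transaction
        UNIQUE (ts, type, sender, recipient, amount, message)
);
-- Inserting the same combination twice throws a unique constraint
-- violation, which the application can catch and report as a duplicate.

The database backs the constraint with its own index, so the explicit hash column and the pre-insert lookup both become unnecessary.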
Indexing randomly distributed data is bad for performance. Once there are more entries in the index than fit in the cache, updating the index gets very slow, especially when using a hard disk. This is because seeks on a hard disk are very slow. This, in combination with the random distribution of the data, leads to very bad performance. With solid state disks it's a bit better, because random access reads are faster there.

Only row key in a Cassandra column family

I'm using Cassandra 1.1.8, and today I saw in my keyspace a column family with the following content:
SELECT * FROM challenge;
KEY
----------------------------
49feb2000100000a556522ed68
49feb2000100000a556522ed74
49feb2000100000a556522ed7a
49feb2000100000a556522ed72
49feb2000100000a556522ed76
49feb2000100000a556522ed6a
49feb2000100000a556522ed70
49feb2000100000a556522ed78
49feb2000100000a556522ed6e
49feb2000100000a556522ed6c
So, only rowkeys.
Yesterday those rows were there, and I ran some deletions (exactly on those rows). I'm using Hector:
HFactory.createMutator(keyspace, BYTES_ARRAY_SERIALIZER)
        .addDeletion(challengeRowKey(...), CHALLENGE_COLUMN_FAMILY_NAME)
        .execute();
This is a small development and test environment on a single machine / single node so I don't believe the hardware details are relevant.
Probably I'm doing something stupid, or I didn't get the point about how things work, but as far as I understood, the rows above are not valid... the column name and column value coordinates are missing, so there are no valid cells (row key / column name / column value)... is that right?
I read about ghost reads, but I thought that was a scenario for a distributed environment... is it valid after one day, and on a single Cassandra node?
From http://www.datastax.com/docs/1.0/dml/about_writes#about-deletes
"The row key for a deleted row may still appear in range query results. When you delete a row in Cassandra, it marks all columns for that row key with a tombstone. Until those tombstones are cleared by compaction, you have an empty row key (a row that contains no columns). These deleted keys can show up in results of get_range_slices() calls. If your client application performs range queries on rows, you may want to have if filter out row keys that return empty column lists."

indexes needed for table inheritance in postgres?

This is a fairly simple question, but it's one I can't find a firm answer on.
I have a parent table in PostgreSQL, and several child tables have been defined. A trigger has been established, and the child tables only have data inserted into them if a field, say field x, meets a certain criterion.
When I query the parent table with a condition on x, PostgreSQL knows to go immediately to the child table that is related to that particular value of x.
That all being said, I don't need to specify a particular index on the column x, do I? PostgreSQL already knows how to sort on it, and adding an index on x to the parent would have it generating separate indexes on x for each of the new child tables.
Creating that index is a bit redundant, right?
Creating an index on the child table for x, if x only has one value (or a very, very small number of values) in that table, is probably a loss, yes. The planner would scan the whole table anyway.
If x is a timestamp and you're specifying a timeframe that may not cover a whole partition, or if x is another range or set of values, an index would most likely be a win.
Edit: when I say one value or a range of values, I mean per child table.
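As an illustration of the case where a child index pays off, a hedged sketch with made-up names: the child holds one year of timestamps, and queries for a sub-range of that year can use the index instead of scanning the whole child table.

CREATE TABLE measurements (logdate date NOT NULL, value numeric);

CREATE TABLE measurements_2023 (
    CHECK (logdate >= DATE '2023-01-01' AND logdate < DATE '2024-01-01')
) INHERITS (measurements);

-- With inheritance, indexes are not inherited from the parent, so each
-- child table needs its own:
CREATE INDEX measurements_2023_logdate_idx ON measurements_2023 (logdate);
-- Pays off for queries that touch only part of the child table,
-- e.g. WHERE logdate BETWEEN '2023-03-01' AND '2023-03-31'.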