Is the pg_multixact folder used to manage a lock allocation table? If we consider a database that generates a large number of transactions, are there a lot of writes in pg_multixact?
There is no direct connection between pg_multixact and the number of database transactions. There is, however, a connection between row locks and multixacts:
pg_multixact is used to persist objects which, for lack of a better name, are called "multixacts". A multixact is an artifact made necessary by the way PostgreSQL implements row locks. In PostgreSQL, row locks are not kept in the shared-memory locking table, but are stored in the xmax system column of the table row itself. The value stored is the transaction number of the locking transaction. This presents a difficulty if several transactions (or subtransactions) lock the same table row: since xmax can only contain a single transaction number, PostgreSQL creates a multixact that contains all locking (sub)transaction numbers and lock levels, and stores the multixact ID in xmax.
Multixacts most frequently occur in connection with share locks on foreign key targets, but they can also happen if you use subtransactions (savepoints).
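Here is a minimal sketch of how a multixact can arise via the foreign-key case; the tables parent and child and the data are hypothetical, chosen only to illustrate:

-- Setup (once): a foreign key target and a referencing table
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (parent_id integer REFERENCES parent (id));
INSERT INTO parent VALUES (1);

-- Session 1: the foreign key check takes a KEY SHARE lock on parent row 1
BEGIN;
INSERT INTO child VALUES (1);

-- Session 2: a second KEY SHARE lock on the same parent row; xmax can hold
-- only one transaction number, so PostgreSQL creates a multixact containing
-- both locker transaction IDs and stores the multixact ID in xmax
BEGIN;
INSERT INTO child VALUES (1);

-- The pgrowlocks contrib extension can show the multixact ("multi" is true):
CREATE EXTENSION pgrowlocks;
SELECT locked_row, multi, xids, modes FROM pgrowlocks('parent');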
Is it possible to disconnect and reconnect a Postgres tablespace and all the associated objects within that tablespace?
I have a Postgres database with two tablespaces: one on a high-speed SSD drive (I've named this FASTSPACE), and the other on a slower, traditional magnetic HDD (named SLOWSPACE). The slower tablespace is reserved for large volumes of historic data which is rarely accessed.
Is it possible to temporarily disconnect SLOWSPACE, with the intention of reconnecting it at a later date? According to the DROP TABLESPACE documentation, a tablespace can only be dropped once all database objects within it have been dropped.
I'm aware that I can back up all the tables in SLOWSPACE, then delete them, and then DROP the tablespace; however, this will take time (there are several terabytes of data). If I then need the archived data again, I'll have to create a new version of the SLOWSPACE tablespace from blank, then re-create all the objects from the backups. Again, this will take time.
Is there any way of temporarily disconnecting SLOWSPACE from the database - whilst still leaving the rest of the database up and running?
Update: happy to accept Frank Heikens' two-letter answer: "no".
I am new to PostgreSQL and trying to understand advisory locks. I have the following two scenarios:
With different databases in two different sessions (works in the expected manner):
Session 1: SELECT pg_advisory_lock(1); Successfully acquires the lock
Session 2 (note in different database): SELECT pg_advisory_lock(1); Successfully acquires the lock
With different schemas in the same database: when I do the same operation, the second session blocks.
It appears that advisory locks operate at the database level rather than at the (database, schema) combination. Is my assumption correct, or is there anything I am missing?
In Postgres a schema is a namespace: more than just a prefix, but less than another database. In your second case, the session does not "block", but rather waits, as per the docs:
If another session already holds a lock on the same resource
identifier, this function will wait until the resource becomes
available.
Regarding successful locking on different databases:
After you run SELECT pg_advisory_lock(1); check pg_locks, column objid:
OID of the lock target within its system catalog, or null if the
target is not a general database object
So this number is per database: you can use the same key 1 in many databases, and each database gets its own independent lock.
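A quick way to see this for yourself (a sketch; the key 1 is just an example):

-- Session 1:
SELECT pg_advisory_lock(1);

-- In the same database, inspect the lock; the database column ties the
-- advisory lock to the current database, so pg_advisory_lock(1) issued
-- in another database is a different lock target entirely:
SELECT locktype, database, classid, objid, granted
FROM pg_locks
WHERE locktype = 'advisory';

-- Release it when done:
SELECT pg_advisory_unlock(1);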
If I have two READ COMMITTED PostgreSQL database transactions that both create a new row with the same primary key and then lock this row, is it possible to acquire both locks successfully at the same time?
My instinct is yes, since these new rows both exist only in the individual transactions' scopes, but I was curious whether new rows and locking are handled differently between transactions.
No.
Primary keys are implemented with a UNIQUE b-tree index (currently the only index type that supports uniqueness). This is what happens when trying to write to the index, per the documentation:
If a conflicting row has been inserted by an as-yet-uncommitted
transaction, the would-be inserter must wait to see if that
transaction commits. If it rolls back then there is no conflict. If it
commits without deleting the conflicting row again, there is a
uniqueness violation. (In practice we just wait for the other
transaction to end and then redo the visibility check in toto.)
Emphasis mine.
You can just try it with two open transactions (two different sessions) in parallel.
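A minimal sketch of such a test (the table and value are illustrative):

-- Setup (once):
CREATE TABLE t (id integer PRIMARY KEY);

-- Session 1:
BEGIN;
INSERT INTO t (id) VALUES (1);

-- Session 2: this INSERT blocks, waiting for session 1 to end
BEGIN;
INSERT INTO t (id) VALUES (1);

-- If session 1 commits, session 2 fails with a unique violation;
-- if session 1 rolls back, session 2's INSERT proceeds.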
I have been trying to develop replication from one Firebird database to another.
I simply add a new field named replication_flag to the tables.
My replication program starts a read committed transaction, selects rows, updates the replication_flag field of those rows, and then commits or rolls back.
My production clients do not update this replication_flag field and use read committed isolation. My single replication client updates only this replication_flag field and does not update any other fields.
I still see deadlocks and do not understand why. How can I avoid deadlocks?
It seems that your replication app uses one large transaction that updates every record of every table. Probably, by the end, the whole database has been "locked".
You should consider using one transaction per table, or per batch of records. It is also possible to use a read-only transaction for reading and another transaction for writing, with frequent commits, which allows other transactions to update the records.
An interesting slideshow: http://slideplayer.us/slide/1651121/
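A sketch of the batched approach (the table name is illustrative, and this assumes a Firebird version that supports the ROWS clause in UPDATE, i.e. 2.0 or later):

-- Flag rows in small batches, committing after each batch so that
-- production clients are never blocked for long:
UPDATE orders
SET replication_flag = 1
WHERE replication_flag = 0
ROWS 1000;
COMMIT;
-- Repeat until no rows remain to be updated.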
It's documented that in DB2 the TRUNCATE statement is not compatible with online backup, because it takes a Z lock on the table and prevents an online backup from running concurrently.
The lock wait happens when a truncate tries to get a shared lock on an internal online backup object.
Since this is by design in the product, I will have to go for workarounds, so this thread is not about a solution, but about why they can't work together. I didn't find a reasonable explanation of why there is such a limitation in DB2.
Any insights?
Thanks,
Luciano Moreira
From http://www.ibm.com/developerworks/data/library/techarticle/dm-0501melnyk/
When a table holds a Z lock, no concurrent application can read or
update data in that table.
So now we know that a Z lock gives exclusive access to a table, denying both read and write access to the table.
From http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0053474.html
Exclusive Access: No other session can have a cursor open on the table, or a lock held on the table (SQLSTATE 25001).
From https://sites.google.com/site/umeshdanderdbms/difference-between-truncate-and-delete
DELETE is a logged operation, whereas TRUNCATE empties the table at the container level.
(A logged operation means the DML operations are written to the logs (the redo log in Oracle, the transaction log in DB2, etc.), where they are kept for commit or rollback.)
This is the most interesting part. TRUNCATE just "forgets" the content of the table, whereas DELETE removes it row by row, processing all triggers, bells, and whistles. Consequently, when you truncate, all open reading cursors on the table would become invalid. To prevent such problems, you can only completely empty a table when nobody is trying to access it. An online backup obviously needs to read the table; therefore the two cannot access the same table at the same time.
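For reference, the two statements compared above, in DB2 syntax (the table name is illustrative):

-- DELETE: row by row, fully logged, fires delete triggers:
DELETE FROM archive_data;

-- TRUNCATE: empties the table at the container level. DB2 requires the
-- IMMEDIATE keyword and that this be the first statement in its unit of
-- work; it takes the Z lock discussed above, which is why it cannot run
-- while an online backup is reading the table:
TRUNCATE TABLE archive_data IMMEDIATE;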