MarkLogic - Interrupting a long-running query with a database restart

In MarkLogic, if I interrupt a long-running query with a database restart, will that query then no longer be fully applied when the database comes online again?

Yes, in general canceling an update query will roll back any changes it tried to make. You can think of this like a stack: every update in your query goes onto a stack, taking any necessary locks as it goes. After all the expressions have been evaluated, the update enters the commit phase and applies that stack atomically to the database. If the query is interrupted before that atomic commit, none of the changes are durable. This behavior covers the atomicity (A) and durability (D) aspects of the ACID properties common to transactional DBMS implementations.
There are some exceptions. It is possible to structure an update so that work is applied in granular sub-transactions. One way to do that is with a multi-statement transaction.
See http://docs.marklogic.com/guide/app-dev/transactions for more.

Related

Does a SQL statement ensure atomicity in Postgres?

I have a simple bug in my program that uses multi-user support. I'm using knex to build SQL queries, and I have pseudocode that depicts the scenario:
const value = queryBuilder().readDataFromTheDatabase(); // executes this
// do some other work and get the value
queryBuilder().writeValueToTheDatabase(updateValue(value));
This piece of code is being used in a sort of middleware function. As you can see, there is a possible race condition: when multiple users try to execute this at roughly the same time, one of them gets a stale value.
My solution
So, I was thinking a possible solution would be to create a single queryBuilder statement:
queryBuilder().readAndUpdateValueInTheDatabase();
So I'll probably have to use a little bit of PL/pgSQL. I was wondering if this solution would be sufficient. Will the statement be executed atomically? That is, when one request has read but not yet finished its write, does another request wait to both read and write, or does it only wait to write and still read the stale value?
I think what you are looking for here is isolation, not atomicity. You could set all transactions to the highest isolation level, serializable (which is higher than the usual default level). With that level, if data that a transaction read (and presumably relied upon) is changed, then when it tries to commit it might get a serialization failure error. I say "might", because the system could conclude the situation would be consistent with the data change having happened after the commit, in which case the commit is allowed to stand.
To avoid a race condition with such a setup, you must run both the read and the write in the same database transaction.
There are two ways to do that:
Use the default READ COMMITTED isolation level and lock the rows when you read them:
SELECT ... FROM ... FOR NO KEY UPDATE;
That locks the rows against concurrent modifications, and the lock is held until the end of the transaction.
Use the REPEATABLE READ isolation level and don't lock anything. Then your UPDATE will receive a serialization error (SQLSTATE 40001) if somebody modified the row concurrently. In that case, you roll the transaction back and try again in a new REPEATABLE READ transaction.
The first solution is usually better if you expect conflicts frequently, while the second is better if conflicts are rare.
Note that you should keep the database transaction as short as possible in both cases to keep the risk of conflicts low.
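As a minimal sketch of the first approach - the table name counters and its id/value columns are made up for illustration, not from the question:

BEGIN;
-- lock the row so a concurrent request blocks here instead of reading a stale value
SELECT value FROM counters WHERE id = 42 FOR NO KEY UPDATE;
-- ... the application computes the new value from what it just read ...
UPDATE counters SET value = 123 WHERE id = 42;  -- 123 stands in for the computed value
COMMIT;

With this, a second request's SELECT ... FOR NO KEY UPDATE waits until the first transaction commits, so it always sees the freshly written value.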
Transactions in PostgreSQL use an optimistic concurrency model when accessing tables, while some other DBMSs use pessimistic locking (IBM Db2) or support both models (MS SQL Server).
With optimistic concurrency, a snapshot is taken of the data you are working on, and your modifications are made against that snapshot until the transaction ends. When the transaction finishes, the changes are applied to the real table rows; but if another user has changed the same data between the moment the snapshot was taken and the commit, the commit cannot be applied and is rejected, resulting in a rollback.
You can raise the isolation level (REPEATABLE READ or SERIALIZABLE) to avoid the problem.
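As a minimal sketch of raising the isolation level per transaction, reusing the hypothetical counters table from the sketch above (the retry on serialization failure belongs in application code):

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT value FROM counters WHERE id = 42;
-- ... compute the new value in the application ...
UPDATE counters SET value = 123 WHERE id = 42;
-- if another transaction modified the row concurrently, the UPDATE above fails with
-- SQLSTATE 40001; roll back and retry the whole transaction
COMMIT;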

Hanging query in PostgreSQL 9.2

I have an issue with a PostgreSQL 9.2 DB that is causing what is effectively a deadlock of the entire DB system.
Basically I have a table that acts as an operation queue. Entries are added to the table to indicate the need for an operation to be done. Subsequently one of multiple services will update these entries to indicate that the operation has been picked up for processing, and eventually delete the entry to indicate that the operation has been completed.
All access to the table is through transactions that first acquire a transactional advisory lock. This is to ensure that only one service is manipulating the queue at any point in time.
I have seen instances where queries on this queue will basically lock up and do nothing. I can see from pg_stat_activity that the affected query is state = active, waiting = false. I can also see that all requested locks for the PID in pg_locks have been granted. But the process just sits there and does nothing.
Typically I can precipitate this by repeatedly adding and removing (several hundred thousand) entries to and from the queue. All access has to go through the advisory lock, so only one thing is getting done at a time.
Once one of these queries locks up then other queries pile up behind it, waiting on the advisory lock - eventually exhausting all DB connections.
The query that locks up is typically a deletion of all entries from the queue (delete from queue_table;). I have, however, seen one instance where the query that locked up was an update of several tuples within the table.
Right now I don't see anywhere where I could be deadlocking against any other transaction. These are straightforward inserts, deletes and updates. No other tables are involved (except during addition of entries, where the data is selected from other tables).
Other pertinent facts:
All access to the table is in fact through a view (I can't access the table directly, which is why I'm using an advisory lock instead of an exclusive lock on the table or similar).
The table is logged (which is probably a really bad choice in this case; I'm going to try using an unlogged table next week).
I usually also see an autovacuum (analyze) operation, also active, also waiting = false and also apparently locked up. I presume the autovacuum is coming along to re-optimize after my mass additions / removals.
Looking for any suggestions of what else I might look at to debug this issue when I next reproduce it. I'm kind of starting to feel that this might be some kind of performance optimization / DB configuration related issue.
Any suggestions of things to look at would be most welcomed!
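For reference, the access pattern described above would look roughly like this (the lock key 42 is an arbitrary application-chosen constant, not from the question):

BEGIN;
-- serialize all queue access on one advisory lock;
-- a transactional advisory lock is released automatically at COMMIT or ROLLBACK
SELECT pg_advisory_xact_lock(42);
DELETE FROM queue_table;
COMMIT;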

Why does the 'read uncommitted' isolation level allow locks?

I put a breakpoint in my code to pause the execution before the transaction is committed or rolled back. Then I'd like to see the current state of the database, but when I set the transaction isolation level to read uncommitted in SSMS and run a query against the tables affected by the paused transaction, it gets locked and waits until the transaction is finished.
Why does this happen, and is it possible to disable locking?
My crystal ball told me that the transaction that you'd paused had made schema modifications.
Such modifications take out schema modification (Sch-M) locks (https://technet.microsoft.com/en-us/library/ms175519%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396):
This means the Sch-M lock blocks all outside operations until the lock is released.
And this even includes being able to compile your query that uses read uncommitted, because:
The Database Engine uses schema stability (Sch-S) locks when compiling and executing queries
Which makes some sense - schema modifications could include adding or removing columns so other queries can't know what the current layout of data on disk/in memory actually means.
Even for your case, where all you've done is disable constraints, the optimizer would usually make use of constraint information when planning a query - e.g. whether a check constraint can actually be trusted or not.
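A sketch of how to reproduce this blocking with two SSMS sessions (the table and constraint names are hypothetical):

-- session 1: take a Sch-M lock and leave the transaction open (this mimics the paused debugger)
BEGIN TRANSACTION;
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

-- session 2: blocks despite READ UNCOMMITTED, because compiling the query needs a Sch-S lock
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;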

Lock and transaction in Postgres that should block a query

Let's assume in SQL window 1 I do:
-- query 1
BEGIN TRANSACTION;
UPDATE post SET title = 'edited' WHERE id = 1;
-- note that there is no explicit commit
Then from another window (window 2) I do:
-- query 2
SELECT * FROM post WHERE id = 1;
I get:
1 | original title
Which is fine, as the default isolation level is READ COMMITTED, and because query 1 is not yet committed, the change it performs is not readable until I explicitly commit from window 1.
In fact if I, in window 1, do:
COMMIT TRANSACTION;
I can then see the change if I re-run query 2.
1 | edited
My question is:
Why does query 2 return fine the first time I run it? I was expecting it to block, as the transaction in window 1 was not committed yet and the lock placed on the row with id = 1 was (should be) an unreleased exclusive lock that should block a read like the one performed in window 2. All the rest makes sense to me, but I was expecting the SELECT to get stuck until an explicit commit in window 1 was executed.
The behaviour you describe is normal and expected in any transactional relational database.
If PostgreSQL showed you the value edited for the first SELECT it'd be wrong to do so - that's called a "dirty read", and is bad news in databases.
PostgreSQL would be allowed to wait at the SELECT until you committed or rolled back, but it isn't required to by the SQL standard, you haven't told it you want to wait, and it doesn't have to wait for any technical reason, so it returns the data you asked for immediately. After all, until it's committed, that update only kind-of exists - it still might or might not happen.
If PostgreSQL always waited here, then you'd quickly land up with a situation where only one connection could be doing anything with the database at a time. Not pretty for performance, and totally unnecessary the vast majority of the time.
If you want to wait for a concurrent UPDATE (or DELETE), you'd use SELECT ... FOR SHARE. (But be aware that this won't work for INSERT).
Details:
SELECT without a FOR UPDATE or FOR SHARE clause does not take any row level locks. So it sees whatever is the current committed row, and is not affected by any in-flight transactions that might be modifying that row. The concepts are explained in the MVCC section of the docs. The general idea is that PostgreSQL is copy-on-write, with versioning that allows it to return the correct copy based on what the transaction or statement could "see" at the time it started - what PostgreSQL calls a "snapshot".
In the default READ COMMITTED isolation level, snapshots are taken at the statement level, so if you SELECT a row, COMMIT a change to it from another transaction, and SELECT it again, you'll see different values even within one transaction. You can use REPEATABLE READ (snapshot) isolation if you don't want to see changes committed after the transaction begins, or SERIALIZABLE isolation to add further protection against certain kinds of transaction inter-dependencies.
See the transaction isolation chapter in the documentation.
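As a small illustration of statement-level snapshots under the default READ COMMITTED level, using the post table from the question:

BEGIN;  -- default READ COMMITTED
SELECT title FROM post WHERE id = 1;  -- returns 'original title'
-- window 1 now runs: UPDATE post SET title = 'edited' WHERE id = 1; COMMIT;
SELECT title FROM post WHERE id = 1;  -- returns 'edited', even within the same transaction
COMMIT;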
If you want a SELECT to wait for in-progress transactions to commit or rollback changes to rows being selected, you must use SELECT ... FOR SHARE. This will block on the lock taken by an UPDATE or DELETE until the transaction that took the lock rolls back or commits.
INSERT is different, though - the tuples just don't exist to other transactions until commit. The only way to wait for concurrent INSERTs is to take an EXCLUSIVE table-level lock, so you know nobody else is changing the table while you read it. Usually the need to do that means you have a design problem in the application though - your app should not care if there are uncommitted inserts still in flight.
See the explicit locking chapter of the documentation.
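And a sketch of the FOR SHARE variant that would make window 2 block until window 1 finishes:

-- window 2: this waits on the row lock held by the uncommitted UPDATE in window 1
SELECT * FROM post WHERE id = 1 FOR SHARE;
-- after window 1 commits, it returns:      1 | edited
-- after window 1 rolls back, it returns:   1 | original title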
In PostgreSQL's MVCC implementation, the principle is that reading does not block writing and vice versa. The manual:
The main advantage of using the MVCC model of concurrency control rather than locking is that in MVCC locks acquired for querying (reading) data do not conflict with locks acquired for writing data, and so reading never blocks writing and writing never blocks reading. PostgreSQL maintains this guarantee even when providing the strictest level of transaction isolation through the use of an innovative Serializable Snapshot Isolation (SSI) level.
Each transaction only sees (mostly) what has been committed before the transaction began.
That does not mean there'd be no locking. Not at all. For many operations various kinds of locks are acquired. And various strategies are applied to resolve possible conflicts.

How to make a SELECT wait for a pending INSERT to commit?

I'm using PostgreSQL 9.2 in a Windows environment.
I'm in a 2PC (2 phase commit) environment using MSDTC.
I have a client application that starts a transaction at the SERIALIZABLE isolation level, inserts a new row of data in a table for a specific foreign key value (there is an index on the column), and votes for completion of the transaction (the transaction is PREPARED). The transaction will be COMMITTED by the Transaction Coordinator.
Immediately after that, outside of a transaction, the same client requests all the rows for this same specific foreign key value.
Because there may be a delay before the previous transaction is really committed, the SELECT may return a previous snapshot of the data. In fact, it does happen sometimes, and this is problematic. Of course the application may be redesigned, but until then I'm looking for a lock solution. An advisory lock?
I already solved the problem for UPDATEs on specific rows by using SELECT...FOR SHARE, and it works well. The SELECT waits until the transaction commits and returns the old and new rows.
Now I'm trying to solve it for INSERT.
SELECT...FOR SHARE does not block and returns immediately.
There is no concurrency issue here as only one client deals with a specific set of rows. I already know about MVCC.
Any help appreciated.
To wait for a not-yet-committed INSERT you'd need to take a predicate lock. There's limited predicate locking in PostgreSQL for the serializable support, but it's not exposed directly to the user.
Simple SERIALIZABLE isolation won't help you here, because SERIALIZABLE only requires that there be an order in which the transactions could've occurred to produce a consistent result. In your case this ordering is SELECT followed by INSERT.
The only option I can think of is to take an ACCESS EXCLUSIVE lock on the table before INSERTing. This will only get released at COMMIT PREPARED or ROLLBACK PREPARED time, and in the meantime any other queries will wait for the lock. You can enforce this via a BEFORE trigger to avoid the need to change the app. You'll probably get the odd deadlock and rollback if you do it that way, though, because INSERT will take a lower lock and then you'll attempt lock promotion in the trigger. If possible it's better to run the LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command before the INSERT.
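A rough sketch of the LOCK TABLE variant, assuming a hypothetical table my_table with a foreign key column fk_id and a prepared-transaction id of 'tx1' (all names are stand-ins):

-- inserting (2PC) transaction:
BEGIN;
LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
INSERT INTO my_table (fk_id, payload) VALUES (1, 'new row');
PREPARE TRANSACTION 'tx1';
-- the coordinator later runs: COMMIT PREPARED 'tx1';

-- the follow-up read simply blocks on the table lock until COMMIT PREPARED (or ROLLBACK PREPARED):
SELECT * FROM my_table WHERE fk_id = 1;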
As you've alluded to, this is mostly an application mis-design problem. Expecting to see not-yet-committed rows doesn't really make any sense.