Take the case where a group of database modifications is interleaved with some service calls, and I don't want to hold the connection in the meantime. Is there a way to suspend the transaction and resume it after the services have responded?
When the Go driver detects that a context is cancelled, I think it will idle the connection and return it to the pool. Does that imply the running DB operation that it started will also be killed on the server? Or does that continue on?
You can read the official documentation about how the driver uses the context here.
Relevant section: Socket Read and Write:
When the driver retrieves a connection for an operation, it sets the socket’s read or write deadline to either the Context deadline or socket timeout, whichever is shorter.
If you cancel the Context after the execution of the Read() or Write() function but before its deadline, the behavior of the driver differs based on version.
The driver generates a separate goroutine to listen for Context cancellation when the Read() or Write() function is in progress. If the goroutine detects a cancellation, it closes the connection. The pending Read() or Write() function returns an error which the driver overwrites with the context.Canceled error.
Note that closing the connection just means putting it back into the pool as idle. There is no explicit statement about cancelling the operation on the server, but obviously a signal is sent that the client has abandoned the operation and no longer monitors or cares about the result. The MongoDB server would be dumb not to abort the initiated operation if it can. For example, if the operation was a query, it can be aborted; if it was a write operation (such as an insert or update), it may not be.
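For illustration, here is a minimal sketch with driver v1.x (the URI, database and collection names are made up): if the Context deadline fires while the operation is in flight, the pending call returns the context error and the driver disposes of the connection underneath.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.Background())

	// Give the operation 50ms; if the server takes longer, the client
	// abandons the operation once the deadline passes.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	err = client.Database("test").Collection("items").FindOne(ctx, bson.D{}).Err()
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("operation abandoned by the client:", err)
	}
}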
Can someone tell me how to perform a distributed transaction on PostgreSQL?
I need to start a transaction from node x to node y (this node has a database), but I can't find any information on the internet about how to do it.
All I have managed so far is a distributed query with:
select dblink_connect('conn',
    'dbname=ConsultaRemota host=192.168.3.9 user=remoto password=12345 port=5432');

select * from dblink('conn', 'select * from tablaremota')
    as temp (id_remoto int, nombre_remoto text, descripcion text);
Using dblink is not a true distributed transaction, because it is possible for the remote transaction to succeed while the local transaction fails.
To perform a distributed transaction:

1. Start a normal transaction with BEGIN or START TRANSACTION on both databases.

2. Perform work on both databases.

3. Once you are done, prepare the transaction on both databases:

   PREPARE TRANSACTION 'some_name';

   This step will perform everything that could potentially fail during COMMIT and persist the transaction, but it will not yet commit it. If this step fails somewhere, use ROLLBACK or ROLLBACK PREPARED to abort the transaction on all databases.

4. Commit the transaction on all databases:

   COMMIT PREPARED 'some_name';

   This is guaranteed to succeed.
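A minimal end-to-end sketch of that sequence follows; the table, the amounts and the transaction name 'xfer_42' are made up, and PREPARE TRANSACTION only works if max_prepared_transactions is greater than 0 on both servers:

-- On database A:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'xfer_42';

-- On database B:
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 7;
PREPARE TRANSACTION 'xfer_42';

-- If both PREPARE TRANSACTION calls succeeded, finish on both databases:
COMMIT PREPARED 'xfer_42';

-- If either PREPARE TRANSACTION failed, abort whatever was already prepared:
ROLLBACK PREPARED 'xfer_42';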
To reliably perform a distributed transaction, you need a transaction manager: that is a piece of software that keeps track of all distributed transactions. This component has to persist its information, so that it can survive a crash. The job of the transaction manager is to commit or rollback any transaction that was left in an incomplete state after a crash.
This is necessary, because prepared transactions will stay around even if you restart the database, and they will hold locks and block VACUUM progress. Such orphaned prepared transactions can break your database.
Never use distributed transactions without a transaction manager!
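If you ever need to look for such orphaned transactions by hand, a sketch (pg_prepared_xacts is the system view that lists them; 'xfer_42' is the made-up name from above):

SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;

-- Resolve each one by its name (gid), one way or the other:
COMMIT PREPARED 'xfer_42';
-- or
ROLLBACK PREPARED 'xfer_42';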
We are observing some behaviours/errors in some of our workflows, related to the consistency and visibility of a Postgres write transaction followed by a read. One of our developers offered an explanation, but I could not find any search results documenting the proposed reasoning.
Given a single Postgres 10.3 host, the following operations take place:
ClientA performs a successful write transaction
After the COMMIT, an external notification is emitted
ClientB reacts to external notification and performs a read, only to find that the UPDATE transaction changes are not visible
The explanation that was proposed is that two Postgres client connections on different threads don't have a guaranteed view snapshot and may not immediately observe the write transaction's update after the commit. But from what I have read, I would expect that once the COMMIT has succeeded, a read operation that starts afterwards in response should see the effects of that write.
My specific question is: Given two database client connections on different threads, is a race condition possible in which one client fails to see the effects of a write transaction even though its read starts AFTER the other client has committed? (No overlapping transactions.)
Every bit of documentation I have found thus far only refers to concerns about overlapping/concurrent transactions and MVCC/transaction isolation topics. Nothing about a synchronised serial operation between two different client connections.
Edit: Some extra details about the configuration.
ClientA and ClientB would be different threads accessing postgres through a connection pool. Clients may both be in the same connection pool on the same application server, or it may be ClientA/ApplicationA and ClientB/ApplicationB.
When ClientB reacts, it will access the existing Application server connection pool to make a new read.
No, that cannot happen, unless the reading transaction started earlier and is running at the REPEATABLE READ or SERIALIZABLE isolation level.
There is also the possibility that the reading transaction does not connect to the same server as the writing transaction, but to a streaming replication standby server with hot_standby enabled. Then this can easily happen, even with synchronous replication (unless you set synchronous_commit = remote_apply).
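If your reads can land on such a standby, one way to close that gap (assuming synchronous streaming replication is already configured via synchronous_standby_names) is to make the writer wait until the standby has applied the commit. A sketch:

-- In the writing session, before the transaction:
SET synchronous_commit = remote_apply;

BEGIN;
-- hypothetical write; any statement behaves the same way
UPDATE accounts SET balance = 0 WHERE id = 1;
COMMIT;  -- returns only once the synchronous standby has applied the change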
I want to ask a question about handling the SUSPENDED state.
Here is the background:
I am using Curator/ZooKeeper as a task coordinator for a set of concurrently running jobs.
Every minute, each worker (each worker runs on a separate VM) tries to acquire a task (lock) from ZK by calling:
InterProcessSemaphoreMutex lock = new InterProcessSemaphoreMutex(zkClient, task);
boolean hasLock = lock.acquire(1, TimeUnit.SECONDS);
If the worker gets the lock, it will do the task.
The class responsible for retrieving the lock/task implements the ConnectionStateListener interface. Below is the current implementation:
RECONNECTED: do nothing, since the worker will try to acquire the lock
regardless of the ZK connection status.
LOST: release the lock, since the connection is lost.
SUSPENDED: ??????
My question is about the SUSPENDED state: should I release the lock when entering SUSPENDED (basically treating it as LOST), or do something else?
What is the best practice for handling the SUSPENDED state?
Thanks,
I guess you have seen the comment at the end of this page.
It is strongly recommended that you add a ConnectionStateListener and
watch for SUSPENDED and LOST state changes. If a SUSPENDED state is
reported you cannot be certain that you still hold the lock unless you
subsequently receive a RECONNECTED state. If a LOST state is reported
it is certain that you no longer hold the lock.
I interpret this as: you still hold the lock until you receive a RECONNECTED state, unless the connection is LOST, in which case the lock is released. While SUSPENDED you cannot be sure either way, so the safe move is to stop acting on the protected resource until one of those two states arrives.
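A minimal sketch of that interpretation as a Curator ConnectionStateListener (the pauseWork/resumeWork/releaseLock hooks are hypothetical placeholders for your worker's logic):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;

public class LockStateListener implements ConnectionStateListener {
    @Override
    public void stateChanged(CuratorFramework client, ConnectionState newState) {
        switch (newState) {
            case SUSPENDED:
                // Connection is in doubt: stop touching the protected resource,
                // keep the lock object, and wait for RECONNECTED or LOST.
                pauseWork();
                break;
            case RECONNECTED:
                // The session survived, so the lock is still held: resume.
                resumeWork();
                break;
            case LOST:
                // The session is gone and so is the lock: clean up locally.
                releaseLock();
                break;
            default:
                break;
        }
    }

    // Hypothetical hooks into the worker's task loop:
    private void pauseWork() { /* ... */ }
    private void resumeWork() { /* ... */ }
    private void releaseLock() { /* ... */ }
}

You would register it with client.getConnectionStateListenable().addListener(new LockStateListener());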
What's the difference between the functions pg_cancel_backend(pid int) and pg_terminate_backend(pid int)? To me they seem to work pretty much the same.
pg_cancel_backend() cancels the running query while pg_terminate_backend() terminates the entire process and thus the database connection.
When a program creates a database connection and sends queries, you can cancel one query without destroying the connection and stopping the other queries. If you destroy the entire connection, everything will be stopped.
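A quick sketch of the difference in practice (the pid 12345 is a placeholder; look up real pids first):

-- Find the backend you want to target:
SELECT pid, state, query
FROM pg_stat_activity
WHERE state = 'active';

-- Cancel only the currently running query; the connection survives:
SELECT pg_cancel_backend(12345);

-- Kill the whole backend process; the client loses its connection:
SELECT pg_terminate_backend(12345);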