io.r2dbc.spi.Connection and PostgresqlConnection have createSavepoint, releaseSavepoint and rollbackTransactionToSavepoint methods.
How can I use these methods when working with R2dbcTransactionManager and TransactionalOperator?
I want to create an idempotent service that tries to insert into a table; if a unique constraint is violated, it selects the existing record and continues with that:
Mono.just(newOrder)
    .flatMap(order -> orderRepository.save(order)
        .onErrorResume(throwable -> orderRepository.findByUniqueField(newOrder.uniqueField)))
    .otherProcesses...
    .as(transactionalOperator::transactional)
And I receive "current transaction is aborted, commands ignored until end of transaction block" from PostgreSQL.
I saw this answer https://stackoverflow.com/a/48771320/5275087 but it seems autosave=always doesn't work with R2DBC.
I wanted to try something like:
transactionalOperator.execute(reactiveTransaction -> {
    GenericReactiveTransaction genericReactiveTransaction = (GenericReactiveTransaction) reactiveTransaction;
    ConnectionFactoryTransactionObject txObject = (ConnectionFactoryTransactionObject) genericReactiveTransaction.getTransaction();
    Connection connection = txObject.getConnectionHolder().getConnection();
    return Mono.from(connection.createSavepoint(TRANSACTION_LEVEL_1))
            .then(Mono.just(newOrder)
                    .flatMap(order -> orderRepository.save(order)
                            .onErrorResume(throwable ->
                                    Mono.from(connection.rollbackTransactionToSavepoint(TRANSACTION_LEVEL_1))
                                            .then(orderRepository.findByUniqueField(newOrder.uniqueField)))
                    ));
});
But R2dbcTransactionManager.ConnectionFactoryTransactionObject is a private class.
How do I achieve this without using reflection?
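One way to avoid touching the private transaction object, sketched here as an untested idea rather than a verified recipe, is to let ConnectionFactoryUtils hand you the connection that R2dbcTransactionManager has bound to the current transaction and create the savepoint on it directly. Here connectionFactory is assumed to be the same ConnectionFactory the transaction manager was built with, and Order stands in for your entity type; TRANSACTION_LEVEL_1, orderRepository and newOrder are taken from the snippets above.

import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactory;
import org.springframework.r2dbc.connection.ConnectionFactoryUtils;
import reactor.core.publisher.Mono;

// Inside the operator's callback the Reactor context carries the transaction,
// so ConnectionFactoryUtils.getConnection(...) should return the connection
// bound to that transaction rather than opening a new one (assumption).
Mono<Order> idempotentSave = transactionalOperator.execute(status ->
        ConnectionFactoryUtils.getConnection(connectionFactory)
                .flatMap(connection ->
                        Mono.from(connection.createSavepoint(TRANSACTION_LEVEL_1))
                                .then(orderRepository.save(newOrder))
                                // Roll back to the savepoint only, so the surrounding
                                // transaction leaves the aborted state and stays usable.
                                .onErrorResume(throwable ->
                                        Mono.from(connection.rollbackTransactionToSavepoint(TRANSACTION_LEVEL_1))
                                                .then(orderRepository.findByUniqueField(newOrder.uniqueField)))))
        .single();

Because the rollback goes only to the savepoint, PostgreSQL should no longer report "current transaction is aborted" when the follow-up select runs inside the same transaction.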
If I call a stored procedure using JdbcIO.Write, is it possible to capture the ID (primary key) if the stored procedure returns this data?
public JdbcIO.Write<MyObject> writeMyObject() {
    final String UPSERT_MY_OBJECT = "EXEC [MySchema].[UspertMyObject] ?,?,?";

    // If my stored procedure returns the generated or existing ID
    // is it possible to update the object I'm writing with the ID?
    return JdbcIO.<MyObject>write()
            .withDataSourceConfiguration(myDataSourceConfig)
            .withStatement(UPSERT_MY_OBJECT)
            .withPreparedStatementSetter((JdbcIO.PreparedStatementSetter<MyObject>) (myObject, ps) -> {
                ps.setInt(1, myObject.getFieldOne());
                ps.setString(2, myObject.getFieldTwo());
                ps.setString(3, myObject.getFieldThree());
            });
}
I don't think it's possible but, as a workaround, you can wait for the write to finish (with the Wait transform, see an example there) and then read the records back from the database, roughly like the sketch below.
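This is a rough sketch of that workaround rather than tested code: it assumes a Beam version in which JdbcIO.Write exposes withResults() (so the write emits a PCollection usable as a completion signal for Wait.on), and the read-back query, table and column names (MySchema.MyObject, FieldTwo, Id) and readAll() parameters are illustrative, not taken from the question.

import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.coders.StringUtf8Coder;
import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.io.jdbc.JdbcIO;
import org.apache.beam.sdk.transforms.Wait;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

// Run the upserts; withResults() gives back a PCollection that carries no row
// data but can be used purely as a "writes are finished" signal.
PCollection<Void> writeDone = myObjects.apply("UpsertMyObjects",
        JdbcIO.<MyObject>write()
                .withDataSourceConfiguration(myDataSourceConfig)
                .withStatement(UPSERT_MY_OBJECT)
                .withPreparedStatementSetter((myObject, ps) -> {
                    ps.setInt(1, myObject.getFieldOne());
                    ps.setString(2, myObject.getFieldTwo());
                    ps.setString(3, myObject.getFieldThree());
                })
                .withResults());

// Hold the original elements until every write has completed, then look up the
// generated IDs with a per-element query (hypothetical table and column names).
PCollection<KV<String, Integer>> idsByKey = myObjects
        .apply(Wait.on(writeDone))
        .apply("ReadBackIds", JdbcIO.<MyObject, KV<String, Integer>>readAll()
                .withDataSourceConfiguration(myDataSourceConfig)
                .withQuery("SELECT FieldTwo, Id FROM [MySchema].[MyObject] WHERE FieldTwo = ?")
                .withParameterSetter((myObject, ps) -> ps.setString(1, myObject.getFieldTwo()))
                .withRowMapper(rs -> KV.of(rs.getString(1), rs.getInt(2)))
                .withCoder(KvCoder.of(StringUtf8Coder.of(), VarIntCoder.of())));

The key point is ordering: because the read-back runs behind Wait.on(writeDone), the stored procedure has already produced the IDs before they are queried.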
I would like to safely drop a Firebird table. I have 3 transactions: one to recreate the table, one to do something with the table (just inserting a single row to keep it simple), and the last one to drop the table.
If all these transactions are executed using a single connection, this works. If I use a different connection for each, then the drop command fails with:
lock conflict on no wait transaction
unsuccessful metadata update
object TABLE "DEMO" is in use
private static void Test() {
    using var conn1 = new FbConnection(ConnectionString);
    using var conn2 = new FbConnection(ConnectionString);
    using var conn3 = new FbConnection(ConnectionString);
    conn1.Open();
    conn2.Open();
    conn3.Open();

    ExecuteTxn(conn1, cmd => {
        cmd.CommandText = "recreate table demo (id int primary key)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn2, cmd => {
        cmd.CommandText = "insert into demo (id) values (1)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn3, cmd => {
        cmd.CommandText = "drop table demo";
        cmd.ExecuteNonQuery();
    });
}

private static void ExecuteTxn(FbConnection conn, Action<FbCommand> todo) {
    using (var txn = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}
I realized that changing the transaction options to
txn = conn.BeginTransaction(new FbTransactionOptions { TransactionBehavior = FbTransactionBehavior.Wait })
seems to help. But I'm not sure whether this is the right thing to do or just a coincidence...
Using Firebird 3.0.6, FirebirdSql.Data.FirebirdClient.dll 7.5.0.0
As far as I understand it, the problem has to do with how Firebird caches certain metadata, which might result in existence locks being retained, and those locks prevent deletion of the object. In addition, it is possible - this is a guess! - that the Firebird ADO.NET provider retains the statement handle with the insert statement prepared, which will also result in an existence lock being retained.
Executing in a WAIT transaction (optionally with a timeout) is considered an appropriate workaround by the Firebird core developers.
For reference, see the following tickets:
CORE-3766 - Transaction can`t change metadata if it is run in no_wait and there is another connect that once had queried these metadata
CORE-6382 - Triggers accessing a table prevent concurrent DDL command from dropping that table
In certain cases, switching from Firebird ClassicServer or Firebird SuperClassic to Firebird SuperServer can also prevent this problem.
However, if you want a more in-depth explanation, it might be worthwhile to ask this question on the firebird-devel mailing list.
How would I go about creating a transaction, inserting a row, committing the transaction and getting the last inserted id? The method should return a Uni<Integer>. I'm new to the Mutiny API; I previously used Vert.x's mechanism of chaining future handlers, so it's a bit tough readjusting to working with Mutiny. I have checked the documentation and think something similar to the following snippet should work, but I'm stumped on how to make it work and return Uni<Integer> from the last query instead of Uni<Void> from the tx.commit():
return this.client.begin()
        .flatMap(tx -> tx
                .preparedQuery("INSERT INTO person (firstname,lastname) VALUES ($1,$2)")
                .execute(Tuple.of(person.getFirstName(), person.getLastName()))
                .onItem().produceUni(id -> tx.query("SELECT LAST_INSERT_ID()"))
                .onItem().produceUni(res -> tx.commit())
                .onFailure().recoverWithUni(ex -> tx.rollback())
        );
Try this:
return client.begin().onItem().produceUni(tx -> tx
        .preparedQuery("INSERT INTO person (firstname,lastname) VALUES ($1,$2)").execute(Tuple.of(person.getFirstName(), person.getLastName()))
        .onItem().produceUni(id -> tx.query("SELECT LAST_INSERT_ID()").execute())
        .onItem().apply(rows -> rows.iterator().next().getInteger(0))
        .onItem().produceUni(item -> tx.commit().on().item().produceUni(v -> Uni.createFrom().item(item)))
        .on().failure().recoverWithUni(throwable -> {
            return tx.rollback().on().failure().recoverWithItem((Void) null)
                    .on().item().produceUni(v -> Uni.createFrom().failure(throwable));
        })
);
A SqlClientHelper is coming to Quarkus in a future version (hopefully 1.6). You will be able to simplify to:
return SqlClientHelper.inTransactionUni(client, tx -> tx
        .preparedQuery("INSERT INTO person (firstname,lastname) VALUES ($1,$2)").execute(Tuple.of(person.getFirstName(), person.getLastName()))
        .onItem().produceUni(id -> tx.query("SELECT LAST_INSERT_ID()").execute())
        .onItem().apply(rows -> rows.iterator().next().getInteger(0))
);
I want to implement skip locking (SKIP LOCKED). I am using Postgres 9.6.17 and the following code:
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "-2")})
@Query("Select d from Demo d where d.id in (?1)")
List<Demo> getElementByIds(List<Long> ids);
I am making the same DB call from 2 services at the same time from the command line (parallel curl requests to both services, each of which makes the DB call). From one service I am passing ids 1...4 and from the other ids 1...7.
But when the first service takes a lock on rows 1...4, the second service has to wait until the first service releases its lock, whereas ideally the second service should return rows 5...7.
From the first service I am calling it like this:
List<Long> ids = new ArrayList<>();
ids.add(1L);
ids.add(2L);
ids.add(3L);
ids.add(4L);
List<Demo> demos = demoRepo.getElementByIds(ids);
try {
    Thread.sleep(500);
} catch (Exception e) {
}
logger.info("current time: " + System.currentTimeMillis());
and from the second service I am calling it like this:
List<Long> ids = new ArrayList<>();
ids.add(1L);
ids.add(2L);
ids.add(3L);
ids.add(4L);
ids.add(5L);
ids.add(6L);
ids.add(7L);
try {
    Thread.sleep(100);
} catch (Exception e) {
}
logger.info("current time: " + System.currentTimeMillis());
List<Demo> demos = demoRepo.getElementByIds(ids);
logger.info("current time: " + System.currentTimeMillis());
But both queries always return all the rows they asked for, after waiting for the other service to release its lock, instead of skipping the locked rows.
The Spring Data JPA version I am using:
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
<version>2.2.5.RELEASE</version>
I have also tried spring.jpa.javax.persistence.lock.timeout=-2 at the application level itself, but that is not working either.
Both approaches seem to behave like plain PESSIMISTIC_WRITE.
Please suggest how I can achieve skip locked functionality.
The queries seem to be correct.
Make sure you are using a recent enough PostgreSQL dialect, one that supports the skip-locked functionality. For the Postgres version you are using, the dialect below should be used:
org.hibernate.dialect.PostgreSQL95Dialect
You can refer to this link for more information.
RAVI SHANKAR's answer is correct. I have tested it and it really works. You need to specify the dialect version.
For example, in Spring Boot:
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL95Dialect
Also, the code will be more readable if you use constants instead of strings:
@QueryHint(name = AvailableSettings.JPA_LOCK_TIMEOUT, value = "" + LockOptions.SKIP_LOCKED)
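Putting both answers together, a minimal sketch of the repository, assuming Hibernate 5.x (where LockOptions.SKIP_LOCKED is the int constant -2, so the concatenation below is still a compile-time constant) and the PostgreSQL95Dialect so that the generated SQL ends in FOR UPDATE SKIP LOCKED:

import java.util.List;
import javax.persistence.LockModeType;
import javax.persistence.QueryHint;
import org.hibernate.LockOptions;
import org.hibernate.cfg.AvailableSettings;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.jpa.repository.QueryHints;

public interface DemoRepository extends JpaRepository<Demo, Long> {

    // PESSIMISTIC_WRITE plus a lock timeout of SKIP_LOCKED (-2) is what makes
    // Hibernate emit "for update skip locked" on dialects that support it.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @QueryHints(@QueryHint(name = AvailableSettings.JPA_LOCK_TIMEOUT,
            value = "" + LockOptions.SKIP_LOCKED))
    @Query("select d from Demo d where d.id in (?1)")
    List<Demo> getElementByIds(List<Long> ids);
}

With this in place, two concurrent calls asking for overlapping id ranges should each receive only the rows that are not already locked by the other transaction.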
We have two different query strategies that we'd ideally like to operate in conjunction on our site without opening redundant connections. One strategy uses the Enterprise Library to pull Database objects and call Execute_____(DbCommand) methods on the Database, without directly selecting any sort of connection. Effectively like this:
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("SomeProc");
using (IDataReader r = db.ExecuteReader(q))
{
    List<RecordType> rv = new List<RecordType>();
    while (r.Read())
    {
        rv.Add(RecordType.CreateFromReader(r));
    }
    return rv;
}
The other, newer strategy uses a library that asks for an IDbConnection, which it Close()s immediately after execution. So, we do something like this:
DbConnection c = DatabaseFactory.CreateDatabase().CreateConnection();
using (QueryBuilder qb = new QueryBuilder(c))
{
    return qb.Find<RecordType>(ConditionCollection);
}
But the connection returned by CreateConnection() isn't the same one used by Database.ExecuteReader(), which is apparently left open between queries. So, when we call a data access method using the new strategy after one using the old strategy inside a TransactionScope, it causes unnecessary promotion to a distributed transaction, and I'm not sure we have the ability to configure around that (we don't have administrative access to the SQL Server).
Before we go down the path of modifying the query-builder library to work with the Enterprise Library's Database objects ... is there a way to retrieve, if it exists, the open connection last used by one of the Database.Execute_______() methods?
Yes, you can get the connection associated with a transaction. Enterprise Library internally manages a collection of transactions and the associated database connections, so if you are in a transaction you can retrieve the connection associated with a Database using the static TransactionScopeConnections.GetConnection method:
using (var scope = new TransactionScope())
{
    IEnumerable<RecordType> records = GetRecordTypes();
    Database db = DatabaseFactory.CreateDatabase();
    DbConnection connection = TransactionScopeConnections.GetConnection(db).Connection;
}

public static IEnumerable<RecordType> GetRecordTypes()
{
    Database db = DatabaseFactory.CreateDatabase();
    DbCommand q = db.GetStoredProcCommand("GetLogEntries");
    using (IDataReader r = db.ExecuteReader(q))
    {
        List<RecordType> rv = new List<RecordType>();
        while (r.Read())
        {
            rv.Add(RecordType.CreateFromReader(r));
        }
        return rv;
    }
}