How do I get an instance of the distributed transaction manager - ScalarDB

I am following this example to test ScalarDB.
https://github.com/indetail-blockchain/getting-started-with-scalardb
The example says:
"Execute a transaction
A DistributedTransaction can be retrieved from the transactionManager. Then use this object to execute the desired operations and eventually commit them."
It is not clear from the example, though, what transactionManager is or how to create a DistributedTransaction from it.
How do I create a DistributedTransaction instance?

Here is the way, following the official getting-started guide:
https://github.com/scalar-labs/scalardb/blob/master/docs/getting-started.md#store--retrieve-data-with-transaction-service
// build a Guice injector with the transaction module, then get the service from it
Injector injector = Guice.createInjector(new TransactionModule(new DatabaseConfig(props)));
TransactionService service = injector.getInstance(TransactionService.class);
// start() begins a new transaction and returns the DistributedTransaction handle
DistributedTransaction tx = service.start();
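Once you have the DistributedTransaction, you execute operations against it and then commit. Here is a minimal sketch of that part; note that the namespace, table, and column names ("sample", "accounts", "user_id", "balance") are made-up placeholders, not taken from the linked example:
// assumed imports: com.scalar.database.api.* and com.scalar.database.io.*
// buffer a write inside the transaction
Put put = new Put(new Key(new TextValue("user_id", "user1")))
    .forNamespace("sample")
    .forTable("accounts")
    .withValue(new IntValue("balance", 100));
tx.put(put);

// read a record inside the same transaction
Get get = new Get(new Key(new TextValue("user_id", "user1")))
    .forNamespace("sample")
    .forTable("accounts");
Optional<Result> result = tx.get(get);

// commit() makes the buffered operations take effect atomically
tx.commit();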

Related

Azure Service Fabric - Clone Actors with state

Is there any way to clone an actor and its state and create the exact same actor, with the same actor Id and its corresponding state in another application in Azure Service Fabric? I've looked at the backup and restore, but it doesn't seem to do what I need.
We have several instances of the same application type running actors in production. We need this functionality for two reasons:
1. We need to combine two of the applications into one, so all of the actors will need to be re-created in their current state, with their current IDs, in the other instance.
2. We would like to be able to clone production into a QA environment, which is on a different Azure server, so we can test upgrades and new code in the exact state production is in.
Any help with this is much appreciated!
I don't think there is built-in support for cloning, but I believe you can implement your own mechanism: iterate over your actors to get their states, then write those states to the other cluster.
To iterate:
var cancellationToken = new CancellationToken();
ContinuationToken continuationToken = null;
var actorProxyFactory = new ActorProxyFactory();
// partitionKey identifies the Int64-range partition to enumerate;
// repeat this loop for every partition of the actor service
var actorServiceProxy = ActorServiceProxy.Create(
    new Uri("fabric:/MyActorApp/MyActorService"), partitionKey);
do
{
    // page through all actors in this partition
    var queryResult = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
    foreach (var item in queryResult.Items)
    {
        var actor = actorProxyFactory.CreateActorProxy<IMyActor>(
            new Uri("fabric:/MyActorApp/MyActorService"), item.ActorId);
        // GetState() is assumed to be a method on your own actor
        // interface that exposes the state you want to copy
        var state = await actor.GetState();
    }
    continuationToken = queryResult.ContinuationToken;
} while (continuationToken != null);
If you have a dynamic list of names for actor state, you can get all of the names using the GetStateNamesAsync method:
IEnumerable<string> stateNames = await StateManager.GetStateNamesAsync();
Hope this helps.
For #1, you need to use a solution like the iteration approach in the answer above, or something similar.
For #2, you can use the existing backup & restore mechanism. The target partition of a service you are restoring doesn't need to be the source partition you created the backup of.

Setting up and accessing Flink Queryable State (NullPointerException)

I am using Flink v1.4.0 and I have set up two distinct jobs. The first is a pipeline that consumes data from a Kafka topic and stores it in a Queryable State (QS). Data is keyed by date. The second submits a query to the QS job and processes the returned data.
Both jobs were working fine with Flink v1.3.2, but with the new update everything has broken. Here is part of the code for the first job:
private void runPipeline() throws Exception {
    StreamExecutionEnvironment env = configurationEnvironment();
    QueryableStateStream<String, DataBucket> dataByDate = env.addSource(sourceDataFromKafka())
        .map(NewDataClass::new)
        .keyBy(data -> data.getDate()) // key each element by its date field
        .asQueryableState("QSName", reduceIntoSingleDataBucket());
}
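For reference, reduceIntoSingleDataBucket() has to supply the descriptor for the reducing state. A minimal sketch of what it might look like; the internal state name and the DataBucket.merge helper are assumptions, not from the original code:
private ReducingStateDescriptor<DataBucket> reduceIntoSingleDataBucket() {
    return new ReducingStateDescriptor<>(
        "QSName-reduce",                  // internal name of the reducing state
        (a, b) -> DataBucket.merge(a, b), // hypothetical merge of two buckets
        TypeInformation.of(new TypeHint<DataBucket>() {}));
}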
and here is the code on the client side:
QueryableStateClient client = new QueryableStateClient("localhost", 6123);
// the state descriptor of the state to be fetched
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}));
JobID jobId = JobID.fromHexString("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
String key = "2017-01-06";
CompletableFuture<ValueState<DataBucket>> resultFuture = client.getKvState(
    jobId,
    "QSName",
    key,
    BasicTypeInfo.STRING_TYPE_INFO,
    descriptor);
try {
    ValueState<DataBucket> valueState = resultFuture.get();
    DataBucket bucket = valueState.value();
    System.out.println(bucket.getLabel());
} catch (IOException | InterruptedException | ExecutionException e) {
    throw new RuntimeException("Unable to query bucket key: " + key, e);
}
I have followed the instructions in the following link:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html
making sure to enable queryable state on my Flink cluster by copying flink-queryable-state-runtime_2.11-1.4.0.jar from the opt/ folder of the Flink distribution to the lib/ folder, and I checked that it runs in the task manager.
I keep getting the following error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:84)
at org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:253)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:210)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:174)
at com.company.dept.query.QuerySubmitter.main(QuerySubmitter.java:37)
Any idea what is happening? I think my requests don't reach the QS at all... I really don't know what, if anything, I should change. Thanks.
So, as it turned out, two things were causing this error. The first was the use of the wrong constructor for creating the descriptor on the client side. Rather than using the one that takes only a name for the QS and a TypeHint, I had to use another one where a TypeSerializer for the value, along with a default value, is provided, as below:
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
    DataBucket.emptyBucket()); // or anything that can be used as a default value
The second was related to the host and port values. The port is different from v1.3.2 and is now 9069 by default, and the host was also different from localhost in my case. You can verify both by checking the logs of any task manager for the line:
Started the Queryable State Proxy Server @ ...
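With those values, creating the client then looks like this (a sketch; "task-manager-host" is a placeholder for whatever host the log line above reports):
QueryableStateClient client = new QueryableStateClient("task-manager-host", 9069); // queryable state proxy port, not the JobManager RPC port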
Finally, in case you are here because you want to allow a port range for the queryable state client proxy, I suggest you follow the respective issue (FLINK-7788): https://issues.apache.org/jira/browse/FLINK-7788.

Set isolation level in EclipseLink

I would like to set the isolation level using EclipseLink.
I tried these two ways to do it:
First, via java.sql.Connection:
mgr = EMF.get().createEntityManager();
tx = mgr.getTransaction();
tx.begin();
java.sql.Connection connection = mgr.unwrap(java.sql.Connection.class);
connection.setTransactionIsolation(java.sql.Connection.TRANSACTION_READ_COMMITTED);
System.out.println("Connection: " + connection.getTransactionIsolation());
// prints TRANSACTION_READ_COMMITTED as expected

org.eclipse.persistence.sessions.DatabaseLogin databaseLogin = new DatabaseLogin();
System.out.println("DatabaseLogin: " + databaseLogin.getTransactionIsolation());
// prints -1, meaning the transaction isolation is not set
Second, via the DatabaseLogin.setTransactionIsolation method:
mgr = EMF.get().createEntityManager();
tx = mgr.getTransaction();
tx.begin();
org.eclipse.persistence.sessions.DatabaseLogin databaseLogin = new DatabaseLogin();
databaseLogin.setTransactionIsolation(DatabaseLogin.TRANSACTION_READ_COMMITTED);
System.out.println("DatabaseLogin: " + databaseLogin.getTransactionIsolation());
// prints TRANSACTION_READ_COMMITTED as expected

java.sql.Connection connection = mgr.unwrap(java.sql.Connection.class);
System.out.println("Connection: " + connection.getTransactionIsolation());
// prints TRANSACTION_REPEATABLE_READ
As you can see, there are inconsistencies between the return values of the getTransactionIsolation() calls. My question is: which transaction isolation level is really set in each case? I know that EclipseLink uses different connections for read and write operations by default, and DatabaseLogin.setTransactionIsolation should set the isolation level for both connections, so why does Connection.getTransactionIsolation still return another isolation level?
I am using an application-scoped EntityManager, JPA 2.0, and EclipseLink 2.5.2.
If there are more preferable ways of setting the transaction isolation, please let me know.
After taking a short break from EclipseLink, I finally found out how to set the transaction isolation level.
As @Chris correctly mentioned in his answer, I need to obtain the DatabaseLogin used by the sessions. After a little research on EclipseLink sessions, I found that I can change the session properties in my own SessionCustomizer; see the code below:
package com.filip.blabla;

import org.eclipse.persistence.config.SessionCustomizer;
import org.eclipse.persistence.sessions.DatabaseLogin;
import org.eclipse.persistence.sessions.Session;

public class DFSessionCustomizer implements SessionCustomizer {
    @Override
    public void customize(Session session) throws Exception {
        // the login object holds the connection settings for the persistence unit
        DatabaseLogin databaseLogin = (DatabaseLogin) session.getDatasourceLogin();
        databaseLogin.setTransactionIsolation(DatabaseLogin.TRANSACTION_READ_COMMITTED);
    }
}
Then register the SessionCustomizer in persistence.xml:
<property name="eclipselink.session.customizer" value="com.filip.blabla.DFSessionCustomizer"/>
The DatabaseLogin class is an internal object that EclipseLink uses to configure how it accesses the database and the settings used to configure those connections. Any changes you make directly to a connection will not be reflected in a DatabaseLogin instance.
Just creating a new DatabaseLogin instance is not going to give you access to the settings being used by the persistence unit. You need to obtain the DatabaseLogin being used by the sessions underneath the EntityManager/EMF.
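At runtime you can reach that login by unwrapping the EclipseLink session from an EntityManager. A minimal sketch, assuming em is an open EntityManager; note that changing isolation here would only affect connections acquired afterwards:
import org.eclipse.persistence.sessions.DatabaseLogin;
import org.eclipse.persistence.sessions.Session;

// unwrap the EclipseLink session that backs this EntityManager
Session session = em.unwrap(Session.class);
DatabaseLogin login = (DatabaseLogin) session.getDatasourceLogin();
System.out.println("PU isolation: " + login.getTransactionIsolation());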

Manage transactions in the business layer

I want to use the TransactionScope class in my business layer to manage database operations in the data access layer.
Here is my sample code. When I execute it, it tries to enable DTC. I want to do the operation without enabling DTC.
I already checked the https://entlib.codeplex.com/discussions/32592 article; it didn't work for me. I have read many articles on this subject, but none of them really covers Enterprise Library, or I missed it.
By the way, I am able to use TransactionScope with the plain .NET SQL client, and it works pretty well.
What should the body of the SampleInsert() method be?
Thanks,
Business layer method:
public void SampleInsert()
{
    using (TransactionScope scope = new TransactionScope())
    {
        Sample1DAL dal1 = new Sample1DAL(null);
        Sample2DAL dal2 = new Sample2DAL(null);
        Sample3DAL dal3 = new Sample3DAL(null);
        dal1.SampleInsert();
        dal2.SampleInsert();
        dal3.SampleInsert();
        scope.Complete();
    }
}
Data access layer method:
// SampleInsert is structurally the same in each of the 3 DALs
public void SampleInsert()
{
    Database database = DatabaseFactory.CreateDatabase(Utility.DATABASE_INFO);
    using (DbConnection conn = database.CreateConnection())
    {
        conn.Open();
        DbCommand cmd = database.GetStoredProcCommand("P_TEST_INS", "some value3");
        // note: ExecuteNonQuery(cmd) opens its own connection internally,
        // separate from the conn opened above
        database.ExecuteNonQuery(cmd);
    }
}
Hi, yes, this will enable DTC because you are creating three DB connections within one TransactionScope. When more than one DB connection is created within the same TransactionScope, the local transaction escalates to a distributed transaction, and hence DTC is enabled to manage distributed transactions. You will have to do it in a way that only one DB connection is created for the entire TransactionScope. I hope this gives you an idea.
After research and watching Query Analyzer, I changed the SampleInsert() body as follows and it worked. The problem was, as ethicallogics mentioned, opening a new connection each time I accessed the database.
public void SampleInsert()
{
    Database database = DatabaseFactory.CreateDatabase(Utility.DATABASE_INFO);
    using (DbCommand cmd = database.GetStoredProcCommand("P_TEST_INS", "some value1"))
    {
        // let Enterprise Library manage the connection itself; within the ambient
        // TransactionScope it reuses a single connection instead of opening new ones
        database.ExecuteNonQuery(cmd);
    }
}

Managing transactions between EntityFramework and EnterpriseLibrary's DatabaseFactory

I'm working with an existing set of code that manages multiple database updates in a single transaction. Here is a simplified example:
Database db = DatabaseFactory.CreateDatabase();
using (DbConnection dbConnection = db.CreateConnection())
{
    dbConnection.Open();
    DbTransaction dbTransaction = dbConnection.BeginTransaction();
    try
    {
        //do work
        dbTransaction.Commit();
    }
    catch (Exception ex)
    {
        dbTransaction.Rollback();
    }
}
I am also using EntityFramework in this same project for new development. Below is a simplified example of the usage of my repository class:
List<ThingViewModel> things = new List<ThingViewModel>();
// populate list of things
IObjectRepository thingRepository = new ThingRepository();
thingRepository.AddThings(things);
thingRepository.Save();
I want the work done in 'AddThings' to happen as part of the transaction in the first block of code.
Is there some clean way of blending my repository pattern into this existing code, or vice versa? I'm not at the point yet where it is feasible to rewrite the existing code to be entirely within EntityFramework, so I'm looking for some interim approach.
I have tried passing the transaction from the older code into the repository, and thus into EntityFramework, but that does not seem to work. I have also tried passing the ObjectContext back out to the older code in order to enlist it in the transaction. Neither approach works.
I cannot believe that I am the first person to encounter this hurdle in migrating existing code to EntityFramework... there must be something I am not considering.
I'll list the things that I have tried below:
using (TransactionScope transactionScope = new TransactionScope())
{
    Database db = DatabaseFactory.CreateDatabase();
    using (DbConnection dbConnection = db.CreateConnection())
    {
        dbConnection.Open();
        DbTransaction dbTransaction = dbConnection.BeginTransaction();
        try
        {
            //do work
            dbTransaction.Commit();
        }
        catch (Exception ex)
        {
            dbTransaction.Rollback();
        }
    }
    Thing thing = new Thing()
    {
        Prop1 = Val1,
        Prop2 = Val2
    };
    ThingObjectContext context = new ThingObjectContext();
    context.Things.AddObject(thing);
    context.SaveChanges();
    transactionScope.Complete();
}
This last example 'works', but it does not function as a single transaction: when the EF insert fails, the EL commands are not rolled back by the TransactionScope. If I don't make those explicit calls to .Commit() and .SaveChanges(), nothing happens. I would really like for this to share the same connection if possible. Two other variations I am currently playing around with are using the same connection between both EF and EL, and using EnlistTransaction on one side or the other. I am definitely trying to keep this from escalating to MSDTC; I don't want the extra overhead associated with that.
Use TransactionScope instead of explicit transaction management. You'll simplify the overall code, and everything you do should automatically detect and use the same ambient transaction.
Is there any way you can call Database.GetOpenConnection() instead of CreateConnection() in your EL code, and pass in the connection that you create inside of the TransactionScope block? I haven't tested this, but that is what I would try first.