Strange behavior for PESSIMISTIC_WRITE? - jpa

I am new to JPA 2.0 locking, so I might be missing something.
Using NetBeans, I debugged a Stateless Session Bean and switched between two threads to examine the concept:
em.lock(entity, LockModeType.PESSIMISTIC_WRITE);
em.persist(entity);
try {
    em.flush();
} catch (Exception e) {
    System.out.println("Already Locked!");
}
I let the first process finish em.flush() (no exceptions). Then I switched to the second process. Surprisingly, it paused on the first line and continued only after the first process had exited the function.
Note: it all worked as expected with LockModeType.OPTIMISTIC.
Is this normal behavior? Am I missing something? Here it seems to behave differently.
Thanks,
Danny

This is perfectly normal behavior. The lock is released on transaction commit/rollback, and that does not happen as a consequence of calling em.flush().
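To illustrate (the Account entity and service below are hypothetical): PESSIMISTIC_WRITE maps to a database row lock (typically SELECT ... FOR UPDATE) that is held until the owning transaction commits or rolls back, so a second transaction that tries to lock the same row blocks rather than failing.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;

@Stateless
public class AccountService {

    @PersistenceContext
    private EntityManager em;

    // The container starts a JTA transaction when this method is invoked.
    public void updateWithLock(long id) {
        Account account = em.find(Account.class, id);
        em.lock(account, LockModeType.PESSIMISTIC_WRITE); // row lock acquired here
        account.setName("updated");
        em.flush(); // SQL is sent to the database, but the row lock is NOT released
        // the lock is released only when the container commits on method exit
    }
}

If you want the second caller to fail fast instead of blocking, the standard javax.persistence.lock.timeout hint can be passed to em.lock() (provider and database support varies), in which case a LockTimeoutException or PessimisticLockException is thrown when the row is already locked.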

Related

Swallowing AssertionExceptions in NUnit

Running the code below in both VS Code and Visual Studio, the test is reported as failed although the exception is swallowed :(
Why does it work this way? How can I make NUnit forget about the thrown exception?
[Test]
public void TestExceptionReporting() {
    try {
        Assert.False(true);
    } catch (AssertionException e) {
        Log.Debug($">>> {e.ToString()}");
    }
}
Why does it work that way...
Because NUnit processes and records the error internally before you can catch the exception. The exception is propagated after processing solely as a way to terminate the test. For that reason, it is no longer a good idea to catch NUnit's own exceptions in a test.
How can you make NUnit forget about the test failure?
This is an XY question. Please explain why you want NUnit to notice a failure and then forget it. There are lots of ways to make NUnit take note of a condition without failing, but to answer, folks need to know what you are actually trying to do.
You could either edit this question (and I can edit my answer) or ask a new question about what you really want to do.

JPA - JTA - two big problems (on .persist() and on .remove()) - MySQLIntegrityConstraintViolationException

First, I would like to apologize in case something that really solves my problem already exists on the site and I could not find it. That does not mean I searched the whole site, although I have spent a lot of time (days) on it. I am also new here (in the sense that I have never written or replied to SO users), and I am sorry for any English mistakes.
I have to say I am new to Java EE.
I am working on WildFly 14, using MySQL.
I am now focusing on a JPA problem.
I have a uniqueness constraint. While running the uniqueness-violation test, the data source throws a MySQLIntegrityConstraintViolationException, and that is fine. The problem is that the persist() method does not let me catch the exception (I even put Throwable in the catch clause, but nothing). I strictly need to catch it in order to run a crucial procedure (which indirectly contains the call to .remove()) in my project's code.
By the way, when I try to write that exception, the IDE does not show the usual window of suggested classes/annotations/etc.; it only suggests creating a class called "MySQLIntegrityConstraintViolationException". Isn't working on WildFly with MySQL enough to get that suggestion?
Not finding a solution, I changed approach: instead of persist(), I used .createNativeQuery() with a String describing the INSERT as its parameter. It works (it signals the uniqueness violation (ok!), skips the rest of the TRY block (ok!) and goes into the CATCH block (ok!)). But, again, the exception/error is not clean.
Also, when the code reaches the block that handles the catch and executes what is inside it (which includes a .remove()), it raises the exception:
"Transaction is required to perform this operation (either use a transaction or extended persistence context)" --> this refers to my entityManager.remove() call.
Now I cannot understand: shouldn't JPA/JTA manage transactions automatically?
Moreover, when I later tried to add entityManager.getTransaction().begin() (and commit()), it complained that I tried to manage transactions manually when I am not allowed to. It seems like an endless loop.
[edit]: I am working in a CMT context, so I am supposed to work with just EntityManager and EntityManagerFactory. I tried entityManager.getTransaction().begin() and entityManager.getTransaction().commit() and it did not work.
[edit']: .getTransaction() (the EntityTransaction object) cannot be used in a CMT context, which is why that did not work.
[edit'']: I have solved the transaction issue by using the transaction management suited to the CMT context: JTA + CMT requires a TRY-CATCH-FINALLY block, with the operation we want to perform on the database in the TRY body and the EntityManager cleanup (em.close()) in the FINALLY body. However, as explained above, I have used em.createNativeQuery(), which, when it fails, throws exceptions that I can catch in my application; the use of .createNativeQuery() is temporary and in my project's code I really need to roll back to the .persist() method, so I need to know what to do in order to catch that MySQLIntegrityConstraintViolationException.
Thanks so much!
IT SEEMS I have solved the problem.
Going back to .persist() (and discarding createNativeQuery()), putting em.flush() JUST AFTER em.persist(my_entity_object) helped: once the uniqueness constraint is violated (see above), the raised exception is now catchable. With a catchable exception, I can do what I described at the beginning of the post.
WARNING: keep in mind that I am new to Java EE/JPA/JTA. I was "lucky": given my lack of knowledge, I added that em.flush() call as a guess. Hence, I cannot explain the behaviour myself; I would appreciate any explanation of what happens here and of how and when the flush() method should be used. A sketch of the resulting pattern is shown below.
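To illustrate the pattern (the bean and entity names here are hypothetical): em.flush() forces the pending INSERT to be executed on the database within the current container-managed transaction, so the unique-key violation surfaces immediately as a catchable PersistenceException (wrapping the MySQLIntegrityConstraintViolationException), instead of only at commit time, after the bean method has already returned. A minimal sketch:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceException;

@Stateless
public class RegistrationService {

    @PersistenceContext
    private EntityManager em;

    public boolean register(UserAccount user) {
        try {
            em.persist(user);   // only schedules the INSERT
            em.flush();         // executes it now, inside this transaction
            return true;
        } catch (PersistenceException e) {
            // The cause chain typically ends in the driver's
            // MySQLIntegrityConstraintViolationException for a unique key.
            // Note: the transaction is usually marked rollback-only at this
            // point, so compensating work (e.g. the .remove()) may need to
            // run in a separate transaction (REQUIRES_NEW).
            return false;
        }
    }
}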
Thanks!

Why do I need to use Curator rather than the ZooKeeper native API as a distributed lock?

Our project depends heavily on distributed locks. I know Curator provides several kinds of locks. My question is: can I just use creating a node as a mutex?
CuratorFramework zkClient = zookeeperConnectionProvider.getZkClientForJobDistributeLock();
try {
    zkClient.create()
            .creatingParentsIfNeeded()
            .withMode(CreateMode.EPHEMERAL)
            .forPath("/" + job.getIdentifier().jobNodeString());
    LOGGER.info(String.format("create node in zookeeper [%s]", job.getIdentifier().jobNodeString()));
} catch (Exception e) {
    LOGGER.info(String.format("create job instance node in zookeeper failed [%s], reason [%s]",
            job.getIdentifier().jobNodeString(),
            e.getClass().getCanonicalName()));
    return NO_WORK;
}
When the first process creates the node successfully, the second process gets a NodeExistsException. If this cannot work as a lock, I want to know the reason.
I think the first objection against doing what you propose is that the code is harder to read/understand; compare it to:
InterProcessSemaphoreMutex lock = new InterProcessSemaphoreMutex(client, path);
lock.acquire();
Another reason is that you usually use locks to block a thread until another one releases the lock so you can write code that looks like this:
//do normal work
...
lock.acquire();
//do critical single threaded work
...
lock.release();
//back to normal work
...
This is by all means possible with your code, but here it is already done for you.
There are a lot more reasons to use an already implemented lock instead of writing your own, but it mostly boils down to: "Why reinvent the wheel?"
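For comparison, here is a minimal sketch of the Curator-based approach (the ZooKeeper address and lock path are placeholders); all the node bookkeeping, retries and blocking semantics are handled by the recipe:

import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class JobLockExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/my-job");
        // Wait up to 10 seconds for the lock instead of failing immediately
        // with NodeExistsException, as the hand-rolled version would.
        if (lock.acquire(10, TimeUnit.SECONDS)) {
            try {
                // critical single-process work
            } finally {
                lock.release();
            }
        }
        client.close();
    }
}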

How to request a replay of an already received FIX message

I have an application that could potentially throw an error on receiving an ExecutionReport (35=8) message.
This error is thrown at the application level and not at the FIX engine level.
The FIX engine records the message as seen and therefore will not send a ResendRequest (35=2). However, the application has not processed it, and I would like to manually trigger re-processing of the missed ExecutionReport.
Forcing a ResendRequest (35=2) does not work, as it requires modifying the expected next sequence number.
I was wondering if FIX supports replaying messages without requiring a sequence number reset?
When processing an execution report, you should not throw any exceptions and expect the FIX library to handle them. You either process the report or you have a system failure (i.e. call abort()). Therefore, if your code that handles the execution report throws an exception and you know how to handle it, then catch it in that very same function, eliminate the cause of the problem and try processing again. For example (pseudo-code):
// This function is called by the FIX library. No exceptions must be thrown
// because the FIX library has no idea what to do with them.
void on_exec_report(const fix::msg &msg)
{
    for (;;) {
        try {
            // Handle the execution report however you want.
            handle_exec_report(msg);
            return; // processed successfully, we are done
        } catch (const try_again_exception &) {
            // Oh, some resource was temporarily unavailable? Try again!
            continue;
        } catch (const std::exception &) {
            // This should never happen, but it did. Call 911.
            abort();
        }
    }
}
Of course, it is possible to make the FIX library issue a re-transmission request and pass you that message again if an exception was thrown. However, it does not make sense: what is the point of asking the sender (over the network, using TCP/IP) to re-send a message that you already have (up your stack :)) and just need to process? Even if it did, what is the guarantee it won't happen again? Re-transmission in this case not only doesn't sound right logically, but the other side (i.e. the exchange) may also call you up and ask you to stop, because you put too much load on their server with unnecessary re-transmits (in real life TCP/IP does not lose messages, and the FIX sequence sync process happens only when connecting, unless of course some non-reliable transport is used, which is theoretically possible but doesn't happen in practice).
When aborting, however, it is the FIX library's responsibility not to increment the RX sequence number unless it knows for sure that the user has processed the message, so that the next time the application starts it actually performs synchronization and receives the missing messages. If QuickFIX is not doing that, then you need to either fix it, take care of it manually (i.e. edit the file where it stores the RX/TX sequence numbers), or use some other library that handles this correctly.
This is the wrong thing to do.
A ResendRequest tells the other side that there was some transmission error. In your case, there wasn't, so you shouldn't do that. You're misusing the protocol to cover your app's mistakes. This is wrong. (Also, as Vlad Lazarenko points out in his answer, if they did resend it, what's to say you won't have the error again?)
If an error occurs in your message handler, then that's your problem, and you need to find it and fix it, or alternately you need to catch your own exception and handle it accordingly.
(Based on past questions, I bet you are storing ExecutionReports to a DB store or something, and you want to use Resends to compensate for DB storage exceptions. Bad idea. You need to come up with your own redundancy solution.)
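As an illustration of such a redundancy solution (a rough sketch assuming QuickFIX/J message classes; the handler, queue and retry policy are hypothetical, not part of any engine API): catch your own failure inside the application callback and park the report for later reprocessing, instead of letting the exception reach the engine, which has already counted the message as delivered.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import quickfix.SessionID;
import quickfix.fix44.ExecutionReport;

public class ExecReportHandler {

    // Reports that failed application-level processing, kept for re-driving.
    private final BlockingQueue<ExecutionReport> deadLetter = new LinkedBlockingQueue<>();

    public void onMessage(ExecutionReport report, SessionID sessionId) {
        try {
            process(report);
        } catch (Exception e) {
            // Do not rethrow: the engine has already advanced the sequence
            // number. Keep our own copy and re-drive it once the fault is fixed.
            deadLetter.offer(report);
        }
    }

    private void process(ExecutionReport report) {
        // application-level handling (DB write, position update, ...)
    }

    // Called by a background task or operator action to retry parked reports.
    public void redriveOne() throws InterruptedException {
        ExecutionReport report = deadLetter.take();
        try {
            process(report);
        } catch (Exception e) {
            deadLetter.offer(report); // still failing, keep it parked
        }
    }
}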

Entity Framework SaveChanges "hangs" the program

My code is pretty simple:
Context.AddObject("EntitiesSetName", newObjectName);
Context.SaveChanges();
It worked fine, but only once, the first time. That time, I interrupted my program with Shift+F5 after SaveChanges() had been stepped over. Since it was a debug run, I manually removed the newly created record from the DB and ran the program again in debug mode. But it does not work anymore; it "hangs" when SaveChanges() is called.
Another strange thing I see:
If, before AddObject() and SaveChanges() are called, I write something like:
var tempResult = (from mydbRecord in Context.EntitiesSetName
                  where mydbRecord.myKey == 123
                  select mydbRecord.myKey).Count();
// 123 is the key value of the record that should be created before the program hangs.
then tempResult will have the value 1.
So it seems that the record was created (when the program hung) and now exists, but when I check the DB manually using other tools, it does not!
What am I doing wrong? Is it some kind of caching issue or something else?
EDIT:
I've found the source of the problem.
It was not an EF problem at all; it is a problem with the tool I use to manage the database manually (Benthic).
My program falls into some kind of deadlock with that tool (when I call SaveChanges()) while the tool is connected to the same DB.
So the problem is in the synchronization area, imho, and my question can be marked as solved.