I'm running a Drools 7.7.0.Final KIE Server on Tomcat, and I am seeing this behavior when launching a container via a RESTful call to the KIE Server:
The container is never created, and the RESTful call hangs indefinitely. When I query the server, I see that the container is stuck with status="Creating".
This doesn't always happen; it seems to depend on the rules. For the most part, my LHS patterns (when clauses) are of the form:
myObject( (field1 != null) && field2 ) ... etc.
...where field2 is a boolean.
The difficulty comes in when I attempt something more complicated, like:
myObject( JsonMapper.truth(propertiesString, "field2") )
...where propertiesString is a string containing JSON, and JsonMapper.truth is a static method that returns a boolean based on the decoded value of field2.
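To give an idea of the kind of helper involved, here is a simplified stand-in for such a JsonMapper.truth method (this is only an illustration, not the original code: a real helper should use a proper JSON library such as Jackson, whereas the naive regex below only handles flat {"field": true/false} pairs):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical reconstruction of the static helper called from the Drools LHS.
public class JsonMapper {
    public static boolean truth(String json, String field) {
        if (json == null || field == null) {
            return false; // never throw from a LHS eval
        }
        // Matches "field": true or "field": false at the top level of flat JSON.
        Pattern p = Pattern.compile("\"" + Pattern.quote(field) + "\"\\s*:\\s*(true|false)");
        Matcher m = p.matcher(json);
        return m.find() && "true".equals(m.group(1));
    }
}
```

One general caution with calling custom Java from a LHS: the method should be deterministic and side-effect-free, and should never throw, since an exception raised during constraint evaluation is hard to diagnose from the engine's behavior alone.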
The odd thing is that I never receive a compilation error, and the behavior changes unpredictably when I remove/add various rules. Sometimes the container is created even when multiple rules using JsonMapper.truth exist in the rules file. There seems to be some subtle interaction between the rules.
My questions are:
1) Is there some danger associated with using a custom Java function like this in the when clause?
2) Is there a way to determine why the container creation is hanging? I am not finding any useful logs; nothing useful seems to be written to the usual Tomcat logs.
3) Has anyone seen this behavior (container creation hanging)?
I had a similar issue, though I thought it was related to enum usage. Switching to version 7.9.0.Final fixed everything.
In an integration test, my @AfterReturning advice is wrongly executed even though I mock the target method to throw a TimeoutException, and the argument passed to the aspect is null.
My advice:
@AfterReturning("execution(* xxxxxx" +
"OrderKafkaProducerService.sendOrderPaidMessage(..)) && " +
"args(order)")
public void orderComplete(CheckoutOrder order) { // order is null when debugging
metricService.orderPaidKafkaSent();
log.trace("Counter inc: order paid kafka"); // this line of log is shown in console
metricService.orderCompleted();
log.trace("Order complete! {}", order.getId()); // this line is not, because NPE
}
And my test:
// mocking
doThrow(new ServiceException(new TimeoutException("A timeout occurred"), FAILED_PRODUCING_ORDER_MESSAGE))
.when(orderKafkaProducerService).sendOrderPaidMessage(any()); // this is where advice is executed, which is wrong
...
// when
(API call with RestAssured, launch a real HTTP call to endpoint; service is called during this process)
// then
verify(orderKafkaProducerService).sendOrderPaidMessage(any(CheckoutOrder.class)); // it should be called
verify(metricService, never()).orderCompleted(); // but we are throwing, not returning, we should not see this advice executed
This test is failing because of an NPE (order is null).
While debugging, I found that the advice is already executed at mocking time, and at that point any() has no value yet (it is null), hence the NPE. But I don't think the advice should execute while mocking. How can I avoid that while testing? This seems absurd to me.
Currently, Spring test support does not explicitly handle the situation in which an injected mock or spy (a proxy subclass created via Mockito) later also becomes an AOP target (i.e. is proxied, and thus subclassed again, via CGLIB).
There are several bug tickets related to this topic for Spring, Spring Boot and Mockito. Nobody has done anything about it yet. I do understand why the Mockito maintainers won't include Spring-specific stuff into their code base, but I do not understand why the Spring people do not improve their testing tools.
Actually when debugging your failing test and inspecting kafkaService, you will find out the following facts:
kafkaService.getClass() is com.example.template.services.KafkaService$MockitoMock$92961867$$EnhancerBySpringCGLIB$$8fc4fe95
kafkaService.getClass().getSuperclass() is com.example.template.services.KafkaService$MockitoMock$92961867
kafkaService.getClass().getSuperclass().getSuperclass() is class com.example.template.services.KafkaService
In other words:
kafkaService is a CGLIB Spring AOP proxy.
The AOP proxy wraps a Mockito spy (probably a ByteBuddy proxy).
The Mockito spy wraps the original object.
Besides, changing the wrapping order to make the Mockito spy the outermost object would not work, because CGLIB deliberately makes its overriding methods final, i.e. you cannot extend and override them again. If Mockito were just as restrictive, the hierarchical wrapping would not work at all.
Anyway, what can you do?
Either you use a sophisticated approach like the one described in this tutorial,
or you go for the cheap solution and explicitly unwrap the AOP proxy via AopTestUtils.getTargetObject(Object). You can call this method safely: if the passed candidate object is not a Spring proxy (internally easy to identify, because it implements the Advised interface, which also gives access to the target object), it simply returns the passed object again.
In your case the latter solution would look like this:
@Test
void shouldCompleteHappyPath() {
// fetch spy bean by unwrapping the AOP proxy, if any
KafkaService kafkaServiceSpy = AopTestUtils.getTargetObject(kafkaService);
// given mocked
doNothing().when(kafkaServiceSpy).send(ArgumentMatchers.any());
// when (real request)
testClient.create().expectStatus().isOk();
// then
verify(kafkaServiceSpy).send(ArgumentMatchers.any());
verify(metricService).incKafka();
}
This has the effect that when(kafkaServiceSpy).send(ArgumentMatchers.any()) no longer triggers the aspect advice, because kafkaServiceSpy is no longer an AOP proxy. The auto-wired bean kafkaService still is, though; thus AOP gets triggered as expected, but no longer unwantedly while recording the mock interaction.
Actually, for the verification you could even use kafkaService instead of the spy, and only unwrap the spy when recording the interaction you want to verify later:
@Test
void shouldCompleteHappyPath() {
// given mocked
doNothing()
.when(
// fetch spy bean by unwrapping the AOP proxy, if any
AopTestUtils.<KafkaService>getTargetObject(kafkaService)
)
.send(ArgumentMatchers.any());
// when (real request)
testClient.create().expectStatus().isOk();
// then
verify(kafkaService).send(ArgumentMatchers.any());
verify(metricService).incKafka();
}
P.S.: Without your MCVE I would never have been able to debug this and find out what on earth was going on. This proves again that asking questions with an MCVE is the best thing you can do for yourself, because it helps you get answers to questions which would otherwise probably remain unanswered.
Update: After I mentioned this problem under the similar closed issue Spring Boot #6871, one of the maintainers himself created Spring Boot #22281, which specifically addresses your problem here. You might want to watch the new issue to find out if/when it gets fixed.
First, I would like to apologize if I have missed existing material on this site that already solves my problem; that does not mean I searched everywhere, although I have spent a lot of time (days) on it. I am also new here (in the sense that I have never written or replied to SO users before), and I am sorry for any English mistakes.
I should say that I am new to Java EE.
I am working on WildFly 14, using MySQL.
I am now focusing on a JPA problem.
I have a uniqueness constraint. While running the uniqueness-violation test, at the data-source level I get a MySQLIntegrityConstraintViolationException, and that's fine. The problem is that the persist() method does not let me catch the exception (I even put Throwable in the catch clause, but nothing). I strictly need to catch it in order to run a crucial procedure in my code (one that indirectly calls .remove()).
By the way, when I type that exception, the IDE does not show the usual window of suggested classes/annotations/etc.; it only suggests creating a new class named MySQLIntegrityConstraintViolationException. Shouldn't working on WildFly with MySQL suffice for the suggestion to appear?
Not finding a solution, I changed approach: instead of persist(), I used createNativeQuery() with a String describing the insertion. It does work: it signals the uniqueness violation (good), skips the rest of the try block (good), and enters the catch block (good). But, again, the exception/error is not clear.
Also, when the code enters the block in charge of handling the catch (which contains a .remove()), it raises the exception:
"Transaction is required to perform this operation (either use a transaction or extended persistence context)", referring to my entityManager.remove() call.
Now I am confused: shouldn't JPA/JTA manage transactions automatically?
Moreover, when I later tried entityManager.getTransaction().begin() (and commit()), it complained that I was trying to manage transactions manually when I am not allowed to. It feels like an endless loop.
[edit]: I am working in a CMT context, so I am supposed to work with just EntityManager and EntityManagerFactory. I tried entityManager.getTransaction().begin() and entityManager.getTransaction().commit(), and it didn't work.
[edit']: .getTransaction() (the EntityTransaction object) cannot be used in a CMT context, which is why that didn't work.
[edit'']: I have solved the transaction issue by using the transaction management suited to the CMT context: JTA + CMT requires managing the work with a try-catch-finally block, putting the database operation in the try body and closing the EntityManager (em.close()) in the finally body. However, as explained above, I used em.createNativeQuery(), which throws exceptions my application can catch when it fails; I really need to roll back to the .persist() method (the use of .createNativeQuery() is temporary), so I need to know what to do in order to catch that MySQLIntegrityConstraintViolationException.
Thanks so much!
IT SEEMS I have solved the problem.
Rolling back to .persist() (discarding createNativeQuery()) and putting em.flush() immediately after em.persist(my_entity_object) helped: once the uniqueness constraint is violated (see above), the raised exception is now catchable, and with a catchable exception I can do as described at the beginning of the post.
WARNING: remember that I am new to Java EE/JPA/JTA. I was "lucky": given my lack of knowledge, I added em.flush() on a guess (I don't know how I thought of it). Hence I cannot explain the behavior; I would appreciate any explanation of what happened, and of how and when flush() should be used.
Thanks!
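For what it's worth, the likely explanation: persist() does not execute the INSERT immediately; the provider queues it and, under CMT, may not send it to the database until the transaction commits, which happens after your method (and its try/catch) has already returned. flush() forces the pending INSERT to run right away, inside the try block, so the constraint violation becomes catchable. A minimal sketch of the pattern (the EntityManager and PersistenceException types below are simplified stand-ins for the real JPA interfaces, so the snippet is self-contained):

```java
// Simplified stand-ins for the JPA types, so this sketch is self-contained.
class PersistenceException extends RuntimeException {
    PersistenceException(String msg) { super(msg); }
}

interface EntityManager {
    void persist(Object entity); // only schedules the INSERT
    void flush();                // forces pending SQL to run now
}

public class PersistFlushSketch {
    // Returns true if the row was written, false on a constraint violation.
    static boolean tryInsert(EntityManager em, Object entity) {
        try {
            em.persist(entity); // no SQL executed here yet
            em.flush();         // INSERT runs here; violations surface as catchable exceptions
            return true;
        } catch (PersistenceException e) {
            // Under CMT without flush(), the INSERT would run at commit time,
            // after this method had returned, and this catch would never fire.
            return false;
        }
    }
}
```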
Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The test assembly (a .dll) is what gets executed. A lot of these tests call an API.
In the problematic method, there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records from that interface and read the last one. The test simply checks, end to end, that writing the last record succeeded; that's why there is both a write and a read.
If we execute the test in Visual Studio, everything works as expected. I also tested it manually via the command line with vstest.console.exe, and the expected results always come out as well.
However, when it comes to the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we printed the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, yet if we check the external interface via a different method, we can confirm the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For tests that pass, none of the Console.WriteLine outputs appear in the logs; they only appear for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE printing those lines! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top behavior we're missing here? It really seems as if the VSTS vstest task executes the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code executes 4th rather than 4th-to-last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Is there anything we're missing here? I'd post source code, but there's a bunch of things I'd need to scrub first.
Turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() on the culprit instead, and we will also go back through the other tests to check for the same problem.
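For anyone hitting the same thing: the pitfall is language-neutral; the assertion can run before the async write has completed unless the test blocks on the task first. A minimal sketch of the corrected pattern, written here in Java for illustration (the names are made up; join() plays the role of C#'s await/.Wait()):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class AwaitSketch {
    static final List<String> records = new CopyOnWriteArrayList<>();

    // Simulates the async "write a record via the API" call from the question.
    static CompletableFuture<Void> writeRecordAsync(String record) {
        return CompletableFuture.runAsync(() -> {
            sleep(50); // pretend network latency
            records.add(record);
        });
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        CompletableFuture<Void> write = writeRecordAsync("order-42");
        // Without this join() -- the analogue of C#'s await / .Wait() -- the
        // check below would race the background write and could run first.
        write.join();
        if (!records.contains("order-42")) throw new AssertionError("record missing");
    }
}
```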
I am trying to deploy a small change to a trigger, and I am getting warnings about insufficient (0%) unit test coverage for another trigger (setTitle, shown below).
There is a test in place for it (see below), but for some reason it is not being taken into account. The test is defined similarly to other tests which run successfully, but in this case the trigger is not invoked, leading to the warnings about insufficient coverage.
Any ideas or suggestions on where to look, and is there any way to get past this?
Trigger Test:
Call_Report__c c = new Call_Report__c(name='test cr', opportunity__c=o.id);
insert c;
Trigger declaration:
trigger setTitle on Call_Report__c (before insert)
Thank you!
I think the best way would be to run the unit test manually in your target org and examine the debug log, and also to check manually from the UI that the functionality still behaves as expected.
Some tips:
(Applicable only when deploying to sandboxes.) As stupid as it sounds: are you sure the trigger is active? There's a checkbox when you edit it from the UI, and a status field in the accompanying metadata XML.
A similar checkbox: is the trigger valid? If it calls a method from a class that was modified in the meantime, you'll have a problem.
Do you have any recently introduced validations on Call_Report__c, or on any prerequisites used in the test (like Opportunity):
fields marked as required in the field definition,
a text field whose size was shortened while you're passing a string that is now too long,
validation rules (not on Call_Report__c, because those are checked later, but on Opportunity etc.).
Can you add some System.debug() calls to the test to make sure the Opportunity you're using is created OK? Also: sometimes developers are too VF-centered and don't throw exceptions but swallow them and set VF error messages instead, so check ApexPages.hasMessages() too.
(More and more stupid stuff at this point. ;)) Are the class and method marked as isTest / testMethod? Is that the only trigger on the object? If there are more before insert triggers, you can't guarantee their order; maybe something fails in one of the others?
...
With regard to potential runtime failures, such as failed database queries, it seems that one must use some form of Either[String, Option[T]] in order to accurately capture the following outcomes:
Some (record(s) found)
None (no record(s) found)
SQLException
Option simply does not have enough options.
I guess I need to dive into Scalaz, but for now it's straight Either, unless I'm missing something in the above.
I have boxed myself into a corner with my DAO implementation by only employing Either for write operations, but I now see that some Either writes depend on Option reads (e.g. checking whether an email exists on new user signup), which is a major gamble to take.
Before I go all-in on Either, does anyone have alternative solutions for handling the runtime trifecta of success/failure/exception?
Try Box from the fantastic Lift framework; it provides exactly what you want.
See this wiki (and the links at the top) for details. Fortunately, the Lift project is well modularized: the only dependency needed to use Box is net.liftweb % lift-common.
Use Option[T] for the "record(s) found" and "no record(s) found" cases, and throw an exception in the case of an SQLException.
Just wrap the exception inside your own exception type, like PersistenceException, so that you don't have a leaky abstraction.
We do it like this because we can't, and don't want to, recover from unexpected database exceptions. The exception gets caught at the top level, and our web service returns a 500 Internal Server Error in such cases.
In cases where we want to recover, we use Validation from Scalaz, which is much like Lift's Box.
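The wrap-and-rethrow idea above is language-neutral; here is a minimal sketch of it in Java for illustration (the PersistenceException, findEmail, and queryEmail names are made up, and queryEmail stands in for the real JDBC query):

```java
import java.sql.SQLException;
import java.util.Optional;

// Illustrative unchecked wrapper so callers aren't exposed to SQLException.
class PersistenceException extends RuntimeException {
    PersistenceException(Throwable cause) { super(cause); }
}

public class UserDao {
    // "found" -> Optional.of(email), "not found" -> Optional.empty(),
    // SQL failure -> PersistenceException (handled at the top level as a 500).
    static Optional<String> findEmail(String user) {
        try {
            return queryEmail(user);
        } catch (SQLException e) {
            throw new PersistenceException(e); // no leaky abstraction
        }
    }

    // Stand-in for the real JDBC query.
    static Optional<String> queryEmail(String user) throws SQLException {
        if (user == null) throw new SQLException("connection lost");
        return "bob".equals(user) ? Optional.of("bob@example.com") : Optional.empty();
    }
}
```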
Here's my revised approach:
Preserve Either-returning query write operations (useful in transactional blocks where we want to roll back on a Left outcome in a for comprehension).
For Option-returning query reads, however, rather than swallowing the exception with None (and logging it), I have created a 500 error screen and let the exception bubble up.
Why not just work with an Either result type by default for runtime failures like query exceptions? Option[T] reads are a bit more convenient to work with than Either[Why-Fail, Option[T]], which you have to fold/map through to get at T. Leaving Either to write operations simplifies things (all the more so given that's how the application is currently set up: no refactoring required ;-)).
The only other change required is for AJAX requests. Rather than displaying the entire 500 error page response in the AJAX status div container, we check the status and display a 500 error message accordingly.
if (data.status == 500)
  $('#status > div').html("an error occurred, please try again");
We could probably do an isAjax check server-side before sending the response, in which case we could send back only the status + message rather than the error page itself.