Unexpected behavior from Drools' collectSet and difficulty effecting undo moves - drools

What I actually wanted to do is to have a Drools rule like so:
rule "globalRequiredPredecessorAfterMe"
when
$rpAll: Set(size>1) from accumulate (
Customer(vehicle!= null, vehicle.vehicleTyp != VehicleTyp.DUMMY, $rpAfterMe: requiredPredecessorsAfterMe);
collectSet($rpAfterMe)
)
then
scoreHolder.addMediumConstraintMatch(kcontext, - $rpAll.size()-1);
end
Unfortunately, on the second move of the Construction Heuristic (CH), Drools rewards me with a
Exception in thread "main" java.lang.RuntimeException: java.lang.NullPointerException
at org.drools.core.rule.SingleAccumulate.reverse(SingleAccumulate.java:124)
...
Is this a feature?
I swallowed my pride and wrote a fantastically complex listener which is supposed to do what the Drools rule above did. With an assertionScoreDirectorFactory (an EasyScoreCalculator) I get score corruption. On closer inspection, I get a corruption in one of two cases:
My planning entity's previousXXX is set to exactly the same value it already has
An undo move is performed
To investigate, I set the solution to the state just before the score corruption occurs, creating ChangeMoves the way OptaPlanner 7.4.1.Final's SolutionBusiness class does. (On a side note, it would be really helpful if one could also set a MoveCountLimit in the SolverConfig.)
So I create the scoreDirector as follows
SolverFactory<MySolution> solverFactory = SolverFactory.createFromXmlResource(
SolverConfigXML);
solver = solverFactory.buildSolver();
ScoreDirectorFactory<MySolution> scoreDirectorFactory = solver.getScoreDirectorFactory();
scoreDirector = scoreDirectorFactory.buildScoreDirector();
scoreDirector.setWorkingSolution(unsolvedSolution);
within a JUnit 5 test and then perform several ChangeMoves to mimic the solution state just before the offending move occurs. However, when I perform the offending move from bullet 1 above,
// customer SUK0002030's previousVehicleOrCustomer is DUMMY_3
cm = createChangeMove(nameToCustomerMap.get("SUK0002030"), "previousVehicleOrCustomer", nameToVehicleMap.get("DUMMY_3"));
cm.doMove(scoreDirector);
I get a
java.lang.IllegalStateException: The entity (SUK0002030) has a variable (previousVehicleOrCustomer) with value (DUMMY_3) which has a sourceVariableName variable (nextCustomer) with a value (null) which is not that entity.
Verify the consistency of your input problem for that sourceVariableName variable.
at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.retract(SingletonInverseVariableListener.java:87)
...
When I set the previousVehicleOrCustomer to null, and then back to DUMMY_3, everything is fine and no score corruption occurs.
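For completeness, that two-step workaround looks roughly like this (a hedged sketch: beforeVariableChanged/afterVariableChanged/triggerVariableListeners are the standard ScoreDirector notification calls, while Customer and setPreviousVehicleOrCustomer are assumed names from my domain):
// Hedged sketch of the "set to null, then back to DUMMY_3" workaround,
// notifying the score director around each write of the planning variable.
Customer customer = nameToCustomerMap.get("SUK0002030");

// Step 1: detach the customer from its current chain position.
scoreDirector.beforeVariableChanged(customer, "previousVehicleOrCustomer");
customer.setPreviousVehicleOrCustomer(null); // assumed setter of the planning variable
scoreDirector.afterVariableChanged(customer, "previousVehicleOrCustomer");
scoreDirector.triggerVariableListeners();

// Step 2: re-attach it to DUMMY_3; no corruption is reported this way.
scoreDirector.beforeVariableChanged(customer, "previousVehicleOrCustomer");
customer.setPreviousVehicleOrCustomer(nameToVehicleMap.get("DUMMY_3"));
scoreDirector.afterVariableChanged(customer, "previousVehicleOrCustomer");
scoreDirector.triggerVariableListeners();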
Similarly, when attempting to create an (offending) undo move of the previous move cm, as in bullet 2 above, like so
cm.createUndoMove(scoreDirector).doMove(scoreDirector)
I get the same message:
java.lang.IllegalStateException: The entity (SUK0002014) has a variable (previousVehicleOrCustomer) with value (Vehicle_0) which has a sourceVariableName variable (nextCustomer) with a value (null) which is not that entity.
Verify the consistency of your input problem for that sourceVariableName variable.
at org.optaplanner.core.impl.domain.variable.inverserelation.SingletonInverseVariableListener.retract(SingletonInverseVariableListener.java:87)
...
Of course, if I manually create the UndoMove (by simply setting the previousVehicleOrCustomer back to null), everything is fine.
These moves really should be possible and I'm really curious to find out what's wrong.

Related

How to debug further this dropped record in apache beam?

I am seeing intermittently dropped records (only for error messages, though, not for success ones). We have a test case that intermittently fails/passes because of a lost record. We are using "org.apache.beam.sdk.testing.TestPipeline.java" in the test case. This is the relevant setup code where I have tracked the dropped record to ....
PCollectionTuple processed = records
.apply("Process RosterRecord", ParDo.of(new ProcessRosterRecordFn(factory))
.withOutputTags(TupleTags.OUTPUT_INTEGER, TupleTagList.of(TupleTags.FAILURE))
);
errors = errors.and(processed.get(TupleTags.FAILURE));
PCollection<OrderlyBeamDto<Integer>> validCounts = processed.get(TupleTags.OUTPUT_INTEGER);
PCollection<OrderlyBeamDto<Integer>> errorCounts = errors
.apply("Flatten Roster File Error Count", Flatten.pCollections())
.apply("Publish Errors", ParDo.of(new ErrorPublisherFn(factory)));
The relevant code in ProcessRosterRecordFn.java is this
if(dto.hasValidationErrors()) {
RosterIngestError error = new RosterIngestError(record.getRowNumber(), record.toTitleValue());
error.getValidationErrors().addAll(dto.getValidationErrors());
error.getOldValidationErrors().addAll(dto.getOldValidationErrors());
log.info("Tagging record row number="+record.getRowNumber());
c.output(TupleTags.FAILURE, new OrderlyBeamDto<>(error));
return;
}
I see the "Tagging record row number=" log for both of the 2 rows that fail, including the lost one. After that, however, inside the first line of ErrorPublisherFn.java, we log immediately after receiving each message. We only receive 1 of the 2 rows SOMETIMES. When we receive both, the test passes. The test is very flaky in this regard.
Apache Beam is really annoying in its naming of threads (they are all the same name), so I added the thread hashcode to the logback pattern to get more insight, but I don't see anything conclusive, and the ErrorPublisherFn could publish #4 on any thread anyway.
Ok, so now the big question: How to insert more things to figure out why this is being dropped INTERMITTENTLY?
Do I have to debug apache beam itself? Can I insert other functions or make changes to figure out why this error is 'sometimes' lost on some test runs and not others?
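For what it's worth, this is the kind of instrumentation I'm thinking of adding (a hedged sketch; TupleTags, OrderlyBeamDto and ErrorPublisherFn are the types from the snippets above, while the expected count of 2 and the new stage names are assumptions):
// 1) Assert on the failure branch itself, before the flatten, inside the TestPipeline.
PCollection<OrderlyBeamDto<Integer>> failures = processed.get(TupleTags.FAILURE);
PAssert.that(failures).satisfies(rows -> {
    int count = 0;
    for (OrderlyBeamDto<Integer> row : rows) {
        count++;
    }
    org.junit.Assert.assertEquals(2, count); // both failing rows should show up here
    return null;
});

// 2) A pass-through DoFn that logs and re-emits every element it sees; declared as a
//    static nested (or top-level) class so it serializes cleanly.
static class CountingFn extends DoFn<OrderlyBeamDto<Integer>, OrderlyBeamDto<Integer>> {
    private static final org.slf4j.Logger LOG = org.slf4j.LoggerFactory.getLogger(CountingFn.class);
    private final String label;
    CountingFn(String label) { this.label = label; }

    @ProcessElement
    public void processElement(ProcessContext c) {
        LOG.info("{} saw element={}", label, c.element());
        c.output(c.element());
    }
}

// 3) Wedged between the flatten and the publisher to see where the element disappears.
PCollection<OrderlyBeamDto<Integer>> errorCounts = errors
    .apply("Flatten Roster File Error Count", Flatten.pCollections())
    .apply("Count Before Publish", ParDo.of(new CountingFn("before-publish")))
    .apply("Publish Errors", ParDo.of(new ErrorPublisherFn(factory)));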
EDIT: Thankfully, this set of tests is not testing errors upstream, so the line "errors = errors.and(processed.get(TupleTags.FAILURE));" can be removed, which forces me to also remove ".apply("Flatten Roster File Error Count", Flatten.pCollections())". With those 2 lines removed, the issue goes away for 10 test runs in a row (i.e. I can't completely say it is gone, given how flaky this is). Are we doing something wrong in the join and flattening? I checked the Error structure and rowNumber is part of equals and hashCode, so there should be no duplicates, and I am not sure why it would fail intermittently if there were duplicate objects either.
What more can be done to debug here and figure out why this join is not working in the TestPipeline?
How can I get insight into the flatten and join so I can debug why we are losing an event, and why we only 'sometimes' lose it?
Is this a windowing issue? Even though our job started with a file to read in and process, we wanted a constant Dataflow stream available, as Google kept running into limits, but perhaps this was the wrong decision?

javax.jcr.InvalidItemStateException: Item cannot be saved

I am getting the following exception in a single-box CQ5 author environment.
javax.jcr.InvalidItemStateException: Item cannot be saved
because node property has been modified externally
More exception details:
Caused by: javax.jcr.InvalidItemStateException: Unable to update a stale item: item.save()
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:262)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.ItemImpl.perform(ItemImpl.java:91)
at org.apache.jackrabbit.core.ItemImpl.save(ItemImpl.java:329)
at org.apache.jackrabbit.core.session.SessionSaveOperation.perform(SessionSaveOperation.java:65)
at org.apache.jackrabbit.core.session.SessionState.perform(SessionState.java:216)
at org.apache.jackrabbit.core.SessionImpl.perform(SessionImpl.java:361)
at org.apache.jackrabbit.core.SessionImpl.save(SessionImpl.java:812)
at com.day.crx.core.CRXSessionImpl.save(CRXSessionImpl.java:142)
at org.apache.sling.jcr.resource.internal.helper.jcr.JcrResourceProvider.commit(JcrResourceProvider.java:511)
... 215 more
Caused by: org.apache.jackrabbit.core.state.StaleItemStateException: 3bec1cb7-9276-4bed-a24e-0f41bb3cf5b7/{}ssn has been modified externally
at org.apache.jackrabbit.core.state.SharedItemStateManager$Update.begin(SharedItemStateManager.java:679)
at org.apache.jackrabbit.core.state.SharedItemStateManager.beginUpdate(SharedItemStateManager.java:1507)
at org.apache.jackrabbit.core.state.SharedItemStateManager.update(SharedItemStateManager.java:1537)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:400)
at org.apache.jackrabbit.core.state.XAItemStateManager.update(XAItemStateManager.java:354)
at org.apache.jackrabbit.core.state.LocalItemStateManager.update(LocalItemStateManager.java:375)
at org.apache.jackrabbit.core.state.SessionItemStateManager.update(SessionItemStateManager.java:275)
at org.apache.jackrabbit.core.ItemSaveOperation.perform(ItemSaveOperation.java:258)
Here is the code sample:
adminResourceResolver = resourceResolverFactory.getAdministrativeResourceResolver(null);
Resource fundPageResource = adminResourceResolver.getResource(page.getPath() + "/jcr:content");
ModifiableValueMap homePageResourceProperties = fundPageResource.adaptTo(ModifiableValueMap.class);
homePageResourceProperties.put("ssn", person.getSsn());
adminResourceResolver.commit();
Any ideas? It is quite possible that multiple threads hit this code, as multiple authors on multiple pages call it from an authored component.
Thank you,
Sri
This is an error you see often in CQ 5.5 (and it lessens with each version upwards). The root cause of this issue is that multiple processes/services are modifying the same resource in roughly the same timespan (usually using different sessions, sometimes even with different users).
A small example to demonstrate, perhaps. Sessions A and B both have a reference to Resource X. Session A modifies some properties on X, saves and commits, and is destroyed. This all goes smoothly. Session B still has a snapshot of the situation before A made its modifications; session B makes modifications and all seems well UNTIL it tries to save. At this point, session B detects that it can't commit its changes because it doesn't have the latest node state: some other session has made changes to the same node. In essence, the current node state conflicts with the modifications session A has made, and a stale-item exception is thrown. The reason for this exception is that the API doesn't know whether you want to keep the changes made by A, keep the changes made by the current session and discard the changes made by A, or merge them.
This error happens often with long-running sessions and with workflow/listener combinations. The recommendation is therefore to keep sessions as short as possible, to prevent this kind of conflict as much as possible.
One way to deal with this is to call session.refresh(keepChangesBoolean) before calling .save(). This instructs the current session to check for updates made by other sessions and deal with them according to the boolean flag you pass. This is not a guarantee, however: it's still possible that between your refresh and your save call, yet another session has done the same. It only lowers the odds of this exception occurring.
Another way to deal with this is to retry again from scratch.
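A rough sketch of both suggestions combined (hedged: only session.refresh(boolean), session.save() and javax.jcr.InvalidItemStateException are standard JCR API here; the retry count, the property write and the assumption that the enclosing method declares throws RepositoryException are mine):
// Refresh-then-save with a small retry loop.
int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
        Node content = session.getNode(page.getPath() + "/jcr:content");
        content.setProperty("ssn", person.getSsn());
        // Pull in changes made by other sessions, keeping our own pending changes.
        session.refresh(true);
        session.save();
        break; // saved successfully
    } catch (InvalidItemStateException stale) {
        // Another session won the race: discard our stale state and start over.
        session.refresh(false);
        if (attempt == maxAttempts) {
            throw stale;
        }
    }
}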

GWTP Invalid attempt to reveal errorplace, but then works normally

I have a couple of places set up, and they work correctly, except for a delay caused by this issue. They're using nested presenters.
For one place, it appears that any repeat attempt to load it causes an infinite loop of reveal error / unauthorized place (no idea why, there is no gatekeeper set), but it then loads the page correctly. The issue I have with it is the delay and the unnecessary log spam it causes: it loads the page correctly, so why can't it do that without going through the loop first? Does anyone have any ideas?
-- UPDATE --
I am using GWTP 1.4 with GWT 2.7.0, but the project was first created using GWTP 0.6 or maybe earlier. We've updated deprecated code as we've upgraded, but I know there are anachronisms left.
I tried switching out our ClientPlaceManager for the default one, bound the ErrorPlace and UnauthorizedPlace to our home page, and removed its gatekeeper, but it still tries to go to the error place (I overrode the revealErrorPlace method and noticed it's throwing the error for a valid token that had been loaded at least once already that session). For one page in particular, none of the presenter lifecycle phases fire, though the presenter is visible (only breaking in Firefox, I think). I really don't understand it.
-- UPDATE 2 --
I've removed gatekeepers (even specifying @NoGatekeeper), ensured that the error / unauthorized places have @NoGatekeeper and exist, overrode revealPlace(request, updateUrl) to output results, and added a try/catch - and it does the exact same thing: an infinite loop, but everything is accessible. My debug output even shows it attempting to reveal the error place, but it never does, it just errors out.
This is frustrating to no end.
Stacktrace:
SEVERE: Exception caught: Encountered repeated errors resulting in an infinite
loop. Make sure all users have access to the pages revealed by revealErrorPlace
and revealUnauthorizedPlace. (Note that the default implementations call
revealDefaultPlace)
com.google.gwt.event.shared.UmbrellaException: Exception caught:
Encountered repeated errors resulting in an infinite loop. Make sure all users
have access to the pages revealed by revealErrorPlace and
revealUnauthorizedPlace. (Note that the default implementations call
revealDefaultPlace)
at Unknown.fillInStackTrace_0_g$(student-0.js#36:10580)
at Unknown.Throwable_3_g$(student-0.js#8:10535)
at Unknown.Exception_3_g$(student-0.js#18:10678)
at Unknown.RuntimeException_3_g$(student-0.js#18:61481)
at Unknown.UmbrellaException_3_g$(student-0.js#25:133542)
at Unknown.UmbrellaException_5_g$(student-0.js#26:133603)
at Unknown.fireEvent_7_g$(student-0.js#13:133134)
at Unknown.fireEvent_12_g$(student-0.js#22:154354)
at Unknown.fire_8_g$(student-0.js#17:132936)
at Unknown.fireValueChangedEvent_0_g$(student-0.js#3:154358)
at Unknown.onHashChanged_0_g$(student-0.js#29:154297)
at Unknown.apply_0_g$(student-0.js#28:109006)
at Unknown.entry0_0_g$(student-0.js#16:109062)
at Unknown.anonymous(student-0.js#14:109042)
Caused by: java.lang.RuntimeException: Encountered repeated errors resulting in
an infinite loop. Make sure all users have access to the pages revealed by
revealErrorPlace and revealUnauthorizedPlace. (Note that the default
implementations call revealDefaultPlace)
at Unknown.fillInStackTrace_0_g$(student-0.js#36:10580)
at Unknown.Throwable_2_g$(student-0.js#8:10526)
at Unknown.Exception_2_g$(student-0.js#18:10672)
at Unknown.RuntimeException_2_g$(student-0.js#18:61475)
at Unknown.startError_0_g$(student-0.js#11:92009)
at Unknown.error_2_g$(student-0.js#8:91772)
at Unknown.doRevealPlace_0_g$(student-0.js#10:91762)
at Unknown.revealPlace_1_g$(student-0.js#8:91921)
at Unknown.revealPlace_0_g$(student-0.js#8:91908)
at Unknown.revealErrorPlace_1_g$(student-0.js#8:92109)
at Unknown.error_2_g$(student-0.js#8:91773)
at Unknown.doRevealPlace_0_g$(student-0.js#10:91762)
at Unknown.handleTokenChange_0_g$(student-0.js#12:91848)
at Unknown.onValueChange_4_g$(student-0.js#8:91888)
at Unknown.dispatch_87_g$(student-0.js#16:132968)
at Unknown.dispatch_88_g$(student-0.js#8:132972)
at Unknown.dispatch_0_g$(student-0.js#8:49973)
at Unknown.dispatchEvent_2_g$(student-0.js#14:133006)
at Unknown.doFire_0_g$(student-0.js#9:133250)
at Unknown.fireEvent_8_g$(student-0.js#8:133323)
at Unknown.fireEvent_7_g$(student-0.js#25:133128)
at Unknown.fireEvent_12_g$(student-0.js#22:154354)
at Unknown.fire_8_g$(student-0.js#17:132936)
at Unknown.fireValueChangedEvent_0_g$(student-0.js#3:154358)
at Unknown.onHashChanged_0_g$(student-0.js#29:154297)
at Unknown.apply_0_g$(student-0.js#28:109006)
at Unknown.entry0_0_g$(student-0.js#16:109062)
at Unknown.anonymous(student-0.js#14:109042)
If you're using the DefaultPlaceManager, make sure you have bound DefaultPlace, ErrorPlace and UnauthorizedPlace to Presenter name tokens in your Gin module.
From DefaultPlaceManager's javadoc (http://arcbees.github.io/GWTP/javadoc/apidocs/com/gwtplatform/mvp/client/proxy/DefaultPlaceManager.html):
Important! If you use this class, don't forget to bind DefaultPlace,
ErrorPlace and UnauthorizedPlace to Presenter name tokens in your Gin
module.
Note: The default, error and unauthorized places are revealed without
updating the browser's URL (hence the false value passed in
revealPlace). This will avoid stepping into an infinite navigation
loop if the user navigates back (using the browser's back button).
Here's an example of the infinite navigation loop that we want to avoid:
An unauthenticated user hits #admin (a place reserved to authenticated admins). The #unauthorized place is revealed, and the browser's URL is updated to #unauthorized. The user clicks the back button in his browser, lands in #admin, then #unauthorized, then #admin, and so on.
Also, from https://github.com/ArcBees/GWTP/issues/296:
Verify that the interface of the Proxy in your Presenter inherits from ProxyPlace.
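For reference, the Gin bindings that the javadoc is talking about look roughly like this (a hedged sketch; NameTokens.home is a placeholder for your own name token, chosen here because the question mentions pointing the error/unauthorized places at the home page):
import com.google.inject.Singleton;
import com.gwtplatform.mvp.client.annotations.DefaultPlace;
import com.gwtplatform.mvp.client.annotations.ErrorPlace;
import com.gwtplatform.mvp.client.annotations.UnauthorizedPlace;
import com.gwtplatform.mvp.client.gin.AbstractPresenterModule;
import com.gwtplatform.mvp.client.proxy.DefaultPlaceManager;
import com.gwtplatform.mvp.client.proxy.PlaceManager;

public class ClientModule extends AbstractPresenterModule {
    @Override
    protected void configure() {
        // DefaultPlaceManager only works if all three place constants below are bound.
        bind(PlaceManager.class).to(DefaultPlaceManager.class).in(Singleton.class);
        bindConstant().annotatedWith(DefaultPlace.class).to(NameTokens.home);       // placeholder token
        bindConstant().annotatedWith(ErrorPlace.class).to(NameTokens.home);         // placeholder token
        bindConstant().annotatedWith(UnauthorizedPlace.class).to(NameTokens.home);  // placeholder token
    }
}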

Calling a method from ABL code not working

When I create a new quote from Epicor I would like to add an item from the parts form automatically.
I am trying to do this using the following ABL code which runs when 'GetNewQuoteHed' is called:
run Update.
run GetNewQuoteDtl.
run ChangePartNumMaster("Rod Tube").
ttQuoteDtl.OrderQty = 5.
run Update.
I am getting the error:
Index -1 is either negative or above rows count.
This error occurs for each line in my ABL code.
What am I doing wrong?
That's not the proper format for a 4GL error message (nor is it at all familiar) so I'd say it is an Epicor application message. Epicor support is probably your best bet. However... Just guessing but it sounds like you might need to somehow initialize the thing that you're updating.
I agree with @Tom, but I would also say: try to isolate the error and see where it is raised. As soon as you find the point where the error is actually raised, it is normally much easier to figure out exactly what is going wrong and how to solve it.
When working between a 0-based and a 1-based system there can be issues with the first or last entry, depending on which way you're moving, since a 0-based index starts at 0 and ends at n-1 whereas a 1-based one starts at 1 and ends at n.
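To illustrate that off-by-one hazard (a hedged, Epicor-agnostic example, in Java only because it's compact): an index of -1 is exactly what you get when a 0-based row number is pushed through a conversion that assumes 1-based input.
// rowNumber as a 1-based system (a grid row, for example) would report it.
int rowNumber = 1;
String[] rows = {"first", "second", "third"};

// Correct translation to a 0-based array index: subtract 1.
String row = rows[rowNumber - 1]; // "first"

// If the source is actually 0-based and you still subtract 1, row 0 becomes index -1,
// which matches an error like "Index -1 is either negative or above rows count".
int zeroBasedRowNumber = 0;
int brokenIndex = zeroBasedRowNumber - 1; // -1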

Core Data relationship not saved to store

I've got this command-line app that iterates through CSV files to create a Core Data SQLite store. At some point I'm building these SPStop objects, which have routes and schedules to-many relationships:
SPRoute *route = (SPRoute*)[self.idTransformer objectForOldID:routeID ofType:@"routes" onContext:self.coreDataController.managedObjectContext];
SPStop *stopObject = (SPStop*)[self.idTransformer objectForOldID:stopID ofType:@"stops" onContext:self.coreDataController.managedObjectContext];
[stopObject addSchedulesObject:scheduleObject];
[stopObject addRoutesObject:route];
[self.coreDataController saveMOC];
If I log my stopObject object (before or after saving; same result), I get the following:
latitude = "45.50909";
longitude = "-73.80914";
name = "Roxboro-Pierrefonds";
routes = (
"0x10480b1b0 <x-coredata://A7B68C47-3F73-4B7E-9971-2B2CC42DB56E/SPRoute/p2>"
);
schedules = (
"0x104833c60 <x-coredata:///SPSchedule/tB5BCE5DC-1B08-4D11-BCBB-82CD9AC42AFF131>"
);
Notice how the routes and schedules object URL formats differ? This must be for a reason, because further down the road when I use the sqlite store and print the same stopObject, my routes set is empty, but the schedules one isn't.
I realize this is very little debugging information but maybe the different URL formats rings a bell for someone? What could I be doing wrong that would cause this?
EDIT: it seems that one SPRoute object can only be assigned to one SPStop at a time. I inserted breakpoints at the end of the iteration and had a look at the SQLite store every time, and I definitely see that as soon as an SPRoute object (that had already been assigned to a previous stop.routes) is assigned to a new SPStop, the previous stop.routes set gets emptied. How can this be?
Well, it turns out we had disabled Xcode's inverse-relationship warning, which clearly states:
SPStop.routes does not have an inverse; this is an advanced
setting (no object can be in multiple destinations for a specific
relationship)
Which was precisely our issue. We had ditched inverse relationships because Apple states that they're only good for "data integrity". Our store is read-only so we figured we didn't really need them. We learn now that inverse relationships are a little more than just for "data integrity" :P