Kie Server Guided Decision table rules not fired from REST API

I have a small project created in Drools Workbench and deployed on a KIE server. Using the REST API I am able to insert facts, and rules fire appropriately. However, the rules included in guided decision tables do not fire. Here is an example of a request I would send to the KIE server:
<batch-execution lookup="defaultKieSession">
  <insert out-identifier="applicant" return-object="true" entry-point="DEFAULT">
    <models.Applicant>
      <timeEmployed>35</timeEmployed>
      <employmentStatus>Contract</employmentStatus>
      <violations>[]</violations>
    </models.Applicant>
  </insert>
  <fire-all-rules/>
</batch-execution>
All rules that this data should trigger are fired, except for those included in the Decision table.
When I run a test scenario with the same data, all rules, including the decision table's rules, are fired correctly:
The problem seems to be related to the use of the REST API. Any ideas as to what I am doing wrong?
Here is the Table in question:
Violation simply calls a method that appends an error to the violations array.
Inside kmodule.xml I have:
<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
Clarification
Just to be clear, my requests do fire rules from guided DRLs; my issue is only with the rules in the guided decision table. For example:
Given the rule:
If I send this request:
1994-04-15 11:03:44-0000
1970-01-13 16:19:41-0024
Contract
35
[]
This is a fragment of the response:
This data should also match the rule in the GDST; however, it is not fired.

While we can't see the condition operators in the table, my guess is that the criteria don't match any of the decision table rows/rules.
Mapping the input data to the decision table, we can see that:
1. Employment Status is Contract, so that matches row 1. In rows 3-11 it is ignored, so matching depends on the remaining columns.
2. Employment Duration is probably timeEmployed, and it does not match any row by equality. If the column operator is >, it matches rows 2-11; if it is <, it matches row 1.
3. Job Category is ignored for rows 1 and 2, and has no apparent matching value in the input, so rows 3-11 do not match.
Based on those:
- Point 3 says only rows 1 and 2 could match.
- Point 1 says that, of rows 1 and 2, only row 1 could match.
- Point 2 is indeterminate based on what we see.
My guess is that row 1 is close, but #2 (Employment Duration) doesn't match.

Related

Drools/Kogito Dynamic runtime flow/process

I'm new to Drools (I can consider Kogito as well). We have a requirement where rules and their order are stored in a decision table, and the rules should be executed in the order given by the users; the rule order could change at any time. Please keep in mind that the response (output) from any rule is the input for the next one. Is there any way to achieve this?
example:
Rules    rank
rule 1   1
rule 2   2

order might change to:

Rules    rank
rule 1   2
rule 2   1

TSQL list of business rules in MDS

I need help with querying my business rules in SQL!
I have created a couple of business rules in the MDS web service to validate my master data and found that 2% of my data does not comply with the rules. I have now created a SQL subscription view to report on the invalid data in Power BI. In my Power BI report I need to tell the business user why the data is invalid, but I cannot, since the subscription view only tells me where the data is invalid, not why. So I need to know how I might query my business rules from the MDS database in SQL and map them to my Power BI data model. Is there a way to query the list of business rules from the MDS database?
OK, so there are multiple ways to go about this. Here are some solutions; please choose the one that suits your scenario.
1. SQL - List of all Business Rules
The following query will retrieve the list of all Active Business Rules created in MDS.
SELECT *
FROM [MDM].[mdm].[viw_SYSTEM_SCHEMA_BUSINESSRULES]
WHERE Model_Name = 'YourModelName'
AND BusinessRule_StatusName = 'Active'
You can, of course, further filter by Entity_Name, etc.
The important columns in your case are going to be:
[BusinessRule_Name]
[BusinessRule_Description]
[BusinessRule_RuleConditionText]
[BusinessRule_RuleActionText]
Note: The challenge in your scenario, I think, is going to be that the Subscription View of the entity does not have IDs of the exact Business Rules that failed. So I'm not sure how you'll tie these 2 together (failing Rows -> List of Business Rules). Also remember that each row may have more than 1 business rule that failed.
2. Using the view viw_SYSTEM_USER_VALIDATION
This is a view that has a historical list of all business rules (+row info) that failed. You may use the view in this way:
SELECT DISTINCT
    ValidationIssue_ID, Version_ID, VersionName,
    Model_ID, ModelName, Entity_ID, EntityName,
    Hierarchy_ID, HierarchyName,
    Member_ID, MemberCode, MemberType_ID, MemberType,
    ConditionText, ActionText,
    BusinessRuleID, BusinessRuleName, PriorityRank,
    DateCreated, NotificationStatus_ID, NotificationStatus
FROM [MDM].[mdm].[viw_SYSTEM_USER_VALIDATION]
WHERE --CAST(DateCreated as DATE) = CAST(GETDATE() as DATE) AND
    ModelName = 'YourModelName'
    --AND EntityName IN ('Entity_1','Entity_2') -- Use this to filter on specific Entities
Here, use the columns Model_ID, Entity_ID and Member_ID/MemberCode to identify the specific model, entity & row that failed.
Columns BusinessRuleName, ConditionText & ActionText will give your users additional info on the Business Rule that failed.
Note:
One issue with using this view is that even if, let's say, your failure condition was resolved the next day by the user, the view will still show that a validation had failed on a certain date (via the DateCreated column).
Also note that the same failed data row will appear multiple times here if multiple Business Rules on the same row failed validation (there will be different BusinessRuleID/Name, etc.). Just something to take note of.
Similarly, the same row may appear multiple times if it has failed again and again at different times. To work around this, and if your final report can live with it, add a WHERE clause on the DateCreated column so that you only see rows that failed today; the commented-out line CAST(DateCreated as DATE) = CAST(GETDATE() as DATE) AND does exactly that. If you can't use that, just make sure the data rows are distinct. However, if you do this, remember that if something failed yesterday (and is still in the Failed status), it may not show up.
My suggestion would be to use a slightly modified version of Solution #2:
Get the list of Failed Rows from your Subscription View (the data row's ValidationStatus is actually still 'Failed'), then JOIN with viw_SYSTEM_USER_VALIDATION making sure that you only select the row with MAX(DateCreated) value (of course for the same data row AND Business Rule).
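A rough sketch of that join follows. Assumptions: your subscription view is called YourSubscriptionView, lives in the same mdm schema, and exposes the member's Code and a ValidationStatus column; ROW_NUMBER() is used here just to keep the MAX(DateCreated) row per member and business rule.

-- Sketch: for each row still failing validation in the subscription view,
-- pull the most recent failure details from viw_SYSTEM_USER_VALIDATION
-- so the report can show why the row is invalid.
;WITH LatestFailure AS
(
    SELECT  v.MemberCode,
            v.BusinessRuleName,
            v.ConditionText,
            v.ActionText,
            v.DateCreated,
            ROW_NUMBER() OVER (PARTITION BY v.Member_ID, v.BusinessRuleID
                               ORDER BY v.DateCreated DESC) AS rn
    FROM    [MDM].[mdm].[viw_SYSTEM_USER_VALIDATION] v
    WHERE   v.ModelName = 'YourModelName'
)
SELECT  s.*,                        -- the failing row from your subscription view
        f.BusinessRuleName,         -- which rule failed
        f.ConditionText,            -- why it failed
        f.ActionText,
        f.DateCreated
FROM    [MDM].[mdm].[YourSubscriptionView] s   -- placeholder name
JOIN    LatestFailure f
        ON f.MemberCode = s.Code               -- assumes the view exposes the member Code
WHERE   s.ValidationStatus = 'Failed'          -- only rows that are still failing
  AND   f.rn = 1;                              -- newest DateCreated per member/rule

Each failing row can still come back more than once if several rules are currently failing for it, which matches the note above.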
Best of luck and if you find anything else while solving this issue, do share your learning here with all of us.
Lastly, if you found this useful, please remember to mark it as the answer :)

webMethods business rules: How to create a decision table that depends on another decision table

I am using the native rule engine of webMethods (wM Business Rules) to create complex rules. I need to create a decision table that depends on the result of another decision table, as shown in the graphic below.
I finally found out how to do it.
The idea is to put both decision tables into the same rule set, then run the rule set providing var1, var2, param1 and param2.
The rule engine will then infer the result of the first decision table and feed it into the second one.
NB: the name of the result of the first table has to be the same in the second table.

CDC multiple insert/delete of the same identity value

I have a table T with an ID column set as identity and primary key. I enabled CDC on the table and later added an XML field that I didn't care about capturing, so I did nothing further (to recreate the capture table and/or migrate old capture data).
I now have a stored procedure that (among other things) updates only the newly created field (no other field) in table T. I notice that instead of recording an update (operation=3 followed by operation=4), CDC records a delete (operation=1) followed by an insert (operation=2), and all fields are the same (of course, since none of them was updated).
I actually noticed this because I had the same identity value inserted and/or deleted more than once, which is not possible (unless identity_insert is on, which it is not).
Why does CDC record operation=1 instead of 3 and operation=2 instead of 4?
Is this documented anywhere or is it a bug?
The reason you are seeing a delete/insert pair (operations 1/2) as opposed to an update pair (3/4) is that you are updating a "set" of data that also has a unique constraint on your column.
For SQL Server to make sense of this without violating the unique constraint, it deletes the row and reinserts it (with the "update").
It's not an issue or a defect; it's the way SQL Server works, and CDC innocently logs it as it sees it. Remember, CDC is just a subscriber and replicates things as they happen.
If you need to see an update, you may have to look for the 1/2 "pair" and not only the operation codes 3/4.
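If you want to see this in your own change data, here is a minimal sketch (assuming the table is dbo.T with the default capture instance dbo_T; adjust the names to your setup) that lists every captured change with its operation code, so the 1/2 pairs show up next to any 3/4 pairs:

-- List all captured changes for dbo.T with their operation codes.
-- Assumption: the capture instance is dbo_T (the default for dbo.T).
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_T');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

SELECT  __$start_lsn,
        __$seqval,
        __$operation,   -- 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image)
        ID              -- the identity/primary key column of T
FROM    cdc.fn_cdc_get_all_changes_dbo_T(@from_lsn, @to_lsn, N'all update old')
ORDER BY __$start_lsn, __$seqval, __$operation;

An update that SQL Server turned into a delete/insert pair will then appear as a 1 immediately followed by a 2 for the same ID under one __$start_lsn.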
Some great articles:
Bounded Update is the term used to describe certain types of UPDATE statements from the publisher that will replicate as DELETE/INSERT pairs on the subscriber. We perform a bounded update for every set-based update that changes a column that is part of a unique index or constraint. In other words, if an UPDATE statement touches more than one row and modifies a column that has any UNIQUE constraints, the UPDATE statement is sent to the subscriber as a DELETE/INSERT pair ... read more here
https://support.microsoft.com/en-us/kb/238254

Pagination logic in Mainframe CICS

Here is my requirement.
The front end (Client) will do a search based on predefined conditions (for instance: customer id, account number, first name, last name, etc.). I need to get the data corresponding to this request from a DB2 database and send it back to them (I am on the Server side). We use CICS channels and containers to pass requests and responses between the Client and Server.
The front end needs the data ordered by receive date descending, customer id ascending, account number ascending. Data are fetched in pages of 500 records. For example, if a search request from the front end would retrieve 50,000 records from the DB2 database, we need to return the data in 500-record "pages". For pagination, we use the security deposit number field, which is the primary key of our table, but the sort order is not based on this field.
I would like to know whether we can use scrollable cursor logic in CICS to implement pagination.
Please note that I would prefer not to do an internal array bubble sort to send the data in the response, as it would degrade performance. I would like to do it via query logic. Any thoughts?
Example (Initial Front end input request):
Customer id : A
First time request (To identify whether it is first time or next or previous request for pagination)
First security deposit number : 0
Last security deposit number : 0
Since this is a first-time request, both of these fields will be zero from the front end, and we need to retrieve records from the database based on the condition security deposit number > 0
Db2 database:
There are 700 records for these criteria
Mainframe response for first time:
We will send the first 500 records
The front end will then send a request to get the next set of records, which will contain:
Customer id: A
Next request
First security deposit number: 0
Last security deposit number : 17980
So for this request, if I query my database based on security deposit number > 17980, it may result in duplicate records being listed on the screen once again, since our database sort order is not based on the security deposit number.
How to implement this logic?
Many Client/Server applications in an IBM Mainframe environment involve pseudo-conversational CICS transactions.
If you are using CICS in pseudo-conversational mode, it is not possible for the Server to hold cursors when it RETURNs to the Client. Therefore scrollable cursors are of little use in this environment. So to answer your basic question: no, scrollable cursors cannot be used here.
The "trick" here is to create an SQL predicate in the Server that is restartable. It will then pick up rows in the correct order from any given starting point. When the Client calls your Server, it must pass all of the positioning information to your Server.
Typically, on a first call from a Client, all of the positioning values are set to cause the cursor to position itself starting with what must be the first row. The Server then pulls in a "page" worth of data and returns it to the Client. On the next page-forward request the Client sets these positioning values to the last row it displayed and calls the Server for the next "page" of data.
In your situation I would assume that the page-forward cursor would look something like this; all the variables prefixed with RESTART... are what the Client must provide to the Server to start the cursor in the correct position.
DECLARE Page-forward CURSOR FOR
  SELECT Receive_Date, Customer_id, Account_Nbr, Security_Dep_Id
  FROM Table_Name
  WHERE (   (Receive_Date < :RESTART-RCV-DT)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id > :RESTART-CUSTOMER-ID)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id = :RESTART-CUSTOMER-ID AND
             Account_Nbr > :RESTART-ACCT-NBR)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id = :RESTART-CUSTOMER-ID AND
             Account_Nbr = :RESTART-ACCT-NBR AND
             Security_Dep_Id > :RESTART-SEC-DEP-ID))
  ORDER BY 1 DESC, 2 ASC, 3 ASC, 4 ASC
For the initial call the Client would have passed something like '9999-12-31' as the RESTART-RCV-DT, and zero for the RESTART-CUSTOMER-ID, RESTART-ACCT-NBR and RESTART-SEC-DEP-ID (assuming these are all numeric). If you look at the cursor predicate carefully you can verify that there cannot be any rows prior to these values; therefore this will return the first page of data. If the Client needs to page forward after this, it must tell the Server to start with the next row after the last one it received. To do this it would populate the RESTART... variables with the values from the last row on the page it just displayed. This process drives the cursor selects forward one page at a time.
When paging up, the process is reversed (you will need a second cursor to support this, and the Client needs to tell you which direction to page: forward or back). The Client will need to populate the RESTART variables with the first row it received from the Server. The trick for the Server on a page-up request is to return the data to the Client in reverse order. You may have to populate the data page passed back to the Client in reverse order (i.e. put the first row retrieved into the last row of the paging area shared between the Client and the Server). The page-backward cursor would look something like:
DECLARE Page-backward CURSOR FOR
  SELECT Receive_Date, Customer_id, Account_Nbr, Security_Dep_Id
  FROM Table_Name
  WHERE (   (Receive_Date > :RESTART-RCV-DT)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id < :RESTART-CUSTOMER-ID)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id = :RESTART-CUSTOMER-ID AND
             Account_Nbr < :RESTART-ACCT-NBR)
         OR (Receive_Date = :RESTART-RCV-DT AND
             Customer_Id = :RESTART-CUSTOMER-ID AND
             Account_Nbr = :RESTART-ACCT-NBR AND
             Security_Dep_Id < :RESTART-SEC-DEP-ID))
  ORDER BY 1 ASC, 2 DESC, 3 DESC, 4 DESC
As has been pointed out in other answers, this type of paging process does not manage or detect concurrent updates to the database that may occur during paging transactions. That is another topic for another day...
Developing Restartable Cursors
The key to building a paging Server is to develop a cursor that is restartable from a set of values received from a Client transaction. This leaves control of cursor positioning and direction with the Client. It also means the Client must receive all critical positioning data from the Server, even though the Client might not actually use these data for any other purpose (e.g. from your question I got the impression that the Client may not require the Security Deposit Id except to supply as a positioning parameter for your Server).
To build a paging Server you need to know what the required sort order of the data is (e.g. Receive Date descending, then Customer Id ascending, then Account Number ascending). You also need to know the set of data that uniquely identifies a row returned by the cursor. In your case that would be the Security Deposit Id (this is the primary key for the table you are selecting from, so it must be unique for each and every row in that table). Knowing this, you then build a cursor predicate (the stuff in the WHERE clause) that will return the data needed by the Client in the required sort order and that also includes the full positioning key (i.e. Security Deposit Id). The fact that two or more returned rows could contain identical data if the final positioning key were eliminated makes it important that the positioning key be included as a sort condition. It doesn't matter if it is ascending or descending, but it needs to be included in the sort to ensure a consistent order of data retrieval.
A fairly simple formula may be followed to build the predicate for a restartable cursor needed to support paging Servers. Basically this is a cascade of "OR" clauses connecting a series of "AND" clauses that become progressively more selective, following the sort order required by the Client and ending with the positioning key.
To see how this works consider how the query for your Server might be developed...
Start with the column from the sort order that changes least often...
SELECT ...
FROM ...
WHERE Receive_Date < restart value
This will retrieve all rows prior to the specified restart Receive date, regardless of what the other column restart values are (e.g. Customer IDs can range from minimum to maximum values, as long as the Receive Date is less than any Receive Date "seen" so far). Since this column only changes value after all subordinate sort column values have been exhausted, you can be sure that this does not pick up any rows prior to the full restart key.
But what about those rows that occur on the same date as the restart request but have a larger Customer Id? These can be picked up with...
SELECT ...
FROM ...
WHERE Receive_Date = restart value AND
Customer_id > restart value
What about those where the Receive Date and Customer Id are the same as the restart key but have
a larger Account Number? These can be picked up with...
SELECT ...
FROM ...
WHERE Receive_Date = restart value AND
Customer_Id = restart value AND
Account_Nbr > restart value
Continue this pattern until the full restart key has been processed. Notice that the inequality signs are determined by the sort order: use < when the column is sorted descending and > when ascending. Also notice that the SELECT and FROM clauses are exactly the same for each query, which means you can put them all together using OR conjunctions...
SELECT Receive_Date, Customer_id, Account_Nbr, Security_Dep_Id
FROM Table_Name
WHERE (   (Receive_Date < :RESTART-RCV-DT)
       OR (Receive_Date = :RESTART-RCV-DT AND
           Customer_Id > :RESTART-CUSTOMER-ID)
       OR (Receive_Date = :RESTART-RCV-DT AND
           Customer_Id = :RESTART-CUSTOMER-ID AND
           Account_Nbr > :RESTART-ACCT-NBR)
       OR (Receive_Date = :RESTART-RCV-DT AND
           Customer_Id = :RESTART-CUSTOMER-ID AND
           Account_Nbr = :RESTART-ACCT-NBR AND
           Security_Dep_Id > :RESTART-SEC-DEP-ID))
ORDER BY 1 DESC, 2 ASC, 3 ASC, 4 ASC
There you go... a restartable cursor for forward paging. Construction of the cursor for backward paging follows a similar pattern; just flip the sort orders and repeat.
A simplistic approach: Write your SQL to retrieve data according to your criteria, in the sort order you specify. Then only retrieve the keys to the rows you want. Save the keys somewhere you will have access to upon subsequent invocations of your transaction. Look into multi-row select in DB2. Also understand pseudo-conversational programming techniques in CICS.
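For the multi-row select part, here is a minimal sketch of a DB2 for z/OS rowset cursor under the assumptions of the forward-paging example above. The host-variable array names are hypothetical and would be declared in your program with 500 occurrences, and in COBOL each statement would be coded inside EXEC SQL ... END-EXEC.

-- Sketch: fetch one 500-row "page" per invocation with a rowset cursor.
-- :RECEIVE-DT-ARR, :CUST-ID-ARR, :ACCT-NBR-ARR and :SEC-DEP-ID-ARR are
-- hypothetical host-variable arrays, each sized for 500 rows.
DECLARE PAGE_FWD CURSOR WITH ROWSET POSITIONING FOR
  SELECT Receive_Date, Customer_id, Account_Nbr, Security_Dep_Id
  FROM   Table_Name
  WHERE (Receive_Date < :RESTART-RCV-DT)
     OR (Receive_Date = :RESTART-RCV-DT AND Customer_Id > :RESTART-CUSTOMER-ID)
     OR (Receive_Date = :RESTART-RCV-DT AND Customer_Id = :RESTART-CUSTOMER-ID
         AND Account_Nbr > :RESTART-ACCT-NBR)
     OR (Receive_Date = :RESTART-RCV-DT AND Customer_Id = :RESTART-CUSTOMER-ID
         AND Account_Nbr = :RESTART-ACCT-NBR AND Security_Dep_Id > :RESTART-SEC-DEP-ID)
  ORDER BY 1 DESC, 2 ASC, 3 ASC, 4 ASC;

OPEN PAGE_FWD;

FETCH NEXT ROWSET FROM PAGE_FWD
  FOR 500 ROWS
  INTO :RECEIVE-DT-ARR, :CUST-ID-ARR, :ACCT-NBR-ARR, :SEC-DEP-ID-ARR;

CLOSE PAGE_FWD;

On the next invocation you reopen the cursor with the RESTART... values taken from the last row returned, exactly as described in the earlier answer, and fetch the next rowset.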
And now we get to the design implications Bill Woodger mentions, that you do not specify in your question, and which are the reason I'm just hitting the high points of a simplistic approach.
If changes to your result set occur between one invocation and the next, your results will not reflect those changes. You must decide if this is important.
You mention a "front end" but do not specify what it is. If it is a BMS application, you may be able to save the keys in your commarea or in a container. If your front end is a distributed application invoking your transactions via CICS Web Services or CICS Web Support or MQ or raw sockets or whatever, you must design a mechanism to store those keys such that you can uniquely retrieve them — perhaps by sending a contrived key back to the distributed application which it must supply upon subsequent invocations. Then you must have some process to clean up your key store.
Creating a solution to your problem that is unique in your IT shop is not something to be done in isolation. You must involve others who will be tasked with maintaining your application, there may be a group external to your project tasked with making such decisions, there may be infrastructure issues with your solution.
So this isn't so much an answer to your question as it is an elaboration on why you may not get an answer, or at least the answer you seem to desire.