Business Central: call a DMN file from another DMN (Red Hat)

I am using RedHat Business Central and trying to call one DMN file from another.
Use case - if salary > 40000 then calculate Tax from firstdmn else from seconddmn.
I have added a context and literal expression in the Tax DMN decision and included a model below, but I don't know how to proceed further. Please suggest what to do.

Here is an example using Red Hat Business Central, the Drools DMN open source engine and Scenario Simulation. This example also follows the DMN-methodology best practice of importing and re-using Business Knowledge Model (BKM) nodes (or, alternatively, Decision Services).
Starting with a skeleton of the model, as you partially suggested:
In this model, we define a BKM for a function that calculates Tax as 10% (mnemonic: this is the first DMN model, so ten percent). This is represented with the BKM called Tax10.
Then, in the second DMN model, we define a BKM for a function that calculates Tax as 20% (mnemonic: second DMN model, so twenty percent):
We go back to the first DMN model, and we Import the second one we just defined:
We can now include the imported BKM from the second model into the DRG:
Now the FEEL literal expression matches your original requirement:
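As a sketch (assuming the second model was imported under the name "secondmodel" and that both BKMs take the salary as their single parameter), the boxed literal expression for the Tax decision could look like:
if salary > 40000 then Tax10(salary) else secondmodel.Tax20(salary)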
We can use Scenario Simulation to verify, test and non-regression test the requirements:
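For instance (example figures only): with a salary of 50,000, which is above 40,000, Tax10 applies and the expected Tax is 50,000 × 10% = 5,000; with a salary of 30,000, Tax20 from the second model applies and the expected Tax is 30,000 × 20% = 6,000.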
This is as expected: when the salary is above 40K we used Tax10, otherwise we used Tax20 from the second model.


Correct classification of client requirements for UML diagrams?

I needed to classify the following RQs as a
Design objective,
Design Decisions,
Functional Req,
Non-Functional Req
(so I can do class diagram and use case diagram based on them later).
I wanted to know if I'm on the right track here (the bold face is my guess for each requirement):
Requirement document
Purchase Commitment System.
The software is to calculate a number of details needed to purchase by a factory in order to produce its products. (Design decision)
The software must be written in C++ or Java Programming Languages on the computer IBM PC. (Design decision)
The number of products should be equal to 4. (Non-Functional Req)
A general aim in the design of the software is to improve the portability of software. (Non-Functional Req)
The system should accept as input (make as a text file) the data about a number, amount and price of detail for every type of products. (Functional Req)
A number of details for every type of products should not be less than 5.
The first and second type of products should have 2 same details. The second and fourth type of products should have one same detail. The third type of products should have 2 same details with the fourth type and one same detail with the first type of products. (Design Objective)
The operator should be logged in and logged out to the system by login and password. (Design Objective)
At the beginning an operator must provide the following items of data (a validation of input data should be provided):
A number of every type of products to be produced by the factory for 3 months ahead. (Functional Req)
The software must produce a report for each action of an operator (the report should be saved in a file at the operator's request). The report must consist of: (Functional or Design Objective Req)
- A number of every detail needed to purchase.
- The total price for every detail.
- The total price for all the details.
A functional requirement tells what the software shall do. A non-functional requirement tells something about how the software shall be or how well it should do what it does.
Software design is about the structure and the behavior of the software. If some statement seems arbitrary and you think the software could fulfil all the requirements but differently, then there are chances that it's more about design than requirements. A design objective tells what the design must ensure (ambiguous: at the stage of the requirements, it's difficult to make the difference between non functional requirements and design objective). A design decision is a decision on the behavior or the structure of the software.
With this in mind, here is an analysis:
What the software shall do ==> Functional requirement (FR). If we changed this, the software would no longer do what is expected, so it can't be a design decision.
How the software shall be ==> Non-functional requirement (NFR). Not really about the structure or behavior of the software; the language will not impact the use case or the class model, so it's not really a design decision IMHO.
Arbitrary decision about cardinality in object model ==> Design decision (DD)
"aim in the design" ==> Design objective (DO)
What the software shall do ==> FR
Arbitrary constraint about the object model ==> DD. If it were no less than 3 or no less than 10, the software would still fulfil the functional requirements. However, this depends on the context: if it turned out that the software would not be fit for purpose unless these limits were respected, then it could be an FR.
Arbitrary constraint on the object model ==> DD. The purpose of this statement is unclear. It looks like some arbitrary constraints that could allow generalizing some categories.
What the software shall do ==> FR
Arbitrary decision on the interaction ==> DD. I think that the data could be entered at another moment, or in a different way (e.g. three times, one month at a time). Therefore I think it is a DD. However, one could argue that the system shall offer a 3-month planning horizon, so FR cannot be excluded, although I would expect it to be expressed differently.
What the software shall do ==> FR
I remember long discussions in the past about whether a specific requirement was non-functional or functional. However, Wikipedia has a simple definition.
As defined in requirements engineering, functional requirements specify particular results of a system.
So your classification does not look bad. Though, I wonder where your first two categories come from: it looks a bit like MoSCoW, but then again it does not. Design decisions (at least to me) are not something to be found in requirements; they are, as the name suggests, decisions coming from a design process. Further, a design objective is a sub-category of NFR. Even more important is the fact that your NFRs are not sub-classified; there should be at least a handful of sub-classes (legal, performance, etc.). See Wikipedia for a rather complete list.

Any alternative to BPMN and DMN notations for describing business logic?

I am looking for a tool capable of describing a complex data-manipulation process that can be more or less easily modified by people who do not write code.
For example, my task is:
1. fetch data from source A
2.1 if the data is full - filter it by condition 45
2.2 if the data is not full - fetch additional data from source B
3. if the result passes validation - return 1, otherwise 0
This should be described in some readable manner, best option is if one can modify this process in some UI tool.
What are the requirements?
Each process consists of two parts: steps, and a way to arrange them in a sequence.
(1)
The process in each step should be able to
1. emit commands for fetching some data from data-sources and inserting this into process context
2. filter, enrich, transform datasets obtained
Thus each step of this process should be described with some more or less simple DSL.
(2)
The selection of which step to go to next, i.e. the sequence of steps, should be described by some visual tool or, again as in (1), with some simple DSL.
Can you advise something for this task, which seems typical from my point of view?
Meanwhile, here are my own ideas.
The first thing that comes to mind is BPMN combined with Drools.
For the steps I may use DRL rules: they can only do basic data manipulation themselves, but I can call Java functions from them if I need something more complicated.
For the sequence of steps I may use a standard BPMN diagram.
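To make the DRL "step" idea concrete, here is a rough sketch (the DataSet fact and the SourceB helper are made-up names) of a rule that enriches incomplete data by calling out to plain Java:
rule "Fetch additional data when the dataset is not full"
when
    d : DataSet( full == false )
then
    d.merge( SourceB.fetchAdditionalData( d.getId() ) );  // ordinary Java call on the right-hand side
    update( d );
end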
Maybe there is something better?
The combination of BPMN with DMN would indeed allow you to describe, with these visual standards, the execution of the process and the decision logic to be applied, in order to achieve what you describe in the "For example" paragraph.
In order to make it fully accessible to business people, the BPMN tasks for fetching the data or performing any interaction with an external system should be prepared in advance and made available during the composition of the BPMN/DMN diagrams.
As an alternative to the BPMN+DMN combination, you can look into Fuse or Fuse Online. They cannot describe all the semantics of the BPMN+DMN combination, but with Fuse Online, for instance, you can implement the steps you described in the "For example" paragraph in a fully visual way.

Evaluating an NLP classifier with annotated data

If we want to evaluate a classifier for an NLP application with data annotated by two annotators who do not completely agree on the annotation, what is the procedure?
That is, should we compare the classifier output with just the portion of the data that the annotators agreed on? Or with just one annotator's data? Or with both of them separately and then compute the average?
Taking the majority vote between annotators is common. Throwing out disagreements is also done.
Here's a blog post on the subject:
Suppose we have a bunch of annotators and we don’t have perfect agreement on items. What do we do? Well, in practice, machine learning evals tend to either (1) throw away the examples without agreement (e.g., the RTE evals, some biocreative named entity evals, etc.), or (2) go with the majority label (everything else I know of). Either way, we are throwing away a huge amount of information by reducing the label to artificial certainty. You can see this pretty easily with simulations, and Raykar et al. showed it with real data.
What's right for you depends heavily on your data and on how the annotators disagree; for starters, why not use only the items they agree on, and then separately compare the model against the items they didn't agree on?
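To make the two evaluations concrete, here is a minimal Java sketch (all names are illustrative, and it assumes the three label lists are aligned by item): it computes accuracy on the agreed-upon subset and, separately, the average of the accuracies measured against each annotator.
import java.util.List;

public class AnnotationEval {

    // Accuracy computed only on the items where both annotators gave the same label.
    static double accuracyOnAgreedItems(List<String> predicted, List<String> annotatorA, List<String> annotatorB) {
        int agreed = 0, correct = 0;
        for (int i = 0; i < predicted.size(); i++) {
            if (annotatorA.get(i).equals(annotatorB.get(i))) {
                agreed++;
                if (predicted.get(i).equals(annotatorA.get(i))) {
                    correct++;
                }
            }
        }
        return agreed == 0 ? Double.NaN : (double) correct / agreed;
    }

    // Accuracy against a single annotator, over all items.
    static double accuracyAgainst(List<String> predicted, List<String> gold) {
        int correct = 0;
        for (int i = 0; i < predicted.size(); i++) {
            if (predicted.get(i).equals(gold.get(i))) {
                correct++;
            }
        }
        return (double) correct / predicted.size();
    }

    // Evaluate against each annotator separately, then average the two accuracies.
    static double averagePerAnnotatorAccuracy(List<String> predicted, List<String> annotatorA, List<String> annotatorB) {
        return (accuracyAgainst(predicted, annotatorA) + accuracyAgainst(predicted, annotatorB)) / 2.0;
    }
}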

Is it expensive to create a new Drools KnowledgeBase every time a rule needs to be evaluated?

I have a situation where for each product I have a different rule.
Thus, I will have 1 drl per each product.
Consequently, as far as I understand, I have a choice:
add all those knowledge packages into a single KnowledgeBase.
and then let Drools match the right rule using the id of the product.
when
avs : AvailabilityStatus( available == true, quantity <= 50, productId == 7899 )
then
avs.setDiscountRate("0.65");
end
create a new KnowledgeBase for each product i.e. for 50 products, 50 KnowledgeBases with just one drl loaded for each.
In my web app each request requires a new evaluation of the rules for the product.
I don't know which approach is more efficient.
The KnowledgeBase is a very, very expensive object to instantiate, so I wouldn't create one each time a rule needs to be evaluated.
I think the first approach you mentioned is better (to have all the rules in one drl). This also leaves the option open to write rules across products (imagine you want to add price rules and model a "buy 2, get the 3rd free").
There's a third approach, in which you can still have one drl per product, but you load all of them into the same knowledge base. This is similar to the single responsibility principle, but applied to the rules.
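Whichever of these layouts you choose, the practical consequence of the first point is: build the knowledge base once (e.g. at application startup), cache it, and only create a new session per web request. A minimal sketch with the current KIE API (class and variable names are illustrative; the same idea applies to the older KnowledgeBase API the question uses):
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RuleEngine {

    private final KieBase kieBase;

    public RuleEngine() {
        KieServices kieServices = KieServices.Factory.get();
        KieContainer container = kieServices.getKieClasspathContainer(); // picks up every DRL on the classpath
        this.kieBase = container.getKieBase();                           // expensive: built only once and cached
    }

    // Per request: sessions are cheap, so create one, fire the rules and dispose of it.
    public void evaluate(Object availabilityStatus) {
        KieSession session = kieBase.newKieSession();
        try {
            session.insert(availabilityStatus);
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}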
If you need to write a rule per product, it is likely that you have multiple rules with the same kind of LHS and only parameter changes, e.g.
when
avs : AvailabilityStatus( available == true, quantity <= 50, productId == 7899 )
then
//update
end
then you can consider using a decision table; Drools can generate the rules from it by itself, as described in the user guide.
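As a sketch only (the second row's values are invented for illustration), the varying parameters could live in a table like the one below, with Drools generating one rule of the shape above per row:

productId | max quantity | discountRate
7899      | 50           | 0.65
7900      | 25           | 0.40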
I suggest creating just one knowledge base and storing the binary packages in the file system. When you want to add a new rule, regenerate the binary package. Don't create a drl per rule; that doesn't make sense.
Cheers

simulation with arena

I want to simulate a supermarket in Arena to find the proper number of cashiers the market needs.
I want to start the simulation with one cashier and then increase the number of cashiers in subsequent simulations until the utilization of the cashiers is less than 70%.
Each cashier is a "resource module" and has a "process module" for its service time.
Should I make a separate model for each number of cashiers (for example, one model for a supermarket with one cashier, another model for a supermarket with two cashiers, and so on), or is there a better way?
It's a little more advanced, but it sounds like Arena's Process Analyzer would help you determine the number of cashiers needed.
The Process Analyzer assists in the evaluation of alternatives presented by the execution of different simulation model scenarios. This is useful to simulation model developers, as well as decision-makers.
The Process Analyzer is focused at post-model development comparison of models. The role of the Process Analyzer then is to allow for comparison of the outputs from validated models based on different model inputs.
via pelincec.isep.pw.edu.pl/doc/Simulation_Warsaw%20Part%205.pdf
A Google search for Arena Process Analyzer provides plenty of lecture notes, book references and examples:
https://www.google.com/search?q=arena+process+analyzer
Also, it sounds like this model isn't very complicated so, although it may be tedious, it'll probably be quicker to alter your model and run n simulations for each solution {1 cashier, 2 cashiers, ...}.
Also, if the model is indeed pretty simple, why not create multiple independent models in the same simulation file. For instance, one simulation file has three independent models of 1, 2 and 3 cashiers. The next has 4, 5 and 6 cashiers and so on. This would consolidate the statistics a little more and make analysis easier.
There are several ways to do this without making multiple models. A cashier is simply a resource, but it could also be an entity.
You can build your model to require throughput (customers) to be processed when two entities are available - a register entity and a cashier entity. This could be done with a batch module.
Cashier entities would be set up according to a schedule you would like to test, from minimum cashier availability to full cashier availability.
Register entities would probably be held constant, but you could make them variable according to a schedule, too.
Your batched entity would then go into the process entity until a schedule called for the cashier to "leave" the system - split the batch and destroy the cashier entity. Register entity loops back to the batch to be grouped with another cashier or wait.