How to debug a Drools decision table

Could someone help me debug a decision table in Drools? For our project we are creating a decision table with more than 1000 rules. Whenever there is a mistake in a rule, the spreadsheet stops working, and it does not display where the exact error is.

Drools: version 7.15.0.Final
I currently follow two approaches for debugging decision tables:
Compilation phase
In my case, I have to serialize the decision tables to save time: normally they are converted into .drl files which are then evaluated. I skip that step, compile them directly into knowledge bases, and serialize those. My application uses these serialized knowledge bases.
Sometimes my decision tables fail to compile.
I debug them by generating the .drl file. The errors that the DRL parser reports are mostly identifiable from the generated .drl file.
Code snippet for converting a Drools decision table into its corresponding DRL file:
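This is a minimal sketch, assuming an XLS decision table at the hypothetical path rules.xls, using SpreadsheetCompiler from the drools-decisiontable module (the class name DecisionTableToDrl is my own):

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.drools.decisiontable.InputType;
    import org.drools.decisiontable.SpreadsheetCompiler;

    public class DecisionTableToDrl {
        public static void main(String[] args) throws Exception {
            // "rules.xls" is a placeholder path for your decision table.
            try (InputStream spreadsheet = new FileInputStream("rules.xls")) {
                SpreadsheetCompiler compiler = new SpreadsheetCompiler();
                // Expands the spreadsheet into the DRL it would compile to.
                String drl = compiler.compile(spreadsheet, InputType.XLS);
                // Inspect the generated DRL to locate the offending rule row.
                System.out.println(drl);
            }
        }
    }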
Runtime phase
Sometimes, even if my decision tables have compiled successfully, they have runtime issues: some rules don't fire as expected. To debug these I've found AgendaEventListener helpful. Drools provides two helpful debug listener implementations out of the box: DebugAgendaEventListener and DebugRuleRuntimeEventListener.
Each of these listeners comes in two variations. The ones from the org.drools.core.event package use a Logger instance for logging the events, whereas the ones from the org.kie.api.event.rule package write to stderr. Both have exactly the same functionality.
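For example, attaching the stderr variants to a session (a sketch only; how you obtain the KieBase is assumed to be whatever your existing setup does):

    import org.kie.api.KieBase;
    import org.kie.api.event.rule.DebugAgendaEventListener;
    import org.kie.api.event.rule.DebugRuleRuntimeEventListener;
    import org.kie.api.runtime.KieSession;

    public class DebugSessionRunner {
        public static void runWithDebugListeners(KieBase kieBase) {
            KieSession session = kieBase.newKieSession();
            // Prints every agenda event (activation created/cancelled, rule fired).
            session.addEventListener(new DebugAgendaEventListener());
            // Prints every working-memory event (fact inserted/updated/deleted).
            session.addEventListener(new DebugRuleRuntimeEventListener());
            // Insert your facts here, then:
            session.fireAllRules();
            session.dispose();
        }
    }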
Moreover, the KIE event model can be leveraged to extract more information and build custom debugging. More information can be found in the Drools 7.15.0.Final docs.
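For instance, a custom listener (a sketch extending the provided DefaultAgendaEventListener; the class name and message format are my own) can log just the names of the rules that fire, which is much easier to scan than the full debug output when the table has 1000+ rules:

    import org.kie.api.event.rule.AfterMatchFiredEvent;
    import org.kie.api.event.rule.DefaultAgendaEventListener;

    public class FiredRuleLogger extends DefaultAgendaEventListener {
        @Override
        public void afterMatchFired(AfterMatchFiredEvent event) {
            // Each decision table row expands to one rule, so the rule name
            // identifies the spreadsheet row that fired.
            System.out.println("Fired: " + event.getMatch().getRule().getName());
        }
    }

Register it the same way as the built-in listeners: session.addEventListener(new FiredRuleLogger());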
Additional links and references:
https://javadude.wordpress.com/2012/03/06/debugging-drools-rules/

Related

Reconciliation during Dataphor's registration of libraries

When registering libraries in Dataphor what is the difference between registering with and without reconciliation?
In my experience from learning and using this DBMS we've always registered without reconciliation. What are some example cases where we may choose one option over the other?
Registering with reconciliation means that the Data Definition Language (DDL) statements will be run against the target device(s). This is the desired behavior when starting from a blank or non-existent database, where you want Dataphor to create the needed structures. Otherwise, the preferred methodology is to register without reconciliation, so that any existing database is ignored, and then use the DeviceReconciliationScript() operator to reconcile the changes.

Adding item to Available Reports in Trac (via Plugin)

Is it possible to add a Ticket Report to the Available Reports via a plugin, so that after installing the plugin it becomes automatically available? Or would one have to manually save a custom query with the intended columns and filters?
If it is possible to do this via python code, what is the Trac interface to be implemented?
You can insert the reports through an implementation of IEnvironmentSetupParticipant. Example usage can be found in env.py, where the default reports are inserted in environment_created. You'll want to insert your reports in upgrade_environment; example code for inserting a report can be found in report.py. needs_upgrade determines whether upgrade_environment is called on Environment startup, so you'll need to provide logic that determines whether your reports are already present, returning False if they are and True if they are not. If True is returned, then upgrade_environment will get called (for your case environment_created can just be pass, or it can directly call upgrade_environment, the latter being a minor optimization discussed in #8172). See the Database Schema for information on the report table.
In Trac 1.2 (not yet released) we've tried to make it easier to work with the database by adding methods to the DatabaseManager class. DatabaseManager for Trac 1.1.6 includes methods set_database_version and get_database_version. This is useful for reducing the amount of code needed in IEnvironmentSetupParticipant.needs_upgrade when checking whether the database tables need to be upgraded (even simpler would be to just call DatabaseManager.needs_upgrade). There's also an insert_into_tables method that you could use to insert reports.
That said, I'm not sure you need to put an entry for your plugin in the system table using set_database_version. You can probably just query the report table, check whether your report is present, and use that check to return a boolean from IEnvironmentSetupParticipant.needs_upgrade, which will determine whether IEnvironmentSetupParticipant.upgrade_environment gets called. If you are developing for Trac 1.0.x, you can copy code from the DatabaseManager class in Trac 1.1.6. Another approach can be seen in CodeReviewerPlugin, for which I made a compat.py module that adds the methods I needed. The advantage of the compat.py approach is that the methods can be copied from the DatabaseManager class without polluting the main modules of your codebase. In the future, when your plugin drops support for Trac < 1.2, you can just delete the compat.py module and modify the imports in your plugin code, without having to change any of your primary plugin logic.
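A minimal sketch of this approach for Trac 1.0.x (the plugin class name, report title, query, and description are all placeholders of mine; check the report table in the Database Schema for the full column list):

    from trac.core import Component, implements
    from trac.env import IEnvironmentSetupParticipant

    class MyReportInstaller(Component):
        # Installs a custom report when the environment is upgraded.
        implements(IEnvironmentSetupParticipant)

        _title = 'My Report'  # placeholder report title

        def environment_created(self):
            pass  # or call self.upgrade_environment() directly

        def environment_needs_upgrade(self, db):
            # True (upgrade needed) only if our report is not yet present.
            cursor = db.cursor()
            cursor.execute("SELECT id FROM report WHERE title=%s", (self._title,))
            return cursor.fetchone() is None

        def upgrade_environment(self, db):
            cursor = db.cursor()
            cursor.execute(
                "INSERT INTO report (title, query, description) "
                "VALUES (%s, %s, %s)",
                (self._title, "SELECT id, summary FROM ticket",
                 "Placeholder description"))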

Getting the current Experiment instance at runtime

I'm running JUnit 4 with AnyLogic. In one of my tests, I need access to the Experiment running the test. Is there any clean way to access it at runtime? E.g., is there a static method along the lines of Experiment.getRunningExperiment()?
There isn't a static method that I know of. (If there were, it might be complicated by multi-run experiments, which permit parallel execution; although perhaps not, since there's still a single Experiment, there would be thread-safety issues.)
However, you can use getEngine().getExperiment() from within a model. You probably need to explain more about your usage context. If you're using AnyLogic Pro and exporting the model to run standalone, then you should have access to the experiment instance anyway (as in the help "Running the model from outside without UI").
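A minimal sketch from inside model code, such as an agent function (the Experiment class lives in com.anylogic.engine; exact generic parameters vary by AnyLogic version):

    // Anywhere inside the model, e.g. in an agent's function:
    com.anylogic.engine.Experiment<?> experiment = getEngine().getExperiment();
    traceln("Current experiment: " + experiment);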
Are you trying to run JUnit tests from within an Experiment? If so, what's your general design? Obviously JUnit doesn't sit as well in that scenario, since it 'expects' to be instantiating and running the thing to be tested. For my automated tests (where I can't export the model standalone because I don't use AnyLogic Pro), I judged it easier to avoid JUnit (it's just a framework, after all) and implement the tests 'directly': my model components write outputs and, at the end of the run, the Experiment compares the outputs to pre-prepared expected ones and flags whether the test passed or failed. With AnyLogic Pro, you could still export standalone and use JUnit to run the 'already-a-test' Experiments (with the JUnit test checking the Experiment for a testPassed Boolean being set at the end, or whatever).
The fact that you want to get running experiments suggests that you're doing this whilst runs are executing. If so, could you explain a bit about your requirements?

Explain the steps of the execution process when both main program and subprograms are DB2-COBOL programs

How do I run two subprograms from a main program when all of them are DB2-COBOL programs?
My main program is named 'Mainpgm1', and it calls my subprograms named 'subpgm1' and 'subpgm2'; I prefer static calls only.
I am currently using a PACKAGE instead of a plan with one member, both in 'db2bind' (the bind job), along with one DBRMLIB that has a DSN name.
The main question is: what changes are needed in 'db2bind' when I bind both DB2-COBOL programs?
Similarly, what changes are needed in 'DB2RUN' (the run job)?
Each program (or subprogram) that contains SQL needs to be pre-processed to create a DBRM. The DBRM is then bound into a PLAN that is accessed by a LOAD module at run time to obtain the correct DB/2 access paths for the SQL statements it contains.
You have gone from having all of your SQL in one program to several subprograms. The basic process remains the same: you need a PLAN to run the program.
DBAs often suggest that if you have several subprograms containing SQL, you create PACKAGEs for them and then bind the PACKAGEs into a PLAN. What was once a one-step process is now two (illustrated below):
Bind each DBRM into a PACKAGE
Bind the PACKAGEs into a PLAN
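For illustration, the two bind steps as DSN subcommands (a sketch only: 'MYCOLL', 'SUBPGM1' and 'MYPLAN' are placeholder names, and the option set your shop uses will differ):

    BIND PACKAGE(MYCOLL) MEMBER(SUBPGM1) ACTION(REPLACE) VALIDATE(BIND)
    BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) ACTION(REPLACE)

The first statement turns one DBRM into a Package in collection MYCOLL; the second builds the PLAN from every Package in that collection.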
What is the big deal with PACKAGES?
Suppose you have 50 subprograms containing SQL. If you create a DBRM for each of them and then bind all 50 into a PLAN as a single operation, it is going to take a lot of resources to build the PLAN, because every SQL statement in every program needs to be analyzed and access paths created for it. This isn't so bad when all 50 subprograms are new or have been changed. However, if you have a relatively stable system and want to change 1 subprogram, you end up re-BINDing all 50 DBRMs to create the PLAN, even though 49 of the 50 have not changed and will end up using exactly the same access paths. This isn't a very good approach. It is analogous to compiling all 50 subprograms every time you make a change to any one of them.
If, instead, you create a PACKAGE for each subprogram, the PACKAGE is what takes the real work to build: it contains all the access paths for its associated DBRM. Now if you change just 1 subprogram, you only have to rebuild its PACKAGE by rebinding a single DBRM into the PACKAGE collection, and then re-BIND the PLAN. Binding a set of PACKAGEs (a collection) into a PLAN is a whole lot less resource intensive than binding all the DBRMs in the system.
Once you have a PLAN containing all of the access paths used in your program, just use it. It doesn't matter whether the SQL being executed is from subprogram1 or subprogram2; as long as you have associated the PLAN with the LOAD module that is being run, it should all work out.
Every installation has its own naming conventions and standards for setting up PACKAGEs, COLLECTIONs and PLANs. You should review these with your Database Administrator before going much further.
Here is some background information concerning program preparation in a DB/2 environment:
Developing your Application

COBOL DB2 program

If I have one COBOL DB2 program that calls two other COBOL DB2 subprograms, how many DBRMs, Packages, and Plans will be created? If I change any one of the subprograms, do I need to recompile and bind all the programs? I am really confused about DBRMs, Plans, and Packages.
Regards,
Manasi
Oh my... This is a huge topic, so this answer is going to be very simplified and therefore incomplete. The answer depends somewhat on whether you are using the DB/2 pre-compiler or co-compiler. For this answer I will assume you are using the pre-compiler. If you are using the co-compiler, the principle is pretty much the same but the mechanics are a bit different.
The goal here is to generate:
a Load module from your COBOL source
a DB/2 Plan to provide your program with access paths into the DB/2 database
Everything in between just supports the mechanics of creating an appropriate DB/2 Plan for your program to run against.
The Mechanics
Each program and/or subprogram containing DB/2 statements needs to be pre-processed by the DB/2 pre-compiler. The pre-compiler creates a DBRM (Data Base Request Module). The pre-compile also alters your source program by commenting out all the EXEC SQL...END-EXEC statements and replacing them with specific calls to the DB/2 subsystem. You then use the regular COBOL compiler to compile the code emitted by the pre-processor, producing an object module that you then link into an executable.
The DBRM produced by the pre-compile contains a listing of the SQL statements contained in your program, plus some other information that DB/2 uses to associate these specific SQL statements with your program. The DBRM is typically written to a permanent dataset (usually a PDS) and then input to the DB/2 Binder, where the specific access paths for each SQL statement in your program are compiled into a form that DB/2 can actually use. The Binder does for DB/2 roughly what the compiler does for COBOL: think of the DBRM as your source code and the Binder as the compiler.
The access path information produced when the DBRM is bound needs to be stored somewhere so that it can be located and used when your program calls DB/2.
Where do you put the Binder output? Your options are to bind it into a Package or directly into a Plan. The shortest route is to bind a group of DBRMs directly into a Plan. However, as with many shortcuts, this may not be the most efficient thing to do (I will touch upon the reason later on).
Most larger systems will not bind DBRMs directly into Plans; they will bind into a Package. The bound Package is stored within the DB/2 sub-system (the same way a Plan is). What, then, is a Package? A Package is a single bound DBRM. A Plan, on the other hand, typically contains the access paths for multiple DBRMs.
Now we have a set of Packages; each Package contains the SQL access paths for its respective DBRM, which was derived from a given program. We need to construct a Plan from these. To do this, a set of Bind Cards is created, usually by your Data Base Administrator. Bind Cards are just a sort of "source code" for the DB/2 Binder (they are not punched cards). The Bind Cards define which Packages form a given Plan. These are then submitted to the Binder, which assembles them into a Plan. Note: you may also hear mention of Collections; these are just named groupings of Packages that have been defined by your DBA.
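As an illustration, a minimal Bind Card set might look like the following (a sketch: the plan, collection and member names are placeholders, and real option sets are shop-specific):

    BIND PLAN(MYPLAN)             -
         PKLIST(MYCOLL.MAINPGM1,  -
                MYCOLL.SUBPGM1,   -
                MYCOLL.SUBPGM2)   -
         ACTION(REPLACE)

Here the PKLIST names the Packages (qualified by their Collection) that make up the Plan.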
To summarize, we have the following process:
Program -> (Pre-Compiler) -> Modified Program -> (COBOL compiler) -> Object -> (Link) -> Load Module
                          -> DBRM -> (DB/2 Bind) -> Package
Package(s) ->
Bind Cards -> (DB/2 Bind) -> DB/2 Plan
The two fundamental outputs here are a Load Module (your executable COBOL program) and a DB/2 Plan. The Program and Plan come together in your JCL, where you have to give the Plan name somewhere within the EXEC statement along with the program to run.
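For example, a batch run step typically goes through the TSO terminal monitor program, naming both the program and the Plan (a sketch: the subsystem id, load library, program and plan names are placeholders):

    //RUN      EXEC PGM=IKJEFT01
    //STEPLIB  DD DSN=MY.LOADLIB,DISP=SHR
    //SYSTSPRT DD SYSOUT=*
    //SYSTSIN  DD *
      DSN SYSTEM(DSN1)
      RUN PROGRAM(MAINPGM1) PLAN(MYPLAN) LIB('MY.LOADLIB')
      END
    /*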
With this brief, and highly simplified, background, let's try to answer your questions:
How many DBRMs are created?
One DBRM per program containing EXEC SQL statements.
How many Packages are created?
A Package is constructed from one DBRM. There is a 1:1 correspondence between source program and Package.
How many Plans are created?
Any given Package may be included in multiple Collections and/or multiple Bind Card sets. This means that a given Package may be included in multiple Plans.
If I change a program, what is affected?
If you bind your DBRM directly into a Plan, then you must rebind every Plan that uses that DBRM. This can be a very time-consuming and error-prone proposition. However, if you bind your DBRM into a Package, then you only need to rebind that Package. Since there is a 1:1 correspondence between program and Package, only a single bind needs to be done. The only time the Plan needs to be rebound is when a Package or Collection is added to or removed from the Bind Card set that defines it.
The advantage of using Packages should be clear from the above, and this is why most shops do not bind DBRMs directly into Plans, but use Packages instead.