Drools for test case formulation

I am testing a rather complex system that behaves according to some business rules (written as semi-formal text).
The goal is to create test cases that cover as many states of the system as possible. I want to automate this task in the following way:
1) Formalize the business rules in Drools
2) Then use some mechanism to create a list of all possible situations (which need to be tested)
For example, I have the following business rule package with two rules (this is only an example; the real business rules are much more complex):
import java.util.List;

global List outErrorCodes;
global Boolean condition1;
global Boolean condition2;
global Boolean condition3;

rule "01"
when
    eval( condition3 == false )
then
    outErrorCodes.add("ERROR_CODE1");
end

rule "02"
when
    eval( (condition1 == true) && (condition2 == true) )
then
    outErrorCodes.add("ERROR_CODE2");
end
condition1, condition2 and condition3 are inputs.
outErrorCodes is the output.
That is, condition1, condition2 and condition3 describe a certain situation, and outErrorCodes describes the expected behaviour of the system in that particular situation.
I want to create a mechanism that automatically creates a list of all possible tuples (condition1, condition2, condition3, outErrorCodes), based on the logic in the rules. Each tuple represents a state of the system.
These tuples will then be used as a basis for creating actual test cases.
Is it possible with Drools? If so - how?
Many thanks in advance
Dmitri

We had success in taking the ruleset, deploying it as a service using drools-server, and then writing a script to make web service calls exercising each possible value of each input variable. Within about an hour we were able to make 5000+ calls to our rule base and see output for each case.
The downside (and I don't think you will get around this with any solution) is that you are testing the business logic with itself. You can generate all the test input you want, but you cannot produce the expected result for that input without running the very business logic you are testing. In other words, once you auto-generate input to the business logic, you cannot know the expected results independently without compromising the integrity of the test.
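To do the same thing locally without drools-server, a minimal sketch in Java would look like the following, assuming a Drools 6+ classpath KIE container and a kmodule.xml that exposes a session named "ksession-rules" (the class name and the session name are placeholders). It iterates over every combination of the three boolean inputs, fires the rules, and prints one (condition1, condition2, condition3, outErrorCodes) tuple per combination:

import java.util.ArrayList;
import java.util.List;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RuleTupleEnumerator {

    public static void main(String[] args) {
        // Assumes the rule package above is on the classpath and exposed by
        // kmodule.xml under the (placeholder) session name "ksession-rules".
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        boolean[] values = { false, true };

        for (boolean c1 : values) {
            for (boolean c2 : values) {
                for (boolean c3 : values) {
                    List<String> outErrorCodes = new ArrayList<String>();

                    KieSession session = container.newKieSession("ksession-rules");
                    session.setGlobal("outErrorCodes", outErrorCodes);
                    session.setGlobal("condition1", c1);
                    session.setGlobal("condition2", c2);
                    session.setGlobal("condition3", c3);
                    session.fireAllRules();
                    session.dispose();

                    // One (condition1, condition2, condition3, outErrorCodes) tuple per line.
                    System.out.printf("(%b, %b, %b) -> %s%n", c1, c2, c3, outErrorCodes);
                }
            }
        }
    }
}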

Related

IBM Watson Assistant: Regular expressions with context variables

I am gathering some context variables with slots, and they work just fine.
So I decided, in another node of the conversation, to check whether one of these context variables is a specific number:
I was thinking of enabling multi-responses and checking if, for example, $dni:1 (it is an integer, with a pattern of one integer only), or if it is 2 or 3:
But this is not working. I have been trying to solve it for some days with different approaches, but I really cannot find a way through it.
My guess is that a context variable holds a value that you can print, for example to respond with the user's name and things like that (which is indeed useful!), but that comparing values is not possible.
Any insights on this would be appreciated.
Watson Assistant uses a short-hand syntax but also supports the more complex expressions. What you could do is to edit the condition in the JSON editor. There, for the condition, use a function like matches() on the value of the context variable.
Note that it is not recommended to check for context variables in the slot conditions. You can use multi-responses. An alternative way is to put the check into the response itself. There, you can use predicates to generate the answer.
<? context.dni==1 ? 'Very well' : 'Your number is not 1' ?>
You can nest the evaluation to have three different answers. Another way is to build an array of responses and use dni as key.
Instead of matching against specific integers, you could consider using the Numbers system entity, which Watson Assistant supports in several languages. As a benefit, users could answer "the first one", "the 2nd option", etc., and the bot would still understand, and your logic could still route to the correct answer.

#karate How to pass a parameter to a feature file in a Gatling simulation class?

Let's consider a scenario: we have to run a performance test for a "create an account" API, which takes an auth token as a header/path parameter and input data such as the user account name. So for the above scenario we have two feature files,
to run the performance test for POST http://baseUrl/auth_param/create/input_data:
1. One feature file (e.g. generateAuth.feature), which will have the auth token.
2. A second feature file (createAccount.feature), which takes the auth token and the input data as parameters.
Here is my simulation class:
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val generateAuthTest = scenario("generateAuth").exec(karateFeature("classpath:path/generateAuth.feature"))
  val createAccountTest = scenario("test").exec(karateFeature("classpath:path/createAccount.feature"))

  setUp(
    createAccountTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)

  after {
    println("Simulation is finished!")
  }
}
Here, can I read the auth token from generateAuth.feature, which is an input for createAccount.feature, so that I can pass it as a parameter?
Please suggest how to pass parameters to createAccount.feature when calling it in the karateFeature method.
Let me put a requirement here:
let's say we have some feature files for CRUD operations on a particular kind of data. Here is how I would write a functional scenario:
I will create a new feature file to write the scenario
and just use the CRUD files to test a SINGLE flow.
Now if I go for performance test cases on the individual operations, I feel there are two ways:
1. Create four new performance test feature files (one for each CRUD method) and call the CRUD feature files from the respective test feature file. Finally, we just call the test feature files in the respective Gatling simulation classes. (In this case, I will end up creating more test feature files, as well as simulation classes for performance, which I want to avoid.)
2. Just call the CRUD feature files in the respective Gatling simulation classes and pass the required parameters to them. (In this case, we only need to create four simulation classes and run them on the basis of the operation, like create, read, delete and so on.)
Here I just wanted to know about the 2nd way of performance testing: is it achievable in Karate or not, and if yes, please let me know how.
Summary: I think it is achievable using a 3rd (extra) feature file for each individual use case, but I do not want to make an extra feature file for each case, so that I can avoid maintenance work and can take advantage of reusing the existing feature files from the functional tests in the performance tests.
Just use the normal Karate concepts such as karate-config.js
You can easily switch environments by setting the karate.env system property.
For example:
mvn test -DargLine="-Dkarate.env=e2e"
EDIT: After you edited your question, it is clear you have a SINGLE flow you want to test. Please use a SINGLE feature. I suggest you move generateAuth into the Background of the feature. Also refer to the docs on callSingle() for advanced options.
If you are expecting two feature files to magically share data, that is not possible, and it is not needed if you structure your tests correctly.
If you really, really need this, please create a Java singleton and access it from each feature. I totally don't recommend this, though.
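For reference, a minimal sketch of such a singleton could look like the following (the package and class names are hypothetical); a feature could reach it through Karate's Java interop, for example with def TokenHolder = Java.type('com.example.TokenHolder'):

// Hypothetical shared-state holder, e.g. for an auth token generated in one feature
// and read in another. Structuring the test as a single flow is still preferable.
package com.example;

public final class TokenHolder {

    private static volatile String authToken;

    private TokenHolder() {
    }

    public static void setToken(String token) {
        authToken = token;
    }

    public static String getToken() {
        return authToken;
    }
}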
EDIT: From Karate 0.9.0 onwards, you can call a single scenario within a feature if it has a tag:
classpath:animals/cats/create.feature#sometagname

CodeEffects Rule Engine: How to use the + operator in evaluation rules

We are doing a POC on business rules using the CodeEffects rule engine and trying to write evaluation rules using the rule editor. The question here is how to use the + operator between custom functions to evaluate a specific rule. For example, I would like to write a rule like the one below:
check if (somefunc(somevar1) + somefunc(somevar2) + somefunc(somevar3)) is greater than [1]
Please help with how to write such a rule in the editor.
You need to use the calculation option (the "Add calculation..." menu item) as the value of your condition. Keep in mind that in Code Effects each condition must begin with either a field or an in-rule method. Because of that, your rule needs to be changed as follows:
Check if Somefunc(Somevar1) is greater than
{ Somefunc(Somevar2) + Somefunc(Somevar3) - [1] }
Notice that the result of the evaluation is still the same; I just moved some rule elements around.

Best way to return values to calling program from Drools 6 rules?

My rules in a .drl file will return one or more String values to the calling program.
Right now the way I do that is via global variables, as below:
global java.util.List<String> statuses;
The calling program passes in an empty ArrayList, and within my rules I add the String values to the list. Finally, my calling program retrieves the statuses list, which might contain zero or many String items, as below:
session.getGlobal("statuses")
But the Drools user guide says that it is not recommended for rules to change the values of global variables.
Are global variables the best way to return values to the calling program? If not, what is the best way? I also have to deal with concurrency in my web application, so I'm looking for an approach to returning values to the calling program that is safe for concurrent use as well.
Thanks
There is nothing wrong with changing the value of a global variable in the code within the consequence ("right hand side") of a rule. What you describe is one of standard use cases for globals.
What the author of the Drools user guide (please add the exact reference!) meant is that changes to a global that is used in the condition ("left hand side") of a rule are not taken into account when (re-)evaluating those conditions. Therefore, that usage is to be avoided.
As for concurrency: Use a properly synchronized List object as a global.
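For illustration, a minimal sketch of that setup from the calling side, assuming Drools 6 with a classpath KIE container and a session named "ksession-rules" (the class and the session name are placeholders):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class StatusCollector {

    public List<String> collectStatuses(Object fact) {
        // Synchronized list, as suggested above, in case it is ever touched
        // from more than one thread; each call still uses its own instance.
        List<String> statuses = Collections.synchronizedList(new ArrayList<String>());

        KieSession session = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession("ksession-rules"); // placeholder session name
        try {
            session.setGlobal("statuses", statuses);
            session.insert(fact); // insert whatever facts your rules match on
            session.fireAllRules();
            // statuses now holds zero or more String values added by the rules.
        } finally {
            session.dispose();
        }
        return statuses;
    }
}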

Doing Integration Testing on Simulink Models

Say I have units 1, 2, 3, 4 (either as model references or subsystems) for which I have unit tests ready using the matlab.unittest.TestCase framework.
What would be the easiest way to write an integration test for the entire system?
I need some way to set Global_Inputx (x = 1, 2, 3) and verify Global_Outy (y = 1, 2) in the easiest possible way (maybe utilizing the unit tests)?
I can use MATLAB R2014a.
PS: I've already gone through this but it didn't help.
I think the question of integration testing in Simulink is a complex one that may involve formal methods like code and coverage analysis of the dynamic system under test, automatic test generation, etc. If you haven't already, you may want to check out the "Verification, Validation and Test" section of the MathWorks product lineup: http://www.mathworks.com/products/?s_tid=gn_ps.
However, to answer your specific question of how to set the global input and verify the global output in your test:
Depending on where the global input data that feeds the inport blocks resides (MATLAB base workspace, model workspace, etc.), you could set the external input of the model to that data. For example:
set_param(<model name>, 'ExternalInput', <name of the input data variable>)
This could be defined in your test class setup, test method setup or in the test depending on when the data is available and where defining it is appropriate. The data could also be passed in directly to the sim() command. Parameterized testing (http://www.mathworks.com/help/matlab/matlab_prog/create-basic-parameterized-test.html) is an option to consider if you want to test the system with different sets of inputs. The external input value becomes the parameter in this context.
If you have your model set up for output logging, then once the simulation is done, you would get the logged outputs, which you could then compare against a baseline.
Does that help, or am I way off base here? If you can add more details, I can try again.