Is there any test framework for Drools mutation testing?

I'm looking for a test framework that will perform mutation testing for code written in Drools, i.e. it should check whether there are tests that fail when one of the rules is removed from the KnowledgeBase. This is needed for confidence that every rule is covered by tests.

Try droolsassert. You can specify a 'snapshot' of the rules you expect to be triggered for a predefined scenario.
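A minimal sketch of that approach, using the @DroolsSession/@TestRules annotations and the DroolsAssert JUnit rule from the droolsassert project (the DRL file, rule name, and fact class below are hypothetical):

import org.droolsassert.DroolsAssert;
import org.droolsassert.DroolsSession;
import org.droolsassert.TestRules;
import org.junit.Rule;
import org.junit.Test;

@DroolsSession(resources = "classpath:/discount.drl") // hypothetical rules file
public class DiscountRulesTest {

    @Rule
    public DroolsAssert drools = new DroolsAssert();

    // Hypothetical fact class matched by the rules.
    public static class Customer {
        private final String tier;
        public Customer(String tier) { this.tier = tier; }
        public String getTier() { return tier; }
    }

    @Test
    @TestRules(expected = "apply gold discount") // the 'snapshot' of rules expected to fire
    public void goldCustomerGetsDiscount() {
        drools.insertAndFire(new Customer("gold"));
        // If 'apply gold discount' were removed from the knowledge base,
        // the expected-rules snapshot would no longer match and this test
        // would fail - the per-rule coverage signal you are after.
    }
}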

Related

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any one time?
Or does the testing framework execute only a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does the testing framework run all tests and test suites completely independently of each other at the same time, completely asynchronously?
This is important because it has a direct impact on the way we are or aren't able to structure our tests, especially when it comes to the set-up and tear-down of tests, and the way we assert that functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration to your CI server, your tests might pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once before all the tests in a group; however, this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
Keeping tests self-contained has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs that each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally setting up dependencies between tests that might invalidate their results (e.g. perhaps test2 only passes if it happens to run after test1; randomizing the ordering will catch this).
TLDR
Many tests run in parallel. Keep tests self-contained, with no dependence on the order of tests. Extract setup logic into functions and pass them to setUp(). If you really need the performance, you can try setUpAll(), but avoid it if possible.
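To illustrate the self-contained style, here is a minimal sketch using package:test (the Counter class is a hypothetical stand-in for your code under test):

import 'package:test/test.dart';

// Hypothetical stand-in for the code under test, defined here so the
// sketch is self-contained.
class Counter {
  int value = 0;
  void increment() => value++;
}

void main() {
  late Counter counter;

  // setUp runs before every test, so each test gets a fresh Counter and
  // never depends on state left behind by another test.
  setUp(() {
    counter = Counter();
  });

  test('starts at zero', () {
    expect(counter.value, 0);
  });

  test('increments independently of any other test', () {
    counter.increment();
    expect(counter.value, 1);
  });
}

Tests written this way stay correct whether you run them with the default parallelism, shard them across CI jobs with --total-shards/--shard-index, or shuffle them with --test-randomize-ordering-seed.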

Test methods are not executed when associating the same test method with multiple test cases in VSTest running xUnit tests

We have an ASP.NET application and UI tests written with xUnit. Test plans are in VSTS, and in some cases the same xUnit test method is associated with multiple test cases.
An Azure build pipeline executes these tests using VSTest. The problem is that when multiple test cases are associated with a single method, it seems only one of them is executed. For example, with several test cases associated with the same method, only one is executed.
We tried both the 'Test assemblies' and 'Test plan' options in VSTest, but the results are the same.
As per the link below, it is not possible in xUnit to run the same test method multiple times in the same test session.
https://developercommunity.visualstudio.com/content/problem/995269/executing-multiple-test-cases-from-testplan-which.html?childToView=995554#comment-995554
Some solutions I can think of are:
Create dummy test methods for all test cases and maintain a one-to-one test-method-to-test-case mapping, where one method has the testing logic while the other methods just assert true.
Create multiple test methods, where only one method contains the implementation; the other methods just call the test method that contains the implementation.
Please suggest if there is any better solution to the problem.
Thanks in advance!
As we know:
By default, each test class is a unique test collection. Tests within the same test class will not run in parallel against each other.
So, the response from the Azure DevOps Developer Community forum and xUnit is that xUnit does not allow running one test method multiple times in the same session.
I personally think that the two workarounds you proposed are correct. You can use either of the two methods to solve this problem. You are already on the right way.
Hope this helps.
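For reference, the second workaround (one real implementation plus a thin wrapper per associated test case) might look like the sketch below; the class name and test-case numbers are hypothetical:

using Xunit;

public class CheckoutTests
{
    // The real test logic lives in a single place.
    private static void CheckoutSucceedsImpl()
    {
        // arrange / act / assert for the actual scenario
        Assert.True(true); // placeholder assertion
    }

    // One thin wrapper per associated test case, so every test case in the
    // test plan maps to its own test method and VSTest can run all of them.
    [Fact]
    public void CheckoutSucceeds_TestCase1001() => CheckoutSucceedsImpl();

    [Fact]
    public void CheckoutSucceeds_TestCase1002() => CheckoutSucceedsImpl();
}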

@BeforeScenario / @AfterScenario for a specific Scenario in a test story by using Given

I am a newbie to the JBehave and Hive frameworks.
While exploring Q&A threads, I happened to see the following phrase in one of the accepted answers to a question about writing a JBehave story:
'That's what I've seen - and the data object should be set up/cleared with a @BeforeScenario/@AfterScenario method.'
At present I am in the process of writing test stories; I have not yet got into Steps.
From the JBehave website, I took the following sample test story. My question concerns the phrase I quoted above from Stack Overflow.
A story is a collection of scenarios
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development
Lifecycle:
Before:
Given a step that is executed before each scenario
After:
Outcome: ANY
Given a step that is executed after each scenario regardless of outcome
Outcome: SUCCESS
Given a step that is executed after each successful scenario
Outcome: FAILURE
Given a step that is executed after each failed scenario
Scenario: A scenario is a collection of executable steps of different type
Given step represents a precondition to an event
When step represents the occurrence of the event
Then step represents the outcome of the event
Scenario: Another scenario exploring different combination of events
Given a [precondition]
When a negative event occurs
Then the outcome should [be-captured]
Examples:
|precondition|be-captured|
|abc|be captured |
|xyz|not be captured|
I can see pretty much the same thing as @BeforeScenario/@AfterScenario here.
I do have a question here: can I write Given steps that run before and after one specific Scenario: in a test story?
And is that Scenario:'s output visible to consecutive Scenario:'s in the test story?
There are a few differences between the @BeforeScenario/@AfterScenario annotations and Lifecycle: Before/After steps:
A Java method annotated with @BeforeScenario or @AfterScenario is called for all executed scenarios in all stories, while a Lifecycle Before or After step is executed only for the scenarios of that one concrete story.
An @AfterScenario method is always executed, regardless of the result of the scenario. Lifecycle After steps can be called always (using the Outcome: ANY clause), only on failure (using the Outcome: FAILURE clause), or only on success (using the Outcome: SUCCESS clause).
You cannot pass any parameters from a scenario (story) to @BeforeScenario and @AfterScenario Java methods, while Lifecycle steps can have parameters, like any other ordinary step, for example:
Lifecycle:
Before:
Given a step that is executed before each scenario with some parameter = 2
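For comparison, the annotation-based equivalents are plain Java methods in a steps class, roughly like this sketch (the method bodies are hypothetical):

import org.jbehave.core.annotations.AfterScenario;
import org.jbehave.core.annotations.AfterScenario.Outcome;
import org.jbehave.core.annotations.BeforeScenario;
import org.jbehave.core.annotations.Given;

public class LifecycleSteps {

    // Called before every scenario in every story run with these steps.
    @BeforeScenario
    public void resetState() {
        // e.g. set up / clear the shared data object
    }

    // Called after a scenario only when it failed (Outcome.ANY and
    // Outcome.SUCCESS are the other options).
    @AfterScenario(uponOutcome = Outcome.FAILURE)
    public void captureDiagnostics() {
        // e.g. dump state of the failed scenario
    }

    // Backs the parameterised Lifecycle step from the story text above;
    // 'value' is bound from the story (2 in the example).
    @Given("a step that is executed before each scenario with some parameter = $value")
    public void beforeEachScenarioWith(int value) {
    }
}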
JBehave is a BDD (Behaviour-Driven Development) framework; it builds on the ideas of Test-Driven Development (TDD), and the executable bindings injected into the host language are called Steps.
Answering the question: in a test story, putting a Scenario: between two steps clears the state, because a new scenario begins. Within a scenario, the Given clause's data is applied as-is and its values are carried forward through the subsequent steps. For a new scenario, only the Lifecycle prerequisites that have been set are applied before and after it.

Dynamic test cases

We are using NUnit to run our integration tests. One of the tests should always do the same thing but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases which can be added, removed, or disabled (not removed) by our QA engineers. The QA people do not have the ability to add [TestCase] attributes to our C# code. All they can do is add them to the storage.
My goal is to read test cases from the storage into memory, run the test with all enabled test cases, and report if a test case fails. I cannot use a "foreach" statement inside a single test, because if test case #1 fails, the rest of the test cases will not run at all. We already have a build server (CruiseControl.NET) where generated NUnit reports are shown, so I would like to continue using NUnit.
Could you point me to a way to achieve my goal?
Thank you.
You can use [TestCaseSource("PropertyName")], which specifies a property (or method, etc.) to load test data from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)
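A minimal sketch of that approach; the storage reader and the system under test below are hypothetical stand-ins for your own code:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical stand-in for the QA engineers' external storage.
public class CaseRow
{
    public string Id;
    public string Input;
    public string Expected;
}

public static class CaseStorage
{
    public static IEnumerable<CaseRow> LoadEnabledCases()
    {
        // In reality this would query the storage and skip disabled cases.
        yield return new CaseRow { Id = "1", Input = "abc", Expected = "ABC" };
        yield return new CaseRow { Id = "2", Input = "xyz", Expected = "XYZ" };
    }
}

[TestFixture]
public class StoredCaseTests
{
    // NUnit evaluates this member at test discovery time; each yielded
    // TestCaseData becomes an independent test, so one failing case does
    // not stop the others and each is reported separately.
    public static IEnumerable<TestCaseData> StorageCases()
    {
        foreach (CaseRow row in CaseStorage.LoadEnabledCases())
            yield return new TestCaseData(row.Input)
                .Returns(row.Expected)
                .SetName("StoredCase_" + row.Id);
    }

    [TestCaseSource("StorageCases")]
    public string RunStoredCase(string input)
    {
        return input.ToUpperInvariant(); // stand-in for the real system under test
    }
}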

How do I set up a multi-stage test pipeline in sbt?

Specifically, this is for a Scalatra project, but the question probably applies to most projects.
For example, I typically want to run:
unit tests
code quality checks (coverage, duplication, complexity, jsLint!)
integration tests (not too many!)
acceptance tests (usually a "pre-checkin" subset)
regression tests (basically the same as acceptance tests, but a bigger set)
performance tests
I want to run different subsets of these by context - i.e. after a simple code change I might just run the first three; before checking in I might want to run a bigger set, and the Continuous Integration server might have a "fast" and a "slow" build that have even bigger sets.
The basic sbt docs seem to assume a single "test" target - is there a recommended way to implement multiple test phases like this?
You may want to look at this blog post about integrating testing with sbt and Hudson:
http://henkelmann.eu/2010/11/14/sbt_hudson_with_test_integration
Then, to add your own actions you can use this page:
http://code.google.com/p/simple-build-tool/wiki/CustomActions
Basically, though, you will probably want to add a new action for each of your testing steps, in order to run exactly the subset of tests each context calls for.
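The links above describe the old simple-build-tool; in current sbt the equivalent technique is custom test configurations, roughly like the sketch below (the configuration names and source directories are assumptions):

// build.sbt - separate configurations for integration and acceptance tests
lazy val IntTest = config("it") extend Test
lazy val AccTest = config("acc") extend Test

lazy val root = (project in file("."))
  .configs(IntTest, AccTest)
  .settings(
    inConfig(IntTest)(Defaults.testSettings),
    inConfig(AccTest)(Defaults.testSettings),
    // keep each suite in its own source tree
    IntTest / scalaSource := baseDirectory.value / "src" / "it" / "scala",
    AccTest / scalaSource := baseDirectory.value / "src" / "acc" / "scala"
  )

With that in place, sbt test runs only the unit tests, while sbt IntTest/test and sbt AccTest/test run the other suites; a CI job can chain whichever subset a given context needs.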