Specs2 + ScalaCheck test failing due to too many discarded cases

In a ScalaCheck + Specs2 based test, I need two dates whose distance (in days) is at most Int.MaxValue.
At the moment I am using the ScalaCheck-provided arbitraries to generate the two dates: since the date generator is backed by the Long generator, this leads to too many discarded cases, making my test fail.
What is the right approach to solving the problem:
Should I modify my generators, or
Should I modify the test parameters?

The best approach is probably to create your own generators for your domain.
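For example, instead of generating two independent dates and discarding pairs that are too far apart, the second date can be derived from the first so that no case is ever discarded. A minimal sketch, assuming java.time dates; the object name, generator name and date range are illustrative, not from the original answer:
import java.time.LocalDate
import org.scalacheck.Gen

object DateGenerators {
  // Pick a base date, then derive the second date at most maxDistanceDays away,
  // so every generated pair satisfies the constraint by construction.
  def boundedDatePair(maxDistanceDays: Long): Gen[(LocalDate, LocalDate)] =
    for {
      epochDay <- Gen.choose(LocalDate.of(1970, 1, 1).toEpochDay, LocalDate.of(2100, 1, 1).toEpochDay)
      offset   <- Gen.choose(-maxDistanceDays, maxDistanceDays)
    } yield (LocalDate.ofEpochDay(epochDay), LocalDate.ofEpochDay(epochDay + offset))
}
Because the constraint is built into the generator, there is no need to loosen the test parameters (such as the maximum discard ratio) to compensate for filtering.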

How to get nunit filters at runtime?

Does anybody know how to get list of categories (provided with 'where' filter to nunit-console) at runtime?
Depending on this, I need to differently initialize the test assembly.
Is there something static like TestExecutionContext that may contain such information?
The engine doesn't pass information on to the framework about "why" it's running a particular test, i.e. whether it's running all tests or whether a test was selected by name or category. That's deliberately kept as something the test doesn't know about, the underlying philosophy being that tests should just run based on the data provided to them.
On some platforms it's possible to get the command line that ran the test. With that information you could decode the various options and draw some conclusions, but it would probably be easier to restructure the tests so they don't need this information.
As a secondary reason, it would also be somewhat complicated to supply the info you want and to use it. A test may have multiple categories. Imagine a test selected because two categories matched, for example!
Is it possible that what you really want to do is to pass some parameters to your tests? There is a facility for doing that of course.
I think this is a bit of an XY problem. Depending on what you are actually trying to accomplish, the best approach is likely to be different. Can you edit to tell us what you are trying to do?
UPDATE:
Based on your comment, I gather that some of your initialization is both time-consuming and not needed unless certain tests are run.
Two approaches to this (or combine them):
1. Do less work in all your initialization (i.e. TestCase, TestCaseSource, SetUpFixture). It's generally best not to create your classes under test or initialize databases there. Instead, simply provide strings, ints, etc., which allow the actual test to do the work if and only if it is run.
2. Use a SetUpFixture in some namespace containing all the tests that require that particular initialization. If you don't run any tests from that namespace, the initialization won't be done.
Of course both of the above may entail a large refactoring of your tests, but the legacy app won't have to be changed.

Optaplanner starting from existing result

My team plans to apply OptaPlanner to an existing system.
The existing system has its own rule sets:
it tries its rule sets one by one and picks the best result.
We want to use that result as the starting point
and then let OptaPlanner continue solving the problem with its metaheuristics.
We have reviewed the OptaPlanner manual, especially the repeated planning section,
but we can't find a way to do this.
Is there a way to feed the existing system's result into the solver?
Your cooperation would be highly appreciated.
Best regards.
For OptaPlanner, it makes no difference where the input solution comes from. Consider the following code:
MyPlanningSolution solution = readSolution();
Solver<MyPlanningSolution> solver = SolverFactory.create(...)
.buildSolver();
solver.solve(solution);
Notice how solution comes from a custom method, readSolution(). Whether that method generates the initial solution randomly, reads it from a file, from a database etc., that does not matter to the solver. It also does not matter if it is initialized or not - construction heuristic, if configured, will just skip the initialized entities.
That means you have absolute freedom in how you create your initial solution and, to the solver, they all look the same.

Is it possible to have different rules for main and test code?

Is it possible to set different rules for main versus test code in Codacy? I know I can eliminate inspection of test code. But I don't want to do that. However, there are many rules, especially regarding duplication, that just don't make sense for tests.
My main language is Scala.
No. The best you can do is ignore folders (for instance the test folder).
We typically relax the rules for test code, but it makes sense to avoid duplication in test code as well. Your (production) code will evolve over time and will eventually force you to change the tests. Why change 100 places instead of a single method that is shared among several tests?
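As an illustration of that last point, duplicated setup can usually be pulled into a single shared helper. A minimal sketch in Specs2, where Invoice and defaultInvoice are hypothetical names standing in for the real domain:
import org.specs2.mutable.Specification

// hypothetical case class standing in for the real domain type
case class Invoice(customer: String, amount: Int, paid: Boolean)

class InvoiceSpec extends Specification {
  // single shared builder used by many examples; when the Invoice constructor
  // evolves, only this one method needs to change
  def defaultInvoice(amount: Int = 100): Invoice =
    Invoice(customer = "acme", amount = amount, paid = false)

  "an invoice" should {
    "start unpaid" in { defaultInvoice().paid must beFalse }
    "keep its amount" in { defaultInvoice(amount = 42).amount must beEqualTo(42) }
  }
}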

API testing: Can I reduce my API functional testing effort by increasing my unit tests? Can I replace a functional test for a unit test?

I am trying to optimize my API (both RESTful and SOAP services) testing effort. One way to do so, I think, is by eliminating redundant functional tests. I call them redundant because the same tests might be executed at the unit-testing level.
I understand that there's developer bias in unit testing, so independent functional testing is crucial. I am not trying to replace functional tests with extensive unit testing, but I am trying to optimize my testing effort by eliminating some of the functional tests while covering them at the unit-test level.
How can I achieve this? What’s the correlation between unit tests and functional tests?
Let's take the example of a customerAccount/add service. Say I have 6 tests: 2 positive (happy-path) tests, 2 'exceeding boundary value' tests, 1 'customer not found' test, and 1 'invalid customer' test. Can I eliminate one of the 2 positive tests and one of the 2 'exceeding boundary value' tests, provided those 2 are tested at the unit-testing level? So now 2 tests are covered at the unit-test level and 4 at the functional-testing level.
Developers may not be testing services against the endpoints; they may test classes and methods instead. But in the above example, we are still testing the endpoints, so that's covered.
What do you think of this approach?
I agree that redundant tests should be avoided. But what makes tests redundant? My view is that a test is redundant if all the potential bugs it intends to detect are also intended to be found by the remaining tests.
What distinguishes unit-testing, interaction-testing (aka integration-testing), and subsystem-testing (aka component-testing or functional-testing) is the respective test goal, that is, which bugs the respective test intends to catch.
Unit-testing is about catching those bugs that you can find by testing small (even minimally small) isolated pieces of software. Since unit-testing is the earliest testing step and the one that can go deepest into the code, a rule of thumb is that if a bug could already be found by unit-testing, you should really find it with unit-testing rather than trying to catch it in higher-level tests. This seems to be in line with your approach of "eliminating the redundant functional tests" where "the same tests might be executed at the unit-testing level". For example, if you aim to find potential bugs caused by an arithmetic overflow within some code, this should be done with unit-testing. Trying to find the same problem at the level of integration-testing or subsystem-testing would be the wrong approach.
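For instance, the overflow case just mentioned naturally lives at the unit level. A minimal Specs2 sketch, where addBalances is a hypothetical function under test, not something from the question:
import org.specs2.mutable.Specification

class AccountBalanceSpec extends Specification {
  // hypothetical function under test: sums two amounts given in cents,
  // widening to Long so values near Int.MaxValue cannot overflow
  def addBalances(a: Int, b: Int): Long = a.toLong + b.toLong

  "addBalances" should {
    "not overflow for values near Int.MaxValue" in {
      addBalances(Int.MaxValue, 1) must beEqualTo(Int.MaxValue.toLong + 1L)
    }
  }
}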
You should be aware, however, that the goal of the subsystem-test (= functional-test, if you and I share the same terminology here) may have been different from that of the similar-looking test at the unit-test level. At the subsystem level you could aim at catching integration bugs, for example when wrong versions of classes are combined (in which case the respective unit-tests of each of the classes might all pass). Subsystem-tests could also intend to find build-system bugs, for example if generated classes in the environment of your manually written code are not generated as expected. And so on.
Therefore, before eliminating redundant tests, be sure you have understood the test goals of those tests, so you can be certain that they are truly redundant.

Why is ScalaCheck discarding so many generated values in my specification?

I have written a ScalaCheck test case within Specs2. The test case gives up because too many tests were discarded. However, it doesn't tell me why they were discarded. How can I find out why?
Set a breakpoint on the org.scalacheck.Gen.fail method and see what is calling it.
Incidentally, in my case the problem was twofold:
1. I had set maxDiscarded to a value (1) that was too small, because I was being too optimistic - I didn't realise that ScalaCheck would start at a collection of size 0 by default even if I asked for a non-empty collection (I don't know why it does this).
2. I was generating collections of size 1 and up, even though, as I later realised, they should have been of size 2 and up for what I was trying to test - which was causing further discards in later generators based on that generator.
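The size issue can be addressed in the generator itself rather than by filtering. A hedged sketch with illustrative names, contrasting a discard-prone filter with a generator that builds valid sizes directly:
import org.scalacheck.Gen

object CollectionGenerators {
  // Discard-prone: generates lists of size 1 and up, then rejects anything
  // shorter than 2 elements; each rejected value counts as a discarded case.
  val filtered: Gen[List[Int]] =
    Gen.nonEmptyListOf(Gen.posNum[Int]).suchThat(_.size >= 2)

  // Discard-free: choose the size first (2 and up), then generate exactly that
  // many elements, so every generated value is usable.
  val constructive: Gen[List[Int]] =
    for {
      n  <- Gen.choose(2, 20)
      xs <- Gen.listOfN(n, Gen.posNum[Int])
    } yield xs
}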