API testing: Can I reduce my API functional testing effort by increasing my unit tests? Can I replace a functional test with a unit test?

I am trying to optimize my API testing effort (for both RESTful and SOAP services). One way to do so, I think, is to eliminate redundant functional tests. I call them redundant because the same checks might already be executed at the unit-testing level.
I understand that there is a developer bias in unit testing, so independent functional testing is crucial. I am not trying to replace functional tests with extensive unit testing; I am trying to optimize my testing effort by eliminating some functional tests and covering them at the unit-test level instead.
How can I achieve this? What is the relationship between unit tests and functional tests?
Let's take the example of a customerAccount/add service. Say I have 6 tests: 2 positive (happy-path) tests, 2 exceeding-boundary-value tests, 1 customer-not-found test, and 1 invalid-customer test. Can I eliminate one of the 2 positive tests and one of the 2 exceeding-boundary-value tests, provided those 2 are covered at the unit-testing level? Then 2 checks are covered at the unit-test level and 4 remain at the functional-testing level.
Developers may not be testing services against the endpoints; they may test classes and methods instead. But in the above example we are still testing the endpoints, so that concern is covered.
What do you think of this approach?

I agree that redundant tests should be avoided. But what makes tests redundant? My view is that a test is redundant if all the potential bugs it intends to detect would also be caught by the remaining tests.
What distinguishes unit-testing, interaction-testing (aka integration-testing), and subsystem-testing (aka component-testing or functional-testing) is the respective test goal, that is, which bugs the respective kind of test is intended to catch.
Unit-testing is about catching those bugs that you can find by testing small (even minimally small) isolated pieces of software. Since unit-testing is the earliest testing step and also the one that can go deepest into the code, a rule of thumb is that if a bug could already be found by unit-testing, you should really find it with unit-testing rather than trying to catch it in higher-level tests. This seems to be in line with your approach of "eliminating the redundant functional tests" where "the same tests might be executed at unit testing level". For example, if you aim to find potential bugs caused by an arithmetic overflow within some code, this should be done with unit-testing; trying to find the same problem at the level of integration-testing or subsystem-testing would be the wrong approach.
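Likewise, the exceeding-boundary-value checks from the customerAccount/add example belong at this level, directly against the method rather than the endpoint. A minimal sketch using Test::More; the module My::CustomerAccount, its add_to_account function and the limit of 10_000 are assumptions made for illustration, not part of the question:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::CustomerAccount qw(add_to_account);   # hypothetical module under test

    # Probe the boundary directly against the method, not through the HTTP endpoint.
    ok(  add_to_account( 'CUST-1', 10_000 ), 'amount at the assumed limit is accepted' );
    ok( !add_to_account( 'CUST-1', 10_001 ), 'amount just above the limit is rejected' );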
You should be aware, however, that the goal of the subsystem-test (= functional-test, if you and I are using the same terminology here) may have been different from that of the similar-looking test at unit-test level. At subsystem level you could aim at catching integration bugs, for example when wrong versions of classes are combined (in which case the respective unit-tests of each class might all pass). Subsystem-tests could also be intended to find build-system bugs, for example when generated classes that your manually written code relies on are not generated as expected. And so on.
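A functional test with such a goal would then deliberately go through the deployed endpoint rather than the class. A sketch of what that could look like, with Test::More and HTTP::Tiny; the URL, the payload shape and the response field are assumptions:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use HTTP::Tiny;
    use JSON::PP qw(encode_json decode_json);

    # Exercise wiring, serialisation and the deployed class versions together.
    my $res = HTTP::Tiny->new->post(
        'http://localhost:8080/customerAccount/add',   # assumed test deployment
        {
            headers => { 'Content-Type' => 'application/json' },
            content => encode_json( { customerId => 'CUST-1', amount => 100 } ),
        }
    );
    ok( $res->{success}, 'endpoint accepts a well-formed request' );
    is( decode_json( $res->{content} )->{status}, 'OK', 'service reports the add succeeded' );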
Therefore, before eliminating redundant tests, be sure you have understood the goals of those tests, so you can be certain they are truly redundant.

Related

Is it possible to have different rules for main and test code?

Is it possible to set different rules for main versus test code in Codacy? I know I can exclude test code from inspection entirely, but I don't want to do that. However, there are many rules, especially regarding duplication, that just don't make sense for tests.
My main language is Scala.
No. The best you can do is ignore folders (for instance the test folder).
We typically relax the rules for test code, but it makes sense to avoid duplication in test code as well. Your (real) code will evolve over time and will eventually force you to change tests. Why change 100 places instead of a single method that is shared among several tests?
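The idea is language-agnostic (the question itself is about Scala); as a rough illustration in Perl's Test::More, with a hypothetical My::Orders module, a shared make_order builder keeps the fixture in one place:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::Orders qw(order_total apply_discount);   # hypothetical code under test

    # One shared builder instead of repeating the fixture in every test:
    # when the order shape changes, only this sub needs updating.
    sub make_order {
        my (%override) = @_;
        return {
            customer => 'CUST-1',
            items    => [ { sku => 'A', qty => 2, price => 5 } ],
            %override,
        };
    }

    is( order_total( make_order() ), 10, 'total sums quantity times price' );
    is( apply_discount( make_order( customer => 'VIP-1' ) ), 9, 'VIP orders get the assumed discount' );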

Should I add white/black box redundant unit tests?

I've written black-box unit tests for my project.
After a refactoring, I've adopted a strategy pattern in my code.
This code is covered by the black-box unit test, even after the refactoring.
However, I was wondering: should I add white-box unit tests, for example checking that each strategy is doing what it is supposed to do?
Or is this redundant, because I already have the black-box tests that check the final outcome?
One of the primary goals of testing in general and also for unit-testing is to find bugs (see Myers, Badgett, Sandler: The Art of Software Testing, or, Beizer: Software Testing Techniques, but also many others). In your project you may have a more relaxed position on this, but there are many software projects where it would have serious consequences if implementation level bugs escape to later development phases or even to the field. Some say, your goal should rather be to increase confidence in your code - and this is also true, but confidence can only be a consequence of doing testing right. If you don't test to find bugs, then I will simply not have confidence in your code after you have finished testing.
When finding bugs is a primary goal of unit-testing, then attempts to keep unit-test suites completely independent of implementation details is likely to result in inefficient test suites - that is, test suites that are not suited to find all bugs that could be found. Different implementations have different potential bugs. If you don't use unit-testing for finding these bugs, then any other test level (integration, subsystem, system) is definitely less suited for finding them systematically.
Thus, your statement that you initially tested your code using black-box tests already leaves me doubting whether the test suite was fully effective in the first place. And, consequently, yes, I would add specific tests for each of the strategies.
However, keep in mind that the goal of having an effective test suite competes with another goal, namely having a maintenance-friendly test suite. I see finding bugs as the primary goal and test suite maintainability as a secondary one. Still, even when going into white-box testing, try to keep the maintenance effort low: only use a white-box test for finding bugs that a black-box test would not also find, and try to hide the use of implementation details behind test helper functions.
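As a sketch of both points, here is one direct test per strategy, with the invocation detail hidden behind a helper; the strategy classes and the expected prices are assumptions, not part of the question:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::Pricing::FlatRate;   # hypothetical strategy classes
    use My::Pricing::Tiered;

    # The helper hides how a strategy is obtained and invoked; if that
    # implementation detail changes, only this sub has to follow.
    sub price_with {
        my ( $strategy_class, $amount ) = @_;
        return $strategy_class->new->price($amount);
    }

    is( price_with( 'My::Pricing::FlatRate', 100 ), 100, 'flat-rate strategy charges the plain amount' );
    is( price_with( 'My::Pricing::Tiered',   100 ),  90, 'tiered strategy applies the assumed first-tier discount' );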

Should you make tests for constants?

I'd like to know if there is any value in providing a test to say that a constant equals x in my test suite.
A couple of benefits I see to doing it:
You know when this value has changed, because the developer changing it will get a failed test
If the developer updating the constant updates the test with the new value, the test will confirm that it was properly updated
Would that be beneficial or just a nuisance?
IMHO, there is no benefit in it.
If a developer sees a failed test because of a changed constant, they will simply update the same value in the test. So it's not a benefit, it's duplication.
You should have tests for the code that uses the constant; if not unit tests, then integration tests (e.g. for the address of the mail server). Those tests should fail if something is wrong with the constant.
The only test for a constant I can imagine is when the constant is some complex object; then maybe you can test whether all required constraints between its properties hold. But if it's just, say, a number or a string, then IMHO it's a waste of time: it adds test maintenance cost and gives you absolutely no additional safety.
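To illustrate testing the code that uses the constant rather than the constant itself, a small sketch; My::Customer, valid_name and MAX_NAME_LENGTH are hypothetical:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use My::Customer qw(valid_name MAX_NAME_LENGTH);   # hypothetical module and constant

    # Exercise the behaviour that depends on the constant rather than asserting
    # its literal value: these tests survive tuning the limit, but fail if the
    # code that uses it is broken.
    ok(  valid_name( 'x' x MAX_NAME_LENGTH() ),         'name at the limit is accepted' );
    ok( !valid_name( 'x' x ( MAX_NAME_LENGTH() + 1 ) ), 'name just over the limit is rejected' );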

Can Test::Class tests be run in parallel? (or how to factor out superclass tests)

In all the tutorials I've read for Test::Class, there seems to be one runner script that loads all of the classes. And I think from the perspective of Test::Harness this is just one giant test. I don't think it can parallelize the tests inside the runner.
My underlying (X) problem is that I am trying to factor out superclass behaviors when testing subclasses. Each subclass should have its own subclass test (which can be parallelized) but should also exercise behaviors inherited from the superclass. How does one do that?
Edit: I found these two posts from 2007 that seem to imply that what I'm asking for is incompatible/not possible. Any update since then?
http://www.hexten.net/pipermail/tapx-dev/2007-October/001756.html (speculation about Test::Class supporting parallelism)
http://perlbuzz.com/2007/08/organizing-tests-with-testclass.html (implying that Test::Class and Test::Harness are ideologically exclusive)
Test::Class doesn't support parallelisation on its own. Probably the easiest solution would be to have separate .t runners for each of your test classes (or for logical groups of test classes), and run using e.g. prove -j9.
If you really want to run all of the tests in parallel you could write a simple script to auto-generate a .t file for each test class. You'd lose the performance benefit of running multiple test classes within a single perl interpreter, but the parallelisation might compensate for the additional startup overhead. And I'd argue that no matter how much Test::Class tries to provide test isolation, it's impossible to guarantee that in Perl. Once you start taking advantage of modifying the symbol table for mocking purposes etc, your tests will start to interfere with each other unless you always get the cleanup right. Running each test in a separate perl interpreter is a better way to provide guaranteed isolation.
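Each per-class runner can be as small as this; the file layout and the My::Test::CustomerAccount class are assumptions:

    # t/customer_account.t -- one runner per Test::Class subclass
    use strict;
    use warnings;
    use lib 't/lib';
    use My::Test::CustomerAccount;

    My::Test::CustomerAccount->runtests;

The harness then parallelises across the .t files, e.g. with prove -j9 t/.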
To run Test::Class tests in parallel, I have used the following mechanism; I hope it helps you.
I used the Parallel::ForkManager module to invoke the tests, and parameterized the TEST_METHOD environment variable so that only the required test methods run in each forked child process.
This provides isolation between the tests, because each test is invoked independently, and the parent process waits until all the child processes have completed.
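Roughly, that mechanism could look like the following sketch; the worker count, the runner file name and the method names are assumptions:

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(4);      # at most 4 test processes at a time

    # Hypothetical Test::Class test methods to distribute across child processes.
    my @methods = qw(test_add_customer test_boundary_values test_customer_not_found);

    for my $method (@methods) {
        $pm->start and next;                     # parent: fork a child, then continue the loop
        local $ENV{TEST_METHOD} = $method;       # child: Test::Class runs only matching methods
        my $failed = system( $^X, 't/all_classes.t' ) == 0 ? 0 : 1;
        $pm->finish($failed);                    # child exits, reporting pass/fail
    }

    $pm->wait_all_children;                      # parent blocks until every child is done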