Should I add white/black box redundant Unit Tests? - swift

I've written black-box unit tests for my project.
After a refactoring, I've adopted a strategy pattern in my code.
This code is covered by the black-box unit test, even after the refactoring.
However, I was wondering: should I add white-box unit tests, for example, checking that each strategy is doing what it is supposed to?
Or is this redundant because I already have the black-box tests that are checking the final outcome?

One of the primary goals of testing in general, and of unit-testing in particular, is to find bugs (see Myers, Badgett, Sandler: The Art of Software Testing, or Beizer: Software Testing Techniques, among many others). In your project you may have a more relaxed position on this, but there are many software projects where it would have serious consequences if implementation-level bugs escaped to later development phases or even to the field. Some say your goal should rather be to increase confidence in your code - and this is also true, but confidence can only be a consequence of doing testing right. If you don't test to find bugs, then I will simply not have confidence in your code after you have finished testing.
When finding bugs is a primary goal of unit-testing, then attempts to keep unit-test suites completely independent of implementation details are likely to result in inefficient test suites - that is, test suites that are not suited to find all the bugs that could be found. Different implementations have different potential bugs. If you don't use unit-testing to find these bugs, then any other test level (integration, subsystem, system) is definitely less suited to finding them systematically.
Thus, your statement that you have tested your code initially using black box tests already leaves me with a doubt that the test suite was fully effective in the first place. And, consequently, yes, I would add specific tests for each of the strategies.
However, keep in mind that the goal of having an effective test suite is in competition with another goal, namely having a maintenance-friendly test suite. But I see finding bugs as the primary goal and test suite maintainability as a secondary goal. Still, even when going into white-box testing, try to keep the maintenance effort low: only use a white-box test for finding bugs that a black-box test would not also find, and try hiding the use of implementation details behind test helper functions.
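To make the distinction concrete, here is a minimal Swift/XCTest sketch under assumed names (DiscountStrategy, Checkout and the test cases are invented for illustration, not taken from the question): the black-box test only checks the final outcome of the refactored code, while the white-box tests exercise each strategy directly, and a small helper hides which concrete strategy type is constructed.

```swift
import XCTest

// Hypothetical strategy hierarchy adopted during the refactoring.
protocol DiscountStrategy {
    func discount(for amount: Double) -> Double
}

struct NoDiscount: DiscountStrategy {
    func discount(for amount: Double) -> Double { 0 }
}

struct PercentageDiscount: DiscountStrategy {
    let percent: Double
    func discount(for amount: Double) -> Double { amount * percent / 100 }
}

// The code under black-box test: callers only see the final total.
struct Checkout {
    let strategy: DiscountStrategy
    func total(for amount: Double) -> Double { amount - strategy.discount(for: amount) }
}

final class CheckoutTests: XCTestCase {

    // Black-box test: only the observable outcome of Checkout is checked.
    func testTotalAppliesConfiguredDiscount() {
        let checkout = Checkout(strategy: PercentageDiscount(percent: 10))
        XCTAssertEqual(checkout.total(for: 200), 180, accuracy: 0.001)
    }

    // White-box tests: each strategy is exercised directly, so a bug in one
    // strategy is located immediately instead of surfacing as a vague failure.
    func testPercentageDiscountComputesFraction() {
        XCTAssertEqual(makeStrategy(percent: 25).discount(for: 80), 20, accuracy: 0.001)
    }

    func testNoDiscountIsAlwaysZero() {
        XCTAssertEqual(NoDiscount().discount(for: 123), 0, accuracy: 0.001)
    }

    // Helper that hides which concrete strategy type is constructed,
    // keeping the white-box tests' maintenance effort low.
    private func makeStrategy(percent: Double) -> DiscountStrategy {
        PercentageDiscount(percent: percent)
    }
}
```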

Related

Is it possible to have different rules for main and test code?

Is it possible to set different rules for main versus test code in Codacy? I know I can eliminate inspection of test code. But I don't want to do that. However, there are many rules, especially regarding duplication, that just don't make sense for tests.
My main language is Scala.
No. The best you can do is ignore folders (for instance the test folder).
We typically relax the rules for test code, but it makes sense to avoid duplication in test code as well. Your (real) code will evolve over time and will eventually force you to change your tests. Why change 100 places instead of a single method that is shared among several tests?
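As a small illustration of that "single method shared among several tests" idea (sketched in Swift rather than Scala, purely to keep one language across the examples here; Account and its tests are made up):

```swift
import XCTest

// Toy production type, only here to make the test file self-contained.
struct Account {
    var balance: Double
    mutating func deposit(_ amount: Double) { balance += amount }
    mutating func withdraw(_ amount: Double) { balance -= amount }
}

final class AccountTests: XCTestCase {

    // One shared factory instead of repeating the same setup in every test.
    // If Account's initializer changes, only this helper has to change.
    private func makeAccount(balance: Double = 100) -> Account {
        Account(balance: balance)
    }

    func testDepositIncreasesBalance() {
        var account = makeAccount()
        account.deposit(50)
        XCTAssertEqual(account.balance, 150, accuracy: 0.001)
    }

    func testWithdrawalDecreasesBalance() {
        var account = makeAccount(balance: 80)
        account.withdraw(30)
        XCTAssertEqual(account.balance, 50, accuracy: 0.001)
    }
}
```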

API testing: Can I reduce my API functional testing effort by increasing my unit tests? Can I replace a functional test for a unit test?

I am trying to optimize my API (both restful and SOAP services) testing effort. I am thinking, one way to do so is by eliminating the redundant functional tests. I am calling them redundant because the same tests might be executed at unit testing level.
I understand that there's developer bias to unit testing so independent functional testing is crucial. I am not trying to replace functional tests by performing extensive unit testing but I am trying to optimize my testing effort by eliminating some of the functional tests while covering them at unit test level.
How can I achieve this? What’s the correlation between unit tests and functional tests?
Let's take the example of a customerAccount/add service. Say I have 6 tests: 2 positive (happy-path) tests, 2 exceeding-boundary-value tests, 1 customer-not-found test, and 1 invalid-customer test. Can I eliminate one of the 2 positive tests and one of the 2 exceeding-boundary-value tests, provided those 2 are tested at the unit testing level? So now, 2 tests are covered at the unit test level and 4 at the functional testing level.
Developers may not be testing services against the end points, they may test classes and methods instead. But in the above example, we are still testing the end points. So that’s covered.
What do you think of this approach?
I agree that having redundant tests should be avoided. But - what makes tests redundant? My view is that a test is redundant if all the potential bugs this test intends to detect are also intended to be found by the remaining tests.
What distinguishes unit-testing, interaction-testing (aka integration-testing), subsystem-testing (aka component-testing or functional-testing) is the respective test goal, which means, which bugs the respective test is intending to catch.
Unit-testing is about catching those bugs that you could find by testing small (even minimally small) isolated pieces of software. Since unit-testing is the earliest testing step that also has the chance to go deepest into the code, a rule of thumb is that if a bug could already be found by unit-testing, you should really find it with unit-testing rather than trying to catch that bug in higher-level tests. This seems to be in line with your approach of "eliminating the redundant functional tests" where "the same tests might be executed at unit testing level". For example, if you aim to find potential bugs in case of an arithmetic overflow within some code, this should be done exactly with unit-testing. Trying to find the same problem on the level of integration-testing or subsystem-testing would be the wrong approach.
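For instance, the overflow case above is easy to pin down at the unit level but awkward to provoke through an endpoint. A minimal Swift sketch (addingSaturated and the tests are hypothetical, just to show how a unit test can sit exactly on the boundary):

```swift
import XCTest

// Hypothetical helper under test: saturating addition that must not trap on overflow.
func addingSaturated(_ a: Int32, _ b: Int32) -> Int32 {
    let (sum, overflowed) = a.addingReportingOverflow(b)
    if !overflowed { return sum }
    return a > 0 ? Int32.max : Int32.min
}

final class OverflowTests: XCTestCase {
    // A unit test can place the inputs exactly at the overflow boundary,
    // which a functional test against the service endpoint can rarely do directly.
    func testAdditionSaturatesAtUpperBound() {
        XCTAssertEqual(addingSaturated(Int32.max, 1), Int32.max)
    }

    func testAdditionSaturatesAtLowerBound() {
        XCTAssertEqual(addingSaturated(Int32.min, -1), Int32.min)
    }
}
```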
You should be aware, however, that the goal of the subsystem-test (= functional-test, if you and I use the same terminology here) was possibly a different one than the similar-looking test on the unit-test level. On the subsystem level you could aim at catching integration bugs, for example if wrong versions of classes are combined (in which case the respective unit-tests of each of the classes might all pass). Subsystem-tests could also intend to find build-system bugs, for example if generated classes for the environment of your manually written code are not generated as expected. And so on.
Therefore, before eliminating redundant tests, be sure you have understood the test goals of these tests, so you can be certain that they are truly redundant.

How to track errors in FPGA/ASIC development using post place-and-route and/or post-synthesis simulation?

I am a bit confused about the usefulness of post-PnR and/or post-synthesis simulations for FPGA/ASIC development. If the synthesis or PnR process completes successfully in the design flow, is there any chance that the respective 'post' simulation will reveal errors in the design? Could someone give an example?
In a typical design flow, post-synthesis and/or post-PnR simulations are not useful, and the aim should be to avoid them.
Post-synthesis simulation can only unearth bugs in the, well, synthesis tool, which are extremely rare in established FPGA tools. Checking for these should not be an integral part of any design flow.
Although there are some very rare cases where the PnR tools might make, for example, a technology-mapping error or fail to warn about a design-rule violation, at least 99% of the cases that reveal problems in post-PnR simulation are due to design errors - most typically clock domain crossings, with memory access race conditions a good, but already very rare, second.
Therefore, the emphasis should be on adhering to the design rules and having a rigorous design methodology to avoid the problems, rather than trying to catch them in post-PnR simulation.
To your question: if there is no negative slack and the design rule check is OK, there should not be anything more that either of the post simulations can reveal.
One practical use for post-PnR simulation is when you have a complex design that is failing occasionally due to timing variation of an external component or a mistake in the I/O constraints, but you don't have a clue about the error mechanism. A combination of an integrated logic analyzer and post-PnR simulation can help find the root cause in the trickiest of situations.
Post-PnR simulations don't only verify the functionality, but also the timing. The timing information of the circuit can be dumped to the simulation in several formats, however the most popular one is Standard Delay Format (SDF), which is published as IEEE 1497.
What kind of errors can we catch then?
It is hard to catch some unwanted glitches in RTL simulations. If some outputs are generated by a combinational logic, post-PnR simulations are more important than ever.
There may be some mistakes in the synthesis and/or PnR constraints. It is always better to double check everything.
Synthesis/PnR tools may have bugs. Logic Equivalence Checking (LEC) can also catch these, but it checks functionality only.
Post-PnR simulations are what industry calls Gate Level Simulation (GLS). It comes in two types, timing and non-timing, and is used to detect:
Timing paths not checked by STA or timing closure.
Bugs in power and reset operation, as HFNS (high fanout net synthesis) and CTS (clock tree synthesis) may have caused irregularities in the reset of some resettable flops, causing them to deliver X to the next logic in the path and leading to X-propagation.
Bugs in DFT logic that was not checked during RTL simulation and might have been removed during PnR.
X on logic paths due to reliability issues on clock-domain-crossing paths skipped by STA.
Translating the logic from mapped to PAR is a mostly stable process. But, of course, if you want to be pedantic, you could use LEC for both syn->map and map->PAR.
Post-PAR sims could be useful, though, if you have issues in the lab, maybe because you didn't fully constrain your design for timing, and need to simulate with the back-annotated SDF, as someone else mentioned above. This, of course, does not help you with respect to I/O if you haven't created models with timing in your TB and/or constrained your I/O properly as provided to you by the board designer.
I think it is best practice to run the regression suite at least once against the PAR netlist with back-annotated SDF. It costs you nothing, and provides one more confidence data point.

hooks versus middleware in slim 2.0

Can anyone explain if there are any significant advantages or disadvantages when choosing to implement features such as authentication or caching etc using hooks as opposed to using middleware?
For instance - I can implement a translation feature by obtaining the request object through custom middleware and setting an app language variable that can be used to load the correct translation file when the app executes. Or I can add a hook before the routing and read the request variable and then load the correct file during the app execution.
Is there any obvious reason I am missing that makes one choice better than the other?
Super TL/DR; (The very short answer)
Use middleware when first starting some aspect of your application, e.g. routers, the boot process, or login confirmation, and use hooks everywhere else, e.g. in components or in microservices.
TL/DR; (The short answer)
Middleware is used when the order of execution matters. Because of this, middleware is often added to the execution stack in various aspects of the code (middleware is often added during boot, while adding a logger, auth, etc.). In most implementations, each middleware function subsequently decides whether execution continues or not.
However, using middleware when order of execution does not matter tends to lead to bugs in which middleware that gets added does not continue execution by mistake, or the intended order is shuffled, or someone simply forgets where or why a middleware was added, because it can be added almost anywhere. These bugs can be difficult to track down.
Hooks are generally not aware of the execution order; each hooked function is simply executed, and that is all that is guaranteed (i.e. adding a hook after another hook does not guarantee the 2nd hook is always executed second, only that it will be executed). The choice to perform its task is left up to the function itself (to call out to state to halt execution). Most people feel this is much simpler and has fewer moving parts, so it statistically yields fewer bugs. However, to decide whether it should run or not, it can be important to include additional state in hooks, so that the hook does not reach out into the app and couple itself with things it's not inherently concerned with (this can take discipline to reason about well, but is usually simpler). Also, because of their simplicity, hooks tend to be added at certain named points of code, yielding fewer areas where hooks can exist (often a single place).
Generally, hooks are easier to reason about and store because their order is not guaranteed or thought about. Because hooks can negate themselves, hooks are also computationally equivalent to middleware, making middleware only a form of coding style or shorthand for common issues.
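A minimal Swift sketch of the structural difference (Slim itself is PHP; the Middleware type, runMiddleware and Hooks here are invented purely to show the shapes, while slim.before.router is a real Slim 2 hook name): middleware forms an ordered chain in which each element decides whether the next one runs, while hooks are callbacks registered under a named point that are all invoked when that point is reached.

```swift
// Middleware: an ordered chain; each element decides whether to call the next one.
typealias Middleware = (_ request: String, _ next: (String) -> Void) -> Void

func runMiddleware(_ stack: [Middleware],
                   request: String,
                   handler: @escaping (String) -> Void) {
    // Build the chain from the inside out so that stack[0] runs first.
    var next: (String) -> Void = handler
    for middleware in stack.reversed() {
        let following = next
        next = { req in middleware(req, following) }
    }
    next(request)
}

// Hooks: callbacks registered under a named point; all of them run,
// and each decides internally whether it has anything to do.
struct Hooks {
    private var callbacks: [String: [(String) -> Void]] = [:]
    mutating func add(_ name: String, _ callback: @escaping (String) -> Void) {
        callbacks[name, default: []].append(callback)
    }
    func apply(_ name: String, request: String) {
        callbacks[name]?.forEach { $0(request) }
    }
}

// Usage sketch
let auth: Middleware = { request, next in
    // Middleware may short-circuit: here it refuses requests without a token.
    if request.contains("token") { next(request) } else { print("rejected") }
}
runMiddleware([auth], request: "GET /home?token=abc") { print("handled \($0)") }

var hooks = Hooks()
hooks.add("slim.before.router") { request in
    // A hook only observes this point; it does not control whether the app continues.
    print("loading translation file for \(request)")
}
hooks.apply("slim.before.router", request: "GET /home")
```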
Deep dive
Middleware is generally thought of today by architects as a poor choice. Middleware can lead to nightmares and the added effort in debugging is rarely outweighed by any shorthand achieved.
Middleware and Hooks (along with Mixins, Layered-config, Policy, Aspects and more) are all part of the "strategy" type of design pattern.
Strategy patterns, because they are invoked whenever code branching is involved, are probably one of, if not the, most often used software design patterns.
Knowledge and use of strategy patterns are probably the easiest way to detect the skill level of a developer.
A strategy pattern is used whenever you need to apply "if...then" type of logic (optional execution/branching).
The more computational thought experiments that are made on a piece of software, the more branches can mentally be reduced, and subsequently refactored away. This is essentially "aspect algebra": constructing the "bones" of the issue, or thinking through what is happening over and over, reducing the procedure to its fundamental concepts/first principles. When refactoring, these thought experiments are where an architect spends the most time: finding common aspects and reducing unnecessary complexity.
At the destination of complexity reduction is emergence (in systems theory vernacular, and specifically with software, applying configuration in special layers instead of writing software in the first place) and monads.
Monads tend to abstract away what is being done to a level that can lead to increased code execution time if a developer is not careful.
Both Monads and Emergence tend to abstract the problem away so that the parts can be universally applied using fundamental building blocks. Using Monads (for the small) and Emergence (for the large), any piece of complex software can be theoretically constructed from the least amount of parts possible.
After all, in refactoring: "the easiest code to maintain is code that no longer exists."
Functors and mapping functions
A great way to continually reduce complexity is applying functors and mapping functions. Functors are also usually the fastest possible way to implement a branch and let the compiler see into the problem deeply so it can optimize things in the best way possible. They are also extremely easy to reason with and maintain, so there is rarely harm in leaving your work for the day and committing your changes with a partially refactored application.
Functors get their name from mathematics (specifically category theory, in which a functor is a mapping between two categories). However, in computation, functors are generally just objects that map the problem-space in one way or another.
There is great debate over what is or is not a functor in computer science, but in keeping with the definition, you only need to be concerned with the act of mapping out your problem, and using the "functor" as a temporary thought scaffold that allows you to abstract the issue away until it becomes configuration or a factor of implementation instead of code.
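As a tiny, concrete Swift illustration of that mapping mindset (the greeting example is invented): the branch-per-case version grows with every new case, while the mapped version turns the branching into data plus a lookup.

```swift
// Branch-heavy version: every new language means another if/else.
func greetingBranching(for language: String) -> String {
    if language == "en" {
        return "Hello"
    } else if language == "fr" {
        return "Bonjour"
    } else {
        return "Hello"
    }
}

// Mapped version: the branching collapses into data plus a lookup,
// with ?? carrying the "missing case" fallback.
let greetings = ["en": "Hello", "fr": "Bonjour", "de": "Hallo"]

func greeting(for language: String) -> String {
    greetings[language] ?? "Hello"
}

// The same idea applied over a collection with a mapping function.
let banners = ["en", "fr"].map { greeting(for: $0).uppercased() }
```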
As far as I can say, middleware is perfect for per-route work, and hooks are best for doing anything application-wide. For your case I think it would be better to use hooks than middleware.

Analysing and generating statistics on your code

I was wondering if anyone had any ideas or procedures for generating general statistics on your source code.
Off the top of my head I would love to know how many functions in my project's code are called once or very few times or any classes that are only instantiated once.
I'm sure there is a ton of other interesting things to be found out.
I could do something like the above using grep magic but has anyone come across tools or tips?
Coverity is the first thing coming to mind. It currently offers (on one of their products)
Software DNA Map™ analysis system: Generates a comprehensive representation of the entire build system including a semantically correct parsing of every line of code.
Defect Manager: Intuitive interface makes it easy to establish ownership of defects and resolve them via a customized workflow that mirrors your existing development process.
Local Analysis: Enables code to be analyzed locally on developers’ desktops to ensure quality before sharing with other developers.
Boolean Satisfiability: Translates the code into questions based on Boolean values, then applies SAT solvers for the most accurate defect detection and the lowest false positive rate available. Only Prevent offers the added precision of this proprietary method.
Race Conditions Checker: Features an industry-first race conditions checker built specifically for today’s complex multi-threaded applications.
Path Simulation: Simulates 100% of all values and data paths, enabling detection of the most critical defects.
Statistical & Interprocedural Analysis: Ensures a comprehensive analysis of your entire build system by inferring correct behavior based on previously observed behavior and performing whole-program analysis similar to executing the binary.
False Path Pruning: Efficiently removes false positives to give Prevent an average FP rate of about 15%, with some users reporting FP rates of as low as 5%.
Incremental Analysis: Analyzes source code wholly or incrementally, allowing you to save time by checking only those components that are affected by a change.
Reporting: Measures software quality trends over time via customizable reporting so you can show defects grouped by checker, classification, component, and other defect information.
There are lots of tools that do this, but AFAIK none of them are language-independent (which would be mostly impossible anyway; e.g. some languages might not even have functions).
Generally you will find those tools under the categories of "code coverage tools" or "profilers".
For .NET you can use Visual Studio or the CLR Profiler.
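For the ad-hoc "grep magic" the question alludes to, here is a rough Swift sketch of the idea: scan the sources, find function declarations with a naive regex, and count textual call sites. It is only a heuristic (overloads, dynamic dispatch, comments, and string literals will all skew the counts), and the regex and paths are assumptions.

```swift
import Foundation

// Collect all .swift files under the current directory.
let fileManager = FileManager.default
var sources: [String] = []
if let enumerator = fileManager.enumerator(atPath: ".") {
    while let path = enumerator.nextObject() as? String {
        if path.hasSuffix(".swift"),
           let text = try? String(contentsOfFile: path, encoding: .utf8) {
            sources.append(text)
        }
    }
}
let allCode = sources.joined(separator: "\n")

// Naive pattern for function declarations: "func name(".
let declPattern = try! NSRegularExpression(pattern: #"func\s+(\w+)\s*\("#)
let wholeRange = NSRange(allCode.startIndex..., in: allCode)
var callCounts: [String: Int] = [:]

for match in declPattern.matches(in: allCode, range: wholeRange) {
    guard let nameRange = Range(match.range(at: 1), in: allCode) else { continue }
    let name = String(allCode[nameRange])
    // Count textual occurrences of "name(" and subtract the declaration itself.
    let occurrences = allCode.components(separatedBy: "\(name)(").count - 1
    callCounts[name] = max(occurrences - 1, 0)
}

// Report functions that are referenced once or not at all outside their declaration.
for (name, count) in callCounts.sorted(by: { $0.value < $1.value }) where count <= 1 {
    print("\(name): \(count) reference(s) outside its declaration")
}
```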