Which modules or test cases need to be tested in regression testing?

A few days ago I went to an interview where they asked me: which modules do you test in regression testing? How do you find out which test cases need to be executed in regression testing?

Any module whose existing code has been modified, even slightly, requires regression testing.

The best way to do this is to have some insight into which test cases cover which parts of the product. Then when a part of the product changes, you can run just the cases that cover the change. This isn't always easy. In a complex piece of software, a change in one part can have an effect in a seemingly disconnected part.
The best solution I have seen to this problem is to use code coverage data. If you know which blocks are hit by each test and you know which blocks were changed by the fix, you can know exactly which test cases to run.
If you don't have a lot of data, your best bet is to think about the change and what things it could affect and then run cases that are in those areas.
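To illustrate the coverage-data idea, here is a minimal sketch of that kind of test selection (all names and file paths are invented; it assumes you already recorded per-test coverage in an earlier full run):

    # coverage_map: test case -> set of source files it executes,
    # collected from a previous full run with a coverage tool.
    coverage_map = {
        "test_login":    {"auth.py", "session.py"},
        "test_checkout": {"cart.py", "payment.py"},
        "test_profile":  {"auth.py", "profile.py"},
    }

    changed_files = {"auth.py"}  # e.g. taken from the diff of the fix

    # Re-run only the tests whose covered files intersect the change.
    regression_suite = [name for name, files in coverage_map.items()
                        if files & changed_files]
    # -> ["test_login", "test_profile"]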

Related

E2E Test Metrics

Apologies if this question is better suited for Stack Exchange Programmers; I've posted it on both sites because I figured it was on the fence.
Question: Are there any valuable metrics on E2E automated tests? (Does it make sense to gather any data around them?)
Context: For example, when I wrote my unit tests, I used a test coverage package which reports % of classes covered, # of methods touched, etc.
A few points I've come across:
Test efficiency (speed of specs)
App performance (response/loading time)
Automation Progress (# of tests currently automated VS # of tests able to be automated)
Defect Efficiency (# of defects found during testing VS # of defects found after delivery)
Any ideas? If it matters, I'm using Protractor on an Angular app.
Something that comes to my mind with Protractor:
You can measure code coverage with e2e tests as well, but that requires some tricks to set up; check these:
https://www.npmjs.com/package/grunt-protractor-coverage
Be aware that it is not a clean solution - your code could be minified and the server side is not included; just keep this in mind.
You could also measure page performance (load speed, JS execution speed, CSS rendering, and other client-side calculations) with something like protractor-perf:
https://github.com/axemclion/protractor-perf
Keep in mind that this requires a fair amount of preparation as well, but it is a cool capability to have.
About automation progress / percent automatable: I don't think you can track this automatically unless your requirements are very detailed and stored in some system with an API. Then you could link each test case to a specific requirement and track that. I have never actually seen this working.
Defect efficiency is easier to track with JIRA reports.
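The number itself is simple arithmetic once JIRA gives you the counts; a quick sketch (all figures invented):

    # Defect detection efficiency: share of all defects caught before delivery.
    found_in_testing = 45      # defects logged during the test phase
    found_after_delivery = 5   # defects reported from production

    defect_efficiency = 100.0 * found_in_testing / (found_in_testing + found_after_delivery)
    # -> 90.0 (percent)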
Yes, it makes sense to collect data on E2E test metrics. Apart from test case execution, performance, and other measurements, metrics help you make decisions about the next phase of activities, such as estimating the cost and schedule of future projects.
They help you understand what kind of improvement the project needs in order to succeed.
They inform decisions about which processes or technologies to change. Test metrics are essential for measuring the quality of the software.

Analyzing coverage of numba-wrapped functions

I've written a Python module, much of which is wrapped in @numba.jit decorators for speed. I've also written lots of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is just a plugin that relies on coverage (with hopes of integrating all of this with coveralls).
Unfortunately, it seems that using numba.jit on all those functions makes coverage think that the functions are never used -- which is kind of the case. So I'm getting basically no reported coverage from my tests. This isn't a huge surprise, since numba takes that code and compiles it, so the Python code itself really never runs. But I was hoping there'd be some of that magic you sometimes see with Python...
Is there any useful way to combine these two excellent tools? Failing that, is there any other tool I could use to measure coverage with numba?
(I've made a minimal working example showing the difference here.)
The best thing might be to disable the numba JIT during coverage measurement. That relies on you trusting the correspondence between the Python code and the JIT'ed code, but you need to trust that to some extent anyway.
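For example, a minimal conftest.py sketch along those lines, assuming numba's documented NUMBA_DISABLE_JIT switch takes effect because conftest.py runs before your module imports numba (the COVERAGE variable is made up for this example):

    # conftest.py -- sketch only; COVERAGE is an invented opt-in variable.
    import os

    if os.environ.get("COVERAGE"):
        # Must happen before the numba-decorated modules are imported, so that
        # @numba.jit becomes a pass-through and coverage sees plain Python.
        os.environ["NUMBA_DISABLE_JIT"] = "1"

Then something like COVERAGE=1 py.test --cov=mymodule should report real line coverage, at the cost of running the un-jitted code.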
Not that this answers the question, but I thought I should advertise another way that someone might be interested in working on. There's probably something really beautiful that could be done using llvm-cov. Presumably, this would have to be implemented within numba, and the llvm code would have to be instrumented, which would require some flag somewhere. But since numba knows about the correspondence between lines of python code and llvm code, there must be something that could be implemented by somebody more clever than I am.

generate jbehave stories dynamically

We are planning to generate JBehave stories dynamically by entering the when/then steps in a simple web form. I am not sure if I like that idea.
I mean, I could programmatically save the .story file before starting the test and point to this file in an overridden StoryPathResolver.resolve method.
But do you think this would make a lot of sense?
Thanks
I'm not sure if I understood your planned feature, but based on my assumptions *), I say: no, it does not make sense for the following reasons.
My assumption here is that you want to test the system on the fly and/or you don't know beforehand what should be tested.
Either way, whatever is typed in won't be reproducible for later regression tests, nor will any of the other typical benefits of automated tests apply, first and foremost speed.
Instead of implementing such a framework, which would have to present the options available at each step and support the stop-and-go execution that implies, it would be far better to streamline the process of writing the .story files and injecting them into a running system. That way the writer can still take as much time as needed to specify the examples, and they are reproducible from the start.
*) Should I be wrong, please rephrase your question; there are a few other questions one could read into your post.

Is Micro Code Generation Considered Harmful?

I recently wrote a small tool to generate a class for each tier I hand write for the boring "forms over data" work where I spend almost 90% of my time (depressing I know) ... more on this as the economy improves ;)
My question is this - will using this tool instead of hand typing all this code from day to day actually hurt me as a developer? I feel like I will always be making changes to this tool and thus I "should" stay on top of the patterns used/ choices made/ etc... but some small part of me feels like I might lose my edge ... am I wrong?
If the tool can spit the code out without thought, then it probably saves you lots of thoughtless typing.
Writing the tool in the first place requires thinking, so I'd guess you'd be more "on the edge" maintaining and writing the tool.
That's good! Of course, writing a tool to do the entire job for you is impossible and wrong.
But automating repeatable tasks is always good - and sometimes writing specific types of code is repeatable.
It is even encouraged in the "Pragmatic Programmer" book.
Make sure that you check the code generator into source control, not its output (unless you have to modify the generated code by hand later)!
You are most definitely not wrong. I use code generators anywhere I can - I currently use CodeSmith to create my DAOs by looking at the database.
What edge are you afraid of losing? In my mind going to code generation is actually giving you an edge.
Larry Wall (of Perl fame) describes the three cardinal virtues of programming as Laziness, Impatience, and Hubris.
Congratulations! You have shown good laziness, in that you have identified some work you can pass off to an automated process and done so. (Bad laziness leads to cutting corners, procrastination, and generally postponing rather than eliminating work.) If you can successfully palm off some work onto another program, you are spending less time on annoying triviality and more on accomplishing things and learning.
Generate what you can. Code generation is one of the best tools I've picked up over the last 2 or 3 years. Typing the same code over and over (or copy and pasting it) is prone to error.
Spending less time doing something by having something/someone else do it, and more time researching better ways to do it will generally lead to doing it in a better way.
This doesn't have to just apply to programming....
Your code generator (at least in principle - I haven't looked at it myself) is The Right Thing, at least as far as it goes.
The next step would be to see whether you can, instead of generating all this redundant code, create a base class whose functionality matches the generated code and then derive your application code from it. Using inheritance rather than generation will allow you to benefit from improvements without needing to re-run the generator on all your projects. Perhaps more importantly, if you customize the generated code, the customizations would be lost if you re-run the generator, but customizations in a derived class will be preserved when the base class is changed.
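A rough illustration of that split (the names are invented, and this is only a sketch of the idea, not the poster's tool): the generator owns the base class, and hand-written customisations live in a subclass that survives regeneration.

    # customer_base.py -- GENERATED, never edited by hand
    class CustomerFormBase:
        fields = ("name", "email")

        def load(self, record):
            for field in self.fields:
                setattr(self, field, record[field])

    # customer_form.py -- hand-written, safe to edit
    class CustomerForm(CustomerFormBase):
        def load(self, record):
            super().load(record)
            # Customisation the generator knows nothing about; it is kept
            # even when customer_base.py is regenerated.
            self.email = self.email.lower()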
No. Why do you think IDEs are so popular? Imagine if all the people who use Visual Studio had to create their GUIs programmatically without help from the IDE; it would be terrible. I would be willing to bet most people who use Visual Studio wouldn't know how to manually create the forms they're building in the IDE. But there's nothing wrong with that.
I believe in code generation wherever possible to remove the rote tasks of programming. You will not lose your edge, you will probably become a better programmer because you will spend more time working on the important and interesting stuff.
BTW, your tool sounds interesting. Have you released it anywhere?
Code generation is fine as long as you understand what you are generating. Physicists use calculators because they understand the formulas they are automating and realize that their precious time is better spent on important tasks.
Code generation is one of those invaluable DOs that The Pragmatic Programmer advocates. I truly recommend that book. Here's a Pragmatic Programmer quick ref.
It's almost hypocritical not to generate code. Here we are automating all of these tasks that were traditionally done by hand... and yet many of us still hand-crank all of our code, even when it could easily be generated.
My only experience with code generation is the macros of Common Lisp. They are used all the time. Everything that automates repetitive tasks is beneficial; that is what programming is about.
Read the story of Mac.
Imagine that each time you made a change to the tool and regenerated your code, that you made that design change by hand on all of your modules.
Since I've started generating code and gotten up to speed, I've found that I rarely get bugs in the generated code.
I find that writing code gen does help me learn the nuances of good architecture. You start seeing common patterns as opposed to a narrow view of your design. That said, don't use code gen as a substitute for good object-oriented code, and don't love your code gen so much you ignore new technologies. For example, if you're in .NET and are writing code-gen for data access, you'd better have a good excuse for not using Linq to SQL or NHibernate. Similarly, Dynamic Data can help in many forms-on-data scenarios. So, my advice: spike new stuff and code gen as needed.
My 2 cents on code gen: it is also critical for refactoring. I have found that partial classes and a good file-comparison utility (Araxis or Beyond Compare) are essential.
Keep your generated code in one file and the custom tweaks you made for that class in another file.
This practice lets you roll out comprehensive framework changes quickly and also helps you move to a new paradigm while preserving your custom logic.
CodeSmith FTW!
While build servers are great for making sure all your code compiles, they don't catch signature differences with your stored procs or the like. If you routinely run the code gen you can more easily identify when those changes occur. A unit test will tell you the SP is wrong; code gen will tell you how to make it right.

Are mock frameworks and high test coverage important?

Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. Having said that, using them to verify how different methods on particular components are called (and in what order) seems bad to me. It exposes the behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally, I feel like I've been chained to a heavy ball.
I would much rather just test against the interface, giving test data as input and asserting on the result. Better yet, use a testing tool that automatically generates test data to verify a given property, e.g. that adding one element to a list and then removing it immediately yields the same list.
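For example, a minimal sketch of the kind of property I mean, written in Python with the Hypothesis library purely to illustrate (QuickCheck-style tools exist for the JVM as well):

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()), st.integers())
    def test_append_then_remove_is_identity(xs, x):
        original = list(xs)
        xs.append(x)
        xs.pop()              # remove the element we just added
        assert xs == original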
In our workplace we use Hudson, which reports test coverage. Unfortunately, it makes it easy to become blindly obsessed with having everything tested. I strongly feel that one shouldn't test everything if one also wants to be productive in maintenance mode. A good example would be controllers in web frameworks. Since they should generally contain very little logic, using a mock framework to test that a controller calls such and such methods in a particular order is, in my honest opinion, nonsensical.
Dear SOers, what are your opinions on this?
I read 2 questions:
What is your opinion on testing that particular methods on components are called in a particular order?
I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days.
We try to write unit tests which test only one thing. When we do this it's normally possible to write a very simple test which stubs out interactions with most other components. And we very rarely assert ordering. This helps to make the tests less brittle.
Tests which test only one thing are easier to understand and maintain.
Also, if you find yourself having to write lots of expectations for interactions with lots of components there could well be a problem in the code you're testing anyway. If it's difficult to maintain tests the code you're testing can often be refactored.
Should one be obsessed with test coverage?
When writing unit tests for a given class I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.
Overall unit test coverage stats? Not particularly interested so long as they're high.
100% unit test coverage for an entire system? Not interested at all.
I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
The book The Art of Unit Testing has plenty of good advice in these areas.
100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc. seem unlikely to provide good ROI. TDD will provide high test coverage in any case. Test what might break.
It depends on how you model the domain(s) of your program.
If you model the domains in terms of data stored in data structures and methods that read data from one data structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want. The outcome you care about is that a procedure puts the right data in the right variables, and what it calls to make that happen is just an implementation detail.
If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about and what data the objects store to coordinate their behaviour in the protocols in which they play roles is just implementation detail. In that case, mock objects are the right tool for the job and state based testing ties the tests too closely to unimportant implementation details.
And in most object-oriented programs there is a mix of styles. Some code will be written purely functional, transforming immutable data structures. Other code will be coordinating the behaviour of objects that change their hidden, internal state over time.
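A minimal side-by-side sketch of the two styles, using Python's unittest.mock purely as an illustration (all names invented):

    from unittest import mock

    # Data-in/data-out code: state-based testing fits.
    def apply_discount(order, rate):
        return {**order, "total": order["total"] * (1 - rate)}

    def test_apply_discount():
        # Assert on the resulting data, not on what was called.
        assert apply_discount({"total": 100.0}, 0.25)["total"] == 75.0

    # Collaborating objects: the protocol is what matters, so a mock fits.
    class Checkout:
        def __init__(self, gateway):
            self.gateway = gateway

        def pay(self, amount):
            return self.gateway.charge(amount)

    def test_checkout_charges_the_gateway():
        gateway = mock.Mock()
        Checkout(gateway).pay(42)
        # Assert on the message sent, not on any stored state.
        gateway.charge.assert_called_once_with(42)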
As for high test coverage, it really doesn't tell you that much. Low test coverage shows you where you have inadequate testing, but high test coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths and so increase the coverage stats but not actually make any assertions about what those code paths did. Also, what really matters is how different parts of the program behave in combination, which unit test coverage won't tell you. If you want to verify that your tests really are testing your system's behaviour adequately you could use a Mutation Testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
I'd asked a similar question How Much Unit Testing is a Good Thing, which might help give an idea of the variety of levels of testing people feel are appropriate.
What is the probability that during your code's maintenance some junior employee will break the part of code that runs "controller calls such and such method in particular order"?
What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?
Now multiply #1 by #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk (a quick sketch of that arithmetic follows the examples below).
Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).
E.g. if you maintain a web app that is not production-critical and has 100 users who have a workaround if the app is broken (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.
If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage makes a lot more sense.
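A back-of-the-envelope sketch of that multiplication (every number here is invented):

    p_break       = 0.05     # chance a future change breaks this behaviour
    cost_incident = 20_000   # outage + debug + re-test + re-release, in dollars
    cost_of_tests = 500      # effort to write and maintain the coverage

    expected_loss = p_break * cost_incident         # about 1,000 dollars
    worth_testing = expected_loss > cost_of_tests   # True for these numbers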
Also, I'm not sure if I'm reading too much into your question, but you seem to be implying that mocking-based unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion - the two testing approaches must coexist.