When software is developed, various types of testing are done: unit, integration, functional, manual. In my current project (WinForms with SQL Server), which has legacy code (no tests), we have a lot of bugs.
We are trying to remove them using a combination of manual testing + automated tests (mostly integration).
But some bugs can still escape.
For example (a hypothetical scenario): if a customer has purchased goods worth a certain amount over the last 6 months, he should be given a discount on purchases he makes once the 6 months have lapsed, and his status should be updated to privileged.
But for some reason (a bug in the code) the system is not doing so. How should we tackle such scenarios? Should we have a script running on the database which looks for scenarios such as the one described? Another extension of the scenario could be that the customer must be sent a gift once he becomes privileged, but the system fails to do so.
Thoughts?
"Should we have a script running on the database which looks for scenarios such as described?"
Do you mean "put a script in the database to correct the problem"? Then no.
NO. Never. Under no circumstances. Working around a bug by adding peculiar special-case logic is a very bad idea.
When that peculiar special-case logic has its own bugs, you've added buggy code to try to correct buggy code. A net loss.
When you try to enhance the system, you have this peculiar special-case logic that doesn't make any sense.
a. If you're lucky, you fixed the bug it was supposed to work around, and it will be redundant. What now? Which copy to remove?
b. Otherwise, it will contradict other code. What now? Which is right?
If you mean "put a script in the database to help find and debug the problem", then yes. For a short time, use every tool at your disposal to find and fix bugs. Once found and fixed, this script is then useless and must be deleted.
If you mean "write a script in the database to test the application", then yes. That's what unit test scripts are for. Use them.
It is far better to create unit tests than it is to create scripts that you put in the database. Unit tests are the best approach.
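Going back to the "script to help find and debug the problem" option: a throwaway consistency check could look something like the sketch below. The table names, column names and spend threshold are invented for illustration; adapt them to your schema, and delete the check once the bug is found and fixed.

using System;
using System.Data.SqlClient;

// Throwaway diagnostic (delete once the bug is fixed): list customers who meet
// the 6-month / spend criterion but were never flagged as privileged.
// Table, column and threshold names/values below are hypothetical.
class PrivilegeAudit
{
    static void Main()
    {
        const string sql = @"
            SELECT c.CustomerId
            FROM Customers c
            JOIN Purchases p ON p.CustomerId = c.CustomerId
            WHERE c.IsPrivileged = 0
            GROUP BY c.CustomerId
            HAVING MIN(p.PurchaseDate) <= DATEADD(month, -6, GETDATE())
               AND SUM(p.Amount) >= 1000";

        using (var conn = new SqlConnection("<your connection string here>"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Missed privilege upgrade: customer {0}", reader.GetInt32(0));
            }
        }
    }
}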
You should have an automated test suite in place. This test suite will implement all the scenarios that the specification requires. Since one cannot wait for six months to test that the discounting works, the actual implementation is replaced by a mock implementation (the example is in Java, but the same principles apply in other languages) that, for example, "simulates" that 6 months have passed. One can use assertions to automate the tests.
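As a minimal sketch of that mocking idea in C# (all the names here - IClock, FakeClock, CustomerService and the 1000 threshold - are invented for illustration), the trick is to inject the clock so the test can "fast-forward" six months instead of waiting for them:

using System;
using NUnit.Framework;

// The clock is injected, so the test controls what "now" means.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class FakeClock : IClock
{
    public DateTime UtcNow { get; set; }
}

public class CustomerService
{
    private readonly IClock _clock;

    public CustomerService(IClock clock) { _clock = clock; }

    public bool IsPrivileged(DateTime firstPurchaseUtc, decimal totalSpent)
    {
        // Business rule under test: 6 months of history and some spend threshold
        // (the 1000 figure is an assumption, not from the original question).
        return totalSpent >= 1000m
            && _clock.UtcNow >= firstPurchaseUtc.AddMonths(6);
    }
}

[TestFixture]
public class PrivilegeStatusTests
{
    [Test]
    public void CustomerBecomesPrivilegedAfterSixMonths()
    {
        var clock = new FakeClock { UtcNow = new DateTime(2014, 7, 1) };
        var service = new CustomerService(clock);

        // Purchases started exactly six months before the fake "now".
        Assert.That(service.IsPrivileged(new DateTime(2014, 1, 1), 1500m), Is.True);
    }
}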
Once you have the whole test suite ready, if all tests pass after a refactoring or change of the code (just as they did before), you can be confident that no feature has broken due to the refactoring.
I have seen many theories about Theory, but the truth is that none of them is good enough to explain why one should not always use Theory. I do not see a reason (apart from... well... the fact that the words Theory and Test are not the same words) why not to always use a Theory.
The funny thing is that in all the examples out there you could easily set up your test to be either a Test or a Theory. I am sure I am missing something here, but I do not see the need to put myself in philosophical uncertainty about which type of attribute I should use.
Let's suppose that the difference is only for the sake of self-documenting code. Then why do you use Datapoint and DatapointSource for Theory and something else for Test?
The truth is that I feel nobody has come up with a simple and clean answer that shows where the difference really lies. At least one example where Theory makes absolutely no sense and Test fits nicely, or the other way around...
As the programmer I am... if the answer is not simple, there is something wrong there, so help me see what I am missing.
Since you are referring to NUnit, I'll answer with respect to what NUnit means by a theory. Note that this is entirely different from how xUnit.Net uses the term.
I got the idea of theories from JUnit and particularly from the work of David Saff. Here's one paper on the subject: http://groups.csail.mit.edu/pag/pubs/test-theory-demo-oopsla2007.pdf Google "saff theories" and you'll find more.
Basically, when creating a test you sometimes have a theory about how the tested code should work. (In this answer, lower case theory is the English word while Theory is the NUnit thing) This is especially common in mathematical reasoning. For example, consider a program that calculates square roots. I could theorize that the square root of any non-negative number is a value such that it gives the original number when multiplied by itself. That's a completely self-contained statement about the computation.
Using NUnit, I could write a test like this...
[Theory]
public void SqrtTest(double value)
{
    Assume.That(value >= 0);
    double answer = SquareRootOf(value);
    Assert.That(answer * answer, Is.EqualTo(value));
}
This test works no matter what number we give it. If the number is negative, the result is inconclusive and doesn't affect the overall outcome. If it is positive, an actual assertion is performed and the test either succeeds or fails.
In an ideal world, the provided values can come from anywhere. In JUnit, they come from Datapoints, and I copied that. I also permitted them to be programmer-specified, mainly as an interim solution. Ultimately, the idea was that the test framework would support a range of ways to generate data for a Theory, without the intervention of the programmer or tester.
Unfortunately, we are still waiting for that last bit. :-)
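In the meantime, programmer-supplied data points look something like this (a hedged sketch: the fixture and field names are invented, and Math.Sqrt stands in for the hypothetical SquareRootOf):

using System;
using NUnit.Framework;

[TestFixture]
public class SqrtTheoryFixture
{
    // Individual values NUnit offers to any double parameter of a Theory
    // in this fixture.
    [Datapoint]
    public double Zero = 0.0;

    [Datapoint]
    public double Four = 4.0;

    // A whole set of values supplied at once.
    [DatapointSource]
    public double[] MoreValues = new double[] { 0.25, 2.0, 100.0, -1.0 };

    [Theory]
    public void SqrtTest(double value)
    {
        Assume.That(value >= 0);   // negative data points come back as Inconclusive
        double answer = Math.Sqrt(value);
        Assert.That(answer * answer, Is.EqualTo(value).Within(1e-9));
    }
}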
Bottom line, I think you should use Theory when you have a theory. Use a test when you just have examples without any logic binding them together. IME that's what happens in most business applications.
One of these days I hope to write a pretty long chapter about Theories.
I have been on a "cleaning spree" lately at work, doing a lot of touch-up stuff that should have been done a while ago. One thing I have been doing is deleting modules that were imported into files and never used, or that were used at one point but not anymore. To do this I have just been deleting an import and running the program's test file, which gets really, really tedious.
Is there any programmatic way of doing this, short of writing a program myself to do it?
Short answer, you can't.
Longer, possibly more useful answer: you won't find a general-purpose tool that will tell you with 100% certainty whether the module you're purging is actually used. But you may be able to build a special-purpose tool to help with the manual search that you're currently doing on your codebase. For example, try a wrapper around your test suite that removes the use statements for you and ignores any error messages except messages like Undefined subroutine &__PACKAGE__::foo and other messages that occur when accessing missing features of a module. The wrapper could then automatically perform a dumb source scan on the codebase of the module being purged to see whether the missing subroutine foo (or other feature) might be defined in the unwanted module.
You can supplement this with Devel::Cover to determine which parts of your code don't have tests so you can manually inspect those areas and maybe get insight into whether they are using code from the module you're trying to purge.
Due to the halting problem you can't statically determine whether any program, of sufficient complexity, will exit or not. This applies to your problem because the "last" instruction of your program might be the one that uses the module you're purging. And since it is impossible to determine what the last instruction is, or whether it will ever be executed, it is impossible to statically determine whether that module will be used. Further, in a dynamic language, which can extend the program during its run, analysis of the source or even the post-compile symbol tables would only tell you what was calling the unwanted module just before run-time (whatever that means).
Because of this you won't find a general purpose tool that works for all programs. However, if you are positive that your code doesn't use certain run-time features of Perl you might be able to write a tool suited to your program that can determine if code from the module you're purging will actually be executed.
You might create alternative versions of the modules in question, which have only an AUTOLOAD method (and an import, see the comments) in them. Make this AUTOLOAD method croak on use. Put these fake modules first into the include path.
You might refine this method by making AUTOLOAD only log the usage and then load the real module and forward the original function call. You could also have a subroutine first in @INC which creates the fake module on the fly if necessary.
Of course you need good test coverage to detect even rare uses.
This concept is definitely not perfect, but it might work with lots of modules and simplify the testing.
We have an ERP system running in our company based on Progress 8. Can you give an indication of how compatible OpenEdge 11 is with version 8? Is it like "compile the source" and it will run (with testing, of course :-)), or more like every second line will need rework?
I know it's a general question but maybe you can provide a general answer? :o)
Thanks,
Gunter
Yes. Convert the db and recompile.
Sometimes you might run across keyword conflicts. A quick fix for that is the -k parameter (the "keyword forget list"). Using -k is a quick way to get old code that has variables or table/field names that have become new keywords to compile while you work on changing the names.
You might also see the occasional situation where the compiler has tightened up the rules a bit. For instance, there was some tightening of rules around defining shared variables in the v8/v9 time frame -- most of what I remember about that was looking at the impacted code and asking myself "how did that ever compile to start with?"
Another potential issue -- if your application uses a framework (such as "smart objects") whose API might change from release to release it is important to make sure that you compile against the version of that framework that your code requires -- not something newer but different.
Obviously you need to test but the overwhelmingly vast majority of code recompiles and runs without any issues.
We just did the conversion from Progress 8.3E to OpenEdge 11 a few days ago. It went on much like Tom wrote. Convert and recompile.
The only problem was one database that was originally created in Progress version 7. Here the conversion failed, but since it was a small database, it was quicker to dump, recreate and load.
The current version (0.98) of Moose::Manual::MooseX contains these lines:
"We have high hopes for the future of MooseX::Method::Signatures and MooseX::Declare. However, these modules, while used regularly in production by some of the more insane members of the community, are still marked alpha just in case backwards incompatible changes need to be made."
I noticed that for MooseX::Method::Signatures the change log for September 2009 mentions the removal of the "scary ALPHA disclaimer".
So, are these still "alpha"?
Would I still be considered one of the "more insane" to use them?
I'd say they are production ready - I'm using them in production - but there are several things to consider:
Performance
MooseX::Declare and its dependencies do almost all of their magic at compile time. Depending on the size of your program, you might find anywhere from half a second to several seconds of additional initialization overhead. If this is a problem, don't use MooseX::Declare.
At runtime, the main overhead is type and argument checking, which you should (ideally) be doing anyway. That said, Moose type constraints have some overheads, namely coercion and the more complex (MooseX::Types::Structured-style) constraints. Don't use these if performance is an issue.
Stability
MooseX::Declare and MooseX::Method::Signatures' external syntax is now stable. But it is important to know that the internals are subject to extreme change. (Fortunately, changes for the better.)
To give you an idea, the signature itself is grabbed using a big block of C code stolen from the Perl tokenizer (toke.c). This can break in some situations since it isn't actually parsing anything. The bit inside the brackets is parsed using PPI, which is designed for pure Perl, but the resulting PPI tree is then hacked up to get something useful. Devel::Declare itself is a hack - after it sees specific keywords (e.g. 'role', 'class', 'method') the Devel::Declare-using module must rewrite the source code by hand, with no interaction with the real Perl parser.
Corner cases may cause Perl to segfault, or rewrite the source code badly, so you get syntax errors but have no idea what's causing them without -MO=Deparse. If you mess up the MooseX::Declare syntax by accident, there is no guarantee that the module will detect this and give you a sensible error. The ALPHA message may have gone, but this is still doing dark and scary things internally, and you should be prepared for that.
UPDATE
MooseX::Declare has not been updated much, and you may wish to look at alternatives such as Moops. Personally, I have decided to stick with pure Moose until Perl itself begins to support class/method/has syntax natively, which is possibly on the cards.
I think it's a matter of differing perspectives as much as anything -- rafl is one of the aforementioned "more insane members of the community" while Rolsky is more conservative. It's up to you to decide who you agree with, and really I think that the most important variable is your own code.
MooseX::Declare is good code. It won't randomly blow up your machine, it's not awful for performance, and it offers a lot of nifty stuff while reducing the amount of boilerplate that you have to write. But it might change in the future, making your code refuse to compile until it's updated; it might make your editor and other development tools confused when they see syntax that they can't parse; it might piss off your collaborators by making them learn a new module to work with your code; or it might piss off your boss by making it so any future maintainer has to learn a new module to work with your code. Which of those things apply to you, and to what degree? You know better than I do, I hope.
There are people who feel that the maturity and stability of MooseX::Declare, of Devel::Declare on which it's based, or even of Moose itself are not yet ready for "prime time". I also know of two large companies, with millions of visitors a month, that have MooseX::Declare in their production environment. I personally am happy with the stack Moose provides and do not yet see a need to bring in MooseX::Declare. I know people whose opinion I deeply respect who refuse to write new code without the declarative sugar from MooseX::Declare.
All of this is to say, the decision on whether something is or is not production ready is highly dependent upon your production environment, your development needs, and taste for risk. Without being in your shoes we can't possibly give an informed decision as to how well any given tool matches that profile.
MooseX::Method::Signatures (MXMS), and MooseX::Declare which uses it, are not production ready. This is not because the code isn't stable, but because it is appallingly slow. Simply using the method keyword, with no types or arguments, incurs a 500-1000x runtime performance hit over a regular method call. My MacBook Pro can do about 6,000 simple method calls per second using MXMS vs 5,000,000 with plain Perl.
Method::Signatures, in contrast, has almost no performance hit above what it would normally cost to do the requested checks. The syntax is almost exactly the same as MXMS and it supports Moose (and Mouse) types. Both rely on the same underlying syntax modifying technique. (Full disclosure, I am the author of Method::Signatures.)
If you like MooseX::Declare but want the performance of Method::Signatures, try Method::Signatures::Modifiers.
It depends on what you mean by "production ready". I wouldn't depend on them until their velocity slows down quite a bit. I like my production stuff to not need frequent care from external code changes, API adjustments, and so on. That's not something particular to Moose, but any young project.
You have to judge how much that matters to you. In some situations, pushing stuff into production is a lengthy process, so you must be circumspect with such things. At the other extreme, some places let you edit files directly on the production server. That is, you have to define your tolerance before anyone can tell you which side a given MooseX module is on.
MooseX::Declare and MooseX::Method::Signatures are working well, but they can carry a really nasty penalty depending on what your code does. This can be fixed by simply not using the method keyword, or by using Method::Signatures::Modifiers.
The performance penalty I am seeing is around 2-5x compared to Method::Signatures::Modifiers (5x being mostly for one specific class I was using). And it seems to be mostly compile time, or maybe first-time initialization, because it drops under 2x when the computation is longer.
Method::Signatures::Modifiers has better errors, but you have to turn this optimization off when you use the debugger (it goes haywire because it does not see these methods; you can see for yourself in -MO=Deparse output).
It may be worth it to get rid of Perl argument shifting hell.
I am battling to understand why a post-compiler, like PostSharp, should ever be needed.
My understanding is that it just inserts code where attributed in the original code, so why doesn't the developer just do that code writing themselves?
I expect that someone will say it's easier to write since you can use attributes on methods and then not clutter them up with boilerplate code, but that can be done using DI or reflection and a touch of forethought without a post-compiler. I know that since I have said reflection, the performance elephant will now enter the room - but I do not care about relative performance here, when the absolute performance for most scenarios is trivial (sub-millisecond to millisecond).
Let's try to take an architectural point of view on the issue. Say you are an architect (everyone wants to be an architect ;)
You need to deliver the architecture to your team:
a selected set of libraries, architectural patterns, and design patterns. As a part of your design, you say: "we will implement caching using the following design pattern:"
string key = string.Format("[{0}].MyMethod({1},{2})", this, param1, param2 );
T value;
if ( !cache.TryGetValue( key, out value ) )
{
    using ( cache.Lock(key) )
    {
        if ( !cache.TryGetValue( key, out value ) )
        {
            // Do the real job here and store the value into variable 'value'.
            cache.Add( key, value );
        }
    }
}
This is a correct way to do caching. Developers are going to implement this pattern thousands of times, so you write a nice Word document telling them how you want the pattern to be implemented. Yeah, a Word document. Do you have a better solution? I'm afraid you don't. Classic code generators won't help. Functional programming (delegates)? It works fairly well for some aspects, but not here: you need to pass method parameters to the pattern. So what's left? Describe the pattern in natural language and trust that developers will implement it correctly.
What will happen?
First, some junior developer will look at the code and say, "Hm. Two cache lookups. Kinda useless. One is enough." (That's not a joke -- ask the DNN team about this issue.) And your pattern ceases to be thread-safe.
As an architect, how do you ensure that the pattern is properly applied? Unit testing? Fair enough, but you will hardly detect threading issues this way. Code review? Maybe that's the solution.
Now, what if you decide to change the pattern? For instance, you detect a bug in the cache component and decide to use your own. Are you going to edit thousands of methods? It's not just refactoring: what if the new component has different semantics?
What if you decide that a method is not going to be cached any more? How difficult will it be to remove caching code?
The AOP solution (whatever the framework is) has the following advantages over plain code:
It reduces the number of lines of code.
It reduces the coupling between components, therefore you don't have to change many things when you decide to change the logging component (just update the aspect), and therefore it improves the capacity of your source code to cope with new requirements over time.
Because there is less code, the probability of bugs is lower for a given set of features, therefore AOP improves the quality of your code.
So if you put it all together:
Aspects reduce both development costs and maintenance costs of software.
I have a 90 min talk on this topic and you can watch it at http://vimeo.com/2116491.
Again, the architectural advantages of AOP are independent of the framework you choose. The differences between frameworks (also discussed in this video) influence principally the extent to which you can apply AOP to your code, which was not the point of this question.
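To make the contrast concrete, here is a rough sketch of the caching pattern expressed as a single aspect using PostSharp's MethodInterceptionAspect. The ConcurrentDictionary and the attribute/class names are stand-ins for the real caching component, and unlike the hand-written double-checked lock it does not stop two threads from computing the same value concurrently; it is only meant to show where the boilerplate goes.

using System;
using System.Collections.Concurrent;
using PostSharp.Aspects;

// Rough sketch only: the ConcurrentDictionary stands in for the architecture's
// real caching component, and the key format mirrors the hand-written pattern.
[Serializable]
public class CacheResultAttribute : MethodInterceptionAspect
{
    private static readonly ConcurrentDictionary<string, object> Cache =
        new ConcurrentDictionary<string, object>();

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        string key = string.Format("[{0}].{1}({2})",
            args.Instance, args.Method.Name,
            string.Join(",", args.Arguments.ToArray()));

        // Note: unlike the double-checked lock above, GetOrAdd does not prevent
        // two threads from computing the same value concurrently.
        args.ReturnValue = Cache.GetOrAdd(key, _ =>
        {
            args.Proceed();      // run the original (intercepted) method body
            return args.ReturnValue;
        });
    }
}

// Usage: the pattern becomes one attribute instead of a dozen lines per method.
public class ProductRepository
{
    [CacheResult]
    public decimal GetPrice(int productId)
    {
        // ...expensive database call...
        return 42m;
    }
}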
Suppose you already have a class which is well-designed, well-tested etc. You want to easily add some timing on some of the methods. Yes, you could use dependency injection, create a decorator class which proxies to the original but with timing for each method - but even that class is going to be a mess of repetition...
... or you can add reflection to the mix and use a dynamic proxy of some description, which lets you write the timing code once, but requires you to get that reflection code just right - which isn't as easy as it might be, especially if generics are involved.
... or you can add an attribute to each method that you want timed, write the timing code once, and apply it as a post-compile step.
I know which seems more elegant to me - and more obvious when reading the code. It can be applied even in situations where DI isn't appropriate (and it really isn't appropriate for every single class in a system) and with no other changes elsewhere.
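For instance, a minimal timing aspect along those lines could look like this (a sketch: the TimedAttribute and OrderService names and the console logging are invented; OnMethodBoundaryAspect, OnEntry and OnExit are standard PostSharp extension points):

using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable]
public class TimedAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        // Stash a stopwatch in the per-call tag so OnExit can read it back.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var stopwatch = (Stopwatch)args.MethodExecutionTag;
        stopwatch.Stop();
        Console.WriteLine("{0} took {1} ms", args.Method.Name, stopwatch.ElapsedMilliseconds);
    }
}

public class OrderService
{
    [Timed]   // the post-compiler weaves the timing code around this method
    public void PlaceOrder(int orderId)
    {
        // ...real work...
    }
}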
AOP (PostSharp) is for attaching code to all sorts of points in your application, from one location, so you don't have to place it there.
You cannot achieve what PostSharp can do with Reflection.
I personally don't see a big use for it, in a production system, as most things can be done in other, better, ways (logging, etc).
You may like to review the other threads on this matter:
Anyone with Postsharp experience in production?
Other than logging, and transaction management what are some practical applications of AOP?
Aspect Oriented Programming: What do you use PostSharp for?
etc (search)
Aspects take away all the copy-and-paste code and make adding new features faster.
I hate nothing more than, for example, having to write the same piece of code over and over again. Gael has a very nice example regarding INotifyPropertyChanged on his website (www.postsharp.net).
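For comparison, this is the kind of hand-written INotifyPropertyChanged boilerplate such an aspect is meant to eliminate (a plain .NET sketch with a single property and an invented CustomerViewModel class; imagine repeating it for every property on every view model):

using System.ComponentModel;

// The repetitive part: every property needs the same backing field, equality
// check, assignment and change notification.
public class CustomerViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}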
This is exactly what AOP is for. Forget about the technical details, just implement what you are being asked for.
In the long run, I think we all should say goodbye to the way we are writing software now. It's tedious and plainly stupid to write boilerplate code and iterate manually.
The future belongs to a declarative, functional style held together by an object-oriented framework - with the cross-cutting concerns handled by aspects.
I guess the only people who will not get it soon are the guys who are still paid for lines of code.