How to determine which TestCaseSource a value is coming from? - nunit

I am using multiple TestCaseSource attributes. Is it possible to determine which value is coming from which source?
Code example:
[TestCaseSource(nameof(CountryListA))]
[TestCaseSource(nameof(CountryListB))]
public void SomeTest(Country country)
{
    // ...tests on country...
    // access the source (either CountryListA or CountryListB)
}
There are multiple reasons to access the source: for example, I need to check that the country is unique in that list, or, if something goes wrong, I want to log which country from which source caused the failure.

Short answer: No. NUnit is, in fact, designed to make this impossible. Tests are basically not supposed to know where their arguments came from or who supplied them. This is important in some kinds of advanced test generation scenarios as well as for [Theory] tests.
The general approach to this problem is to take steps that ensure your tests are uniquely named. Any duplicates, whether within the same list or between two lists, make it impossible to determine exactly which source is the problem.
The key question is whether the apparently duplicate tests are true duplicates. You won't know that unless the Type of each returned argument overrides ToString() in a way that allows you to determine the exact case. For example, if Country were a class without a ToString override, every instance of the test would be named something like SomeTest(<Country>). OTOH, this would not be a problem if Country is an enum or if its ToString() is overridden in a unique way.
In your case, there is a relatively simple way to give your tests a unique full name even if the display names are the same; a sketch follows the steps below.
Put all the tests in an abstract base class.
Derive two different fixtures from that base.
Use CountryListA in one of those fixtures and CountryListB in the other.
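Concretely, a minimal sketch of that layout might look like the following. It is adapted slightly in that each derived fixture declares the TestCaseSource itself (the source member has to be static), and Country, CountryData and the fixture names are assumptions for the example, not from the original code:

using System.Collections.Generic;
using NUnit.Framework;

public abstract class CountryTestsBase
{
    // Shared assertions live here; the derived fixture's name tells you which list a failing case came from.
    protected void AssertCountryIsValid(Country country)
    {
        Assert.That(country, Is.Not.Null);
        // ...tests on country...
    }
}

[TestFixture]
public class CountryListATests : CountryTestsBase
{
    public static IEnumerable<Country> Countries => CountryData.CountryListA;

    [TestCaseSource(nameof(Countries))]
    public void SomeTest(Country country) => AssertCountryIsValid(country);
}

[TestFixture]
public class CountryListBTests : CountryTestsBase
{
    public static IEnumerable<Country> Countries => CountryData.CountryListB;

    [TestCaseSource(nameof(Countries))]
    public void SomeTest(Country country) => AssertCountryIsValid(country);
}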
That said, it seems to me that the better approach is to keep the current structure and make the test cases more identifiable.
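One hedged sketch of that direction uses TestCaseData to tag each case with its source list in the test name. CountryListA is the list from the question; the assumption that Country exposes a Name property is mine:

private static IEnumerable<TestCaseData> CountryListACases()
{
    foreach (Country country in CountryListA)
    {
        // Name each case after both the country and the list it came from,
        // so a failure is immediately attributable in the test results and logs.
        yield return new TestCaseData(country).SetName($"SomeTest(ListA: {country.Name})");
    }
}

[TestCaseSource(nameof(CountryListACases))]
public void SomeTest(Country country)
{
    // ...tests on country...
}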

Related

Scala legacy code: how to access input parameters at different points in execution path?

I am working with a legacy Scala codebase, and, as is always the case, modifying the code is quite difficult without touching many different parts.
One of my new requirements is to make several decisions based on some input parameters. The problem is that these decisions have to be made at various points along the execution path. One option is to encapsulate all those parameters in a case class instance and pass it along, but that means I would have to modify multiple method signatures, and I want to avoid this approach as much as possible.
Another approach would be to create a global object containing all those input parameters, accessible from different points in the execution. Is that a good approach in Scala?
No, using global mutable variables to pass “hidden” parameters is not a good idea, not in Scala and not in any other programming language. It makes the code hard to understand and modify, because a function's behaviour will now depend on which functions were invoked earlier. And it's extremely fragile, because you might forget setting one of those global parameters before invoking the function, which means that it will use whatever value was stored there before. This is the kind of thing that can appear to work for years, and then break when you modify a completely unrelated part of the program.
I can't stress this enough: do not use global mutable variables, period. The solution is to man up and change those method signatures. Depending on the details, dependency injection may or may not help in your particular case.

Should naming of methods within interfaces be concrete or abstract?

Often when I create new classes, I first create a new interface. I name the methods of my interface exactly as I would like them to behave. A colleague of mine prefers to have these method names be more abstract, e.g. areConditionsMet(). His reasoning is that he wants to hide the 'implementation details'.
IMO implementation details are different from the expected behavior. Could anyone perhaps give more insight? My goal is to reach common ground with my colleague.
Your method names should describe what the method does, but not how it does it. The example you gave is a pretty poor method name, but it's better than isXGreaterThan1AndLessThan6(). Without knowing the details about what it should do, I would say that it should be specific to the problem at hand, but general enough that the implementation could change without affecting the name itself, i.e., you don't want the name of the method to be brittle. An example might be isTemperatureWithinRange() - that describes what I'm checking but doesn't describe how it's accomplished. The user of the method should be confident that the output will reflect whether the temperature is within a certain range; whether the range is supplied as an argument or defined by the contract of the class is immaterial.
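As a rough illustration of that advice (the interface and member names here are invented for the example, not taken from the question):

public interface IClimateMonitor
{
    // Good: names the observable behaviour and leaves the mechanism open.
    bool IsTemperatureWithinRange(double minimum, double maximum);

    // Bad: bakes the current implementation detail into the contract.
    // bool IsXGreaterThan1AndLessThan6();
}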
Interfaces should represent some behavior or capability, not the way it is to be accomplished. Users of interfaces should not be interested in the way a target is achieved; they just want to know it's done.
Implementation issues should not be included in the names of methods for that exact reason. The name of the table updated as a result of the method, or the technology used, has no place in your domain object's method names.
However, from your question it is hard to say what the exact case at hand is.
If you could provide more details, perhaps I could provide additional help.
The names of your interface methods should leave the user of the interface in no doubt about what the method proposes to do from a functional perspective. If the implementation matches that, well and good.
Based on your updated comments:
Sounds to me like you need two methods: isModified() and hasProperties(). Leave it up to the user (or a higher layer) of the domain object to determine whether a particular criterion is fulfilled.
An interface should also be designed with the view that, after it is released, it will never be changed. By saying isDomainObjectModifiedAndHasProperties() you are setting in stone that this is the criterion of fulfilment (regardless of any future, unforeseen implementation).
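A sketch of that split, with the caller composing the criterion (the names are illustrative only):

public interface IDomainObject
{
    bool IsModified { get; }
    bool HasProperties { get; }
}

// The user or higher layer decides what "fulfilled" means:
// bool fulfilled = domainObject.IsModified && domainObject.HasProperties;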

Is the word "Helper" in a class name a code smell?

We seem to be abstracting a lot of logic away from web pages and creating "helper" classes. Sadly, these classes all end up sounding the same, e.g.:
ADHelper (Active Directory),
AuthenticationHelper,
SharePointHelper
Do other people have a large number of classes with this naming convention?
I would say that it qualifies as a code smell, but remember that a code smell doesn't necessarily spell trouble. It is something you should look into and then decide if it is okay.
Having said that, I personally find that a name like that adds very little value, and because it is so generic the type may easily become a bucket of unrelated utility methods, i.e. a helper class may turn into a Large Class, which is one of the common code smells.
If possible I suggest finding a type name that more closely describes what the methods do. Of course this may prompt additional helper classes, but as long as their names are helpful I don't mind the numbers.
Some time ago I came across a class called XmlHelper during a code review. It had a number of methods that obviously all had to do with Xml. However, it wasn't clear from the type name what the methods had in common (aside from being Xml-related). It turned out that some of the methods were formatting Xml and others were parsing Xml. So IMO the class should have been split in two or more parts with more specific names.
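For example, the split might have looked something like this (XmlFormatter and XmlParser are hypothetical names, not the actual classes from that review):

using System.Xml.Linq;

// Formatting-only concerns.
public static class XmlFormatter
{
    public static string Indent(XDocument document) =>
        document.ToString(); // XDocument.ToString() emits indented XML by default
}

// Parsing-only concerns.
public static class XmlParser
{
    public static XDocument Parse(string xml) => XDocument.Parse(xml);
}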
As always, it depends on the context.
When you work with your own API I would definitely consider it a code smell, because FooHelper indicates that it operates on Foo, but the behavior would most likely belong directly on the Foo class.
However, when you work with existing APIs (such as types in the BCL), you can't change the implementation, so extension methods become one of the ways to address shortcomings in the original API. You could choose to name such classes FooHelper just as well as FooExtension. It's equally smelly (or not).
Depends on the actual content of the classes.
If a huge amount of actual business logic/business rules are in the helper classes, then I would say yes.
If the classes are really just helpers that can be used in other enterprise applications (re-use in the absolute sense of the word -- not copy then customize), then I would say the helpers aren't a code smell.
It is an interesting point: if a word becomes 'boilerplate' in names then it's probably a bit whiffy, if not quite a real smell. Perhaps using a 'Helper' folder and then allowing it to appear in the namespace keeps its use without overusing the word?
Application.Helper.SharePoint
Application.Helper.Authentication
and so on
In many cases, I use classes ending with Helper for static classes containing extension methods. Doesn't seem smelly to me. You can't put them into a non-static class, and the class itself does not matter, so Helper is fine, I think. Users of such a class won't see the class name anyway.
The .NET Framework does this as well (for example in the LogicalTreeHelper class from WPF, which just has a few static (non-extension) methods).
Ask yourself whether the code would be better if the code in your helper class were refactored into "real" classes, i.e. objects that fit into your class hierarchy. Code has to live somewhere, and if you can't make out a class/object where it really belongs, as with simple helper functions (hence "Helper"), you should be fine.
I wouldn't say that it is a code smell. In ASP.NET MVC it is quite common.

Do Extension Methods Hide Dependencies?

All,
Wanted to get a few thoughts on this. Lately I am becoming more and more of a subscriber to "purist" DI/IOC principles when designing/developing. Part of this (a big part) involves making sure there is little coupling between my classes, and that their dependencies are resolved via the constructor (there are certainly other ways of managing this, but you get the idea).
My basic premise is that extension methods violate the principles of DI/IOC.
I created the following extension method that I use to ensure that the strings inserted into database tables are truncated to the right size:
public static class StringExtensions
{
    public static string TruncateToSize(this string input, int maxLength)
    {
        int lengthToUse = maxLength;
        if (input.Length < maxLength)
        {
            lengthToUse = input.Length;
        }

        return input.Substring(0, lengthToUse);
    }
}
I can then call this on a string from within another class like so:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = myString.TruncateToSize(8);
A fair translation of this without using an extension method would be:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = StaticStringUtil.TruncateToSize(myString, 8);
Any class that uses either of the above examples could not be tested independently of the class that contains the TruncateToSize method (TypeMock aside). If I were not using an extension method, and I did not want to create a static dependency, it would look more like:
string myString = "myValue.TruncateThisPartPlease.";
string truncated = _stringUtil.TruncateToSize(myString, 8);
In the last example, the _stringUtil dependency would be resolved via the constructor and the class could be tested with no dependency on the actual TruncateToSize method's class (it could be easily mocked).
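A hedged sketch of what that last, constructor-injected version could look like (IStringUtil and CustomerWriter are hypothetical names used only for illustration):

public interface IStringUtil
{
    string TruncateToSize(string input, int maxLength);
}

public class CustomerWriter
{
    private readonly IStringUtil _stringUtil;

    // The dependency is supplied by the caller (or an IoC container) and can be mocked in tests.
    public CustomerWriter(IStringUtil stringUtil)
    {
        _stringUtil = stringUtil;
    }

    public string PrepareName(string name)
    {
        return _stringUtil.TruncateToSize(name, 8);
    }
}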
From my perspective, the first two examples rely on static dependencies (one explicit, one hidden), while the third inverts the dependency and provides reduced coupling and better testability.
So does the use of extension methods conflict with DI/IOC principles? If you're a subscriber of IOC methodology, do you avoid using extension methods?
I think it's fine - because it's not like TruncateToSize is a realistically replaceable component. It's a method which will only ever need to do a single thing.
You don't need to be able to mock out everything - just services which either disrupt unit testing (file access etc) or ones which you want to test in terms of genuine dependencies. If you were using it to perform authentication or something like that, it would be a very different matter... but just doing a straight string operation which has absolutely no configurability, different implementation options etc - there's no point in viewing that as a dependency in the normal sense.
To put it another way: if TruncateToSize were a genuine member of String, would you even think twice about using it? Do you try to mock out integer arithmetic as well, introducing IInt32Adder etc? Of course not. This is just the same, it's only that you happen to be supplying the implementation. Unit test the heck out of TruncateToSize and don't worry about it.
I see where you are coming from; however, if you are trying to mock out the functionality of an extension method, I believe you are using them incorrectly. Extension methods should be used to perform a task that would simply be inconvenient syntactically without them. Your TruncateToSize is a good example.
Testing TruncateToSize would not involve mocking it out; it would simply involve creating a few strings and testing that the method actually returns the proper value.
On the other hand, if you have code in your data layer contained in extension methods that is accessing your data store, then yes, you have a problem and testing is going to become an issue.
I typically only use extension methods in order to provide syntactic sugar for small, simple operations.
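For instance, a minimal NUnit-style test of the TruncateToSize extension from the question, along those lines and with no mocking involved (the test class name and cases are illustrative):

using NUnit.Framework;

[TestFixture]
public class StringExtensionsTests
{
    [TestCase("myValue.TruncateThisPartPlease.", 8, ExpectedResult = "myValue.")]
    [TestCase("short", 8, ExpectedResult = "short")]
    public string TruncateToSize_ReturnsAtMostMaxLengthCharacters(string input, int maxLength)
    {
        return input.TruncateToSize(maxLength);
    }
}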
Extension methods, partial classes and dynamic objects. I really like them; however, you must tread carefully, there be monsters here.
I would take a look at dynamic languages and see how they cope with this sort of problem on a day-to-day basis; it's really enlightening, especially since they have nothing to stop them from doing stupid things apart from good design and discipline. Everything is dynamic at run time, and the only thing to stop them is the computer throwing a major run-time error. "Duck typing" is the maddest thing I have ever seen. Good code comes down to good program design, respect for others on your team, and the trust that every member, although they have the ability to do some wacky things, chooses not to, because good design leads to better results.
As for your test scenario with mock objects/IOC/DI, would you really put some heavy-duty work in an extension method, or just some simple static stuff that operates in a functional way? I tend to use them as you would in a functional programming style: input goes in, results come out, with no magic in the middle, just straight-up framework classes that you know the guys at MS have designed and tested :P and that you can rely on.
If you are doing heavy-lifting stuff using extension methods, I would look at your program design again: check your CRC designs, class models, use cases, DFDs, action diagrams or whatever you like to use, and figure out where in this design you planned to put this stuff in an extension method instead of a proper class.
At the end of the day, you can only test against your system design and not code outside of your scope. If you're going to use extension classes, my advice would be to look at object composition models instead and use inheritance only when there is a very good reason.
Object composition always wins out with me, as it produces solid code. You can plug components in, take them out and do what you like with them. Mind you, this all depends on whether you use interfaces or not as part of your design. Also, if you use composition classes, the class hierarchy tree gets flattened into discrete classes and there are fewer places where your extension method will be picked up through inherited classes.
If you must use a class that acts upon another class, as is the case with extension methods, look at the visitor pattern first and decide if it's a better route.
It's a pain because they are hard to mock. I usually use one of these strategies:
Yep, scrap the extension; it's a PITA to mock out.
Use the extension and just test that it did the right thing, i.e. pass data into the truncate and check it got truncated.
If it's not some trivial thing, and I HAVE to mock it, I'll make my extension class have a setter for the service it uses, and set that in the test code.
i.e.
static class TruncateExtensions
{
    // Public setter so test code can inject a fake; the getter stays private to the class.
    public static ITruncateService Service { private get; set; }

    public static string TruncateToSize(this string s, int size)
    {
        // Fall back to the default implementation when no service has been injected.
        return (Service ?? (Service = new MyDefaultTruncateServiceImpl())).TruncateToSize(s, size);
    }
}
This is a bit scary because someone might set the service when they shouldn't, but I'm a little cavalier sometimes, and if it was really important, I could do something clever with #if TEST flags, or the ServiceLocator pattern to avoid the setter being used in production.

Understanding Interfaces

I have a class method that returns a list of employees that I can iterate through. What's the best way to return the list? Typically I just return an ArrayList. However, as I understand it, interfaces are better suited for this type of thing. Which would be the best interface to use? Also, why is it better to return an interface rather than the implementation (say, an ArrayList object)? It just seems like a lot more work to me.
Personally, I would use a List<Employee> for creating the list on the backend, and then return an IList. When you use interfaces, it gives you the flexibility to change the implementation without having to alter the code that uses yours. If you wanted to stick with an ArrayList, that'd be a non-generic IList.
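A small sketch of that approach (Employee and the containing class are assumed for the example):

public class EmployeeDirectory
{
    public IList<Employee> GetEmployees()
    {
        var employees = new List<Employee>();
        // ...populate the list from the backing store...
        return employees; // callers depend only on IList<Employee>, so the backing type can change later
    }
}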
@Jason: You may as well return IList<> because an array actually implements this interface.
The best way to do something like this would be to return, as you say, a List, preferably using generics, so it would be List<Employee>.
Returning a List rather than an ArrayList means that if you later decide to use, say, a LinkedList, you don't have to change any of the code other than where you create the object to begin with (i.e., the call to "new ArrayList()").
If all you are doing is iterating through the list, you can define a method that returns the list as IEnumerable (for .NET).
By returning the interface that provides just the functionality you need, if some new collection type comes along in the future that is better/faster/a better match for your application, as long as it still implements IEnumerable you can completely rewrite your method, using the new type inside it, without changing any of the code that calls it.
Is there any reason the collection needs to be ordered? Why not simply return an IEnumerable<Employee>? This gives the bare minimum that is required - if you later wanted some other form of storage, like a Bag or Set or Tree or whatnot, your contract would remain intact.
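A hedged sketch of that minimal contract (_employeeStore is a hypothetical field on the containing class):

public IEnumerable<Employee> GetEmployees()
{
    // Only enumeration is promised, so the backing store (array, list, whatever)
    // can change later without breaking callers.
    foreach (var employee in _employeeStore)
    {
        yield return employee;
    }
}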
I disagree with the premise that it's better to return an interface. My reason is that you want to maximize the usefulness a given block of code exposes.
With that in mind, an interface works for accepting an item as an argument. If a function parameter calls for an array or an ArrayList, that's the only thing you can pass to it. If a function parameter calls for an IEnumerable, it will accept either, as well as a number of other objects. It's more useful.
The return value, however, works the opposite way. When you return an IEnumerable, the only thing you can do is enumerate it. If you have a List handy and return that, then code that calls your function can also easily do a number of other things, like get a count.
I stand united with those advising you to get away from the ArrayList, though. Generics are so much better.
An interface is a contract between the implementation and the user of the implementation.
By using an interface, you allow the implementation to change as much as it wants as long as it maintains the contract for the users.
It also allows multiple implementations to use the same interface so that users can reuse code that interacts with the interface.
You don't say what language you're talking about, but in something .NET-ish it's no more work to return an IList than a List or even an ArrayList, though the mere mention of that obsolete class makes me think you're not talking about .NET.
An interface is essentially a contract that a class has certain methods or attributes; programming to an interface rather than a direct implementation allows for more dynamic and manageable code, as you can completely swap out implementations as long as the "contract" is still held.
In the case you describe, passing an interface does not give you a particular advantage. If it were me, I would pass the ArrayList with a generic type, or pass the array itself: list.toArray()
Actually, you shouldn't return a List if this is a framework or library, at least not without thinking about it; the recommended class to use is Collection<T>. The List<T> class has some performance advantages at the cost of extensibility issues; it is in fact an FxCop rule.
The reasoning for that is given in this article.
Return type for your method should be IList<Employee>.
That means that the caller of your method can use anything that IList offers but cannot use things specific to ArrayList. Then if you feel at some point that LinkedList or YourCustomSuperDuperList offers better performance or other advantages you can safely use it within your method and not screw callers of it.
That's roughly interfaces 101. ;-)