Methods of simplifying ugly nested if-else trees in C# - c#-3.0

Sometimes I find myself writing ugly if-else statements in C# 3.5; I'm aware of several approaches to simplifying them, such as table-driven development, class hierarchies, anonymous methods, and a few more.
The problem is that these alternatives are still less widespread than traditional ugly if-else statements, because there is no convention for them.
What depth of nested if-else is normal for C# 3.5? Which methods would you expect to see used instead of nested if-else first? Second?
If I have ten input parameters with three states each, I have to map functions to combinations of the states of every parameter (really fewer, because not all the states are valid, but sometimes still a lot). I can express these states as a hashtable key and a handler (lambda) which will be called if the key matches.
It is still a mix of table-driven and data-driven development ideas and pattern matching.
What I'm looking for is extending to C# the kind of approach used for scripting (C# 3.5 is rather like scripting), as in this post:
http://blogs.msdn.com/ericlippert/archive/2004/02/24/79292.aspx
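For example, a rough sketch of what I mean (the names and states here are invented for illustration):
using System;
using System.Collections.Generic;

enum Mode { Off, Standby, Active }

static class StateDispatcher
{
    // In C# 3.5 (before Tuple<>), a composite string key such as "<mode>|<hasLicense>"
    // is one simple way to encode a combination of parameter states.
    static readonly Dictionary<string, Func<decimal, decimal>> Handlers =
        new Dictionary<string, Func<decimal, decimal>>
        {
            { "Off|False",    amount => 0m },
            { "Standby|True", amount => amount * 0.5m },
            { "Active|True",  amount => amount },
            // invalid combinations are simply not listed
        };

    public static decimal Handle(Mode mode, bool hasLicense, decimal amount)
    {
        string key = mode + "|" + hasLicense;
        Func<decimal, decimal> handler;
        if (Handlers.TryGetValue(key, out handler))
            return handler(amount);
        throw new ArgumentException("No handler for state combination: " + key);
    }
}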

Good question. "Conditional Complexity" is a code smell. Polymorphism is your friend.
Conditional logic is innocent in its infancy, when it’s simple to understand and contained within a
few lines of code. Unfortunately, it rarely ages well. You implement several new features and
suddenly your conditional logic becomes complicated and expansive. [Joshua Kerievsky: Refactoring to Patterns]
One of the simplest things you can do to avoid nested if blocks is to learn to use Guard Clauses.
double getPayAmount() {
    if (_isDead) return deadAmount();
    if (_isSeparated) return separatedAmount();
    if (_isRetired) return retiredAmount();
    return normalPayAmount();
}
The other thing I have found simplifies things pretty well, and which makes your code self-documenting, is Consolidating conditionals.
double disabilityAmount() {
    if (isNotEligibleForDisability()) return 0;
    // compute the disability amount
}
Other valuable refactoring techniques associated with conditional expressions include Decompose Conditional, Replace Conditional with Visitor, Specification Pattern, and Reverse Conditional.
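For instance, here is a minimal, hypothetical sketch of replacing a conditional with polymorphism, reusing the pay example above (the amounts are placeholders, not from the original):
abstract class EmployeeStatus
{
    public abstract double PayAmount();
}

class Dead : EmployeeStatus
{
    public override double PayAmount() { return 0.0; }      // was deadAmount()
}

class Separated : EmployeeStatus
{
    public override double PayAmount() { return 500.0; }    // was separatedAmount()
}

class Retired : EmployeeStatus
{
    public override double PayAmount() { return 1200.0; }   // was retiredAmount()
}

class Normal : EmployeeStatus
{
    public override double PayAmount() { return 2000.0; }   // was normalPayAmount()
}

// The caller no longer branches at all:
// double pay = employee.Status.PayAmount();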

There are very old "formalisms" for trying to encapsulate extremely complex expressions that evaluate many possibly independent variables, for example, "decision tables":
http://en.wikipedia.org/wiki/Decision_table
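As a rough illustration (not from the linked article), a decision table can be sketched in C# 3.5 as an ordered list of (condition, outcome) rows, where the first matching row wins; the rule names below are invented:
using System;
using System.Collections.Generic;

static class DiscountRules
{
    // Each row pairs a predicate over the inputs with the discount it grants.
    static readonly List<KeyValuePair<Func<decimal, bool, bool>, decimal>> Table =
        new List<KeyValuePair<Func<decimal, bool, bool>, decimal>>
        {
            new KeyValuePair<Func<decimal, bool, bool>, decimal>((total, isMember) => isMember && total > 100m, 0.15m),
            new KeyValuePair<Func<decimal, bool, bool>, decimal>((total, isMember) => isMember, 0.10m),
            new KeyValuePair<Func<decimal, bool, bool>, decimal>((total, isMember) => total > 100m, 0.05m),
        };

    public static decimal Discount(decimal total, bool isMember)
    {
        foreach (var row in Table)
            if (row.Key(total, isMember))
                return row.Value;   // first matching row wins
        return 0m;                  // default row
    }
}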
But I'll join the choir here and second the ideas already mentioned: judicious use of the ternary operator where possible; identifying the most unlikely conditions which, if met, allow you to terminate the rest of the evaluation by excluding them first; and, the reverse of that, factoring out the most probable conditions and states so that you can proceed without testing the "fringe" cases.
The suggestion by Miriam (above) is fascinating, even elegant, as "conceptual art"; I am actually going to try it out, while "bracketing" my suspicion that it will lead to code that is harder to maintain.
My pragmatic side says there is no "one size fits all" answer here in the absence of a pretty specific code example, and complete description of the conditions and their interactions.
I'm a fan of "flag setting": meaning anytime my application goes into some less common "mode" or "state", I set a boolean flag (which might even be static for the class); for me that simplifies writing complex if/then/else evaluations later on.
best, Bill

Simple. Take the body of the if and make a method out of it.
This works because most if statements are of the form:
if (condition):
    action()
In other cases, more specifically:
if (condition1):
    if (condition2):
        action()
simplify to:
if (condition1 && condition2):
    action()
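A small C# sketch of the same idea, with hypothetical order-processing names: the nested bodies become methods, and the combined condition reads like a sentence.
using System.Collections.Generic;
using System.Linq;

class Order
{
    public decimal Total;
    public bool IsCancelled;
    public List<int> ItemStockLevels = new List<int>();
}

class OrderProcessor
{
    public void Process(Order order)
    {
        // was: if (payable) { if (in stock) { ship } }
        if (IsPayable(order) && IsInStock(order))
            Ship(order);
    }

    bool IsPayable(Order order) { return order.Total > 0m && !order.IsCancelled; }

    bool IsInStock(Order order) { return order.ItemStockLevels.All(stock => stock > 0); }

    void Ship(Order order) { /* shipping logic elided */ }
}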

I'm a big fan of the ternary operator, which gets overlooked by a lot of people. It's great for assigning values to variables based on conditions, like this:
foobarString = (foo == bar) ? "foo equals bar" : "foo does not equal bar";
Try this article for more info.
It won't solve all your problems, but it is very economical.

I know that this is not the answer you are looking for, but without context your question is very hard to answer. The problem is that the way to refactor such a thing really depends on your code, what it is doing, and what you are trying to accomplish. If you had said that you were checking the type of an object in these conditionals, we could throw out an answer like 'use polymorphism', but sometimes you actually do just need some if statements, and sometimes those statements can be refactored into something simpler. Without a code sample it is hard to say which category you are in.

I was told years ago by an instructor that 3 is a magic number. And as he applied it to if-else statements, he suggested that if I needed more than 3 ifs then I should probably use a switch statement instead.
switch (testValue)
{
    case 1:
        // do something
        break;
    case 2:
        // do something else
        break;
    case 3:
        // do something more
        break;
    case 4:
        // do what?
        break;
    default:
        throw new Exception("I didn't do anything");
}
If you're nesting if statements more than 3 deep then you should probably take that as a sign that there is a better way, probably, as Avirdlg suggested, separating the nested if statements into one or more methods. If you feel you are absolutely stuck with multiple if-else statements, then I would wrap all the if-else statements into a single method so they don't clutter up other code.

If the entire purpose is to assign a different value to some variable based upon the state of various conditionals, I use a ternary operator.
If the if-else clauses are performing separate chunks of functionality and the conditions are complex, simplify by creating temporary boolean variables to hold the true/false values of the complex boolean expressions. These variables should be suitably named to represent the business sense of what the complex expression is calculating. Then use the boolean variables in the if-else syntax instead of the complex boolean expressions.
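A hedged sketch of this advice with made-up business rules: name the pieces of the complex condition, then branch on the names instead of the raw expressions.
using System;

class Purchase
{
    public decimal Total;
    public bool IsCancelled;
    public DateTime ShippedOn;
}

class RefundPolicy
{
    public string Decide(Purchase purchase)
    {
        // Each temporary boolean names the business meaning of one sub-expression.
        bool hasChargeToReverse = purchase.Total > 0m && !purchase.IsCancelled;
        bool isWithinReturnWindow = (DateTime.Now - purchase.ShippedOn).TotalDays <= 30;

        if (hasChargeToReverse && isWithinReturnWindow)
            return "refund";
        else if (hasChargeToReverse)
            return "store credit";
        else
            return "reject";
    }
}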

One thing I find myself doing at times is inverting the condition followed by return; several such tests in a row can help reduce nesting of if and else.

Not a C# answer, but you probably would like pattern matching. With pattern matching, you can take several inputs, and do simultaneous matches on all of them. For example (F#):
let x =
    match cond1, cond2, name with
    | _, _, "Bob" -> 9000      // Bob gets 9000, regardless of cond1 or 2
    | false, false, _ -> 0
    | true, false, _ -> 1
    | false, true, _ -> 2
    | true, true, "" -> 0      // Both conds but no name gets 0
    | true, true, _ -> 3       // Cond1 & 2 give 3
You can express any combination to create a match (this just scratches the surface). However, C# doesn't support this, and I doubt it will any time soon. Meanwhile, there are some attempts to try this in C#, such as here: http://codebetter.com/blogs/matthew.podwysocki/archive/2008/09/16/functional-c-pattern-matching.aspx. Google can turn up many more; perhaps one will suit you.

Try to use patterns like Strategy or Command.
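For example, a minimal, hypothetical Strategy sketch: each branch of the former if/else chain becomes its own class, and the caller just picks one from a table.
using System;
using System.Collections.Generic;

interface IShippingStrategy
{
    decimal Cost(decimal weightKg);
}

class StandardShipping : IShippingStrategy
{
    public decimal Cost(decimal weightKg) { return 5m + weightKg * 0.5m; }
}

class ExpressShipping : IShippingStrategy
{
    public decimal Cost(decimal weightKg) { return 15m + weightKg * 1.0m; }
}

static class Shipper
{
    static readonly Dictionary<string, IShippingStrategy> Strategies =
        new Dictionary<string, IShippingStrategy>
        {
            { "standard", new StandardShipping() },
            { "express", new ExpressShipping() },
        };

    public static decimal Quote(string kind, decimal weightKg)
    {
        return Strategies[kind].Cost(weightKg);   // no if/else chain here
    }
}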

In simple cases you should be able to get by with basic functional decomposition. For more complex scenarios I have used the Specification Pattern with great success.
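A rough Specification Pattern sketch (illustrative only, not the answerer's actual code): each business rule is an object exposing IsSatisfiedBy, and rules compose with And/Or instead of growing a nested if-else tree.
using System;

interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

class Spec<T> : ISpecification<T>
{
    private readonly Func<T, bool> predicate;

    public Spec(Func<T, bool> predicate) { this.predicate = predicate; }

    public bool IsSatisfiedBy(T candidate) { return predicate(candidate); }

    public Spec<T> And(ISpecification<T> other)
    {
        return new Spec<T>(c => IsSatisfiedBy(c) && other.IsSatisfiedBy(c));
    }

    public Spec<T> Or(ISpecification<T> other)
    {
        return new Spec<T>(c => IsSatisfiedBy(c) || other.IsSatisfiedBy(c));
    }
}

// Usage: compose the rules once, reuse them everywhere.
// var isSenior = new Spec<int>(age => age >= 65);
// var isChild = new Spec<int>(age => age < 12);
// var getsDiscount = isSenior.Or(isChild);
// bool ok = getsDiscount.IsSatisfiedBy(70);   // true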

Related

Is it bad style to replace map or flatMap with return and get on Try in Scala?

I'm wondering if it is bad code style to replace map with return and get on Try for readability. Say I have some variable holding a Try and then I need to do something with it.
val myData: Try[String]
I can do:
myData.flatMap{
some long code
}
Or I can do:
if (myData.isFailure) return myData
val myString = myData.get
some long code that use myString
Yes, I would regard this as bad style. One of the biggest gains I have made as a programmer has come from replacing statements with expressions. So I would rewrite your example using pattern matching.
myData match {
  case Success(myString) =>
    some long code that use myString
  case Failure(_) => tFileDf
}
I understand that if-guards are very common in languages like Java and C#, and they even make the code easier to read in those cases. Since moving to Scala I find fewer cases where if-guards would improve my code. However, the move to expressions has greatly improved the quality of all of my code.
Once you make this change you will gradually use less mutable state and fewer side effects. Gradually your methods will become pure functions. They will probably become shorter. Then you will discover the benefits of total functions. These things improve all of your code, whereas if-guards are local optimisations that will start to clash with modern Scala code, and even make it error-prone to change.
In the case above I would probably consider exposing the Try; this exposes the failure case more explicitly than returning a default or error value of the same return type.
Another reason that return is discouraged is that it does not always play nicely inside functions.
My advice is to embrace the more functional aspects of Scala and see where it takes you. It may take you more time to write than the equivalent in Java, but it pays dividends very quickly.

Is there any advantage to avoiding while loops in Scala?

Reading Scala docs written by the experts, one can get the impression that tail recursion is better than a while loop, even when the latter is more concise and clearer. Here is one example:
import scala.annotation.tailrec

object Helpers {
  implicit class IntWithTimes(val pip: Int) {
    // Recursive
    def times(f: => Unit): Unit = {
      @tailrec
      def loop(counter: Int): Unit = {
        if (counter > 0) { f; loop(counter - 1) }
      }
      loop(pip)
    }

    // Explicit loop
    def :#(f: => Unit) = {
      var lc = pip
      while (lc > 0) { f; lc -= 1 }
    }
  }
}
(To be clear, the expert was not addressing looping at all, but in the example they chose to write a loop in this fashion as if by instinct, which is what raised the question for me: should I develop a similar instinct?)
The only aspect of the while loop that could be better is that the iteration variable should be local to the body of the loop, and the mutation of the variable should happen in a fixed place, but Scala chooses not to provide that syntax.
Clarity is subjective, but the question is: does the (tail) recursive style offer improved performance?
I'm pretty sure that, due to the limitations of the JVM, not every potentially tail-recursive function will be optimised by the Scala compiler, so the short (and sometimes wrong) answer to your question on performance is no.
The long answer to your more general question (having an advantage) is a little more contrived. Note that, by using while, you are in fact:
creating a new variable that holds a counter.
mutating that variable.
Off-by-one errors and the perils of mutability will ensure that, in the long run, you'll introduce bugs with a while pattern. In fact, your times function could easily be implemented as:
def times(f: => Unit) = (1 to pip) foreach (_ => f)
This is not only simpler and smaller, but also avoids any creation of transient variables and mutability. In fact, if the function you were calling returned results that mattered, the while construction would be even more difficult to read. Please attempt to implement the following using nothing but while loops:
def replicate(l: List[Int])(times: Int) = l.flatMap(x => List.fill(times)(x))
Then proceed to define a tail-recursive function that does the same.
UPDATE:
I hear you saying: "hey! that's cheating! foreach is neither a while nor a tail-rec call". Oh really? Take a look at Scala's definition of foreach for Lists:
def foreach[B](f: A => B) {
  var these = this
  while (!these.isEmpty) {
    f(these.head)
    these = these.tail
  }
}
If you want to learn more about recursion in Scala, take a look at this blog post. Once you are into functional programming, go crazy and read Rúnar's blog post. Even more info here and here.
In general, a directly tail recursive function (i.e., one that always calls itself directly and cannot be overridden) will always be optimized into a while loop by the compiler. You can use the @tailrec annotation to verify that the compiler is able to do this for a particular function.
As a general rule, any tail recursive function can be rewritten (usually automatically by the compiler) as a while loop and vice versa.
The purpose of writing functions in a (tail) recursive style is not to maximize performance or even conciseness, but to make the intent of the code as clear as possible, while simultaneously minimizing the chance of introducing bugs (by eliminating mutable variables, which generally make it harder to keep track of what the "inputs" and "outputs" of the function are). A properly written recursive function consists of a series of checks for terminating conditions (using either cascading if-else or a pattern match) with the recursive call(s) (plural only if not tail recursive) made if none of the terminating conditions are met.
The benefit of using recursion is most dramatic when there are several different possible terminating conditions. A series of if conditionals or patterns is generally much easier to comprehend than a single while condition with a whole bunch of (potentially complex and inter-related) boolean expressions &&'d together, especially if the return value needs to be different depending on which terminating condition is met.
Did these experts say that performance was the reason? I'm betting their reasons are more to do with expressive code and functional programming. Could you cite examples of their arguments?
One interesting reason why recursive solutions can be more efficient than more imperative alternatives is that they very often operate on lists and in a way that uses only head and tail operations. These operations are actually faster than random-access operations on more complex collections.
Another reason that while-based solutions may be less efficient is that they can become very ugly as the complexity of the problem increases...
(I have to say, at this point, that your example is not a good one, since neither of your loops does anything useful. Your recursive loop is particularly atypical since it returns nothing, which implies that you are missing a major point about recursive functions: the functional bit. A recursive function is much more than another way of repeating the same operation n times.)
While loops do not return a value and require side effects to achieve anything. They are a control structure which only works at all for very simple tasks. This is because each iteration of the loop has to examine all of the state to decide what to do next. The loop's boolean expression may also have to become very complex if there are multiple potential exit paths (or that complexity has to be distributed throughout the code in the loop, which can be ugly and obfuscatory).
Recursive functions offer the possibility of a much cleaner implementation. A good recursive solution breaks a complex problem down into simpler parts, then delegates each part to another function which can deal with it - the trick being that that other function is itself (or possibly a mutually recursive function, though that is rarely seen in Scala - unlike the various Lisp dialects, where it is common - because of the poor tail recursion support). The recursively called function receives in its parameters only the simpler subset of data and only the relevant state; it returns only the solution to the simpler problem. So, in contrast to the while loop,
Each iteration of the function only has to deal with a simple subset of the problem
Each iteration only cares about its inputs, not the overall state
Success in each subtask is clearly defined by the return value of the call that handled it.
State from different subtasks cannot become entangled (since it is hidden within each recursive function call).
Multiple exit points, if they exist, are much easier to represent clearly.
Given these advantages, recursion can make it easier to achieve an efficient solution. Especially if you count maintainability as an important factor in long-term efficiency.
I'm going to go find some good examples of code to add. Meanwhile, at this point I always recommend The Little Schemer. I would go on about why but this is the second Scala recursion question on this site in two days, so look at my previous answer instead.

if (Option.nonEmpty) vs Option.foreach

I want to perform some logic if the value of an option is set.
Coming from a java background, I used:
if (opt.nonEmpty) {
//something
}
Going a little further into scala, I can write that as:
opt.foreach(o => {
//something
})
Which one is better? The "foreach" one sounds more "idiomatic" and less Java, but it is less readable - "foreach" applied to a single value sounds weird.
Your example is not complete and you don't use minimal syntax. Just compare these two versions:
if (opt.nonEmpty) {
  val o = opt.get
  // ...
}
// vs
opt foreach { o =>
  // ...
}
and
if (opt.nonEmpty)
  doSomething(opt.get)
// vs
opt foreach doSomething
In both versions there is more syntactic overhead in the if solution, but I agree that foreach on an Option can be confusing if you think of it only as an optional value.
Instead, foreach describes that you want to perform some sort of side effect, which makes a lot of sense if you think of Option as a monad and foreach as just a method to act on it. Furthermore, using foreach has the great advantage that it makes refactorings easier - you can just change the type to a List or any other monad and you will not get any compiler errors (because of Scala's great collections library you are not constrained to using only operations that work on monads; there are a lot of methods defined on a lot of types).
foreach does make sense, if you think of Option as being like a List, but with a maximum of one element.
A neater style, IMO, is to use a for-comprehension:
for (o <- opt) {
  something(o)
}
foreach makes sense if you consider Option to be a list that can contain at most a single value. This also leads to a correct intuition about many other methods that are available to Option.
I can think of at least one important reason you might want to prefer foreach in this case: it removes possible run-time errors. With the nonEmpty approach, you'll at some point have to do a get*, which can crash your program spectacularly if you accidentally forget to check for emptiness just once.
If you completely erase get from your mind to avoid bugs of that kind, a side effect is that you also have less use for nonEmpty! And you'll start to enjoy foreach and let the compiler take care of what should happen if the Option happens to be empty.
You'll see this concept cropping up in other places. You would never do
if (age.nonEmpty)
  Some(age.get >= 18)
else
  None
What you'll rather see is
age.map(_ >= 18)
The principle is that you want to avoid having to write code that handles the failure case – you want to offload that burden to the compiler. Many will tell you that you should never use get, however careful you think you are about pre-checking. So that makes things easier.
* Unless you actually just want to know whether or not the Option contains a value and don't really care for the value, in which case nonEmpty is just fine. In that case it serves as a sort of toBoolean.
Things I didn't find in the other answers:
When using if, I prefer if (opt isDefined) to if (opt nonEmpty) as the former is less collection-like and makes it more clear we're dealing with an option, but that may be a matter of taste.
if and foreach are different in the sense that if is an expression that will return a value while foreach returns Unit. So in a certain way using foreach is even more Java-like than if, as Java has a foreach loop and Java's if is not an expression. Using a for comprehension is more Scala-like.
You can also use pattern matching (but this is also less idiomatic).
You can use the fold method, which takes two functions so you can evaluate one expression in the Some case and another in the None case. You may need to explicitly specify the expected type because of how type inference works (as shown here). So in my opinion it may sometimes still be clearer to use either pattern matching or val result = if (opt isDefined) expression1 else expression2.
If you don't need a return value and really have no need to handle the not-defined case, you can use foreach.

The case for point free style in Scala

This may seem really obvious to the FP cognoscenti here, but what is point free style in Scala good for? What would really sell me on the topic is an illustration that shows how point free style is significantly better in some dimension (e.g. performance, elegance, extensibility, maintainability) than code solving the same problem in non-point free style.
Quite simply, it's about being able to avoid specifying a name where none is needed. Consider a trivial example:
List("a","b","c") foreach println
In this case, foreach is looking to accept String => Unit, a function that accepts a String and returns Unit (essentially, there's no usable return value and it works purely through side effects).
There's no need to bind a name here to each String instance that's passed to println. Arguably, it just makes the code more verbose to do so:
List("a","b","c") foreach {println(_)}
Or even
List("a","b","c") foreach {s => println(s)}
Personally, when I see code that isn't written in point-free style, I take it as an indicator that the bound name may be used twice, or that it has some significance in documenting the code. Likewise, I see point-free style as a sign that I can reason about the code more simply.
One appeal of point-free style in general is that without a bunch of "points" (values as opposed to functions) floating around, which must be repeated in several places to thread them through the computation, there are fewer opportunities to make a mistake, e.g. when typing a variable's name.
However, the advantages of point-free are quickly counterbalanced in Scala by its meagre ability to infer types, a fact which is exacerbated by point-free code because "points" serve as clues to the type inferencer. In Haskell, with its almost-complete type inferencing, this is usually not an issue.
I see no other advantage than "elegance": it's a little bit shorter, and may be more readable. It allows you to reason about functions as entities, without mentally going a "level deeper" to function application, but of course you need to get used to it first.
I don't know any example where performance improves by using it (maybe it gets worse in cases where you end up with a function when a method would be sufficient).
Scala's point-free syntax is part of the magic Scala operators-which-are-really-functions. Even the most basic operators are functions:
For example:
val x = 1
val y = x + 1
...is the same as...
val x = 1
val y = x.+(1)
...but of course, the point-free style reads more naturally (the plus appears to be an operator).

Boolean functions: if statements or simple return

I was checking a friend's code, and this pattern showed up quite a bit whenever he wrote functions that returned a boolean value:
def multiple_of_three(n):
    if (n % 3) is 0:
        return True
    else:
        return False
I maintain that it's simpler (and perhaps a bit faster) to write:
def multiple_of_three(n):
    return (n % 3) is 0
Am I correct that the second implementation is faster? Also, is it any less readable or somehow frowned upon?
I very much doubt there is a compiler or interpreter still out there where there is a significant speed difference - most will generate exactly the same code in both situations. But your "direct return" method is, in my opinion, clearer and more maintainable.
I can't speak to the Python interpreter's exact behavior, but choosing one way over the other like that (in any language) simply for reasons of "faster" is misguided, and qualifies as "premature optimization". As Paul Tomblin stated in another answer, the difference in speed, if any, is quite negligible. Common practice does dictate, however, that the second form in this case is more readable. If an expression is already boolean, the if statement wrapper is superfluous.
See also http://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
The second form is the preferred form. In my experience the first form is usually the sign of an inexperienced programmer (and this does not apply solely to Python - this crops up in most languages).