What are your thoughts on Yoda conditions in Swift (iOS)? [closed] - swift

I just want to know from experienced iOS developers: what are the advantages and disadvantages of using the "Yoda condition"? I am learning the Swift language and don't know much about this topic, and I did not find any suitable answer. Any help would be highly appreciated. Thank you.

C, C++, Objective-C, and Java (and some other languages) have three properties that work together in an error-prone way:
Assignment is an expression. That is, x = 1 is an expression with a value (the value 1 in this example).
Conditionals in control flow statements (like if and while statements) allow any expression that can be coerced to a Boolean value. Integers, floating point numbers, and pointers can all be coerced to Boolean: zero or null means false and anything else means true.
The assignment operator = and the equality operator == are very similar.
Because of these three features, it is easy to accidentally write if (x = 1) when you meant to write if (x == 1). Your program still compiles, but probably behaves incorrectly some of the time.
The “Yoda conditional” is a style that helps prevent this error. If you are in the habit of writing if (1 == x), and you accidentally write if (1 = x), your program will not compile.
However, the if (x = 1) error cannot happen in Swift, because Swift lacks two of the three properties described above.
In Swift, an assignment like x = 1 is a statement, not an expression. It cannot be used as the conditional of a control statement.
In Swift, the conditional of a control statement must explicitly be a Bool. No other types are automatically coerced to Bool. Even if assignment were an expression, if (x = 1) would still be prohibited, because x = 1 would have the type of x, which cannot be Bool since 1 is not a Bool value.
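To see this concretely, here is a minimal Swift sketch (the compiler messages in the comments are paraphrased):

var x = 0

// if x = 1 { }   // rejected: '=' used in a condition; the compiler suggests '=='
// if x { }       // rejected: an Int cannot be used where a Bool is required
if x == 1 {        // fine: '==' produces a Bool
    print("x is one")
}
x = 1              // assignment is a statement; it yields no usable value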
So there is no reason to ever use a Yoda condition in Swift (unless you find it clearer for some other reason).
Note also that many modern compilers for other languages will print a warning if you use an assignment as a conditional, so Yoda syntax isn't as useful as it used to be in C, C++, and Objective-C.

The concept of a "Yoda condition" is nonsensical, because:
the putative idea of "Yoda" style is that the two sides of the == operator are reversed "from what they usually would be"...
However, that is nonsensical, because there is no sense at all of what they "usually would be".
It's a fairly dumb piece of made-up lingo.
Programmers write the if condition simply depending on what best expresses the algorithm at hand.
Forget about it.
At best, it is of historic interest as a curiosity, irrelevant to any new programmer learning today (such as the OP here).
In the sense of the OP's question, it is irrelevant: there is no right or wrong order.
At best it's a matter of style in expressing the idea of the algorithm at hand.

Just to expand on what others have already said, Yoda conditions are a holdover from C, because of a specific bug that is easy to make in C and impossible in Swift. The bug in question goes like this:
if (x = 5) { ... }
The problem is that x = 5 is an assignment; you probably meant x == 5. In C, an assignment is an expression that yields the assigned value, so x = 5 yields 5, which is not 0, so the condition is true no matter what x is. This was a really common bug until compilers finally started to warn about it. And C idiom actually relies on this behavior, so it's not something C could remove. It's very common to use it in code like:
if (ch = getchar()) { ... }
The fix for this bug was to invert the condition by habit. if (5 = x) { ... } fails to compile, because you cannot assign to the literal 5.
This is all irrelevant in Swift. Swift made = a statement rather than an expression, specifically to avoid this kind of bug. So if x = 5 { ... } is already a compile-time error.
Yoda style just makes things harder to read; don't use it in Swift. Even in C, the compiler will give you stern warnings about this bug, so unless you are working in an old code base where it is the style, there's no reason to use it even there.
I'm not an experienced PHP programmer, but I believe Yoda conditions are also common in PHP, and there they may still make sense. But not in Swift.

Related

Is it possible to have a compiler which optimizes a = func(a)? [closed]

Say I have an object of type A. Consider this case for any function of type A -> A (i.e., one that takes an object of type A and returns another object of type A):
foo = func(foo)
Here, the simplest case would be for the result of func(foo) to be copied into foo.
Is it possible to optimize this so that:
foo gets modified in-place in func?
There are no constraints on the language used. What I want to know is what constraints and properties a language must have to enable such an optimization. Are there any existing languages which perform it?
Example (in pseudocode):

type Matrix = List<List<int>>

Matrix rotate90Deg(Matrix x):
    # Assume Matrix has a constructor taking the number of rows and columns.
    Matrix result(x.columns, x.rows)
    for (int i = 0; i < x.rows; i++):
        for (int j = 0; j < x.columns; j++):
            result[i][j] = x[j][i]
    return result

Matrix a = [[1,2,3],[4,5,6],[7,8,9]]
a = rotate90Deg(a)
Here, is it possible to optimize the code so that it doesn't allocate memory for a new matrix (result), and instead just modifies the original matrix passed in?
First of all, you have to realize that some operations are inherently impossible to compute in-place. Matrix-matrix multiplication is an example of this, and rotate90Deg would fall under this category, since such an operation is essentially a matrix multiplication by an appropriate permutation matrix.
Now as for your example, you actually coded up a matrix transpose function. Matrix transpose can be done in-place, since you are swapping pairs of numbers, but I doubt that any compiler can automatically detect this and optimize it for you. Indeed, there are many, many tricks one can do to optimize matrix transpose to be cache-friendly and gain huge performance increases. Nevertheless, with a naive implementation, you will almost certainly end up with something very similar to what Aditya Kumar describes in his answer.
As I foreshadowed by using the word "naive" earlier, programmers can coax the compiler into inlining lots and lots of things in extremely optimized ways through advanced templating and other metaprogramming techniques (at least in C++, and maybe in other languages that allow you to overload operator=). For anyone interested in a case study of how this is done and what is involved, take a look at the Eigen matrix library, and how it handles a simple operation like u = v + w; where the three variables are all matrices of floats. What follows is a brief overview of the key points.
A naive implementation would overload operator+ to return a temporary and operator= to copy that temporary into the result. Of course, in C++11 it is pretty easy to avoid the final copy during assignment by way of move constructors, but you will still have unnecessary temporaries if you have something a little more complex with multiple operators on the right-hand side, like u = 3.15f * u.transposed() + 5.0f;, since each operator/method returns a temporary, and that temporary would have to be looped over in order to process the next operator.
Long story short: rather than performing each operation when the corresponding function call occurs, Eigen's calls return a templated functor of sorts which merely describes the operation that needs to take place, and all the actual work happens in operator=, thus enabling the compiler to emit a single inlined loop that traverses the data only once and does the operation truly in-place.
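Swift does not allow overloading =, but the shape of the trick can be sketched there too: the operator returns a lightweight description of the pending work, and a single fused loop runs when the result is finally written. The following is a rough, hypothetical sketch of the idea (with invented Vec/VecSum types), not Eigen's actual machinery:

// Hypothetical sketch: '+' returns a description of the pending work
// instead of eagerly computing a temporary.
struct Vec {
    var data: [Float]
}

struct VecSum {
    let lhs: Vec
    let rhs: Vec
    subscript(i: Int) -> Float { lhs.data[i] + rhs.data[i] } // computed on demand
}

func + (lhs: Vec, rhs: Vec) -> VecSum { VecSum(lhs: lhs, rhs: rhs) }

// The analogue of Eigen's operator=: one fused loop, writing directly into u.
func assign(_ expr: VecSum, into u: inout Vec) {
    for i in u.data.indices { u.data[i] = expr[i] }
}

var u = Vec(data: [1, 2, 3])
let v = Vec(data: [4, 5, 6])
let w = Vec(data: [7, 8, 9])
assign(v + w, into: &u) // no temporary vector for v + w is ever materialized
print(u.data)           // [11.0, 13.0, 15.0]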
Yes, it is possible, and this optimization is provided by at least C++11 (via inlining).
To explain the optimization a little, take this example:
foo_t foo;
foo = func(foo); // #1

foo_t func(foo_t foo1) {
    foo_t new_foo;
    // operate on new_foo by using foo1
    return new_foo;
}
There are three instances of foo_t being made:
foo is copied and passed as foo1 to func.
new_foo is created.
new_foo is assigned to foo by copying the contents of new_foo into foo.
All three copies can be eliminated provided some invariants hold:
foo (the argument passed to the function) is never used later with its original value. This is equivalent to saying that foo is 'dead' at line #1, which holds here because foo is immediately reassigned.
the lifetime of new_foo inside func does not extend beyond the lifetime of func itself. This also holds here: new_foo is created on the stack, and the lifetime of stack objects matches the lifetime of the function that created them.
In C++ this can be achieved by inlining the function func. After inlining, the code basically looks like this:

foo_t foo;
foo_t new_foo;
// operate on new_foo by using foo
foo = new_foo;
Although C++ provides inlining as a language feature, almost any optimizing compiler does inlining these days.
Whether the extra new_foo is then optimized away depends on what kind of operations you perform on new_foo and foo. For some data types it is trivial: the compiler can do a 'copy propagation' followed by 'dead-code elimination' to remove new_foo completely.
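As an aside, some languages let you request the in-place form explicitly rather than relying on the optimizer. Here is a hedged Swift sketch using inout, for the transpose case, which (unlike a general rotation) genuinely can be done in place; all names are invented:

// Transpose a square matrix in place by swapping symmetric pairs.
func transposeInPlace(_ m: inout [[Int]]) {
    for i in 0..<m.count {
        for j in (i + 1)..<m.count {
            let tmp = m[i][j]
            m[i][j] = m[j][i]
            m[j][i] = tmp
        }
    }
}

var a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
transposeInPlace(&a) // no second matrix is allocated
print(a)             // [[1, 4, 7], [2, 5, 8], [3, 6, 9]]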

Why do immutable objects enable functional programming?

I'm trying to learn Scala and I'm unable to grasp this concept. Why does making an object immutable help prevent side effects in functions? Can anyone explain it like I'm five?
Interesting question, a bit difficult to answer.
Functional programming is very much about using mathematics to reason about programs. To do so, one needs a formalism that describes programs and how one can make proofs about the properties they might have.
There are many models of computation that provide such formalisms, such as lambda calculus and Turing machines. And there's a certain degree of equivalence between them (see this question for a discussion).
In a very real sense, programs with mutability and some other side effects have a direct mapping to functional programs. Consider this example:
a = 0
b = 1
a = a + b
Here are two ways of mapping it to a functional program. In the first, a and b are part of a "state", and each line is a function from one state to a new state:
state1 = (a = 0, b = ?)
state2 = (a = state1.a, b = 1)
state3 = (a = state2.a + state2.b, b = state2.b)
Here's another, where each variable is associated with a time:
(a, t0) = 0
(b, t1) = 1
(a, t2) = (a, t0) + (b, t1)
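Here is roughly what the first mapping looks like as running Swift code, with each line of the mutable program becoming a pure function from one State to the next (a placeholder stands in for the undefined initial b):

struct State { let a: Int; let b: Int }

let step1: (State) -> State = { s in State(a: 0, b: s.b) }         // a = 0
let step2: (State) -> State = { s in State(a: s.a, b: 1) }         // b = 1
let step3: (State) -> State = { s in State(a: s.a + s.b, b: s.b) } // a = a + b

let start = State(a: 0, b: 0) // b is really "?"; 0 is just a placeholder
let result = step3(step2(step1(start)))
print(result.a) // 1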
So, given the above, why not use mutability?
Well, here's the interesting thing about math: the less powerful the formalism is, the easier it is to make proofs with it. Or, to put it in other words, it's too hard to reason about programs that have mutability.
As a consequence, there has been very little advancement in concepts for programming with mutability. The famous Design Patterns were not arrived at through study, nor do they have any mathematical backing. Instead, they are the result of years and years of trial and error, and some of them have since proved to be misguided. Who knows about the dozens of other "design patterns" seen everywhere?
Meanwhile, Haskell programmers came up with Functors, Monads, Co-monads, Zippers, Applicatives, Lenses... dozens of concepts with mathematical backing and, most importantly, actual patterns of how code is composed to make up programs. Things you can use to reason about your program, increase reusability and improve correctness. Take a look at the Typeclassopedia for examples.
It's no wonder people not familiar with functional programming get a bit scared by this stuff... by comparison, the rest of the programming world is still working with a few decades-old concepts. The very idea of new concepts is alien.
Unfortunately, all these patterns, all these concepts, only apply when the code they work with does not contain mutability (or other side effects). If it does, then their properties cease to be valid, and you can't rely on them. You are back to guessing, testing and debugging.
In short, if a function mutates an object then it has side effects. Mutation is a side effect. This is just true by definition.
In truth, in a purely functional language it should not matter if an object is technically mutable or immutable, because the language will never "try" to mutate an object anyway. A pure functional language doesn't give you any way to perform side effects.
Scala is not a pure functional language, though, and it runs in the Java environment in which side effects are very popular. In this environment, using objects that are incapable of mutation encourages you to use a pure functional style because it makes a side-effect oriented style impossible. You are using data types to enforce purity because the language does not do it for you.
Now I will say a bunch of other stuff in the hope that it helps this make sense to you.
Fundamental to the concept of a variable in functional languages is referential transparency.
Referential transparency means that there is no difference between a value and a reference to that value. In a language where this is true, it is much simpler to think about how a program works, since you never have to stop and ask: is this a value, or a reference to a value? Anyone who's ever programmed in C recognizes that a great part of the challenge of learning that paradigm is knowing which is which at all times.
In order to have referential transparency, the value that a reference refers to can never change.
(Warning, I'm about to make an analogy.)
Think of it this way: in your cell phone, you have saved some phone numbers of other people's cell phones. You assume that whenever you call that phone number, you will reach the person you intend to talk to. If someone else wants to talk to your friend, you give them the phone number and they reach that same person.
If someone changes their cell phone number, this system breaks down. Suddenly, you need to get their new phone number if you want to reach them. Maybe you call the same number six months later and reach a different person. Calling the same number and reaching a different person is what happens when functions perform side effects: you have what seems to be the same thing, but when you try to use it, it turns out it's different now. Even if you expected this, what about all the people you gave that number to? Are you going to call them all up and tell them that the old number doesn't reach the same person anymore?
You counted on the phone number corresponding to that person, but it didn't really. The phone number system lacks referential transparency: the number isn't really ALWAYS the same as the person.
Functional languages avoid this problem. You can give out your phone number and people will always be able to reach you, for the rest of your life, and will never reach anybody else at that number.
However, in the Java platform, things can change. What you thought was one thing, might turn into another thing a minute later. If this is the case, how can you stop it?
Scala uses the power of types to prevent this, by making classes that have referential transparency. So, even though the language as a whole isn't referentially transparent, your code will be referentially transparent as long as you use immutable types.
Practically speaking, the advantages of coding with immutable types are:
Your code is simpler to read when the reader doesn't have to look out for surprising side effects.
If you use multiple threads, you don't have to worry about locking because shared objects can never change. When you have side effects, you have to really think through the code and figure out all the places where two threads might try to change the same object at the same time, and protect against the problems that this might cause.
Theoretically, at least, the compiler can optimize some code better if it uses only immutable types. I don't know if Java can do this effectively, though, since it allows side effects. This is a toss-up at best, anyway, because there are some problems that can be solved much more efficiently by using side effects.
I'm running with this 5 year old explanation:
class Account(var myMoney: List[Int] = List(10, 10, 1, 1, 1, 5)) {
  def getBalance = println(myMoney.sum + " dollars available")
  def myMoneyWithInterest = {
    myMoney = myMoney.map(_ * 2)
    println(myMoney.sum + " dollars will accrue in 1 year")
  }
}
Assume we are at an ATM and it is using this code to give us account information.
You do the following:
scala> val myAccount = new Account()
myAccount: Account = Account@7f4a6c40
scala> myAccount.getBalance
28 dollars available
scala> myAccount.myMoneyWithInterest
56 dollars will accrue in 1 year
scala> myAccount.getBalance
56 dollars available
We mutated the account balance when we only wanted to check our current balance plus a year's worth of interest. Now we have an incorrect account balance. Bad news for the bank!
If we were using val instead of var to keep track of myMoney in the class definition, we would not have been able to mutate the dollars and raise our balance.
When defining the class (in the REPL) with val:

error: reassignment to val
       myMoney = myMoney.map(_ * 2)
Scala is telling us that we wanted an immutable value but are trying to change it!
Thanks to Scala, we can switch to val, re-write our myMoneyWithInterest method and rest assured that our Account class will never alter the balance.
One important property of functional programming is: If I call the same function twice with the same arguments I'll get the same result. This makes reasoning about code much easier in many cases.
Now imagine a function returning the attribute content of some object. If that content can change, the function might return different results on different calls with the same argument. => no more functional programming.
First a few definitions:
A side effect is a change in state -- also called a mutation.
An immutable object is an object which does not support mutation (side effects).
A function which is passed mutable objects (either as parameters or in the global environment) may or may not produce side effects. This is up to the implementation.
However, it is impossible for a function which is passed only immutable objects (either as parameters or in the global environment) to produce side effects. Therefore, exclusive use of immutable objects will preclude the possibility of side effects.
Nate's answer is great, and here are some examples.
In functional programming, there is an important property: when you call a function with the same argument, you always get the same return value.
This is always true for immutable objects, because you can't modify them after creating them:
class MyValue(val value: Int)
def plus(x: MyValue) = x.value + 10
val x = new MyValue(10)
val y = plus(x) // y is 20
val z = plus(x) // z is still 20, plus(x) will always yield 20
But if you have mutable objects, you can't guarantee that plus(x) will always return the same value for the same instance of MyValue.
class MyValue(var value: Int)
def plus(x: MyValue) = x.value + 10
val x = new MyValue(10)
val y = plus(x) // y is 20
x.value = 30
val z = plus(x) // z is 40; you can't know for sure what plus(x) will return, because MyValue.value may be changed at any point.
Why do immutable objects enable functional programming?
They don't.
Take one definition of "function," or "procedure," "routine" or "method," which I believe applies to many programming languages: "A section of code, typically named, accepting arguments and/or returning a value."
Take one definition of "functional programming": "Programming using functions." The ability to program with functions is independent of whether state is modified.
For instance, Scheme is considered a functional programming language. It features tail calls, higher-order functions and aggregate operations using functions. It also has mutable objects. While mutability destroys some nice mathematical qualities, it does not necessarily prevent "functional programming."
I've read all the answers and they don't satisfy me, because they mostly talk about "immutability", and not about its relation to FP.
The main question is:
Why do immutable objects enable functional programming?
So I've searched a bit more and I have another answer. I believe the easy answer to this question is: "Because functional programming is basically defined on the basis of functions that are easy to reason about." Here's the definition of functional programming:
The process of building software by composing pure functions.
If a function is not pure -- meaning that given the same input, it is not guaranteed to always produce the same output (e.g., if the function relies on a global object, or on date and time, or on a random number to compute the output) -- then that function is unpredictable, that's it! Now exactly the same story applies to "immutability": if objects are not immutable, a function taking the same object as input may produce different results (aka side effects) each time it is used, and this will make it hard to reason about the program.
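As a small illustration, here is a hedged Swift sketch of the difference (all names invented):

// Pure: the result depends only on the argument.
func doubled(_ x: Int) -> Int { x * 2 }

// Impure: the result also depends on hidden, mutable state.
var counter = 0
func nextLabel() -> String {
    counter += 1               // side effect: mutates global state
    return "item-\(counter)"   // same call, different answer each time
}

print(doubled(21), doubled(21)) // 42 42 -- always
print(nextLabel(), nextLabel()) // item-1 item-2 -- not pure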
I first tried to put this in a comment, but it got longer than the limit. I'm by no means a pro, so please take this answer with a grain of salt.

Documenting Scala functional chains [closed]

Scala (and functional programming in general) advocates a style of programming where you produce functional "chains" of the form
collection.operation1(...).operation2(...)...
where the operations are various combinations of map, filter, etc.
Where the equivalent Java code might require 50 lines, the Scala code can be done in 1 or 2 lines. The functional chain can change an input collection to something completely different.
The disadvantage of the Scala code is that 10 minutes later (never mind 6 months later), I can't figure out what I was thinking, because the notation is so compact and lacks type information (because of inferred types).
How do you document this? Do you put a large block comment before the chain, changing an elegant 1-line solution into a bulky 40-line solution consisting of 39 lines of comment? Do you intersperse your comments like this?
collection.
  // Select the items that meet condition X
  filter(predicate_function).
  // Change these items from A's to B's
  map(transformation_function).
  // etc.
Something else? No documentation? (Leave them guessing. They'll never "downsize" you then, because no one else can maintain the code. :-))
If you find yourself writing comments at that detail level, you're just repeating what the code says.
For long functional chains, define new functions to replace parts of the chain. Give these meaningful names. Then you might be able to avoid comments. The names of these functions themselves should explain what they do.
The best comments are the ones that explain why the code does something. Well-written code should make the "how" obvious from the code itself.
I don't write that code to begin with (unless it's a script for one-time use or playing around in the REPL).
If I can explain what the code does in one comment and the line reads okay, then I keep it as a one-liner:
// Find all real-valued square roots and group them in integer bins
ds.filter(_ >= 0).map(math.sqrt).groupBy(_.toInt).map(_._2)
If I can't understand it by reading carefully through the chain of commands, then I should break it up into more functionally distinct units. For example, if I expected someone not to realize that the square root of a negative number is not real-valued, I would say:
// Only non-negative numbers have a real-valued square root
val nonneg = ds.filter(_ >= 0)
// Find square roots and group them in integer bins
nonneg.map(math.sqrt).groupBy(_.toInt).map(_._2)
In particular, if someone doesn't know the Scala collections library well and doesn't have the patience to spend five to ten minutes understanding one line of code, then either they shouldn't be working on my code (nor on anything else nontrivial that they don't understand and don't have the patience to understand), or I should know in advance that I'm providing a language-and-mathematics tutorial in addition to writing working code: by writing a paragraph explaining how the line works, or by breaking it out command by command, or by including comments at the start of each anonymous function explaining what is going on (as appropriate).
Anyway, if you can't understand what it does, you probably need some intermediate values. They are very helpful for mentally resetting ("I can't see how to get from A to C! ... but ... okay, I can understand A to B. And I can understand B to C.")
If your chained operations are all monadic transforms: map, flatMap, filter, then it's often much, much clearer to rewrite the logic as a for-comprehension.
coll.filter(predicate).map(transform)
could become
for(elem <- coll if predicate) yield transform(elem)
It's even easier to show off the power of the technique if you have a longer sequence of operations, such as with Kassen's example:
def eligibleCustomers(products: Seq[Product]) = for {
  product  <- products
  customer <- product.customers
  if customer.isPremium
  if customer.age < 20
} yield customer
If you don't want to split it into multiple methods as hammar suggested, you can split the line and give the intermediate values names (and optionally types).
def eligibleCustomers: List[Customer] = {
  val customers = products.flatMap(_.customers)
  val paying    = customers.filter(_.isPremium)
  val eligible  = paying.filter(_.age < 20)
  eligible
}
Line length is a somewhat natural indicator of when your chain is getting too long. :)
Of course, it will depend upon how trivial the chain is:
customerdata.filter (_.age < 40).filter (_.city == "Rio").
  filter (_.income > 3000).filter (_.joined < 2005).
  filter (_.sex == 'f'). ...
I recently had the same impression with an application of 3 files, one of them a bit lengthy, consisting of 4 classes, one of them nontrivial, and about 10 to 20 methods. Each method was about 5 to 10 lines, and every 2 of them could easily have been combined into a larger one, but I had to convince myself that although measuring elegance in lines of code saved isn't completely wrong, saving lines isn't the goal in itself.
Splitting a method into two often lowers the complexity per line, but not the overall complexity of understanding the whole program.
If the problem domain is complex - filter data at different levels, rowwise, columnwise, map it, group it, build averages, build graphs, paginate them ... - the complicated job has to be done somewhere.
The program isn't easier to understand; you just have to hit page-down less often. It is a readjustment: you have to read each line of code more slowly.
It doesn't bother me that much now that I'm used to Scala. If you want to be more explicit with types, you can always, for example, replace things like map(_.foo) with map { a: A => a.foo } to make the code more readable in lengthy/complex operations. Not that I usually find the need to do that.

When are booleans better than integers?

In most programming languages, 1 and 0 can be used instead of True and False. However, from my experience, integers seem to always be easier to use.
Here are some examples of what I mean:
if x is True: x = False
else: x = True

vs

x = abs(x - 1)

and

if x is False: a = 0
else: a = 5

vs

a = 5 * x
In what cases are booleans simpler/more efficient to use than 1 or 0?
You should always use the built-in boolean type for boolean values in high-level languages. Your second example would be a horror to debug in the case where x is true but holds a value different from 1, and a tricky one to figure out for any developer new to the code, especially one not familiar with your coding style.
What's wrong with
x = !x;
or
a = x ? 5 : 0;
One example where an integer could be more efficient than a boolean would be in a relational database. A bit column generally can't be indexed (I can't necessarily speak for all databases on that statement, hence "generally"), so something like a tinyint would make more sense if indexing is required.
Keep in mind that, depending on the use and on the system using it, while a boolean "takes less space" because it's just a single bit (depending on the implementation), an integer is the native word size of the hardware. (Certain systems likely use a full word for a boolean, essentially saving no space when it actually runs on the metal, just to use a simple word size.)
In high-level programming languages, the choice between a boolean and an int is really more of code readability/supportability than one of efficiency. If the values are indeed limited to "true" or "false" then a boolean makes sense. (Is this model in a given state, such as "valid," or is it not? It will never be a third option.) If, on the other hand, there are currently two options but there could be more someday, it might be tempting to use a boolean (even just "for now") but it would logically make more sense to use an int (or an enum).
Keep that in mind also when doing "clever" things in code. Sure, it may look sleeker and cooler to do some quick int math instead of using a boolean, but what does that do to the readability of the code? Being too clever can be dangerous.
For readability and good intentions, it's always better to choose booleans for true/false.
You know that you have only two possible choices. With integers, things can get a bit tricky, especially if you're using 0 as false and anything else as true.
You can get too clever when using integers for true/false, so be careful.
Using booleans will make your intentions clearer to yourself 6 months later and to the other people who will maintain your code. The fewer brain cycles everyone has to use, the better.
I'd say in your examples that the boolean versions are more readable (at least as far as your intentions go). It all depends on the context, too. If you're sacrificing readability in an attempt at micro-optimization, that's just evil.
I'm not sure about efficiency but I prefer booleans in many cases.
Your first example could easily be written as x = !x, and x = abs(x-1) looks really obscure to me.
Also when using integers, you can't really be sure if x is 1 and not 2 or -5 or anything. When using booleans, it's always just true or false.
It's always more efficient to use a boolean, because it's easier to process and uses less processing power and memory. Whenever possible, use a boolean.
Booleans can obviously hold only true/false; beyond that, they use less processing power and memory, as Webnet has already stated.
Performance points aside, I consider booleans and integers to be two fundamentally different concepts in programming. A boolean represents a condition; an integer represents a number. Bugs are easy to introduce if you don't strictly keep the value of your integer-as-boolean to 0 or not-0, and why bother even with that when you can just use booleans, which allow for compile-time safety and typechecking? I mean, take a method:
doSomething(int param)
The method alone does /not/ imply that the param is interpreted as a boolean. Nobody will stop me from passing 1337 to it, and nobody will tell me what will happen if I do; and even if it's clearly documented not to pass the value 1337 to the method (but only 0 or 1), I can still do it. If you can prevent errors at compile time, you should.
doSomething(bool param)
only allows two values, true and false, and neither is wrong.
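The same point can be made in Swift, where Int and Bool never convert implicitly; a minimal sketch with an invented function name:

func setVisibility(_ visible: Bool) {
    print(visible ? "shown" : "hidden")
}

setVisibility(true)  // fine: only true or false can ever be passed
// setVisibility(1)  // rejected at compile time: an Int is not a Bool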
Also, your examples of why integers would be better than booleans are kind of flawed.
if x is True: x = False
else: x = True

could be written as

x = !x
whereas your integer alternative:
x = abs(x-1)
would require me to know:
What possible values x can have
what the abs() function does
why 1 is subtracted from x
what this actually /does/. What does it do?
Your second example is also a big wtf to me.
if x is False: a = 0
else: a = 5
could be written as:
a = (x) ? 5 : 0;
whereas your integer alternative
a = 5*x
again requires me to know:
What is X?
What can X be?
What happens if x = 10? -1? 2147483647?
Too many conditionals. Use booleans, for readability, common sense, and bug-free, predictable code.
I love booleans: they're so much more readable, and you've basically forced a contract with the user.
E.g. "These values are ONLY TRUE or FALSE".

Methods of simplifying ugly nested if-else trees in C#

Sometimes I'm writing ugly if-else statements in C# 3.5; I'm aware of some different approaches to simplifying them, such as table-driven development, class hierarchies, anonymous methods and more.
The problem is that the alternatives are still less widespread than traditional ugly if-else statements, because there is no convention for them.
What depth of nested if-else is normal for C# 3.5? What methods would you expect to see instead of nested if-else, first? second?
If I have ten input parameters with 3 states each, I should map functions to combinations of the states of the parameters (really fewer, because not all the states are valid, but sometimes still a lot). I can express these states as a hashtable key and a handler (lambda) which will be called if the key matches.
It is still a mix of table-driven and data-driven development ideas and pattern matching.
What I'm looking for is extending such approaches as this one to C# for scripting (C# 3.5 is rather like scripting):
http://blogs.msdn.com/ericlippert/archive/2004/02/24/79292.aspx
Good question. "Conditional Complexity" is a code smell. Polymorphism is your friend.
Conditional logic is innocent in its infancy, when it's simple to understand and contained within a few lines of code. Unfortunately, it rarely ages well. You implement several new features and suddenly your conditional logic becomes complicated and expansive. [Joshua Kerievsky: Refactoring to Patterns]
One of the simplest things you can do to avoid nested if blocks is to learn to use Guard Clauses.
double getPayAmount() {
    if (_isDead) return deadAmount();
    if (_isSeparated) return separatedAmount();
    if (_isRetired) return retiredAmount();
    return normalPayAmount();
}
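Incidentally, Swift bakes this idiom directly into the language with its guard statement; a rough sketch with invented names and placeholder amounts:

struct Employee {
    let isDead: Bool
    let isSeparated: Bool
    let isRetired: Bool
}

// Each special case exits early, keeping the happy path unindented.
func payAmount(for e: Employee) -> Double {
    guard !e.isDead else { return 0 }        // deadAmount()
    guard !e.isSeparated else { return 50 }  // separatedAmount()
    guard !e.isRetired else { return 80 }    // retiredAmount()
    return 100                               // normalPayAmount()
}

print(payAmount(for: Employee(isDead: false, isSeparated: false, isRetired: true))) // 80.0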
The other thing I have found simplifies things pretty well, and which makes your code self-documenting, is Consolidating conditionals.
double disabilityAmount() {
    if (isNotEligibleForDisability()) return 0;
    // compute the disability amount
    ...
}
Other valuable refactoring techniques associated with conditional expressions include Decompose Conditional, Replace Conditional with Visitor, Specification Pattern, and Reverse Conditional.
There are very old "formalisms" for trying to encapsulate extremely complex expressions that evaluate many possibly independent variables, for example, "decision tables" :
http://en.wikipedia.org/wiki/Decision_table
But I'll join the choir here to second the ideas already mentioned: judicious use of the ternary operator where possible; identifying the most unlikely conditions which, if met, let you terminate the rest of the evaluation by excluding them first; and, the reverse of that, trying to factor out the most probable conditions and states that let you proceed without testing the "fringe" cases.
The suggestion by Miriam (above) is fascinating, even elegant, as "conceptual art;" and I am actually going to try it out, trying to "bracket" my suspicion that it will lead to code that is harder to maintain.
My pragmatic side says there is no "one size fits all" answer here in the absence of a pretty specific code example, and complete description of the conditions and their interactions.
I'm a fan of "flag setting" : meaning anytime my application goes into some less common "mode" or "state" I set a boolean flag (which might even be static for the class) : for me that simplifies writing complex if/then else evaluations later on.
best, Bill
Simple. Take the body of the if and make a method out of it.
This works because most if statements are of the form:

if (condition):
    action()

In other cases, more specifically:

if (condition1):
    if (condition2):
        action()

simplify to:

if (condition1 && condition2):
    action()
I'm a big fan of the ternary operator, which gets overlooked by a lot of people. It's great for assigning values to variables based on conditions, like this:
foobarString = (foo == bar) ? "foo equals bar" : "foo does not equal bar";
Try this article for more info.
It won't solve all your problems, but it is very economical.
I know that this is not the answer you are looking for, but without context your question is very hard to answer. The problem is that the way to refactor such a thing really depends on your code, what it is doing, and what you are trying to accomplish. If you had said that you were checking the type of an object in these conditionals, we could throw out an answer like 'use polymorphism', but sometimes you actually do just need some if statements, and sometimes those statements can be refactored into something simpler. Without a code sample it is hard to say which category you are in.
I was told years ago by an instructor that 3 is a magic number. And as he applied it to if-else statements, he suggested that if I needed more than 3 ifs, I should probably use a case statement instead.
switch (testValue)
{
    case 1:
        // do something
        break;
    case 2:
        // do something else
        break;
    case 3:
        // do something more
        break;
    case 4:
        // do what?
        break;
    default:
        throw new Exception("I didn't do anything");
}
If you're nesting if statements more than 3 deep, you should probably take that as a sign that there is a better way, probably (as Avirdlg suggested) separating the nested if statements into 1 or more methods. If you feel you are absolutely stuck with multiple if-else statements, then I would wrap them all into a single method, so they don't ugly up other code.
If the entire purpose is to assign a different value to some variable based upon the state of various conditionals, I use a ternary operator.
If the if-else clauses perform separate chunks of functionality and the conditions are complex, simplify by creating temporary boolean variables to hold the true/false values of the complex boolean expressions. These variables should be suitably named to represent the business meaning of what the complex expression calculates. Then use the boolean variables in the if-else syntax instead of the complex boolean expressions, as in the sketch below.
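A small Swift sketch of this idea, with invented names and data:

let age = 45
let income = 3200
let city = "Rio"

// Name the pieces of the condition after their business meaning.
let isMiddleAged = age >= 40 && age < 60
let isHighEarner = income > 3000
let livesInRio = city == "Rio"

if isMiddleAged && isHighEarner && livesInRio {
    print("target customer")
}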
One thing I find myself doing at times is inverting the condition and returning early; several such tests in a row can help reduce the nesting of ifs and elses.
Not a C# answer, but you would probably like pattern matching. With pattern matching, you can take several inputs and do simultaneous matches on all of them. For example (F#):
let x =
    match cond1, cond2, name with
    | _, _, "Bob"     -> 9000 // Bob gets 9000, regardless of cond1 or cond2
    | false, false, _ -> 0
    | true, false, _  -> 1
    | false, true, _  -> 2
    | true, true, ""  -> 0    // Both conds but no name gets 0
    | true, true, _   -> 3    // Cond1 & cond2 give 3
You can express any combination to create a match (this just scratches the surface). However, C# doesn't support this, and I doubt it will any time soon. Meanwhile, there are some attempts to try this in C#, such as here: http://codebetter.com/blogs/matthew.podwysocki/archive/2008/09/16/functional-c-pattern-matching.aspx. Google can turn up many more; perhaps one will suit you.
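For comparison, Swift (the language from the question at the top of this page) can express much the same table with a switch over a tuple; a rough sketch of the F# example above:

let cond1 = true, cond2 = true
let name = "Bob"

let x: Int
switch (cond1, cond2, name) {
case (_, _, "Bob"):     x = 9000 // Bob gets 9000, regardless of cond1 or cond2
case (false, false, _): x = 0
case (true, false, _):  x = 1
case (false, true, _):  x = 2
case (true, true, ""):  x = 0    // both conds but no name gets 0
case (true, true, _):   x = 3    // cond1 & cond2 give 3
}
print(x) // 9000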
Try to use patterns like Strategy or Command.
In simple cases you should be able to get by with basic functional decomposition. For more complex scenarios I used the Specification Pattern with great success.