Say I want to have a number of rules that all follow the same pattern. I have run into this situation when I want to avoid non-deterministic behavior by explicitly listing all possible first arguments. I know, however, that I need to do exactly the same thing for some of the possibilities. One way to deal with it would be to have a catch-all clause at the end:
foo(a) :- /* do something */.
foo(b) :- /* do something else*/.
foo(_). /* ignore the rest */
but this is not very nice because I can't actually know if I got unexpected input, or if I made a mistake in my program. To avoid this, I could also say
foo(X) :- memberchk(X, [ /* list of possible values of X */ ]).
but again, I am now fighting against Prolog's deterministic behavior and indexing when the argument is ground.
So, instead, I do something like this:
term_expansion(foos(Foos), Foo_rules) :-
    maplist(expand_foo, Foos, Foo_rules).

expand_foo(Foo, foo(Foo)).

foos([x,y,z]).   % expanded at load time into foo(x). foo(y). foo(z).
The thing is, I sort of tried to find existing code like this and I couldn't. Is it because I am doing something wrong? Is there a better way to approach this problem? Or circumvent it altogether?
I don't mind answers that say "you are solving the wrong problem".
EDIT: Some googling actually got me to this very similar example from the SWI-Prolog documentation:
http://www.swi-prolog.org/pldoc/man?section=ext-dquotes-motivation
(at the very bottom)
First a few comments on the variants you already suggest:
foo(a) :- /* do something */.
foo(b) :- /* do something else */.
foo(_). /* ignore the rest */
The main problem with this is that the final clause (foo(_)) applies when the other - presumably more specialized - clauses also apply. So the query ?- foo(a). is now unintentionally non-deterministic.
You say this version "is not very nice because I can't actually know if I got unexpected input, or if I made a mistake in my program". We could guard against unexpected input by making sure, in the catch-all clause, that the given term is not unexpected:
foo(a) :- /* do something */.
foo(b) :- /* do something else */.
foo(X) :- must_be(oneof([a,b,x,y]), X).
This raises an error when the term is of an unexpected form. I used x and y as examples for terms that are not handled specially. Note that a and b must of course be included, because the clause (again) applies to both of them as well. The predicate is still not deterministic even when its argument is instantiated, because first argument indexing cannot distinguish the cases. You write "I am now fighting against Prolog's deterministic behavior and indexing when the argument is ground" and probably (and correctly) mean that you cannot benefit from these features when you use this representation.
And now the nice declarative solution: Every time you are tempted to introduce a "catch-all" or "default" clause, reconsider your data representation, and introduce distinguishing functors for the different cases that can apply. In your example, the two cases are:
the term needs to be handled in a special way
nothing special needs to be done with the term.
I will use special(_) and ordinary(_) to distinguish the cases. Thus:
foo(special(S)) :- foo_special(S).
foo(ordinary(_)). % do nothing
foo_special(a) :- /* do something */.
foo_special(b) :- /* do something else */.
These predicates can be used in all directions, and are deterministic when the argument is known. Type-checks can be added easily.
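For instance, with the placeholder bodies filled in, both cases can be queried deterministically. A small sketch (the bodies here are made up for illustration):

foo(special(S)) :- foo_special(S).
foo(ordinary(_)).              % nothing special to do

foo_special(a) :- format("handling a~n").
foo_special(b) :- format("handling b~n").

% ?- foo(special(a)).
% handling a
% true.                        % succeeds without leaving a choicepoint
% ?- foo(ordinary(anything)).
% true.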
The Tour of Scala docs explain pass-by-name parameters using a whileLoop function as an example.
def whileLoop(condition: => Boolean)(body: => Unit): Unit =
  if (condition) {
    body
    whileLoop(condition)(body)
  }
var i = 2

whileLoop (i > 0) {
  println(i)
  i -= 1
} // prints 2 1
The section explains that if the condition is not met then the body is not evaluated, thus improving performance by not evaluating a body of code that isn't being used.
Does Scala's implementation of while already use pass-by-name parameters?
If there's a reason, or specific cases where it's not possible for the implementation to use pass-by-name parameters, please explain; I haven't been able to find any information on this so far.
EDIT: As per Valy Dia's (https://stackoverflow.com/users/5826349/valy-dia) answer, I would like to add another question...
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases? If so, why use the while statement at all?
I will try to test this, but I'm new to Scala so it might take me some time. If someone would like to explain, that would be great.
Cheers!
The while statement is not a method, so the terminology "by-name parameter" is not really relevant. Having said that, the while statement has the following structure:
while(condition){
body
}
where the condition is repeatedly evaluated, and the body is evaluated only when the condition holds, as these small examples show:
scala> while(false){ throw new Exception("Boom") }
// Does nothing
scala> while(true){ throw new Exception("Boom") }
// java.lang.Exception: Boom
scala> while(throw new Exception("boom")){ println("hello") }
// java.lang.Exception: boom
Would a method implementation of the while statement perform better than the statement itself if it's possible not to evaluate the body at all for certain cases?
No. The built-in while also does not evaluate the body at all unless it has to, and it is going to compile to much more efficient code (because it does not need to introduce the "thunks"/closures/lambdas/anonymous functions that are used to implement "pass-by-name" under the hood).
The example in the book was just showing how you could implement it with functions if there was no built-in while statement.
I assumed that they were also implying that the while statement's body will be evaluated whether or not the condition is met
No, that would make the built-in while totally useless. That is not what they were driving at. They wanted to say that you can do this kind of thing with "call-by-name" (as opposed to "call-by-value", not as opposed to what the while loop does -- because the latter also works like that).
The main takeaway is that you can build something that looks like a control structure in Scala, because you have syntactic sugar like "call-by-name" and "last argument group taking a function can be called with a block".
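To illustrate that takeaway, here is a sketch of a made-up control structure, unless, built from nothing but those two features (the name and the example are mine, not from the Tour):

// Both parameters are by-name, so neither is evaluated until needed.
def unless(condition: => Boolean)(body: => Unit): Unit =
  if (!condition) body

var errors = 0
unless (errors > 0) {
  println("all good")
} // prints "all good"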
I'm currently following the book Learn You Some Erlang for Great Good! by Fred Hébert, and one of its sections covers macros.
I understand using macros for variables (mainly constant values), but I don't understand the use case for macros as functions. For example, Hébert writes:
Defining a "function" macro is similar. Here's a simple macro used to subtract one number from another:
-define(sub(X, Y), X-Y).
Why not just define this as a function elsewhere? Why use a macro? Is there some sort of performance advantage from the compiler or is this merely just a "this function is so simple, let's just define it in one line" type of thing?
I'm not trying to start a debate or preference argument, but after seeing some production Erlang code, I've noticed a lot of function-macro usage.
In this case, the one obvious advantage of the macro over a function (-define(sub(X, Y), X-Y), which would be safer as -define(sub(X, Y), (X-Y))) is that it can be used in guards, where calls to user-defined functions are forbidden.
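For example (a minimal sketch; the predicate name is made up):

-define(sub(X, Y), (X - Y)).

%% Legal: the macro expands to plain arithmetic, which guards allow;
%% a call to a user-defined sub/2 function would be rejected here.
small_diff(A, B) when ?sub(A, B) < 10 -> true;
small_diff(_, _) -> false.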
In many cases it would otherwise be safer to define the function as an inlined one.
On the other hand, there are other interesting cases, such as assertions in tests or shortcuts where what you want is to keep some local context in the final place.
For example, let's say I want to make a generic call for a test where the objective is 'match a given pattern, or fail after M milliseconds'.
I cannot make this generic with code since patterns are not data structures you are allowed to carry around. However, with macros:
-define(wait_for(PAT, Timeout),
        receive
            PAT -> ok
        after Timeout ->
            error(timeout)
        end).
This macro can then be used as:
my_test() ->
    Pid = start_whatever(),
    %% ...
    ?wait_for({'EXIT', Pid, Reason}, 5000),
    ?assertMatch(shutdown, Reason).
By doing this, I'm able to simplify the form of text in some tests without needing a bunch of nesting, and in a way that is not possible with functions.
Do note that the assertion itself as defined by eunit is using a function macro, and does something akin to
-define(assertMatch(PAT, TERM),
        %% funs to avoid leaking bindings into parent scope
        (fun() ->
             try
                 PAT = TERM,
                 true
             catch _:_ ->
                 error({assertion_failed, ?LINE, ...})
             end
         end)()).
This similarly lets you carry patterns and bindings and do fancy forms that wouldn't be possible otherwise.
In this last case, you'll notice I used the ?LINE macro. That's another advantage of macros: you preserve information and locality about the call site, such as its module name, line number, and so on. This is useful when such metadata is required, such as when you're reporting test failures.
If you're looking at old code, there might be macros used as a way of inlining small functions under the assumption that function calls are very expensive. I'm not sure if that was ever true, but it's not something you need to worry about today.
Macros can be used to define constants, like
-define(MAX_TIMEOUT, 30 * 1000).
%% ...
gen_server:call(my_server, {do_stuff, Data}, ?MAX_TIMEOUT),
%% ...
I mostly prefer to pass in environment variables for this job, but it's more work to read them on startup and stash them somewhere and write accessors.
Finally, you can do some simple metaprogramming:
-define(MAKE_REQUEST_FUN(Method),
        Method(Request, HTTPOptions, Options) ->
            httpc:request(Method, Request, HTTPOptions, Options)).
?MAKE_REQUEST_FUN(get).
?MAKE_REQUEST_FUN(put).
%% Now we've defined get/3 and put/3, which can be called as
%% get(Request, [], []).
Reading Scala docs written by experts, one can get the impression that tail recursion is better than a while loop, even when the latter is more concise and clearer. Here is one example:
import scala.annotation.tailrec

object Helpers {
  implicit class IntWithTimes(val pip: Int) {
    // Recursive
    def times(f: => Unit): Unit = {
      @tailrec
      def loop(counter: Int): Unit = {
        if (counter > 0) { f; loop(counter - 1) }
      }
      loop(pip)
    }

    // Explicit loop
    def :#(f: => Unit) = {
      var lc = pip
      while (lc > 0) { f; lc -= 1 }
    }
  }
}
(To be clear, the expert was not addressing looping at all, but in the example they chose to write a loop in this fashion as if by instinct, which is what raised the question for me: should I develop a similar instinct?)
The only aspect of the while loop that could be better is that the iteration variable should be local to the body of the loop, and the mutation of that variable should happen in one fixed place, but Scala chooses not to provide that syntax.
Clarity is subjective, but the question is: does the (tail) recursive style offer improved performance?
I'm pretty sure that, due to the limitations of the JVM, not every potentially tail-recursive function will be optimised by the Scala compiler (only direct self-calls are), so the short (and sometimes wrong) answer to your question on performance is no.
The long answer to your more general question (having an advantage) is a little more contrived. Note that, by using while, you are in fact:
creating a new variable that holds a counter.
mutating that variable.
Off-by-one errors and the perils of mutability will ensure that, in the long run, you'll introduce bugs with a while pattern. In fact, your times function could easily be implemented as:
def times(f: => Unit): Unit = (1 to pip).foreach(_ => f)
This is not only simpler and smaller, but also avoids any creation of transient variables and mutability. In fact, if the function you were calling returned results that mattered, the while construction would be even more difficult to read. Try to implement the following using nothing but whiles:
def replicate(l: List[Int])(times: Int) = l.flatMap(x => List.fill(times)(x))
Then proceed to define a tail-recursive function that does the same.
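Spoiling the exercise a little, here is one possible tail-recursive version (a sketch; many variants are possible):

import scala.annotation.tailrec

def replicate(l: List[Int])(times: Int): List[Int] = {
  @tailrec
  def go(rest: List[Int], n: Int, acc: List[Int]): List[Int] = rest match {
    case Nil             => acc.reverse
    case x :: _ if n > 0 => go(rest, n - 1, x :: acc) // emit x once more
    case _ :: xs         => go(xs, times, acc)        // next element, reset counter
  }
  go(l, times, Nil)
}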
UPDATE:
I hear you saying: "hey! that's cheating! foreach is neither a while nor a tail-rec call". Oh really? Take a look into Scala's definition of foreach for Lists:
def foreach[B](f: A => B) {
  var these = this
  while (!these.isEmpty) {
    f(these.head)
    these = these.tail
  }
}
If you want to learn more about recursion in Scala, take a look at this blog post. Once you are into functional programming, go crazy and read Rúnar's blog post. Even more info here and here.
In general, a directly tail-recursive function (i.e., one that always calls itself directly and cannot be overridden) will always be optimized into a while loop by the compiler. You can use the @tailrec annotation to verify that the compiler is able to do this for a particular function.
As a general rule, any tail recursive function can be rewritten (usually automatically by the compiler) as a while loop and vice versa.
The purpose of writing functions in a (tail) recursive style is not to maximize performance or even conciseness, but to make the intent of the code as clear as possible, while simultaneously minimizing the chance of introducing bugs (by eliminating mutable variables, which generally make it harder to keep track of what the "inputs" and "outputs" of the function are). A properly written recursive function consists of a series of checks for terminating conditions (using either cascading if-else or a pattern match) with the recursive call(s) (plural only if not tail recursive) made if none of the terminating conditions are met.
The benefit of using recursion is most dramatic when there are several different possible terminating conditions. A series of if conditionals or patterns is generally much easier to comprehend than a single while condition with a whole bunch of (potentially complex and inter-related) boolean expressions &&'d together, especially if the return value needs to be different depending on which terminating condition is met.
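A sketch of what that looks like in practice (the scenario is invented): three distinct terminating conditions, each visible as its own case, instead of one while condition that has to encode all of them.

import scala.annotation.tailrec

@tailrec
def firstNegative(xs: List[Int], seen: Int = 0): String = xs match {
  case Nil                    => s"no negative found in $seen elements"
  case x :: _ if x < 0        => s"found $x after $seen elements"
  case _ :: _ if seen >= 1000 => "gave up after 1000 elements"
  case _ :: rest              => firstNegative(rest, seen + 1)
}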
Did these experts say that performance was the reason? I'm betting their reasons are more to do with expressive code and functional programming. Could you cite examples of their arguments?
One interesting reason why recursive solutions can be more efficient than more imperative alternatives is that they very often operate on lists and in a way that uses only head and tail operations. These operations are actually faster than random-access operations on more complex collections.
Another reason that while-based solutions may be less efficient is that they can become very ugly as the complexity of the problem increases...
(I have to say, at this point, that your example is not a good one, since neither of your loops does anything useful. Your recursive loop is particularly atypical, since it returns nothing, which implies that you are missing a major point about recursive functions: the functional bit. A recursive function is much more than another way of repeating the same operation n times.)
While loops do not return a value and require side effects to achieve anything. They are a control structure that only works at all for very simple tasks, because each iteration of the loop has to examine all of the state to decide what to do next. The loop's boolean expression may also become very complex if there are multiple potential exit paths (or that complexity has to be distributed throughout the code in the loop, which can be ugly and obfuscatory).
Recursive functions offer the possibility of a much cleaner implementation. A good recursive solution breaks a complex problem down into simpler parts, then delegates each part to another function which can deal with it - the trick being that the other function is itself (or possibly a mutually recursive function, though that is rarely seen in Scala because of the poor tail-recursion support, unlike the various Lisp dialects, where it is common). The recursively called function receives in its parameters only the simpler subset of data and only the relevant state; it returns only the solution to the simpler problem. So, in contrast to the while loop,
Each iteration of the function only has to deal with a simple subset of the problem
Each iteration only cares about its inputs, not the overall state
Success in each subtask is clearly defined by the return value of the call that handled it.
State from different subtasks cannot become entangled (since it is hidden within each recursive function call).
Multiple exit points, if they exist, are much easier to represent clearly.
Given these advantages, recursion can make it easier to achieve an efficient solution. Especially if you count maintainability as an important factor in long-term efficiency.
I'm going to go find some good examples of code to add. Meanwhile, at this point I always recommend The Little Schemer. I would go on about why but this is the second Scala recursion question on this site in two days, so look at my previous answer instead.
I have defined a prolog file with the following code:
divisible(X, Y) :-
    X mod Y =:= 0.

divisibleBy(X, Y) :-
    divisible(X, Y).

op(35, xfx, divisibleBy).
Prolog is complaining that
'$record_clause'/2: No permission to modify static_procedure `op/3'
What am I doing wrong? I want to define a divisibleBy operator that will allow me to write code like the following:
4 divisibleBy 2
Thanks.
Use
:- op(35,xfx,divisibleBy).
:- tells the Prolog interpreter to evaluate the next term while loading the file, i.e., to make a predicate call, instead of treating it as a clause definition (in this case, a redefinition of op/3).
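Putting it together, a version of the file that loads cleanly might look like this (a sketch; the clause head below uses the new operator):

:- op(35, xfx, divisibleBy).

divisible(X, Y) :-
    X mod Y =:= 0.

X divisibleBy Y :-
    divisible(X, Y).

% ?- 4 divisibleBy 2.
% true.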
The answer given by #larsmans is spot-on regarding your original problem.
However, you should reconsider if you should define a new operator.
In general, I would strongly advise against defining new operators for the following reasons:
The gain in readability is often overrated.
It may easily introduce new problems in places where you wouldn't normally expect bugs.
It doesn't "scale" well: a small number of operators can make code on presentation slides super-concise, but what happens when you add more cases over time? More operators?
Sometimes I find myself writing ugly if-else statements in C# 3.5; I'm aware of different approaches to simplifying them, such as table-driven development, class hierarchies, anonymous methods, and more.
The problem is that these alternatives are still less widespread than traditional ugly if-else statements, because there is no convention for them.
What depth of nested if-else is normal for C# 3.5? Which methods would you expect to see used instead of nested if-else first? Second?
If I have ten input parameters with 3 states each, I would need to map functions to combinations of parameter states (fewer in practice, because not all states are valid, but sometimes still a lot). I can express these states as a hashtable key, with a handler (lambda) that is called when the key matches. It is still a mix of table-driven and data-driven development ideas with pattern matching.
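A minimal sketch of that idea (all names here are hypothetical; assumes System and System.Collections.Generic):

var handlers = new Dictionary<string, Action>
{
    { "on|idle",  () => StartJob() },
    { "on|busy",  () => QueueJob() },
    { "off|idle", () => Shutdown() }
};

string key = power + "|" + mode; // composite key, e.g. "on|idle"
Action handler;
if (handlers.TryGetValue(key, out handler))
    handler();                   // dispatch instead of nested if-else
else
    throw new InvalidOperationException("Unhandled state: " + key);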
What I'm looking for is how to extend such approaches to C# (C# 3.5 is rather like scripting), for example:
http://blogs.msdn.com/ericlippert/archive/2004/02/24/79292.aspx
Good question. "Conditional Complexity" is a code smell. Polymorphism is your friend.
Conditional logic is innocent in its infancy, when it's simple to understand and contained within a few lines of code. Unfortunately, it rarely ages well. You implement several new features and suddenly your conditional logic becomes complicated and expansive. [Joshua Kerievsky: Refactoring to Patterns]
One of the simplest things you can do to avoid nested if blocks is to learn to use Guard Clauses.
double getPayAmount() {
    if (_isDead) return deadAmount();
    if (_isSeparated) return separatedAmount();
    if (_isRetired) return retiredAmount();
    return normalPayAmount();
}
The other thing I have found simplifies things pretty well, and which makes your code self-documenting, is Consolidating conditionals.
double disabilityAmount() {
    if (isNotEligibleForDisability()) return 0;
    // compute the disability amount
}
Other valuable refactoring techniques associated with conditional expressions include Decompose Conditional, Replace Conditional with Visitor, Specification Pattern, and Reverse Conditional.
There are very old "formalisms" for trying to encapsulate extremely complex expressions that evaluate many possibly independent variables, for example, "decision tables" :
http://en.wikipedia.org/wiki/Decision_table
But I'll join the choir here and second the ideas already mentioned: use the ternary operator judiciously where possible; identify the most unlikely conditions which, if met, allow you to terminate the rest of the evaluation by excluding them first; and, the reverse of that, factor out the most probable conditions and states so you can proceed without testing the "fringe" cases.
The suggestion by Miriam (above) is fascinating, even elegant, as "conceptual art;" and I am actually going to try it out, trying to "bracket" my suspicion that it will lead to code that is harder to maintain.
My pragmatic side says there is no "one size fits all" answer here in the absence of a pretty specific code example, and complete description of the conditions and their interactions.
I'm a fan of "flag setting" : meaning anytime my application goes into some less common "mode" or "state" I set a boolean flag (which might even be static for the class) : for me that simplifies writing complex if/then else evaluations later on.
best, Bill
Simple. Take the body of the if and make a method out of it.
This works because most if statements are of the form:

if (condition)
    action();

Other cases, more specifically nested conditions such as:

if (condition1)
    if (condition2)
        action();

simplify to:

if (condition1 && condition2)
    action();
I'm a big fan of the ternary operator, which gets overlooked by a lot of people. It's great for assigning values to variables based on conditions, like this:
foobarString = (foo == bar) ? "foo equals bar" : "foo does not equal bar";
Try this article for more info.
It won't solve all your problems, but it is very economical.
I know that this is not the answer you are looking for, but without context your question is very hard to answer. The problem is that the way to refactor such a thing really depends on your code, what it is doing, and what you are trying to accomplish. If you had said that you were checking the type of an object in these conditionals, we could throw out an answer like "use polymorphism", but sometimes you actually do just need some if statements, and sometimes those statements can be refactored into something simpler. Without a code sample it is hard to say which category you are in.
I was told years ago by an instructor that 3 is a magic number. As he applied it to if-else statements, he suggested that if I needed more than 3 ifs, then I should probably use a case statement instead.
switch (testValue)
{
    case 1:
        // do something
        break;
    case 2:
        // do something else
        break;
    case 3:
        // do something more
        break;
    case 4:
        // do what?
        break;
    default:
        throw new Exception("I didn't do anything");
}
If you're nesting if statements more than 3 deep then you should probably take that as a sign that there is a better way. Probably like Avirdlg suggested, separating the nested if statements into 1 or more methods. If you feel you are absolutely stuck with multiple if-else statements then I would wrap all the if-else statements into a single method so it didn't ugly up other code.
If the entire purpose is to assign a different value to some variable based upon the state of various conditionals, I use a ternary operator.
If the if-else clauses are performing separate chunks of functionality and the conditions are complex, simplify by creating temporary boolean variables to hold the true/false values of the complex boolean expressions. These variables should be suitably named to represent the business sense of what the complex expression is calculating. Then use the boolean variables in the if-else syntax instead of the complex boolean expressions.
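For example (a hypothetical sketch):

// Instead of:
// if (customer.IsActive && order.Total > 100m && customer.OrderCount == 0) ...
bool qualifiesForDiscount = customer.IsActive && order.Total > 100m;
bool isFirstOrder = customer.OrderCount == 0;

if (qualifiesForDiscount && isFirstOrder)
    ApplyDiscount(order);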
One thing I find myself doing at times is inverting the condition and returning early; several such tests in a row can help reduce the nesting of if and else.
Not a C# answer, but you probably would like pattern matching. With pattern matching, you can take several inputs, and do simultaneous matches on all of them. For example (F#):
let x =
    match cond1, cond2, name with
    | _, _, "Bob"     -> 9000 // Bob gets 9000, regardless of cond1 or 2
    | false, false, _ -> 0
    | true, false, _  -> 1
    | false, true, _  -> 2
    | true, true, ""  -> 0    // Both conds but no name gets 0
    | true, true, _   -> 3    // Cond1&2 give 3
You can express any combination to create a match (this just scratches the surface). However, C# doesn't support this, and I doubt it will any time soon. Meanwhile, there are some attempts to try this in C#, such as here: http://codebetter.com/blogs/matthew.podwysocki/archive/2008/09/16/functional-c-pattern-matching.aspx. Google can turn up many more; perhaps one will suit you.
Try to use patterns like Strategy or Command.
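For instance, a bare-bones Strategy sketch (all names are invented):

interface IShippingStrategy { decimal Cost(decimal weight); }

class GroundShipping : IShippingStrategy {
    public decimal Cost(decimal weight) { return 5m + weight * 0.5m; }
}

class AirShipping : IShippingStrategy {
    public decimal Cost(decimal weight) { return 20m + weight * 1.5m; }
}

// The conditional now lives in exactly one place: choosing the strategy.
IShippingStrategy strategy = express ? (IShippingStrategy)new AirShipping()
                                     : new GroundShipping();
decimal cost = strategy.Cost(weight);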
In simple cases you should be able to get by with basic functional decomposition. For more complex scenarios, I have used the Specification Pattern with great success.
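In outline, the Specification Pattern wraps each boolean rule in a small object that can be composed with others. A hypothetical sketch (Customer and HasOutstandingBalance are invented; the latter would be defined analogously to ActiveCustomer):

interface ISpecification<T> { bool IsSatisfiedBy(T item); }

class ActiveCustomer : ISpecification<Customer> {
    public bool IsSatisfiedBy(Customer c) { return c.IsActive; }
}

class AndSpecification<T> : ISpecification<T> {
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;
    public AndSpecification(ISpecification<T> l, ISpecification<T> r) { left = l; right = r; }
    public bool IsSatisfiedBy(T item) {
        return left.IsSatisfiedBy(item) && right.IsSatisfiedBy(item);
    }
}

// Complex conditions become composable objects instead of nested ifs.
var spec = new AndSpecification<Customer>(new ActiveCustomer(), new HasOutstandingBalance());
if (spec.IsSatisfiedBy(customer)) { /* handle this case */ }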