When would I write a non-tail recursive function in Scala? - scala

Since non-tail-recursive calls use stack frames just as Java does, I'd think you'd be using them very sparingly, if at all. That seems severely restrictive, however, given that recursion is one of the most important tools.
When can I use non-tail-recursive functions? Also, are there plans to remove the memory restriction in the future?

In the same situations where it would be safe in Java: where the dataset you are working with never grows huge and the code isn't on a performance-critical hot path of your app.
Also, IMHO, there are times when the non-tail-recursive version of an algorithm is far clearer than the tail-recursive one.
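To illustrate that clarity point (this sketch is mine, not from the answer; the Tree type and function names are invented for the example), compare summing a small binary tree in the two styles:

sealed trait Tree
case class Leaf(value: Int) extends Tree
case class Node(left: Tree, right: Tree) extends Tree

// Clear and direct, but each Node costs one JVM stack frame.
def sum(t: Tree): Int = t match {
  case Leaf(v)    => v
  case Node(l, r) => sum(l) + sum(r)
}

// Tail-recursive: constant stack, but the explicit work list obscures the idea.
import scala.annotation.tailrec
@tailrec
def sumTail(todo: List[Tree], acc: Int = 0): Int = todo match {
  case Nil                => acc
  case Leaf(v) :: rest    => sumTail(rest, acc + v)
  case Node(l, r) :: rest => sumTail(l :: r :: rest, acc)
}

The first version can blow the stack on a very deep tree; the second cannot, but its intent is harder to see, which is exactly the trade-off the answer is pointing at.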

Related

Is it possible to tail-recursively traverse directory in Scala?

I'm attempting to write a Scala function to list all files/subdirectories under a given directory, but I'd like to make it tail recursive. Before I spend any more time on this, is it even an attainable goal, or should I stick to regular recursion? I just want to know whether it is possible, as I'd like to figure it out for myself. Good learning experience and all that. Thanks!
It seems you need some type of stack for tree traversal, so if you avoid the system stack you have to implement your own (see http://www.scala-lang.org/old/node/7984).
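For what it's worth, here is a rough sketch of that approach (my own illustration using only java.io.File, not the code from the linked page): the pending directories are carried in an explicit list, so the recursive call stays in tail position.

import java.io.File
import scala.annotation.tailrec

def listAll(root: File): List[File] = {
  @tailrec
  def loop(toVisit: List[File], acc: List[File]): List[File] = toVisit match {
    case Nil => acc
    case f :: rest =>
      // listFiles can return null (e.g. on permission errors), so guard it.
      val children =
        if (f.isDirectory) Option(f.listFiles).map(_.toList).getOrElse(Nil)
        else Nil
      loop(children ::: rest, f :: acc)
  }
  loop(List(root), Nil)
}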

How usable is non tail recursive recursion in Scala?

Since non-tail-recursive calls use stack frames just as Java does, I'd be wary of any recursion that goes beyond, say, 1,000 levels deep, and would thus be wary of using it for most things.
Do people actually use non-tail-recursive functions in Scala? If so, what criteria can I use to decide whether a function can safely be non-tail-recursive?
Also, are there plans to remove this memory restriction in future versions of Scala?
In situations where you cannot use simple tail recursion, for example because you need to alternate between two functions, a mechanism called trampolining has been described.
A more recent and thorough discussion of this topic can be found in Rúnar Bjarnason's paper Stackless Scala With Free Monads.
Personally, I would go the easy route and revert to imperative style if you really hit situations with >1K depth, e.g. use a mutable collection builder.
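As a minimal sketch of that easy route (the function and names here are purely illustrative, using nothing beyond the standard library): a while loop plus a mutable builder handles arbitrary input sizes with constant stack.

import scala.collection.mutable.ListBuffer

// Map over a list iteratively instead of recursively, so depth is bounded
// by heap, not by the JVM call stack.
def mapIterative[A, B](xs: List[A])(f: A => B): List[B] = {
  val out = ListBuffer.empty[B]
  var rest = xs
  while (rest.nonEmpty) {
    out += f(rest.head)
    rest = rest.tail
  }
  out.toList
}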

F# has tail call elimination?

In this talk, in the first 8 minutes, Rúnar explains that Scala has problems with tail call elimination, which makes me wonder: does F# have similar problems? If not, why not?
The problem with Proper Tail Calls in Scala is one of engineering trade-offs. It would be easily possible to add PTCs to Scala: just add a sentence to the SLS. Voilà: PTCs in Scala. From a Language Design perspective, we are done.
Now the poor compiler writers need to implement that spec. Well, compiling into a language with PTCs is easy … but unfortunately, the JVM byte code isn't such a language. Okay, so what about GOTO? Nope. Continuations? Nope. Exceptions (which are known to be equivalent to Continuations)? Ah, now we are getting somewhere! So, we could use exceptions to implement PTCs. Or, alternatively, we could just not use the JVM call stack at all and implement our own stack.
After all, there are multiple Scheme implementations on the JVM, all of them support PTCs just fine. It's a myth that you cannot have PTCs on the JVM, just because the JVM doesn't support them. After all, x86 doesn't have them either, but nonetheless, there are languages running on x86 that have them.
So, if implementing PTCs on the JVM is possible, then why doesn't Scala have them? Like I said above, you could use exceptions or your own stack to implement them. But using exceptions for control flow or implementing your own stack means that everything which expects the JVM call stack to look a certain way would no longer work.
In particular, you would lose pretty much all interoperability with the Java tooling ecosystem (debuggers, visualizers, static analyzers). You would also have to build bridges to interoperate with Java libraries, which would be slow, so you lose interop with the Java library ecosystem as well.
But that is a major design goal of Scala! And that's why Scala doesn't have PTCs.
I call this "Hickey's Theorem", after Rich Hickey, the designer of Clojure who once said in a talk "Tail Calls, Interop, Performance – Pick Two."
You would also present the JIT compiler with some very unusual byte code patterns that it may not know how to optimize well.
If you were to port F# to the JVM, you would basically have to make exactly that choice: do you give up Tail Calls (you can't, because they are required by the Language Spec), do you give up Interop or do you give up Performance? On .NET, you can have all three, because Tail Calls in F# can simply be compiled into Tail Calls in MSIL. (Although the actual translation is more complex than that, and the implementation of Tail Calls in MSIL is buggy in some corner cases.)
This raises the question: why not add Tail Calls to the JVM? Well, that is very hard, due to a design flaw in the JVM byte code. The designers wanted the JVM byte code to have certain safety properties. But instead of designing the byte code language so that you cannot write an unsafe program in the first place (as in Java, say, where you cannot write a program that violates pointer safety because the language simply doesn't give you access to pointers), JVM byte code is itself unsafe and needs a separate byte code verifier to make it safe.
That byte code verifier is based on stack inspection, and Tail Calls change the stack. So, the two are very hard to reconcile, but the JVM simply doesn't work without the byte code verifier. It took a long time and some very smart people to finally figure out how to implement Tail Calls on the JVM without losing the byte code verifier (see A Tail-Recursive Machine with Stack Inspection by Clements and Felleisen and tail calls in the VM by John Rose (JVM lead designer)), so we have now moved from the stage where it was an open research problem to the stage where it is "just" an open engineering problem.
Note that Scala and some other languages do have intra-method direct tail recursion. However, that is pretty boring, implementation-wise: it is just a while loop, and most targets have while loops or something equivalent, e.g. the JVM has intra-method GOTO. Scala also has the scala.util.control.TailCalls object, which is kind of a reified trampoline. (See Stackless Scala With Free Monads by Rúnar Óli Bjarnason for a more general version of this idea, which can eliminate all use of the stack, not just in tail calls.) This can be used to implement a tail-calling algorithm in Scala, but it is then not compatible with the JVM stack, i.e. it doesn't look like a recursive method call to other languages or to a debugger:
import scala.util.control.TailCalls._

// Mutually recursive calls expressed as a trampoline: each step returns a
// TailRec value instead of pushing a JVM stack frame.
def isEven(xs: List[Int]): TailRec[Boolean] =
  if (xs.isEmpty) done(true) else tailcall(isOdd(xs.tail))
def isOdd(xs: List[Int]): TailRec[Boolean] =
  if (xs.isEmpty) done(false) else tailcall(isEven(xs.tail))

isEven((1 to 100000).toList).result

// Non-tail recursive calls can be sequenced monadically with for/yield.
def fib(n: Int): TailRec[Int] =
  if (n < 2) done(n) else for {
    x <- tailcall(fib(n - 1))
    y <- tailcall(fib(n - 2))
  } yield (x + y)

fib(40).result
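To see why the intra-method case mentioned above is "just a while loop", here is a minimal sketch of my own (the names are purely illustrative): the @tailrec version is compiled into a loop much like the hand-written one below.

import scala.annotation.tailrec

// @tailrec only verifies that the self-call is in tail position; the
// compiler then rewrites it into a jump, so this...
@tailrec
def length(xs: List[Int], acc: Int = 0): Int =
  if (xs.isEmpty) acc else length(xs.tail, acc + 1)

// ...behaves like this explicit loop: constant stack, same result.
def lengthLoop(xs0: List[Int]): Int = {
  var xs  = xs0
  var acc = 0
  while (xs.nonEmpty) { xs = xs.tail; acc += 1 }
  acc
}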
Clojure has the recur special form, which similarly makes the tail call explicit (plus a trampoline function for the mutually recursive case).
F# does not have a problem with tail calls. Here is what it does:
If you have a single tail-recursive function, the compiler generates a loop with mutation because this is faster than generating the .tail instruction
In other tail-call positions (e.g. when using continuations or two mutually recursive functions), it generates the .tail instruction and so the tail call is handled by the CLR
By default, tail-call optimization is turned off in Debug mode in Visual Studio, because this makes debugging easier (you can inspect the stack), but you can turn it on in project properties if needed.
In the old days, there used to be problems with the .tail instruction on some runtimes (CLR x64 and Mono), but all of those have now been fixed and everything works fine.
It turns out that for proper tail calls, you have to either compile in "Release Mode" as opposed to the default "Debug Mode", or open your project properties and, under Build, scroll down and check "Generate tail calls". Thanks to Arnavion on IRC.freenode.net #fsharp for the tip.

What can I use to avoid the msgSend function overhead? (Obj-C)

Hello everyone, help me please!
What can I use to avoid the msgSend function overhead?
Maybe the answer is IMP, but I'm not sure.
You could simply inline the function to avoid any function call overhead. Then it would be faster than even a C function! But before you start down this path - are you certain this level of optimisation is warranted? You are more likely to get a better payoff by optimizing the algorithm.
The use of IMP is very rarely required. The method dispatching in Objective-C (especially in the 64-bit runtime) has been very heavily optimised, and exploits many tricks for speed.
What profiling have you done that tells you that method dispatching is the cause of your performance issue? I suggest first you examine the algorithm to see first of all where the most expensive operations are, and see if there is a more efficient way to implement it.
To answer your question, a quick search finds some directly relevant questions similar to yours right here on SO, with some great and detailed answers:
Objective-C optimization
Objective-C and use of SEL/IMP

When should you use XS?

I am writing up a talk on XS and I need to know when the community thinks it is proper to reach for XS.
I can think of at least three reasons to use XS:
You have a C library you want to access in Perl 5
You have a block of code that is provably slowing down your program and it would be faster if written in C
You need access to something only available in XS
Reason 1 is obvious and should need no explanation.
When you really need reason 2 is less obvious. Often you are better off looking at how the code is structured. You should only invoke reason 2 if you have profiled your code and have a benchmark and test suite to prove that the XS code is faster and correct.
Reason 3 is a dangerous reason. It is rare that you actually need to look into Perl's guts to do something, but there is at least one valid case.
In a few cases, better memory management is another reason for using XS. For example, if you have a very large block of objects of some similar type, this can be managed more efficiently through XS. KinoSearch uses this for tokens, for example, where start and end offsets in a large string can be managed more effectively through XS than as a huge pool of scalars. PDL also has a memory management aspect to it, as well as speed.
There are proposals to integrate some of this approach into core Perl in the long term, initially because it offers a chance to make sharing data in threading better: see: http://openparallel.com/2011/07/05/a-new-hope-for-efficient-safe-data-sharing-between-threads-in-perl/.