Scala: List.forall using while loop

I see the following implementation in List.scala from the Scala library:
override final def forall(p: A => Boolean): Boolean = {
  var these: List[A] = this
  while (!these.isEmpty) {
    if (!p(these.head)) return false
    these = these.tail
  }
  true
}
This method could be implemented recursively to get rid of the var and the while loop.
Reading through the books, blogs, and articles available online, I am under the impression that we are supposed to favor the recursive approach in Scala wherever we can. So why does the standard library use a var and a while loop here?

The mutation of var these is not visible outside the forall method, and it likely helps performance:
override final def forall(p: A => Boolean): Boolean = {
  var these: List[A] = this
  ...
  true
} // var is out of scope at this point
so technically forall is still pure from the perspective of its callers. A tail-recursive approach would probably have performance similar to the var-plus-while version, though.
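For illustration, here is a minimal sketch of what such a tail-recursive version could look like (my own formulation, not the library's actual code):
import scala.annotation.tailrec

override final def forall(p: A => Boolean): Boolean = {
  @tailrec def loop(these: List[A]): Boolean =
    if (these.isEmpty) true        // exhausted the list: every element passed
    else if (!p(these.head)) false // found a counterexample: stop early
    else loop(these.tail)          // tail call, compiled down to a jump
  loop(this)
}
With @tailrec the compiler verifies that the call is in tail position, so this compiles to essentially the same loop as the var version.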

The recursive approach is considered more declarative and readable, which is why it is usually preferred. On the other hand, the functional approach implies immutability, among other things, which can lead to performance degradation, so using mutation inside the Scala library is understandable. I assume the authors of the Scala library did not expect users to read its code heavily, as it is supposed to stay under the hood.

Related

Using scala-cats IO type to encapsulate a mutable Java library

I understand that, generally speaking, there is a lot to say about deciding what one wants to model as an effect. This discussion is introduced in Functional Programming in Scala, in the chapter on IO.
Nonetheless, I have not finished the chapter; I was just browsing it end to end before tackling it together with Cats IO.
In the meantime, I have a bit of a situation with some code I need to deliver soon at work.
It relies on a Java library that is all about mutation. That library was started a long time ago, and for legacy reasons I don't see it changing.
Anyway, long story short: is modeling every mutating function as IO a viable way to encapsulate a mutating Java library?
Edit 1 (at request, I add a snippet)
Reading into a model mutates the model rather than creating a new one. I would contrast Jena with Gremlin, for instance, a functional library over graph data.
def loadModel(paths: String*): Model =
  paths.foldLeft(ModelFactory.createOntologyModel(new OntModelSpec(OntModelSpec.OWL_MEM)).asInstanceOf[Model]) {
    case (model, path) =>
      val input = getClass.getClassLoader.getResourceAsStream(path)
      val lang = RDFLanguages.filenameToLang(path).getName
      model.read(input, "", lang)
  }
That was my Scala code, but the Java API as documented on the website looks like this:
// create the resource
Resource r = model.createResource();
// add the property
r.addProperty(RDFS.label, model.createLiteral("chat", "en"))
 .addProperty(RDFS.label, model.createLiteral("chat", "fr"))
 .addProperty(RDFS.label, model.createLiteral("<em>chat</em>", true));
// write out the Model
model.write(System.out);
// create a bag
Bag smiths = model.createBag();
// select all the resources with a VCARD.FN property
// whose value ends with "Smith"
StmtIterator iter = model.listStatements(
    new SimpleSelector(null, VCARD.FN, (RDFNode) null) {
        public boolean selects(Statement s) {
            return s.getString().endsWith("Smith");
        }
    });
// add the Smiths to the bag
while (iter.hasNext()) {
    smiths.add(iter.nextStatement().getSubject());
}
So, there are three solutions to this problem.
1. Simple and dirty
If all the usage of the impure API is contained in a single / small part of the code base, you may just "cheat" and do something like:
def useBadJavaAPI(args): IO[Foo] = IO {
  // Everything inside this block can be imperative and mutable.
}
I said "cheat" because the idea of IO is composition, and a big IO chunk is not really composition. But, sometimes you only want to encapsulate that legacy part and do not care about it.
2. Towards composition
Basically the same as above, but dropping some flatMaps in the middle:
// Instead of:
def useBadJavaAPI(args): IO[Foo] = IO {
  val a = createMutableThing()
  a.add(args)
  val b = a.bar()
  b.computeFoo()
}
// You do something like this:
def useBadJavaAPI(args): IO[Foo] =
  for {
    a      <- IO(createMutableThing())
    _      <- IO(a.add(args))
    b      <- IO(a.bar())
    result <- IO(b.computeFoo())
  } yield result
There are a couple of reasons for doing this:
Because the imperative / mutable API is not contained in a single method / class but in a couple of them, and encapsulating the small steps in IO helps you reason about it.
Because you want to slowly migrate the code to something better.
Because you want to feel better about yourself :p
3. Wrap it in a pure interface
This is basically what many third-party libraries (e.g. Doobie, fs2-blobstore, neotypes) do: wrap a Java library in a pure interface.
Note that the amount of work to be done here is considerably more than in the previous two solutions. It is worth it if the mutable API is "infecting" many places in your codebase, or worse, multiple projects; if so, it makes sense to do this and publish it as an independent module.
(It may also be worth publishing that module as an open-source library; you may end up helping other people and receiving help from other people as well.)
Since this is a bigger task, it is not easy to give a complete answer covering everything you would have to do. It may help to see how those libraries are implemented and to ask more questions, either here or in the Gitter channels.
But I can give you a quick snippet of how it would look:
// First define a pure interface of the operations you want to provide.
trait PureModel[F[_]] { // You may forget about the abstract F and just use IO instead.
  def op1: F[Int]
  def op2(data: List[String]): F[Unit]
}

// Then, in the companion object, define factories.
object PureModel {
  // If the underlying Java object has a close or release action,
  // use a Resource[F, PureModel[F]] instead.
  def apply[F[_]](args)(implicit F: Sync[F]): F[PureModel[F]] = ???
}
Now, how to create the implementation is the tricky part.
Maybe you can use something like Sync to initialize the mutable state.
def apply[F[_]](args)(implicit F: Sync[F]): F[PureModel[F]] =
  F.delay(createMutableState()).map { mutableThing =>
    new PureModel[F] {
      override def op1: F[Int] = F.delay(mutableThing.foo())
      override def op2(data: List[String]): F[Unit] = F.delay(mutableThing.bar(data))
    }
  }
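For completeness, a hypothetical caller could then compose the wrapped operations like any other IO values (SomeArgs is a placeholder for whatever inputs the factory sketch above needs):
import cats.effect.IO

// Build the wrapper once, then sequence its operations.
def program(args: SomeArgs): IO[Int] =
  for {
    model <- PureModel[IO](args)       // allocates the mutable state inside IO
    _     <- model.op2(List("a", "b")) // each op is a suspended, sequenced effect
    n     <- model.op1
  } yield n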

Is there a universal method to create a tail recursive function in Scala?

While checking Intel's BigDL repo, I stumbled upon this method:
private def recursiveListFiles(f: java.io.File, r: Regex): Array[File] = {
  val these = f.listFiles()
  val good = these.filter(f => r.findFirstIn(f.getName).isDefined)
  good ++ these.filter(_.isDirectory).flatMap(recursiveListFiles(_, r))
}
I noticed that it was not tail-recursive and decided to write a tail-recursive version:
private def recursiveListFiles(f: File, r: Regex): Array[File] = {
  @scala.annotation.tailrec
  def recursiveListFiles0(f: Array[File], r: Regex, a: Array[File]): Array[File] = {
    f match {
      case Array() => a
      case htail => {
        val these = htail.head.listFiles()
        val good = these.filter(f => r.findFirstIn(f.getName).isDefined)
        recursiveListFiles0(these.filter(_.isDirectory) ++ htail.tail, r, a ++ good)
      }
    }
  }
  recursiveListFiles0(Array[File](f), r, Array.empty[File])
}
What made this difficult, compared to what I am used to, is the notion that a File can be transformed into an Array[File], which adds another level of depth.
What is the theory behind recursion on data types that have a member like the following?
def listTs[T]: T => Traversable[T]
Short answer
If you generalize the idea and think of it as a monad (a polymorphic thing working for arbitrary type parameters), then you won't be able to write a tail-recursive implementation.
Trampolines try to solve this very problem by providing a way to evaluate a recursive computation without overflowing the stack. The general idea is to create a stream of pairs of (result, computation): at each step you return the result computed so far and a function that creates the next result (a.k.a. a thunk).
From Rich Dougherty’s blog:
A trampoline is a loop that repeatedly runs functions. Each function,
called a thunk, returns the next function for the loop to run. The
trampoline never runs more than one thunk at a time, so if you break
up your program into small enough thunks and bounce each one off the
trampoline, then you can be sure the stack won't grow too big.
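As a concrete illustration, the Scala standard library ships a small trampoline in scala.util.control.TailCalls. A minimal sketch of using it (my example, not from the quoted blog):
import scala.util.control.TailCalls._

// Mutual recursion cannot be @tailrec-optimized, but a trampoline
// turns the nested calls into a heap-allocated loop of thunks.
def isEven(n: Int): TailRec[Boolean] =
  if (n == 0) done(true) else tailcall(isOdd(n - 1))

def isOdd(n: Int): TailRec[Boolean] =
  if (n == 0) done(false) else tailcall(isEven(n - 1))

isEven(1000000).result // true, and no StackOverflowError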
More + References
In the categorical sense, the theory behind such data types is closely related to Cofree Monads, to fold and unfold functions, and, in general, to fixed-point types.
See this fantastic talk, Fun and Games with Fix, Cofree and Doobie by Rob Norris, which discusses a use case very similar to your question.
This article about free monads and trampolines is also related to your first question: Stackless Scala With Free Monads.
See also this part of the Matryoshka docs. Matryoshka is a Scala library implementing recursion schemes around the concept of fixed-point types.
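For a taste of what those fixed-point types look like, here is a minimal sketch in standard recursion-schemes vocabulary (DirF and FileTree are made-up names for illustration, not Matryoshka's API):
// Fixed point of a functor: ties the recursive knot explicitly.
case class Fix[F[_]](unfix: F[Fix[F]])

// A single directory layer; the type parameter marks the recursive positions.
case class DirF[A](name: String, children: List[A])

// The file tree from the question, as the fixed point of DirF.
type FileTree = Fix[DirF]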

Why are the Scala libraries implemented with mutable state?

Why are some methods in Scala's standard library implemented with mutable state?
For instance, the find method of scala.Iterator is implemented as:
def find(p: A => Boolean): Option[A] = {
  var res: Option[A] = None
  while (res.isEmpty && hasNext) {
    val e = next()
    if (p(e)) res = Some(e)
  }
  res
}
It could have been implemented as a @tailrec method, perhaps something like:
def findNew(p: A => Boolean): Option[A] = {
  @tailrec
  def findRec(e: A): Option[A] = {
    if (p(e)) Some(e)
    else {
      if (hasNext) findRec(next())
      else None
    }
  }
  if (hasNext) findRec(next())
  else None
}
Now I suppose one argument could be that using mutable state and a while loop is more efficient, which is understandably very important in library code, but is that really the case compared to a @tailrec method?
There is no harm in having mutable state as long as it is not shared.
In your example there is no way the mutable var could be accessed from outside, so it is not possible for this mutable variable to change due to a side effect.
It's always good to enforce immutability as much as possible, but when performance matters there is nothing wrong with some mutability, as long as it is constrained in a safe way.
NOTE: Iterator is a data structure that is not side-effect free, and this can lead to some weird behavior, but that is another story and in no way the reason for designing a method this way. You'll find methods like that in immutable data structures too.
In this case the tailrec version quite possibly has the same performance as the while loop; I would say the while-loop solution is shorter and more concise.
But iterators are a mutable abstraction anyway, so the gain from having a tail-recursive method to avoid that var, which is local to that short code snippet, is questionable.
Scala is not designed for functional purity but for broadly useful capability. Part of this includes trying to have the most efficient implementations of basic library routines (certainly not universally true, but it often is).
As such, if you have two possible interfaces:
trait Iterator[A] { def next: A }
trait FunctionalIterator[A] { def next: (A, FunctionalIterator[A]) }
and the second one is awkward and slower, it's quite sensible to choose the first.
When a functionally pure implementation is superior for the bulk of use cases, you'll typically find the functionally pure one.
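For concreteness, here is a minimal sketch of what implementing the functional variant entails (my example, showing the per-element allocation that makes it slower):
trait FunctionalIterator[A] { def next: (A, FunctionalIterator[A]) }

// Counting up from a given value: every step allocates a tuple and a
// new iterator, where the mutable Iterator allocates nothing per step.
def from(n: Int): FunctionalIterator[Int] =
  new FunctionalIterator[Int] {
    def next: (Int, FunctionalIterator[Int]) = (n, from(n + 1))
  }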
And when it comes to simply using a while loop vs. recursion, either one is easy enough to maintain, so it is really up to the preferences of the coder. Note that find would have to be marked final in the tailrec case, so while preserves more flexibility:
trait Foo {
  def next: Int
  def foo: Int = {
    var a = next
    while (a < 0) a = next
    a
  }
}
defined trait Foo
trait Bar {
  def next: Int
  @tailrec def bar: Int = {
    val a = next
    if (a < 0) bar else a
  }
}
<console>:10: error: could not optimize @tailrec annotated method bar:
it is neither private nor final so can be overridden
  @tailrec def bar: Int = {
               ^
There are ways to get around this (nested methods, final, redirecting to a private method, etc.), but it tends to add boilerplate to the point where the while version is syntactically more compact.
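For illustration, a sketch of the redirect-to-a-private-method workaround mentioned above (my example, not from the original answer):
import scala.annotation.tailrec

trait Bar {
  def next: Int
  def bar: Int = barImpl // public entry point stays overridable
  @tailrec private def barImpl: Int = {
    val a = next
    if (a < 0) barImpl else a
  }
}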

Costly computation occurring in both isDefined and apply of a PartialFunction

It is quite possible that, to know whether a function is defined at some point, a significant part of computing its value has to be done. In a PartialFunction, when implementing isDefined and apply, both methods will have to do that work. What to do if this common job is costly?
There is the possibility of caching its result, hoping that apply will be called right after isDefined. Definitely ugly.
I often wish that PartialFunction[A, B] were Function[A, Option[B]], which is clearly isomorphic. Or maybe there could be another method in PartialFunction, say applyOption(a: A): Option[B]. With some mixins, implementors would have a choice of implementing either isDefined and apply, or applyOption. Or all of them, to be on the safe side, performance-wise. Clients that test isDefined just before calling apply would be encouraged to use applyOption instead.
However, this is not so. Some major methods in the library, among them collect in collections, require a PartialFunction. Is there a clean (or not so clean) way to avoid paying for computations repeated between isDefined and apply?
Also, is the applyOption(a: A): Option[B] method reasonable? Does it sound feasible to add it in a future version? Would it be worth it?
Why is caching such a problem? In most cases you have a local computation, so as long as you write a wrapper for the caching, you needn't worry about it. I have the following code in my utility library:
class DroppedFunction[-A, +B](f: A => Option[B]) extends PartialFunction[A, B] {
  private[this] var tested = false
  private[this] var arg: A = _
  private[this] var ans: Option[B] = None
  // Compute f(a) at most once for consecutive calls with the same argument.
  private[this] def cache(a: A): Unit = {
    if (!tested || a != arg) {
      tested = true
      arg = a
      ans = f(a)
    }
  }
  def isDefinedAt(a: A) = {
    cache(a)
    ans.isDefined
  }
  def apply(a: A) = {
    cache(a)
    ans.get
  }
}
class DroppableFunction[A, B](f: A => Option[B]) {
  def drop = new DroppedFunction(f)
}
implicit def function_is_droppable[A, B](f: A => Option[B]) = new DroppableFunction(f)
Then, if I have an expensive computation, I write a method A => Option[B] and do something like (f _).drop to use it in collect or whatnot. (If you wanted to do it inline, you could create a method that takes an A => Option[B] and returns a partial function.)
(The opposite transformation, from PartialFunction to A => Option[B], is called lifting, hence the "drop"; "unlift" is, I think, a more widely used term for the opposite operation.)
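A quick usage sketch, assuming the DroppedFunction machinery above is in scope (parsePositive is a made-up example; the standard library's pf.lift and Function.unlift perform the same conversions, just without the caching):
// An expensive check-and-compute step, done once per element thanks to the cache.
def parsePositive(s: String): Option[Int] =
  scala.util.Try(s.toInt).toOption.filter(_ > 0)

val xs = List("1", "-2", "x", "3")
val ys = xs.collect((parsePositive _).drop) // List(1, 3)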
Have a look at this thread, Rethinking PartialFunction. You're not the only one wondering about this.
This is an interesting question, and I'll give my two cents.
First off, resist the urge for premature optimization. Make sure the partial function is the problem. I was amazed at how fast they are in some cases.
Now, assuming there is a problem, where would it come from?
A large number of case clauses
Complex pattern matching
Some complex computation in the if guards
One option is to find ways to fail fast. Break the pattern matching into layers, then chain partial functions. This way you can fail the match early. Also, extract repeated sub-matching. For example:
Let's assume OddEvenList is an extractor that breaks a list into an odd list and an even list:
val pf1: PartialFunction[List[Int], R] = {
  case OddEvenList(1 :: ors, 2 :: ers) => ...
  case OddEvenList(3 :: ors, 4 :: ers) => ...
}
Break it into two parts: one that matches the split, and one that tries to match the result (to avoid repeated computation). However, this may require some re-engineering:
val pf2: PartialFunction[(List[Int], List[Int]), R] = {
  case (1 :: ors, 2 :: ers) => R1
  case (3 :: ors, 4 :: ers) => R2
}
val pf1: PartialFunction[List[Int], R] = {
  case OddEvenList(ors, ers) if pf2.isDefinedAt((ors, ers)) => pf2((ors, ers))
}
I have used this when progressively reading XML files that had a rather inconsistent format.
Another option is to compose partial functions using andThen, although a quick test here seemed to indicate that only the first one is actually tested.
There is absolutely nothing wrong with a caching mechanism inside the partial function, provided that:
the function always returns the same output when passed the same argument
it has no side effects
the cache is completely hidden from the rest of the world
Such a cached function is not distinguishable from a plain old pure partial function...

Pros and Cons of choosing def over val

I'm asking a slightly different question than this one. Suppose I have a code snippet:
def foo(i: Int): List[String] = {
  val s = i.toString + "!" // using val
  s :: Nil
}
This is functionally equivalent to the following:
def foo(i: Int): List[String] = {
  def s = i.toString + "!" // using def
  s :: Nil
}
Why would I choose one over the other? Obviously I would assume the second has slight disadvantages:
creating more bytecode (the inner def is lifted to a method in the class)
a runtime performance overhead of invoking a method over accessing a value
non-strict evaluation means I could easily access s twice (i.e. unnecessarily redo a calculation)
The only advantage I can think of is:
non-strict evaluation of s means it is only evaluated if it is used (but then I could just use a lazy val)
What are people's thoughts here? Is there a significant downside to making all my inner vals defs?
1)
One answer I didn't see mentioned is that the stack frame for the method you're describing could actually be smaller. Each val you declare will occupy a slot on the JVM stack, whereas a value obtained from a def is consumed in the first expression that uses it. Even if the def references something from the environment, the compiler passes those references along as parameters to the lifted method.
HotSpot should optimize both of these things, or so some people claim. See:
http://www.ibm.com/developerworks/library/j-jtp12214/
Since the inner method gets compiled into a regular private method behind the scenes, and it is usually very small, the JIT compiler might choose to inline it and then optimize it. This could save time by allocating smaller stack frames (?), or, by having fewer elements on the stack, make local variable access quicker.
But take this with a (big) grain of salt; I haven't actually made extensive benchmarks to back up this claim.
2)
In addition, to expand on Kevin's valid reply, the stability a val provides also means that you can use it with path-dependent types, something you can't do with a def, since the compiler doesn't check its purity.
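A small illustration of that point (my own example, not from the original answer):
class Outer { class Inner }

def demo(): Unit = {
  val o = new Outer            // a val is a stable identifier
  val i: o.Inner = new o.Inner // so the path-dependent type o.Inner is legal

  def d = new Outer            // a def is not stable
  // val j: d.Inner = ???      // does not compile: "stable identifier required"
}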
3)
For another reason you might want to use a def, see a related question asked not so long ago:
Functional processing of Scala streams without OutOfMemory errors
Essentially, using defs to produce Streams ensures that no additional references to these objects exist, which is important for the GC. Since Streams are lazy anyway, the overhead of creating them is probably negligible even if you have multiple defs.
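A sketch of what that means in practice (pre-2.13 Stream, which memoizes the cells it has produced):
// A val pins the head of the stream, so every memoized cell reached
// during a traversal stays strongly reachable: heap use grows without bound.
val pinned: Stream[Int] = Stream.from(1)

// A def returns a fresh stream each time; no field retains the head,
// so cells behind the traversal can be garbage collected.
def unpinned: Stream[Int] = Stream.from(1)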
A val is strict; it's given a value as soon as you define it.
Internally, the compiler will mark it as STABLE, equivalent to final in Java. This should allow the JVM to make all sorts of optimisations; I just don't know what they are :)
I can see an advantage in the fact that you are less bound to a location when using a def than when using a val.
This is not a technical advantage, but it allows for better structuring in some cases.
So, a stupid example (please edit this answer if you've got a better one): this is not possible with val:
def foo(i: Int): List[String] = {
  def ret = s :: Nil
  def s = i.toString + "!"
  ret
}
There may be cases where this is important, or just convenient.
(So, basically, you can achieve the same with a lazy val, but if s is called at most once, a def will probably be faster than a lazy val.)
For a local declaration like this (with no arguments, evaluated precisely once and with no code evaluated between the point of declaration and the point of evaluation) there is no semantic difference. I wouldn't be surprised if the "val" version compiled to simpler and more efficient code than the "def" version, but you would have to examine the bytecode and possibly profile to be sure.
In your example I would use a val. I think the val/def choice is more meaningful when declaring class members:
class A { def a0 = "a"; def a1 = "a" }
class B extends A {
  var c = 0
  override def a0 = { c += 1; "a" + c }
  override val a1 = "b"
}
In the base class, using def allows the subclass to override with, possibly, a def that does not return a constant. Or it could override with a val. So that gives more flexibility than a val.
Edit: one more use case for def over val is when an abstract class has a "val" whose value should be provided by a subclass:
abstract class C { def f: SomeObject }
new C { val f = new SomeObject(...) }