How to write this three-liner as a one-liner?

I like the way you can write one-liner methods in Scala, e.g. with List(1, 2, 3).map(..).foreach(..).
But there is a certain situation that sometimes comes up when writing Scala code where things get a bit ugly. Example:
def foo(a: A): Int = {
  // do something with 'a' which results in an integer
  // e.g. 'val result = a.calculateImportantThings'
  // clean up object 'a'
  // e.g. 'a.cleanUp'
  // return the result of the previous calculation
  return result
}
In this situation we have to return a result, but we cannot return it directly after the calculation is done, because we have to do some cleanup before returning.
I always have to write a three-liner. Is there also a possibility to write a one-liner to do this (without changing the class of A, because this may be an external library which cannot be changed)?

There are clearly side-effects involved here (otherwise the order of invocation of calculateImportantThings and cleanUp wouldn't matter), so you would be well advised to reconsider your design.
However, if that's not an option you could try something like,
scala> class A { def cleanUp {} ; def calculateImportantThings = 23 }
defined class A
scala> val a = new A
a: A = A@927eadd
scala> (a.calculateImportantThings, a.cleanUp)._1
res2: Int = 23
The tuple value (a, b) is equivalent to the application Tuple2(a, b) and the Scala specification guarantees that its arguments will be evaluated left to right, which is what you want here.
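Applied to the question's foo, the same trick fits on one line (a sketch, reusing the hypothetical calculateImportantThings/cleanUp from the question):
// arguments are evaluated left to right, so the calculation runs before the
// cleanup, and ._1 keeps the calculation's result
def foo(a: A): Int = (a.calculateImportantThings, a.cleanUp)._1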

This is a perfect use-case for try/finally:
try a.calculateImportantThings finally a.cleanUp
This works because try/catch/finally is an expression in Scala, meaning it returns a value, and, even better, you get the cleanup whether or not the calculation throws an exception.
Example:
scala> val x = try 42 finally println("complete")
complete
x: Int = 42
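Applied to the question's foo, this becomes (a sketch, again assuming the hypothetical methods on A):
// the calculation's value is the result; cleanUp always runs afterwards,
// even if calculateImportantThings throws
def foo(a: A): Int = try a.calculateImportantThings finally a.cleanUp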

There is, in fact, a Haskell operator for just such an occasion:
(<*) :: Applicative f => f a -> f b -> f a
For example:
ghci> getLine <* putStrLn "Thanks for the input!"
asdf
Thanks for the input!
"asdf"
All that remains then is to discover the same operator in scalaz, since scalaz usually replicates everything that Haskell has. You can wrap values in Identity, since Scala doesn't have IO to classify effects. The result would look something like this:
import scalaz._
import Scalaz._
def foo(a: A): Int =
  (a.calculateImportantThings.pure[Identity] <* a.cleanUp.pure[Identity]).value
This is rather obnoxious, though, since we have to explicitly wrap the side-effecting computations in Identity. Well the truth is, scalaz does some magic that implicitly converts to and from the Identity container, so you can just write:
def foo(a: A): Int = Identity(a.calculateImportantThings) <* a.cleanUp
You do need to hint to the compiler somehow that the leftmost thing is in the Identity monad. The above was the shortest way I could think of. Another possibility is to use Identity() *> foo <* bar, which will invoke the effects of foo and bar in that order, and then produce the value of foo.
To return to the ghci example:
scala> import scalaz._; import Scalaz._
import scalaz._
import Scalaz._
scala> val x : String = Identity(readLine) <* println("Thanks for the input!")
<< input asdf and press enter >>
Thanks for the input!
x: String = asdf

Maybe you want to use a kestrel combinator? It is defined as follows:
K x y = x
So you call it with the value you want to return and some side-effecting operation you want to execute.
You could implement it as follows:
def kestrel[A](x: A)(f: A => Unit): A = { f(x); x }
... and use it in this way:
kestrel(result)(_ => a.cleanUp)
More information can be found on Debasish Ghosh's blog.
[UPDATE] As Yaroslav correctly points out, this is not the best application of the kestrel combinator. But it should be no problem to define a similar combinator using a function without arguments, so instead of
f: A => Unit
one could use
f: () => Unit
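A minimal sketch of that variant (a hypothetical sideEffect helper, not from the linked post):
// returns x after running the side effect; unlike the kestrel, the effect never sees x
def sideEffect[A](x: A)(f: () => Unit): A = { f(); x }

// usage against the question's hypothetical A:
// sideEffect(a.calculateImportantThings)(() => a.cleanUp)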

class Test {
  def cleanUp() {}
  def getResult = 1
}

def autoCleanup[A <: Test, T](a: A)(x: => T) = {
  try { x } finally { a.cleanUp }
}

def foo[A <: Test](a: A): Int = autoCleanup(a) { a.getResult }

foo(new Test)
You can take a look at the scala-arm project for a type-class-based solution.
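The same idea can also be written without the Test upper bound by passing the cleanup as a by-name block (a sketch, not the scala-arm API):
// runs body, always runs cleanup afterwards, and returns body's value
def withCleanup[T](cleanup: => Unit)(body: => T): T =
  try body finally cleanup

// usage with the question's hypothetical A:
// def foo(a: A): Int = withCleanup(a.cleanUp)(a.calculateImportantThings)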

Starting with Scala 2.13, the chaining operation tap can be used to apply a side effect (in this case the cleanup of A) to any value while returning the original value untouched:
def tap[U](f: (A) => U): A
import util.chaining._
// class A { def cleanUp { println("clean up") } ; def calculateImportantThings = 23 }
// val a = new A
val x = a.calculateImportantThings.tap(_ => a.cleanUp)
// clean up
// x: Int = 23
In this case tap is a bit abused, since we don't even use the value it's applied to (the 23 from a.calculateImportantThings) to perform the side effect (a.cleanUp).
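For contrast, a small sketch of tap used as intended, where the side effect actually consumes the value (the println is just an illustrative effect, reusing the A from the comments above):
import scala.util.chaining._

val y = a.calculateImportantThings
  .tap(result => println(s"calculated $result")) // the side effect sees the 23
  .tap(_ => a.cleanUp)                           // the cleanup; the value passes through unchanged
// y: Int = 23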

Related

Get sequence of types from HList in macro

Context: I'm trying to write a macro that is statically aware of a non-fixed number of types. I'm trying to pass these types as a single type parameter using an HList. It would be called as m[ConcreteType1 :: ConcreteType2 :: ... :: HNil](). The macro then builds a match statement which requires some implicits to be found at compile time, a bit like how a JSON serialiser might demand implicit encoders. I've got a working implementation of the macro when used on a fixed number of type parameters, as follows:
def m[T1, T2](): Int = macro mImpl[T1, T2]

def mImpl[T1: c.WeakTypeTag, T2: c.WeakTypeTag](c: Context)(): c.Expr[Int] = {
  import c.universe._
  val t = Seq(
    weakTypeOf[T1],
    weakTypeOf[T2]
  ).map(tpe => cq"a: $tpe => externalGenericCallRequiringImplicitsAndReturningInt(a)")
  val cases = q"input match { case ..$t }"
  c.Expr[Int](cases)
}
Question: If I have a WeakTypeTag[T] for some T <: HList, is there any way to turn that into a Seq[Type]?
def hlistToSeq[T <: HList](hlistType: WeakTypeTag[T]): Seq[Type] = ???
My instinct is to write a recursive match which turns each T <: HList into either H :: T or HNil, but I don't think that kind of matching exists in Scala.
I'd like to hear of any other way to get an arbitrarily sized list of types into a macro, bearing in mind that I would need a Seq[Type], not an Expr[Seq[Type]], as I need to map over them in macro code.
A way of writing a similar 'macro' in Dotty would be interesting too - I'm hoping it'll be simpler there, but haven't fully investigated yet.
Edit (clarification): The reason I'm using a macro is that I want a user of the library I'm writing to provide a collection of types (perhaps in the form of an HList), which the library can iterate over and expect implicits relating to. I say library, but it will be compiled together with its uses in order for the macros to run; in any case it should be reusable with different collections of types. It's a bit confusing, but I think I've worked this bit out - I just need to be able to build macros that can operate on lists of types.
Currently you don't seem to need macros; type classes or shapeless.Poly can be enough.
def externalGenericCallRequiringImplicitsAndReturningInt[C](a: C)(implicit
  mtc: MyTypeclass[C]): Int = mtc.anInt

trait MyTypeclass[C] {
  def anInt: Int
}

object MyTypeclass {
  implicit val mtc1: MyTypeclass[ConcreteType1] = new MyTypeclass[ConcreteType1] {
    override val anInt: Int = 1
  }
  implicit val mtc2: MyTypeclass[ConcreteType2] = new MyTypeclass[ConcreteType2] {
    override val anInt: Int = 2
  }
  //...
}

val a1: ConcreteType1 = null
val a2: ConcreteType2 = null

externalGenericCallRequiringImplicitsAndReturningInt(a1) //1
externalGenericCallRequiringImplicitsAndReturningInt(a2) //2
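If the macro route is still needed, the HList type parameter can be deconstructed with the reflection API rather than a type-level match. A rough sketch (hypothetical helper, assuming shapeless on the classpath and a Scala 2 blackbox macro Context named c):
// walks T = A :: B :: ... :: HNil and collects the element types;
// anything that is not shapeless.:: (e.g. HNil) terminates the recursion
def hlistToSeq(c: scala.reflect.macros.blackbox.Context)(tpe: c.Type): Seq[c.Type] = {
  import c.universe._
  val hcons = typeOf[shapeless.::[_, _]].typeConstructor.typeSymbol
  def loop(t: Type, acc: Vector[Type]): Vector[Type] = t.dealias match {
    case TypeRef(_, sym, List(head, tail)) if sym == hcons => loop(tail, acc :+ head)
    case _ => acc
  }
  loop(tpe.dealias, Vector.empty)
}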

Fibonacci memoization in Scala with Memo.mutableHashMapMemo

I am trying to implement the Fibonacci function in Scala with memoization.
One example given here uses a case statement:
Is there a generic way to memoize in Scala?
import scalaz.Memo
lazy val fib: Int => BigInt = Memo.mutableHashMapMemo {
  case 0 => 0
  case 1 => 1
  case n => fib(n-2) + fib(n-1)
}
It seems the variable n is implicitly bound to the first argument, but I get a compilation error if I replace n with _.
Also, what advantage does the lazy keyword have here? The function seems to work equally well with and without it.
However, I wanted to generalize this to a more generic function definition with appropriate typing:
import scalaz.Memo
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
  var value : Int = 0
  if( n <= 1 ) { value = n; }
  else { value = fibonachi(n-1) + fibonachi(n-2) }
  return value
}
but I get the following compilation error
cmd10.sc:4: type mismatch;
 found   : Int => Int
 required: Int
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
                              ^
Compilation Failed
So I am trying to understand the generic way of adding a memoization annotation to a Scala def function.
One way to achieve a Fibonacci sequence is via a recursive Stream.
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
An interesting aspect of streams is that, if something holds on to the head of the stream, the calculation results are auto-memoized. So, in this case, because the identifier fib is a val and not a def, the value of fib(n) is calculated only once and simply retrieved thereafter.
However, indexing a Stream is still a linear operation. If you want to memoize that away you could create a simple memo-wrapper.
def memo[A,R](f: A=>R): A=>R =
  new collection.mutable.WeakHashMap[A,R] {
    override def apply(a: A) = getOrElseUpdate(a,f(a))
  }
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
val mfib = memo(fib)
mfib(99) //res0: BigInt = 218922995834555169026
The more general question I am trying to ask is how to take a pre-existing def function and add a mutable/immutable memoization annotation/wrapper to it inline.
Unfortunately there is no way to do this in Scala unless you are willing to use a macro annotation (which feels like overkill to me) or some very ugly design.
The contradicting requirements are "def" and "inline". The fundamental reason for this is that whatever you do inline with the def can't create any new place to store the memoized values (unless you use a macro that can re-write code introducing new vals/vars). You may try to work around this using some global cache, but this IMHO falls under the "ugly design" branch.
Scalaz's Memo is designed to create a val of type Function[K,V], which is often written in Scala as just K => V, instead of a def. In this way the produced val can also contain the storage for the cached values. On the other hand, syntactically the difference between using a def method and a K => V function is minimal, so this works pretty well. Since the Scala compiler knows how to convert a def method into a function, you can wrap a def with Memo, but you can't get a def out of it. If for some reason you need a def anyway, you'll need another wrapper def.
import scalaz.Memo

object Fib {
  def fib(n: Int): BigInt = n match {
    case 0 => BigInt(0)
    case 1 => BigInt(1)
    case _ => fib(n - 2) + fib(n - 1)
  }

  // "fib _" converts the method into a function. Sometimes the "_" can be omitted
  // and the compiler will infer it, but sometimes it needs this explicit hint.
  lazy val fib_mem_val: Int => BigInt = Memo.mutableHashMapMemo(fib _)

  def fib_mem_def(n: Int): BigInt = fib_mem_val(n)
}

println(Fib.fib(5))
println(Fib.fib_mem_val(5))
println(Fib.fib_mem_def(5))
Note how there is no difference in the syntax of calling fib, fib_mem_val and fib_mem_def, although fib_mem_val is a value. You may also try this example online.
Note: beware that some Scalaz Memo implementations are not thread-safe.
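If thread safety matters, a minimal hand-rolled alternative (a sketch, not part of Scalaz) is to back the function with a concurrent map; under contention f may still be evaluated more than once for the same key, but the cache itself stays consistent:
import scala.collection.concurrent.TrieMap

def memoize[A, B](f: A => B): A => B = {
  val cache = TrieMap.empty[A, B]     // lock-free concurrent map from the standard library
  a => cache.getOrElseUpdate(a, f(a)) // compute once per key (best effort), then reuse
}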
As for the lazy part, the benefit is typical for any lazy val: the actual value with the underlying storage will not be created until the first usage. If the function will be used anyway, I see no benefit in declaring it lazy.

Scala flattening an Option around a higher kinded type, looking for a more idiomatic approach

Given a higher-kinded type M and a Monad type class, I can operate on values within M through a for-comprehension. Working with a function that returns Options, I'm looking for a more appropriate way to flatten these Options than the solution I have, which is as follows:
class Test[M[+_]: Monad](calc: Calculator[M]) {
  import Monad._

  def doSomething(x: Float): M[Option[Float]] = {
    for {
      y: Option[Float] <- calc.divideBy(x) // divideBy returns M[Option[Float]]
      z: Option[Float] <- y.fold[M[Option[Float]]](implicitly[Monad[M]].point(None))(i => calc.divideBy(i))
    } yield z
  }
}
So it's the following I'm looking to correct:
y.fold[M[Option[Float]]](implicitly[Monad[M]].point(None))(i => calc.divideBy(i))
Also the case where, instead of calling the second divideBy, I call multiplyBy, which returns M[Float]:
y.fold[M[Option[Float]]](implicitly[Monad[M]].point(None))(i => calc.multiplyBy(i).map(Some(_)))
Maybe this is a case for monad transformers, but I'm not sure how to go about it.
It seems likely that monad transformers can help you here. For example, the following compiles and I think does roughly what you want:
import scalaz._, Scalaz._

abstract class Calculator[M[_]: Monad] {
  def divideBy(x: Float): M[Option[Float]]
  def multiplyBy(x: Float): M[Float]
}

class Test[M[_]: Monad](calc: Calculator[M]) {
  def doSomething(x: Float): OptionT[M, Float] = for {
    y <- OptionT(calc.divideBy(x))
    z <- calc.multiplyBy(y).liftM[OptionT]
  } yield z
}
Now doSomething returns an OptionT[M, Float], which is a kind of wrapper for M[Option[Float]] that allows you to work with the contents all the way inside the Option monadically. To get back an M[Option[Float]] from the OptionT[M, Float] you can just use the run method.
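For example, a caller that still wants the plain nested shape can unwrap at the edge (a sketch, assuming the definitions and scalaz imports above):
// hypothetical caller: run the OptionT back down to M[Option[Float]]
def doSomethingUnwrapped[M[_]](test: Test[M], x: Float): M[Option[Float]] =
  test.doSomething(x).run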

Can I overload parenthesis in Scala?

Trying to figure out how to overload parentheses on a class.
I have this code:
class App(values: Map[String,String])
{
// do stuff
}
I would like to be able to access the values Map this way:
var a = new App(Map("1" -> "2"))
a("1") // same as a.values("1")
Is this possible?
You need to define an apply method.
class App(values: Map[String,String]) {
  def apply(x: String) = values(x)
  // ...
}
For completeness, it should be said that your apply can take multiple values, and that update works as the dual of apply, allowing "parentheses overloading" on the left-hand side of assignments:
import scala.collection.mutable

class PairMap[A, B, C] {
  val contents: mutable.Map[(A, B), C] = mutable.Map.empty[(A, B), C]

  def apply(a: A, b: B): C = contents((a, b))
  def update(a: A, b: B, c: C): Unit = contents.put((a, b), c)
}
val foo = new PairMap[String, Int, Int]()
foo("bar", 42) = 6
println(foo("bar", 42)) // prints 6
The primary value of all this is that it keeps people from suggesting extra syntax for things that had to be special-cased in earlier C-family languages (e.g. array element assignment and fetch). It's also handy for factory methods on companion objects. Other than that, care should be taken, as it's one of those things that can easily make your code too compact to actually be readable.
As others have already noted, you want to overload apply:
class App(values: Map[String,String]) {
  def apply(s: String) = values(s)
}
While you're at it, you might want to overload the companion object apply also:
object App {
  def apply(m: Map[String,String]) = new App(m)
}
Then you can:
scala> App(Map("1" -> "2")) // Didn't need to call new!
res0: App = App@5c66b06b
scala> res0("1")
res1: String = 2
though whether this is a benefit or a confusion will depend on what you're trying to do.
I think it works using apply: How does Scala's apply() method magic work?

Can Scala allow free Type Parameters in arguments (are Scala Type Parameters first class citizens?)?

I have some Scala code that does something nifty with two different versions of a type-parameterized function. I have simplified this down a lot from my application, but in the end my code is full of calls of the form w(f[Int], f[Double]), where w() is my magic method. I would love to have a more magic method like z(f) = w(f[Int], f[Double]), but I can't get any syntax like z(f[Z]: Z => Z) to work, as it looks (to me) like function arguments cannot have their own type parameters. Here is the problem as a Scala code snippet.
Any ideas? A macro could do it, but I don't think those are part of Scala.
object TypeExample {
  def main(args: Array[String]): Unit = {
    def f[X](x: X): X = x            // parameterized fn
    def v(f: Int => Int): Unit = { } // function that operates on an Int to Int function
    v(f)      // applied, types correct
    v(f[Int]) // applied, types correct

    def w[Z](f: Z => Z, g: Double => Double): Unit = {} // function that operates on two functions
    w(f[Int], f[Double]) // works

    // want something like this: def z[Z](f[Z]: Z=>Z) = w(f[Int], f[Double])
    // a type-parameterized function that takes a single type-parameterized function as an
    // argument and then specializes the argument-function to two different types,
    // i.e. a single-argument version of w() (or wrapper)
  }
}
You can do it like this:
trait Forall {
  def f[Z]: Z => Z
}

def z(u: Forall) = w(u.f[Int], u.f[Double])
Or using structural types:
def z(u : {def f[Z] : Z=>Z}) = w(u.f[Int], u.f[Double])
But this will be slower than the first version, since it uses reflection.
EDIT: This is how you use the second version:
scala> object f1 {def f[Z] : Z=>Z = x => x}
defined module f1
scala> def z(u : {def f[Z] : Z=>Z}) = (u.f[Int](0), u.f[Double](0.0))
z: (AnyRef{def f[Z]: (Z) => Z})(Int, Double)
scala> z(f1)
res0: (Int, Double) = (0,0.0)
For the first version, make f1 extend Forall, or simply:
scala> z(new Forall{def f[Z] : Z=>Z = x => x})
If you're curious, what you're talking about here is called "rank-k polymorphism." See Wikipedia. In your case, k = 2. Some translating:
When you write
f[X](x : X) : X = ...
then you're saying that f has type "forall X.X -> X"
What you want for z is the type "(forall Z.Z -> Z) -> Unit". That extra pair of parentheses is a big difference. In terms of the Wikipedia article, it puts the forall qualifier before 2 arrows instead of just 1. The type variable can't be instantiated just once and carried through; it potentially has to be instantiated to many different types. (Here "instantiation" doesn't mean object construction, it means assigning a type to a type variable for type checking.)
As alexy_r's answer shows, this is encodable in Scala using objects rather than straight function types, essentially using classes/traits as the parens. He seems to have left you hanging a bit in terms of plugging it into your original code, though, so here it is:
// this is your code
object TypeExample {
  def main(args: Array[String]): Unit = {
    def f[X](x: X): X = x            // parameterized fn
    def v(f: Int => Int): Unit = { } // function that operates on an Int to Int function
    v(f)      // applied, types correct
    v(f[Int]) // applied, types correct

    def w[Z](f: Z => Z, g: Double => Double): Unit = {} // function that operates on two functions
    w(f[Int], f[Double]) // works

    // This is new code
    trait ForAll {
      def g[X](x: X): X
    }

    def z(forall: ForAll) = w(forall.g[Int], forall.g[Double])
    z(new ForAll { def g[X](x: X) = f(x) })
  }
}
I don't think what you want to do is possible.
Edit:
My previous version was flawed. This does work:
scala> def z(f: Int => Int, g: Double => Double) = w(f, g)
z: (f: (Int) => Int,g: (Double) => Double)Unit
scala> z(f, f)
But, of course, it is pretty much what you have.
I do not think it is even possible for it to work, because type parameters exist only at compile time; at run time there is no such thing. So it doesn't even make sense to me to pass a parameterized function instead of a function with the type parameters inferred by Scala.
And, no, Scala has no macro system.