Can I apply argument defaults when using partial functions in Scala

I have defined two partial functions (hashes), which I expect to take an optional second Boolean parameter:
def SHA1 = hash(MessageDigest.getInstance("SHA-1")) _
def MD5 = hash(MessageDigest.getInstance("MD5")) _

private def hash(algorithm: HashAlgorithm)(s: String, urlencode: Boolean = false) = {
  val form = if (urlencode) "%%%02X" else "%02X"
  (algorithm.digest(s.getBytes) map (form format _)).mkString
}
When I call the function with both parameters, it compiles, but with just one parameter I get a compilation error:
// First 3 tests are fine
val test1 = hash(MessageDigest.getInstance("SHA-1"))("foo", true)
val test2 = hash(MessageDigest.getInstance("SHA-1"))("foo")
val test3 = SHA1("foo", true)
// not enough arguments for method apply: (v1: String, v2: Boolean)String in trait Function2. Unspecified value parameter v2.
val test4 = SHA1("foo")
I just refactored this to use partial functions, and before I refactored I could force the hash function to use the default without any problem.
Any ideas why the partial function implementation fails to permit default arguments? Am I doing something wrong using both partial functions and currying together?

When you use partial application to generate a function, you lose the default arguments. A method is a static thing, so the compiler knows where to look up the default value; a function can be passed around into different contexts, so the compiler will not, in general, have the information it needs to apply the default parameter.
To think about it another way, functions only know how many arguments they have. There's just one method, apply, that you pass parameters into; otherwise you'd need some way (different types, presumably) to distinguish, for example, Function2-that-must-take-two-parameters and Function2-that-can-be-called-with-one-parameter-also-because-there-is-a-stored-default.
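If you want callers of SHA1 and MD5 to keep the default, one workaround (a minimal sketch reusing the question's names) is to define them as plain methods that forward to hash, so the default is resolved at each call site:

def SHA1(s: String, urlencode: Boolean = false) =
  hash(MessageDigest.getInstance("SHA-1"))(s, urlencode)

def MD5(s: String, urlencode: Boolean = false) =
  hash(MessageDigest.getInstance("MD5"))(s, urlencode)

Because these are methods rather than eta-expanded Function2 values, SHA1("foo") compiles and uses urlencode = false.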

Related

Scala Function Overloading Anomaly

In Scala, why would this overload be allowed?
class log {
  def LogInfo(m: String, properties: Map[String, String]): Unit = {
    println(m)
  }

  def LogInfo(m: String, properties: Map[String, String], c: UUID = null): Unit = {
    println(m + c.toString())
  }
}
In the second definition of the LogInfo function, I have set the extra parameter to a default value of null. When I make the following call, it will call the first overload.
val l: log = new log()
val props: Map[String, String] = Map("a" -> "1")
l.LogInfo("message", props)
Why would it not throw an exception? With a default value, I would have thought both definitions could look the same.
An exception wouldn't be thrown here because the compiler chooses the first overload as the applicable one. This has to do with the way overload resolution works with default arguments. A strong hint that alternatives employing defaults are discarded is the following line from the specification:
Otherwise, let CC be the set of applicable alternatives which don't employ any default argument in the application to e1,…,em.
This has to do with the way the Scala compiler emits JVM byte code for these two methods. If we compile them and look behind the curtains, we'll see (omitting the actual byte code for brevity):
public class testing.ReadingFile$log$1 {
  public void LogInfo(java.lang.String,
      scala.collection.immutable.Map<java.lang.String, java.lang.String>);
    Code:

  public void LogInfo(java.lang.String,
      scala.collection.immutable.Map<java.lang.String, java.lang.String>,
      java.util.UUID);
    Code:

  public java.util.UUID LogInfo$default$3();
    Code:
       0: aconst_null
       1: areturn
}
You can see that the compiler actually emitted two methods, one taking two arguments and one taking three. Additionally, it synthesized a method called LogInfo$default$3 (the name is meaningful: $3 means "the default value for the third parameter"), which returns the default value for the c parameter of the second overload. If the overload with the default argument were invoked, LogInfo$default$3 would be called to supply the missing value.
Both methods are applicable, but overloading resolution specifically tosses out the application that requires default args:
applicable alternatives which don't employ any default argument
http://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#overloading-resolution
As to "why", imagine the overload has many default parameters, such that most applications of it don't look like invocations of the first method.
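To make the resolution concrete, here is a minimal sketch using the question's class (the UUID value is arbitrary):

val l = new log()
val props = Map("a" -> "1")

l.LogInfo("message", props)                     // two-argument overload, no default needed
l.LogInfo("message", props, UUID.randomUUID())  // three-argument overload

Supplying the third argument explicitly is the only way to select the overload that declares the default.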

Scala Fork-Join-All With Multiple Generic Types and 1 Generic Unit of Work

I'm attempting to write a method which accepts multiple generic types and takes as an argument a unit of work to execute.
The idea is that the unit of work is a common function that itself is generic. For the sake of example, let's say it's something like the following:
def loadModelRdd[T: TypeTag](sc: SparkContext): RDD[T] = {
  ...
}
loadModelRdd() will construct an RDD of the given type after some internal processing like loading the Model information, etc.
A prototype method I've been hacking on looks something like the following (non-working):
def forkAll[A: Manifest, B: Manifest](work: => RDD[_]): (RDD[A], RDD[B]) = {
  def aFuture = Future { work } // How can I notify that this work call returns type A?
  def bFuture = Future { work } // How can I notify that this work call returns type B?

  val res = for {
    a <- aFuture
    b <- bFuture
  } yield (a.asInstanceOf[A], b.asInstanceOf[B])

  Await.result(res, 10.seconds)
}
This is a shortened version of the code I'm working on as I'm actually looking at accepting as many as 10 different types.
As you can see, the overall goal of the forkAll method is to wrap the unit of work in a Future, fork-join the execution of the unit of work for each type, then return the results as a Tuple'd result. An example consumer statement would be:
val (a, b) = forkAll[ClassA, ClassB](loadModelRdd)
i.e. I want to fork-join at this point and wait for the results, but I want the executions to run in parallel and then be collected back to the Driver (the Spark Driver, to be specific).
The problem is I'm not sure how to coerce the type returned by the unit of work within forkAll when constructing the Future {} blocks. Without the forkAll, the implementation looked like the following:
val resA = loadModelRdd[ClassA](sc)
val resB = loadModelRdd[ClassB](sc)
...
I am looking at doing this for two reasons:
1. To abstract the details of fork-join for any unit of work which matches this model.
2. A version of this code, which explicitly states what the unit of work is, is working in production and was responsible for cutting the execution time of a long-running block by close to half. I have a couple of execution steps where this pattern could be applied.
Is this something that is possible in Scala's type system? Or should I look at this problem from a different perspective? I've tried a couple of implementations (including one described here) but I haven't quite found one that fits my current view of the problem.
Please let me know if there is any additional information needed.
Thanks!
Short answer: Scala does not allow functions with type parameters, so what you want is not exactly possible.
You are attempting to pass a method with a type parameter. Although methods are allowed to have type parameters, functions are not. When you try to pass a method, it acts like an anonymous function, so you must specify a type.
However, since methods do allow type parameters, you can take advantage of this by creating an abstract class that will do your fork/join
abstract class ForkJoin {
  protected def work[T]: RDD[T]

  def apply[A, B]: (RDD[A], RDD[B]) = {
    // Write implementation of fork/join here
    (work[A], work[B])
  }
}
then overriding the type-generic work method so that it does what you want, such as calling some other pre-defined method:
val forkJoin = new ForkJoin {
  override protected def work[T]: RDD[T] =
    loadModelRdd[T](sc)
}
val (intRdd, stringRdd) = forkJoin[Int, String]
Check out this for a prototype implementation that compiles and runs without issues.
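For concreteness, here is a minimal sketch of what the fork/join body in apply might look like, using Futures on the global execution context (Spark's RDD is assumed to be on the classpath; the 10-second timeout is carried over from the question and should be tuned for real workloads):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

abstract class ForkJoin {
  protected def work[T]: RDD[T]

  def apply[A, B]: (RDD[A], RDD[B]) = {
    // Start both units of work in parallel...
    val aFuture = Future { work[A] }
    val bFuture = Future { work[B] }
    // ...then join, waiting for both results.
    val res = for {
      a <- aFuture
      b <- bFuture
    } yield (a, b)
    Await.result(res, 10.seconds)
  }
}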

Scala: Why use implicit on function argument?

I have a following function:
def getIntValue(x: Int)(implicit y: Int): Int = { x + y }
I see the above declaration everywhere. I understand what the function is doing. It is a curried function which takes two argument lists. If you omit the second argument, the compiler will look up an implicit Int value in scope and use it instead. So I think it is something very similar to defining a default value for the argument.
scala> implicit val temp = 3
temp: Int = 3

scala> getIntValue(3)
res8: Int = 6
I was wondering what are the benefits of above declaration?
Here's my "pragmatic" answer: you typically use currying as more of a "convention" than anything else meaningful. It comes in really handy when your last parameter happens to be a call-by-name parameter (for example, of type => Boolean):
def transaction(conn: Connection)(codeToExecuteInTransaction: => Boolean) = {
  conn.startTransaction // start transaction
  val booleanResult = codeToExecuteInTransaction // invoke the code block they passed in
  // deal with errors and rollback if necessary, or commit
  // return connection to connection pool
}
What this is saying is "I have a function called transaction, its first parameter is a Connection and its second parameter will be a code-block".
This allows us to use this method like so (using the "I can use curly brace instead of parenthesis rule"):
transaction(myConn) {
  // code to execute in a transaction
  // the code block's last executable statement must be a Boolean, as per the second
  // parameter of the transaction method
}
If you didn't curry that transaction method, it would look pretty unnatural doing this:
transaction(myConn, {
  // code block
})
How about implicit? Yes, it can seem like a very ambiguous construct, but you get used to it after a while, and the nice thing about implicits is that they have scoping rules. For production you might define an implicit that gets the database connection from the PROD database, while in your integration test you define an implicit that supersedes the PROD version and hands out a connection to a DEV database instead.
As an example, how about we add an implicit parameter to the transaction method?
def transaction(codeToExecuteInTransaction: => Boolean)(implicit conn: Connection) = {
  // as before; note that an implicit parameter list must come last in Scala,
  // so the Connection moves to the second parameter list
}
Now, assuming I have an implicit function somewhere in my code base that returns a Connection, like so:
implicit def getConnectionFromPool: Connection = { ... }
I can execute the transaction method like so:
transaction {
  // code to execute in transaction
}
and Scala will translate that to:
transaction {
  // code to execute in transaction
}(getConnectionFromPool)
In summary, implicits are a pretty nice way to avoid making the developer provide a value for a required parameter when that parameter is, 99% of the time, going to be the same everywhere the function is used. For the 1% of the time you need a different Connection, you can pass one explicitly instead of letting Scala figure out which implicit provides the value.
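As a sketch of the scoping point above (all names are hypothetical and the connection-making code is elided):

object ProdDb {
  implicit def connection: Connection = ??? // fetch from the PROD pool
}

object TestDb {
  implicit def connection: Connection = ??? // connect to a DEV database
}

// Production code brings the PROD implicit into scope:
import ProdDb._
transaction { /* work */ true }

// An integration test imports TestDb._ instead, so the compiler
// supplies the DEV connection without any call site changing.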
In your specific example there are no practical benefits. In fact using implicits for this task will only obfuscate your code.
The standard use case of implicits is the Type Class Pattern. I'd say that it is the only use case that is practically useful. In all other cases it's better to have things explicit.
Here is an example of a typeclass:
// A typeclass
trait Show[a] {
  def show(a: a): String
}

// Some data type
case class Artist(name: String)

// An instance of the `Show` typeclass for that data type
implicit val artistShowInstance =
  new Show[Artist] {
    def show(a: Artist) = a.name
  }

// A function that works for any type `a` which has an instance of the `Show` typeclass
def showAListOfShowables[a](list: List[a])(implicit showInstance: Show[a]): String =
  list.view.map(showInstance.show).mkString(", ")

// The following code outputs `Beatles, Michael Jackson, Rolling Stones`
val list = List(Artist("Beatles"), Artist("Michael Jackson"), Artist("Rolling Stones"))
println(showAListOfShowables(list))
This pattern originates in a functional programming language named Haskell and has turned out to be more practical than standard OO practices for writing modular, decoupled software. Its main benefit is that it allows you to extend already existing types with new functionality without changing them.
There are plenty of details left unmentioned, like syntactic sugar and def instances. It is a huge subject and fortunately it has great coverage throughout the web. Just google for "scala type class".
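For instance, the context-bound sugar just mentioned lets such a function be written more compactly (a minimal sketch; showAll is a hypothetical name):

// [a: Show] desugars to an implicit Show[a] parameter,
// which implicitly[Show[a]] retrieves in the body.
def showAll[a: Show](list: List[a]): String =
  list.map(implicitly[Show[a]].show).mkString(", ")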
There are many benefits, outside of your example.
I'll give just one; at the same time, this is also a trick that you can use on certain occasions.
Imagine you create a trait that is a generic container for other values, like a list, a set, a tree or something like that.
trait MyContainer[A] {
  def containedValue: A
}
Now, at some point, you find it useful to iterate over all elements of the contained value.
Of course, this only makes sense if the contained value is of an iterable type.
But because you want your class to be useful for all types, you don't want to restrict A to be of a Seq type, or Traversable, or anything like that.
Basically, you want a method that says: "I can only be called if A is of a Seq type."
And if someone calls it on, say, MyContainer[Int], that should result in a compile error.
That's possible.
What you need is some evidence that A is of a sequence type.
And you can do that with Scala and implicit arguments:
trait MyContainer[A] {
  def containedValue: A

  // reduce needs a binary function, so f is (B, B) => B
  def aggregate[B](f: (B, B) => B)(implicit ev: A => Seq[B]): B =
    ev(containedValue) reduce f
}
So, if you call this method on a MyContainer[Seq[Int]], the compiler will look for an implicit Seq[Int]=>Seq[B].
That's really simple to resolve for the compiler.
Because Predef provides an implicit conversion from any type to itself (conforms, which behaves like the identity function), and it is always in scope.
Its type signature is something like: A=>A
It simply returns whatever argument is passed to it.
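A quick usage sketch (the container instance here is hypothetical):

val ints = new MyContainer[Seq[Int]] {
  def containedValue = Seq(1, 2, 3)
}

// Compiles: the always-in-scope conversion supplies Seq[Int] => Seq[Int].
val total = ints.aggregate[Int](_ + _) // 6

// By contrast, a MyContainer[Int] would fail to compile here,
// because no implicit Int => Seq[Int] is available.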
I don't know what this pattern is called. (Can anyone help out?)
But I think it's a neat trick that comes in handy sometimes.
You can see a good example of that in the Scala library if you look at the method signature of Seq.sum.
In the case of sum, another implicit parameter type is used; in that case, the implicit parameter is evidence that the contained type is numeric, and therefore, a sum can be built out of all contained values.
That's not the only use of implicits, and certainly not the most prominent, but I'd say it's an honorable mention. :-)

Scala reflection on function parameter names

I have a class which takes a function
case class FunctionParser1Arg[T, U](func: (T => U))

def testFunc(name1: String): String = name1

val res = FunctionParser1Arg(testFunc)
I would like to know the type signature information on the function from inside the case class. I want to know both the parameter name and type. I have had success in finding the type using the runtime mirror objects, but not the name. Any suggestions?
Ok, let's say you got the symbol for the instance func points to:
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{currentMirror => m}

val im = m reflect res.func // Instance Mirror
You can get the apply method from its type members:
val apply = newTermName("apply")
val applySymbol = im.symbol.typeSignature member apply
And since we know it's a method, make it a method symbol:
val applyMethod = applySymbol.asMethod
Its parameters can be found through paramss, and since we know there's only one parameter in a single parameter list, we can get the first parameter of the first parameter list:
val param = applyMethod.paramss(0)(0)
Then what you are asking for is:
val name = param.name.decoded // if you want "+" instead of "$plus", for example
val tpe = param.typeSignature // `type` is a reserved word, so pick another name
It's possible that you think that's the wrong answer because you got x$1 instead of name1, but what is passed to the constructor is not the named function testFunc, but, instead, an anonymous function representing that method created through a process called eta expansion. You can't find out the parameter name of the method because you can't pass the method.
If that's what you need, I suggest you use a macro instead. With a macro, you'll be able to see exactly what is being passed at compile time and get the name from it.

When is a scala partial function not a partial function?

While creating a map of String to partial functions I ran into unexpected behavior. When I create a partial function as a map element it works fine. When I assign it to a val, it invokes instead. Trying to invoke via check generates an error. Is this expected? Am I doing something dumb? Comment out the check() to see the invocation. I am using Scala 2.7.7.
def PartialFunctionProblem() = {
  def dream()() = {
    println("~Dream~")
    new Exception().printStackTrace()
  }

  val map = scala.collection.mutable.HashMap[String, () => Unit]()
  map("dream") = dream() // partial function
  map("dream")()         // invokes as expected

  val check = dream()    // unexpected invocation
  check()                // error: check of type Unit does not take parameters
}
For convenience, Scala lets you omit empty parens when calling a method, but it's clever enough to see that the expected type in the first case is ()=>Unit, so it doesn't remove all the parens for you; instead, it converts the method into a function for you.
In the val check case, however, it looks just like a function call result getting assigned to a variable. In fact, all three of these do the exact same thing:
val check = dream
val check = dream()
val check = dream()()
If you want to turn the method into a function, you place _ after the method in place of the argument list(s). Thus,
val check = dream() _
will do what you want.
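Alternatively (a minimal sketch), ascribing the expected type triggers the same conversion the map assignment did:

val check: () => Unit = dream() // eta expansion, nothing is invoked
check()                         // now prints ~Dream~ and the stack trace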
Well, the problem is that you got it all wrong. :-)
Here are some conceptual mistakes:
def dream()() = {
  println("~Dream~")
  new Exception().printStackTrace()
}
This is not a partial function. This is a curried method with two empty parameter lists which returns Unit.
val map = scala.collection.mutable.HashMap[String,()=>Unit]()
The type of the values in this map is not partial function, but function. Specifically, Function0[Unit]. A partial function would have type PartialFunction[T, R].
map("dream") = dream() // partial function
What happens here is that Scala converts the partially applied method into a function. This is not a simple assignment. Scala does the conversion because the type inferencer can guess the correct type.
val check = dream() // unexpected invocation
Here there's no expected type to help the type inferencer. However, empty parameter lists can be omitted, so this is just a method call.