Left handed equivalent of apply in Scala

It's very easy to define something like a 2-dimensional Matrix class in Scala with an apply method that lets me elegantly access the values inside my Matrix. Quite simply, one would do this:
class Matrix(val n: Int, val m: Int, val default: Double = 0) {
val data: Array[Array[Double]] = Array.fill(n, m)(default)
def apply(x: Int, y: Int): Double = data(x)(y)
}
This allows me to access elements in my matrix like so:
val matrix = new Matrix(3, 3)
println(matrix(2, 2))
What I'm after, however, is the ability to do the opposite, and actually assign values to a matrix using similar notation. Essentially I want the ability to write something like this:
matrix(2, 2) = 5
Is there any way to do this in Scala? In C++ this is doable by overloading the parentheses operator to return a reference rather than a copy (the former defines the setter and the latter the getter), and similarly in Python this is the distinction between the __getitem__ and __setitem__ magic methods (with the slight difference of applying to square brackets instead of parentheses). Does Scala support such behavior, or am I stuck with directly accessing the data member and/or writing a setter function?

Take Array#update as an example:
/** Update the element at given index.
*
* Note the syntax `xs(i) = x` is a shorthand for `xs.update(i, x)`.
*
* @param i the index
* @param x the value to be written at index `i`
*/
def update(i: Int, x: T): Unit
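For instance, with a plain Array (a tiny illustration of that shorthand):
val xs = Array(1, 2, 3)
xs(0) = 10                  // desugars to xs.update(0, 10)
println(xs.mkString(", "))  // 10, 2, 3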
Try implementing an update:
class Matrix(val n: Int, val m: Int, val default: Double = 0) {
...
def update(x: Int, y: Int, value: Double): Unit =
???
}
matrix(2,2) = 5d
EDIT:
You can actually use:
def update(x: Int, y: Int, value: Double): Unit
instead of:
def update(coord: (Int, Int), value: Double): Unit
and get exactly the syntax you desired.
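Putting it together, a minimal sketch of the Matrix class with both apply and update (the body of update here is one plausible implementation that mirrors the getter):
class Matrix(val n: Int, val m: Int, val default: Double = 0) {
  val data: Array[Array[Double]] = Array.fill(n, m)(default)

  // getter: matrix(x, y)
  def apply(x: Int, y: Int): Double = data(x)(y)

  // setter: matrix(x, y) = value
  def update(x: Int, y: Int, value: Double): Unit = data(x)(y) = value
}

val matrix = new Matrix(3, 3)
matrix(2, 2) = 5
println(matrix(2, 2)) // 5.0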

Related

Call Function Through Implicit Function in Scala

In Scala, I have a class called Vector2D and I want to make some implicit conversions for it. Currently, I have this function placed in Vector2D: implicit def fromFloatTuple(tuple: (Float, Float)): Vector2D = new Vector2D(tuple._1, tuple._2)
I can do the following successfully: val x: Vector2D = (1f, 1f). However, I cannot do something like: val x: Float = (1f, 1f).length() (obviously length is defined for Vector2D). Why does this not work? I would expect it to be converted to something like val x: Float = fromFloatTuple((1f, 1f)).length(), but it is not. How do I get this effect in Scala?
Also, now that I think of it, how could I make this function accept other numeric types as well without writing a function for each combination of numeric types (not a problem for a 2D vector, but quite unclean for a 4D vector)?
When the target or source type is Vector2D, the compiler looks in the companion object of Vector2D; in val x: Float = (1f, 1f).length() neither type mentions Vector2D, so it doesn't. Searching every possible type for an implicit conversion would slow compilation down enormously, and the Scala compiler is already slow (though improving).
You need to bring fromFloatTuple into scope by importing it:
import your.package.Vector2D._ // or just fromFloatTuple instead of _
This is exactly how it is supposed to work:
import scala.language.implicitConversions
object Test {
class Vector2D(x: Float, y: Float) { def length(): Float = x + y }
implicit def fromFloatTuple(tuple: (Float, Float)): Vector2D = new Vector2D(tuple._1, tuple._2)
val x: Float = (1f, 1f).length()
For the more generic case you might want to look at scala.math.Numeric.
case class Vector4D[N](a: N, b: N, c: N, d: N)(implicit n: Numeric[N]) {
import n._
def length(): N = a + b + c + d
}
implicit def c2V4D[N: Numeric](tup: (N, N, N, N)): Vector4D[N] = new Vector4D(tup._1, tup._2, tup._3, tup._4)
println(Vector4D(1,2,3,4).length())
println(Vector4D(1,2,3,4.5).length())
println((1,2,3,4).length())
}
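To see the point about the companion object concretely, here is a minimal sketch (assuming the conversion lives in a companion object named Vector2D; the names are illustrative):
import scala.language.implicitConversions

class Vector2D(val x: Float, val y: Float) {
  def length(): Float = math.sqrt(x * x + y * y).toFloat
}

object Vector2D {
  implicit def fromFloatTuple(tuple: (Float, Float)): Vector2D =
    new Vector2D(tuple._1, tuple._2)
}

object Demo extends App {
  // Compiles without an import: the expected type is Vector2D, so the compiler
  // searches the Vector2D companion object for the conversion.
  val v: Vector2D = (1f, 1f)

  // Needs the import: neither (Float, Float) nor Float mentions Vector2D,
  // so its companion object is not searched.
  import Vector2D._
  val len: Float = (1f, 1f).length()

  println(len)
}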

Scala overload function to add currying?

Started learning Scala today, and I was curious if you can overload a function to add currying like:
def add(x: Int, y: Int): Int = x + y
def add(x: Int)(y: Int): Int = x + y
Not only does this code not compile, but I've also heard that overloading in Scala is not a good idea.
Is there a way to overload add such that it's curried without doing partial application, meaning both add(1, 2) and add(1)(2) work?
The problem is that those add functions are indistinguishable after JVM type erasure: during execution they are both (Int, Int)Int. But they are different during compilation, and the Scala compiler can tell which one you are calling.
This means you have to make their argument lists different. To achieve that you can add an implicit argument list with a DummyImplicit argument:
def add(x: Int, y: Int)(implicit dummy: DummyImplicit): Int = x + y
def add(x: Int)(y: Int): Int = x + y
This DummyImplicit is provided by the Scala library, and there is always an implicit value for it in scope. Now the first function after erasure has the type (Int, Int, DummyImplicit)Int, and the second one (Int, Int)Int, so the JVM can distinguish them.
Now you can call both:
add(1, 2)
add(1)(2)
I've got a way you can use both add(1,2) and add(1)(2), but I wouldn't recommend it. It uses a wrapper type to give the two methods different parameter types, and an implicit method to convert arguments to the appropriate type.
case class IntWrapper(value: Int) // Purely to have a different type
object CurryingThing
{
def add(x: IntWrapper)(y: IntWrapper) = x.value + y.value
def add(x: Int, y: Int) = x + y
// The magic happens here. In the event that you have an int, but the function requires an intWrapper (our first function definition), Scala will invoke this implicit method to convert it
implicit def toWrapper(x: Int) = IntWrapper(x)
def main(args: Array[String]) = {
// Now you can use both add(1,2) and add(1)(2) and the appropriate function is called
println(add(1,2)) //Compiles, prints 3
println(add(1)(2)) // Compiles, prints 3
()
}
}
For overloading, the alternatives must either:
have a different number of parameters
or have different parameter types (after erasure)
In your example both definitions of add erase to the same signature, so this is not valid overloading and you get a compilation error.
You could use Kolmar's approach (the DummyImplicit argument) to call both add(1, 2) and add(1)(2), or you can use a default parameter to achieve the same thing:
def add(a: Int, b: Int, c: Any = DummyImplicit) = a + b // c is a dummy default parameter; its value is never used
def add(a: Int)(b: Int) = a + b
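As with the DummyImplicit version, both call shapes then resolve; a quick check:
add(1, 2)  // picks the overload with the default third parameter
add(1)(2)  // picks the curried overload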
About:
I've heard that overloading in Scala is not a good idea.
See Why "avoid method overloading"? for more on that.

Trait with Abstract Type in Method Argument

I am new to Scala and am building tools for statistical estimation. Consider the following: a trait probabilityDistribution is defined, which guarantees that classes inheriting from it will be able to perform certain functions, such as computing a density. Two examples of probability distributions might be a binomial and a beta distribution. The supports of these two distributions are Int and Double, respectively.
Set Up
trait probabilityDistribution extends Serializable {
type T
def density(x: T): Double
}
case class binomial(n: Int, p: Double) extends probabilityDistribution {
type T = Int
def density(x: Int): Double = x*p
}
case class beta(alpha: Double, beta: Double) extends probabilityDistribution {
type T = Double
def density(x: Double): Double = x*alpha*beta
}
Note that the actual mathematical implementations of the density methods are simplified above. Now, consider a Mixture Model, in which we have several features or variables which come from different distributions. We may choose to create a list of probabilityDistributions to represent our features.
val p = List(binomial(5, .5), beta(.5,.5))
Suppose that we are now interested in supplying a vector of hypothetical data values, and wish to query the density functions for each respective probability distribution.
val v = List[Any](2, 0.75)
The Problem
Of course, we use a zip with map. However, this doesn't work:
p zip v map { case (x,y) => x.density(y) }
found   : Any
required: x.T
Caveat: Choice of Container
A valid question is to wonder why I have chosen List[Any] as the container to hold data values, rather than List[Double], or perhaps List[T <: Double]. Consider the case when some of our probability distributions have support over vectors or even matrices (e.g. multivariate normal and inverse Wishart).
An idea to address the caveat might be to instead house our input values in a container that is more representative of our input type. e.g. something like
class likelihoodSupport
val v = List[likelihoodSupport](...)
where Int, Double, and Array[Double] and even a tuple (Array[Double], Array[Array[Double]]) all inherit from likelihoodSupport. As some of these classes are final, however, this is not possible.
One (Crummy) Fix
Note that this can be handled by using pattern matching and a polymorphic method within each subclass, but as Odersky might say this has a code smell:
trait probabilityDistribution extends Serializable {
type T
def density[T](x: T): Double
}
case class binomial(n: Int, p: Double) extends probabilityDistribution {
type T = Int
def density[U](x: U): Double = x match {case arg: Int => arg * p }
}
case class beta(alpha: Double, beta: Double) extends probabilityDistribution {
type T = Double
def density[U](x: U): Double = x match {case arg: Double => arg * alpha * beta}
}
We can now run
p zip v map { case (x,y) => x.density(y) }
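The crumminess shows up at runtime: a mis-paired value still compiles and only fails when the match falls through. For example (a hypothetical reordering of the same values):
val vSwapped = List[Any](0.75, 2)
p zip vSwapped map { case (x, y) => x.density(y) } // throws scala.MatchError at runtime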
Plea: I know what I'm trying to do should be very easily accomplished in such a beautiful and powerful language, but I can't figure out how! Your help is much appreciated.
Note: I am not interested in using additional packages/imports, as I feel this problem should be trivially solved in base Scala.
You can't do it given the separate p and v lists (at least not without casts, or without writing your own HList library). This should be obvious: if you change the order of elements in one of these lists, the types won't change (unlike with HList), but the distributions will now be paired with values of the wrong type!
The simplest approach is to add a cast:
p zip v map { case (x,y) => x.density(y.asInstanceOf[x.T]) }
Note that this may be a no-op at runtime and lead to a ClassCastException inside the density call instead, thanks to JVM type erasure.
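To make that failure mode concrete (a hypothetical mis-pairing of the same lists):
val pSwapped = List(beta(.5, .5), binomial(5, .5))
// The cast to x.T erases to a cast to Object and always succeeds; the
// ClassCastException only surfaces when beta.density tries to unbox the Int 2 as a Double.
pSwapped zip v map { case (x, y) => x.density(y.asInstanceOf[x.T]) }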
If you want a safer alternative to the cast, something like this should work (see http://docs.scala-lang.org/overviews/reflection/typetags-manifests.html for more information on ClassTags and related types):
// note that generics do buy you some convenience in this case:
// abstract class probabilityDistribution[T](implicit val tag: ClassTag[T]) extends Serializable
// will mean you don't need to set tag explicitly in subtypes
trait probabilityDistribution extends Serializable {
type T
implicit val tag: ClassTag[T]
def density(x: T): Double
}
case class binomial(n: Int, p: Double) extends probabilityDistribution {
type T = Int
val tag = classTag[Int]
def density(x: Int): Double = x*p
}
p zip v map { case (x, y) =>
implicit val tag: ClassTag[x.T] = x.tag
y match {
case y: x.T => ...
case _ => ...
}
}
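Filled in, a runnable sketch of this approach could look like the following (the handling of a type mismatch is an assumption; here it simply yields Double.NaN):
import scala.reflect.{ClassTag, classTag}

trait probabilityDistribution extends Serializable {
  type T
  implicit val tag: ClassTag[T]
  def density(x: T): Double
}

case class binomial(n: Int, p: Double) extends probabilityDistribution {
  type T = Int
  val tag = classTag[Int]
  def density(x: Int): Double = x * p
}

case class beta(alpha: Double, beta: Double) extends probabilityDistribution {
  type T = Double
  val tag = classTag[Double]
  def density(x: Double): Double = x * alpha * beta
}

val p = List(binomial(5, .5), beta(.5, .5))
val v = List[Any](2, 0.75)

val densities: List[Double] = p zip v map { case (x, y) =>
  implicit val tag: ClassTag[x.T] = x.tag
  y match {
    case y: x.T => x.density(y) // runtime-checked via the ClassTag
    case _      => Double.NaN   // assumption: what to do on a type mismatch
  }
}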
Or you can combine distributions and values (or data structures containing values, functions returning values, etc.):
// alternately DistribWithValue(d: probabilityDistribution)(x: d.T)
case class DistribWithValue[A](d: probabilityDistribution { type T = A }, x: A) {
def density = d.density(x)
}
val pv: List[DistribWithValue[_]] = List(DistribWithValue(binomial(5, .5), 2), DistribWithValue(beta(.5,.5), 0.75))
// if you want p and v on their own
val p = pv.map(_.d)
val v = pv.map(_.x)
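With this encoding the densities can then be computed without any casts:
val densities: List[Double] = pv.map(_.density)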
Of course, if you want to use a probabilityDistribution as a method argument, as the question title says, it's simple, for example:
def density(d: probabilityDistribution)(xs: List[d.T]) = xs.map(d.density _)
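For instance (a hypothetical call, reusing the case classes from the setup; the val keeps the argument a stable identifier, so the dependent type d.T resolves to Int):
val b = binomial(5, .5)
val ds: List[Double] = density(b)(List(1, 2, 3))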
The problems only arise specifically when the user wants to make multiple density queries with different x values that are not intrinsically related to the probability distribution itself, and the compiler can't prove that those values have the correct type.

How to dynamically bind a method reference to a trait?

Given
def add(x: Int, y: Int): Int = x + y
val addAsMethodReference: (Int, Int) => Int = add _
trait BinaryOperator {
def execute(x: Int, y: Int): Int
}
val addAsBinaryOperator: BinaryOperator = addAsMethodReference...?
How do I bind addAsMethodReference to BinaryOperator without implementing BinaryOperator by hand?
Java 8 SAM conversion would just work: I could use the method reference anywhere the BinaryOperator trait is expected in Java 8.
Ideally, I want to write something like:
var addAsBinaryOperator: BinaryOperator = addAsMethodReference.asNewInstanceOf[BinaryOperator]
The reason I want this asNewInstanceOf method is that it would work for any method signature. I don't care how many parameters are being passed. If I had to implement this by hand, I would have to carefully match each x and y, which is error-prone at a larger scale.
The specification of left.asNewInstanceOf[right] would be: if the right side has more than one abstract method, it fails at compilation; if the left side is not a function type matching the signature of the single abstract method on the right side, it also fails at compilation. The right side doesn't need to be a trait; it could be an abstract class with a single abstract method.
Well, you could make the implicit conversion
implicit def myConversion(f: (Int, Int) ⇒ Int): BinaryOperator = new BinaryOperator {
def execute(x: Int, y: Int): Int = f(x, y)
}
and if it's in scope, you can just do
val addAsBinaryOperator: BinaryOperator = addAsMethodReference
for any binary function of integers returning an integer. Although maybe this also classifies as "implementing by hand". I can't see a way in which the compiler magically realizes that you want to interpret a function as an instance of a user-created trait with some particular structure.
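As an aside, on Scala 2.12 or later the compiler does perform SAM conversion (the same mechanism the question mentions for Java 8) when a function literal is written where a trait with a single abstract method is expected; a minimal sketch, assuming 2.12+:
trait BinaryOperator {
  def execute(x: Int, y: Int): Int
}

def add(x: Int, y: Int): Int = x + y

// The function literal is converted to the SAM trait here.
val addAsBinaryOperator: BinaryOperator = (x: Int, y: Int) => add(x, y)

// Note: an already-constructed function value such as addAsMethodReference
// is not converted automatically; only literals at the use site are.
println(addAsBinaryOperator.execute(1, 2)) // 3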
NEW ANSWER:
This does what you want. It dynamically creates a BinaryOperator object whose execute method is bound to addAsMethodReference:
def add(x: Int, y: Int): Int = x + y
val addAsMethodReference: (Int, Int) => Int = add _
trait BinaryOperator {
def execute(x: Int, y: Int): Int
}
val addAsBinaryOperator: BinaryOperator =
new BinaryOperator{ def execute(x: Int, y: Int): Int = addAsMethodReference(x,y) }
OLD ANSWER
Is this what you want?
implicit class EnhancedBinaryOperator(val self: BinaryOperator) extends AnyVal {
def addAsMethodReference(a: Int, b: Int) = a + b
}
val o: BinaryOperator = ??? // anything here
o.addAsMethodReference(1,2) // addAsMethodReference is now on BinaryOperator

What are named and default arguments?

I heard that Scala contains a feature called named and default arguments but I don't know what such parameters do or how to use them.
Can someone explain their usage?
These are some special kinds of function parameters and arguments in Scala:
Named arguments:
Named arguments allow you to pass arguments to a function in a different order. For example:
def speed(distance: Float, time: Float): Float = distance / time
And then it can be used like this:
speed(distance = 100, time = 10)
or
speed(time = 10, distance = 100)
Default arguments:
Scala lets you specify default values for function parameters. For example:
def printTime(out: java.io.PrintStream = Console.out) =
out.println("time = "+ System.currentTimeMillis())
Then you can call printTime without giving any output stream like this:
printTime()
Repeated arguments:
Scala allows you to indicate that the last parameter of a function may be repeated. For example:
def echo(args: String*) =
for (arg <- args)
println(arg)
Then you can use it like this:
echo()
echo("one")
echo("hello", "world!")
Default arguments solve the problem other programming languages normally solve with method overloading. When there is a method
def addNumbers(a: Int, b: Int, c: Int, d: Int) = a+b+c+d
that takes multiple parameters, it can be useful to set some default values in overloaded methods to provide an API that is easier to use if one doesn't want to fill in all parameters:
def addNumbers(a: Int, b: Int, c: Int) = addNumbers(a, b, c, 0)
def addNumbers(a: Int, b: Int) = addNumbers(a, b, 0, 0)
With default arguments it is no longer necessary to overload such a method:
def addNumbers(a: Int, b: Int, c: Int = 0, d: Int = 0) = a+b+c+d
The compiler automatically supplies the default values for any arguments that are not specified:
scala> addNumbers(1, 2, 3)
res2: Int = 6
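Named and default arguments combine naturally: to skip a parameter in the middle, pass the later one by name (a hypothetical REPL session):
scala> addNumbers(1, 2, d = 5)
res3: Int = 8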
A useful place for default arguments is in constructors. It is easier to write
class A(i: Int, s: String = "")
than
class A(i: Int, s: String) {
def this(i: Int) = this(i, "")
}
Named arguments, on the other hand, can improve the readability of a method call:
def compute(xs: List[Int], executeInParallel: Boolean) = ???
compute(xs, executeInParallel = true) is easier to read than just compute(xs, true).
One can always specify the name of a parameter regardless of its position. This means compute(executeInParallel = true, xs = xs) is the same as compute(xs, true). When the ordering of the arguments is changed, the names give the compiler the hint it needs about which argument belongs to which parameter.
A use case where both named and default arguments come together is the copy method of case classes, which is automatically generated by the compiler:
scala> case class Person(name: String, age: Int)
defined class Person
scala> val p = Person("Ruben", 43)
p: Person = Person(Ruben,43)
scala> val oneYearOlder = p.copy(age = p.age+1)
oneYearOlder: Person = Person(Ruben,44)
It may be important to mention that named arguments only work for methods defined in Scala. Parameters of methods defined in Java can't be referred to by name.
Furthermore, named arguments don't work on function literals:
scala> val f = (i: Int) => i
f: Int => Int = <function1>
scala> f(i = 1)
<console>:9: error: not found: value i
f(i = 1)
^
For further information on this feature, take a look at docs.scala-lang.org.