Scala: How to multiply a List of Lists by value

I'm currently studying Scala and working with lists of lists. I want to multiply the array by an element (for example, 1).
However, I get the following error:
identifier expected but integer constant found
Current code:
def multiply[A](listOfLists:List[List[A]]):List[List[A]] =
if (listOfLists == Nil) Nil
else -1 * listOfLists.head :: multiply(listOfLists.tail)
val tt = multiply[List[3,4,5,6];List[4,5,6,7,8]]
print(tt);;

There are a few issues with your code:
In general, you can't perform arithmetic on unconstrained generic types; somehow, you have to communicate any supported arithmetic operations.
Multiplying by 1 will typically have no effect anyway.
As already pointed out, you don't declare List instances using square brackets (they're used for declaring generic type arguments).
The arguments you're passing to multiply are two separate lists (using an invalid semicolon separator instead of a comma), not a list of lists.
In the if clause the return value is Nil, which matches the stated return type of List[List[A]]. However the else clause is trying to perform a calculation which is multiplying List instances (not the contents of the lists) by an Int. Even if this made sense, the resulting type is clearly not a List[List[A]]. (This also makes it difficult for me to understand exactly what it is you're trying to accomplish.)
Here's a version of your code that corrects the above, assuming that you're trying to multiply each member of the inner lists by a particular factor:
// Multiply every element in a list of lists by the specified factor, returning the
// resulting list of lists.
//
// Should work for any primitive numeric type (Int, Double, etc.). For custom value types,
// you will need to declare an `implicit val` of type Numeric[YourCustomType] with an
// appropriate implementation of the `Numeric[T]` trait. If in scope, the appropriate
// num value will be identified by the compiler and passed to the function automatically.
def multiply[A](ll: List[List[A]], factor: A)(implicit num: Numeric[A]): List[List[A]] = {
  // The Numeric[T] trait defines a times method that we use to perform the multiplication.
  ll.map(_.map(num.times(_, factor)))
}
// Sample use: Multiply every value in the list by 5.
val tt = multiply(List(List(3, 4, 5, 6), List(4, 5, 6, 7, 8)), 5)
println(tt)
This should result in the following output:
List(List(15, 20, 25, 30), List(20, 25, 30, 35, 40))
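The comment above mentions declaring an implicit Numeric instance for custom value types. Here's a minimal sketch of what that might look like; the Money type and its implementation are hypothetical, not part of the question (and on Scala 2.13+ you would also need to implement parseString):
// Hypothetical value type: an amount of money stored in cents.
case class Money(cents: Long)
// A minimal Numeric[Money] so that multiply can be called with Money values.
implicit val moneyNumeric: Numeric[Money] = new Numeric[Money] {
  def plus(x: Money, y: Money): Money  = Money(x.cents + y.cents)
  def minus(x: Money, y: Money): Money = Money(x.cents - y.cents)
  def times(x: Money, y: Money): Money = Money(x.cents * y.cents)
  def negate(x: Money): Money          = Money(-x.cents)
  def fromInt(x: Int): Money           = Money(x.toLong)
  def toInt(x: Money): Int             = x.cents.toInt
  def toLong(x: Money): Long           = x.cents
  def toFloat(x: Money): Float         = x.cents.toFloat
  def toDouble(x: Money): Double       = x.cents.toDouble
  def compare(x: Money, y: Money): Int = x.cents.compare(y.cents)
}
// With the implicit in scope, the same multiply function works unchanged:
multiply(List(List(Money(3), Money(4))), Money(5))
// List(List(Money(15), Money(20)))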
However, it might be that you're just trying to multiply together all of the values in the lists. This is actually a little more straightforward (note the different return type):
def multiply[A](ll: List[List[A]])(implicit num: Numeric[A]): A = ll.flatten.product
// Sample use: Multiply all values in all lists together.
val tt = multiply(List(List(3, 4, 5, 6), List(4, 5, 6, 7, 8)))
println(tt)
This should result in the following output:
2419200
I'd recommend you read a good book on Scala. There's a lot of pretty sophisticated stuff going on in these examples, and it would take too long to explain it all here. A good start would be Programming in Scala, Third Edition by Odersky, Spoon & Venners. That will cover List[A] operations such as map, flatten and product as well as implicit function arguments and implicit val declarations.

To make numeric operations available to type A, you can use context bound to associate A with scala.math.Numeric which provides methods such as times and fromInt to carry out the necessary multiplication in this use case:
def multiply[A: Numeric](listOfLists: List[List[A]]): List[List[A]] = {
  val num = implicitly[Numeric[A]]
  import num._
  if (listOfLists == Nil) Nil
  else listOfLists.head.map(times(_, fromInt(-1))) :: multiply(listOfLists.tail)
}
multiply( List(List(3, 4, 5, 6), List(4, 5, 6, 7, 8)) )
// res1: List[List[Int]] = List(List(-3, -4, -5, -6), List(-4, -5, -6, -7, -8))
multiply( List(List(3.0, 4.0), List(5.0, 6.0, 7.0)) )
// res2: List[List[Double]] = List(List(-3.0, -4.0), List(-5.0, -6.0, -7.0))
For more details about context bound, here's a relevant SO link.
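As a further sketch (the name negateAll is mine, not from the question): importing Numeric.Implicits._ brings infix operators, including unary minus, into scope for any A with a Numeric instance, so the same negation can be written with map instead of explicit times/fromInt calls:
import Numeric.Implicits._

def negateAll[A: Numeric](listOfLists: List[List[A]]): List[List[A]] =
  listOfLists.map(_.map(x => -x)) // -x delegates to Numeric[A].negate

negateAll(List(List(3, 4, 5, 6), List(4, 5, 6, 7, 8)))
// List(List(-3, -4, -5, -6), List(-4, -5, -6, -7, -8))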

Related

Scala: Detect and extract something more specific from a collection of `Any` values

(Motivation: The Saddle library -- the only Scala library I have found that provides a Frame type, which is critical for data science -- leads me to this puzzle. See last section for details.)
The problem
Imagine a collection c of type C[Any]. Suppose that some of the elements of c are of a type T which is strictly more specific than Any.
I would like a way to find all the elements of type T, and to then create an object d of type C[T], rather than C[Any].
Some code demonstrating the problem
scala> val c = List(0,1,"2","3")
<console>:11: warning: a type was inferred to be `Any`;
this may indicate a programming error.
val c = List(0,1,"2","3")
^
c: List[Any] = List(0, 1, 2, 3)
scala> :t c(0)
Any // I wish this were more specific
// Scala can convert some elements to Int.
scala> val c0 = c(0).asInstanceOf[Int]
c0: Int = 0
// But how would I detect which?
scala> val d = c.slice(0,2)
d: List[Any] = List(0, 1) // I wish this were List[Int]
Motivation: Why the Saddle library leads me to this problem
Saddle lets you manipulate "Frames" (tables). Frames can have columns of various types. Some systems (e.g. Pandas) assign a separate type to each column. Every Frame in Saddle, however, has exactly three type parameters: The type of row labels, the type of column labels, and the type of cells.
Real world data is typically a mix of strings and numbers. The way such tables are represented in Saddle is as a Frame with a cell type of Any. I'd like to downcast (upcast? polymorphism is hard) a column to something more specific than a Series of Any values. I'd also like to be able to test a column, to be sure that the cast is appropriate.
I posted an issue on Saddle's Github site about the puzzle.
You could do something like this
scala> val c = List(0,1,"2","3")
c: List[Any] = List(0, 1, 2, 3)
scala> c.collect { case x: Int => x; case s: String => s.toInt }
res0: List[Int] = List(0, 1, 2, 3)
If you just want the Int types you can simply drop the second case.
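For example, with the same c as above:
scala> c.collect { case x: Int => x }
res1: List[Int] = List(0, 1)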

Void Function Equivalent in scala?

val list = List(4, 6, 7, 8, 9, 13, 14)
list.foreach(num ⇒ println(num * 4))
list should be()
I have tried to figure out what it should be, but I don't quite get the answer. I think it has to be empty, or like a void function in Java, but I do not know the equivalent in Scala.
The void equivalent in Scala is Unit, and foreach does return Unit.
def foreach(f: (A) ⇒ Unit): Unit
So the proper (and useless) test will be this:
list should be(List(4, 6, 7, 8, 9, 13, 14))
Also take into consideration that even if the function returned something, you are not capturing it, so the list will remain unchanged.
If you want to retrieve the result of a function, you should assign it to another value, like this (using map to show it):
val result = list.map(num ⇒ num * 4)
result should be(List(16, 24, 28, 32, 36, 52, 56))
I don't understand what you mean by a void function, but there is a way to represent an empty list, like this:
list should be(List.empty[Int])
void in Java means there is no return value; Unit in Scala serves the same purpose.
When you do list.foreach, there is no return value (or rather, the return value is Unit) and the list is not changed; instead, the function is applied to each member and its result is discarded.
I imagine you are instead looking for flatMap. list.flatMap(f) applies f to each element, assumes the value returned from f is a collection, and then flattens the result. For example, if list is List(0, 2, 4) and f returns a list with 1 repeated the element's value times, then f would return List(), List(1, 1) and List(1, 1, 1, 1), and the overall result would be List(1, 1, 1, 1, 1, 1).
In this case, just return List() from f to have a total return value of List().
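As a small illustration of the flatMap behaviour described above (the helper f is purely for demonstration):
val list = List(0, 2, 4)
// f returns the value 1 repeated as many times as the element's value.
val f = (n: Int) => List.fill(n)(1)
list.flatMap(f)                // List(1, 1, 1, 1, 1, 1)
list.flatMap(_ => List[Int]()) // List() -- returning an empty list from f empties the result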

Why set shows odd order

I have a function that expects two arguments of type Set:
def union[A](set1: Set[A], set2: Set[A]): Set[A] = {
  set1.foldLeft(set2) { (set, elt) => set + elt }
}
I apply the function as follows:
union(Set(3,4,5,6), Set(34,56,23))
and I get:
res2: Set[Int] = Set(5, 56, 6, 34, 3, 23, 4)
but I expect:
Set(3,4,5,6,34,56,23)
Why do I receive such an unordered result?
Set is an unordered data type: the order is determined by the implementation and is generally not guaranteed (much less guaranteed to be insertion order).
To get the behaviour you seem to want (distinct, insertion-ordered), I would suggest using Lists and the distinct method:
(List(3,4,5,6) ++ List(34,56,23)).distinct
res0: List[Int] = List(3, 4, 5, 6, 34, 56, 23)
Sets do not preserve an order. If you would like what you expect as your end result, you can try this; note that it returns an ArrayBuffer (which you can then convert to what you need):
union(Set(3,4,5,6), Set(34,56,23)).toSeq.sorted
As we are dealing with a simple Ordering (Int), we don't need to specify the conditions for it to be ordered, as that is done implicitly. Since your union def takes in any type, an Ordering will need to be specified based on the type you pass in. See this for guidance on creating an Ordering.
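As a sketch of that last point (sortedUnion is a made-up name, not from the question), a generic version that returns its elements in sorted order could use an Ordering context bound:
// Requires an Ordering[A] in implicit scope; the compiler supplies one for Int, Double, String, etc.
def sortedUnion[A: Ordering](set1: Set[A], set2: Set[A]): Seq[A] =
  (set1 ++ set2).toSeq.sorted

sortedUnion(Set(3, 4, 5, 6), Set(34, 56, 23))
// elements in ascending order: 3, 4, 5, 6, 23, 34, 56 (the concrete Seq type may vary)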

The easiest way to write {1, 2, 4, 8, 16 } in Scala

I was advertising Scala to a friend (who uses Java most of the time) and he asked me a challenge: what's the way to write an array {1, 2, 4, 8, 16} in Scala.
I don't know functional programming that well, but I really like Scala. This is an iterative array formed by (n*(n-1)), but how do I keep track of the previous step? Is there a way to do it easily in Scala, or do I have to write more than one line of code to achieve this?
Array.iterate(1, 5)(2 * _)
or
Array.iterate(1, 5)(n => 2 * n)
Elaborating on this, as asked for in a comment. I don't know exactly what you want me to elaborate on, but I hope you will find what you need.
This is the function iterate(start, len)(f) on the Array companion object (scaladoc). That would be a static method in Java.
The point is to fill an array of len elements, starting from the value start and computing each next element by passing the previous one to the function f.
A basic implementation would be
import scala.reflect.ClassTag

def iterate[A: ClassTag](start: A, len: Int)(f: A => A): Array[A] = {
  val result = new Array[A](len)
  if (len > 0) {
    var current = start
    result(0) = current
    for (i <- 1 until len) {
      current = f(current)
      result(i) = current
    }
  }
  result
}
(The actual implementation, which is not much different, can be found here. It differs mostly because the same code is used for different data structures, e.g. List.iterate.)
Beside that, the implementation is very straightforward. The syntax may need some explanation:
def iterate[A](...): Array[A] makes it a generic method, usable for any type A. That would be public <A> A[] iterate(...) in Java.
ClassTag is just a technicality: in Scala, as in Java, you normally cannot create an array of a generic type (Java's new E[]). The : ClassTag asks the compiler to add some magic, which is very similar to declaring at the method, and passing at the call site, a Class<A> clazz parameter in Java, which can then be used to create the array by reflection. If you use e.g. List.iterate rather than Array.iterate, it is not needed.
Maybe more surprising are the two parameter lists: one with start and len, and then, in separate parentheses, the one with f. Scala allows a method to have several parameter lists. Here the reason is the peculiar way Scala does type inference: looking at the first parameter list, it determines what A is, based on the type of start. Only afterwards does it look at the second list, by which point it already knows what type A is. Otherwise, it would need to be told, so if there had been only one parameter list, def iterate[A: ClassTag](start: A, len: Int, f: A => A),
then the call should be either
Array.iterate(1, 5, (n: Int) => 2 * n)
Array.iterate[Int](1, 5, n => 2 * n)
Array.iterate(1, 5, 2 * (_: Int))
Array.iterate[Int](1, 5, 2 * _)
making Int explicit one way or another. So it is common in Scala to put function arguments in a separate argument list; the type might be much longer to write than just Int.
A => A is just syntactic sugar for type Function1[A,A]. Obviously a functional language has functions as (first class) values, and a typed functional language has types for functions.
In the call, iterate(1, 5)(n => 2 * n), n => 2 * n is the value of the function. A more complete declaration would be {n: Int => 2 * n}, but one may dispense with Int for the reason stated above. Scala syntax is rather flexible; one may also use braces instead of parentheses here, so it could be iterate(1, 5){n => 2 * n}. The curly braces allow a full block with several statements, which is not needed here.
As for immutability, Array is inherently mutable; there is no way to put a value into an array except by changing the array at some point. My implementation (and the one in the library) also uses a mutable var (current) and a side-effecting for loop, which is not strictly necessary; a (tail-)recursive implementation would be only a little longer to write and just as efficient (a sketch follows below). But a mutable local does not hurt much, and we are already dealing with a mutable array anyway.
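For completeness, here is a sketch of that recursive variant (iterateRec is my own name; the array itself is still mutated, but the var and the for loop are replaced by a tail-recursive helper):
import scala.annotation.tailrec
import scala.reflect.ClassTag

def iterateRec[A: ClassTag](start: A, len: Int)(f: A => A): Array[A] = {
  val result = new Array[A](len)

  @tailrec
  def loop(i: Int, current: A): Unit =
    if (i < len) {
      result(i) = current
      loop(i + 1, f(current))
    }

  loop(0, start)
  result
}

iterateRec(1, 5)(2 * _) // Array(1, 2, 4, 8, 16)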
There's always more than one way to do it in Scala:
scala> (0 until 5).map(1<<_).toArray
res48: Array[Int] = Array(1, 2, 4, 8, 16)
or
scala> (for (i <- 0 to 4) yield 1<<i).toArray
res49: Array[Int] = Array(1, 2, 4, 8, 16)
or even
scala> List.fill(4)(1).scanLeft(1)(2*_+0*_).toArray
res61: Array[Int] = Array(1, 2, 4, 8, 16)
The other answers are fine if you happen to know in advance how many entries will be in the resulting list. But if you want to take all of the entries up to some limit, you should create an Iterator, use takeWhile to get the prefix you want, and create an array from that, like so:
scala> Iterator.iterate(1)(2*_).takeWhile(_<=16).toArray
res21: Array[Int] = Array(1, 2, 4, 8, 16)
It all boils down to whether what you really want is more correctly stated as
the first 5 powers of 2 starting at 1, or
the powers of 2 from 1 to 16
For non-trivial functions you almost always want to specify the end condition and let the program figure out how many entries there are. Of course your example was simple, and in fact the real easiest way to create that simple array is just to write it out literally:
scala> Array(1,2,4,8,16)
res22: Array[Int] = Array(1, 2, 4, 8, 16)
But presumably you were asking for a general technique you could use for arbitrarily complex problems. For that, Iterator and takeWhile are generally the tools you need.
You don't have to keep track of the previous step. Also, each element is not formed by n * (n - 1). You probably meant f(n) = f(n - 1) * 2.
Anyway, to answer your question, here's how you do it:
(0 until 5).map(math.pow(2, _).toInt).toArray

Should Scala's map() behave differently when mapping to the same type?

In the Scala Collections framework, I think there are some behaviors that are counterintuitive when using map().
We can distinguish two kinds of transformations on (immutable) collections: those whose implementation calls newBuilder to recreate the resulting collection, and those that go through an implicit CanBuildFrom to obtain the builder.
The first category contains all transformations where the type of the contained elements does not change. They are, for example, filter, partition, drop, take, span, etc. These transformations are free to call newBuilder and to recreate the same collection type as the one they are called on, no matter how specific: filtering a List[Int] can always return a List[Int]; filtering a BitSet (or the RNA example structure described in this article on the architecture of the collections framework) can always return another BitSet (or RNA). Let's call them the filtering transformations.
The second category of transformations needs CanBuildFroms to be more flexible, as the type of the contained elements may change, and as a result the collection type itself may not be reusable: a BitSet cannot contain Strings; an RNA contains only Bases. Examples of such transformations are map, flatMap, collect, scanLeft, ++, etc. Let's call them the mapping transformations.
Now here's the main issue to discuss. No matter what the static type of the collection is, all filtering transformations will return the same collection type, while the collection type returned by a mapping operation can vary depending on the static type.
scala> import collection.immutable.TreeSet
import collection.immutable.TreeSet
scala> val treeset = TreeSet(1,2,3,4,5) // static type == dynamic type
treeset: scala.collection.immutable.TreeSet[Int] = TreeSet(1, 2, 3, 4, 5)
scala> val set: Set[Int] = TreeSet(1,2,3,4,5) // static type != dynamic type
set: Set[Int] = TreeSet(1, 2, 3, 4, 5)
scala> treeset.filter(_ % 2 == 0)
res0: scala.collection.immutable.TreeSet[Int] = TreeSet(2, 4) // fine, a TreeSet again
scala> set.filter(_ % 2 == 0)
res1: scala.collection.immutable.Set[Int] = TreeSet(2, 4) // fine
scala> treeset.map(_ + 1)
res2: scala.collection.immutable.SortedSet[Int] = TreeSet(2, 3, 4, 5, 6) // still fine
scala> set.map(_ + 1)
res3: scala.collection.immutable.Set[Int] = Set(4, 5, 6, 2, 3) // uh?!
Now, I understand why this works like this. It is explained there and there. In short: the implicit CanBuildFrom is inserted based on the static type, and, depending on the implementation of its def apply(from: Coll) method, may or may not be able to recreate the same collection type.
Now my only point is, when we know that we are using a mapping operation yielding a collection with the same element type (which the compiler can statically determine), we could mimic the way the filtering transformations work and use the collection's native builder. We can reuse BitSet when mapping to Ints, create a new TreeSet with the same ordering, etc.
Then we would avoid cases where
for (i <- set) {
val x = i + 1
println(x)
}
does not print the incremented elements of the TreeSet in the same order as
for (i <- set; x = i + 1)
println(x)
So:
Do you think this would be a good idea to change the behavior of the mapping transformations as described?
What are the inevitable caveats I have grossly overlooked?
How could it be implemented?
I was thinking about something like an implicit sameTypeEvidence: A =:= B parameter, maybe with a default value of null (or rather an implicit canReuseCalleeBuilderEvidence: B <:< A = null), which could be used at runtime to give more information to the CanBuildFrom, which in turn could be used to determine the type of builder to return.
I looked again at it, and I think your problem doesn't arise from a particular deficiency of Scala collections, but rather from a missing builder for TreeSet, because the following does work as intended:
val list = List(1,2,3,4,5)
val seq1: Seq[Int] = list
seq1.map( _ + 1 ) // yields List
val vector = Vector(1,2,3,4,5)
val seq2: Seq[Int] = vector
seq2.map( _ + 1 ) // yields Vector
So the reason is that TreeSet is missing a specialised companion object/builder:
seq1.companion.newBuilder[Int] // ListBuffer
seq2.companion.newBuilder[Int] // VectorBuilder
treeset.companion.newBuilder[Int] // Set (oops!)
So my guess is that if you make proper provision for such a companion for your RNA class, you may find that both map and filter work as you wish...?