Increment (++) operator in Scala

Is there any reason Scala does not support the ++ operator to increment primitive types by default?
For example, you cannot write:
var i = 0
i++
Thanks

My guess is this was omitted because it would only work for mutable variables, and it would not make sense for immutable values. Perhaps it was decided that the ++ operator doesn't scream assignment, so including it may lead to mistakes with regard to whether or not you are mutating the variable.
I feel that something like this is safe to do (on one line):
i++
but this would be a bad practice (in any language):
var x = i++
You don't want to mix assignment statements and side effects/mutation.
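For comparison, a minimal sketch of the increment Scala does support, where the mutation is spelled out as an explicit assignment rather than hidden inside an operator:
var i = 0
i += 1      // expands to i = i + 1, so the mutation is an explicit assignment
// i++      // does not compile: Int has no ++ method, and nothing would be assigned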

I like Craig's answer, but I think the point has to be more strongly made.
There are no "primitives" -- if Int can do it, then so can a user-made Complex (for example).
Basic usage of ++ would be like this:
var x = 1 // or Complex(1, 0)
x++
How do you implement ++ in class Complex? Assuming that, like Int, the object is immutable, then the ++ method needs to return a new object, but that new object has to be assigned.
It would require a new language feature. For instance, let's say we create an assign keyword. The type signature would need to be changed as well, to indicate that ++ is not returning a Complex, but assigning it to whatever field is holding the present object. In the Scala spirit of not intruding on the programmer's namespace, let's say we do that by prefixing the type with #.
Then it could be like this:
case class Complex(real: Double = 0, imaginary: Double = 0) {
  def ++ : #Complex = {
    assign copy(real = real + 1)
    // instead of return copy(real = real + 1)
  }
}
The next problem is that postfix operators suck with Scala rules. For instance:
def inc(x: Int) = {
  x++
  x
}
Because of Scala rules, that is the same thing as:
def inc(x: Int) = { x ++ x }
Which wasn't the intent. Now, Scala privileges a flowing style: obj method param method param method param .... That blends the traditional C++/Java syntax of object method parameter with the functional programming concept of pipelining an input through multiple functions to get the end result. This style has recently been called "fluent interfaces" as well.
The problem is that, by privileging that style, it cripples postfix operators (and prefix ones, but Scala barely has them anyway). So, in the end, Scala would have to make big changes, and it still wouldn't be able to measure up to the elegance of C/Java's increment and decrement operators anyway -- unless it really departed from the kind of thing it does support.
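A small illustration of that flowing style, where each call's result feeds the next from left to right with no postfix operators involved:
List(1, 2, 3, 4) map (_ * 2) filter (_ > 4) reduce (_ + _)   // evaluates to 14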

In Scala, ++ is a valid method, and no method implies assignment. Only = can do that.
A longer answer is that languages like C++ and Java treat ++ specially, and Scala treats = specially, and in an inconsistent way.
In Scala when you write i += 1 the compiler first looks for a method called += on the Int. It's not there, so next it does its magic on = and tries to compile the line as if it read i = i + 1. If you write i++ then Scala will call the method ++ on i and assign the result to... nothing. Because only = means assignment. You could write i ++= 1 but that kind of defeats the purpose.
The fact that Scala supports method names like += is already controversial and some people think it's operator overloading. They could have added special behavior for ++ but then it would no longer be a valid method name (like =) and it would be one more thing to remember.
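Here is a minimal sketch of the rewrite rule described above; the StringBuilder line only illustrates the case where a real += method exists:
var i = 0
i += 1          // Int has no += method, so the compiler rewrites this as: i = i + 1

val sb = new StringBuilder
sb += 'x'       // StringBuilder does define +=, so this is an ordinary method call, no rewrite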

I think the reasoning in part is that +=1 is only one more character, and ++ is used pretty heavily in the collections code for concatenation. So it keeps the code cleaner.
Also, Scala encourages immutable variables, and ++ is intrinsically a mutating operation. If you require +=, at least you can force all your mutations to go through a common assignment procedure (e.g. def a_=).
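As a small sketch of that last point (the Counter class is my own example, not from the answer): because x.a += 1 desugars to x.a_=(x.a + 1), every mutation is forced through the a_= setter:
class Counter {
  private var value = 0
  def a: Int = value
  def a_=(n: Int): Unit = {   // the single assignment procedure all mutations go through
    require(n >= 0, "counter must stay non-negative")
    value = n
  }
}

val c = new Counter
c.a += 1   // desugars to c.a_=(c.a + 1), so the check in a_= always runs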

The primary reason is that there is not the same need for it in Scala as there is in C. In C you are constantly writing:
for (i = 0; i < 10; i++)
{
    //Do stuff
}
C++ has added higher-level methods for avoiding explicit for loops, but Scala has gone much further, providing foreach, map, flatMap, foldLeft etc. Even if you actually want to operate on a sequence of integers rather than just cycling through a collection of non-integer objects, you can use a Scala Range.
(1 to 5) map (_ * 3) //Vector(3, 6, 9, 12, 15)
(1 to 10 by 3) map (_ + 5)//Vector(6, 9, 12, 15)
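For completeness, a minimal sketch of how the C-style counting loop above is usually written in Scala; a Range supplies the indices, so no increment operator is needed:
for (i <- 0 until 10) {
  // Do stuff, e.g.
  println(i)
}

// or equivalently, as a method call on the Range itself:
(0 until 10).foreach(i => println(i))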
Because the ++ operator is used by the collection library, I feel it's better to avoid its use in non-collection classes. I used to use ++ as a value-returning method in my Util package object, like so:
implicit class RichInt2(n: Int) {
  def isOdd: Boolean = n % 2 == 1
  def isEven: Boolean = n % 2 == 0
  def ++ : Int = n + 1
  def -- : Int = n - 1
}
But I removed it. Most of the times when I have used ++ or + 1 on an integer, I have later found a better way, which doesn't require it.

It is possible if you define your own class which can simulate the desired output; however, it may be a pain if you want to use normal Int methods as well, since you would always have to use *() to get a normal Int back.
import scala.language.postfixOps //otherwise it will throw warning when trying to do num++
/*
* my custom int class which can do ++ and --
*/
class int(value: Int) {
  var mValue = value

  // Post-increment
  def ++(): int = {
    val toReturn = new int(mValue)
    mValue += 1
    return toReturn
  }

  // Post-decrement
  def --(): int = {
    val toReturn = new int(mValue)
    mValue -= 1
    return toReturn
  }

  // a readable toString
  override def toString(): String = {
    return mValue.toString
  }
}

// Pre-increment
def ++(n: int): int = {
  n.mValue += 1
  return n
}

// Pre-decrement
def --(n: int): int = {
  n.mValue -= 1
  return n
}

// Something to get a normal Int back
def *(n: int): Int = {
  return n.mValue
}
Some possible test cases
scala> var num = new int(4)
num: int = 4
scala> num++
res0: int = 4
scala> num
res1: int = 5 // it works although scala always makes new resources
scala> ++(num) // parentheses are required
res2: int = 6
scala> num
res3: int = 6
scala> ++(num)++ // compound expression
res4: int = 7
scala> num
res5: int = 8
scala> *(num) + *(num) // testing operator_*
res6: Int = 16

Of course you can have that in Scala, if you really want:
import scalaz._
import Scalaz._

case class IncLens[S, N](lens: Lens[S, N], num: Numeric[N]) {
  def ++ = lens.mods(num.plus(_, num.one))
}

implicit def incLens[S, N: Numeric](lens: Lens[S, N]) =
  IncLens[S, N](lens, implicitly[Numeric[N]])

val i = Lens[Int, Int](identity, (x, y) => y)

val imperativeProgram = for {
  _ <- i := 0;
  _ <- i++;
  _ <- i++;
  x <- i++
} yield x
def runProgram = imperativeProgram ! 0
And here you go:
scala> runProgram
runProgram: Int = 3

It isn't included because the Scala developers thought it would make the specification more complex while achieving only negligible benefits, and because Scala doesn't have operators at all.
You could write your own one like this:
class PlusPlusInt(i: Int) {
  def ++ = i + 1
}
implicit def int2PlusPlusInt(i: Int) = new PlusPlusInt(i)

val a = 5++
// a is 6
But I'm sure you will get into some trouble with precedence not working as you expect. Additionally, if i++ were added, people would ask for ++i too, which doesn't really fit into Scala's syntax.
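As an aside on why ++i in particular does not fit: prefix notation in Scala is limited to the four method names unary_+, unary_-, unary_! and unary_~, so there is no method you could define to make ++i parse as a prefix call. A small sketch (Flag is my own example class):
class Flag(val on: Boolean) {
  def unary_! : Flag = new Flag(!on)   // one of the four names prefix notation recognizes
  // There is no "unary_++", so the prefix form ++f cannot be expressed as a method at all.
}

val f = new Flag(true)
val g = !f   // calls f.unary_!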

Let's define a var:
var i = 0
++i is already short enough:
{ i += 1; i }
Now i++ can look like this:
i(i += 1)
To use the above syntax, define the following somewhere (e.g. inside a package object) and import it:
class IntPostOp(val i: Int) { def apply(op: Unit) = { op; i } }
implicit def int2IntPostOp(i: Int): IntPostOp = new IntPostOp(i)
Operator chaining is also possible:
i(i += 1)(i %= array.size)(i &= 3)
The above example is similar to this Java (C++?) code:
i=(i=i++ %array.length)&3;
The style could depend, of course.

Related

Scala operator overloading with multiple parameters

In short: I am trying to write something like A <N B for a DSL in Scala, for an integer N and A, B of type T. Is there a nice way to do so?
Longer: I am trying to write a DSL for TGrep2 in Scala. I'm currently interested in writing
A <N B: B is the Nth child of A (the first child is <1).
in a nice way, as close as possible to the original definition, in Scala. Is there a way to overload the < operator so that it can take an N and a B as arguments?
What I tried: I tried two different possibilities which did not make me very happy:
scala> val N = 10
N: Int = 10
scala> case class T(n:String) {def <(i:Int,j:T) = println("huray!")}
defined class T
scala> T("foo").<(N,T("bar"))
huray!
and
scala> case class T(n:String) {def <(i:Int) = new {def apply(j:T) = println("huray!")}}
defined class T
scala> (T("foo")<N)(T("bar"))
warning: there were 1 feature warnings; re-run with -feature for details
huray!
I'd suggest you use something like nth instead of the < symbol, which makes the semantics clear. A nth N is B would make a lot of sense, to me at least. It would translate to something like:
case class T(label: String) {
  def is(j: T) = {
    label equals j.label
  }
}

case class J(i: List[T]) {
  def nth(index: Int): T = {
    i(index)
  }
}
You can easily do:
val t = T("Mice")
val t1 = T("Rats")
val j = J(List(t1,t))
j nth 1 is t //res = true
The problem is that apply doesn't work as a postfix operator, so you can't write it without the parentheses. You could write this:
case class T(n: String) {
  def <(in: (Int, T)) = {
    in match {
      case (i, t) =>
        println(s"${t.n} is the ${i} child of ${n}")
    }
  }
}

implicit class Param(lower: Int) {
  def apply(t: T) = (lower, t)
}
but then,
T("foo") < 10 T("bar")
would still fail, but you could work it out with:
T("foo") < 10 (T("bar"))
There isn't a good way of doing what you want without adding parentheses somewhere.
I think that you might want to go for a combinator parser instead if you really want to stick with this syntax. Or, as @korefn proposed, break compatibility and do it with new operators.
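For what it's worth, a rough sketch of that combinator-parser route, with grammar and names of my own invention (it assumes the scala-parser-combinators library is available); the <N form is parsed from a string instead of being encoded as method calls:
import scala.util.parsing.combinator.RegexParsers

case class NthChild(parent: String, n: Int, child: String)

object TGrepParser extends RegexParsers {
  val ident: Parser[String] = """[A-Za-z_]+""".r
  // "A <N B": B is the Nth child of A
  val nthChild: Parser[NthChild] =
    ident ~ ("<" ~> """\d+""".r) ~ ident ^^ { case a ~ n ~ b => NthChild(a, n.toInt, b) }
  def parseRelation(s: String): ParseResult[NthChild] = parseAll(nthChild, s)
}

// TGrepParser.parseRelation("foo <2 bar")   // Success(NthChild(foo,2,bar), ...)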

Scala: how to use immutable values returned from function

I am new to Scala and am trying to use it in a functional way. Here are my questions:
Why can't I create a new binding for the 'cnt' variable with the function return value using the '<-' operator?
How can I increment an immutable value in a functional way (similar to Haskell's <-)? For the sake of the experiment I don't want to use mutable vars.
import scala.io.Source

object MyProgram {
  def main(args: Array[String]): Unit = {
    if (args.length > 0) {
      val lines = Source.fromFile(args(0)).getLines()
      val cnt = 0
      for (line <- lines) {
        cnt <- readLines(line, cnt)
      }
      Console.err.println("cnt = " + cnt)
    }
  }

  def readLines(line: String, cnt: Int): Int = {
    println(line.length + " " + line)
    val newCnt = cnt + 1
    return (newCnt)
  }
}
As for side effects, I never expected that (line <- lines) would be so devastating! It completely exhausts the lines iterator, so running the following snippet will make size = 0:
val lines = Source.fromFile(args(0)).getLines()
var cnt = 0
for (line <- lines) {
  cnt = readLines(line, cnt)
}
val size = lines.size
Is it a normal Scala practice to have well-hidden side-effects like this?
You could fold on lines like so:
val lines = Source.fromFile(args(0)).getLines()
val cnt = lines.foldLeft(0) { case (count, line) => readLines(line, count) }
Console.err.println("cnt = "+cnt)
Your readLines method has a side effect in its call to println, but using foldLeft guarantees left-to-right processing of the list, so the output should be the same.
Why can't I reassign immutable 'cnt' variable with function return value using '<-' operator?
Why would you? If you have Java experience, <- has a similar meaning to : in for (Item x : someCollection). It is just syntactic sugar for taking the current item from a collection and naming it; it is not a bind operator in general.
Moreover, isn't reassigning an immutable value an oxymoron?
How can increment immutable variable in a functional way (similar to Haskell <-)?
Scala people usually use .zipWithIndex, but this will only work if you're going to use the counter inside the for comprehension:
for((x, i) <- lines.zipWithIndex) { println("the counter value is" + i) }
So I think you need to stick with lines.count, use fold/reduce, or use = to assign a new value to a variable.
<- is not an operator, just syntax used in for expressions. You have to use =. If you want to use <-, it must be within the for-iteration expression. And you cannot increment a val. If you want to modify that variable, make it a var.

Efficient way to fold list in scala, while avoiding allocations and vars

I have a bunch of items in a list, and I need to analyze the content to find out how many of them are "complete". I started out with partition, but then realized that I didn't need the two lists back, so I switched to a fold:
val counts = groupRows.foldLeft((0, 0)) { (pair, row) =>
  if (row.time == 0) (pair._1 + 1, pair._2)
  else (pair._1, pair._2 + 1)
}
but I have a lot of rows to go through for a lot of parallel users, and it is causing a lot of GC activity (assumption on my part...the GC could be from other things, but I suspect this since I understand it will allocate a new tuple on every item folded).
for the time being, I've rewritten this as
var complete = 0
var incomplete = 0
list.foreach(row => if(row.time != 0) complete += 1 else incomplete += 1)
which fixes the GC, but introduces vars.
I was wondering if there was a way of doing this without using vars while also not abusing the GC?
EDIT:
Hard call on the answers I've received. A var implementation seems to be considerably faster on large lists (like by 40%) than even a tail-recursive optimized version that is more functional but should be equivalent.
The first answer from dhg seems to be on-par with the performance of the tail-recursive one, implying that the size pass is super-efficient...in fact, when optimized it runs very slightly faster than the tail-recursive one on my hardware.
The cleanest two-pass solution is probably to just use the built-in count method:
val complete = groupRows.count(_.time == 0)
val counts = (complete, groupRows.size - complete)
But you can do it in one pass if you use partition on an iterator:
val (complete, incomplete) = groupRows.iterator.partition(_.time == 0)
val counts = (complete.size, incomplete.size)
This works because the new returned iterators are linked behind the scenes and calling next on one will cause it to move the original iterator forward until it finds a matching element, but it remembers the non-matching elements for the other iterator so that they don't need to be recomputed.
Example of the one-pass solution:
scala> val groupRows = List(Row(0), Row(1), Row(1), Row(0), Row(0)).view.map{x => println(x); x}
scala> val (complete, incomplete) = groupRows.iterator.partition(_.time == 0)
Row(0)
Row(1)
complete: Iterator[Row] = non-empty iterator
incomplete: Iterator[Row] = non-empty iterator
scala> val counts = (complete.size, incomplete.size)
Row(1)
Row(0)
Row(0)
counts: (Int, Int) = (3,2)
I see you've already accepted an answer, but you rightly mention that that solution will traverse the list twice. The way to do it efficiently is with recursion.
def counts(xs: List[...], complete: Int = 0, incomplete: Int = 0): (Int, Int) =
  xs match {
    case Nil => (complete, incomplete)
    case row :: tail =>
      if (row.time == 0) counts(tail, complete + 1, incomplete)
      else counts(tail, complete, incomplete + 1)
  }
This is effectively just a customized fold, except we use 2 accumulators which are just Ints (primitives) instead of tuples (reference types). It should also be just as efficient as a while-loop with vars - in fact, the bytecode should be identical.
Maybe it's just me, but I prefer using the various specialized folds (.size, .exists, .sum, .product) if they are available. I find it clearer and less error-prone than the heavy-duty power of general folds.
val complete = groupRows.view.filter(_.time==0).size
(complete, groupRows.length - complete)
How about this one? No import tax.
import scala.collection.generic.CanBuildFrom
import scala.collection.Traversable
import scala.collection.mutable.Builder
case class Count(n: Int, total: Int) {
  def not = total - n
}

object Count {
  implicit def cbf[A]: CanBuildFrom[Traversable[A], Boolean, Count] = new CanBuildFrom[Traversable[A], Boolean, Count] {
    def apply(): Builder[Boolean, Count] = new Counter
    def apply(from: Traversable[A]): Builder[Boolean, Count] = apply()
  }
}

class Counter extends Builder[Boolean, Count] {
  var n = 0
  var ttl = 0
  override def +=(b: Boolean) = { if (b) n += 1; ttl += 1; this }
  override def clear() { n = 0; ttl = 0 }
  override def result = Count(n, ttl)
}

object Counting extends App {
  val vs = List(4, 17, 12, 21, 9, 24, 11)
  val res: Count = vs map (_ % 2 == 0)
  Console println s"${vs} have ${res.n} evens out of ${res.total}; ${res.not} were odd."
  val res2: Count = vs collect { case i if i % 2 == 0 => i > 10 }
  Console println s"${vs} have ${res2.n} evens over 10 out of ${res2.total}; ${res2.not} were smaller."
}
OK, inspired by the answers above, but really wanting to only pass over the list once and avoid GC, I decided that, in the face of a lack of direct API support, I would add this to my central library code:
class RichList[T](private val theList: List[T]) {
  def partitionCount(f: T => Boolean): (Int, Int) = {
    var matched = 0
    var unmatched = 0
    theList.foreach(r => { if (f(r)) matched += 1 else unmatched += 1 })
    (matched, unmatched)
  }
}

object RichList {
  implicit def apply[T](list: List[T]): RichList[T] = new RichList(list)
}
Then in my application code (if I've imported the implicit), I can write var-free expressions:
val (complete, incomplete) = groupRows.partitionCount(_.time != 0)
and get what I want: an optimized GC-friendly routine that prevents me from polluting the rest of the program with vars.
However, I then saw Luigi's benchmark, and updated it to:
Use a longer list so that multiple passes on the list were more obvious in the numbers
Use a boolean function in all cases, so that we are comparing things fairly
http://pastebin.com/2XmrnrrB
The var implementation is definitely considerably faster, even though Luigi's routine should be identical (as one would expect with optimized tail recursion). Surprisingly, dhg's dual-pass original is just as fast (slightly faster if compiler optimization is on) as the tail-recursive one. I do not understand why.
It is slightly tidier to use a mutable accumulator pattern, like so, especially if you can re-use your accumulator:
case class Accum(var complete: Int = 0, var incomplete: Int = 0) {
  def inc(compl: Boolean): this.type = {
    if (compl) complete += 1 else incomplete += 1
    this
  }
}

val counts = groupRows.foldLeft(Accum()) { (a, row) => a.inc(row.time == 0) }
If you really want to, you can hide your vars by making them private; if not, you are still a lot more self-contained than the pattern with vars.
You could just calculate it using the difference like so:
def counts(groupRows: List[Row]) = {
  val complete = groupRows.foldLeft(0) { (count, row) =>
    if (row.time == 0) count + 1 else count
  }
  (complete, groupRows.length - complete)
}

How to use scala.util.Sorting.quickSort() with arbitrary types?

I need to sort an array of pairs by second element. How do I pass comparator for my pairs to the quickSort function?
I'm using the following ugly approach now:
type AccResult = (AccUnit, Long) // pair

class Comparator(a: AccResult) extends Ordered[AccResult] {
  def compare(that: AccResult) = lessCompare(a, that)
  def lessCompare(a: AccResult, that: AccResult) =
    if (a._2 == that._2) 0 else if (a._2 < that._2) -1 else 1
}

scala.util.Sorting.quickSort(data)(d => new Comparator(d))
Why is quickSort designed to have an ordered view instead of usual comparator argument?
Scala 2.7 solutions are preferred.
I tend to prefer the non-implicit arguments unless it's being used in more than a few places.
type Pair = (String, Int)
val items: Array[Pair] = Array(("one", 1), ("three", 3), ("two", 2))

quickSort(items)(new Ordering[Pair] {
  def compare(x: Pair, y: Pair) = {
    x._2 compare y._2
  }
})
Edit: After learning about view bounds in another question, I think that this approach might be better:
val items: Array[(String, Int)] = Array(("one", 1), ("three", 3), ("two", 2))

class OrderTupleBySecond[X, Y <% Comparable[Y]] extends Ordering[(X, Y)] {
  def compare(x: (X, Y), y: (X, Y)) = {
    x._2 compareTo y._2
  }
}

util.Sorting.quickSort(items)(new OrderTupleBySecond[String, Int])
In this way, OrderTupleBySecond could be used for any Tuple2 type where the type of the 2nd member of the tuple has a view in scope which would convert it to a Comparable.
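For example (a hypothetical extra usage, assuming the definitions above are in scope), the same class orders tuples whose second member is a Double, since a Double can be viewed as a Comparable[Double]:
val scores: Array[(String, Double)] = Array(("one", 1.5), ("three", 3.5), ("two", 2.5))
util.Sorting.quickSort(scores)(new OrderTupleBySecond[String, Double])
// scores is now ordered by the Double: (one,1.5), (two,2.5), (three,3.5)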
Ok, I'm not sure exactly what you are unhappy about in what you are currently doing, but perhaps all you are looking for is this?
implicit def toComparator(a: AccResult) = new Comparator(a)
scala.util.Sorting.quickSort(data)
If, on the other hand, the problem is that the tuple is Ordered and you want a different ordering, well, that's why it changed on Scala 2.8.
* EDIT *
Ouch! Sorry, I only now realize you said you preferred Scala 2.7 solutions. I have edited this answer to put the solution for 2.7 above. What follows is a 2.8 solution.
Scala 2.8 expects an Ordering, not an Ordered, which is a context bound, not a view bound. You'd write your code in 2.8 like this:
type AccResult = (AccUnit, Long) // pair

implicit object AccResultOrdering extends Ordering[AccResult] {
  def compare(x: AccResult, y: AccResult) = if (x._2 == y._2) 0 else if (x._2 < y._2) -1 else 1
}
Or maybe just:
type AccResult = (AccUnit, Long) // pair
implicit val AccResultOrdering = Ordering by ((_: AccResult)._2)
And use it like:
scala.util.Sorting.quickSort(data)
On the other hand, the usual way to do sort in Scala 2.8 is just to call one of the sorting methods on it, such as:
data.sortBy((_: AccResult)._2)
Have your type extend Ordered, like so:
case class Thing(number: Integer, name: String) extends Ordered[Thing] {
  def compare(that: Thing) = name.compare(that.name)
}
And then pass it to sort, like so:
val array = Array(Thing(4, "Doll"), Thing(2, "Monkey"), Thing(7, "Green"))
scala.util.Sorting.quickSort(array)
Printing the array will give you:
array.foreach{ e => print(e) }
>> Thing(4,Doll) Thing(7,Green) Thing(2,Monkey)

Newbie Scala question about simple math array operations

Newbie Scala Question:
Say I want to do this [Java code] in Scala:
public static double[] abs(double[] r, double[] im) {
    double t[] = new double[r.length];
    for (int i = 0; i < t.length; ++i) {
        t[i] = Math.sqrt(r[i] * r[i] + im[i] * im[i]);
    }
    return t;
}
and also make it generic (since, I have read, Scala handles generic primitives efficiently). Relying only on the core language (no library objects/classes, methods, etc.), how would one do this? Truthfully I don't see how to do it at all, so I guess that's just a pure bonus point question.
I ran into sooo many problems trying to do this simple thing that I have given up on Scala for the moment. Hopefully once I see the Scala way I will have an 'aha' moment.
UPDATE:
Discussing this with others, this is the best answer I have found so far.
def abs[T](r: Iterable[T], im: Iterable[T])(implicit n: Numeric[T]) = {
  import n.mkNumericOps
  r zip(im) map(t => math.sqrt((t._1 * t._1 + t._2 * t._2).toDouble))
}
Doing generic/performant primitives in scala actually involves two related mechanisms which scala uses to avoid boxing/unboxing (e.g. wrapping an int in a java.lang.Integer and vice versa):
@specialized type annotations
Using Manifest with arrays
@specialized is an annotation that tells the Scala compiler to create "primitive" versions of code (akin to C++ templates, so I am told). Check out the type declaration of Tuple2 (which is specialized) compared with List (which isn't). It was added in 2.8 and means that, for example, code like CC[Int].map(f: Int => Int) is executed without ever boxing any ints (assuming CC is specialized, of course!).
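A minimal sketch of what the annotation looks like in practice (the Box class is my own example, not from the answer):
// With @specialized the compiler also emits Int- and Double-specific variants of this class,
// so new Box(42) can hold a raw Int instead of a boxed java.lang.Integer.
class Box[@specialized(Int, Double) T](val value: T) {
  def map[@specialized(Int, Double) U](f: T => U): Box[U] = new Box(f(value))
}

val doubled = new Box(21).map(_ * 2)   // stays unboxed in the Int-specialized variant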
Manifests are a way of doing reified types in scala (which is limited by the JVM's type erasure). This is particularly useful when you want to have a method genericized on some type T and then create an array of T (i.e. T[]) within the method. In Java this is not possible because new T[] is illegal. In scala this is possible using Manifests. In particular, and in this case it allows us to construct a primitive T-array, like double[] or int[]. (This is awesome, in case you were wondering)
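A small sketch of the Manifest-based array creation described here (the zeros helper is my own name; in later Scala versions ClassTag plays the same role):
// The Manifest carries T's runtime class, so new Array[T](n) compiles even though the
// equivalent "new T[n]" is illegal in Java, and it yields a primitive double[] when T = Double.
def zeros[T: Manifest](n: Int): Array[T] = new Array[T](n)

val ds: Array[Double] = zeros[Double](3)   // backed by a real double[] on the JVM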
Boxing is so important from a performance perspective because it creates garbage, unless all of your ints are < 127. It also, obviously, adds a level of indirection in terms of extra process steps/method calls etc. But consider that you probably don't give a hoot unless you are absolutely positively sure that you definitely do (i.e. most code does not need such micro-optimization)
So, back to the question: in order to do this with no boxing/unboxing, you must use Array (List is not specialized yet, and would be more object-hungry anyway, even if it were!). The zipped function on a pair of collections will return a collection of Tuple2s (which will not require boxing, as this is specialized).
In order to do this generically (i.e. across various numeric types) you must require a context bound on your generic parameter that it is Numeric and that a Manifest can be found (required for array creation). So I started along the lines of...
def abs[T: Numeric: Manifest](rs: Array[T], ims: Array[T]): Array[T] = {
  import math._
  val num = implicitly[Numeric[T]]
  (rs, ims).zipped.map { (r, i) => sqrt(num.plus(num.times(r, r), num.times(i, i))) }
  //                               ^^^^ no SQRT function for Numeric
}
...but it doesn't quite work. The reason is that a "generic" Numeric value does not have an operation like sqrt -> so you could only do this at the point of knowing you had a Double. For example:
scala> def almostAbs[T : Manifest : Numeric](rs : Array[T], ims : Array[T]) : Array[T] = {
| import math._
| val num = implicitly[Numeric[T]]
| (rs, ims).zipped.map { (r, i) => num.plus(num.times(r,r), num.times(i,i)) }
| }
almostAbs: [T](rs: Array[T],ims: Array[T])(implicit evidence$1: Manifest[T],implicit evidence$2: Numeric[T])Array[T]
Excellent - now see this purely generic method do some stuff!
scala> val rs = Array(1.2, 3.4, 5.6); val is = Array(6.5, 4.3, 2.1)
rs: Array[Double] = Array(1.2, 3.4, 5.6)
is: Array[Double] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res0: Array[Double] = Array(43.69, 30.049999999999997, 35.769999999999996)
Now we can sqrt the result, because we have a Array[Double]
scala> res0.map(math.sqrt(_))
res1: Array[Double] = Array(6.609841147864296, 5.481788029466298, 5.980802621722272)
And to prove that this would work even with another Numeric type:
scala> import math._
import math._
scala> val rs = Array(BigDecimal(1.2), BigDecimal(3.4), BigDecimal(5.6)); val is = Array(BigDecimal(6.5), BigDecimal(4.3), BigDecimal(2.1))
rs: Array[scala.math.BigDecimal] = Array(1.2, 3.4, 5.6)
is: Array[scala.math.BigDecimal] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res6: Array[scala.math.BigDecimal] = Array(43.69, 30.05, 35.77)
scala> res6.map(d => math.sqrt(d.toDouble))
res7: Array[Double] = Array(6.609841147864296, 5.481788029466299, 5.9808026217222725)
Use zip and map:
scala> val reals = List(1.0, 2.0, 3.0)
reals: List[Double] = List(1.0, 2.0, 3.0)
scala> val imags = List(1.5, 2.5, 3.5)
imags: List[Double] = List(1.5, 2.5, 3.5)
scala> reals zip imags
res0: List[(Double, Double)] = List((1.0,1.5), (2.0,2.5), (3.0,3.5))
scala> (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
res2: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
| (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res3: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
UPDATE
It is better to use zipped because it avoids creating a temporary collection:
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
| (reals, imags).zipped.map {(x, y) => math.sqrt(x*x + y*y)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res7: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
There isn't an easy way in Java to create generic numeric computational code; the libraries aren't there, as you can see from oxbow's answer. Collections also are designed to take arbitrary types, which means that there's an overhead when working with primitives through them. So the fastest code (without careful bounds checking) is either:
def abs(re: Array[Double], im: Array[Double]) = {
  val a = new Array[Double](re.length)
  var i = 0
  while (i < a.length) {
    a(i) = math.sqrt(re(i)*re(i) + im(i)*im(i))
    i += 1
  }
  a
}
or, tail-recursively:
def abs(re: Array[Double], im: Array[Double]) = {
  def recurse(a: Array[Double], i: Int = 0): Array[Double] = {
    if (i < a.length) {
      a(i) = math.sqrt(re(i)*re(i) + im(i)*im(i))
      recurse(a, i+1)
    }
    else a
  }
  recurse(new Array[Double](re.length))
}
So, unfortunately, this code ends up not looking super-nice; the niceness comes once you package it in a handy complex number array library.
If it turns out that you don't actually need highly efficient code, then
def abs(re: Array[Double], im: Array[Double]) = {
  (re, im).zipped.map((i, j) => math.sqrt(i*i + j*j))
}
will do the trick compactly and conceptually clearly (once you understand how zipped works). The penalty in my hands is that this is about 2x slower. (Using List makes it 7x slower than while or tail recursion in my hands; List with zip makes it 20x slower; generics with arrays are 3x slower even without computing the square root.)
(Edit: fixed timings to reflect a more typical use case.)
After Edit:
OK, I have got what I wanted to do running. It will take two Lists of any type of number and return an Array of Doubles.
def abs[A](r: List[A], im: List[A])(implicit numeric: Numeric[A]): Array[Double] = {
  val t = new Array[Double](r.length)
  for (i <- r.indices) {
    t(i) = math.sqrt(numeric.toDouble(r(i)) * numeric.toDouble(r(i)) + numeric.toDouble(im(i)) * numeric.toDouble(im(i)))
  }
  t
}