Scala Datatype for numeric real range - scala

Is there some idiomatic Scala type to limit a floating-point value to a given range defined by an upper and a lower bound?
Concretely, I want a float type that is only allowed to take values between 0.0 and 1.0.
More concretely, I am about to write a function that takes an Int and another function that maps this Int to the range between 0.0 and 1.0; in pseudo-Scala:
def foo(x: Int, f: (Int => {0.0,...,1.0})) {
  // ....
}
I already searched the boards but found nothing appropriate. Some implicit magic or a custom typedef would also be fine for me.

I wouldn't know how to do it statically, except with dependent types (example), which Scala doesn't have. If you only dealt with constants it should be possible to use macros or a compiler plug-in that performs the necessary checks, but if you have arbitrary float-typed expressions it is very likely that you have to resort to runtime checks.
Here is an approach. Define a class that performs a runtime check to ensure that the float value is in the required range:
abstract class AbstractRangedFloat(lb: Float, ub: Float) {
  require(lb <= value && value <= ub, s"Requires $lb <= $value <= $ub to hold")

  def value: Float
}
You could use it as follows:
case class NormalisedFloat(value: Float)
  extends AbstractRangedFloat(0.0f, 1.0f)
NormalisedFloat(0.99f)
NormalisedFloat(-0.1f) // Exception
Or as:
case class RangedFloat(lb: Float, ub: Float)(val value: Float)
  extends AbstractRangedFloat(lb, ub)
val RF = RangedFloat(-0.1f, 0.1f) _
RF(0.0f)
RF(0.2f) // Exception
It would be nice if one could use value classes to gain some performance, but the call to require in the constructor (currently) prohibits that.
EDIT: addressing comments by @paradigmatic
Here is an intuitive argument why types depending on natural numbers can be encoded in a type system that does not (fully) support dependent types, but ranged floats probably cannot: The natural numbers are an enumerable set, which makes it possible to encode each element as path-dependent types using Peano numerals. The real numbers, however, are not enumerable any more, and it is thus no longer possible to systematically create types corresponding to each element of the reals.
Now, computer floats and reals are ultimately finite sets, but still way too large to be enumerated reasonably efficiently in a type system. The set of computer natural numbers is of course also very large and thus poses a problem for arithmetic over Peano numerals encoded as types, see the last paragraph of this article. However, I claim that it is often sufficient to work with the first n (for a rather small n) natural numbers, as, for example, evidenced by HLists. Making the corresponding claim for floats is less convincing - would it be better to encode 10,000 floats between 0.0 and 1.0, or rather 10,000 between 0.0 and 100.0?
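The Peano encoding mentioned above can be sketched in a few lines of Scala; all names here (Nat, Succ, ToInt) are illustrative, not from any library:

```scala
// Type-level Peano naturals: each number is its own type.
sealed trait Nat
sealed trait Zero extends Nat
sealed trait Succ[N <: Nat] extends Nat

// A runtime witness that recovers the Int a Nat type encodes.
case class ToInt[N <: Nat](value: Int)

implicit val zeroToInt: ToInt[Zero] = ToInt(0)
implicit def succToInt[N <: Nat](implicit n: ToInt[N]): ToInt[Succ[N]] =
  ToInt(n.value + 1)

def natToInt[N <: Nat](implicit t: ToInt[N]): Int = t.value

type Two = Succ[Succ[Zero]]
```

Nothing comparable works for the reals: there is no systematic way to spell out a type for each possible Float value.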

Here is another approach using an implicit class:
object ImplicitMyFloatClassContainer {
  implicit class MyFloat(val f: Float) {
    val checksEnabled = true // must be initialized before check(f) runs below
    check(f)

    override def toString: String = {
      // The "*" is just to show that this method actually gets called
      f.toString() + "*"
    }

    @inline
    def check(f: Float) {
      if (checksEnabled) {
        print(s"Checking $f")
        assert(0.0 <= f && f <= 1.0, "Out of range")
        println(" OK")
      }
    }

    @inline
    def add(f2: Float): MyFloat = {
      check(f2)
      val result = f + f2
      check(result)
      result
    }

    @inline
    def +(f2: Float): MyFloat = add(f2)
  }
}
object MyFloatDemo {
  def main(args: Array[String]) {
    import ImplicitMyFloatClassContainer._

    println("= Checked =")
    val a: MyFloat = 0.3f
    val b = a + 0.4f
    println(s"Result 1: $b")

    val c = 0.3f add 0.5f
    println("Result 2: " + c)

    println("= Unchecked =")
    val x = 0.3f + 0.8f
    println(x)

    val f = 0.5f
    val r = f + 0.3f
    println(r)

    println("= Check applied =")
    try {
      println(0.3f add 0.9f)
    } catch {
      // assert throws AssertionError, not IllegalArgumentException
      case e: AssertionError => println("Failed as expected")
    }
  }
}
It requires a hint for the compiler to use the implicit class, either by typing the summands explicitly or by choosing a method which is not provided by Scala's Float.
This way at least the checks are centralized, so you can turn them off if performance is an issue. As mhs pointed out, if this class is converted to an implicit value class, the checks must be removed from the constructor.
I have added @inline annotations, but I'm not sure whether this is helpful/necessary with implicit classes.
Finally, I have had no success unimporting Scala's Float "+" with
import scala.{Float => RealFloat}
import scala.Predef.{float2Float => _}
import scala.Predef.{Float2float => _}
possibly there is another way to achieve this in order to push the compiler to use the implicit class

You can use value classes, as pointed out by mhs:
case class Prob private (val x: Double) extends AnyVal {
  def *(that: Prob) = Prob(this.x * that.x)
  def opposite = Prob(1 - x)
}

object Prob {
  def make(x: Double) =
    if (x >= 0 && x <= 1)
      Prob(x)
    else
      throw new RuntimeException("X must be between 0 and 1")
}
They must be created using the factory method in the companion object, which will check that the range is correct:
scala> val x = Prob.make(0.5)
x: Prob = Prob(0.5)
scala> val y = Prob.make(1.1)
java.lang.RuntimeException: X must be between 0 and 1
However, operations that can never produce a number outside the range, such as * or opposite, do not require a validity check.
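For instance (repeating the Prob class so the snippet is self-contained; extends AnyVal is omitted here only so the sketch also runs in a script or REPL, where local value classes are not allowed):

```scala
case class Prob private (x: Double) {
  def *(that: Prob) = Prob(this.x * that.x) // [0,1] is closed under *, no recheck needed
  def opposite = Prob(1 - x)                // likewise for 1 - x
}

object Prob {
  def make(x: Double) =
    if (x >= 0 && x <= 1) Prob(x)
    else throw new RuntimeException("X must be between 0 and 1")
}

// Checked only at construction; the multiplication itself never rechecks.
val quarter = Prob.make(0.5) * Prob.make(0.5)
```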

Related

how to test object methods (exercise)

As part of an exercise I am writing an API to generate random numbers.
This is the code I have, and I would like to test the nonNegativeInt function.
How can I do that?
(The full solution is here: https://github.com/fpinscala/fpinscala/blob/master/answers/src/main/scala/fpinscala/state/State.scala)
import java.util.Random

object chapter6 {
  println("Welcome to the Scala worksheet") //> Welcome to the Scala worksheet

  trait RNG {
    def nextInt: (Int, RNG) // Should generate a random `Int`. We'll later define other functions in terms of `nextInt`.
  }

  object RNG {
    // NB - this was called SimpleRNG in the book text
    case class Simple(seed: Long) extends RNG {
      def nextInt: (Int, RNG) = {
        val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL // `&` is bitwise AND. We use the current seed to generate a new seed.
        val nextRNG = Simple(newSeed) // The next state, which is an `RNG` instance created from the new seed.
        val n = (newSeed >>> 16).toInt // `>>>` is right binary shift with zero fill. The value `n` is our new pseudo-random integer.
        (n, nextRNG) // The return value is a tuple containing both a pseudo-random integer and the next `RNG` state.
      }
    }

    // We need to be quite careful not to skew the generator.
    // Since `Int.MinValue` is 1 smaller than `-(Int.MaxValue)`,
    // it suffices to increment the negative numbers by 1 and make them positive.
    // This maps Int.MinValue to Int.MaxValue and -1 to 0.
    def nonNegativeInt(rng: RNG): (Int, RNG) = {
      val (i, r) = rng.nextInt
      (if (i < 0) -(i + 1) else i, r)
    }
  }
}
If you want to test only the nonNegativeInt method, you could provide a mock implementation of RNG that returns the values you want to test when nextInt is called.
For example, there are two possible branches in nonNegativeInt, so you should provide an RNG instance that yields a negative number and another one that yields a positive number.
class MyRNG(val num: Int) extends RNG {
  self: RNG =>

  def nextInt: (Int, RNG) = (num, this)
}
With this RNG mock you could test your nonNegativeInt method, creating a MyRNG with the desired value you want to test against.
For this particular case, you can also omit the self reference:
class MyRNG(val num: Int) extends RNG {
  def nextInt: (Int, RNG) = (num, this)
}

How to use scala.util.Sorting.quickSort() with arbitrary types?

I need to sort an array of pairs by second element. How do I pass comparator for my pairs to the quickSort function?
I'm using the following ugly approach now:
type AccResult = (AccUnit, Long) // pair

class Comparator(a: AccResult) extends Ordered[AccResult] {
  def compare(that: AccResult) = lessCompare(a, that)

  def lessCompare(a: AccResult, that: AccResult) =
    if (a._2 == that._2) 0 else if (a._2 < that._2) -1 else 1
}

scala.util.Sorting.quickSort(data)(d => new Comparator(d))
Why is quickSort designed to take an Ordered view instead of a usual comparator argument?
Scala 2.7 solutions are preferred.
I tend to prefer non-implicit arguments unless it's being used in more than a few places.
type Pair = (String, Int)

val items: Array[Pair] = Array(("one", 1), ("three", 3), ("two", 2))

quickSort(items)(new Ordering[Pair] {
  def compare(x: Pair, y: Pair) = {
    x._2 compare y._2
  }
})
Edit: After learning about view bounds in another question, I think that this approach might be better:
val items: Array[(String, Int)] = Array(("one", 1), ("three", 3), ("two", 2))

class OrderTupleBySecond[X, Y <% Comparable[Y]] extends Ordering[(X, Y)] {
  def compare(x: (X, Y), y: (X, Y)) = {
    x._2 compareTo y._2
  }
}

util.Sorting.quickSort(items)(new OrderTupleBySecond[String, Int])
In this way, OrderTupleBySecond could be used for any Tuple2 type where the type of the 2nd member of the tuple has a view in scope which would convert it to a Comparable.
Ok, I'm not sure exactly what you are unhappy about in your current approach, but perhaps all you are looking for is this?
implicit def toComparator(a: AccResult) = new Comparator(a)
scala.util.Sorting.quickSort(data)
If, on the other hand, the problem is that the tuple is Ordered and you want a different ordering, well, that's why it changed on Scala 2.8.
* EDIT *
Ouch! Sorry, I only now realized you said you preferred Scala 2.7 solutions. I'll edit this answer soon to put the solution for 2.7 above. What follows is a 2.8 solution.
Scala 2.8 expects an Ordering, not an Ordered, which is a context bound, not a view bound. You'd write your code in 2.8 like this:
type AccResult = (AccUnit, Long) // pair

implicit object AccResultOrdering extends Ordering[AccResult] {
  def compare(x: AccResult, y: AccResult) =
    if (x._2 == y._2) 0 else if (x._2 < y._2) -1 else 1
}
Or maybe just:
type AccResult = (AccUnit, Long) // pair
implicit val AccResultOrdering = Ordering by ((_: AccResult)._2)
And use it like:
scala.util.Sorting.quickSort(data)
On the other hand, the usual way to do sort in Scala 2.8 is just to call one of the sorting methods on it, such as:
data.sortBy((_: AccResult)._2)
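For example, with some stand-in data for the (AccUnit, Long) pairs:

```scala
// Hypothetical data; the String plays the role of AccUnit.
val data = Array(("a", 3L), ("b", 1L), ("c", 2L))

// Sort by the second element of each pair.
val sorted = data.sortBy(_._2)
```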
Have your type extend Ordered, like so:
case class Thing(number: Integer, name: String) extends Ordered[Thing] {
  def compare(that: Thing) = name.compare(that.name)
}
And then pass it to sort, like so:
val array = Array(Thing(4, "Doll"), Thing(2, "Monkey"), Thing(7, "Green"))
scala.util.Sorting.quickSort(array)
Printing the array will give you:
array.foreach{ e => print(e) }
>> Thing(4,Doll) Thing(7,Green) Thing(2,Monkey)

Increment (++) operator in Scala

Is there any reason for Scala not to support the ++ operator for incrementing primitive types by default?
For example, you can not write:
var i=0
i++
Thanks
My guess is this was omitted because it would only work for mutable variables, and it would not make sense for immutable values. Perhaps it was decided that the ++ operator doesn't scream assignment, so including it may lead to mistakes with regard to whether or not you are mutating the variable.
I feel that something like this is safe to do (on one line):
i++
but this would be a bad practice (in any language):
var x = i++
You don't want to mix assignment statements and side effects/mutation.
I like Craig's answer, but I think the point has to be more strongly made.
There are no "primitives" -- if Int can do it, then so can a user-made Complex (for example).
Basic usage of ++ would be like this:
var x = 1 // or Complex(1, 0)
x++
How do you implement ++ in class Complex? Assuming that, like Int, the object is immutable, then the ++ method needs to return a new object, but that new object has to be assigned.
It would require a new language feature. For instance, let's say we create an assign keyword. The type signature would need to be changed as well, to indicate that ++ is not returning a Complex, but assigning it to whatever field is holding the present object. In the Scala spirit of not intruding on the programmer's namespace, let's say we do that by prefixing the type with #.
Then it could be like this:
case class Complex(real: Double = 0, imaginary: Double = 0) {
  def ++ : #Complex = {
    assign copy(real = real + 1)
    // instead of return copy(real = real + 1)
  }
}
The next problem is that postfix operators suck with Scala rules. For instance:
def inc(x: Int) = {
  x++
  x
}
Because of Scala rules, that is the same thing as:
def inc(x: Int) = { x ++ x }
Which wasn't the intent. Now, Scala privileges a flowing style: obj method param method param method param .... That mixes well C++/Java traditional syntax of object method parameter with functional programming concept of pipelining an input through multiple functions to get the end result. This style has been recently called "fluent interfaces" as well.
The problem is that, by privileging that style, it cripples postfix operators (and prefix ones, but Scala barely has them anyway). So, in the end, Scala would have to make big changes, and it still wouldn't measure up to the elegance of C/Java's increment and decrement operators anyway -- unless it really departed from the kind of thing it does support.
In Scala, ++ is a valid method, and no method implies assignment. Only = can do that.
A longer answer is that languages like C++ and Java treat ++ specially, and Scala treats = specially, and in an inconsistent way.
In Scala, when you write i += 1 the compiler first looks for a method called += on the Int. It's not there, so next it does its magic on = and tries to compile the line as if it read i = i + 1. If you write i++ then Scala will call the method ++ on i and assign the result to... nothing. Because only = means assignment. You could write i ++= 1 but that kind of defeats the purpose.
The fact that Scala supports method names like += is already controversial and some people think it's operator overloading. They could have added special behavior for ++ but then it would no longer be a valid method name (like =) and it would be one more thing to remember.
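To see the point concretely, here is a made-up class where ++ is just a value-returning method; nothing is assigned:

```scala
class Tally(val n: Int) {
  def ++ : Tally = new Tally(n + 1) // returns a new value, mutates nothing
}

val t = new Tally(0)
val t2 = t.++ // t is untouched; the result must be assigned explicitly
```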
I think the reasoning in part is that +=1 is only one more character, and ++ is used pretty heavily in the collections code for concatenation. So it keeps the code cleaner.
Also, Scala encourages immutable variables, and ++ is intrinsically a mutating operation. If you require +=, at least you can force all your mutations to go through a common assignment procedure (e.g. def a_=).
The primary reason is that the need isn't there in Scala, as it is in C. In C you are constantly writing:
for (i = 0; i < 10; i++)
{
    //Do stuff
}
C++ has added higher-level methods for avoiding explicit for loops, but Scala has gone much further, providing foreach, map, flatMap, foldLeft, etc. Even if you actually want to operate on a sequence of integers rather than just cycling through a collection of non-integer objects, you can use a Scala Range.
(1 to 5) map (_ * 3) //Vector(3, 6, 9, 12, 15)
(1 to 10 by 3) map (_ + 5)//Vector(6, 9, 12, 15)
Because the ++ operator is used by the collection library, I feel it's better to avoid its use in non-collection classes. I used to have ++ as a value-returning method in my util package object, like so:
implicit class RichInt2(n: Int) {
  def isOdd: Boolean = n % 2 == 1
  def isEven: Boolean = n % 2 == 0
  def ++ : Int = n + 1
  def -- : Int = n - 1
}
But I removed it. Most of the times when I have used ++ or + 1 on an integer, I have later found a better way, which doesn't require it.
It is possible if you define your own class that simulates the desired behavior; however, it may be a pain if you also want to use normal Int methods, since you would always have to unwrap with *().
import scala.language.postfixOps // otherwise it will throw a warning when trying to do num++

/*
 * my custom int class which can do ++ and --
 */
class int(value: Int) {
  var mValue = value

  // Post-increment
  def ++(): int = {
    val toReturn = new int(mValue)
    mValue += 1
    toReturn
  }

  // Post-decrement
  def --(): int = {
    val toReturn = new int(mValue)
    mValue -= 1
    toReturn
  }

  // a readable toString
  override def toString(): String = mValue.toString
}

// Pre-increment
def ++(n: int): int = {
  n.mValue += 1
  n
}

// Pre-decrement
def --(n: int): int = {
  n.mValue -= 1
  n
}

// Something to get a normal Int back
def *(n: int): Int = n.mValue
Some possible test cases
scala> var num = new int(4)
num: int = 4

scala> num++
res0: int = 4

scala> num
res1: int = 5 // it works, although Scala always creates new objects

scala> ++(num) // parentheses are required
res2: int = 6

scala> num
res3: int = 6

scala> ++(num)++ // complex function
res4: int = 7

scala> num
res5: int = 8

scala> *(num) + *(num) // testing operator *
res6: Int = 16
Of course you can have that in Scala, if you really want:
import scalaz._
import Scalaz._
case class IncLens[S, N](lens: Lens[S, N], num: Numeric[N]) {
  def ++ = lens.mods(num.plus(_, num.one))
}

implicit def incLens[S, N: Numeric](lens: Lens[S, N]) =
  IncLens[S, N](lens, implicitly[Numeric[N]])

val i = Lens[Int, Int](identity, (x, y) => y)

val imperativeProgram = for {
  _ <- i := 0;
  _ <- i++;
  _ <- i++;
  x <- i++
} yield x

def runProgram = imperativeProgram ! 0
And here you go:
scala> runProgram
runProgram: Int = 3
It isn't included because the Scala developers thought it would make the specification more complex while achieving only negligible benefits, and because Scala doesn't have operators at all.
You could write your own one like this:
class PlusPlusInt(i: Int) {
  def ++ = i + 1
}

implicit def int2PlusPlusInt(i: Int) = new PlusPlusInt(i)

val a = 5++
// a is 6
But I'm sure you will run into trouble with precedence not working as you expect. Additionally, if i++ were added, people would ask for ++i too, which doesn't really fit Scala's syntax.
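The conversion above can be exercised like this (dot notation is used because a bare postfix 5++ needs scala.language.postfixOps in newer Scala versions):

```scala
import scala.language.implicitConversions

class PlusPlusInt(i: Int) {
  def ++ : Int = i + 1
}

implicit def int2PlusPlusInt(i: Int): PlusPlusInt = new PlusPlusInt(i)

val a = (5).++ // the implicit conversion kicks in, since Int has no ++ method
```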
Let's define a var:
var i = 0
++i is already short enough:
{i+=1;i}
Now i++ can look like this:
i(i+=1)
To use the above syntax, define the following somewhere inside a package object and then import it:
class IntPostOp(val i: Int) { def apply(op: Unit) = { op; i } }
implicit def int2IntPostOp(i: Int): IntPostOp = new IntPostOp(i)
Chaining operators is also possible:
i(i+=1)(i%=array.size)(i&=3)
The above example is similar to this Java (C++?) code:
i=(i=i++ %array.length)&3;
Whether that is good style is debatable, of course.
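A quick check of the trick, with assertions in place of the REPL:

```scala
import scala.language.implicitConversions

class IntPostOp(val i: Int) { def apply(op: Unit) = { op; i } }
implicit def int2IntPostOp(i: Int): IntPostOp = new IntPostOp(i)

var i = 0
// The conversion captures the old value of i before the argument increments it,
// giving post-increment semantics.
val old = i(i += 1)
```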

Newbie Scala question about simple math array operations

Newbie Scala Question:
Say I want to do this [Java code] in Scala:
public static double[] abs(double[] r, double[] im) {
    double[] t = new double[r.length];
    for (int i = 0; i < t.length; ++i) {
        t[i] = Math.sqrt(r[i] * r[i] + im[i] * im[i]);
    }
    return t;
}
and also make it generic (since I have read that Scala can handle generic primitives efficiently). Relying only on the core language (no library objects/classes, methods, etc.), how would one do this? Truthfully, I don't see how to do it at all, so I guess that's just a pure bonus-point question.
I ran into sooo many problems trying to do this simple thing that I have given up on Scala for the moment. Hopefully once I see the Scala way I will have an 'aha' moment.
UPDATE:
Discussing this with others, this is the best answer I have found so far.
def abs[T](r: Iterable[T], im: Iterable[T])(implicit n: Numeric[T]) = {
  import n.mkNumericOps
  r zip im map (t => math.sqrt((t._1 * t._1 + t._2 * t._2).toDouble))
}
Doing generic/performant primitives in Scala actually involves two related mechanisms which Scala uses to avoid boxing/unboxing (e.g. wrapping an int in a java.lang.Integer and vice versa):
@specialized type annotations
Using Manifest with arrays
@specialized is an annotation that tells the Scala compiler to create "primitive" versions of code (akin to C++ templates, so I am told). Check out the type declaration of Tuple2 (which is specialized) compared with List (which isn't). It was added in 2.8 and means that, for example, code like CC[Int].map(f: Int => Int) is executed without ever boxing any ints (assuming CC is specialized, of course!).
Manifests are a way of doing reified types in scala (which is limited by the JVM's type erasure). This is particularly useful when you want to have a method genericized on some type T and then create an array of T (i.e. T[]) within the method. In Java this is not possible because new T[] is illegal. In scala this is possible using Manifests. In particular, and in this case it allows us to construct a primitive T-array, like double[] or int[]. (This is awesome, in case you were wondering)
Boxing is so important from a performance perspective because it creates garbage, unless all of your ints fall in the small range cached by the JVM (roughly -128 to 127). It also, obviously, adds a level of indirection in terms of extra process steps/method calls etc. But consider that you probably don't give a hoot unless you are absolutely positively sure that you definitely do (i.e. most code does not need such micro-optimization).
So, back to the question: in order to do this with no boxing/unboxing, you must use Array (List is not specialized yet, and would be more object-hungry anyway, even if it were!). The zipped function on a pair of collections will return a collection of Tuple2s (which will not require boxing, as this is specialized).
In order to do this generically (i.e. across various numeric types) you must require a context bound on your generic parameter that it is Numeric and that a Manifest can be found (required for array creation). So I started along the lines of...
def abs[T : Numeric : Manifest](rs: Array[T], ims: Array[T]): Array[T] = {
  import math._
  val num = implicitly[Numeric[T]]
  (rs, ims).zipped.map { (r, i) => sqrt(num.plus(num.times(r, r), num.times(i, i))) }
  //                               ^^^^ no SQRT function for Numeric
}
...but it doesn't quite work. The reason is that a "generic" Numeric value has no operation like sqrt, so you can only take the square root at the point where you know you have a Double. For example:
scala> def almostAbs[T : Manifest : Numeric](rs : Array[T], ims : Array[T]) : Array[T] = {
| import math._
| val num = implicitly[Numeric[T]]
| (rs, ims).zipped.map { (r, i) => num.plus(num.times(r,r), num.times(i,i)) }
| }
almostAbs: [T](rs: Array[T],ims: Array[T])(implicit evidence$1: Manifest[T],implicit evidence$2: Numeric[T])Array[T]
Excellent - now see this purely generic method do some stuff!
scala> val rs = Array(1.2, 3.4, 5.6); val is = Array(6.5, 4.3, 2.1)
rs: Array[Double] = Array(1.2, 3.4, 5.6)
is: Array[Double] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res0: Array[Double] = Array(43.69, 30.049999999999997, 35.769999999999996)
Now we can sqrt the result, because we have an Array[Double]:
scala> res0.map(math.sqrt(_))
res1: Array[Double] = Array(6.609841147864296, 5.481788029466298, 5.980802621722272)
And to prove that this would work even with another Numeric type:
scala> import math._
import math._
scala> val rs = Array(BigDecimal(1.2), BigDecimal(3.4), BigDecimal(5.6)); val is = Array(BigDecimal(6.5), BigDecimal(4.3), BigDecimal(2.1))
rs: Array[scala.math.BigDecimal] = Array(1.2, 3.4, 5.6)
is: Array[scala.math.BigDecimal] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res6: Array[scala.math.BigDecimal] = Array(43.69, 30.05, 35.77)
scala> res6.map(d => math.sqrt(d.toDouble))
res7: Array[Double] = Array(6.609841147864296, 5.481788029466299, 5.9808026217222725)
Use zip and map:
scala> val reals = List(1.0, 2.0, 3.0)
reals: List[Double] = List(1.0, 2.0, 3.0)
scala> val imags = List(1.5, 2.5, 3.5)
imags: List[Double] = List(1.5, 2.5, 3.5)
scala> reals zip imags
res0: List[(Double, Double)] = List((1.0,1.5), (2.0,2.5), (3.0,3.5))
scala> (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
res2: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
| (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res3: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
UPDATE
It is better to use zipped because it avoids creating a temporary collection:
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
| (reals, imags).zipped.map {(x, y) => math.sqrt(x*x + y*y)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res7: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
There isn't an easy way in Java to create generic numeric computational code; the libraries aren't there, as you can see from oxbow's answer. Collections are also designed to take arbitrary types, which means there's an overhead when using them with primitives. So the fastest code (without careful bounds checking) is either:
def abs(re: Array[Double], im: Array[Double]) = {
  val a = new Array[Double](re.length)
  var i = 0
  while (i < a.length) {
    a(i) = math.sqrt(re(i) * re(i) + im(i) * im(i))
    i += 1
  }
  a
}
or, tail-recursively:
def abs(re: Array[Double], im: Array[Double]) = {
  def recurse(a: Array[Double], i: Int = 0): Array[Double] = {
    if (i < a.length) {
      a(i) = math.sqrt(re(i) * re(i) + im(i) * im(i))
      recurse(a, i + 1)
    }
    else a
  }
  recurse(new Array[Double](re.length))
}
So, unfortunately, this code ends up not looking super-nice; the niceness comes once you package it in a handy complex number array library.
If it turns out that you don't actually need highly efficient code, then
def abs(re: Array[Double], im: Array[Double]) = {
  (re, im).zipped.map((i, j) => math.sqrt(i * i + j * j))
}
will do the trick compactly and conceptually clearly (once you understand how zipped works). The penalty in my hands is that this is about 2x slower. (Using List makes it 7x slower than while or tail recursion in my hands; List with zip makes it 20x slower; generics with arrays are 3x slower even without computing the square root.)
(Edit: fixed timings to reflect a more typical use case.)
After Edit:
OK, I have now got what I wanted working. It takes two Lists of any type of number and returns an Array of Doubles.
def abs[A](r: List[A], im: List[A])(implicit numeric: Numeric[A]): Array[Double] = {
  val t = new Array[Double](r.length)
  for (i <- r.indices) {
    t(i) = math.sqrt(numeric.toDouble(r(i)) * numeric.toDouble(r(i)) +
                     numeric.toDouble(im(i)) * numeric.toDouble(im(i)))
  }
  t
}
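A quick sanity check of the function above (repeated here so the snippet is self-contained; 3-4-5 triangles keep the expected results exact):

```scala
def abs[A](r: List[A], im: List[A])(implicit numeric: Numeric[A]): Array[Double] = {
  val t = new Array[Double](r.length)
  for (i <- r.indices) {
    val re = numeric.toDouble(r(i))
    val ix = numeric.toDouble(im(i))
    t(i) = math.sqrt(re * re + ix * ix)
  }
  t
}

// Works for Int as well as Double, since both have a Numeric instance.
val res = abs(List(3, 4), List(4, 3))
```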

Interesting Scala typing solution, doesn't work in 2.7.7?

I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be able to get used with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:
trait Pixel[T <: Pixel[T]] {
  def div(v: Double): T
  def div(v: T): T
}
Now I define a single Pixel type that has storage based on three Doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:
class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
  var data: Array[Double] = v

  def this() = this(Array(0.0, 0.0, 0.0))

  def div(v: Double): TripleDoublePixel = {
    new TripleDoublePixel(data.map(x => x / v))
  }

  def div(v: TripleDoublePixel): TripleDoublePixel = {
    val tmp = new Array[Double](3)
    tmp(0) = data(0) / v.data(0)
    tmp(1) = data(1) / v.data(1)
    tmp(2) = data(2) / v.data(2)
    new TripleDoublePixel(tmp)
  }
}
Then we define an Image using Pixels:
class Image[T](nsize: Array[Int], ndata: Array[T]) {
  val size: Array[Int] = nsize
  val data: Array[T] = ndata

  def this(isize: Array[Int]) {
    this(isize, new Array[T](isize(0) * isize(1)))
  }

  def this(img: Image[T]) {
    this(img.size, new Array[T](img.size(0) * img.size(1)))
    for (i <- 0 until img.data.size) {
      data(i) = img.data(i)
    }
  }
}
(I think I should be able to do without the explicit declaration of size and data, and use just what has been named in the default constructor, but I haven't gotten that to work.)
Now I want to write code to use this, that doesn't have to know what type the pixels are. For example:
def idiv[T](a: Image[T], b: Image[T]) {
  for (i <- 0 until a.data.size) {
    a.data(i) = a.data(i).div(b.data(i))
  }
}
Unfortunately, this doesn't compile:
(fragment of lindet-gen.scala):145:
error: value div is not a member of T
a.data(i) = a.data(i).div(b.data(i))
I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but the RC1 doesn't compile for me. Is there any way to get this to work in 2.7.7?
Your idiv function has to know it'll actually be working with pixels.
def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
  for (i <- 0 until a.data.size) {
    a.data(i) = a.data(i).div(b.data(i))
  }
}
A plain type parameter T would define the function for all possible types T, which of course don't all support a div operation. So you'll have to put a generic constraint limiting the possible types to Pixels.
(Note that you could put this constraint on the Image class as well, assuming that an image out of something different than pixels doesn't make sense)
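Putting that note into code, the same bound can live on the Image class; GrayPixel here is a made-up single-channel pixel type for illustration:

```scala
trait Pixel[T <: Pixel[T]] {
  def div(v: T): T
}

// Minimal pixel implementation for the sketch.
case class GrayPixel(v: Double) extends Pixel[GrayPixel] {
  def div(o: GrayPixel): GrayPixel = GrayPixel(v / o.v)
}

// The bound on Image itself rules out images of non-pixels.
class Image[T <: Pixel[T]](val data: Array[T])

def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]): Unit = {
  for (i <- a.data.indices)
    a.data(i) = a.data(i).div(b.data(i))
}

val a = new Image(Array(GrayPixel(8.0), GrayPixel(9.0)))
val b = new Image(Array(GrayPixel(2.0), GrayPixel(3.0)))
idiv(a, b)
```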