Newbie Scala question about simple math array operations

Newbie Scala Question:
Say I want to do this [Java code] in Scala:
public static double[] abs(double[] r, double[] im) {
    double[] t = new double[r.length];
    for (int i = 0; i < t.length; ++i) {
        t[i] = Math.sqrt(r[i] * r[i] + im[i] * im[i]);
    }
    return t;
}
and also make it generic (since I have read that Scala can handle generic primitives efficiently). Relying only on the core language (no library objects/classes, methods, etc.), how would one do this? Truthfully I don't see how to do it at all, so I guess that's just a pure bonus-point question.
I ran into sooo many problems trying to do this simple thing that I have given up on Scala for the moment. Hopefully once I see the Scala way I will have an 'aha' moment.
UPDATE:
Discussing this with others, this is the best answer I have found so far.
def abs[T](r: Iterable[T], im: Iterable[T])(implicit n: Numeric[T]) = {
  import n.mkNumericOps
  (r zip im).map(t => math.sqrt((t._1 * t._1 + t._2 * t._2).toDouble))
}

Doing generic/performant primitives in Scala actually involves two related mechanisms which Scala uses to avoid boxing/unboxing (e.g. wrapping an int in a java.lang.Integer and vice versa):
@specialized type annotations
Using Manifest with arrays
@specialized is an annotation that tells the Scala compiler to create "primitive" versions of code (akin to C++ templates, so I am told). Check out the type declaration of Tuple2 (which is specialized) compared with List (which isn't). It was added in 2.8 and means that, for example, code like CC[Int].map(f : Int => Int) is executed without ever boxing any ints (assuming CC is specialized, of course!).
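As a minimal sketch of how the annotation is used (a hypothetical Box class, not from the question):
class Box[@specialized(Int, Double) T](val value: T) {
  // the compiler emits extra Box variants whose value field is a raw int/double
  def map[@specialized(Int, Double) U](f: T => U): Box[U] = new Box(f(value))
}
new Box(42).map(_ + 1) then runs on unboxed ints in the specialized variant.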
Manifests are a way of doing reified types in Scala (which is limited by the JVM's type erasure). This is particularly useful when you want to have a method genericized on some type T and then need to create an array of T (i.e. T[]) within the method. In Java this is not possible because new T[] is illegal. In Scala this is possible using Manifests. In particular, in this case it allows us to construct a primitive T-array, like double[] or int[]. (This is awesome, in case you were wondering.)
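For example, a sketch of Manifest-driven array creation (Manifest is the 2.8-era mechanism; later Scala versions use ClassTag for the same job):
def fill[T : Manifest](n: Int, x: T): Array[T] = {
  val a = new Array[T](n) // legal only because a Manifest[T] is in scope
  var i = 0
  while (i < n) { a(i) = x; i += 1 }
  a
}
fill(3, 1.0) then builds a primitive double[] at runtime.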
Boxing is so important from a performance perspective because it creates garbage, unless all of your ints fall within the JVM's small-value cache (-128 to 127). It also, obviously, adds a level of indirection in terms of extra processing steps/method calls etc. But consider that you probably don't give a hoot unless you are absolutely positively sure that you definitely do (i.e. most code does not need such micro-optimization).
So, back to the question: in order to do this with no boxing/unboxing, you must use Array (List is not specialized yet, and would be more object-hungry anyway, even if it were!). The zipped function on a pair of collections will return a collection of Tuple2s (which will not require boxing, as Tuple2 is specialized).
In order to do this generically (i.e. across various numeric types) you must require context bounds on your generic parameter: that it is Numeric and that a Manifest can be found (required for array creation). So I started along the lines of...
def abs[T : Numeric : Manifest](rs : Array[T], ims : Array[T]) : Array[T] = {
  import math._
  val num = implicitly[Numeric[T]]
  (rs, ims).zipped.map { (r, i) => sqrt(num.plus(num.times(r, r), num.times(i, i))) }
  // ^^^^ no sqrt function for Numeric
}
...but it doesn't quite work. The reason is that a "generic" Numeric value does not have an operation like sqrt -> so you could only do this at the point of knowing you had a Double. For example:
scala> def almostAbs[T : Manifest : Numeric](rs : Array[T], ims : Array[T]) : Array[T] = {
     |   import math._
     |   val num = implicitly[Numeric[T]]
     |   (rs, ims).zipped.map { (r, i) => num.plus(num.times(r,r), num.times(i,i)) }
     | }
almostAbs: [T](rs: Array[T],ims: Array[T])(implicit evidence$1: Manifest[T],implicit evidence$2: Numeric[T])Array[T]
Excellent - now see this purely generic method do some stuff!
scala> val rs = Array(1.2, 3.4, 5.6); val is = Array(6.5, 4.3, 2.1)
rs: Array[Double] = Array(1.2, 3.4, 5.6)
is: Array[Double] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res0: Array[Double] = Array(43.69, 30.049999999999997, 35.769999999999996)
Now we can sqrt the result, because we have a Array[Double]
scala> res0.map(math.sqrt(_))
res1: Array[Double] = Array(6.609841147864296, 5.481788029466298, 5.980802621722272)
And to prove that this would work even with another Numeric type:
scala> import math._
import math._
scala> val rs = Array(BigDecimal(1.2), BigDecimal(3.4), BigDecimal(5.6)); val is = Array(BigDecimal(6.5), BigDecimal(4.3), BigDecimal(2.1))
rs: Array[scala.math.BigDecimal] = Array(1.2, 3.4, 5.6)
is: Array[scala.math.BigDecimal] = Array(6.5, 4.3, 2.1)
scala> almostAbs(rs, is)
res6: Array[scala.math.BigDecimal] = Array(43.69, 30.05, 35.77)
scala> res6.map(d => math.sqrt(d.toDouble))
res7: Array[Double] = Array(6.609841147864296, 5.481788029466299, 5.9808026217222725)
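As an aside: if losing the original element type is acceptable, a fully generic variant can do the Double conversion inside the method, so no sqrt on T is ever needed (a sketch; the name absD and the use of Numeric.toDouble are mine):
def absD[T : Numeric](rs: Array[T], ims: Array[T]): Array[Double] = {
  val num = implicitly[Numeric[T]]
  (rs, ims).zipped.map { (r, i) =>
    math.sqrt(num.toDouble(num.plus(num.times(r, r), num.times(i, i))))
  }
}
absD(rs, is) then works for Array[Double] and Array[BigDecimal] alike, always returning Array[Double].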

Use zip and map:
scala> val reals = List(1.0, 2.0, 3.0)
reals: List[Double] = List(1.0, 2.0, 3.0)
scala> val imags = List(1.5, 2.5, 3.5)
imags: List[Double] = List(1.5, 2.5, 3.5)
scala> reals zip imags
res0: List[(Double, Double)] = List((1.0,1.5), (2.0,2.5), (3.0,3.5))
scala> (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
res2: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
     |   (reals zip imags).map {z => math.sqrt(z._1*z._1 + z._2*z._2)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res3: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)
UPDATE
It is better to use zipped because it avoids creating a temporary collection:
scala> def abs(reals: List[Double], imags: List[Double]): List[Double] =
     |   (reals, imags).zipped.map {(x, y) => math.sqrt(x*x + y*y)}
abs: (reals: List[Double],imags: List[Double])List[Double]
scala> abs(reals, imags)
res7: List[Double] = List(1.8027756377319946, 3.2015621187164243, 4.6097722286464435)

There isn't an easy way in Java to create generic numeric computational code; the libraries just aren't there, as you can see from oxbow's answer. Collections are also designed to take arbitrary types, which means that there's an overhead in working with primitives through them. So the fastest code (without careful bounds checking) is either:
def abs(re: Array[Double], im: Array[Double]) = {
  val a = new Array[Double](re.length)
  var i = 0
  while (i < a.length) {
    a(i) = math.sqrt(re(i)*re(i) + im(i)*im(i))
    i += 1
  }
  a
}
or, tail-recursively:
def abs(re: Array[Double], im: Array[Double]) = {
  def recurse(a: Array[Double], i: Int = 0): Array[Double] = {
    if (i < a.length) {
      a(i) = math.sqrt(re(i)*re(i) + im(i)*im(i))
      recurse(a, i+1)
    }
    else a
  }
  recurse(new Array[Double](re.length))
}
So, unfortunately, this code ends up not looking super-nice; the niceness comes once you package it in a handy complex number array library.
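As a sketch of what such packaging might look like (a hypothetical ComplexArray wrapper, not an existing library):
final class ComplexArray(val re: Array[Double], val im: Array[Double]) {
  require(re.length == im.length, "parallel arrays must have equal length")
  def abs: Array[Double] = {
    val a = new Array[Double](re.length)
    var i = 0
    while (i < a.length) { a(i) = math.sqrt(re(i)*re(i) + im(i)*im(i)); i += 1 }
    a
  }
}
Callers then just write new ComplexArray(re, im).abs and never see the loop.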
If it turns out that you don't actually need highly efficient code, then
def abs(re: Array[Double], im: Array[Double]) = {
  (re, im).zipped.map((i, j) => math.sqrt(i*i + j*j))
}
will do the trick compactly and conceptually clearly (once you understand how zipped works). The penalty in my hands is that this is about 2x slower. (Using List makes it 7x slower than while or tail recursion in my hands; List with zip makes it 20x slower; generics with arrays are 3x slower even without computing the square root.)
(Edit: fixed timings to reflect a more typical use case.)
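For the curious, a crude harness of the kind one might use to reproduce such timings (a sketch using System.nanoTime; a serious benchmark would need JVM warm-up and many repetitions):
def time[A](label: String)(body: => A): A = {
  val t0 = System.nanoTime()
  val result = body
  println(f"$label: ${(System.nanoTime() - t0) / 1e6}%.2f ms")
  result
}
time("while loop") { abs(re, im) }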

After Edit:
OK, I have got what I wanted running. It will take two Lists of any type of number and return an Array of Doubles.
def abs[A](r: List[A], im: List[A])(implicit numeric: Numeric[A]): Array[Double] = {
  val t = new Array[Double](r.length)
  for (i <- r.indices) {
    val re = numeric.toDouble(r(i))
    val m  = numeric.toDouble(im(i))
    t(i) = math.sqrt(re * re + m * m)
  }
  t
}

Related

Eliminating identity wrapper types from Scala APIs

Suppose I am trying to "abstract over execution":
import scala.language.higherKinds

class Operator[W[_]](f: Int => W[Int]) {
  def operate(i: Int): W[Int] = f(i)
}
Now I can define an Operator[Future] or Operator[Task] etc. For example...
import scala.concurrent.{ExecutionContext,Future}
def futureSquared( i : Int ) = Future( i * i )( ExecutionContext.global )
In REPL-style...
scala> val fop = new Operator( futureSquared )
fop: Operator[scala.concurrent.Future] = Operator@105c54cb
scala> fop.operate(4)
res0: scala.concurrent.Future[Int] = Future(<not completed>)
scala> res0
res1: scala.concurrent.Future[Int] = Future(Success(16))
Hooray!
But I also might want a straightforward synchronous version, so I define somewhere
type Identity[T] = T
And I can define a synchronous operator...
scala> def square( i : Int ) : Identity[Int] = i * i
square: (i: Int)Identity[Int]
scala> val sop = new Operator( square )
sop: Operator[Identity] = Operator@18f2960b
scala> sop.operate(9)
res2: Identity[Int] = 81
Sweet.
But, it's awkward that the inferred type of the result is Identity[Int], rather than the simpler, straightforward Int. Of course the two types are really the same, and so are identical in every way. But I'd like clients of my library who don't know anything about this abstracting-over-execution stuff not to be confused.
I could write a wrapper by hand...
class SimpleOperator(inner: Operator[Identity]) extends Operator[Identity](inner.operate) {
  override def operate(i: Int): Int = super.operate(i)
}
which does work...
scala> val simple = new SimpleOperator( sop )
simple: SimpleOperator = SimpleOperator@345c744e
scala> simple.operate(7)
res3: Int = 49
But this feels very boiler-platey, especially if my abstracted-over-execution class has lots of methods rather than just one. And I'd have to remember to keep the wrapper in sync as the generic class evolves.
Is there some more generic, maintainable way to get a version of Operator[Identity] that makes the containing type disappear from the type inference and API docs?
This is more of a long comment than an answer...
But, it's awkward that the inferred type of the result is Identity[Int], rather than the simpler, straightforward Int. Of course the two apparent types are really the same, and so are identical in every way. But I'd like clients of my library who don't know anything about this abstracting-over-execution stuff not to be confused.
This sounds like you want to convert Identity[T] back to T... Have you considered type ascription?
scala> def f[T](t: T): Identity[T] = t
scala> f(3)
// res11: Identity[Int] = 3
scala> f(3): Int
// res12: Int = 3
// So in your case
scala> sop.operate(9): Int
// res18: Int = 81
As Steve Waldman suggested in the comments, given type Identity[T] = T, the types T and Identity[T] really are identical without any ceremony, substitutable and transparent at call sites or anywhere else. For example, the following works fine out-of-the-box:
sop.operate(9) // res2: cats.Id[Int] = 81
def foo(i: Int) = i
foo(sop.operate(9)) // res3: Int = 81
extract from Cats is the dual of pure and extracts the value from its context, so perhaps we could provide similar methods for users not familiar with the above equivalence (like myself if you see my previous edit).
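For instance, a sketch of such a convenience (hypothetical; note that since Identity[T] is just T, the enrichment applies to any value):
implicit class IdentityExtractOps[T](private val self: Identity[T]) extends AnyVal {
  def extract: T = self // a no-op at runtime; only the static type reads differently
}
sop.operate(9).extract // Int = 81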
It can be done by providing the types explicitly, but it still looks magical to external users investigating the method signature:
type Identity[T] = T

def square(i: Int): Int = i * i

class Operator[W[_], T <: W[Int]](f: Int => T) {
  def operate(i: Int): T = f(i)
}

val op = new Operator[Identity, Int](square)
op.operate(5)
// res0: Int = 25
Works for new Operator[Future,Future[Int]] as well.

Is there a better way for reduce operation on RDD[Array[Double]]

I want to reduce an RDD[Array[Double]] so that each element of an array is added to the corresponding element of the next array.
For the moment I use this code:
var rdd1: RDD[Array[Double]] = ... // the RDD to reduce
var coord = rdd1.reduce((x, y) => (x, y).zipped.map(_ + _))
Is there a better, more efficient way to do this, because it is costly?
Using zipped.map is very inefficient, because it creates a lot of temporary objects and boxes the doubles.
If you use spire, you can just do this
> import spire.implicits._
> val rdd1 = sc.parallelize(Seq(Array(1.0, 2.0), Array(3.0, 4.0)))
> var coord = rdd1.reduce( _ + _)
res1: Array[Double] = Array(4.0, 6.0)
This is much nicer to look at, and should also be much more efficient.
Spire is a dependency of Spark, so you should be able to do the above without any extra dependencies. At least it worked with a spark-shell for Spark 1.3.1 here.
This will work for any array where there is an AdditiveSemigroup typeclass instance available for the element type. In this case, the element type is Double. Spire typeclasses are @specialized for Double, so there will be no boxing going on anywhere.
If you really want to know what is going on to make this work, you have to use reify:
> import scala.reflect.runtime.{universe => u}
> val a = Array(1.0, 2.0)
> val b = Array(3.0, 4.0)
> u.reify { a + b }
res5: reflect.runtime.universe.Expr[Array[Double]] = Expr[scala.Array[Double]](
  implicits.additiveSemigroupOps(a)(
    implicits.ArrayNormedVectorSpace(
      implicits.DoubleAlgebra,
      implicits.DoubleAlgebra,
      Predef.this.implicitly)).$plus(b))
So the addition works because there is an instance of AdditiveSemigroup for Array[Double].
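To make the mechanism concrete, here is a minimal sketch of the typeclass pattern involved (simplified names and signatures, not Spire's actual API):
import scala.reflect.ClassTag

trait AddSemigroup[A] { def plus(x: A, y: A): A }

object AddSemigroup {
  implicit val doubleAdd: AddSemigroup[Double] =
    new AddSemigroup[Double] { def plus(x: Double, y: Double) = x + y }

  // lift any element instance to elementwise addition on arrays
  implicit def arrayAdd[A: ClassTag](implicit s: AddSemigroup[A]): AddSemigroup[Array[A]] =
    new AddSemigroup[Array[A]] {
      def plus(x: Array[A], y: Array[A]): Array[A] = {
        val out = new Array[A](x.length)
        var i = 0
        while (i < out.length) { out(i) = s.plus(x(i), y(i)); i += 1 }
        out
      }
    }
}

def combine[A](x: A, y: A)(implicit s: AddSemigroup[A]): A = s.plus(x, y)
// combine(Array(1.0, 2.0), Array(3.0, 4.0)) == Array(4.0, 6.0)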
I assume the concern is that you have very large Array[Double] and the transformation as written does not distribute the addition of them. If so, you could do something like (untested):
// map each Array[Double] to (index, double) pairs
val rdd2 = rdd1.flatMap(a => a.zipWithIndex.map(t => (t._2, t._1)))
// get the sum for each index
val reduced = rdd2.reduceByKey(_ + _)
// key everything the same to get a single iterable in groupByKey
val groupAll = reduced.map(t => ("constKey", (t._1, t._2)))
// get the doubles back together into an array
val coord = groupAll.groupByKey.map { case (_, vs) =>
  vs.toList.sortBy(_._1).map(_._2).toArray
}

Scala Datatype for numeric real range

Is there some idiomatic Scala type to limit a floating point value to a given range defined by an upper and a lower bound?
Concretely, I want to have a float type that is only allowed to hold values between 0.0 and 1.0.
More concretely, I am about to write a function that takes an Int and another function that maps this Int to the range between 0.0 and 1.0; in pseudo-Scala:
def foo(x: Int, f: (Int => {0.0,...,1.0})) {
  // ....
}
I have already searched the boards, but found nothing appropriate. Some implicit magic or a custom typedef would also be OK for me.
I wouldn't know how to do it statically, except with dependent types (example), which Scala doesn't have. If you only dealt with constants it should be possible to use macros or a compiler plug-in that performs the necessary checks, but if you have arbitrary float-typed expressions it is very likely that you have to resort to runtime checks.
Here is an approach. Define a class that performs a runtime check to ensure that the float value is in the required range:
abstract class AbstractRangedFloat(lb: Float, ub: Float) {
  require(lb <= value && value <= ub, s"Requires $lb <= $value <= $ub to hold")
  def value: Float
}
You could use it as follows:
case class NormalisedFloat(val value: Float)
  extends AbstractRangedFloat(0.0f, 1.0f)

NormalisedFloat(0.99f)
NormalisedFloat(-0.1f) // Exception
Or as:
case class RangedFloat(val lb: Float, val ub: Float)(val value: Float)
  extends AbstractRangedFloat(lb, ub)

val RF = RangedFloat(-0.1f, 0.1f) _
RF(0.0f)
RF(0.2f) // Exception
It would be nice if one could use value classes in order to gain some performance, but the call to require in the constructor (currently) prohibits that.
EDIT: addressing comments by @paradigmatic
Here is an intuitive argument why types depending on natural numbers can be encoded in a type system that does not (fully) support dependent types, but ranged floats probably cannot: the natural numbers form an enumerable set, which makes it possible to encode each element as a path-dependent type using Peano numerals. The real numbers, however, are not enumerable, and it is thus no longer possible to systematically create types corresponding to each element of the reals.
Now, computer floats and reals are eventually finite sets, but still way too large to be reasonably enumerable in a type system. The set of computer natural numbers is of course also very large, and thus poses a problem for arithmetic over Peano numerals encoded as types; see the last paragraph of this article. However, I claim that it is often sufficient to work with the first n (for a rather small n) natural numbers, as, for example, evidenced by HLists. Making the corresponding claim for floats is less convincing: would it be better to encode 10,000 floats between 0.0 and 1.0, or rather 10,000 between 0.0 and 100.0?
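For reference, a tiny sketch of the type-level Peano encoding the argument refers to:
sealed trait Nat
sealed trait Zero extends Nat
sealed trait Succ[N <: Nat] extends Nat

type _0 = Zero
type _1 = Succ[_0]
type _2 = Succ[_1] // ... one distinct type per natural number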
Here is another approach using an implicit class:
object ImplicitMyFloatClassContainer {
  implicit class MyFloat(val f: Float) {
    // NOTE: checksEnabled must be initialized before the check(f) call below,
    // otherwise it is still false when the constructor-time check runs
    val checksEnabled = true
    check(f)

    override def toString: String = {
      // The "*" is just to show that this method actually gets called
      f.toString() + "*"
    }

    @inline
    def check(f: Float) {
      if (checksEnabled) {
        print(s"Checking $f")
        require(0.0 <= f && f <= 1.0, "Out of range")
        println(" OK")
      }
    }

    @inline
    def add(f2: Float): MyFloat = {
      check(f2)
      val result = f + f2
      check(result)
      result
    }

    @inline
    def +(f2: Float): MyFloat = add(f2)
  }
}
object MyFloatDemo {
  def main(args: Array[String]) {
    import ImplicitMyFloatClassContainer._

    println("= Checked =")
    val a: MyFloat = 0.3f
    val b = a + 0.4f
    println(s"Result 1: $b")
    val c = 0.3f add 0.5f
    println("Result 2: " + c)

    println("= Unchecked =")
    val x = 0.3f + 0.8f
    println(x)
    val f = 0.5f
    val r = f + 0.3f
    println(r)

    println("= Check applied =")
    try {
      println(0.3f add 0.9f)
    } catch {
      case e: IllegalArgumentException => println("Failed as expected")
    }
  }
}
It requires a hint for the compiler to use the implicit class, either by typing the summands explicitly or by choosing a method which is not provided by Scala's Float.
This way at least the checks are centralized, so you can turn it off, if performance is an issue. As mhs pointed out, if this class is converted to an implicit value class, the checks must be removed from the constructor.
I have added @inline annotations, but I'm not sure if this is helpful/necessary with implicit classes.
Finally, I have had no success unimporting Scala's Float "+" with
import scala.{Float => RealFloat}
import scala.Predef.{float2Float => _}
import scala.Predef.{Float2float => _}
possibly there is another way to achieve this in order to push the compiler to use the implicit class.
You can use value classes, as pointed out by mhs:
case class Prob private (val x: Double) extends AnyVal {
  def *(that: Prob) = Prob(this.x * that.x)
  def opposite = Prob(1 - x)
}

object Prob {
  def make(x: Double) =
    if (x >= 0 && x <= 1)
      Prob(x)
    else
      throw new RuntimeException("X must be between 0 and 1")
}
They must be created using the factory method in the companion object, which will check that the range is correct:
scala> val x = Prob.make(0.5)
x: Prob = Prob(0.5)
scala> val y = Prob.make(1.1)
java.lang.RuntimeException: X must be between 0 and 1
However, operations that can never produce a number outside the range will not require a validity check, for instance * or opposite.
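For instance (illustrative, using the Prob class above):
val half = Prob.make(0.5)
val quarter = half * half            // Prob(0.25): no runtime check needed,
                                     // a product of values in [0,1] stays in [0,1]
val threeQuarters = quarter.opposite // Prob(0.75)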

How to use scala.util.Sorting.quickSort() with arbitrary types?

I need to sort an array of pairs by the second element. How do I pass a comparator for my pairs to the quickSort function?
I'm using the following ugly approach now:
type AccResult = (AccUnit, Long) // pair

class Comparator(a: AccResult) extends Ordered[AccResult] {
  def compare(that: AccResult) = lessCompare(a, that)
  def lessCompare(a: AccResult, that: AccResult) =
    if (a._2 == that._2) 0 else if (a._2 < that._2) -1 else 1
}

scala.util.Sorting.quickSort(data)(d => new Comparator(d))
Why is quickSort designed to take an Ordered view instead of a plain comparator argument?
Scala 2.7 solutions are preferred.
I tend to prefer non-implicit arguments unless it's being used in more than a few places.
type Pair = (String, Int)

val items: Array[Pair] = Array(("one", 1), ("three", 3), ("two", 2))

quickSort(items)(new Ordering[Pair] {
  def compare(x: Pair, y: Pair) = x._2 compare y._2
})
Edit: After learning about view bounds in another question, I think that this approach might be better:
val items: Array[(String, Int)] = Array(("one", 1), ("three", 3), ("two", 2))

class OrderTupleBySecond[X, Y <% Comparable[Y]] extends Ordering[(X, Y)] {
  def compare(x: (X, Y), y: (X, Y)) = x._2 compareTo y._2
}

util.Sorting.quickSort(items)(new OrderTupleBySecond[String, Int])
In this way, OrderTupleBySecond could be used for any Tuple2 type where the type of the 2nd member of the tuple has a view in scope which would convert it to a Comparable.
OK, I'm not sure exactly what it is you are unhappy about in what you are currently doing, but perhaps all you are looking for is this?
implicit def toComparator(a: AccResult) = new Comparator(a)
scala.util.Sorting.quickSort(data)
If, on the other hand, the problem is that the tuple is Ordered and you want a different ordering, well, that's why it changed on Scala 2.8.
* EDIT *
Ouch! Sorry, I only now realize you said you preferred Scala 2.7 solutions. I have edited this answer to put the solution for 2.7 above. What follows is a 2.8 solution.
Scala 2.8 expects an Ordering, not an Ordered, which is a context bound, not a view bound. You'd write your code in 2.8 like this:
type AccResult = (AccUnit, Long) // pair

implicit object AccResultOrdering extends Ordering[AccResult] {
  def compare(x: AccResult, y: AccResult) =
    if (x._2 == y._2) 0 else if (x._2 < y._2) -1 else 1
}
Or maybe just:
type AccResult = (AccUnit, Long) // pair
implicit val AccResultOrdering = Ordering by ((_: AccResult)._2)
And use it like:
scala.util.Sorting.quickSort(data)
On the other hand, the usual way to do sort in Scala 2.8 is just to call one of the sorting methods on it, such as:
data.sortBy((_: AccResult)._2)
Have your type extend Ordered, like so:
case class Thing(number: Integer, name: String) extends Ordered[Thing] {
  def compare(that: Thing) = name.compare(that.name)
}
And then pass it to sort, like so:
val array = Array(Thing(4, "Doll"), Thing(2, "Monkey"), Thing(7, "Green"))
scala.util.Sorting.quickSort(array)
Printing the array will give you:
array.foreach{ e => print(e) }
>> Thing(4,Doll) Thing(7,Green) Thing(2,Monkey)

Increment (++) operator in Scala

Is there any reason for Scala not to support the ++ operator for incrementing primitive types by default?
For example, you cannot write:
var i=0
i++
Thanks
My guess is this was omitted because it would only work for mutable variables, and it would not make sense for immutable values. Perhaps it was decided that the ++ operator doesn't scream assignment, so including it may lead to mistakes with regard to whether or not you are mutating the variable.
I feel that something like this is safe to do (on one line):
i++
but this would be a bad practice (in any language):
var x = i++
You don't want to mix assignment statements and side effects/mutation.
I like Craig's answer, but I think the point has to be more strongly made.
There are no "primitives" -- if Int can do it, then so can a user-made Complex (for example).
Basic usage of ++ would be like this:
var x = 1 // or Complex(1, 0)
x++
How do you implement ++ in class Complex? Assuming that, like Int, the object is immutable, then the ++ method needs to return a new object, but that new object has to be assigned.
It would require a new language feature. For instance, let's say we create an assign keyword. The type signature would need to be changed as well, to indicate that ++ is not returning a Complex, but assigning it to whatever field is holding the present object. In the Scala spirit of not intruding on the programmer's namespace, let's say we do that by prefixing the type with #.
Then it could be like this:
case class Complex(real: Double = 0, imaginary: Double = 0) {
  def ++ : #Complex = {
    assign copy(real = real + 1)
    // instead of return copy(real = real + 1)
  }
}
The next problem is that postfix operators suck with Scala rules. For instance:
def inc(x: Int) = {
  x++
  x
}
Because of Scala rules, that is the same thing as:
def inc(x: Int) = { x ++ x }
Which wasn't the intent. Now, Scala privileges a flowing style: obj method param method param method param .... That mixes well C++/Java traditional syntax of object method parameter with functional programming concept of pipelining an input through multiple functions to get the end result. This style has been recently called "fluent interfaces" as well.
The problem is that, by privileging that style, it cripples postfix operators (and prefix ones, but Scala barely has them anyway). So, in the end, Scala would have to make big changes, and it still wouldn't be able to measure up to the elegance of C/Java's increment and decrement operators anyway -- unless it really departed from the kind of thing it does support.
In Scala, ++ is a valid method, and no method implies assignment. Only = can do that.
A longer answer is that languages like C++ and Java treat ++ specially, and Scala treats = specially, and the two approaches are inconsistent with each other.
In Scala, when you write i += 1 the compiler first looks for a method called += on the Int. It's not there, so next it does its magic on = and tries to compile the line as if it read i = i + 1. If you write i++ then Scala will call the method ++ on i and assign the result to... nothing. Because only = means assignment. You could write i ++= 1 but that kind of defeats the purpose.
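A minimal illustration of that desugaring:
var i = 0
i += 1   // Int has no += method, so the compiler rewrites this as i = i + 1
// i++   // error: value ++ is not a member of Int -- and even if ++ existed,
//       // its result would be discarded rather than assigned back to i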
The fact that Scala supports method names like += is already controversial and some people think it's operator overloading. They could have added special behavior for ++ but then it would no longer be a valid method name (like =) and it would be one more thing to remember.
I think the reasoning is in part that += 1 is only one more character, and ++ is used pretty heavily in the collections code for concatenation. So it keeps the code cleaner.
Also, Scala encourages immutable values, and ++ is intrinsically a mutating operation. If you require +=, at least you can force all your mutations to go through a common assignment procedure (e.g. def a_=).
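A sketch of what such a common assignment procedure can look like (a hypothetical Counter class, using the a_= setter convention):
class Counter {
  private var _a: Int = 0
  def a: Int = _a
  def a_=(v: Int): Unit = {                   // makes `c.a = ...` legal syntax
    require(v >= 0, "must stay non-negative") // one central place to audit mutation
    _a = v
  }
}

val c = new Counter
c.a += 1 // expands to c.a = c.a + 1, i.e. c.a_=(c.a + 1)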
The primary reason is that there is not the same need for it in Scala as in C. In C you are constantly writing:
for (i = 0; i < 10; i++) {
    // Do stuff
}
C++ has added higher-level methods for avoiding explicit for loops, but Scala has gone much further, providing foreach, map, flatMap, foldLeft, etc. Even if you actually want to operate on a sequence of integers rather than just cycling through a collection of non-integer objects, you can use a Scala Range:
(1 to 5) map (_ * 3)       // Vector(3, 6, 9, 12, 15)
(1 to 10 by 3) map (_ + 5) // Vector(6, 9, 12, 15)
Because the ++ operator is used by the collection library, I feel it's better to avoid its use in non-collection classes. I used to have ++ as a value-returning method in my Util package's package object, like so:
implicit class RichInt2(n: Int) {
  def isOdd: Boolean  = n % 2 == 1
  def isEven: Boolean = n % 2 == 0
  def ++ : Int = n + 1
  def -- : Int = n - 1
}
But I removed it. Most of the time when I have used ++ or + 1 on an integer, I have later found a better way, which doesn't require it.
It is possible if you define your own class which can simulate the desired output; however, it may be a pain if you want to use normal Int methods as well, since you would always have to use *():
import scala.language.postfixOps // otherwise a warning is thrown when doing num++

/*
 * my custom int class which can do ++ and --
 */
class int(value: Int) {
  var mValue = value

  // Post-increment
  def ++(): int = {
    val toReturn = new int(mValue)
    mValue += 1
    toReturn
  }

  // Post-decrement
  def --(): int = {
    val toReturn = new int(mValue)
    mValue -= 1
    toReturn
  }

  // a readable toString
  override def toString(): String = mValue.toString
}

// Pre-increment
def ++(n: int): int = {
  n.mValue += 1
  n
}

// Pre-decrement
def --(n: int): int = {
  n.mValue -= 1
  n
}

// Something to get a normal Int back
def *(n: int): Int = n.mValue
Some possible test cases
scala> var num = new int(4)
num: int = 4
scala> num++
res0: int = 4
scala> num
res1: int = 5 // it works, although Scala always creates new objects
scala> ++(num) // parentheses are required
res2: int = 6
scala> num
res3: int = 6
scala> ++(num)++ // a more complex expression
res4: int = 7
scala> num
res5: int = 8
scala> *(num) + *(num) // testing operator *
res6: Int = 16
Of course you can have that in Scala, if you really want:
import scalaz._
import Scalaz._

case class IncLens[S, N](lens: Lens[S, N], num: Numeric[N]) {
  def ++ = lens.mods(num.plus(_, num.one))
}

implicit def incLens[S, N: Numeric](lens: Lens[S, N]) =
  IncLens[S, N](lens, implicitly[Numeric[N]])

val i = Lens[Int, Int](identity, (x, y) => y)

val imperativeProgram = for {
  _ <- i := 0;
  _ <- i++;
  _ <- i++;
  x <- i++
} yield x

def runProgram = imperativeProgram ! 0
And here you go:
scala> runProgram
runProgram: Int = 3
It isn't included because the Scala developers thought it would make the specification more complex while achieving only negligible benefits, and because Scala doesn't have operators at all (what look like operators are just methods).
You could write your own one like this:
class PlusPlusInt(i: Int) {
  def ++ = i + 1
}

implicit def int2PlusPlusInt(i: Int) = new PlusPlusInt(i)

val a = 5++
// a is 6
But I'm sure you will get into some trouble with precedence not working as you expect. Additionally, if i++ were added, people would ask for ++i too, which doesn't really fit into Scala's syntax.
Let's define a var:
var i = 0
++i is already short enough:
{i+=1;i}
Now i++ can look like this:
i(i+=1)
To use the above syntax, define the following somewhere inside a package object, and then import it:
class IntPostOp(val i: Int) { def apply(op: Unit) = { op; i } }
implicit def int2IntPostOp(i: Int): IntPostOp = new IntPostOp(i)
Operator chaining is also possible:
i(i+=1)(i%=array.size)(i&=3)
The above example is similar to this Java (C++?) code:
i = (i = i++ % array.length) & 3;
Which style to prefer is, of course, a matter of taste.