My understanding was that no non-capturing lambda should require object creation at the use site, because a single instance can be created as a static field and reused. In principle, the same could be true for lambdas consisting only of a call to a method of the enclosing class - only the field would be non-static. I never actually tried to dig any deeper into it; now I am looking at the bytecode, I don't see such a field in the enclosing class, and I don't have a good idea where else to look. I see though that the lambda factory is different from the one in Java, so this should have a clear answer - at least for a given Scala version.
My motivation is simple: profiling is very time-consuming. Introducing method values (or, in general, lambdas capturing only the state of the enclosing object) as private class fields is less clean and more work than writing them inline and, in general, not good code. But when writing areas known (with high likelihood) to be hot spots, it's a very simple optimisation that can be performed straight away without any real impact on the programmer's time. It doesn't make sense, though, if no new object is created anyway.
Take for example:
def alias(x :X) = aliases.getOrElse(x, x)

def alias2(x :X) = aliases.getOrElse(x, null) match {
  case null => x
  case a => a
}
The first lambda (a Function0) must be a new object because it captures method parameter x, while the second one returns a constant (null) and thus doesn't really have to. It is also less messy (IMO) than a private class field, which pollutes the namespace, but I would like to be able to know for sure - or have a way of easily confirming my expectations.
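For comparison, the private-field alternative I have in mind looks something like this sketch (a hypothetical Tagger class, using map rather than a by-name parameter so the function value is passed directly):
class Tagger(prefix: String) {
  // Hoisted once per instance: the function captures only enclosing state
  // (prefix via this), so a single instance can serve every call.
  private[this] val addPrefix: String => String = s => prefix + s

  def tagAll(xs: List[String]): List[String] = xs.map(addPrefix)

  // Inline variant: whether s => prefix + s allocates per call is exactly
  // the question above.
  def tagAllInline(xs: List[String]): List[String] = xs.map(s => prefix + s)
}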
The following proves that at least some of the time, the answer is "no":
scala 2.13.4> def foo = () => 1
def foo: () => Int
scala 2.13.4> foo eq foo
val res5: Boolean = true
Looking at the bytecode produced by this code:
import scala.collection.immutable.ListMap
object ByName {
  def aliases = ListMap("Ein" -> "One", "Zwei" -> "Two", "Drei" -> "Three")
  val default = "NaN"

  def alias(x: String) = aliases.getOrElse(x, x)

  def alias2(x: String) = aliases.getOrElse(x, null) match {
    case null => x
    case a => a
  }

  def alias3(x: String) = aliases.getOrElse(x, default)
}
The compiler generates static methods for the by-name parameters. They look like this:
public static final java.lang.String $anonfun$alias$1(java.lang.String);
Code:
0: aload_0
1: areturn
public static final scala.runtime.Null$ $anonfun$alias2$1();
Code:
0: aconst_null
1: areturn
public static final java.lang.String $anonfun$alias3$1();
Code:
0: getstatic #26 // Field MODULE$:LByName$;
3: invokevirtual #138 // Method default:()Ljava/lang/String;
6: areturn
The naive approach would have been for the compiler to generate anonymous classes that implement the Function0 interface. However, this would cause bytecode bloat. Instead, the compiler defers creating these anonymous classes until runtime, via invokedynamic instructions.
Exactly how Scala uses these invokedynamic instructions is beyond my knowledge. It's possible that they cache the generated Function0 object somehow, but my guess is that the invokedynamic call is sufficiently optimized that it's faster to just generate a new one every time. Allocating short-lived objects is cheap, and the cost is most often overestimated. Reusing an existing object might even be slower than creating a new one if it means cache misses.
I also want to point out that this is an implementation detail, and likely to change at any time. The Scala compiler devs and JVM devs know what they are doing, so you are probably better off trusting that their implementation balances performance well.
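That said, if you want to check empirically rather than guess, an eq-based test along the lines of the one in the question works (a sketch; the exact behaviour may vary by Scala version):
object LambdaCacheCheck {
  def capturing(x: Int): () => Int = () => x // captures x: needs a fresh instance
  def nonCapturing: () => Int = () => 1      // captures nothing: may be cached

  def main(args: Array[String]): Unit = {
    println(nonCapturing eq nonCapturing) // true means the instance is reused
    println(capturing(1) eq capturing(1)) // false: one allocation per call
  }
}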
Suppose I have a function that accepts an object and a list:
case class Point(x: Int, y: Int)
def f1(w: Point, l: List[String]) = { /* do something */ }
I would typically use it like this:
val w = Point(1,1)
val lst = List("Hello", "world")
f1(w, lst) // non empty list
Many times I would need to call the function with an empty list as the second parameter:
f1(w, List()) // empty list
f1(w, Nil) // empty list
Is there any performance difference between the last two lines ?
I think using List() will invoke the List.apply() method. Does the Scala compiler optimize it to Nil?
EDIT1
This is not a duplicate of Scala: Nil vs List()
NOTE: Is there any performance difference between Nil and List()? Does the Scala compiler do any optimizations here?
With a class like this
import scala.collection.immutable.List

class Test {
  val l = List() // or Nil
}
Compiling both of them and then checking the generated bytecode with javap -v
List() gives:
5: getstatic #26 // Field scala/collection/immutable/Nil$.MODULE$:Lscala/collection/immutable/Nil$;
8: putfield #14 // Field l:Lscala/collection/immutable/List;
And Nil gives:
5: getstatic #24 // Field scala/collection/immutable/Nil$.MODULE$:Lscala/collection/immutable/Nil$;
8: putfield #13 // Field l:Lscala/collection/immutable/Nil$;
So, the bytecode (and the performance) is the same for both. There might be other reasons to choose one over the other though, as described in the issue linked by Govind in the comments.
A deeper dive into the rabbit hole:
Looking at the sources, List() is sugar for List.apply(), which is implemented like this:
def apply[A](xs: A*) = xs.toList
toList comes from TraversableOnce and calls to[List], which implicitly takes a CanBuildFrom[Nothing, A, List[A]] - in this case List.canBuildFrom, which in turn comes from ReusableCBF. That builder is then called with apply(), then ++= with the empty argument sequence, and finally build().
How this can be eliminated/transformed into a getstatic for List() isn't very clear to me. (Or I missed something clever on the way.)
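A quick REPL cross-check of the end result (reference equality means no fresh object was allocated):
scala> List() eq Nil
res0: Boolean = true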
I'm currently trying to apply a more functional programming style to a project involving low-level (LWJGL-based) GUI development. Obviously, in such a case it is necessary to carry around a lot of state, which is mutable in the current version. My goal is to eventually have a completely immutable state, in order to avoid state changes as side effects. I studied scalaz's lenses and state monads for a while, but my main concern remains: all these techniques rely on copy-on-write. Since my state has both a large number of fields and some fields of considerable size, I'm worried about performance.
To my knowledge the most common approach to modify immutable objects is to use the generated copy method of a case class (this is also what lenses do under the hood). My first question is, how this copy method is actually implemented? I performed a few experiments with a class like:
case class State(
  innocentField: Int,
  largeMap: Map[Int, Int],
  largeArray: Array[Int]
)
By benchmarking and also by looking at the output of -Xprof, it looks like someState.copy(innocentField = 42) actually performs a deep copy, and I observe a significant performance drop when I increase the size of largeMap and largeArray. I was somehow expecting that the newly constructed instance would share the object references of the original state, since internally the references should just get passed to the constructor. Can I somehow force or disable this deep-copy behaviour of the default copy?
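For reference, this is the behaviour I expected copy to have, as a hand-written sketch (my mental model, not the actual generated code):
def copy(
  innocentField: Int = this.innocentField,  // defaults forward the current fields
  largeMap: Map[Int, Int] = this.largeMap,
  largeArray: Array[Int] = this.largeArray
): State = new State(innocentField, largeMap, largeArray)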
While pondering on the copy-on-write issue, I was wondering whether there are more general solutions to this problem in FP, which store changes of immutable data in a kind of incremental way (in the sense of "collecting updates" or "gathering changes"). To my surprise I could not find anything, so I tried the following:
// example state with just two fields
trait State {
  def getName: String
  def getX: Int

  def setName(updated: String): State = new CachedState(this) {
    override def getName: String = updated
  }

  def setX(updated: Int): State = new CachedState(this) {
    override def getX: Int = updated
  }

  // convenient modifiers
  def modName(f: String => String) = setName(f(getName))
  def modX(f: Int => Int) = setX(f(getX))

  def build(): State = new BasicState(getName, getX)
}

// actual (full) implementation of State
class BasicState(
  val getName: String,
  val getX: Int
) extends State

// CachedState delegates all getters to another state
class CachedState(oldState: State) extends State {
  def getName = oldState.getName
  def getX = oldState.getX
}
Now this makes it possible to do something like this:
var s: State = new BasicState("hello", 42)
// updating single fields does not copy
s = s.setName("world")
s = s.setX(0)
// after a certain number of "wrappings"
// we can extract (i.e. copy) a normal instance
val ns = s.setName("ok").setX(40).modX(_ + 2).build()
My question now is: What do you think of this design? Is this some kind of FP design pattern that I'm not aware of (apart from the similarity to the Builder pattern)? Since I have not found anything similar, I'm wondering if there is some major issue with this approach? Or are there any more standard ways to solve the copy-on-write bottleneck without giving up immutability?
Is there even a possibility to unify the get/set/mod functions in some way?
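For instance, I could imagine reifying each field into a small descriptor, so that get/set/mod all come from a single definition (a hypothetical, untested sketch):
// one value per field bundles the getter and setter; mod falls out for free
case class Field[S, A](get: S => A, set: (S, A) => S) {
  def mod(s: S)(f: A => A): S = set(s, f(get(s)))
}

val nameField = Field[State, String](_.getName, (s, n) => s.setName(n))
val xField    = Field[State, Int](_.getX, (s, x) => s.setX(x))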
Edit:
My assumption that copy performs a deep copy was indeed wrong.
This is basically the same as views and is a type of lazy evaluation; this type of strategy is more or less the default in Haskell, and is used in Scala a fair bit (see e.g. mapValues on maps, grouped on collections, pretty much anything on Iterator or Stream that returns another Iterator or Stream, etc.). It is a proven strategy to avoid extra work in the right context.
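For instance, the mapValues case mentioned above (2.12-era behaviour; it wraps the original map rather than building a new one):
val m = Map(1 -> 1, 2 -> 2).mapValues(_ + 1) // no new map is built here
m(1) // the mapping function runs on access and yields 2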
But I think your premise is somewhat mistaken.
case class Foo(bar: Int, baz: Map[String,Boolean]) {}
Foo(1,Map("fish"->true)).copy(bar = 2)
does not in fact cause the map to be copied deeply. It just sets references. Proof in bytecode:
62: astore_1
63: iconst_2 // This is bar = 2
64: istore_2
65: aload_1
66: invokevirtual #72; //Method Foo.copy$default$2:()Lscala/collection/immutable/Map;
69: astore_3 // That was baz
70: aload_1
71: iload_2
72: aload_3
73: invokevirtual #76; //Method Foo.copy:(ILscala/collection/immutable/Map;)LFoo;
And let's see what that copy$default$2 thing does:
0: aload_0
1: invokevirtual #50; //Method baz:()Lscala/collection/immutable/Map;
4: areturn
Just returns the map.
And copy itself?
0: new #2; //class Foo
3: dup
4: iload_1
5: aload_2
6: invokespecial #44; //Method "<init>":(ILscala/collection/immutable/Map;)V
9: areturn
Just calls the regular constructor. No cloning of the map.
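If you prefer a runtime check to reading bytecode, reference equality shows the sharing directly:
val foo  = Foo(1, Map("fish" -> true))
val foo2 = foo.copy(bar = 2)
println(foo2.baz eq foo.baz) // true: both instances share the same Map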
So when you copy, you create exactly one object--a new copy of what you're copying, with fields filled in. If you have a large number of fields, your view will be faster (as you have to create one new object (two if you use the function application version, since you need to create the function object also) but it has only one field). Otherwise it should be about the same.
So, yes, good idea potentially, but benchmark carefully to be sure it's worth it in your case--you have to write a fair bit of code by hand instead of letting the case class do it all for you.
I tried to write a (quite rough) test for timing the performance of the copy operation on your case class.
object CopyCase {
  def main(args: Array[String]) = {
    val testSizeLog = byTen(10 #:: Stream[Int]()).take(6).toList
    val testSizeLin = (100 until 1000 by 100) ++ (1000 until 10000 by 1000) ++ (10000 to 40000 by 10000)

    //warm-up
    runTest(testSizeLin)

    //test with logarithmic size increments
    val times = runTest(testSizeLog)
    //test with linear size increments
    val timesLin = runTest(testSizeLin)

    times.foreach(println)
    timesLin.foreach(println)
  }

  //The case class whose copy is being tested
  case class State(
    innocentField: Int,
    largeMap: Map[Int, Int],
    largeArray: Array[Int]
  )

  //executes the test
  def runTest(sizes: Seq[Int]) =
    for {
      s <- sizes
      st = State(s, largeMap(s), largeArray(s))
      //(time, state) = takeTime(st.copy(innocentField = 42)) //single run for each size
      (time, state) = mean(st.copy(innocentField = 42))(takeTime) //mean time over multiple runs for each size
    } yield (s, time)

  //Creates the stream of 10^n with n = Naturals+{0}
  def byTen(s: Stream[Int]): Stream[Int] = s.head #:: byTen(s map (_ * 10))

  //appends the execution time (in microseconds) to the result
  def takeTime[A](thunk: => A): (Double, A) = {
    import System.{nanoTime => nanos}
    val t0: Double = nanos
    val res = thunk
    val time = (nanos - t0) / 1000 //nanoseconds to microseconds
    (time, res)
  }

  //takes the mean over multiple runs of the first element of the pair
  def mean[A](thunk: => A)(fun: (=> A) => (Double, A)) = {
    val population = 50
    val mean = ((for (n <- 1 to population) yield fun(thunk)) map (_._1)).sum / population
    (mean, fun(thunk)._2)
  }

  //Builds collections of the requested size
  def largeMap(size: Int) = (for (i <- (1 to size)) yield (i, i)).toMap
  def largeArray(size: Int) = Array.fill(size)(1)
}
On this machine:
CPU: 64bits dual-core-i5 3.10GHz
RAM: 8GB ram
OS: win7
Java: 1.7
Scala: 2.9.2
I have the following results, which look pretty regular to me.
(size, microseconds to copy)
(10,0.4347000000000001)
(100,0.4412600000000001)
(1000,0.3953200000000001)
(10000,0.42161999999999994)
(100000,0.4478600000000002)
(1000000,0.42816000000000015)
(100,0.4084399999999999)
(200,0.41494000000000014)
(300,0.42156000000000016)
(400,0.4281799999999999)
(500,0.42160000000000003)
(600,0.4347200000000001)
(700,0.43466000000000016)
(800,0.41498000000000007)
(900,0.40178000000000014)
(1000,0.44134000000000007)
(2000,0.42151999999999995)
(3000,0.42148)
(4000,0.40842)
(5000,0.38860000000000006)
(6000,0.4413600000000001)
(7000,0.4743200000000002)
(8000,0.44795999999999997)
(9000,0.45448000000000005)
(10000,0.45448)
(20000,0.4281600000000001)
(30000,0.46768)
(40000,0.4676200000000001)
Maybe you have different performance measurements in mind.
Or could it be that your profiled times are actually spent on generating the Map and the Array, instead of copying the case class?
First, consider the following code:
scala> val fail = (x: Any) => { throw new RuntimeException }
fail: Any => Nothing = <function1>
scala> List(1).foreach(fail)
java.lang.RuntimeException
at $anonfun$1.apply(<console>:7)
at $anonfun$1.apply(<console>:7)
at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
There is an additional anonfun between foreach and the exception. One is expected to be the value of fail itself (an object of class Function1), but where does the second come from?
foreach's signature takes this function:
def foreach[U](f: A => U): Unit
So, what is the purpose of the second one?
Second, consider the following code:
scala> def outer() {
| def innerFail(x: Any) = { throw new RuntimeException("inner fail") }
|
| Set(1) foreach innerFail
| }
outer: ()Unit
scala> outer()
java.lang.RuntimeException: inner fail
at .innerFail$1(<console>:8)
at $anonfun$outer$1.apply(<console>:10)
at $anonfun$outer$1.apply(<console>:10)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:86)
There are two additional anonfuns... are they really needed? :-E
Let's look at the bytecode.
object ExtraClosure {
  val fail = (x: Any) => { throw new RuntimeException }
  List(1).foreach(fail)
}
We find, inside the (single) anonymous function:
public final scala.runtime.Nothing$ apply(java.lang.Object);
Code:
0: new #15; //class java/lang/RuntimeException
3: dup
4: invokespecial #19; //Method java/lang/RuntimeException."<init>":()V
7: athrow
public final java.lang.Object apply(java.lang.Object);
Code:
0: aload_0
1: aload_1
2: invokevirtual #27; //Method apply:(Ljava/lang/Object;)Lscala/runtime/Nothing$;
5: athrow
So it's actually not an extra closure after all. We have one method overloaded with two different return types, which is perfectly okay for the JVM, since it treats the return type as part of the method signature. Function1 is generic, so it has to have an apply that returns Object; but since the code you wrote specifically returns Nothing, the compiler also creates a method that returns the type you'd expect, and the generic apply simply delegates to it.
There are various ways around this, but none are without their flaws. This is the type of thing that JVMs are pretty good at eliding, however, so I wouldn't worry about it too much.
Edit: And of course in your second example, you used a def, and the anonfun is the class that wraps that def in a function object. That is of course needed since foreach takes a Function1. You have to generate that Function1 somehow.
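Roughly, that eta-expansion amounts to this sketch (illustrative only; the compiler actually generates a class named $anonfun$outer$1 rather than an anonymous subclass):
// the def is wrapped in a Function1 so it can be passed to foreach
Set(1).foreach(new Function1[Any, Unit] {
  def apply(x: Any): Unit = innerFail(x)
})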
I just wonder why there is no i++ to increase a number. As far as I know, languages like Ruby or Python don't support it because they are dynamically typed. So obviously we cannot write code like i++, because maybe i is a string or something else. But Scala is statically typed - the compiler can absolutely infer whether it is legal to put ++ after a variable.
So, why doesn't i++ exist in Scala?
Scala doesn't have i++ because it's a functional language, and in functional languages, operations with side effects are avoided (in a purely functional language, no side effects are permitted at all). The side effect of i++ is that i is now 1 larger than it was before. Instead, you should try to use immutable objects (e.g. val not var).
Also, Scala doesn't really need i++ because of the control flow constructs it provides. In Java and others, you often need i++ to construct while and for loops that iterate over arrays. However, in Scala, you can just say what you mean: for (x <- someArray) or someArray.foreach or something along those lines. i++ is useful in imperative programming, but when you get to a higher level it's rarely necessary (in Python, I've never once found myself needing it).
You're spot on that ++ could be in Scala; it's not there because it's not necessary and would just clog up the syntax. If you really need it, write i += 1, but because Scala calls for programming with immutables and rich control flow more often, you should rarely need to. You certainly could define it yourself, as operators are indeed just methods in Scala.
Of course you can have that in Scala, if you really want:
import scalaz._, Scalaz._

case class IncLens[S, N](lens: Lens[S, N], num: Numeric[N]) {
  def ++ = lens.mods(num.plus(_, num.one))
}

implicit def incLens[S, N: Numeric](lens: Lens[S, N]) =
  IncLens[S, N](lens, implicitly[Numeric[N]])

val i = Lens.lensu[Int, Int]((x, y) => y, identity)

val imperativeProgram = for {
  _ <- i++;
  _ <- i++;
  x <- i++
} yield x

def runProgram = imperativeProgram exec 0
And here you go:
scala> runProgram
res26: scalaz.Id.Id[Int] = 3
No need to resort to violence against variables.
Scala is perfectly capable of parsing i++ and, with a small modification to the language, could be made to modify a variable. But there are a variety of reasons not to.
First, it saves only one character, i++ vs. i += 1, which is not much of a saving to justify a new language feature.
Second, the ++ operator is widely used in the collections library, where xs ++ ys takes collections xs and ys and produces a new collection that contains both (see the example after the third point).
Third, Scala tries to encourage you, without forcing you, to write code in a functional way. i++ is a mutable operation, so it's inconsistent with the idea of Scala to make it especially easy. (Likewise with a language feature that would allow ++ to mutate a variable.)
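For reference, the collections meaning of ++ from the second point:
scala> List(1, 2) ++ List(3, 4)
res0: List[Int] = List(1, 2, 3, 4)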
Scala doesn't have a ++ operator because it is not possible to implement one in it.
EDIT: As just pointed out in response to this answer, Scala 2.10.0 can implement an increment operator through use of macros. See this answer for details, and take everything below as being pre-Scala 2.10.0.
Let me elaborate on this. I'll rely heavily on Java, since it actually suffers from the same problem, and it might be easier for people to understand if I use a Java example.
To start, it is important to note that one of the goals of Scala is that the "built-in" classes must not have any capability that could not be duplicated by a library. And, of course, in Scala an Int is a class, whereas in Java an int is a primitive -- a type entirely distinct from a class.
So, for Scala to support i++ for i of type Int, I should be able to create my own class MyInt also supporting the same method. This is one of the driving design goals of Scala.
Now, naturally, Java does not support symbols as method names, so let's just call it incr(). Our intent then is to try to create a method incr() such that i.incr() works just like i++.
Here's a first pass at it:
public class Incrementable {
    private int n;

    public Incrementable(int n) {
        this.n = n;
    }

    public void incr() {
        n++;
    }

    @Override
    public String toString() {
        return "Incrementable(" + n + ")";
    }
}
We can test it with this:
public class DemoIncrementable {
    static public void main(String[] args) {
        Incrementable i = new Incrementable(0);
        System.out.println(i);
        i.incr();
        System.out.println(i);
    }
}
Everything seems to work, too:
Incrementable(0)
Incrementable(1)
And, now, I'll show what the problem is. Let's change our demo program, and make it compare Incrementable to int:
public class DemoIncrementable {
    static public void main(String[] args) {
        Incrementable i = new Incrementable(0);
        Incrementable j = i;
        int k = 0;
        int l = 0;
        System.out.println("i\t\tj\t\tk\tl");
        System.out.println(i + "\t" + j + "\t" + k + "\t" + l);
        i.incr();
        k++;
        System.out.println(i + "\t" + j + "\t" + k + "\t" + l);
    }
}
As we can see in the output, Incrementable and int are behaving differently:
i                   j                   k   l
Incrementable(0)    Incrementable(0)    0   0
Incrementable(1)    Incrementable(1)    1   0
The problem is that we implemented incr() by mutating Incrementable, which is not how primitives work. Incrementable needs to be immutable, which means that incr() must produce a new object. Let's do a naive change:
public Incrementable incr() {
    return new Incrementable(n + 1);
}
However, this doesn't work:
i                   j                   k   l
Incrementable(0)    Incrementable(0)    0   0
Incrementable(0)    Incrementable(0)    1   0
The problem is that, while incr() created a new object, that new object was never assigned to i. There's no existing mechanism in Java -- or Scala -- that would allow us to implement this method with the exact same semantics as ++.
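The closest a plain Scala method can get is to return the new value and rely on the caller to reassign it explicitly, as in this sketch with a hypothetical Counter class:
case class Counter(n: Int) {
  def incr: Counter = Counter(n + 1) // no mutation: returns a fresh instance
}

var c = Counter(0)
c = c.incr // without the explicit `c =`, the new object is simply discarded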
Now, that doesn't mean it would be impossible for Scala to make such a thing possible. If Scala supported parameter passing by reference (see "call by reference" in this wikipedia article), like C++ does, then we could implement it!
Here's a fictitious implementation, assuming the same by-reference notation as in C++.
implicit def toIncr(Int &n) = {
  def ++ = { val tmp = n; n += 1; tmp }
  def prefix_++ = { n += 1; n }
}
This would either require JVM support or some serious mechanics on the Scala compiler.
In fact, Scala does something similar to what would be needed when it creates closures -- and one of the consequences is that the original Int becomes boxed, with a possibly serious performance impact.
For example, consider this method:
def f(l: List[Int]): Int = {
  var sum = 0
  l foreach { n => sum += n }
  sum
}
The code being passed to foreach, { n => sum += n }, is not part of this method. The method foreach takes an object of type Function1 whose apply method implements that little piece of code. That means { n => sum += n } is not only in a different method, it is in a different class altogether! And yet, it can change the value of sum just like a ++ operator would need to.
If we use javap to look at it, we'll see this:
public int f(scala.collection.immutable.List);
Code:
0: new #7; //class scala/runtime/IntRef
3: dup
4: iconst_0
5: invokespecial #12; //Method scala/runtime/IntRef."<init>":(I)V
8: astore_2
9: aload_1
10: new #14; //class tst$$anonfun$f$1
13: dup
14: aload_0
15: aload_2
16: invokespecial #17; //Method tst$$anonfun$f$1."<init>":(Ltst;Lscala/runtime/IntRef;)V
19: invokeinterface #23, 2; //InterfaceMethod scala/collection/LinearSeqOptimized.foreach:(Lscala/Function1;)V
24: aload_2
25: getfield #27; //Field scala/runtime/IntRef.elem:I
28: ireturn
Note that instead of creating an int local variable, it creates an IntRef on the heap (at 0), which boxes the int. The real int is inside IntRef.elem, as we see at 25. Let's see this same thing implemented with a while loop to make the difference clear:
def f(l: List[Int]): Int = {
  var sum = 0
  var next = l
  while (next.nonEmpty) {
    sum += next.head
    next = next.tail
  }
  sum
}
That becomes:
public int f(scala.collection.immutable.List);
Code:
0: iconst_0
1: istore_2
2: aload_1
3: astore_3
4: aload_3
5: invokeinterface #12, 1; //InterfaceMethod scala/collection/TraversableOnce.nonEmpty:()Z
10: ifeq 38
13: iload_2
14: aload_3
15: invokeinterface #18, 1; //InterfaceMethod scala/collection/IterableLike.head:()Ljava/lang/Object;
20: invokestatic #24; //Method scala/runtime/BoxesRunTime.unboxToInt:(Ljava/lang/Object;)I
23: iadd
24: istore_2
25: aload_3
26: invokeinterface #29, 1; //InterfaceMethod scala/collection/TraversableLike.tail:()Ljava/lang/Object;
31: checkcast #31; //class scala/collection/immutable/List
34: astore_3
35: goto 4
38: iload_2
39: ireturn
No object creation above, no need to get something from the heap.
So, to conclude, Scala would need additional capabilities to support an increment operator that could be defined by the user, since it avoids giving its own built-in classes capabilities not available to external libraries. One such capability is passing parameters by reference, but the JVM does not provide support for it. Scala does something similar to call by-reference, and to do so it uses boxing, which would seriously impact performance (something that would most likely come up with an increment operator!). In the absence of JVM support, therefore, it isn't very likely.
As an additional note, Scala has a distinct functional slant, privileging immutability and referential transparency over mutability and side effects. The sole purpose of call by-reference is to cause side effects on the caller! While doing so can bring performance advantages in a number of situations, it goes very much against the grain of Scala, so I doubt call by-reference will ever be part of it.
Other answers have already correctly pointed out that a ++ operator is neither particularly useful nor desirable in a functional programming language. I would like to add that since Scala 2.10, you can add a ++ operator, if you want to. Here is how:
You need an implicit macro that converts ints to instances of something that has a ++ method. The ++ method is "written" by the macro, which has access to the variable (as opposed to its value) on which the ++ method is called. Here is the macro implementation:
import scala.language.experimental.macros
import scala.reflect.macros.Context

trait Incrementer {
  def ++ : Int
}

implicit def withPp(i: Int): Incrementer = macro withPpImpl

def withPpImpl(c: Context)(i: c.Expr[Int]): c.Expr[Incrementer] = {
  import c.universe._
  val id = i.tree
  // builds the thunk () => i = i.$plus(1), i.e. () => i = i + 1
  val f = c.Expr[() => Unit](Function(
    List(),
    Assign(
      id,
      Apply(
        Select(
          id,
          newTermName("$plus")
        ),
        List(
          Literal(Constant(1))
        )
      )
    )
  ))
  reify(new Incrementer {
    def ++ = {
      val res = i.splice
      f.splice.apply
      res
    }
  })
}
Now, as long as the implicit conversion macro is in scope, you can write
var i = 0
println(i++) //prints 0
println(i) //prints 1
Rafe's answer is right about the rationale for why something like i++ doesn't belong in Scala. However, I have one nitpick: it's actually not possible to implement i++ in Scala without changing the language.
In Scala, ++ is a valid method, and no method implies assignment. Only = can do that.
Languages like C++ and Java treat ++ specially to mean both increment and assign. Scala treats = specially, and in an inconsistent way.
In Scala, when you write i += 1 the compiler first looks for a method called += on Int. It's not there, so next it does its magic on = and tries to compile the line as if it read i = i + 1. If you write i++, then Scala will call the method ++ on i and assign the result to... nothing. Because only = means assignment. You could write i ++= 1, but that kind of defeats the purpose.
The fact that Scala supports method names like += is already controversial and some people think it's operator overloading. They could have added special behavior for ++ but then it would no longer be a valid method name (like =) and it would be one more thing to remember.
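To illustrate (a sketch with a hypothetical implicit class, e.g. pasted into the REPL): you can give Int a ++ method, but the assignment still has to be spelled out with =.
implicit class PlusPlus(val i: Int) extends AnyVal {
  def ++ : Int = i + 1 // computes a new value; it cannot reassign the variable
}

var i = 0
i = i.++ // legal, but the assignment must still be written out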
Quite a few languages do not support the ++ notation, such as Lua. In languages that do support it, it is frequently a source of confusion and bugs, so its quality as a language feature is dubious; compared to the alternative of i += 1 or even just i = i + 1, saving a couple of characters is fairly pointless.
This is not at all relevant to the type system of the language. While it's true that most statically typed languages do offer it and most dynamically typed ones don't, that's a correlation and definitely not a cause.
Scala encourages an FP style, which i++ certainly does not fit.
The question to ask is why there should be such an operator, not why there shouldn't be. Would Scala be improved by it?
The ++ operator is single-purpose, and having an operator that can change the value of a variable can cause problems. It's easy to write confusing expressions, and even if the language defines what i = i + i++ means, for example, that's a lot of detailed rules to remember.
Your reasoning on Python and Ruby is wrong, by the way. In Perl, you can write $i++ or ++$i just fine. If $i turns out to be something that can't be incremented, you get a run-time error. It isn't in Python or Ruby because the language designers didn't think it was a good idea, not because they're dynamically typed like Perl.
You could simulate it, though. As a trivial example:
scala> case class IncInt(var self: Int = 0) { def ++ { self += 1 } }
defined class IncInt
scala> val i = IncInt()
i: IncInt = IncInt(0)
scala> i++
scala> i++
scala> i
res28: IncInt = IncInt(2)
Add some implicit conversions and you're good to go. However, this sort of changes the question into: why isn't there a mutable RichInt with this functionality?
As another answer suggests, the increment operator, as found in i++, was—
supposedly ... added to the B language [a predecessor of the C language] by Ken Thompson specifically because [it was] capable of translating directly to a single opcode once compiled
and not necessarily because such an operator is as useful to have as, say, general addition and subtraction. Although certain object-oriented languages (such as Java and C#) also have an increment operator (often borrowed from C), not all do (such as Ruby).