There are a few cases of source incompatibility with Scala 2.8.0. For example, creating an anonymous Seq once required defining the abstract method def elements: Iterator[A], which is now called def iterator: Iterator[A].
To me, a "brute force" solution is to create two branches that align to the different major scala versions.
Are there general techniques so that code like this will compile under both systems?
// Note: this code resembles techniques used by xml.NodeSeq
trait FooSeq extends Seq[Foo] {
  def internal: Seq[Foo]
  def elements = internal.elements
  def iterator = internal.iterator // Only compiles in 2.8;
                                   // need to remove for 2.7.x
}
There are a few cases where the usage is simply different and you must change your code. But in almost all cases, such as the elements code above, the 2.7 style is merely deprecated in 2.8, not gone altogether. If you're okay leaving your 2.8 users with deprecation warnings (that is, if they compile your code; otherwise you'll just have the warnings yourself), just implement the new features in terms of the old:
def iterator = internal.elements
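Putting that together, here is a sketch (untested against the actual 2.7/2.8 compilers) of the trait from the question written so it compiles under both versions: elements satisfies 2.7's abstract method, and 2.8's abstract iterator is implemented in terms of it, at the cost of deprecation warnings under 2.8:

trait FooSeq extends Seq[Foo] {
  def internal: Seq[Foo]
  override def elements = internal.elements // abstract in 2.7, deprecated but concrete in 2.8
  def iterator = internal.elements          // satisfies 2.8's abstract iterator; a harmless extra method in 2.7
}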
Otherwise, I would recommend what you call the brute force solution. Use a sufficiently clever VCS (Git, Bazaar, Mercurial) so that you don't actually have to write much code twice, and branch.
Related
Why is Scala designed with the following irritating form of boilerplate?
It would be convenient to write
def doStuffWithInts(ints: BaseIterable[Int]): Unit = ints foreach doStuffWithInt
for a common superclass BaseIterable of Iterable and ParIterable so that we can write both
val sequentialInts: Vector[Int] = getSomeHugeVector()
doStuffWithInts(sequentialInts)
and
val parInts: ParVector[Int] = getSomeHugeParVector()
doStuffWithInts(parInts)
Yet Scala forces us to copy and paste our doStuff method, once for Iterable and once for ParIterable. Why does Scala thrust such boilerplate on us by failing to provide a common superclass BaseIterable of both Iterable and ParIterable?
You can use IterableOnce but that would force you to get an Iterator which is always sequential.
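A sketch of what that looks like (reusing doStuffWithInt from the question): it compiles for both kinds of collections, but the iteration itself is sequential:

def doStuffWithInts(ints: IterableOnce[Int]): Unit =
  ints.iterator.foreach(doStuffWithInt) // sequential even when given a ParVector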
This is a conscious decision from the maintainers, you can read all the related discussions by starting here: https://github.com/scala/scala-parallel-collections/issues/101
The TL;DR is that the maintainers agree it is a bad idea to provide an abstraction over the two, mainly because parallel collections should not be used as general collections but rather as localized optimizations. They also point out how easy it would be to introduce errors if you could abstract over the two (as was the case in 2.12).
Now, if you insist on abstracting over the two, you may create your own typeclass.
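A minimal sketch of such a typeclass (Foreach is a hypothetical name, not a stdlib type; this assumes the scala-parallel-collections dependency on 2.13, and reuses doStuffWithInt from the question):

import scala.collection.parallel.ParIterable

trait Foreach[C[_]] {
  def foreach[A](ca: C[A])(f: A => Unit): Unit
}

object Foreach {
  // Sequential instance: delegates to the ordinary foreach
  implicit val iterableForeach: Foreach[Iterable] = new Foreach[Iterable] {
    def foreach[A](ca: Iterable[A])(f: A => Unit): Unit = ca.foreach(f)
  }
  // Parallel instance: keeps the parallel foreach of ParIterable
  implicit val parIterableForeach: Foreach[ParIterable] = new Foreach[ParIterable] {
    def foreach[A](ca: ParIterable[A])(f: A => Unit): Unit = ca.foreach(f)
  }
}

// The method from the question, written once against the typeclass
def doStuffWithInts[C[_]](ints: C[Int])(implicit F: Foreach[C]): Unit =
  F.foreach(ints)(doStuffWithInt)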
Finally, I would suggest looking at Future.traverse instead of parallel collections.
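A sketch of that alternative (again reusing doStuffWithInt; the parallelism now comes from the ExecutionContext rather than from the collection):

import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global

def doStuffWithIntsAsync(ints: Vector[Int]): Future[Unit] =
  // Each element is processed in its own Future, scheduled on the implicit ExecutionContext
  Future.traverse(ints)(i => Future(doStuffWithInt(i))).map(_ => ())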
It is recommended to turn on compiler flags like -Wvalue-discard or -Wunused:implicits, either explicitly or implicitly through the use of sbt-tpolecat. Sometimes, however, you need to work around those warnings, but in a way that keeps the workaround explicit, since we generally consider such things bugs; that was the reason for using the compiler flags in the first place.
One somewhat common workaround for those cases is the following void function (courtesy of Rob Norris):
@inline final def void(args: Any*): Unit = (args, ())._2
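For illustration, a call site would look like this (send is a made-up method standing in for any call whose result we do not care about):

def send(msg: String): Int = msg.length // a result we want to discard

def handler(): Unit =
  void(send("hello")) // explicitly discarded, no -Wvalue-discard warning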
However, this function has two problems:
It incurs a couple of unnecessary extra allocations, namely the Seq for the varargs and the Tuple.
It is not part of the stdlib, and adding it to every project is somewhat tedious.
Is there any other good workaround that works out of the box?
2.13
Since Scala 2.13 there are two ways to disable both warnings.
Assign the values to a non-existent variable:
def data: String = "some value" // stand-in definition: the original snippet assumes some data method is in scope

def testFix1()(implicit i: Int): Unit = {
  val _ = i    // marks the implicit as used, silencing -Wunused:implicits
  val _ = data // binds the value, silencing -Wvalue-discard
}
Type-ascribe the expression to Unit:
def testFix2()(implicit i: Int): Unit = {
  i: Unit    // explicit ascription, no -Wunused:implicits warning
  data: Unit // explicit, intentional value discard, no -Wvalue-discard warning
}
We do not have a formal reference or proof, but it is believed that the second option should be transparent, in the sense that it should not have any impact at runtime, such as extra allocations or unwanted code generation.
3.0
As far as we know, the same tricks should work on Scala 3 (aka Dotty).
2.12
???
Or is this even relevant?
What I have in mind is using ClassTag or TypeTag context bounds, like so:
scala> import scala.reflect.runtime.universe.TypeTag

scala> def f[T : TypeTag](ls: List[T]): String = {
         ???
       }

results in:
f: [T](ls: List[T])(implicit evidence$1: reflect.runtime.universe.TypeTag[T])String
As you can see, the TypeTag is seen by the compiler which adds an implicit argument. Is there an equivalent in scala.meta? How will this work, and will there be any changes in the way erasure is handled?
At the moment scala.meta does not provide runtime introspection; however, that is planned for future releases. The APIs would be similar to scala.reflect (but in terms of scala.meta, e.g. different abstract syntax trees and no exposed compiler internals), and I really hope that the end user won't see much difference.
So the functionality of ClassTag/TypeTag is not likely to disappear. Most probably, scala.meta will use a bridge (paradise) to get access to scalac internals (and that involves scala.reflect).
Also note that scala.reflect will be supported in the Scala 2.x branch, but not in Dotty.
I am trying to understand why one would use untyped actors over typed actors.
I have read several posts on this, some of them below:
What is the difference between Typed and UnTyped Actors in Akka? When to use what?
http://letitcrash.com/post/19074284309/when-to-use-typedactors
I am interested in understanding why untyped actors are better in the context of:
a web server,
a distributed architecture,
scalability,
interoperability with applications written in other programming languages.
I am aware that untyped actors are better in the context of FSM because of the become/unbecome functionality.
I can see the possibilities of untyped actors in a load balancer, as it does not have to be aware of the contents of the messages, but can just forward them to other actors. However, this could be implemented with a typed actor as well.
Can someone come up with a few use cases in the areas mentioned above, where untyped actors are "better"?
There is a general disadvantage to typed actors: they are hard to extend. When you use normal traits, you can easily combine them to build an object that implements both interfaces:
trait One {
  def callOne(arg: String): Unit
}

trait Two {
  def callTwo(arg: Double): Unit
}

trait Both extends One with Two
The Both trait supports the two calls combined from the two traits.
If you use the actor approach, processing messages instead of making direct calls, you are still able to extend interfaces, but you pay for it with type safety:
trait One {
  // declared over Any so the handlers can be combined below
  val receiveOne: PartialFunction[Any, Unit] = {
    case msg: String => ()
  }
}

trait Two {
  val receiveTwo: PartialFunction[Any, Unit] = {
    case msg: Double => ()
  }
}

trait Both extends One with Two {
  val receive: PartialFunction[Any, Unit] = receiveOne orElse receiveTwo
}
The receive value in the Both trait combines the two partial functions. Note that they all have to be declared as PartialFunction[Any, Unit]: the first matches only Strings, the second only Doubles, and the only common supertype of their inputs is Any. So the extended version has to use Any as its argument and becomes effectively untyped. The flaw is in the Scala type system: it supports type intersection with the with keyword, but does not support union types. You cannot write Double or String.
Typed actors lose this easy extensibility. Actors shift type checks to a contravariant position, and extending them requires union types; you can see how those work in the Ceylon programming language.
It is not that untyped and typed actors have different spheres of application. All of the functionality in question can be expressed in terms of either; the choice is more about methodology and convenience.
Typing allows you to catch some errors before you get to unit testing, at the cost of boilerplate for auxiliary protocol declarations. In the example above you would have to declare the union type explicitly:
trait Protocol
final case class First(message : String) extends Protocol
final case class Second(message : Double) extends Protocol
And you lose easy callback combination: there is no orElse method for you, only a hand-written dispatch:
val receive : PartialFunction[Protocol, Unit] = {
case First(msg) => receiveOne(msg)
case Second(msg) => receiveTwo(msg)
}
And if you would like to add a bit of new functionality with a trait Three, you would be busy rewriting that boilerplate code.
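For instance, adding a third message type (receiveThree is a hypothetical handler assumed to live on Three) means rewriting the whole dispatch:

final case class Third(message: Boolean) extends Protocol

val receive: PartialFunction[Protocol, Unit] = {
  case First(msg)  => receiveOne(msg)
  case Second(msg) => receiveTwo(msg)
  case Third(msg)  => receiveThree(msg) // every new case has to be added by hand
}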
Akka provides some useful predefined enhancements for actors. They add new functionality either by mixin (e.g. the receive pipeline) or by delegation (e.g. the reliable proxy). Proxy patterns are used quite a lot in Akka applications, and they change the protocol on the fly, adding control commands to it. That could not be done as easily with typed actors, so instead of the predefined utilities you would be forced to write your own implementations. And the utilities you would have to forsake are not limited to FSM.
It is up to you to decide whether the typing improvement is worth the increased work. No one can give precise advice without a deep understanding of your project.
Typed actors are very new; they're explicitly marked as experimental and not ready for production use.
Warning
This module is currently experimental in the sense of being the subject of active research. This means that API or semantics can change without warning or deprecation period and it is not recommended to use this module in production just yet—you have been warned.
(as of the time this is written)
I'd like to point out a confusion that seems to have surfaced here.
Casper, the "typed actors" you refer to are deprecated and will be even removed eventually, I have explained in much detail why that's the case here: Akka typed actors in Java. The link you found with Viktor Klang answering, is talking about Akka 1.2 which is "ancient" as of now (when 2.4 is the stable release).
Having that said, there is a new experimental module called "Akka Typed", to which Daenyth is referring to in his reply. That module may indeed become a new core abstraction, however it's not yet ready for prime time.
I recommend you give the typed modules: Akka Streams (the latest addition to Akka, which will become not experimental very soon) and
Akka Typed to see how Actors may become typed in the near future (perhaps). Then, actually look again at Actors and see which model best works for your use case. Untyped Actors have the advantage of being a true and tried mature module / model, so you can really trust them in that sense, if you want more types - Akka Streams has you covered in many cases, but not all, so then you may consider the experimental module (but be aware, we most likely will change the Typed API while maturing it).
I am working on a library which depends on Scala 2.9, but only for a minor feature. I would like to offer a version compatible with 2.8, but I don't want to maintain two code branches. Since I'm using SBT, I would like to benefit from its cross-compilation features.
However, I don't know if there is a way to provide an equivalent of conditional compilation, to include a piece of code only if Scala 2.9 is used. Reflection could be an option (but how?).
Edit: the feature I am using from 2.9 is the new sys package object.
I got it with reflection. So if I want to get the sys.SystemProperties, I can do:
try {
  // Load the synthetic class behind the `sys` package object
  val k = java.lang.Class.forName("scala.sys.package$")
  // Grab its singleton instance and invoke `props` reflectively
  val instance = k.getField("MODULE$").get(null)
  val m = k.getMethod("props")
  val props = m.invoke(instance) // a scala.sys.SystemProperties
  // etc.
} catch {
  case _: Exception =>
    throw new UnsupportedOperationException("Only available with Scala 2.9")
}
But it is so boring and ugly that I think I will drop those features...
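As an aside: more recent versions of sbt (long after this question's 2.8/2.9 era) support version-specific source directories when cross-building, which gives conditional compilation at the file level. A minimal, untested build.sbt sketch; the directory names in the custom split are just a convention:

// sbt picks up src/main/scala-2.12, src/main/scala-2.13, etc. automatically
crossScalaVersions := Seq("2.12.18", "2.13.12")

// A hand-rolled split for finer control:
Compile / unmanagedSourceDirectories += {
  val base = (Compile / scalaSource).value.getParentFile
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, n)) if n >= 13 => base / "scala-2.13+"
    case _                       => base / "scala-2.13-"
  }
}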
Read this blog post, which describes how to do it with metaprogramming:
http://michid.wordpress.com/2008/10/29/meta-programming-with-scala-conditional-compilation-and-loop-unrolling/