'Happens before' on Scala constructors: final fields

The Java specification says that classes having only final fields have their constructors in a happens-before relation with any thread reading any reference to that object: in other words, it is not possible for the application to see a partially constructed object.
Scala hacks initialization by extracting it into separate methods in order to ensure that 'primary constructor vals' are set before any initializing code in superclasses. This is at least one reason why a Scala final val doesn't always (or ever?) translate to a Java final field.
Is there a way to achieve this in Scala, i.e. to ensure the happens-before relation between a class's clients and its constructor?
One that is a reasonably stable feature of the compiler?
One that doesn't amount to writing the class in Java?

Scala hacks initialization by extracting it to separate methods in order to ensure that 'primary constructor vals' are set before any initializing code in superclasses.
In Java that doesn't break the final-field guarantees as long as this doesn't escape from the constructor.
("Doesn't escape" means that the constructor's code doesn't store this in a variable/collection/etc. that can be read by another thread.)
Also, because the JMM is defined for the Java language and not for the JVM, I'm afraid it only works in languages that compile to Java code.

Related

Special grammar in scala

I am very new to Scala and Spark, and I found a strange piece of grammar in the Scala code inside the Apache Beam project that I can't understand.
Here is the strange place:
JavaDStream<Metadata> metadataDStream = mapWithStateDStream.map(new Tuple2MetadataFunction());
// register ReadReportDStream to report information related to this read.
new ReadReportDStream(metadataDStream.dstream(), id, getSourceName(source, id), stepName)
.register();
From the above code, you can see that inside the constructor of ReadReportDStream, the first parameter is
metadataDStream.dstream()
If we go inside the dstream() method, you will see the following code:
class JavaDStream[T](val dstream: DStream[T])(implicit val classTag: ClassTag[T])
extends AbstractJavaDStreamLike[T, JavaDStream[T], JavaRDD[T]] {
I am wondering why it uses "metadataDStream.dstream()" in the constructor instead of "metadataDStream.dstream"? What does the "()" do?
It's mostly a question of convention. Methods with empty parameter lists are evaluated for their side-effects. Methods without parameters are assumed to be purely functional, and free of side-effects. You can read more about that here - https://docs.scala-lang.org/style/method-invocation.html (Arity-0 section)
So in this case, there are probably side effects in metadataDStream.dstream(). However, writing it as metadataDStream.dstream wouldn't be a syntactic error.
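A small illustration of the convention (the Counter class here is hypothetical):
class Counter {
  private var n = 0
  def increment(): Unit = n += 1 // side-effecting: declared with ()
  def current: Int = n // pure accessor: declared without parentheses
}
val c = new Counter
c.increment() // side-effecting call, so the parens stay
println(c.current) // pure read, so no parens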

How to generate top-level class/object with scala macro

As we know, it is easy to generate a local class inside a method with a Scala macro.
But I'd like to know whether it is possible to generate a top-level class/object.
If the answer is yes, how can I avoid generating the same class twice?
My Scala version is 2.11.
Top-level expansions must retain the number of annottees, their flavors and their names, with the only exception that a class might expand into a same-named class plus a same-named module, in which case they automatically become companions as per previous rule.
https://docs.scala-lang.org/overviews/macros/annotations.html
So you can transform top-level
@annot
class A
into
class A
object A
or
@annot
object A
into
class A
object A
Also there existed c.introduceTopLevel but it was removed.
Context.introduceTopLevel. The Context.introduceTopLevel API, which used to be available in early milestone builds of Scala 2.11.0 as a stepping stone towards type macros, was removed from the final release, because type macros were rejected for inclusion in Scala and discontinued in macro paradise.
https://docs.scala-lang.org/overviews/macros/changelog211.html
Scala Macro: Define Top Level Object
introduceTopLevel has provided a long-requested functionality of generating definitions that can be used outside macro expansions. However, metaprogrammers have quickly discovered that introduceTopLevel is dangerous. Top-level scope is a resource shared between the typechecker and user metaprograms, so mutating it with introduceTopLevel can lead to compilation order problems. For example, if one file in a compilation run relies on definitions created by a macro expansion performed in another file, compiling the former before the latter may lead to unexpected compilation errors.
https://infoscience.epfl.ch/record/226166/files/EPFL_TH7159.pdf (section 5.2.3 Conclusion)
If the companion you want to generate already exists, then the companion you return from the macro annotation's macroTransform will replace the original. You don't need to worry that there will be two "companions"; the compiler takes care of that. But normally you do match on whether there is only the annottee or the annottee plus an existing companion.
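For illustration, a minimal sketch of such a macro annotation (it assumes the macro paradise compiler plugin on Scala 2.11; the name withCompanion and the generated member are made up):
import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.whitebox

class withCompanion extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro WithCompanionMacro.impl
}

object WithCompanionMacro {
  def impl(c: whitebox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    annottees.map(_.tree) match {
      // only the class was annotated: emit it plus a fresh same-named companion
      case (cls @ ClassDef(_, name, _, _)) :: Nil =>
        val companion = q"object ${name.toTermName} { def generated = true }"
        c.Expr[Any](q"..${List(cls, companion)}")
      // class plus existing companion: whatever we return replaces the original pair
      case (cls: ClassDef) :: (obj: ModuleDef) :: Nil =>
        c.Expr[Any](q"..${List(cls, obj)}")
      case _ =>
        c.abort(c.enclosingPosition, "@withCompanion must annotate a class")
    }
  }
}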

Common in scala's Array and List

I'm new to Scala (just started learning it), but have noticed something strange: there are classes Array and List, and they both have methods/functions such as foreach, forall, map, etc. But none of these methods seem to be inherited from some special class (trait). From a Java perspective, if Array and List provide some contract, that contract has to be declared in an interface and partially implemented in abstract classes. Why does each type (Array and List) in Scala declare its own set of methods? Why don't they have some common type?
But none of these methods seem to be inherited from some special class (trait)
That is simply not true.
If you open the scaladoc and look up, say, the .map method of Array and List and then click on it, you'll see where it is defined:
For List, map is defined in the collections hierarchy that List inherits (TraversableLike in the 2.x library).
For Array, map is provided by ArrayOps, to which an Array is implicitly converted.
See also the info about Traversable and Iterable, both of which define most of the contracts in Scala collections (though some collections may re-implement methods defined in Traversable/Iterable, e.g. for efficiency).
You may also want to look at the relations between collections in general (scroll to the two diagrams).
I'll extend om-nom-nom's answer here.
Scala doesn't have its own Array -- that's the Java array, and the Java array doesn't implement any interface. In fact, it isn't even a proper class, if I'm not mistaken, and it certainly is implemented through special mechanisms at the bytecode level.
In Scala, however, everything is a class -- an Int (Java's int) is a class, and so is Array. But in these cases, where the actual class comes from Java, Scala is limited by the type hierarchy provided by Java.
Now, going back to foreach, map, etc.: they are not methods present in Java. However, Scala allows one to add implicit conversions from one class to another and, through that mechanism, add methods. When you call arr.foreach(println), what is really done is Predef.refArrayOps(arr).foreach(println), which means foreach belongs to the ArrayOps class -- as you can see in the scaladoc documentation.
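A small demonstration of that desugaring (Scala 2.12-style; primitive arrays use specialized variants such as intArrayOps instead of refArrayOps):
val words = Array("a", "b", "c")
words.foreach(println) // what you write
Predef.refArrayOps(words).foreach(println) // what the compiler effectively inserts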

Why does Array.fill take an implicit scala.reflect.ClassManifest?

So I'm playing with writing a battlecode player in Scala. In battlecode certain classes are disallowed and there is a runtime exception if you ever try to access them. When I use the Array.fill function I get a message from the battlecode server saying [java] Illegal class: scala/reflect/Manifest$. This is the offending line:
val g_score = Array.fill[Int](rc.getMapWidth(), rc.getMapHeight())(0)
The method takes an implicit ClassManifest argument which has the following documentation:
A ClassManifest[T] is an opaque descriptor for type T. It is used by the compiler
to preserve information necessary for instantiating Arrays in those cases where
the element type is unknown at compile time.
But I do know the type of the array elements at compile time; as shown above, I explicitly state that they will be Int. Is there a way to avoid this? To work around it, I've written my own version of Array.fill, which seems like a hack. As an aside, does Scala have real 2D arrays? Array.fill seems to return an Array[Array[T]], which is the only way I found to write my own. This also seems inelegant.
Edit: Using Scala 2.9.1
For background information, see this related question: What is a Manifest in Scala and when do you need it?. In this answer, you will find an explanation why manifests are needed for arrays.
In short: Although the JVM uses type erasure, arrays are an exception and need a manifest. Since you could compile your code, that manifest was found (manifests are always available for proper types). Your error occurs at runtime.
I don't know the details of the battlecode server, but there are two possibilities: either you are running your compiled classes against a binary-incompatible version of Scala (a difference in major version, e.g. compiled with Scala 2.9 while the server uses 2.10), or the server doesn't even have the scala-library.jar on its class path.
As said in the comment, manifests are deprecated in Scala 2.10 and replaced by ClassTag.
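For illustration, the 2.10+ equivalent of that implicit evidence, wrapped in a hypothetical generic helper:
import scala.reflect.ClassTag

def fillGrid[T: ClassTag](w: Int, h: Int)(elem: T): Array[Array[T]] =
  Array.fill(w, h)(elem) // the ClassTag lets Array.fill instantiate the nested arrays

val grid = fillGrid(3, 4)(0) // Array[Array[Int]], all zeros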
EDIT: So it seems the class loader is artificially restricting the allowed classes. My suggestion would be: add a helper Java class. You can easily mix Java and Scala code. If it's just about the Int array instantiation, you could provide something like:
public class Helper {
    public static int[][] makeArray(int d1, int d2) { return new int[d1][d2]; }
}
(hope that's valid java code, a bit rusty)
Also, have you tried to create the outer array with new Array[Array[Int]](d1), and then iterate to create the inner arrays?
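That avoids the implicit entirely, since no Manifest is needed when the element type is concrete; a sketch, reusing rc from the question:
val g_score = new Array[Array[Int]](rc.getMapWidth())
var i = 0
while (i < g_score.length) {
  g_score(i) = new Array[Int](rc.getMapHeight()) // Int arrays are zero-initialized
  i += 1
}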

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed into an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource
application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if your all inter-module dependencies exist, are unique, and well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
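To make the self-type idea concrete, here is a minimal sketch of two of the modules described above (the method bodies are illustrative assumptions):
trait DataSource { def query(sql: String): String }

trait ProductionDataSource extends DataSource {
  def query(sql: String) = "production result for: " + sql
}

trait Persistence { this: DataSource => // anything mixing in Persistence must also be a DataSource
  def save(record: String): Unit = println(query("INSERT " + record))
}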
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource
application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction along multiple axes. On the plus side, you usually have only one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration "XML" can be a Scala script instead. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different from:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
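For illustration, a hypothetical startupScript.scala along those lines (the DataSource names come from the answer above; com.example.MyApp.start is made up):
// run with: scala -cp first.jar:second.jar startupScript.scala prod
val useProd = args.headOption.contains("prod")

val dataSource: DataSource =
  if (useProd) new ProductionDataSource {} else new TestDataSource {}

com.example.MyApp.start(dataSource) // hand the chosen component to the application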
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}
// Forwards every Module method to a delegate chosen at runtime.
trait DelegatedModule extends Module {
  var delegate: Module = _ // must be assigned before any call
  def foo = delegate.foo
}
class Impl extends Module {
  def foo = 1
}
// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware: the downside is that this is more verbose, and you have to be careful about initialization order when you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing all the possible combinations as separate cases.
Lift has something along those lines built in. It's mostly done in Scala code, but you get some runtime control: http://www.assembla.com/wiki/show/liftweb/Dependency_Injection