How is it that .value can be called on a SettingKey or TaskKey?

One can write something like
(managedClasspath in Compile).value
to obtain the value of managedClasspath in the Compile configuration.
The type of (managedClasspath in Compile) is still an sbt.TaskKey (because we call the in method with a ConfigKey).
There is, however, no value method on SettingKey or TaskKey, and I can't find any implicit class that provides such a method. So how come it exists? Is this some magical macro voodoo?

It's a bit of both; there are a few components at work:
In sbt, any *XYZKey[_] can be converted into an appropriate Initialize[_] instance via an implicit. This, by default, is an initializer that reads the existing value at the key and returns it.
The sbt.std.MacroValue[T] type is a compile-time only class which holds something that can have .value called on it: http://www.scala-sbt.org/0.13.5/api/index.html#sbt.std.MacroValue. We use this to track the underlying instances in the macro and denote that they have special significance (i.e. we have to rework the code such that we wait for the value to exist before using it).
The sbt.Def object has a set of implicits called macroValueXYZ which lift Initialize[_] instances into the macro API.
So, as you can see, it's a bit of black magic through our internals to get there. We'll have to look into a better way of documenting this API in scaladoc.
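For illustration (not part of the original answer), a minimal build.sbt sketch of where that machinery applies: .value only compiles inside a setting/task macro such as := or Def.task, and listCompileClasspath is a made-up key using the sbt 0.13 syntax from the question.

// Made-up task that depends on managedClasspath in Compile
lazy val listCompileClasspath = taskKey[Unit]("Prints the managed compile classpath")

listCompileClasspath := {
  // The := macro rewrites this block; .value marks (managedClasspath in Compile)
  // as a dependency whose result is available once this task runs
  val cp = (managedClasspath in Compile).value
  streams.value.log.info(cp.map(_.data).mkString(", "))
}

// Outside a macro such as := or Def.task, calling .value does not compile.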

Related

How to generate top-level class/object with scala macro

As we know, it is easy to create an inner class inside a method with a Scala macro.
But I'd like to know: is it possible to generate a top-level class/object?
If the answer is yes, then how do I avoid generating the same class twice?
My Scala version is 2.11.
Top-level expansions must retain the number of annottees, their flavors and their names, with the only exception that a class might expand into a same-named class plus a same-named module, in which case they automatically become companions as per previous rule.
https://docs.scala-lang.org/overviews/macros/annotations.html
So you can transform a top-level
@annot
class A
into
class A
object A
or
@annot
object A
into
class A
object A
There also used to be c.introduceTopLevel, but it was removed.
Context.introduceTopLevel. The Context.introduceTopLevel API, which used to be available in early milestone builds of Scala 2.11.0 as a stepping stone towards type macros, was removed from the final release, because type macros were rejected for inclusion in Scala and discontinued in macro paradise.
https://docs.scala-lang.org/overviews/macros/changelog211.html
Scala Macro: Define Top Level Object
introduceTopLevel has provided a long-requested functionality of generating definitions that can be used outside macro expansions. However, metaprogrammers have quickly discovered that introduceTopLevel is dangerous. Top-level scope is a resource shared between the typechecker and user metaprograms, so mutating it with introduceTopLevel can lead to compilation order problems. For example, if one file in a compilation run relies on definitions created by a macro expansion performed in another file, compiling the former before the latter may lead to unexpected compilation errors.
https://infoscience.epfl.ch/record/226166/files/EPFL_TH7159.pdf (section 5.2.3 Conclusion)
If the companion you want to generate already exists, then the companion you return from the macro annotation's macroTransform will replace the original. You don't need to worry that there will be two "companions"; the compiler takes care of that. Normally, though, you pattern-match on whether there is only the annottee or the annottee plus an existing companion.
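A minimal sketch of such a macro annotation, assuming Scala 2.11 with the macro paradise plugin and the macro defined in a separately compiled module; @companion, CompanionMacro and the generated member are all made-up names.

import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.whitebox

// Made-up annotation: expands an annotated class into the class plus a companion object
class companion extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro CompanionMacro.impl
}

object CompanionMacro {
  def impl(c: whitebox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    annottees.map(_.tree) match {
      // Only the class was annotated: emit it together with a freshly generated companion
      case (cls @ ClassDef(_, name, _, _)) :: Nil =>
        val companion = q"""object ${name.toTermName} { def generatedBy: String = "companion macro" }"""
        c.Expr[Any](q"$cls; $companion")
      // A companion already exists: whatever is returned here replaces both annottees
      case (cls: ClassDef) :: (obj: ModuleDef) :: Nil =>
        c.Expr[Any](q"$cls; $obj")
      case _ =>
        c.abort(c.enclosingPosition, "@companion can only annotate a class")
    }
  }
}

With this in place, @companion class A expands into class A plus a generated object A, and annotating a class that already has a companion simply re-emits both.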

The relationship between Type Symbol and Mirror of Scala reflection

Scala reflection is really complicated. It involves type symbols and mirrors. Could you tell me the relationship between them?
When working with the Scala Reflection API you encounter a lot more types than what you're used to if you've used the Java Reflection API. Given an example scenario where you start with a String containing the fully qualified name of a class, these are the types you are likely to encounter:
Universe: Scala supports both runtime and compile-time reflection. You choose what kind of reflection you're doing by importing from the corresponding universe. For runtime reflection, this corresponds to the scala.reflect.runtime package; for compile-time reflection it corresponds to the scala.reflect.macros package. This answer focuses on the former.
As in Java, you typically start any reflection by choosing which ClassLoader's classes you want to reflect on. Scala provides a shortcut for using the ClassLoader of the current class: scala.reflect.runtime.currentMirror; this gives you a Mirror (more on mirrors later). Many JVM applications use just a single class loader, so this is a common entry point for the Scala Reflection API. Since you're importing from runtime you're now in that Universe.
Symbols: Symbols contain static metadata about whatever you want to reflect. This includes anything you can think of: is this thing a case class, is it a field, is it a class, what are the type parameters, is it abstract, etc. You may not query anything that might depend on the current lexical scope, for example what members a class has. You may also not interact with the thing you reflect on in any way (like accessing fields or calling methods). You can just query metadata.
Lexical scope is everything you can "see" at the place you're doing reflection, excluding implicit scope (see this SO answer for a treatment of different scopes). How can members of a class vary with lexical scope? Imagine an abstract class with a single def foo: String. The name foo might be bound to a def in one context (giving you a MethodSymbol should you query for it) or it could be bound to a val in another context (giving you a TermSymbol). When working with Symbols it's common to have to state explicitly what kind of symbol you expect; you do this through the methods .asTerm, .asMethod, .asClass, etc.
Continuing the String example we started with. You use the Mirror to derive a ClassSymbol describing the class: currentMirror.staticClass(myString).
Types: A Type lets you query information about the type the symbol refers to in the current lexical context. You typically use Types for two things: querying what vars, vals and defs there are, and querying type relationships (e.g. is this type a subclass of that type). There are two ways of getting hold of a Type: either through a TypeSymbol (ClassSymbol is a TypeSymbol) or through a TypeTag.
Continuing the example you would invoke the .toType method on the symbol you got to get the Type.
Scopes: When you ask a Type for .members or .decls—this is what gives you terms (vars and vals) and methods—you get a list of Symbols of the members in the current lexical scope. This list is kept in a type called MemberScope; it's just a glorified List[Symbol].
In our example with the abstract class above, this list would contain a TermSymbol or a MethodSymbol for the name foo depending on the current scope.
Names: Names come in two flavors: TermName and TypeName. A Name is just a wrapper around a String; the flavor tells you what kind of thing the Name refers to.
Mirrors: Finally mirrors are what you use to interact with "something". You typically start out with a Symbol and then use that symbol to derive symbols for the methods, constructors or fields you want to interact with. When you have the symbols you need, you use currentMirror to create mirrors for those symbols. Mirrors lets you call constructors (ClassMirror), access fields (FieldMirror) or invoke methods (MethodMirror). You may not use mirrors to query metadata about the thing being reflected.
So putting an example together reflecting the description above, this is how you would search for fields, invoke constructors and read a val, given a String with the fully qualified class name:
// Assumes a class com.example.Foo with a no-argument constructor and a val foo: String
import scala.reflect.runtime.universe
import com.example.Foo

// Do runtime reflection on classes loaded by the current ClassLoader
val currentMirror: universe.Mirror = scala.reflect.runtime.currentMirror
// Use symbols to navigate to pick out the methods and fields we want to invoke
// Notice explicit symbol casting with the `.as*` methods.
val classSymbol: universe.ClassSymbol = currentMirror.staticClass("com.example.Foo")
val constructorSymbol: universe.MethodSymbol = classSymbol.primaryConstructor.asMethod
val fooSymbol: Option[universe.TermSymbol] = classSymbol.toType.members.find(_.name.toString == "foo").map(_.asTerm)
// Get mirrors for performing constructor and field invocations
val classMirror: universe.ClassMirror = currentMirror.reflectClass(classSymbol)
val fooInstance: Foo = classMirror.reflectConstructor(constructorSymbol).apply().asInstanceOf[Foo]
val instanceMirror: universe.InstanceMirror = currentMirror.reflect(fooInstance)
// Do the actual invocation
val fooValue: String = instanceMirror.reflectField(fooSymbol.get).get.asInstanceOf[String]
println(fooValue) // Prints the value of the val "foo" of the object "fooInstance"

Why does Array.fill take an implicit scala.reflect.ClassManifest?

So I'm playing with writing a battlecode player in Scala. In battlecode certain classes are disallowed and there is a runtime exception if you ever try to access them. When I use the Array.fill function I get a message from the battlecode server saying [java] Illegal class: scala/reflect/Manifest$. This is the offending line:
val g_score = Array.fill[Int](rc.getMapWidth(), rc.getMapHeight())(0)
The method takes an implicit ClassManifest argument which has the following documentation:
A ClassManifest[T] is an opaque descriptor for type T. It is used by the compiler to preserve information necessary for instantiating Arrays in those cases where the element type is unknown at compile time.
But I do know the type of the array elements at compile time; as shown above, I explicitly state that they will be Int. Is there a way to avoid this? As a workaround I've written my own version of Array.fill. This seems like a hack. As an aside, does Scala have real 2D arrays? Array.fill seems to return an Array[Array[T]], which is the only way I found to write my own. This also seems inelegant.
Edit: Using Scala 2.9.1
For background information, see this related question: What is a Manifest in Scala and when do you need it?. In that answer you will find an explanation of why manifests are needed for arrays.
In short: Although the JVM uses type erasure, arrays are an exception and need a manifest. Since you could compile your code, that manifest was found (manifests are always available for proper types). Your error occurs at runtime.
I don't know the details of the battlecode server, but there are two possibilities: either you are running your compiled classes with a binary-incompatible version of Scala (a difference in major version, e.g. compiled with Scala 2.9 while the server uses 2.10), or the server doesn't even have the scala-library.jar on its class path.
As said in the comment, manifests are deprecated in Scala 2.10 and replaced by ClassTag.
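To illustrate that point (an example not from the original answer, shown with the 2.10+ ClassTag; on 2.9 the evidence would be a ClassManifest): under erasure a generic method cannot allocate an array unless the element type is carried along as implicit evidence.

import scala.reflect.ClassTag

// Without the ClassTag bound the compiler could not know whether to allocate
// an int[], a String[], etc., so Array.fill demands the implicit evidence.
def makeArray[T: ClassTag](n: Int, elem: T): Array[T] = Array.fill(n)(elem)

makeArray(3, 0)     // backed by an int[]
makeArray(3, "x")   // backed by a String[]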
EDIT: So it seems the class loader is artificially restricting the allowed classes. My suggestion would be: Add a helper Java class. You can easily mix Java and Scala code. If it's just about the Int-Array instantiation, you could provide something like:
public class Helper {
    public static int[][] makeArray(int d1, int d2) { return new int[d1][d2]; }
}
(hope that's valid java code, a bit rusty)
Also, have you tried to create the outer array with new Array[Array[Int]](d1), and then iterate to create the inner arrays?
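For completeness, a hedged Scala sketch of that manual approach (fillIntGrid is a made-up name): because the element types are the concrete Int and Array[Int], no Manifest is needed at all.

// Build the 2D array by hand; new Array[Int](n) and new Array[Array[Int]](n)
// have statically known element types, so no implicit Manifest is required.
def fillIntGrid(d1: Int, d2: Int)(value: Int): Array[Array[Int]] = {
  val outer = new Array[Array[Int]](d1)
  var i = 0
  while (i < d1) {
    val inner = new Array[Int](d2)
    java.util.Arrays.fill(inner, value)
    outer(i) = inner
    i += 1
  }
  outer
}

// In the question this would be fillIntGrid(rc.getMapWidth(), rc.getMapHeight())(0)
val g_score = fillIntGrid(10, 10)(0)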

Scala Case Class Map Expansion

In Groovy one can do:
class Foo {
Integer a,b
}
Map map = [a:1,b:2]
def foo = new Foo(map) // map expanded, object created
I understand that Scala is not, in any sense of the word, Groovy, but I am wondering if map expansion in this context is supported.
Simplistically, I tried and failed with:
case class Foo(a:Int, b:Int)
val map = Map("a"-> 1, "b"-> 2)
Foo(map: _*) // no dice, always applied to first property
A related thread that shows possible solutions to the problem.
Now, from what I've been able to dig up, as of Scala 2.9.1 at least, reflection in regard to case classes is basically a no-op. The net effect then appears to be that one is forced into some form of manual object creation, which, given the power of Scala, is somewhat ironic.
I should mention that the use case involves the servlet request parameters map. Specifically, using Lift, Play, Spray, Scalatra, etc., I would like to take the sanitized params map (filtered via routing layer) and bind it to a target case class instance without needing to manually create the object, nor specify its types. This would require "reliable" reflection and implicits like "str2Date" to handle type conversion errors.
Perhaps in 2.10 with the new reflection library, implementing the above will be cake. Only 2 months into Scala, so just scratching the surface; I do not see any straightforward way to pull this off right now (for seasoned Scala developers, maybe doable)
Well, the good news is that Scala's Product interface, implemented by all case classes, actually doesn't make this very hard to do. I'm the author of a Scala serialization library called Salat that supplies some utilities for using pickled Scala signatures to get typed field information:
https://github.com/novus/salat - check out some of the utilities in the salat-util package.
Actually, I think this is something that Salat should do - what a good idea.
Re: D.C. Sobral's point about the impossibility of verifying params at compile time - point taken, but in practice this should work at runtime just like deserializing anything else with no guarantees about structure, like JSON or a Mongo DBObject. Also, Salat has utilities to leverage default args where supplied.
This is not possible, because it is impossible to verify at compile time that all parameters were passed in that map.
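For illustration, a hedged runtime-reflection sketch (Scala 2.11 naming; fromMap is a made-up helper that assumes a single constructor). It carries no compile-time guarantees: a missing key or a wrong type fails at runtime, which is exactly the guarantee that cannot be given.

import scala.reflect.runtime.{universe => ru}

// Build a case class instance from a Map by matching constructor parameter names to keys
def fromMap[T: ru.TypeTag](map: Map[String, Any]): T = {
  val mirror      = ru.runtimeMirror(getClass.getClassLoader)
  val classSym    = ru.typeOf[T].typeSymbol.asClass
  val classMirror = mirror.reflectClass(classSym)
  val ctor        = ru.typeOf[T].decl(ru.termNames.CONSTRUCTOR).asMethod  // assumes one constructor
  val args        = ctor.paramLists.flatten.map(p => map(p.name.toString))
  classMirror.reflectConstructor(ctor)(args: _*).asInstanceOf[T]
}

case class Foo(a: Int, b: Int)
fromMap[Foo](Map("a" -> 1, "b" -> 2))  // Foo(1,2)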

scala - is it possible to force immutability on an object?

I mean, is there some declarative way to prevent an object from changing any of its members?
In the following example
class student(var name:String)
val s = new student("John")
"s" has been declared as a val, so it will always point to the same student.
But is there some way to prevent s.name from being changed just by declaring it as immutable?
Or the only solution is to declare everything as val, and manually force immutability?
No, it's not possible to declare something immutable. You have to enforce immutability yourself by not allowing anyone to change it, that is, by removing all ways of modifying the class.
Someone can still modify it using reflection, but that's another story.
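For illustration (not from the answer), a minimal sketch of enforcing it by hand with a hypothetical Student class: every field is a val and "modification" returns a new instance.

// Immutability by construction, not by declaration
final class Student private (val name: String) {
  def withName(newName: String): Student = new Student(newName)  // copy instead of mutate
}
object Student {
  def apply(name: String): Student = new Student(name)
}

val s = Student("John")
// s.name = "Jane"                 // does not compile: name is a val with no setter
val renamed = s.withName("Jane")   // a new, separate instance; s is untouched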
Scala doesn't enforce that, so there is no way to know. There is, however, an interesting compiler-plugin project named pusca (I guess it stands for Pure-Scala). Pure is defined there as not mutating a non-local variable and being side-effect free (e.g. not printing to the console)—so that calling a pure method repeatedly will always yield the same result (what is called referentially transparent).
I haven't tried out that plug-in myself, so I can't say if it's any stable or usable already.
There is no way that Scala could do this generally.
Consider the following hypothetical example:
class Student(var name: String, var course: Course)

def stuff(course: Course) {
  magically_pure_val s = new Student("Fredzilla", course)
  someFunctionOfStudent(s)
  genericHigherOrderFunction(s, someFunctionOfStudent)
  course.someMethod()
}
The pitfalls for any attempt to actually implement that magically_pure_val keyword are:
someFunctionOfStudent takes an arbitrary student, and isn't implemented in this compilation unit. It was written/compiled knowing that Student consists of two mutable fields. How do we know it doesn't actually mutate them?
genericHigherOrderFunction is even worse; it's going to take our Student and a function of Student, but it's written polymorphically. Whether or not it actually mutates s depends on what its other arguments are; determining that at compile time with full generality requires solving the Halting Problem.
Let's assume we could get around that (maybe we could set some secret flags that mean exceptions get raised if the s object is actually mutated, though personally I wouldn't find that good enough). What about that course field? Does course.someMethod() mutate it? That method call isn't invoked from s directly.
Worse than that, we only know that we'll have passed in an instance of Course or some subclass of Course. So even if we are able to analyze a particular implementation of Course and Course.someMethod and conclude that this is safe, someone can always add a new subclass of Course whose implementation of someMethod mutates the Course.
There's simply no way for the compiler to check that a given object cannot be mutated. The pusca plugin mentioned by 0__ appears to detect purity the same way Mercury does; by ensuring that every method is known from its signature to be either pure or impure, and by raising a compiler error if the implementation of anything declared to be pure does anything that could cause impurity (unless the programmer promises that the method is pure anyway).[1]
This is quite different from simply declaring a value to be completely (and deeply) immutable and expecting the compiler to notice if any of the code that could touch it could mutate it. It's also not a perfect inference, just a conservative one.
[1] The pusca README claims that it can infer impurity of methods whose last expression is a call to an impure method. I'm not quite sure how it can do this, as checking if that last expression is an impure call requires checking if it's calling a not-declared-impure method that should be declared impure by this rule, and the implementation might not be available to the compiler at that point (and indeed could be changed later even if it is). But all I've done is look at the README and think about it for a few minutes, so I might be missing something.