Conditional Module instantiation in Chisel - scala

I'm trying to instantiate one of two Chisel Modules according to a boolean parameter.
val useLib = true
val myModule = if (useLib) Module(new MyModule1()) else Module(new MyModule2())
But that doesn't work: Chisel doesn't recognize the io interface:
[error] /path/to/source/mysource.scala:59:13: value io is not a member of Any
[error] myModule.io.pdm <> io.pdm
[error] ^
And of course, MyModule1() and MyModule2() have the same io interfaces.
Is it possible to conditionally instantiate Module() as we do with the preprocessor in C or C++?

I've written a new doc about upgrading from Chisel 3.4 to 3.5 that deals with this issue. It's not live on the website yet but will be once Chisel 3.5.0-RC2 is released. Here's a link to the doc: https://gist.github.com/jackkoenig/4949f6a455ae74923bbcce10dbf846b5#value-io-is-not-a-member-of-chisel3module
In short, from Scala's perspective, MyModule1 and MyModule2 do not actually have the same interface, even though they are structurally identical. The trick is to factor that interface out into a named Bundle class and use it in each of those modules. You then make each Module extend a trait that provides that interface, and Scala will know that the interfaces are the same.
For more information and examples, see the above linked doc.
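The underlying typing issue can be seen without Chisel: an if/else expression gets the least upper bound of its two branches, which is Any unless a common supertype declares io. A minimal plain-Scala sketch (all names hypothetical; in real Chisel code the trait would declare the shared Bundle):

```scala
// Plain-Scala sketch of the fix: a shared trait declares the interface,
// so the if/else expression has a useful common supertype.
trait HasCommonIO {
  def io: String // stands in for the shared Bundle in real Chisel code
}

class MyModule1 extends HasCommonIO { val io = "module1" }
class MyModule2 extends HasCommonIO { val io = "module2" }

object Demo {
  def choose(useLib: Boolean): HasCommonIO =
    if (useLib) new MyModule1 else new MyModule2

  def main(args: Array[String]): Unit =
    println(choose(true).io) // .io resolves via the common trait
}
```

Without the `HasCommonIO` upper bound, `choose` would infer `Any` and the `.io` call would fail exactly as in the question's error.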

Related

Where is the Object class or java.lang imported into the scala package or Any class?

From my understanding, the ultimate class in Scala is Any. However, I thought Scala was built off Java, so wouldn't the ultimate class be Object? I have been checking the documentation, and I could be wrong, but it does not show that Object is the parent class of Any, nor can I see anywhere the java.lang package being imported into Scala, which should be its backbone, right?
You are confusing the Scala Programming Language with one of its Implementations.
A Programming Language is a set of mathematical rules and restrictions. Not more. It isn't "written in anything" (except maybe in English) and it isn't "built off anything" (except maybe the paper that the specification is written on).
An Implementation is a piece of software that either reads a program written in the Programming Language and executes it in such a way that exactly the things the Language Specification says should happen do happen (in which case we call the Implementation an Interpreter), or reads the program and outputs another program in another language, such that executing that output program with an interpreter for its language makes things happen exactly as the Specification for the input language says (in which case we call it a Compiler).
Either way, it is the job of the person writing the Implementation to make sure that his Implementation does what the Specification says it should do.
So, even if I am writing an Implementation of Scala that is "built off Java" and written in Java, I still need to make sure that Any is the top type, because that's what the Scala Language Specification says. It is probably instructive to look at how, exactly, the Scala Language Specification phrases this [bold emphasis mine]:
Classes AnyRef and AnyVal are required to provide only the members declared in class Any, but implementations may add host-specific methods to these classes (for instance, an implementation may identify class AnyRef with its own root class for objects).
There are currently three actively-maintained Implementations of Scala, and one abandoned one.
The abandoned Implementation of Scala is Scala.NET, which was a compiler targeting the Common Language Infrastructure. It was abandoned due to lack of interest and funding. (Basically, all the users that probably would have used Scala.NET were already using F#.)
The currently maintained Implementations of Scala are:
Scala-native: a compiled Implementation targeting unixoid Operating Systems and Windows.
Scala.js: a compiled Implementation targeting the ECMAScript and Web platform.
Scala (a rather unfortunately confusing name, because it is the same as the language): a compiled Implementation targeting the Java platform. And by "Java platform", I mean the Java Virtual Machine and the Java Runtime Environment but not the Java Programming Language.
All three Implementations are written 100% in Scala. Actually, they are not three fully independent implementations: they share the same compiler frontend with different backends, and they share the parts of the Scala Standard Library that are written in Scala, re-implementing only the parts written in other languages.
So, what is true is that the Java Implementation of Scala does indeed do something with java.lang.Object. However, java.lang.Object is not the superclass of scala.Any. In fact, it can't be because scala.Any is the root superclass of both reference types and value types, whereas java.lang.Object is only the root superclass of all reference types. Therefore, java.lang.Object is actually equivalent to scala.AnyRef and not to scala.Any. However, java.lang.Object is not the superclass of scala.AnyRef either, but rather, both are the same class.
Also, java.lang._ is automatically imported just like scala._ is. But this does not apply to the Scala Programming Language, it only applies to the Java Implementation of the Scala Programming Language, whose name is unfortunately also Scala.
So, only for one of the three Implementations, there is some truth to the statement that java.lang.Object is the root class, but it is not a superclass of scala.Any, rather it is the same class as scala.AnyRef.
But again, this is only true for the Java Implementation of Scala. For example, in Scala.NET, the root superclass would be identified with System.Object, not java.lang.Object, and it would be equivalent to scala.Any, not scala.AnyRef because the CLI has a unified type system like Scala where reference types and value types are unified in the same type system. And I haven't checked Scala.js, but I would assume that it would identify Object with scala.AnyRef.
Note, however, that none of this is because the Implementation is "built off" something. The reason for doing this, for trying to merge the Scala and Java / CLI / ECMAScript class hierarchy, is for interoperability, it is for making it easy to call Scala code from every other language on the Java / CLI / ECMAScript platform, and vice versa, call code written in other languages from Scala. If you didn't care about that, then there would be no need to jump through these hoops.
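On the JVM implementation, the identification of scala.AnyRef with java.lang.Object can be checked directly, since both classOf expressions yield the very same runtime Class (a small sketch):

```scala
object SameClass {
  def main(args: Array[String]): Unit = {
    // On the JVM implementation, scala.AnyRef and java.lang.Object are
    // the same class, not a subclass relationship.
    println(classOf[AnyRef] == classOf[Object]) // true
    println(classOf[AnyRef].getName)            // java.lang.Object
  }
}
```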
java.lang.Object is not the parent of scala.Any. Consider the following relationships
implicitly[Any <:< java.lang.Object] // error
implicitly[AnyVal <:< java.lang.Object] // error
implicitly[AnyRef <:< java.lang.Object] // ok
However, say you had the following Java class
public class Foo {
    public void bar(Object o) {}
    public void zar(int o) {}
    public void qux(java.lang.Integer o) {}
}
then all of the following would still work when called from Scala
val foo = new Foo
foo.bar(42.asInstanceOf[Int])
foo.bar(42.asInstanceOf[Any])
foo.bar(42.asInstanceOf[AnyVal])
foo.bar(42.asInstanceOf[AnyRef])
foo.zar(42) // zar takes primitive int
foo.qux(42) // qux takes java.lang.Integer

Can't bind[SttpBackend[Try, Nothing]]

I want to use the sttp library with Guice (via the scala-guice wrapper) in my app. But it seems it is not so easy to correctly bind things like SttpBackend[Try, Nothing].
SttpBackend.scala
Try[_] and Try[AnyRef] show some other errors, but I still have no idea how it should be done correctly.
the error I got:
kinds of the type arguments (scala.util.Try) do not conform to the expected kinds of the type parameters (type T).
[error] scala.util.Try's type parameters do not match type T's expected parameters:
[error] class Try has one type parameter, but type T has none
[error] bind[SttpBackend[Try, Nothing]].toProvider[SttpBackendProvider]
[error] ^
SttpBackendProvider looks like:
def get: SttpBackend[Try, Nothing] = TryHttpURLConnectionBackend(opts)
complete example in scastie
Interestingly, scala-guice version 4.1.0 shows this error, but the latest, 4.2.2, fails inside the library when converting Nothing to a JavaType.
I believe you hit two different bugs in Scala-Guice, one of which is not fixed yet (and probably not even submitted yet).
To describe those issues I need a quick intro into how Guice and Scala-Guice work. Essentially, what Guice does is maintain a mapping from a type onto a factory method for an object of that type. To support some advanced features, types are mapped onto an internal "key" representation, and then for each "key" Guice builds a way to construct the corresponding object. It is also important that generics in Java are implemented using type erasure. That's why when you write something like:
bind(classOf[SttpBackend[Try, Nothing]]).toProvider(classOf[SttpBackendProvider])
in raw Guice, the "key" actually becomes something like "com.softwaremill.sttp.SttpBackend". Luckily, the Guice developers thought about this issue with generics and introduced TypeLiteral[T] so that you can convey the information about generics.
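That erasure can be observed directly in plain Scala, which is why a key built from a raw Class loses its type arguments (a minimal sketch):

```scala
// Type erasure demo: both parameterizations share one runtime Class,
// so a Guice key built from the raw Class cannot tell them apart.
object ErasureDemo {
  def main(args: Array[String]): Unit = {
    println(classOf[List[Int]] == classOf[List[String]]) // true: type args erased
    println(classOf[List[Int]].getName)                  // scala.collection.immutable.List
  }
}
```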
The Scala type system is richer than Java's, and it has better reflection support from the compiler. Scala-Guice exploits this to map Scala types onto those more detailed keys automatically. Unfortunately it doesn't always work perfectly.
The first issue is the result of two facts: the type SttpBackend is defined as
trait SttpBackend[R[_], -S]
so it expects its first type parameter to be a type constructor; and originally Scala-Guice used the scala.reflect.Manifest infrastructure. AFAIU such higher-kinded types are not representable as Manifests, and this is what the error in your question really says.
Luckily Scala added the newer scala.reflect.runtime.universe.TypeTag infrastructure to tackle this issue in a better and more consistent way, and Scala-Guice migrated to it. That's why the compiler error goes away with the newer version of Scala-Guice. Unfortunately there is another bug in Scala-Guice that makes the code fail at runtime: a lack of handling for the Nothing Scala type. You see, the Nothing type is a kind of fake one on the JVM; it is one of the places where the Scala type system is richer than the Java one. There is no direct mapping for Nothing in the JVM world. Luckily there is no way to create any value of type Nothing, but unfortunately you can still create a classOf[Nothing]. The Scala-to-JVM compiler handles it by using an artificial scala.runtime.Nothing$. It is not part of the public API; it is an implementation detail of Scala on the JVM specifically. Anyway, this means that the Nothing type needs additional handling when converting into the Guice TypeLiteral, and there is none. There is handling for Any, the cousin of Nothing, but not for Nothing (see the usage of anyType in TypeConversions.scala).
So there are really two workarounds:
Use raw Java-based syntax for Guice instead of the nice Scala-Guice one:
bind(new TypeLiteral[SttpBackend[Try, Nothing]]() {})
.toInstance(sttpBackend) // or to whatever
See online demo based on your example.
Patch the TypeConversions.scala in the Scala-Guice as in:
private[scalaguice] object TypeConversions {
  private val mirror = runtimeMirror(getClass.getClassLoader)
  private val anyType = typeOf[Any]
  private val nothingType = typeOf[Nothing] // added
  ...

  def scalaTypeToJavaType(scalaType: ScalaType): JavaType = {
    scalaType.dealias match {
      case `anyType` => classOf[java.lang.Object]
      case `nothingType` => classOf[scala.runtime.Nothing$] // added
      ...
I tried it locally and it seems to fix your example. I didn't do any extensive tests so it might have broken something else.

Does Shapeless use reflection and is it safe to use in scala production code?

I'm still a bit confused about the Scala Shapeless library after reading many articles. It seems that Shapeless uses Scala compile-time features? So does it use reflection, and is it safe for production code?
Shapeless doesn't use reflection; it uses macros to inspect the structure of classes. With Shapeless this inspection happens at compilation time, not at runtime (reflection happens at runtime). As a result, Shapeless can be considered safer than reflection because it can perform many checks at compilation time.
Let's try to get a field by name using shapeless
case class MyClass(field: String)
import shapeless._
val myClassLens = lens[MyClass] >> 'field
val res = myClassLens.get(MyClass("value")) // res == "value"
If we use an invalid field name, the compiler will complain with a compilation error.
On the other hand, if we tried to achieve the same thing using reflection, the field name would only be checked at runtime (maybe in production); that's why reflection is not considered as safe as Shapeless. Field access will also be much faster with Shapeless than with reflection.
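For contrast, here is a sketch of the reflection-based approach using plain Java reflection (the helper is hypothetical): the field name is just a string, so a typo only surfaces at runtime as a NoSuchFieldException instead of a compile error.

```scala
case class MyClass(field: String)

object ReflectionDemo {
  // The field name is only a string here: a typo becomes a runtime
  // NoSuchFieldException rather than a compilation error.
  def getField(obj: AnyRef, name: String): Any = {
    val f = obj.getClass.getDeclaredField(name)
    f.setAccessible(true)
    f.get(obj)
  }

  def main(args: Array[String]): Unit =
    println(getField(MyClass("value"), "field")) // value
}
```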

Why does Array.fill take an implicit scala.reflect.ClassManifest?

So I'm playing with writing a battlecode player in Scala. In battlecode certain classes are disallowed and there is a runtime exception if you ever try to access them. When I use the Array.fill function I get a message from the battlecode server saying [java] Illegal class: scala/reflect/Manifest$. This is the offending line:
val g_score = Array.fill[Int](rc.getMapWidth(), rc.getMapHeight())(0)
The method takes an implicit ClassManifest argument which has the following documentation:
A ClassManifest[T] is an opaque descriptor for type T. It is used by the compiler
to preserve information necessary for instantiating Arrays in those cases where
the element type is unknown at compile time.
But I do know the type of the array elements at compile time; as shown above, I explicitly state that they will be Int. Is there a way to avoid this? As a workaround I've written my own version of Array.fill, but this seems like a hack. As an aside, does Scala have real 2D arrays? Array.fill seems to return an Array[Array[T]], which is the only way I found to write my own. This also seems inelegant.
Edit: Using Scala 2.9.1
For background information, see this related question: What is a Manifest in Scala and when do you need it?. In this answer, you will find an explanation why manifests are needed for arrays.
In short: although the JVM uses type erasure, arrays are an exception and need a manifest. Since your code compiled, the manifest was found (manifests are always available for proper types). Your error occurs at runtime.
I don't know the details of the battlecode server, but there are two possibilities: Either you are running your compiled classes with a binary incompatible version of Scala (difference in major version, e.g. compiled with Scala 2.9 and server uses 2.10). Or the server doesn't even have the scala-library.jar on its class path.
As said in the comment, manifests are deprecated in Scala 2.10 and replaced by ClassTag.
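The role of the implicit can be sketched in newer Scala with ClassTag; the helper below is hypothetical:

```scala
import scala.reflect.ClassTag

object ManifestDemo {
  // Array.fill needs a ClassTag (formerly ClassManifest) because JVM arrays
  // are reified: the element class must be known to instantiate one.
  def fillZeros[T: ClassTag](n: Int, zero: T): Array[T] = Array.fill(n)(zero)

  def main(args: Array[String]): Unit =
    println(fillZeros(3, 0).toList) // List(0, 0, 0)
}
```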
EDIT: So it seems the class loader is artificially restricting the allowed classes. My suggestion would be: Add a helper Java class. You can easily mix Java and Scala code. If it's just about the Int-Array instantiation, you could provide something like:
public class Helper {
    public static int[][] makeArray(int d1, int d2) { return new int[d1][d2]; }
}
(hope that's valid java code, a bit rusty)
Also, have you tried to create the outer array with new Array[Array[Int]](d1), and then iterate to create the inner arrays?
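The manual construction suggested above avoids the implicit entirely, since new Array[T](n) for a concrete T needs no manifest (a minimal sketch; the helper name is hypothetical):

```scala
object ArrayDemo {
  // Build a d1 x d2 Int array without Array.fill and its implicit:
  // new Array[T](n) for a concrete element type needs no manifest.
  def make2D(d1: Int, d2: Int): Array[Array[Int]] = {
    val outer = new Array[Array[Int]](d1)
    var i = 0
    while (i < d1) {
      outer(i) = new Array[Int](d2) // Int arrays are zero-initialized
      i += 1
    }
    outer
  }

  def main(args: Array[String]): Unit = {
    val a = make2D(3, 4)
    println(a.length)    // 3
    println(a(0).length) // 4
  }
}
```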

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource
application.startup
Now all of those modules have self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application = if (test)
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with TestDataSource
else
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource
application.startup
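The self-type machinery described above can be sketched with minimal hypothetical traits:

```scala
trait DataSource { def query(q: String): String }

trait ProductionDataSource extends DataSource {
  def query(q: String) = s"prod: $q"
}

trait TestDataSource extends DataSource {
  def query(q: String) = s"test: $q"
}

trait Persistence {
  self: DataSource => // anything mixing in Persistence must also be a DataSource
  def save(record: String): String = query(record)
}

object CakeDemo {
  def main(args: Array[String]): Unit = {
    val test = true
    val app =
      if (test) new Object with Persistence with TestDataSource
      else new Object with Persistence with ProductionDataSource
    println(app.save("x")) // test: x
  }
}
```

Omitting the DataSource mixin is a compile error, which is exactly the guarantee the cake pattern provides.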
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction on multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration XML can be a Scala script instead. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}
// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware: the downside of this is that it's more verbose, and you have to be careful about initialization order if you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing all possible combinations as separate cases.
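A self-contained version of the delegation sketch (trait and class names hypothetical) shows the runtime wiring in action:

```scala
trait Module { def foo: Int }

trait DelegatedModule extends Module {
  var delegate: Module = _ // wired at runtime, after construction
  def foo = delegate.foo
}

class Impl extends Module { def foo = 1 }

object DelegationDemo {
  def main(args: Array[String]): Unit = {
    // Mix in DelegatedModule statically, then choose the implementation
    // dynamically by assigning the delegate.
    val composed = new DelegatedModule {}
    composed.delegate = new Impl
    println(composed.foo) // 1
  }
}
```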
Lift has something along those lines built in. It's mostly in scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection