I'm learning Scala along with a DSL called Chisel.
In Chisel there is an omnipresent pattern like:
class TestModule extends Module {
  // overriding an io field of Module
  val io = IO(new Bundle {
    // add a new field that was not defined in Bundle
    val a = ...
  })
  // do something with io.a
}
As I figured out, this code no longer compiles as of Scala 2.12.0. Because of a change in type inference, the io field does not get the 'extended' (refined) type, so io.a is effectively inaccessible from anywhere outside the definition of the anonymous subclass.
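A minimal sketch of the effect, independent of Chisel (Module, io and a here are stand-ins, not the Chisel API):

abstract class Module { def io: Any }

class TestModule extends Module {
  // Since 2.12 the overriding val is inferred to have the inherited type Any,
  // so the structural refinement { val a: Int } is lost.
  val io = new { val a = 42 }
  // io.a  // error: value a is not a member of Any
}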
I can understand (more or less) the motivation of this change.
But this particular consequence of it looks quite strange to me.
I've managed to write some ways to overcome this problem, but none of them satisfies me completely.
So:
What is the shortest way to extend an overridden-field-object with new fields?
From a DSL-user point of view
From a DSL(library)-writer point of view
And more generally: what is the best way to add fields 'in-place'? And if there is no good way, why is adding fields in-place so neglected?
My solutions for DSL-users:
add an explicit type definition, as suggested by the authors of the change
class TestModule extends Module {
  val io: { val a: Type; ... } = IO(new Bundle {
    val a = ...
    ...
  })
}
This roughly doubles the typing. With more than about three additional fields it looks really scary, even when broken into multiple lines. And in Chisel there are usually quite a lot of fields here.
assign the whole thing to another (non-overriding) field first
class TestModule extends Module {
  val myIo = IO(new Bundle {
    val a = ...
    ...
  })
  val io = myIo
}
This adds one unnecessary line and forces you to invent another name... And it looks like magic. Really.
make the class named instead of anonymous
class TestModule extends Module {
  class MyIo extends Bundle {
    val a = ...
    ...
  }
  val io = IO(new MyIo)
}
Still an extra line and an extra name. And it looks a bit too involved for people who came to use a DSL without much desire to dive deep into Scala (I'm not one of them, but I know a lot of such people).
As for DSL-writers, the only thing I can suggest is writing a macro. And, as far as I understand the problem, the macro could not simply be plugged in in place of the IO() function; it would have to replace the whole val io = IO(...) assignment.
I'm trying to get the names of all traits a class extends, using getInterfaces, which returns an array of the traits' classes. When I manually access each member of the array, the getName method returns simple names, like this:
trait A
trait B
class C() extends A, B
val c = C()
val arr = c.getClass.getInterfaces
arr(0).getName // : String = A
arr(1).getName // : String = B
However, when I use the map function on arr, the resulting array contains a cryptic version of the traits' names:
arr.map(t => t.getName) // : Array[String] = Array(repl$.rs$line$1$A, repl$.rs$line$2$B)
The goal of this question is not to find out how to get a resulting array that contains simple names (for that purpose, I can just use arr.map(t => t.getSimpleName)). What I'm curious about is why accessing the array manually and using map do not yield consistent results. Am I wrong to think that both ways are equivalent?
I believe you are running things in the Scala REPL or Ammonite.
When you define:
trait A
trait B
class C() extends A, B
the classes A, B and C aren't defined at the top level of the root package. The REPL creates an isolated environment, compiles the code and loads the results into some inner "anonymous" namespace.
Except the namespace is not truly anonymous: where this bytecode was created is reflected in the class name. So apparently there was something similar (not necessarily identical) to:
// repl$ suggests an object
object repl {
  // .rs sounds like a nested object(?)
  object rs {
    // $line sounds like a nested class
    class line { /* ... */ }
    // $line$1 sounds like the first anonymous instance of line
    new line { trait A }
    // import from above
    // $line$2 sounds like the second anonymous instance of line
    new line { trait B }
    // import from above
    // ...
  }
}
which was done because of how scoping works in the REPL: each new line creates a new scope in which the previous definitions are visible and new ones are added (possibly shadowing some old definitions). This can be achieved by wrapping each new piece of code in a new anonymous class, compiling it, loading it into the classpath, instantiating it and importing its contents. By putting each new line into a separate class, the REPL is able to compile and run things in steps, without waiting for you to tell it that the script is complete.
When you access class names with runtime reflection, you are seeing the artifacts of how things are evaluated. One path might go through the REPL's prettifiers, which hide such things, while the other bypasses them, so you see the raw value as the JVM sees it.
The problem is not with map but rather with Array, specifically its toString method (which is one of the many reasons for not using Array).
Actually, in this case it is even worse, since the REPL does some weird things to try to pretty-print Arrays, which in this case didn't work well (and, IMHO, just adds to the confusion).
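For illustration (a quick sketch, run outside any prettifier): Array inherits the JVM array toString, which prints a type descriptor plus an identity hash, while proper collections print their elements.

println(Array("A", "B").toString) // something like [Ljava.lang.String;@1b6d3586
println(List("A", "B").toString)  // List(A, B)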
You can fix this problem by calling mkString directly, like:
val arr = c.getClass.getInterfaces
val result = arr.map(t => t.getName)
val text = result.mkString("[", ", ", "]")
println(text)
However, I would rather suggest not using Array at all; instead, convert it to a proper collection (e.g. List) as soon as possible, like:
val interfaces = c.getClass.getInterfaces.toList
interfaces.map(t => t.getName)
Note: about the other reasons for not using Arrays:
They are mutable.
They are invariant.
They are not part of the collections hierarchy, so you can't use them in generic methods (well, you actually can, but that requires more tricks).
Their equals compares by reference instead of by value (see the snippet below).
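A quick sketch of that last pitfall:

val a1 = Array(1, 2, 3)
val a2 = Array(1, 2, 3)
println(a1 == a2)                       // false: arrays compare by reference
println(a1.sameElements(a2))            // true: element-wise comparison
println(List(1, 2, 3) == List(1, 2, 3)) // true: collections compare by value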
I am trying to pass a generic class as a parameter. If I give a concrete case class and values, then it works fine, but I would like to make it generic.
class DB[T] {
  lazy val ctx = new OracleJdbcContext(SnakeCase, "ctx")
  import ctx._

  def insert(input: T) = {
    val q = quote {
      query[T].insert(lift(input))
    }
    ctx.run(q)
  }
}
I am getting errors saying:
"Can't find an implicit SchemaMeta for type T
Can't find Encoder for type 'T'. Note that Encoders are invariant"
But if I give the actual class name, then it works fine.
case class Location(street: String, pinCode: Int)

class DB {
  lazy val ctx = new OracleJdbcContext(SnakeCase, "ctx")
  import ctx._

  val q = quote {
    query[Location].insert(Location("2ndcross", 500001))
  }
  ctx.run(q)
}
You need to have a SchemaMeta[T] in scope to be able to execute queries using this type. A naive solution would be to demand it as a context bound (and thus an implicit class parameter), like this:
class DB[T: SchemaMeta]
but this won't work, because it is ctx that provides those instances.
I believe that you will need to follow examples shown here: https://getquill.io/#contexts-dependent-contexts
But even then what you want may not be achievable.
An important thing to understand when working with Quill is that almost everything there is based on macros, and if you abstract things away, then there is not enough information for these macros to act on. So you either need to duplicate the code that requires macros, or wrap the code that is meant to be generic in your own macro.
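For example, a sketch of the duplication route, reusing the Location entity and context setup from the question so that each macro call sees a concrete type:

case class Location(street: String, pinCode: Int)

// One DAO per concrete entity, so Quill's macros see a concrete type.
class LocationDB {
  lazy val ctx = new OracleJdbcContext(SnakeCase, "ctx")
  import ctx._

  def insert(input: Location) = {
    val q = quote {
      query[Location].insert(lift(input))
    }
    ctx.run(q)
  }
}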
This is a "real life" OO design question. I am working with Scala, and interested in specific Scala solutions, but I'm definitely open to hear generic thoughts.
I am implementing a branch-and-bound combinatorial optimization program. The algorithm itself is pretty easy to implement. For each different problem, we just need to implement a class that knows what the allowed neighbor states for the search are, how to calculate the cost, and then potentially what the lower bound is, etc.
I also want to be able to experiment with different data structures. For instance, one way to store a logic formula is as a simple list of lists of integers, representing a set of clauses, with each integer a literal. We can get much better performance, though, if we do something like a "two-literal watch list" and store some extra information about the formula in general.
All of that would mean something like this:
object BnBSolver {
  def solve[S <: BnBState[_]](states: Seq[S], best_state: Option[S]): Option[S] =
    if (states.isEmpty) best_state
    else {
      val next_state = states.head
      /* compare to best state, derive new_branches and new_best_state, etc... */
      val new_states = new_branches ++ states.tail
      solve(new_states, new_best_state)
    }
}
class BnBState[F <: Formula[F]](clauses: F, assigned_variables: List[Int]) {
  def cost: Int = ???
  def branches: Seq[BnBState[F]] = {
    val ll = clauses.pick_variable
    List(
      new BnBState(clauses.assign(ll), ll :: assigned_variables),
      new BnBState(clauses.assign(-ll), -ll :: assigned_variables)
    )
  }
}
case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Int): F =
    Formula(clauses.filterNot(_ contains ll)
      .map(_.filterNot(_ == -ll)))
}
Hopefully this is not too crazy, wrong or confusing. The whole issue here is that this assign method from a formula would usually take just the current literal that is going to be assigned. In the case of two-literal watch lists, though, you are doing some lazy thing that requires you to know later what literals have been previously assigned.
One way to fix this is to just keep the list of previously assigned literals in the data structure, maybe as a private field, making it a self-contained lazy data structure. But the list of previous assignments is actually something that may be naturally available to whoever is using the Formula class. So it makes sense to let the caller provide the list on every assign, if necessary.
The problem here is that we now cannot have an abstract Formula class that just declares an assign(ll: Int): Formula. In the normal case this is OK, but if it is a two-literal watch list Formula, the method is actually assign(literal: Int, previous_assignments: Seq[Int]).
From the point of view of the classes using it, this is kind of OK. But then how do we write generic code that can take all these different versions of Formula? Because of the drastic signature change, it cannot simply be an abstract method. We could maybe force the user to always provide the full list of assigned variables, but then that is a kind of lie too. What to do?
The idea is that the watch list class just becomes a regular assign(Int) class if I write some kind of adapter method that knows where to take the previous assignments from... I am thinking that maybe with implicits we can cook something up.
I'll try to make my answer a bit general, since I'm not convinced I'm completely following what you are trying to do. Anyway...
Generally, the first thought should be to accept a common super-class as a parameter. Obviously that won't work with Int and Seq[Int].
You could just have two methods; have one call the other. For instance just wrap an Int into a Seq[Int] with one element and pass that to the other method.
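A minimal sketch of that delegation, assuming the Formula shape from the question:

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  // The Int overload just wraps its argument and delegates to the Seq overload.
  def assign(ll: Int): F = assign(Seq(ll))
  def assign(lls: Seq[Int]): F = ??? // the real implementation
}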
You can also wrap the parameter in some custom class, e.g.
class Assignment {
  ...
}

def int2Assignment(n: Int): Assignment = ...
def seq2Assignment(s: Seq[Int]): Assignment = ...

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Assignment): F = ...
}
And of course you would have the option to make those conversion methods implicit so that callers just have to import them, not call them explicitly.
Lastly, you could do this with a typeclass:
trait Assigner[A] {
  ...
}

implicit val intAssigner = new Assigner[Int] {
  ...
}

implicit val seqAssigner = new Assigner[Seq[Int]] {
  ...
}

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign[A: Assigner](ll: A): F = ...
}
You could also make that type parameter at the class level:
case class Formula[A: Assigner, F <: Formula[A, F]](clauses: List[List[Int]]) {
  def assign(ll: A): F = ...
}
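For concreteness, here is a hypothetical way the typeclass could be filled in (the method names literal and previous are my invention, not from the question):

trait Assigner[A] {
  def literal(a: A): Int        // the literal being assigned now
  def previous(a: A): Seq[Int]  // previously assigned literals, if known
}

implicit val intAssigner: Assigner[Int] = new Assigner[Int] {
  def literal(a: Int): Int = a
  def previous(a: Int): Seq[Int] = Nil // plain assign: no history available
}

implicit val seqAssigner: Assigner[Seq[Int]] = new Assigner[Seq[Int]] {
  def literal(a: Seq[Int]): Int = a.head       // current literal first
  def previous(a: Seq[Int]): Seq[Int] = a.tail // the rest is the history
}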
Which one of these paths is best is up to preference and how it might fit in with the rest of the code.
I am trying to pickle some relatively-simple-structured but large-and-slow-to-create classes in a Scala NLP (natural language processing) app of mine. Because there's lots of data, it needs to pickle and esp. unpickle quickly and without bloat. Java serialization evidently sucks in this regard. I know about Kryo but I've never used it. I've also run into Apache Avro, which seems similar although I'm not quite sure why it's not normally mentioned as a suitable solution. Neither is Scala-specific and I see there's a Scala-specific package called Scala Pickling. Unfortunately it lacks almost all documentation and I'm not sure how to create a custom pickler.
I see a question here:
Scala Pickling: Writing a custom pickler / unpickler for nested structures
There's still some context lacking in that question, and also it looks like an awful lot of boilerplate to create a custom pickler, compared with the examples given for Kryo or Avro.
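(For reference, the kind of low-boilerplate usage I am comparing against looks roughly like this with Kryo — an untested sketch using default serializers, where mapper.bin and the registered class are just placeholders:)

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.{Input, Output}
import java.io.{FileInputStream, FileOutputStream}

val kryo = new Kryo()
kryo.register(classOf[FeatureMapper]) // plus any nested classes

val out = new Output(new FileOutputStream("mapper.bin"))
kryo.writeObject(out, new FeatureMapper) // serialize
out.close()

val in = new Input(new FileInputStream("mapper.bin"))
val restored = kryo.readObject(in, classOf[FeatureMapper]) // deserialize
in.close()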
Here are some of the classes I need to serialize:
trait ToIntMemoizer[T] {
  protected val minimum_raw_index: Int = 1
  protected var next_raw_index: Int = minimum_raw_index

  // For replacing items with ints. This is a wrapper around
  // gnu.trove.map.TObjectIntMap to make it look like mutable.Map[T, Int].
  // It behaves the same way.
  protected val value_id_map = trovescala.ObjectIntMap[T]()

  // Map in the opposite direction. This is a wrapper around
  // gnu.trove.map.TIntObjectMap to make it look like mutable.Map[Int, T].
  // It behaves the same way.
  protected val id_value_map = trovescala.IntObjectMap[T]()
  ...
}
class FeatureMapper extends ToIntMemoizer[String] {
  val features_to_standardize = mutable.BitSet()
  ...
}

class LabelMapper extends ToIntMemoizer[String] {
}

case class FeatureLabelMapper(
  feature_mapper: FeatureMapper = new FeatureMapper,
  label_mapper: LabelMapper = new LabelMapper
)

class DoubleCompressedSparseFeatureVector(
  var keys: Array[Int], var values: Array[Double],
  val mappers: FeatureLabelMapper
) { ... }
How would I create custom picklers/unpicklers in a way that uses as little boilerplate as possible (since I have a number of other classes that need similar treatment)?
Thanks!
From an example in the book "Programming in Scala", the script is:
import scala.collection.mutable.Map

object ChecksumAccumulator {
  private val cache = Map[String, Int]()

  def calculate(s: String): Int =
    if (cache.contains(s))
      cache(s)
    else {
      val acc = new ChecksumAccumulator
      for (c <- s)
        acc.add(c.toByte)
      val cs = acc.checksum
      cache += (s -> cs)
      cs
    }
}
But when I tried to compile this file with
$ scalac ChecksumAccumulator.scala
it generated an error:
not found: type ChecksumAccumulator
  val acc = new ChecksumAccumulator
Any suggestions?
Thanks,
The 'object' keyword defines a singleton object, not a class. So you can't new an object; the 'new' keyword requires a class.
Check this: Difference between object and class in Scala
You probably left some code out that looks like
class ChecksumAccumulator {
  //...
}
The other answers are correct in saying what the problem is, but not really helping you understand why the example from the book is apparently not correct.
However, if you look at the Artima site, you will find the example is in a file here
Your code is an incomplete fragment. The file also includes these lines
// In file ChecksumAccumulator.scala
class ChecksumAccumulator {
  private var sum = 0
  def add(b: Byte) { sum += b }
  def checksum(): Int = ~(sum & 0xFF) + 1
}
... without which you will get the error you had.
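With both the class and the companion object in the file, the example compiles, and you can exercise it with a small driver like this (the Summer object and the string are just illustrative, not part of the question's file):

object Summer extends App {
  println(ChecksumAccumulator.calculate("Every value is an object."))
}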
Your issue is here:
val acc = new ChecksumAccumulator
You cannot use the new keyword with an object.
Objects cannot be re-instantiated; you always have a single instance of an object in Scala. This is similar to static members in Java.
Your code probably meant ChecksumAccumulator to be a companion object. That's kind of like a factory in imperative languages.
Basically, you have an object and class pair. An object (a singleton, in imperative-language terms) can't be instantiated multiple times, as people here already noted, and is usually used to define some static logic. In fact, there is only one instantiation -- when you refer to it for the first time. But an object can have a companion -- a regular class with the same name -- and I think you've missed the definition of that regular class, so the object can't see anyone else but itself.
The solution is to define that class, or to omit new (but I think that would be logically wrong).