I accidentally ran into a situation like this (the example is simplified to isolate the problem):
abstract class Element(val other: Element)
case object First extends Element(Second)
case object Second extends Element(First)

object Main {
  def main(arguments: Array[String]) {
    val e1 = First
    val e2 = Second
    println("e1: "+e1+" e1.other: "+e1.other)
    println("e2: "+e2+" e2.other: "+e2.other)
  }
}
Would anyone like to guess the output? :-)
e1: First e1.other: Second
e2: Second e2.other: null
The output kind of makes sense. Apparently, at the time the Second object is created, the First one does not yet exist, so null is assigned. The problem is... it's so wrong! It took me a couple of hours to track this one down. Shouldn't the compiler say something about this?
Interestingly, when I tried to run the thing as a Scala script (the same code, minus the object Main and def main lines and the closing }s), I got a seemingly infinite sequence of exceptions like this (not actually infinite - at some point the trace stops, I guess due to some limitation on the depth of exception traces, or something):
vilius#blackone:~$ scala 1.scala
...
at Main$$anon$1.Main$$anon$$Second(1.scala:4)
at Main$$anon$1$First$.<init>(1.scala:3)
at Main$$anon$1.Main$$anon$$First(1.scala:3)
at Main$$anon$1$Second$.<init>(1.scala:4)
at Main$$anon$1.Main$$anon$$Second(1.scala:4)
at Main$$anon$1$First$.<init>(1.scala:3)
...
I'd love to get something at least as informative at runtime...
Ok. I finished my rant. Now I guess I should ask something. :)
So, could you recommend a nice design for case objects pointing to one another? By the way, in my real situation there are several objects pointing to the next and previous instances in a circular way (the last one points to the first one and vice versa).
Using Scala 2.8.1-final
EDIT:
I found a solution for my main problem:
abstract class Element {
  val other: Element
}
case object First extends Element {
  val other = Second
}
case object Second extends Element {
  val other = First
}
This seems to work in the compiled version (but not as a Scala script!). Could anyone shed some light on what's going on here?
EDIT2: This works as a script (the same thing, just using defs; a def is evaluated on each call, so neither object needs the other while it is being constructed):
abstract class Element { def other: Element }
case object First extends Element { def other = Second }
case object Second extends Element { def other = First }
The usual way is like this (changed nesting so you can paste it into the REPL):
object Main {
  abstract class Element(other0: => Element) {
    lazy val other = other0
  }
  case object First extends Element(Second)
  case object Second extends Element(First)

  def main(arguments: Array[String]) {
    val e1 = First
    val e2 = Second
    println("e1: "+e1+" e1.other: "+e1.other)
    println("e2: "+e2+" e2.other: "+e2.other)
  }
}
That is, take a by-name parameter and stick it into a lazy val for future reference.
Edit: The fix you found works because objects are themselves lazy in that you can refer to them but they don't get created until you use them. Thus, one object is free to point itself at the other without requiring that the other one has been initialized already.
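To see that object laziness in isolation, here is a minimal sketch (the LazyDemo and Logged names are just illustrative) that you can compile and run:
object LazyDemo {
  // The body of Logged runs only when Logged is first accessed,
  // not when the program starts.
  object Logged { println("Logged is being initialized") }

  def main(args: Array[String]) {
    println("before first access")
    Logged // initialization happens here, on first use
    Logged // already initialized; prints nothing
  }
}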
I'm trying to get the names of all the traits a class extends, using getInterfaces, which returns an array of the traits' classes. When I manually access each member of the array, the getName method returns simple names, like this:
trait A
trait B
class C() extends A, B
val c = C()
val arr = c.getClass.getInterfaces
arr(0).getName // : String = A
arr(1).getName // : String = B
However, when I use the map function on arr, the resulting array contains a cryptic version of the traits' names:
arr.map(t => t.getName) // : Array[String] = Array(repl$.rs$line$1$A, repl$.rs$line$2$B)
The goal of this question is not about how to get a resulting array that contains simple names (for that purpose, I can just use arr.map(t => t.getSimpleName)). What I'm curious about is why accessing the array manually and using map do not yield consistent results. Am I wrong to think that both ways are equivalent?
I believe you are running things in the Scala REPL or Ammonite.
When you define:
trait A
trait B
class C() extends A, B
the classes A, B and C aren't defined at the top level of the root package. The REPL creates an isolated environment, compiles the code, and loads the results into some inner "anonymous" namespace.
Except it is not truly anonymous: where the bytecode was created is reflected in the class name. So apparently there was something similar (not necessarily identical) to:
// repl$ suggests an object
object repl {
  // .rs sounds like a nested object(?)
  object rs {
    // $line sounds like a nested class
    class line { /* ... */ }
    // $line$1 sounds like the first anonymous instance of line
    new line { trait A }
    // import from above
    // $line$2 sounds like the second anonymous instance of line
    new line { trait B }
    // import from above
    // ...
  }
}
which was made because of how scoping works in the REPL: each new line creates a new scope in which previous definitions are visible and new ones are added (possibly shadowing some old definition). This can be achieved by making each new piece of code the body of a new anonymous class, compiling it, loading it onto the classpath, instantiating it, and importing its contents. By putting each new line into a separate class, the REPL is able to compile and run things in steps, without waiting for you to tell it that the script is complete.
When you access class names with runtime reflection, you are seeing the artifacts of how things are evaluated. One code path might go through the REPL's prettifiers, which hide such things, while the other bypasses them, so you see the raw value as the JVM sees it.
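As a sanity check, here is a hedged sketch (assuming the traits live at the top level of the default package) showing that the discrepancy should disappear in compiled code, where both access paths report the same JVM names:
trait A
trait B
class C extends A with B

object Demo {
  def main(args: Array[String]): Unit = {
    val arr = (new C).getClass.getInterfaces
    println(arr(0).getName)            // prints "A" - no REPL wrapper in the name
    println(arr.map(_.getName).toList) // prints List(A, B) - the same names either way
  }
}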
The problem is not with map but rather with Array, specifically its toString method (which is one of the many reasons for not using Array).
Actually, in this case it is even worse, since the REPL does some weird things to try to pretty-print Arrays, which in this case didn't work well (and, IMHO, just adds to the confusion).
You can fix this problem by calling mkString directly, like:
val arr = c.getClass.getInterfaces
val result = arr.map(t => t.getName)
val text = result.mkString("[", ", ", "]")
println(text)
However, I would rather suggest not using Array at all; instead, convert it to a proper collection (e.g. List) as soon as possible, like:
val interfaces = c.getClass.getInterfaces.toList
interfaces.map(t => t.getName)
Note: about the other reasons for not using Arrays:
They are mutable.
They are invariant.
They are not part of the collections hierarchy, so you can't use them in generic methods (well, you actually can, but that requires more tricks).
Their equals is by reference instead of by value.
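A quick illustration of the toString and equals points (a minimal sketch you can paste into the REPL):
Array(1, 2, 3) == Array(1, 2, 3) // false: Array compares by reference
List(1, 2, 3) == List(1, 2, 3)   // true: List compares element by element
Array(1, 2, 3).toString          // something like "[I@7a81197d" - the JVM default
List(1, 2, 3).toString           // "List(1, 2, 3)"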
This is a "real life" OO design question. I am working with Scala, and interested in specific Scala solutions, but I'm definitely open to hear generic thoughts.
I am implementing a branch-and-bound combinatorial optimization program. The algorithm itself is pretty easy to implement. For each different problem we just need to implement a class that contains information about what are the allowed neighbor states for the search, how to calculate the cost, and then potentially what is the lower bound, etc...
I also want to be able to experiment with different data structures. For instance, one way to store a logic formula is using a simple list of lists of integers. This represents a set of clauses, each integer a literal. We can have a much better performance though if we do something like a "two-literal watch list", and store some extra information about the formula in general.
That all would mean something like this
// (An object can't take type parameters, so the parameter moves onto solve.)
object BnBSolver {
  def solve[S <: BnBState[_]](states: Seq[S], best_state: Option[S]): Option[S] =
    if (states.isEmpty) best_state
    else {
      val next_state = states.head
      /* compare next_state to the best state, compute new_branches
         and new_best_state, etc... */
      val new_states = new_branches ++ states.tail
      solve(new_states, new_best_state)
    }
}
class BnBState[F <: Formula[F]](clauses: F, assigned_variables: List[Int]) {
  def cost: Int = ??? /* problem-specific */
  def branches: Seq[BnBState[F]] = {
    val ll = clauses.pick_variable
    List(
      new BnBState(clauses.assign(ll), ll :: assigned_variables),
      new BnBState(clauses.assign(-ll), -ll :: assigned_variables)
    )
  }
}
case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Int): F =
    Formula(clauses.filterNot(_ contains ll)
      .map(_.filterNot(_ == -ll)))
}
Hopefully this is not too crazy, wrong, or confusing. The whole issue here is that the assign method of a formula would usually take just the current literal that is going to be assigned. In the case of two-literal watch lists, though, you are doing something lazy that requires you to know later which literals have been previously assigned.
One way to fix this is to just keep the list of previously assigned literals in the data structure, maybe as a private member, making it a self-contained lazy data structure. But this list of previous assignments is actually something that may be naturally available to whoever is using the Formula class. So it makes sense to allow the caller to simply provide the list on every assign, if necessary.
The problem here is that we can no longer have an abstract Formula class that just declares assign(ll: Int): Formula. In the normal case this is OK, but if this is a two-literal watch list Formula, the method is actually assign(literal: Int, previous_assignments: Seq[Int]).
From the point of view of the classes using it, that is kind of OK. But then how do we write generic code that can take all these different versions of Formula? Because of the drastic signature change, it cannot simply be an abstract method. We could maybe force the user to always provide the full list of assigned variables, but then this is kind of a lie too. What to do?
The idea is that the watch list class just becomes a regular assign(Int) class if I write some kind of adapter method that knows where to take the previous assignments from... I am thinking maybe with implicits we can cook something up.
I'll try to make my answer a bit general, since I'm not convinced I'm completely following what you are trying to do. Anyway...
Generally, the first thought should be to accept a common super-class as a parameter. Obviously that won't work with Int and Seq[Int].
You could just have two methods; have one call the other. For instance just wrap an Int into a Seq[Int] with one element and pass that to the other method.
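A minimal sketch of that option (this simplified Formula and the assignOne helper are illustrative, not from the question):
case class Formula(clauses: List[List[Int]]) {
  // Assign a single literal: drop satisfied clauses,
  // remove the negated literal from the remaining ones.
  private def assignOne(ll: Int): Formula =
    Formula(clauses.filterNot(_ contains ll).map(_.filterNot(_ == -ll)))

  def assign(lls: Seq[Int]): Formula = lls.foldLeft(this)(_ assignOne _)
  def assign(ll: Int): Formula = assign(Seq(ll)) // one method calls the other
}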
You can also wrap the parameter in some custom class, e.g.
class Assignment {
  ...
}

def int2Assignment(n: Int): Assignment = ...
def seq2Assignment(s: Seq[Int]): Assignment = ...

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Assignment): F = ...
}
And of course you would have the option to make those conversion methods implicit so that callers just have to import them, not call them explicitly.
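Fleshed out, that might look like the sketch below (representing an Assignment as a Seq[Int] of literals is an assumption made for illustration):
class Assignment(val literals: Seq[Int])

object Assignment {
  // Living in the companion object, these conversions are found
  // automatically whenever an Assignment is expected.
  implicit def int2Assignment(n: Int): Assignment = new Assignment(Seq(n))
  implicit def seq2Assignment(s: Seq[Int]): Assignment = new Assignment(s)
}

case class Formula(clauses: List[List[Int]]) {
  def assign(a: Assignment): Formula =
    a.literals.foldLeft(this) { (f, ll) =>
      Formula(f.clauses.filterNot(_ contains ll).map(_.filterNot(_ == -ll)))
    }
}

// Both calls go through the same method:
// Formula(List(List(1, -2), List(2, 3))).assign(1)
// Formula(List(List(1, -2), List(2, 3))).assign(Seq(1, 3))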
Lastly, you could do this with a typeclass:
trait Assigner[A] {
  ...
}

implicit val intAssigner = new Assigner[Int] {
  ...
}
implicit val seqAssigner = new Assigner[Seq[Int]] {
  ...
}

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign[A: Assigner](ll: A): F = ...
}
You could also make that type parameter at the class level:
case class Formula[A: Assigner, F <: Formula[A, F]](clauses: List[List[Int]]) {
  def assign(ll: A): F = ...
}
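To make the typeclass route concrete, here is a minimal self-contained sketch (the assignments method and the simplified, non-recursive Formula are assumptions made for illustration):
trait Assigner[A] {
  def assignments(a: A): Seq[Int] // normalize the input to a list of literals
}

implicit val intAssigner: Assigner[Int] = new Assigner[Int] {
  def assignments(a: Int): Seq[Int] = Seq(a)
}
implicit val seqAssigner: Assigner[Seq[Int]] = new Assigner[Seq[Int]] {
  def assignments(a: Seq[Int]): Seq[Int] = a
}

case class Formula(clauses: List[List[Int]]) {
  def assign[A](a: A)(implicit ev: Assigner[A]): Formula =
    ev.assignments(a).foldLeft(this) { (f, ll) =>
      Formula(f.clauses.filterNot(_ contains ll).map(_.filterNot(_ == -ll)))
    }
}

// Both call sites compile against the same generic method:
// Formula(List(List(1, -2), List(2, 3))).assign(1)
// Formula(List(List(1, -2), List(2, 3))).assign(Seq(1, 3))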
Which one of these paths is best is up to preference and how it might fit in with the rest of the code.
There are two ways of defining a method for two different classes inheriting the same trait in Scala.
sealed trait Z { def minus: String }
case class A() extends Z { def minus = "a" }
case class B() extends Z { def minus = "b" }
The alternative is the following:
sealed trait Z {
  def minus: String = this match {
    case A() => "a"
    case B() => "b"
  }
}
case class A() extends Z
case class B() extends Z
The first method repeats the method name, whereas the second method repeats the class name.
I think the first method is the best one to use because the code for each class is kept separate. However, I have found myself often using the second one for complicated methods, so that additional arguments can be added very easily, for example like this:
sealed trait Z {
  def minus(word: Boolean = false): String = this match {
    case A() => if (word) "ant" else "a"
    case B() => if (word) "boat" else "b"
  }
}
case class A() extends Z
case class B() extends Z
What other differences are there between these practices? Are there any bugs waiting for me if I choose the second approach?
EDIT:
I was quoted the open/closed principle, but sometimes I need to modify not only the output of the functions depending on new case classes, but also the input, because of code refactoring. Is there a better pattern than the first one? If I want to add the previously mentioned functionality to the first example, it would yield this ugly code where the input is repeated:
sealed trait Z { def minus(word: Boolean): String; def minus: String = minus(false) }
case class A() extends Z { def minus(word: Boolean) = if (word) "ant" else "a" }
case class B() extends Z { def minus(word: Boolean) = if (word) "boat" else "b" }
I would choose the first one.
Why? Simply to keep the Open/Closed Principle.
Indeed, if you want to add another subclass, let's say case class C, you'll have to modify the supertrait/superclass to insert the new condition... ugly.
Your scenario has a parallel in Java: the template/strategy pattern versus conditionals.
UPDATE:
In your last scenario, you can't avoid the "duplication" of the input. Indeed, parameter types in Scala aren't inferable.
It's still better to have cohesive methods than to blend everything into one method taking as many parameters as the union of all the methods expects.
Just imagine ten conditions in your supertrait method. What if you inadvertently change the behavior of one of them? Every change would be risky, and the supertrait's unit tests would have to run each time you modify it...
Moreover, inadvertently changing an input parameter (not a BEHAVIOR) is not "dangerous" at all. Why? Because the compiler will tell you that a parameter or parameter type is no longer relevant.
And if you want to change it and do the same for every subclass... ask your IDE; it loves refactoring things like this in one click.
As this link explains:
Why the open/closed principle matters:
No unit testing required.
No need to understand the source code of an important and huge class.
Since the drawing code is moved to the concrete subclasses, there is a reduced risk of affecting old functionality when new functionality is added.
UPDATE 2:
Here is a sample avoiding input duplication, fitting your expectation:
sealed trait Z {
  def minus(word: Boolean): String = if (word) whenWord else whenNotWord
  def whenWord: String
  def whenNotWord: String
}
case class A() extends Z { def whenWord = "ant"; def whenNotWord = "a" }
case class B() extends Z { def whenWord = "boat"; def whenNotWord = "b" }
Thanks type inference :)
Personally, I'd stay away from the second approach. Each time you add a new sub class of Z you have to touch the shared minus method, potentially putting at risk the behavior tied to the existing implementations. With the first approach adding a new subclass has no potential side effect on the existing structures. There might be a little of the Open/Closed Principle in here and your second approach might violate it.
The Open/Closed Principle can be violated with both approaches; they are orthogonal to each other. The first approach makes it easy to add a new type and implement the required methods, but it breaks the Open/Closed Principle if you need to add a new method to the hierarchy or refactor method signatures to the point that client code breaks. That is, after all, the reason default methods were added to Java 8 interfaces: so that the old API could be extended without requiring client code to adapt.
This approach is typical for OOP.
The second approach is more typical of FP. In this case it is easy to add methods but hard to add a new type (that is where it breaks O/C). It is a good approach for closed hierarchies; the typical example is algebraic data types (ADTs). A standardized protocol that is not meant to be extended by clients could be a candidate.
Languages struggle to allow designing APIs that have both benefits - making it easy to add new types as well as new methods. This problem is called the Expression Problem. Scala provides the typeclass pattern to address it, which allows adding functionality to existing types in an ad-hoc and selective manner.
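A hedged sketch of what that looks like for this example (the Minus typeclass name is illustrative): the operation is defined outside the Z hierarchy, so it can be added to existing types without touching them:
sealed trait Z
case class A() extends Z
case class B() extends Z

// The typeclass: a capability defined outside the hierarchy.
trait Minus[T] { def minus(t: T): String }

implicit val minusA: Minus[A] = new Minus[A] { def minus(t: A) = "a" }
implicit val minusB: Minus[B] = new Minus[B] { def minus(t: B) = "b" }

def minus[T](t: T)(implicit m: Minus[T]): String = m.minus(t)

// minus(A()) // "a"
// minus(B()) // "b"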
Which one is better depends on your use case.
Starting in Scala 3, you have the possibility to use trait parameters (just like classes have parameters), which simplifies things quite a lot in this case:
trait Z(x: String) { def minus: String = x }
case class A() extends Z("a")
case class B() extends Z("b")
A().minus // "a"
B().minus // "b"
val and var in Scala - the concept is understandable enough, I think.
I wanted to do something like this (Java-like):
trait PersonInfo {
  var name: Option[String] = None
  var address: Option[String] = None
  // plus another 30 vars, for example
}

case class Person() extends PersonInfo

object TestObject {
  def main(args: Array[String]): Unit = {
    val p = new Person()
    p.name = Some("someName")
    p.address = Some("someAddress")
  }
}
so I can change the name, address, etc...
This works well enough, but the thing is, in my program I end up with everything as vars.
As I understand it, vals are "preferred" in Scala. How can vals work in this type of example without having to rewrite all 30+ arguments every time one of them is changed?
That is, I could have
trait PersonInfo {
  val name: Option[String]
  val address: Option[String]
  // plus another 30 vals, for example
}

case class Person(name: Option[String] = None, address: Option[String] = None /* ...plus another 30... */) extends PersonInfo

object TestObject {
  def main(args: Array[String]): Unit = {
    val p = Person(Some("someName"), Some("someAddress") /* ..... */)
    // and if I want to change one thing, the address for example
    val p2 = Person(Some("someName"), Some("someOtherAddress") /* ..... */)
  }
}
Is this the "normal" scala way of doing thing (not withstanding the 22 parameters limit)?
As can be seen, I'm very new to all this.
At first the basic option of Tony K.:
def withName(n : String) = Person(n, address)
looked promising, but I have quite a few classes that extend PersonInfo. That means in each one I would have to re-implement the defs - lots of typing and cutting and pasting just to do something simple.
If I convert the trait PersonInfo to a normal class and put all the defs in it, then I have the problem of how to return a Person, not a PersonInfo.
Is there a clever Scala thing to implement somehow in the trait or superclass that all subclasses really extend?
As far as I can see, everything works very well in Scala when the examples are very simple - 2 or 3 parameters - but when you have dozens it becomes very tedious and unworkable.
The PersonContext of weirdcanada is, I think, similar; I'm still thinking about this one. I guess if I have 43 parameters I would need to break them up into multiple temporary classes just to pump the parameters into Person.
The copy option is also interesting: cryptic, but a lot less typing.
Coming from Java I was hoping for some clever tricks from Scala.
Case classes have a pre-defined copy method which you should use for this.
case class Person(name: String, age: Int)
val mike = Person("Mike", 42)
val newMike = mike.copy(age = 43)
How does this work? copy is just one of the methods (besides equals, hashCode etc) that the compiler writes for you. In this example it is:
def copy(name: String = name, age: Int = age): Person = new Person(name, age)
The values name and age in this method shadow the values in the outer scope. As you can see, default values are provided, so you only need to specify the ones that you want to change. The others default to the values in the current instance.
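Applied to the Person example from the question (a sketch showing just two of the 30+ fields), changing one field stays a one-liner no matter how many fields the case class has:
case class Person(
  name: Option[String] = None,
  address: Option[String] = None
  // ... plus another 30 fields
)

val p = Person(Some("someName"), Some("someAddress"))
val p2 = p.copy(address = Some("someOtherAddress")) // only the change is spelled out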
The reason for the existence of var in Scala is to support mutable state. In some cases, mutable state is truly what you want (e.g. for performance or clarity reasons).
You are correct, though, that there is much evidence and experience behind the encouragement to use immutable state. Things work better on many fronts (concurrency, clarity of reasoning, etc.).
One answer to your question is to provide mutator methods to the class in question that don't actually mutate the state, but instead return a new object with a modified entry:
case class Person(name: String, address: String) {
  def withName(n: String) = Person(n, address)
  ...
}
This particular solution does involve coding potentially long parameter lists, but only within the class itself. Users of it get off easy:
val p = Person("Joe", "N St")
val p2 = p.withName("Sam")
...
If you consider the reasons you'd want to mutate state, then things become clearer. If you are reading data from a database, you could have many reasons for mutating an object:
The database itself changed, and you want to auto-refresh the state of the object in memory
You want to make an update to the database itself
You want to pass an object around and have it mutated by methods all over the place
In the first case, immutable state is easy:
val updatedObj = oldObj.refresh
The second is much more complex, and there are many ways to handle it (including mutable state with dirty-field tracking). It pays to look at libraries like Squeryl, where you can write things in a nice DSL (see http://squeryl.org/inserts-updates-delete.html) and avoid direct object mutation altogether.
The final one is the one you generally want to avoid for reasons of complexity. Such things are hard to parallelize, hard to reason about, and lead to all sorts of bugs where one class has a reference to another, but no guarantees about the stability of it. This kind of usage is the one that screams for immutable state of the form we are talking about.
Scala has adopted many paradigms from functional programming, one of them being a focus on using objects with immutable state. This means moving away from getters and setters within your classes and instead opting to do what Tony K. above has suggested: when you need to change the "state" of an inner object, define a function that will return a new Person object.
Trying to use immutable objects is likely the preferred Scala way.
In regards to the 22 parameter issue, you could create a context class that is passed to the constructor of Person:
case class PersonContext(all: String, of: String, your: String, parameters: Int)
class Person(context: PersonContext) extends PersonInfo { ... }
If you find yourself changing an address often and don't want to have to go through the PersonContext rigamarole, you can define a method:
def addressChanger(person: Person, address: String): Person = {
  val contextWithNewAddress = ...
  Person(contextWithNewAddress)
}
You could take this even further, and define a method on Person:
class Person(context: PersonContext) extends PersonInfo {
  ...
  def newAddress(address: String): Person = {
    addressChanger(this, address)
  }
}
In your code, you just need to remember that when you are updating your objects, you're often getting new objects in return. Once you get used to that concept, it becomes very natural.
I have a tree object that implements lazy depth-first-search as a TraversableView.
import collection.TraversableView

case class Node[T](label: T, ns: Node[T]*)

case class Tree[T](root: Node[T]) extends TraversableView[T, Traversable[_]] {
  protected def underlying = null

  def foreach[U](f: (T) => U) {
    def dfs(r: Node[T]): TraversableView[T, Traversable[_]] = {
      Traversable(r.label).view ++ r.ns.flatMap(dfs(_))
    }
    dfs(root).foreach(f)
  }
}
This is appealingly concise and appears to work; however, the underlying = null method makes me nervous because I don't understand what it means. (IntelliJ wrote that line for me.) I suppose it might be correct, because in this case there is no underlying strict representation of the tree, but I'm not sure.
Is the above code correct, or do I have to do something more with underlying?
Users of views will expect to be able to call force to get a strict collection. With your implementation, calling force on a tree (or any transformation of a tree—e.g., tree.take(10).filter(pred), etc.) will result in a null pointer exception.
This may be fine with you—you'll still be able to force evaluation using toList, for example (although you should follow the advice in DaoWen's comment if you go that route).
The actual contents of underlying should never get used, though, so there's an easy fix—just make it an appropriately typed empty collection:
protected def underlying = Vector.empty[T]
Now if a user calls tree.force, they'll get a vector of labels, statically typed as a Traversable[T].
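A small usage sketch (hedged; the exact static types depend on the 2.9/2.10-era view machinery):
val t = Tree(Node(1, Node(2, Node(4)), Node(3)))
t.foreach(println) // 1 2 4 3: labels in depth-first order
t.force            // with the Vector.empty fix: a strict collection of labels,
                   // statically typed as Traversable[Int]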