overriding classes using sbt console/spark-shell - scala

So I've recently gotten much more interested in developing with sbt console/spark-shell, and I have a question about working with existing packages/jars. I know that you can import jars and that it's possible to override classes, but I'm wondering: is it possible to override a class and force all other classes to point to that overridden class?
So if I have
class Bar() {
  def a() = { (new Foo).blah() }
}
and I override Foo, is there a way I can do that so that I don’t need to also override Bar?

Let's explain this with a timeline:
1. class X { def t = 1 }
2. class Y { def x: X = new X }
Up to here the definition of class Y at line 2 refers to the definition of X in line 1.
3. class X { def t = 2 }
Now, class Y from line 2 still refers to X from line 1. This is how the REPL works. Changes are effective forward in time not backwards.
4. class Y { def x: X = new X }
Now, as you expect, the new Y at line 4 will refer to the new X from line 3.
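For example, a REPL session makes the forward-only behaviour visible (a sketch; output formatting varies slightly by Scala version):

scala> class X { def t = 1 }
scala> class Y { def x: X = new X }
scala> class X { def t = 2 }
scala> (new Y).x.t
res0: Int = 1   // Y still builds the X from the first definition
scala> class Y { def x: X = new X }
scala> (new Y).x.t
res1: Int = 2   // the re-entered Y now sees the new X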

Normally, you'd do that by replacing the class in your classpath. If the new version is binary-compatible, you could even re-run without re-compiling.
The couple of hitches are that the REPL compiler is resident, and the class is in a specific package (e.g., $line8). You'd need a fresh compiler to use the refreshed package.
There are open tickets to retain or discard $line packages when resetting the compiler. The other missing piece is to compile the new version of the class in the appropriate package, or conversely to regenerate the consuming class.
Note that the :require command lets you add a jar but not replace classes.
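In practice, the usual workaround in sbt console/spark-shell is simply to re-enter (or :paste) the dependent classes after the one you changed, so they are recompiled against the new definition. A minimal sketch using the Foo/Bar example from the question (Foo's members and return types are assumed here, since the question doesn't show them):

// After changing Foo, paste the new Foo *and* every class that uses it.
class Foo {
  def blah(): String = "new behaviour"   // assumed signature, for illustration only
}

class Bar {
  def a() = { (new Foo).blah() }   // now compiled against the Foo defined above
}

(new Bar).a()   // uses the overriding Foo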

Related

Why this map function does not give traits' simple names

I am trying to get the names of all the traits a class extends, using getInterfaces, which returns an array of the class's traits. When I manually access each member of the array, the method getName returns simple names like this:
trait A
trait B
class C() extends A, B
val c = C()
val arr = c.getClass.getInterfaces
arr(0).getName // : String = A
arr(1).getName // : String = B
However, when I use the map function on arr, the resulting array contains a cryptic version of the traits' names:
arr.map(t => t.getName) // : Array[String] = Array(repl$.rs$line$1$A, repl$.rs$line$2$B)
The goal of this question is not how to get an array of simple names (for that, I can just use arr.map(t => t.getSimpleName)). What I'm curious about is why accessing the array manually and using map do not yield the same result. Am I wrong to think that both ways are equivalent?
I believe you are running things in the Scala REPL or Ammonite.
When you define:
trait A
trait B
class C() extends A, B
classes A, B and C aren't defined at the top level of the root package. The REPL creates an isolated environment, compiles the code and loads the results into some inner "anonymous" namespace.
Except that this namespace is not really anonymous: where the bytecode was created is reflected in the class name. So apparently there was something similar (not necessarily identical) to:
// repl$ suggests an object
object repl {
  // .rs sounds like a nested object(?)
  object rs {
    // $line sounds like a nested class
    class line { /* ... */ }
    // $line$1 sounds like the first anonymous instance of line
    new line { trait A }
    // import from above
    // $line$2 sounds like the second anonymous instance of line
    new line { trait B }
    // import from above
    // ...
  }
}
which was done because of how scoping works in the REPL: each new line creates a new scope in which previous definitions are visible and new ones are added (possibly shadowing some old definition). This can be achieved by wrapping each new piece of code in a new anonymous class, compiling it, loading it into the classpath, instantiating it and importing its content. By putting each new line into a separate class, the REPL is able to compile and run things in steps, without waiting for you to tell it that the script is complete.
When you access class names with runtime reflection you are seeing artifacts of how things are evaluated. One path might go through the REPL's prettifiers, which hide such things, while the other bypasses them, so you see the raw value as the JVM sees it.
The problem is not with map but rather with Array, specifically its toString method (which is one of many reasons for not using Array).
Actually, in this case it is even worse, since the REPL does some weird things to try to pretty-print Arrays, which here didn't work well (and, IMHO, just adds to the confusion).
You can fix this by calling mkString directly, like:
val arr = c.getClass.getInterfaces
val result = arr.map(t => t.getName)
val text = result.mkString("[", ", ", "]")
println(text)
However, I would rather suggest not using Array at all; instead, convert it to a proper collection (e.g. List) as soon as possible, like:
val interfaces = c.getClass.getInterfaces.toList
interfaces.map(t => t.getName)
Note: About the other reasons for not using Arrays
They are mutable.
They are invariant.
They are not part of the collections hierarchy thus you can't use them on generic methods (well, you actually can but that requires more tricks).
Their equals is by reference instead of by value.
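The last point is easy to check (a quick sketch; results as in a standard Scala REPL):

Array(1, 2, 3) == Array(1, 2, 3)             // false: compares references
List(1, 2, 3) == List(1, 2, 3)               // true: compares elements
Array(1, 2, 3) sameElements Array(1, 2, 3)   // true: element-wise comparison for arrays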

Aliasing Scala package objects

I'm developing a library that depends on another. The dependency has a package object that I'd like to alias into my own package domain, to 'hide' the underlying library from the users of the one I'm developing, for potential later reimplementation of that library. I've tried a couple things, including
object functions {
  def identity(a: Any): Any = a
  def toUpper(s: String): String = s.toUpperCase
}

object renamedfunctions {
  import functions._
}
This compiles but import renamedfunctions._ brings nothing into scope. I've also tried extending the backing object, but scala objects are un-extendable. Does anyone know of a way to accomplish what I'm trying to do without forking the underlying library?
It is not possible to do this with Scala packages, in general. Usually, you would only alias a package locally within a file:
import scala.{ math => physics }
scala> physics.min(1, 2)
res6: Int = 1
But this doesn't do what you ask. Packages themselves aren't values or types, so you cannot assign them as such. These will fail:
type physics = scala.math
val physics = scala.math
With a package object, you can grab ahold of its concrete members, but not the classes within. For example:
scala> val physics = scala.math.`package`
physics: math.type = scala.math.package$@42fcc7e6
scala> physics.min(1, 2)
res0: Int = 1
But using objects or types that belong to the traditional package won't work:
scala> scala.math.BigDecimal(1)
res1: scala.math.BigDecimal = 1
scala> physics.BigDecimal(1)
<console>:13: error: value BigDecimal is not a member of object scala.math.package
physics.BigDecimal(1)
^
Ok, so what should you do?
The reason you're even considering this is that you want to hide which library you're using so that it can easily be replaced later. If that's the case, what you should do is hide the library behind another interface or object (a facade). That doesn't mean you need to forward every single method and value contained in the library, only the ones you're actually using. This way, when it comes to migrating to another library, you only need to change one class, because the rest of the code only references the facade.
For example, if we wanted to use min and max from scala.math, but later wanted to replace it with another library that provided a more efficient solution (if such a thing exists), we could create a facade like this:
object Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}
All other classes would use Math.min and Math.max, so that when scala.math was replaced, they could remain the same. You could also make Math a trait (sans implementations) and provide the implementations in a sub-class or object (say ScalaMath), so that classes could inject different implementations.
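A minimal sketch of that last suggestion; the small Stats client class is purely illustrative and not part of the original answer:

// the facade the rest of the code depends on
trait Math {
  def min(x: Int, y: Int): Int
  def max(x: Int, y: Int): Int
}

// one concrete implementation, currently backed by scala.math
object ScalaMath extends Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}

// client code only sees the Math trait, so the backing library can be swapped
class Stats(math: Math) {
  def clamp(v: Int, lo: Int, hi: Int): Int = math.max(lo, math.min(v, hi))
}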
Unfortunately, the commented-out code crashes the compiler:
package object p { def f = 42 }

package q {
  object `package` { def f = p.f }
}

/*
package object q {
  val `package` = p.`package`
}
*/

package client {
  import q.`package`._

  object Test extends App {
    println(f)
  }
}
That would keep clients from breaking when you migrate to implementations in a package object.
Simply:
val renamedfunctions = functions
import renamedfunctions._
You can see it being done in the scala library itself: https://github.com/scala/scala/blob/2.12.x/src/library/scala/Predef.scala#L150
val Map = immutable.Map
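Put together with the functions object from the question, a quick check (a sketch, e.g. in the REPL) looks like this:

object functions {
  def identity(a: Any): Any = a
  def toUpper(s: String): String = s.toUpperCase
}

val renamedfunctions = functions   // a stable value aliasing the object
import renamedfunctions._

toUpper("hello")                   // "HELLO": members are now in scope via the alias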

Scala worksheet can not resolve class name - IntelliJ IDEA

I am following along with a lecture; the lecturer is using Eclipse, but I am using IntelliJ IDEA Community Edition 15.0.6. The code in a Scala worksheet named rationals.scala is as follows:
object rationals {
  val x = new Rational(1, 2)
  x.numer
  x.denom
}

//noinspection ScalaPackageName
class Rational(x: Int, y: Int) {
  def numer = x
  def denom = y
}
The Scala worksheet will not compute, and there is a warning (not an error) associated with the class definition that reads:
Package names doesn't correspond to directories structure, this may
cause problems with resolve to classes from this file
Also, and this is odd but maybe significant, IDEA flags numer and denom as typos.
Any guidance? thx
The problem isn't the name matching the directory structure; the actual problem is that you have multiple top-level definitions in the worksheet, which it doesn't like. If you declare the class inside the object, then it'll compute properly:
object rationals {
  class Rational(x: Int, y: Int) {
    def numer = x
    def denom = y
  }

  val x = new Rational(1, 2)
  x.numer
  x.denom
}
@Yuval Itzchakov @MaxWen
If Rational is defined in a Rational.scala file outside the worksheet and referenced from it, then you have to make sure to tick the Make project box the first time you run it.
Eclipse and IntelliJ worksheets don't work the same.

Trying to understand Precedence Rule 4 in Scala In Depth Book

I am trying to understand the 4th rule on precedence of bindings on page 93 of Joshua Suereth's book Scala in Depth.
According to this rule:
definitions made available by a package clause not in the source file where the definition occurs have lowest precedence.
It is this rule that I intend to test.
So off I went and tried to follow Josh's train of thought on Page 94. He creates a source file called externalbindings.scala and I did the same, with some changes to it as below
package com.att.scala

class extbindings {
  def showX(x: Int): Int = {
    x
  }

  object x1 {
    override def toString = "Externally bound obj object in com.att.scala"
  }
}
Next he asks us to create another file that will allow us to test the above rule. I created a file called precedence.scala:
package com.att.scala

class PrecedenceTest { // Josh has an object here instead of a class
  def testPrecedence(): Unit = { // Josh has a main method instead of this
    testSamePackage()
    //testWildCardImport()
    //testExplicitImport()
    //testInlineDefinition()
  }

  println("First statement of Constructor")
  testPrecedence
  println("Last statement of Constructor")

  def testSamePackage() {
    val ext1 = new extbindings()
    val x = ext1.showX(100)
    println("x is " + x)
    println(obj1) // Eclipse complains here
  }
}
Now, Josh is able to print out the value of the object in his example by simply calling the <package-name>.<object-name>.testSamePackage method in the REPL.
His output is:
Externally bound x object in package test
In my version, the files are in Eclipse and I use the embedded Scala interpreter.
Eclipse complains right at println(obj1); it says: not found: value obj1.
Am I doing something obviously wrong in setting up the test files?
I would like to be able to test the rule I mentioned above and get the output:
Externally bound obj object in package com.att.scala
I haven't read the book, so I'm not really sure whether your code shows what the book wants to tell you.
Nevertheless, the error message is correct: obj1 is not found because it doesn't exist. In your code it is called x1. Because it is a member of extbindings, you have to access it as a member of that class:
println(ext1.x1)
If x1 is defined outside of class extbindings, in the scope of package com.att.scala, you can access it directly:
println(x1)
If it is defined in another package, you have to prefix it with the package name:
println(com.att.scala2.x1)
To simplify some things you can import x1:
import ext1.x1
println(x1)
Finally a tip to improve your code: name types in UpperCamelCase: extbindings -> Extbindings, x1 -> X1
If you replace a singleton object with a class, you will need to create an instance of that class.
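If the goal is to exercise rule 4 itself (a definition made available by a package clause in a different source file), one way to set it up, as suggested above, is to define x1 at the top level of the package rather than inside the class. A sketch under that assumption (file names are only illustrative):

// file: externalbindings.scala
package com.att.scala

object x1 {
  override def toString = "Externally bound obj object in com.att.scala"
}

// file: precedence.scala
package com.att.scala

class PrecedenceTest {
  def testSamePackage(): Unit =
    println(x1) // resolved through the package clause, the lowest-precedence binding
}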

Scala singleton factories and class constants

OK, in the question about 'Class Variables as constants', I get the fact that the constants are not available until after the 'official' constructor has been run (i.e. until you have an instance). BUT, what if I need the companion singleton to make calls on the class:
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)
}

class thing(x: Int) {
  import thing.someConst
  val field = x * someConst
  override def toString = "val: " + field
}
If I create the companion object first, the 'new thing(x)' (in the companion) causes an error. However, if I define the class first, the 'x * someConst' (in the class definition) causes an error.
I also tried placing the class definition inside the singleton.
object thing {
  var someConst = 42
  def apply(x: Int) = new thing(x)

  class thing(x: Int) {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
However, doing this gives me a 'thing.thing' type object
val t = thing(2)
results in
t: thing.thing = val: 84
The only useful solution I've come up with is to create an abstract class, a companion and an inner class (which extends the abstract class):
abstract class thing

object thing {
  val someConst = 42
  def apply(x: Int) = new privThing(x)

  class privThing(x: Int) extends thing {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
val t1 = thing(2)
val tArr: Array[thing] = Array(t1)
OK, 't1' still has type of 'thing.privThing', but it can now be treated as a 'thing'.
However, it's still not an elegant solution, can anyone tell me a better way to do this?
PS. I should mention, I'm using Scala 2.8.1 on Windows 7
First, the error you're seeing (you didn't tell me what it is) isn't a runtime error. The thing constructor isn't called when the thing singleton is initialized -- it's called later when you call thing.apply, so there's no circular reference at runtime.
Second, you do have a circular reference at compile time, but that doesn't cause a problem when you're compiling a scala file that you've saved on disk -- the compiler can even resolve circular references between different files. (I tested. I put your original code in a file and compiled it, and it worked fine.)
Your real problem comes from trying to run this code in the Scala REPL. Here's what the REPL does and why this is a problem in the REPL. You're entering object thing and as soon as you finish, the REPL tries to compile it, because it's reached the end of a coherent chunk of code. (Semicolon inference was able to infer a semicolon at the end of the object, and that meant the compiler could get to work on that chunk of code.) But since you haven't defined class thing it can't compile it. You have the same problem when you reverse the definitions of class thing and object thing.
The solution is to nest both class thing and object thing inside some outer object. This will defer compilation until that outer object is complete, at which point the compiler will see the definitions of class thing and object thing at the same time. You can run import thingwrapper._ right after that to make class thing and object thing available in global scope for the REPL. When you're ready to integrate your code into a file somewhere, just ditch the outer class thingwrapper.
object thingwrapper {
  // you only need a wrapper object in the REPL
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }

  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}
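Used from the REPL, the wrapper then behaves as intended (a quick sketch; exact result formatting varies by Scala version):

import thingwrapper._
val t = thing(2)   // the companion's apply; someConst is already initialized
println(t)         // prints: val: 84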
Scala 2.12 or later could benefit from SIP-23, which just (August 2016) passed to the next iteration (considered a "good idea", but still a work in progress):
Literal-based singleton types
Singleton types bridge the gap between the value level and the type level and hence allow the exploration in Scala of techniques which would typically only be available in languages with support for full-spectrum dependent types.
Scala’s type system can model constants (e.g. 42, "foo", classOf[String]).
These are inferred in cases like object O { final val x = 42 }. They are used to denote and propagate compile time constants (See 6.24 Constant Expressions and discussion of “constant value definition” in 4.1 Value Declarations and Definitions).
However, there is no surface syntax to express such types. This forces people who need them to create macros that provide workarounds (e.g. shapeless).
This can be changed in a relatively simple way, as the whole machinery to enable this is already present in the scala compiler.
type _42 = 42.type
type Unt = ().type
type _1 = 1 // .type is optional for literals
final val x = 1
type one = x.type // … but mandatory for identifiers
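For reference, in Scala 2.13 and later, where SIP-23 eventually shipped, literal types are written as the literal itself rather than with a .type suffix; a minimal sketch:

type _42 = 42          // the literal is the type
val a: _42 = 42        // compiles: 42 conforms to the type 42
// val b: _42 = 43     // would not compile: 43 is not 42

final val x = 1
type One = x.type      // singleton type of a stable identifier, as before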