I am following along with a lecture. The lecturer is using Eclipse, but I am using IntelliJ IDEA Community Edition 15.0.6, and the code in a Scala worksheet named rationals.scala is as follows:
object rationals {
  val x = new Rational(1, 2)
  x.numer
  x.denom
}
//noinspection ScalaPackageName
class Rational(x: Int, y: Int) {
  def numer = x
  def denom = y
}
The Scala worksheet will not compute, and there is a warning (not an error) associated with the class definition that reads:
Package names doesn't correspond to directories structure, this may
cause problems with resolve to classes from this file
Also, and this is odd but maybe significant, IDEA flags numer and denom as typos.
Any guidance? thx
The problem isn't the package name matching the directory structure; the actual problem is that you have multiple top-level definitions in the worksheet, which it doesn't like. If you declare the class inside the object, it will compute properly:
object rationals {
  class Rational(x: Int, y: Int) {
    def numer = x
    def denom = y
  }

  val x = new Rational(1, 2)
  x.numer
  x.denom
}
@Yuval Itzchakov @MaxWen If you reference Rational.scala outside the worksheet, then you have to make sure to tick the Make project box the first time you run it.
Eclipse and IntelliJ worksheets don't work the same.
I'm developing a library that depends on another. The dependency has a package object that I'd like to alias into my own package domain, to 'hide' the underlying library from the users of the one I'm developing, for potential later reimplementation of that library. I've tried a couple things, including
object functions {
  def identity(a: Any): Any = a
  def toUpper(s: String): String = s.toUpperCase
}

object renamedfunctions {
  import functions._
}
This compiles, but import renamedfunctions._ brings nothing into scope. I've also tried extending the backing object, but Scala objects cannot be extended. Does anyone know of a way to accomplish what I'm trying to do without forking the underlying library?
It is not possible to do this with Scala packages, in general. Usually, you would only alias a package locally within a file:
import scala.{ math => physics }
scala> physics.min(1, 2)
res6: Int = 1
But this doesn't do what you ask. Packages themselves aren't values or types, so you cannot assign them as such. These will fail:
type physics = scala.math
val physics = scala.math
With a package object, you can grab hold of its concrete members, but not the classes within. For example:
scala> val physics = scala.math.`package`
physics: math.type = scala.math.package$@42fcc7e6
scala> physics.min(1, 2)
res0: Int = 1
But using objects or types that belong to the traditional package won't work:
scala> scala.math.BigDecimal(1)
res1: scala.math.BigDecimal = 1
scala> physics.BigDecimal(1)
<console>:13: error: value BigDecimal is not a member of object scala.math.package
physics.BigDecimal(1)
^
Ok, so what should you do?
The reason you're even considering this is that you want to hide which library you're using so that it can easily be replaced later. If that's the case, what you should do is hide the library behind another interface or object (a facade). That doesn't mean you need to forward every single method and value contained in the library, only the ones you're actually using. This way, when it comes time to migrate to another library, you only need to change one class, because the rest of the code references only the facade.
For example, if we wanted to use min and max from scala.math, but later wanted to replace it with another library that provided a more efficient solution (if such a thing exists), we could create a facade like this:
object Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}
All other classes would use Math.min and Math.max, so that when scala.math was replaced, they could remain the same. You could also make Math a trait (sans implementations) and provide the implementations in a sub-class or object (say ScalaMath), so that classes could inject different implementations.
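For instance, a minimal sketch of that trait-based variant, using illustrative names (ScalaMath, Stats) that aren't from the original post:

trait Math {
  def min(x: Int, y: Int): Int
  def max(x: Int, y: Int): Int
}

// One concrete implementation backed by scala.math; swapping libraries later
// just means writing another object that extends Math.
object ScalaMath extends Math {
  def min(x: Int, y: Int): Int = scala.math.min(x, y)
  def max(x: Int, y: Int): Int = scala.math.max(x, y)
}

// Client code depends only on the Math trait, so a different implementation can be injected.
class Stats(m: Math) {
  def largest(xs: Seq[Int]): Int = xs.reduce(m.max)
}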
Unfortunately, the commented-out code crashes the compiler:
package object p { def f = 42 }

package q {
  object `package` { def f = p.f }
}

/*
package object q {
  val `package` = p.`package`
}
*/

package client {
  import q.`package`._

  object Test extends App {
    println(f)
  }
}
That would make clients not break when you migrated to implementations in a package object.
Simply:
val renamedfunctions = functions
import renamedfunctions._
You can see it being done in the scala library itself: https://github.com/scala/scala/blob/2.12.x/src/library/scala/Predef.scala#L150
val Map = immutable.Map
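Applied to the functions object from the question, a minimal sketch (the aliases and Client object names are made up for illustration) might look like:

object aliases {
  // An object reference is a stable value, so it can simply be assigned...
  val renamedfunctions = functions
}

object Client {
  // ...and then imported through, which brings functions' members into scope.
  import aliases.renamedfunctions._
  val shout: String = toUpper("hello")  // "HELLO"
}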
This very simple worksheet content demonstrates the issue:
object Test {
  println("This does not print!")
  add(5, 6)
}

println("This however prints!")
add(5, 6)

def add(a: Int, b: Int): Int = a + b
Results from the above worksheet content are:
defined module Test
This however prints!
res0: Unit = ()
res1: Int = 11
add: add[](val a: Int,val b: Int) => Int
Based on the Scala Worksheet example on the JetBrains official website and every other reliable resource I've found (like Martin Odersky's own examples in the Functional Programming in Scala course), I would expect the contents of object Test to execute. My software is:
OS X El Capitan
IntelliJ IDEA 2016.2
SBT Plugin 1.8.0
SBT Version 0.13.12
Scala Plugin 2016.2.1
Scala Version 2.11.8
I was able to evaluate the Scala worksheet by changing the Scala worksheet settings:
Change the run type to Plain (originally it was REPL).
You can get to the settings by clicking the settings icon on the Scala worksheet.
The Scala worksheet executes the contents of object Test {...} if all the code is inside that object. If there is some code outside the object, it will just define the Test object (and not run it).
object Test {
  println("This does not print!")
  def add(a: Int, b: Int): Int = { print("Sum:" + a + b); a + b }
  add(2, 2)
}
// defines the Test object above

Test // executes the Test object
I think this is what you want:
object Test {
  println("This does not print!")
  add(5, 6)
  println("This however prints!")
  add(5, 6)
  def add(a: Int, b: Int): Int = a + b
}
How the worksheet works is that if you define an object with nothing defined outside the object scope, it will execute it as Test extends App, which is what the IntelliJ page shows.
If you have any statement outside the object scope, the object is then treated as any other object and the compiler will initialize it like anything else. This is what you are experiencing.
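In other words, when the object is the only top-level definition in the worksheet, the effect is roughly this sketch of the Test extends App analogy (illustrative, not the worksheet's literal output):

object Test extends App {
  // The whole body runs when Test is executed as an App.
  println("This now prints")
  def add(a: Int, b: Int): Int = a + b
  println(add(5, 6))  // 11
}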
So I've recently gotten much more interested in developing using the sbt console/spark-shell, and I had a question about working with existing packages/jars. I know that you can import jars and that it's possible to override classes, but I'm wondering: is it possible to override a class and force all other classes to point to that overridden class?
So if I have
class Bar() {
  def a() = { (new Foo).blah() }
}
and I override Foo, is there a way I can do that so that I don’t need to also override Bar?
Let's explain this with a timeline:
1. class X { def t = 1 }
2. class Y {
def x: X = new X
}
Up to here the definition of class Y at line 2 refers to the definition of X in line 1.
3. class X { def t = 2 }
Now, class Y from line 2 still refers to X from line 1. This is how the REPL works: changes take effect forward in time, not backwards.
4. class Y {
def x: X = new X
}
Now, as you expect, the new Y at line 4 will refer to the new X from line 3.
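A rough REPL transcript of that timeline (result numbering and exact echo will vary by Scala version and session) might look like:

scala> class X { def t = 1 }
defined class X

scala> class Y { def x: X = new X }
defined class Y

scala> class X { def t = 2 }
defined class X

scala> (new Y).x.t  // Y was compiled against the old X
res0: Int = 1

scala> class Y { def x: X = new X }
defined class Y

scala> (new Y).x.t  // the redefined Y picks up the new X
res1: Int = 2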
Normally, you'd do that by replacing the class in your classpath. If the new version is binary-compatible, you could even re-run without re-compiling.
The couple of hitches are that the REPL compiler is resident, and the class is in a specific package (e.g., $line8). You'd need a fresh compiler to use the refreshed package.
There are open tickets to retain or discard $line packages when resetting the compiler. The other missing piece is to compile the new version of the class in the appropriate package, or conversely to regenerate the consuming class.
Note that the :require command lets you add a jar but not replace classes.
OK, in the question about 'Class Variables as constants', I get the fact that the constants are not available until after the 'official' constructor has been run (i.e. until you have an instance). BUT, what if I need the companion singleton to make calls on the class:
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)
}

class thing(x: Int) {
  import thing.someConst
  val field = x * someConst
  override def toString = "val: " + field
}
If I create the companion object first, the 'new thing(x)' (in the companion) causes an error. However, if I define the class first, the 'x * someConst' (in the class definition) causes an error.
I also tried placing the class definition inside the singleton.
object thing {
  var someConst = 42
  def apply(x: Int) = new thing(x)

  class thing(x: Int) {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
However, doing this gives me a 'thing.thing' type object
val t = thing(2)
results in
t: thing.thing = val: 84
The only useful solution I've come up with is to create an abstract class, a companion and an inner class (which extends the abstract class):
abstract class thing

object thing {
  val someConst = 42
  def apply(x: Int) = new privThing(x)

  class privThing(x: Int) extends thing {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
val t1 = thing(2)
val tArr: Array[thing] = Array(t1)
OK, 't1' still has type of 'thing.privThing', but it can now be treated as a 'thing'.
However, it's still not an elegant solution; can anyone tell me a better way to do this?
PS. I should mention, I'm using Scala 2.8.1 on Windows 7
First, the error you're seeing (you didn't tell me what it is) isn't a runtime error. The thing constructor isn't called when the thing singleton is initialized -- it's called later when you call thing.apply, so there's no circular reference at runtime.
Second, you do have a circular reference at compile time, but that doesn't cause a problem when you're compiling a scala file that you've saved on disk -- the compiler can even resolve circular references between different files. (I tested. I put your original code in a file and compiled it, and it worked fine.)
Your real problem comes from trying to run this code in the Scala REPL. Here's what the REPL does and why this is a problem in the REPL. You're entering object thing and as soon as you finish, the REPL tries to compile it, because it's reached the end of a coherent chunk of code. (Semicolon inference was able to infer a semicolon at the end of the object, and that meant the compiler could get to work on that chunk of code.) But since you haven't defined class thing it can't compile it. You have the same problem when you reverse the definitions of class thing and object thing.
The solution is to nest both class thing and object thing inside some outer object. This will defer compilation until that outer object is complete, at which point the compiler will see the definitions of class thing and object thing at the same time. You can run import thingwrapper._ right after that to make class thing and object thing available in global scope for the REPL. When you're ready to integrate your code into a file somewhere, just ditch the outer class thingwrapper.
object thingwrapper {
  // you only need a wrapper object in the REPL
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }

  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}
Scala 2.12 or later could benefit from SIP-23, which just (August 2016) passed to the next iteration (considered a "good idea", but still a work in progress):
Literal-based singleton types
Singleton types bridge the gap between the value level and the type level and hence allow the exploration in Scala of techniques which would typically only be available in languages with support for full-spectrum dependent types.
Scala’s type system can model constants (e.g. 42, "foo", classOf[String]).
These are inferred in cases like object O { final val x = 42 }. They are used to denote and propagate compile time constants (See 6.24 Constant Expressions and discussion of “constant value definition” in 4.1 Value Declarations and Definitions).
However, there is no surface syntax to express such types. This forces people who need them to write macros that provide workarounds (e.g. shapeless).
This can be changed in a relatively simple way, as the whole machinery to enable this is already present in the scala compiler.
type _42 = 42.type
type Unt = ().type
type _1 = 1 // .type is optional for literals
final val x = 1
type one = x.type // … but mandatory for identifiers
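As a hedged illustration of what that surface syntax enables (the method name is made up, and this only compiles on a compiler with literal types, e.g. Scala 2.13 or a build implementing SIP-23):

// A method whose parameter type is the literal type 42.
def theAnswer(x: 42): String = "correct"

val ok = theAnswer(42)      // compiles: 42 has type 42
// val bad = theAnswer(41)  // type mismatch: 41 is not of type 42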
So I've been playing around with remote actors, and I've run into some difficulties with serialization exceptions. One of my messages is an instance of a case class, which itself contains a list of Path instances. The Path class is defined as follows, and is essentially a collection of Point instances with a precomputed distance attribute:
import scala.math.sqrt

class Point(xi: Int, yi: Int) {
  val x: Int = xi
  val y: Int = yi

  // Determine distance to another point
  def distanceTo(p: Point): Int = {
    val dx = (x - p.x).toDouble
    val dy = (y - p.y).toDouble
    sqrt(dx * dx + dy * dy).round.toInt
  }

  override def equals(arg0: Any): Boolean = {
    if (arg0.isInstanceOf[Point] && arg0.asInstanceOf[Point].x == x && arg0.asInstanceOf[Point].y == y) return true
    false
  }
}

class Path(p: List[Point]) {
  val path: List[Point] = p
  val length: Int = Point.pathLength(p)
}
While these class instances can be passed around with no issue using normal actors, any attempt to send a message containing a List[Path] collection fails with a java.io.NotSerializableException.
So what do I do? Do I need to define serialization methods for these classes? Is there a better practice for this purpose other than sending class instances over the wire?
Any help would be greatly appreciated -- there seems to be a real shortage of information and examples of the Scala remote actor stuff.
Why would you expect your Path class to be serializable? Only case classes are automatically serializable in Scala. You need to either attach a @serializable annotation to Path (and a @SerialVersionUID for safety), declare Path as extending java.io.Serializable or java.io.Externalizable, or make it a case class (thus getting serializability for free).
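For example, a minimal sketch of the case-class route; the inlined length computation is a stand-in for the Point.pathLength helper, which isn't shown in the question:

import scala.math.sqrt

// Case classes are Serializable out of the box, and equals/hashCode come for free.
case class Point(x: Int, y: Int) {
  def distanceTo(p: Point): Int = {
    val dx = (x - p.x).toDouble
    val dy = (y - p.y).toDouble
    sqrt(dx * dx + dy * dy).round.toInt
  }
}

case class Path(path: List[Point]) {
  // Total of the distances between consecutive points (stand-in for Point.pathLength).
  val length: Int = path.zip(path.drop(1)).map { case (a, b) => a.distanceTo(b) }.sum
}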
Try adding the @serializable annotation to the classes. However, be careful; I have a friend who has run into all sorts of issues with non-trivial serialization. Stay immutable and stay simple ;)
Argh -- I'm an idiot -- @serializable did the trick. It would have helped to actually have recompiled the file in question...