Diagnosing Scala compile error "value to is not a member of Int"

I made a code change within a Scala class that had been working fine. When I tried to compile the modification, the compiler spat out the error "value to is not a member of Int", pointing at this (pre-existing) line of code:
for (i <- 0 to cColumn -1) { ... }
Doing some research, I came across some bug reports on the "to" method, and also that "to" is apparently a method made available via intWrapper(?).
So, based upon that info, I started looking at my class's import statements... no such import for intWrapper. (Q: That being the case, how did this ever compile/run in the first place?) What makes this even more interesting (to me) is that when I started a global search of the codebase for that import, I accidentally terminated the compiler (sbt) session... but when I restarted it, the class compiled just fine. No errors at all. (And no code changes from the previous session.)
Anyone have any ideas as to what would cause this intermittent behavior?
NOTES:
1) using Scala 2.10.2 with javac 1.7.0_25
2) the code change to the class had nothing to do with the example functionality, nor did it alter any of the class's imports
Update: Here are the variable declarations:
val meta = rs.getMetaData()
val cColumn = meta.getColumnCount()
EDIT: Per suggestion, here are the test lines (all of them compile fine now):
implicitly[scala.Int => scala.runtime.RichInt]
intWrapper(3) to 4
for (i <- 0 to 33 -1) { /* do something smart */ }
for (i <- 0 to cColumn -1) { ... }
EDIT 2 Here is the full compiler error:
[error] /path/to/src/file/DBO.scala:329: value to is not a member of Int
[error] for (i <- 0 to cColumn -1) {
[error]
That error repeated ~18 times in the class (it's a DBO-DB interface layer), where DBO.scala is the file containing the newly modified trait.

I just encountered this same issue. In my case, it was caused by an unnecessary import, like this:
import scala.Predef.String

class Test() {
  for (t <- 1 to 3) {}
}
By default, Scala imports all of scala.Predef. Predef extends LowPriorityImplicits, which provides an implicit conversion from Int to RichInt.
to is actually defined on RichInt, so you need this conversion in order to use it. An explicit top-level import from Predef, like the one above, suppresses the automatic wildcard import of Predef for that compilation unit, so the conversion is lost. Get rid of the unnecessary import and the error goes away.
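If you do need a selective Predef import, explicitly re-importing the conversion should restore to. A minimal sketch of the fix, assuming the suppression behavior described above (dropping the selective import entirely is the simpler cure):

import scala.Predef.String     // selective import: the automatic Predef._ is suppressed
import scala.Predef.intWrapper // explicitly restore the Int => RichInt conversion

class Test() {
  for (t <- 1 to 3) {} // compiles again: 1 to 3 expands to intWrapper(1).to(3)
}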

how did this ever compile/run in the first place?
By default, the contents of scala.Predef are imported. There you have the method intWrapper, which produces a RichInt with the method to.
You have probably shadowed the symbol intWrapper. Does the following work:
implicitly[scala.Int => scala.runtime.RichInt]
or this:
intWrapper(3) to 4
...if not, therein lies your problem.
EDIT: So, since you say that compiles, what happens if you replace cColumn with a constant, e.g.
for (i <- 0 to 33 -1) { ... }
? It would also help to post the complete compiler message with indicated line etc.

Without knowing where that error comes from, you might also try to work around it by constructing the Range by hand:
for (i <- Range.inclusive(0, cColumn-1)) { ... }
or
Range.inclusive(0, cColumn-1).foreach { i => ... }
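For completeness, a small sketch of equivalent spellings (assuming cColumn is a plain Int, as in the question):

// Both produce the same range 0, 1, ..., cColumn - 1:
Range.inclusive(0, cColumn - 1).foreach { i => println(i) }
Range(0, cColumn).foreach { i => println(i) } // exclusive upper bound, so no -1 needed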

Related

Scala eta expression ambiguous reference doesn't list all overloaded methods

I'm using Scala 2.12.1. In the Interpreter I make an Int val:
scala> val someInt = 3
someInt: Int = 3
Then I tried to use the eta expansion and get the following error:
scala> someInt.== _
<console>:13: error: ambiguous reference to overloaded definition,
both method == in class Int of type (x: Char)Boolean
and method == in class Int of type (x: Byte)Boolean
match expected type ?
someInt.== _
^
I see in the Scaladoc that the Int class has more than two overloads of ==.
Question: is there a particular reason that the error message shows only 2 overloaded methods as opposed to listing all of them?
By the way, to specify which method you want, the syntax is this:
scala> someInt.== _ : (Double => Boolean)
res9: Double => Boolean = $$Lambda$1103/1350894905@65e21ce3
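Other overloads can be selected the same way; a small sketch (the val names are illustrative):

val someInt = 3

// An expected function type guides overload resolution during eta expansion:
val eqDouble: Double => Boolean = someInt.== _
val eqChar: Char => Boolean = someInt.== _

// A typed placeholder works as well:
val eqByte = someInt.==(_: Byte)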
The choice of the two methods that are listed seems to be more or less arbitrary. For example, this snippet:
class A
class B
class C

class Foo {
  def foo(thisWontBeListed: C): Unit = {}
  def foo(thisWillBeListedSecond: B): Unit = {}
  def foo(thisWillBeListedFirst: A): Unit = {}
}

val x: Foo = new Foo
x.foo _
fails to compile with the error message:
error: ambiguous reference to overloaded definition,
both method foo in class Foo of type (thisWillBeListedFirst: this.A)Unit
and method foo in class Foo of type (thisWillBeListedSecond: this.B)Unit
match expected type ?
x.foo _
That is, it simply picks the last two methods that were added to the body of the class and lists them in the error message. Maybe those methods are stored in a List in reverse order, and the first two items are picked to compose the error message.
Why does it do this? Because it has been programmed to do so; I'd take that as a given.
What was the main reason why it was programmed to do exactly this and not something else? That would be a primarily opinion-based question, and probably nobody except the authors of the Scala compiler themselves could give a definitive answer to that. I can think of at least three good reasons why only two conflicting methods are listed:
It's faster: why search for all conflicts, if it is already clear that a particular line of code does not compile? There is simply no reason to waste any time enumerating all possible ways how a particular line of code could be wrong.
The full list of conflicting methods is usually not needed anyway: the error messages are already quite long, and can at times be somewhat cryptic. Why aggravate it by printing an entire wall of error messages for a single line?
Implementation is easier: whenever you write a language interpreter of some sort, you quickly notice that returning all errors is somewhat more difficult than returning just the first error. Maybe in this particular case, it was decided not to bother collecting all possible conflicts.
PS: The order of the methods in the source code of Int seems to be different, but I don't know exactly what this "source" code has to do with the actual Int implementation: it seems to be a generated file without any implementations; it's there just so that Scaladoc has something to process. The real implementation is elsewhere.

Scala, why do I not need to import deduced types

I feel like I should preface this with the fact that I'm building my projects with sbt.
My problem is that if a method returns something of an unimported type, then in the file where I call the method everything compiles as long as I use type inference. Once I try to give the val/var an explicit type annotation with the unimported type, I get a compiler error.
Let's say I have two classes in two packages: class App in package main and class Imported in package libraries. Let's furthermore say that we have a class ImportedFactory in the package main and that this class has a method for creating objects of type Imported.
This code compiles just fine:
class App() {
  // method returns an object of type Imported
  val imp = ImportedFactory.createImportedObject()
}
This doesn't:
class App() {
  // method returns an object of type Imported
  val imp: Imported = ImportedFactory.createImportedObject()
}
This yet again does:
import libraries.Imported

class App() {
  // method returns an object of type Imported
  val imp: Imported = ImportedFactory.createImportedObject()
}
This seems like rather strange behavior. Is this normal for languages with compile-time type inference, and have I just never noticed it in Go/C++ due to my ignorance?
Does one of the two valid approaches (import & explicit type vs. inferred) have advantages/drawbacks over the other? (Except, of course, one being more explicit and verbose and the other being shorter.)
Is this black magic, or does the Scala compiler accomplish these deductions in a rather straightforward way?
The only thing importing does is make a not-fully-qualified name available in the current scope. You could just as well write this:
class App() {
  val imp: libraries.Imported = ImportedFactory.createImportedObject()
}
The reason you import libraries.Imported is for making the shorter name Imported available for you to write. If you let the compiler infer the type, you don't mention the type in your code, so you don't have to import its shorter name.
And by the way: this has nothing to do with dynamic casting in C++. The only mechanism at work in your code is type inference.
note: You'll get better search results with the term type inference
With val imp = ImportedFactory.createImportedObject() you are letting the compiler figure out what type imp should be based on type inference. Whatever type createImportedObject returns, that's what type imp is.
With val imp : Imported = ImportedFactory.createImportedObject() you are explicitly stating that imp is an Imported. But the compiler doesn't know what you mean by that unless you... import... it.
Both approaches have merit:
inferred types
Inferred types are great for when you're throwing together code where the type should be obvious:
val i = 1 // obviously `i` is an int
val j = i + 10 // obviously still an int
It's also great for local vars/vals where the type would be too much of a pain to write
val myFoo: FancyAbstractThing[TypeParam, AnotherTypeParam[OhNoMoreTypeParams]] = ...
// vs
val myFoo = FancyThingFactory.makeANewOne()
The downside is that if you have allowed a public def/val to have an inferred type, it can be more difficult to determine how to use that method. For this reason, omitting type annotations is typically only used for simple constants, and in local vals/vars that "client code" doesn't have to look at.
explicit types
When you do want to write library-ish code (i.e. public vals/defs), the convention is to explicitly-type them.
Probably the simplest reason for this is because this:
def myLibraryMethod = {
  // super complicated implementation
}
is harder to understand than
def myLibraryMethod: String = {
  // super complicated implementation
}
Another benefit to explicitly-typing your code is when you want to expose a less-specific type than what the value actually is:
val invalidNumbers: Set[Int] = TreeSet(4, 8, 15, 16, 23, 42)
In this example, you don't want client code to need to care that your invalidNumbers is actually a TreeSet. That's an implementation detail. In this case you're hiding some information that, while true, would be distracting.
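A short sketch of how that pays off for client code (Config and isValid are illustrative names):

import scala.collection.immutable.TreeSet

object Config {
  // Clients compile against Set[Int]; TreeSet stays an implementation detail
  val invalidNumbers: Set[Int] = TreeSet(4, 8, 15, 16, 23, 42)
}

// Callers depend only on the Set interface, so the backing
// collection can change without breaking them:
def isValid(n: Int): Boolean = !Config.invalidNumbers.contains(n)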

Trying to understand Precedence Rule 4 in Scala In Depth Book

I am trying to understand the 4th rule on Precedence on Bindings on page 93 of Joshua Suereth's book Scala in Depth.
According to this rule:
definitions made available by a package clause not in the source file where the definition occurs have lowest precedence.
It is this rule that I intend to test.
So off I went and tried to follow Josh's train of thought on page 94. He creates a source file called externalbindings.scala, and I did the same, with some changes to it as below:
package com.att.scala

class extbindings {
  def showX(x: Int): Int = {
    x
  }

  object x1 {
    override def toString = "Externally bound obj object in com.att.scala"
  }
}
Next he asks us to create another file that will allow us to test the above rule. I created a file called precedence.scala:
package com.att.scala

class PrecedenceTest { // Josh has an object here instead of a class
  def testPrecedence(): Unit = { // Josh has a main method instead of this
    testSamePackage()
    //testWildCardImport()
    //testExplicitImport()
    //testInlineDefinition()
  }

  println("First statement of Constructor")
  testPrecedence
  println("Last statement of Constructor")

  def testSamePackage() {
    val ext1 = new extbindings()
    val x = ext1.showX(100)
    println("x is " + x)
    println(obj1) // Eclipse complains here
  }
}
Now, Josh is able to print out the value of the object in his example by simply calling the <package-name>.<object-name>.testSamePackage method in the REPL.
His output is:
Externally bound x object in package test
In my versions, the files are in Eclipse and I have my embedded Scala interpreter.
Eclipse complains right here: println(obj1); it says: not found: value obj1
Am I doing something obviously wrong in setting up the test files?
I would like to be able to test the rule I mentioned above and get the output:
Externally bound obj object in package com.att.scala
I haven't read the book, so I'm not really sure if your code shows what the book wants to tell you.
Nevertheless, the error message is correct. obj1 is not found because it doesn't exist; in your code it is called x1. Because it is a member of extbindings, you have to access it through an instance of that class:
println(ext1.x1)
If x1 is defined outside of class extbindings, in the scope of package com.att.scala, you can access it directly:
println(x1)
If it is defined in another package, you have to prefix it with the package name:
println(com.att.scala2.x1)
To simplify some things you can import x1:
import ext1.x1
println(x1)
Finally a tip to improve your code: name types in UpperCamelCase: extbindings -> Extbindings, x1 -> X1
If you replace a singleton object with a class, you will need to create an instance of that class.
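To actually exercise rule 4 (bindings made available by a package clause), x1 would have to be bound at the package level rather than inside the class. A hedged sketch of what a corrected externalbindings.scala might look like:

package com.att.scala

// Bound by the package clause itself, so other files in
// com.att.scala can refer to it as plain x1:
object x1 {
  override def toString = "Externally bound obj object in com.att.scala"
}

class extbindings {
  def showX(x: Int): Int = x
}

With that, println(x1) inside testSamePackage resolves without any import.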

Is it possible to use scalap from a scala script?

I am using scalap to read out the field names of some case classes (as discussed in this question). Both the case classes and the code that uses scalap to analyze them have been compiled and put into a jar file on the classpath.
Now I want to run a script that uses this code, so I followed the instructions and came up with something like
::#!
@echo off
call scala -classpath *;./libs/* %0 %*
goto :eof
::!#
//Code relying on pre-compiled code that uses scalap
which does not work:
java.lang.ClassCastException: scala.None$ cannot be cast to scala.Option
  at scala.tools.nsc.interpreter.ByteCode$.caseParamNamesForPath(ByteCode.scala:45)
  at scala.tools.nsc.interpreter.ProductCompletion.caseNames(ProductCompletion.scala:22)
However, the code works just fine when I compile everything. I played around with additional scala options like -savecompiled, but this did not help. Is this a bug, or can't this work in principle? (If so, could someone explain why not? As I said, the case classes that shall be analyzed by scalap are compiled.)
Note: I use Scala 2.9.1-1.
EDIT
Here is what I am essentially trying to do (providing a simple way to create multiple instances of a case class):
//This is pre-compiled:
import scala.tools.nsc.interpreter.ProductCompletion
//...

trait MyFactoryTrait[T <: MyFactoryTrait[T] with Product] {
  this: T =>

  private[this] val copyMethod = this.getClass.getMethods.find(x => x.getName == "copy").get

  lazy val productCompletion = new ProductCompletion(this)

  /** The names of all specified fields. */
  lazy val fieldNames = productCompletion.caseNames // <- provokes the exception (see above)

  def createSeq(...): Seq[T] = {
    val x = fieldNames map { ... } // <- this method uses the fieldNames value
    //[...] invoke copyMethod to create instances
  }
  // ...
}
//This is pre-compiled too:
case class MyCaseClass(x: Int = 0, y: Int = 0) extends MyFactoryTrait[MyCaseClass]
//This should be interpreted (but crashes):
val seq = MyCaseClass().createSeq(...)
Note: I moved on to Scala 2.9.2, the error stays the same (so probably not a bug).
This is a bug in the compiler:
If you run the program inside an IDE, for example IntelliJ IDEA, the code executes fine; however, no field names are found.
If you run it from the command line using scala, you get the error you mentioned.
There is no way type-safe code should ever compile and then throw a runtime ClassCastException.
Please open a bug at https://issues.scala-lang.org/secure/Dashboard.jspa
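As an aside, on later Scala versions (2.13+) case classes expose their field names directly via productElementNames, which sidesteps scalap and ProductCompletion entirely; a minimal sketch:

case class MyCaseClass(x: Int = 0, y: Int = 0)

// Product.productElementNames exists from Scala 2.13 on:
val names = MyCaseClass().productElementNames.toList // List("x", "y")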

How to set an expected exception using Scala and JUnit 4

I want to set an expected exception for a JUnit 4 test using Scala. I am currently doing something similar to the following:
@Test(expected = classOf[NullPointerException])
def someTest() = {
  // Some test code
}
But I get the following compiler error:
error: wrong number of arguments for constructor Test: ()org.junit.Test
This is looking forward a bit, but the syntax for annotations in 2.8 has changed to be the same as what you originally posted. The syntax Tristan posted is correct in the current stable version, but it gave me errors when I upgraded my project to a nightly 2.8 compiler. I'm guessing this is due to the inclusion of named and default arguments. There is also some discussion on the Scala mailing list. Quoting Lukas Rytz:
Also note that in 2.8.0 the syntax for java annotations will no longer use the name-value pairs but named arguments instead, i.e.
@ann{ val x = 1, val y = 2 } ==> @ann(x = 1, y = 2)
The way Scala deals with attributes is a little funky. I think what you're trying to do should be expressed like this:

@Test { val expected = classOf[NullPointerException] }
def someTest {
  // test code
}
Scala language page with many annotation examples.
This works for me (JUnit 4.10, Scala 2.10.2):
@Test(expected = classOf[NullPointerException])
def testFoo() {
  foo(null)
}
Similar to what Tristan suggested, but this syntax actually compiles and works in my project.
Edit: Uh, looking closer, this is exactly what the original question had. Well, I guess having the latest working syntax also in answers doesn't hurt.
You can also try specs with:
class mySpec extends SpecificationWithJUnit {
  "this expects an exception" in {
    myCode must throwA[NullPointerException]
  }
}
Eric.
Use ScalaTest and JUnit together and you can do:
import org.scalatest.junit.JUnitSuite
import org.scalatest.junit.ShouldMatchersForJUnit
import org.junit.Test

class ExampleSuite extends JUnitSuite with ShouldMatchersForJUnit {
  @Test def toTest() {
    evaluating { "yo".charAt(-1) } should produce [StringIndexOutOfBoundsException]
  }
}
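Note that ShouldMatchersForJUnit and the evaluating ... should produce syntax have since been removed from ScalaTest; a hedged sketch of the rough modern equivalent using intercept (class and method names are illustrative):

import org.junit.Test
import org.scalatest.Assertions._

class InterceptSuite {
  @Test def charAtShouldThrow(): Unit = {
    // intercept returns the thrown exception, so further assertions can be made on it
    val ex = intercept[StringIndexOutOfBoundsException] {
      "yo".charAt(-1)
    }
    assert(ex.getMessage != null)
  }
}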