Why is my Scala class not visible to its matching test class?

I am starting out in Scala with SBT, making a Hello World program.
I've made sure to download the very latest JDK and Scala, and configure my Project Settings. Here's my build.sbt:
name := "Coursera_Scala"
version := "1.0"
scalaVersion := "2.11.6"
libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"
Hello.scala itself compiles okay:
package demo
class Hello {
  def sayHelloTo(name: String) = s"Hello, $name!"
}
However, my accompanying HelloTest.scala does not. Here's the test:
package demo
import org.scalatest.FunSuite
class HelloTest extends FunSuite {
  test("testSayHello") {
    val result = new Hello.sayHelloTo("Scala")
    assert(result == "Hello, Scala!")
  }
}
Here's the error:
Error:(8, 22) not found: value Hello
    val result = new Hello.sayHelloTo("Scala")
                     ^
In addition to the compile error, IntelliJ shows "Cannot resolve symbol" errors for the symbols Hello, assert, and ==. This leads me to believe that the build is set up incorrectly, but then wouldn't there be an error on the import?

The problem is this expression:
new Hello.sayHelloTo("Scala")
This attempts to create a new instance of a class sayHelloTo defined on a value Hello. However, there is no value Hello, just the class Hello.
You want to write this:
(new Hello).sayHelloTo("Scala")
This creates a new instance of the class Hello and calls sayHelloTo on the instance.

Alternatively, you can use new Hello().sayHelloTo(...). The explicit () creates a new instance, and the method is then called on that instance.
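For completeness, here is the corrected test in full; note that sayHelloTo also needs the s string-interpolator prefix (s"Hello, $name!") for the assertion to pass:

package demo

import org.scalatest.FunSuite

class HelloTest extends FunSuite {
  test("testSayHello") {
    // (new Hello) creates the instance; the call then applies to it
    val result = (new Hello).sayHelloTo("Scala")
    assert(result == "Hello, Scala!")
  }
}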

Related

Databricks SCALA UDF cannot load class when registering function

I have followed this guide and this question trying to implement a decryption function to use in a SQL view.
I have compiled this scala code in the example to a jar file and uploaded to the Databricks File System (DBFS):
import com.macasaet.fernet.{Key, StringValidator, Token}
import org.apache.hadoop.hive.ql.exec.UDF
import java.time.{Duration, Instant}

class Validator extends StringValidator {
  override def getTimeToLive(): java.time.temporal.TemporalAmount = {
    Duration.ofSeconds(Instant.MAX.getEpochSecond())
  }
}

class udfDecrypt extends UDF {
  def evaluate(inputVal: String, sparkKey: String): String = {
    if (inputVal != null && inputVal != "") {
      val keys: Key = new Key(sparkKey)
      val token = Token.fromString(inputVal)
      val validator = new Validator() {}
      val payload = token.validateAndDecrypt(keys, validator)
      payload
    } else return inputVal
  }
}
I can declare the function as demonstrated:
%sql
CREATE OR REPLACE FUNCTION default.udfDecrypt AS 'com.nm.udf.udfDecrypt'
USING jar 'dbfs:/FileStore/jars/decryptUDF.jar';
But if I try to call it an error is thrown:
%sql
SELECT default.udfDecrypt(field, '{key}') FROM default.encrypted_test;
Error in SQL statement: AnalysisException: Can not load class 'com.nm.udf.udfDecrypt' when registering the function 'default.udfDecrypt', please make sure it is on the classpath; line 1 pos 7
I have noticed that the function can be declared using any jar file path (even one that doesn't exist) and it will still return 'OK'.
I am using Databricks for Azure.
It seems like your UDF code is missing the package declaration
package com.nm.udf
at the top.
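With that declaration added, the top of the file would look like this (the Validator and udfDecrypt classes stay exactly as above):

package com.nm.udf

import com.macasaet.fernet.{Key, StringValidator, Token}
import org.apache.hadoop.hive.ql.exec.UDF
import java.time.{Duration, Instant}

// ... Validator and udfDecrypt classes unchanged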
Update as of October 2022, because the accepted solution did not work for me.
First off, the given Scala code is incorrect: you need to add
import java.time.Duration
import java.time.Instant
to the top of the code.
Secondly, after packaging the .scala file into a jar (using sbt package, for example), when you create the function:
CREATE OR REPLACE FUNCTION udfDecryptor AS 'udfDecrypt'
USING jar 'dbfs:/FileStore/jars/decryptUDF.jar';
the alias of the function must match the name of the class.
Pay attention to having the build.sbt file with the correct version and dependencies, for example:
ThisBuild / version := "0.2.0-properscala"
ThisBuild / scalaVersion := "2.12.14"

lazy val root = (project in file("."))
  .settings(
    name := "the_name_of_project_here"
  )

// https://mvnrepository.com/artifact/com.macasaet.fernet/fernet-java8
libraryDependencies += "com.macasaet.fernet" % "fernet-java8" % "1.5.0"
// https://mvnrepository.com/artifact/org.apache.hive/hive-exec
libraryDependencies += "org.apache.hive" % "hive-exec" % "3.1.2"
This is the only way I managed to solve it and get the code running successfully. Hope it helps.
There could be some hidden steps missing from the guidelines.
Here is a complete repo that works through the details:
https://github.com/dungruoc/databricks-fernet-udf

Why am I getting object scalatest is not a member of package org

I am trying to follow the Scala tutorial and am getting the error object scalatest is not a member of package org. All of the other instances of this error I can find here and on the web have the problem that the test file isn't under src/test, but that's not the case for me. I repeat, for the sake of those who keep marking this as a duplicate, the test file is under src/test, so that is not the problem. My build.sbt file says:
name := "TestExercise"
version := "0.1"
scalaVersion := "2.12.6"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % "test"
src/main/scala/CubeCalculator.scala says:
object CubeCalculator extends App {
  def cube(x: Int) = {
    x * x * x
  }
}
and src/test/scala/CubeCalculatorTest.scala says:
import org.scalatest.FunSuite

class CubeCalculatorTest extends FunSuite {
  test("CubeCalculator.cube") {
    assert(CubeCalculator.cube(3) === 27)
  }
}
(Cut-and-pasted straight from the tutorial.)
So what am I doing wrong? Why can't my project access scalatest?
@terminally-chill gave the answer. I am using IntelliJ, and File | Invalidate Caches / Restart, followed by a rebuild, resolved the issue.

Scala won't import package classes

I have a scala project, but the imports don't work as designed. I tried everything here, but nothing seems to fix the issue. My project is as follows:
- src
  - main
    - scala
      - importtest
        ImportTest.scala
      Main.scala
build.sbt
Imported class:
// src/main/scala/importtest/ImportTest.scala
package importtest

class ImportTest {
  def run(): Unit = {
    System.out.println("boo!")
  }
}
My main class is:
// src/main/scala/Main.scala
import importtest.ImportTest

object Main {
  def main(): Unit = {
    val i = ImportTest()
  }
}
My SBT build is:
name := "ImportTest"
version := "0.1"
scalaVersion := "2.12.6"
When I try to build, I get:
Error:(5, 13) not found: value ImportTest
val i = ImportTest()
What is going wrong here? Why can't I import the ImportTest class?
Also, not sure if this helps, but IntelliJ will autocomplete the package name, but it cant autocomplete the class within the package - it marks it as unresolved.
You are initializing ImportTest() as if it were a case class. Because it's a regular class, you need to use new.
Change the initialization to:
val i = new ImportTest()
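As a quick sketch, either of these approaches will compile, assuming the ImportTest class from the question:

// Option 1: keep ImportTest a regular class and construct it with `new`
val i = new ImportTest()
i.run()

// Option 2 (an alternative): declare ImportTest as a case class, which
// generates a companion apply method, so ImportTest() works without `new`
case class ImportTest() {
  def run(): Unit = println("boo!")
}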

scala.tools.nsc.IMain within Play 2.1

I googled a lot and am totally stuck now. I know there are similar questions, but please read to the end: I have tried all the proposed solutions and none of them worked.
I am trying to use the IMain class from scala.tools.nsc within a Play 2.1 project (Using Scala 2.10.0).
Controller Code
This is the code, where I try to use the IMain in a Websocket. This is only for testing.
object Scala extends Controller {
  def session = WebSocket.using[String] { request =>
    val interpreter = new IMain()
    val (out, channel) = Concurrent.broadcast[String]
    val in = Iteratee.foreach[String] { code =>
      interpreter.interpret(code) match {
        case Results.Error => channel.push("error")
        case Results.Incomplete => channel.push("incomplete")
        case Results.Success => channel.push("success")
      }
    }
    (in, out)
  }
}
As soon as something gets sent over the WebSocket, the following error gets logged by Play:
Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
Build.scala
object ApplicationBuild extends Build {
  val appName = "escalator"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "org.scala-lang" % "scala-compiler" % "2.10.0"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
  )
}
What I have tried so far
None of the following worked:
I have included fork := true in the Build.scala
A Settings object with:
embeddedDefaults[MyType]
usejavacp.value = true
The solution proposed as an answer to the question Embedded Scala REPL inherits parent classpath
I don't know what to do now.
The problem here is that sbt doesn't add scala-library to the classpath.
The following workaround works.
First, create a folder lib in the top project directory (the parent of app, conf, etc.) and copy scala-library.jar into it.
Then you can use the following code to host an interpreter :
import java.io.File
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

val settings = new Settings
settings.bootclasspath.value += scala.tools.util.PathResolver.Environment.javaBootClassPath + File.pathSeparator + "lib/scala-library.jar"
val in = new IMain(settings) {
  override protected def parentClassLoader = settings.getClass.getClassLoader()
}
val res = in.interpret("val x = 1")
The above builds the boot classpath by appending the Scala library to the Java boot classpath. It's not a problem with the Play framework; it comes from sbt. The same problem occurs for any Scala project run with sbt (tested with a simple project). When run from Eclipse it works fine.
EDIT: Link to sample project demonstrating the above.
I wonder if the reflect jar is missing. Try adding this to appDependencies as well:
"org.scala-lang" % "scala-reflect" % "2.10.0"

Make ScalaCheck tests deterministic

I would like to make my ScalaCheck property tests in my specs2 test suite deterministic, temporarily, to ease debugging. Right now, different values could be generated each time I re-run the test suite, which makes debugging frustrating, because you don't know if a change in observed behaviour is caused by your code changes, or just by different data being generated.
How can I do this? Is there an official way to set the random seed used by ScalaCheck?
I'm using sbt to run the test suite.
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
If you're using pure ScalaCheck properties, you should be able to use the Test.Parameters class to change the java.util.Random instance which is used, providing your own that always returns the same sequence of values:
def check(params: Test.Parameters, p: Prop): Test.Result
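As a minimal sketch (assuming the scalacheck-1.12-era API, where Test.Parameters still exposes an overridable rng; see the last answer in this thread), fixing the seed could look like this:

import org.scalacheck.{Prop, Test}

val commutative = Prop.forAll { (a: Int, b: Int) => a + b == b + a }

// A fixed seed makes every run generate the same sequence of values.
val params = new Test.Parameters {
  override val rng = new scala.util.Random(42L)
}

val result = Test.check(params, commutative)
println(result.passed)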
[updated]
I just published a new specs2-1.12.2-SNAPSHOT where you can use the following syntax to specify your random generator:
case class MyRandomGenerator() extends java.util.Random {
  // implement a deterministic generator
}

"this is a specific property" ! prop { (a: Int, b: Int) =>
  (a + b) must_== (b + a)
}.set(MyRandomGenerator(), minTestsOk -> 200, workers -> 3)
As a general rule, when testing on non-deterministic inputs you should try to echo or save those inputs somewhere when there's a failure.
If the data is small, you can include it in the label or error message that gets shown to the user; for example, in an xUnit-style test (excuse the pseudocode, since I'm new to Scala syntax):
testLength(String x) {
  assert(x.length > 10, "Length OK for '" + x + "'");
}
If the data is large, for example an auto-generated DB, you might either store it in a non-volatile location (e.g. /tmp with a timestamped name) or show the seed used to generate it.
The next step is important: take that value, or seed, or whatever, and add it to your deterministic regression tests, so that it gets checked every time from now on.
You say you want to make ScalaCheck deterministic "temporarily" to reproduce this issue; I say you've found a buggy edge-case which is well-suited to becoming a unit test (perhaps after some manual simplification).
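For instance, a hypothetical regression test (the class name and input here are made up for illustration) that pins down an input ScalaCheck once found:

import org.scalatest.FunSuite

// Once ScalaCheck reports a failing input, hard-code it in a plain
// unit test so it is re-checked deterministically on every run.
class ReverseRegressionTest extends FunSuite {
  test("reverse twice is identity for the previously failing input") {
    val input = "\u0000" // captured from a ScalaCheck failure report
    assert(input.reverse.reverse == input)
  }
}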
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
From specs2-scalacheck version 4.6.0 this is now the default behaviour:
Given the test file HelloSpec:
package example
import org.specs2.mutable.Specification
import org.specs2.ScalaCheck
class HelloSpec extends Specification with ScalaCheck {
s2"""
a simple property $ex1
"""
def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
build.sbt config:
import Dependencies._
ThisBuild / scalaVersion := "2.13.0"
ThisBuild / version := "0.1.0-SNAPSHOT"
ThisBuild / organization := "com.example"
ThisBuild / organizationName := "example"
lazy val root = (project in file("."))
  .settings(
    name := "specs2-scalacheck",
    libraryDependencies ++= Seq(
      specs2Core,
      specs2MatcherExtra,
      specs2Scalacheck
    ).map(_ % "test")
  )
project/Dependencies.scala:
import sbt._

object Dependencies {
  lazy val specs2Core = "org.specs2" %% "specs2-core" % "4.6.0"
  lazy val specs2MatcherExtra = "org.specs2" %% "specs2-matcher-extra" % specs2Core.revision
  lazy val specs2Scalacheck = "org.specs2" %% "specs2-scalacheck" % specs2Core.revision
}
When you run the test from the sbt console:
sbt:specs2-scalacheck> testOnly example.HelloSpec
You get the following output:
[info] HelloSpec
[error] x a simple property
[error] Falsified after 2 passed tests.
[error] > ARG_0: "\u0000"
[error] > ARG_0_ORIGINAL: "猹"
[error] The seed is X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
[error]
[error] > '' != '' (HelloSpec.scala:11)
[info] Total for specification HelloSpec
To reproduce that specific run (i.e. with the same seed), you can take the seed from the output and pass it using the command-line option scalacheck.seed:
sbt:specs2-scalacheck> testOnly example.HelloSpec -- scalacheck.seed X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
And this produces the same output as before.
You can also set the seed programmatically using setSeed:
def ex1 = prop((s: String) => s.reverse.reverse must_== "").setSeed("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=")
Yet another way to provide the seed is to pass an implicit Parameters object in which the seed is set:
package example

import org.specs2.mutable.Specification
import org.specs2.ScalaCheck
import org.scalacheck.rng.Seed
import org.specs2.scalacheck.Parameters

class HelloSpec extends Specification with ScalaCheck {
  s2"""
  a simple property $ex1
  """

  implicit val params = Parameters(minTestsOk = 1000, seed = Seed.fromBase64("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=").toOption)

  def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
Here is the documentation on all those various ways. This blog post also discusses it.
For scalacheck-1.12 this configuration worked:
new Test.Parameters {
  override val rng = new scala.util.Random(seed)
}
For scalacheck-1.13 it doesn't work anymore since the rng method is removed. Any thoughts?