I have a subproject in my build.sbt with a rather long setting for initialCommands, comprising a list of imports and some definitions. I'd like to test this as part of regular CI, because otherwise I won't notice breaking changes after refactoring code. It is not clear to me how to do so.
Just running sbt console doesn't seem to cut it, because there is always a "successful" exit code even when code doesn't compile.
Moving the code out into an object defined in a special source file won't help because I need the list of imports to be present (and I don't want to cakeify my whole code base).
Moving the code out into a source file and then loading that with :load also always gives a successful exit code.
I found out about scala -e but that does strange things on my machine (see the error log below).
This is Scala 2.12.
$ scala -e '1'
cat: /release: No such file or directory
Exception in thread "main" java.net.UnknownHostException: <my-host-name-here>: <my-host-name-here>: Name or service not known
You could generate a file and run it like any other test file:
(sourceGenerators in Test) += Def.task {
  val contents = """object TestRepl {
{{}}
}""".replace("{{}}", (initialCommands in console).value)
  val file = (sourceManaged in Test).value / "repltest.scala"
  IO.write(file, contents)
  Seq(file)
}.taskValue
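For illustration (the values below are made up, not taken from the question): if the subproject's initialCommands were

initialCommands in console := """
  |import scala.collection.mutable
  |val scratch = mutable.ArrayBuffer.empty[Int]
  |""".stripMargin

then the generated repltest.scala would look roughly like this, and Test/compile (and therefore CI) would fail as soon as the imports or definitions stop compiling:

object TestRepl {
import scala.collection.mutable
val scratch = mutable.ArrayBuffer.empty[Int]
}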
In the book, Programming in Scala 5th Edition, the author says the following for two classes:
Neither ChecksumAccumulator.scala nor Summer.scala are scripts, because they end in a definition. A script, by contrast, must end in a result expression.
The ChecksumAccumulator.scala is as follows:
import scala.collection.mutable

class CheckSumAccumulator:
  private var sum = 0
  def add(b: Byte): Unit = sum += b
  def checksum(): Int = ~(sum & 0xFF) + 1

object CheckSumAccumulator:
  private val cache = mutable.Map.empty[String, Int]
  def calculate(s: String): Int =
    if cache.contains(s) then
      cache(s)
    else
      val acc = new CheckSumAccumulator
      for c <- s do
        acc.add((c >> 8).toByte)
        acc.add(c.toByte)
      val cs = acc.checksum()
      cache += (s -> cs)
      cs
whereas the Summer.scala is as follows:
import CheckSumAccumulator.calculate

object Summer:
  def main(args: Array[String]): Unit =
    for arg <- args do
      println(arg + ": " + calculate(arg))
But when I run the Summer.scala file, I get a different error than the one mentioned by the author:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scala Summer.scala
-- [E006] Not Found Error: /Users/avirals/dev/learning-scala/Summer.scala:1:7 --
1 |import CheckSumAccumulator.calculate
| ^^^^^^^^^^^^^^^^^^^
| Not found: CheckSumAccumulator
longer explanation available when compiling with `-explain`
1 error found
Error: Errors encountered during compilation
➜ learning-scala git:(main)
The author mentioned that the error would be around not having a result expression.
I also tried to compile CheckSumAccumulator only and then run Summer.scala as a script without compiling it:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scalac CheckSumAccumulator.scala
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala
<No output, given no input>
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala Summer of love
Summer: -121
of: -213
love: -182
It works.
Obviously, when I compile both and then run Summer.scala, it works as expected. However, the distinction between Summer.scala as a script vs. a normal file is unclear to me.
Let's start top-down...
The most common way to compile Scala is to use a build tool like SBT/Maven/Mill/Gradle/etc. The build tool helps with a few things: downloading dependencies/libraries, downloading the Scala compiler (optional), setting up the classpath and, most importantly, running the scalac compiler and passing all the flags to it. Additionally it can package compiled class files into JARs and other formats and do much more. The most relevant parts here are the classpath and the compilation flags.
If you strip off the build tool you can compile your project by manually invoking scalac with all the required arguments and making sure your working directory matches the package structure, i.e. you are in the right directory. This can be tedious because you need to download all libraries manually and make sure they are on the classpath.
So far, the build tool and manual compiler invocation are very similar to what you can also do in Java.
If you want an ad-hoc way of running some Scala code there are two options: scala lets you run scripts or a REPL by simply compiling your uncompiled code before it executes it.
However, there are some caveats. Essentially the REPL and scripts are the same: Scala wraps your code in an anonymous object and then runs it. This way you can write any expression without having to follow the convention of providing a main function or extending the App trait (which provides main). It will compile the script you are trying to run but will have no idea about imported classes. You can either compile them beforehand or make one large script that contains all the code. Of course, if it starts getting too large it's time to make a proper project.
So in a sense there is no such thing as "script vs normal file", because both contain Scala code. The file you are running with scala is a script if it's uncompiled code (XXX.scala), and a "normal" compiled class (XXX.class) otherwise. If you ignore the object wrapping mentioned above, the rest is the same; there are just different steps to compile and run them.
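To make the object wrapping concrete, here is a rough sketch; this is not the literal code the compiler generates (the real wrapper produced by the script runner differs in detail), just the idea. A script such as

// hello.scala, run as: scala hello.scala World
println("Hello, " + args.mkString(" "))

is compiled more or less as if you had written

object Main {
  def main(args: Array[String]): Unit = {
    println("Hello, " + args.mkString(" "))
  }
}

which is why a bare expression works in a script, while a plain source file you compile with scalac needs an explicit main method (or the App trait) to be runnable.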
Here is the traditional Scala 2.x runner code snippet with all possible options:
def runTarget(): Option[Throwable] = howToRun match {
  case AsObject =>
    ObjectRunner.runAndCatch(settings.classpathURLs, thingToRun, command.arguments)
  case AsScript if isE =>
    ScriptRunner(settings).runScriptText(combinedCode, thingToRun +: command.arguments)
  case AsScript =>
    ScriptRunner(settings).runScript(thingToRun, command.arguments)
  case AsJar =>
    JarRunner.runJar(settings, thingToRun, command.arguments)
  case Error =>
    None
  case _ =>
    // We start the repl when no arguments are given.
    if (settings.Wconf.isDefault && settings.lint.isDefault) {
      // If user is agnostic about -Wconf and -Xlint, enable -deprecation and -feature
      settings.deprecation.value = true
      settings.feature.value = true
    }
    val config = ShellConfig(settings)
    new ILoop(config).run(settings)
    None
}
This is what's getting invoked when you run scala.
In Dotty/Scala 3 the idea is similar but split into multiple classes, and the classpath logic might be different: there is a REPL and a script runner, and the script runner invokes the REPL.
I have an Ammonite Script that I want to deliver in a JAR.
In another project I want to use this script, but so far with no success.
I tried according to the documentation (sol_local_build.sc):
import $ivy.`mycompany:myproject_2.12:2.1.0-SNAPSHOT`, local_build
@main
def doit(): Unit =
  println(local_build.curl("http://localhost:8080"))
local_build.sc is the script from that JAR that I want to use.
This is the exception I get:
sol_local_build.sc:2: '.' expected but eof found.
^
The script must be compiled on the fly.
Put your script in a standard sbt project, inside a directory (example directory name: "test1").
Put your external script (example name: "script.sc")
// script.sc
println("Hello world!")
into the resource directory ("test1\src\main\resources\script.sc") of the test1 project
Publish the project locally, i.e. sbt publishLocal.
It is published to the ".ivy2\local\default\test1_2.12\0.1-SNAPSHOT\ ..." directory.
Now you can use the following Ammonite script "test.sc".
It reads "script.sc" from the jar in the local ivy repository,
writes it to the local directory (which must have read/write access), and then starts an external process
that calls the scala "interpreter" and executes the script.
// test.sc
import $ivy.`default:test1_2.12:0.1-SNAPSHOT`
val scriptCode = scala.util.Try {scala.io.Source.fromResource("script.sc").mkString} getOrElse """println("Script-file not found!")"""
println("*" * 30)
println(scriptCode)
println("*" * 30)
println()
java.nio.file.Files.write(java.nio.file.Paths.get("script.sc"), scriptCode.getBytes(java.nio.charset.StandardCharsets.UTF_8))
val cmd = Seq("cmd.exe", "/c", "scala", "script.sc")
val output = sys.process.Process(cmd).!!
println(output)
Executing the script in the Ammonite REPL, you get:
******************************
// script.sc
println("Hello world!")
******************************
Hello world!
The script has no error handling and leaves the file in the running directory.
You can speed up the execution with the "-savecompiled" compiler switch, i.e.
val cmd = Seq("cmd.exe", "/c", "scala", "-savecompiled", "script.sc")
An additional .jar file is then created in the running directory.
Scala scripts are not really interpreted, but are compiled "under the hood"
like any normal Scala program.
Therefore all code must be reachable at compile time,
and you cannot call a function inside the other script from the jar file!
But Ammonite has a built-in multi-stage feature.
It compiles one part, executes it and then compiles the next part!
Here is a slightly improved Ammonite script.
It's not error free, but it runs.
Maybe there is a better way to get the script out of the jar.
You should ask Li Haoyi!
// test_ammo.sc
// using ammonite ops
// in subdirectoy /test1
// Ammonite REPL:
// import $exec.test1.test_ammo
// # Ammonite-multi-stage
import $ivy.`default::test1:0.1-SNAPSHOT`
//import scala.util.Properties
import scala.sys.process.Process
val scriptFileName = "script.sc"
write.over(pwd/"test1"/scriptFileName, read(resource(getClass.getClassLoader)/scriptFileName))
val cmd = Seq("cmd.exe", "/c", "scala", scriptFileName)
val output = Process(cmd).!!
println(output)
@
import $exec.script // no .sc suffix
ppp() // is a function inside script.sc
script.sc inside resources folder of project
published local with "sbt publishLocal":
// script.sc
println("Hello world!")
def ppp() = println("Hello world from ppp!")
For completeness, I could solve my problem as follows:
Just create a Scala file in this project and copy the script content into an object:
package mycompany.myproject
object LocalBuild {
  def curl(..)...
}
Add the dependencies to your sbt file (e.g. ammonite.ops)
Use it like:
import $ivy.`mycompany:myproject_2.12:2.1.0-SNAPSHOT`, mycompany.myproject.LocalBuild
@main
def doit(): Unit =
  println(LocalBuild.curl("http://localhost:8080"))
I want to use Scala like Python, so I installed a REPL in Sublime Text (the OS is Windows 8).
Every time in the REPL I have to run
scala> :load <my file>
which I find inconvenient.
Also, I can't change
scala> :settings -d <path name>
when the path is a Chinese directory name.
I'm confused about whether it's simply impossible to set the Scala script output directory to a non-English path.
Thanks a lot!
If you use sbt then you can define initial commands when you launch the console.
yourproject/build.sbt:
// build.sbt
name := "initial-commands-example"
initialCommands := "import Foo._"
yourproject/script.scala:
// script.scala
object Foo {
  def hello(name: String) = s"hello $name"
  val msg = hello("world")
}
Inside yourproject, run sbt console, and you will have everything in Foo available inside that repl. See sbt initialCommands docs for more information.
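If the initial commands grow beyond a single import, a multi-line string keeps them readable. A small sketch (the extra imports and values here are placeholders, not from the question):

initialCommands := """
  |import Foo._
  |import scala.collection.mutable
  |val scratch = mutable.ArrayBuffer.empty[Int]
  |""".stripMargin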
I have a Spark Streaming application built with Maven (as jar) and deployed with the spark-submit script. The application project layout follows the standard directory layout:
myApp
    src
        main
            scala
                com.mycompany.package
                    MyApp.scala
                    DoSomething.scala
                    ...
            resources
                aPerlScript.pl
                ...
        test
            scala
                com.mycompany.package
                    MyAppTest.scala
                    ...
    target
        ...
    pom.xml
In the DoSomething.scala object I have a method (let's call it doSomething()) that tries to execute a Perl script -- aPerlScript.pl (from the resources folder) -- using scala.sys.process.Process and passing two arguments to the script (the first one is the absolute path to a binary file used as input, the second one is the path/name of the produced output file). I call then DoSomething.doSomething().
The issue is that I was not able to access the script: not with absolute paths, not with relative paths, not with getClass.getClassLoader.getResource or getClass.getResource, even though I have specified the resources folder in my pom.xml. None of my attempts succeeded, and I don't know how to find the stuff I put in src/main/resources.
I will appreciate any help.
SIDE NOTES:
I use an external Process instead of a Spark pipe because, at this step of my workflow, I must handle binary files as input and output.
I'm using Spark Streaming 1.1.0, Scala 2.10.4 and Java 7. I build the jar with "Maven install" from within Eclipse (Kepler).
When I use the "standard" getClass.getClassLoader.getResource method to access resources, I find that the actual classpath is the spark-submit script's one.
There are a few solutions. The simplest is to use Scala's process infrastructure:
import scala.sys.process._

object RunScript {
  val arg = "some argument"
  val stream = RunScript.getClass.getClassLoader.getResourceAsStream("aPerlScript.pl")
  val ret: Int = (s"/usr/bin/perl - $arg" #< stream).!
}
In this case, ret is the return code for the process and any output from the process is directed to stdout.
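If you also need the script's output in a variable rather than on stdout, a ProcessLogger can be attached. A minimal sketch under the same assumptions (the resource name and argument are placeholders):

import scala.sys.process._

object RunScriptCaptured {
  val arg = "some argument"
  val stream = RunScriptCaptured.getClass.getClassLoader.getResourceAsStream("aPerlScript.pl")
  val out = new StringBuilder
  // Collect stdout lines; forward stderr to the console
  val logger = ProcessLogger(line => out.append(line).append('\n'), err => Console.err.println(err))
  val ret: Int = (s"/usr/bin/perl - $arg" #< stream).!(logger)
  // out.toString now holds whatever the script wrote to stdout
}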
A second (longer) solution is to copy the file aPerlScript.pl from the jar file to some temporary location and execute it from there. This code snippet should have most of what you need.
import java.io.{File, FileOutputStream}
import java.nio.channels.Channels

object RunScript {
  // Set up copy destination from the Java temporary directory. This is /tmp on Linux
  val destDir = System.getProperty("java.io.tmpdir") + "/"
  // Get a stream to the script in the resources dir
  val source = Channels.newChannel(RunScript.getClass.getClassLoader.getResourceAsStream("aPerlScript.pl"))
  val fileOut = new File(destDir, "aPerlScript.pl")
  val dest = new FileOutputStream(fileOut)
  // Copy file to temporary directory
  dest.getChannel.transferFrom(source, 0, Long.MaxValue)
  source.close()
  dest.close()

  // Schedule the file for deletion for when the JVM quits
  sys.addShutdownHook {
    new File(destDir, "aPerlScript.pl").delete()
  }

  // Now you can execute the script.
}
This approach allows you to bundle native libraries in JAR files. Copying them out allows the libraries to be loaded at runtime for whatever JNI mischief you have planned.
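Once the script has been copied out of the jar, invoking it with the two arguments mentioned in the question could look roughly like this (the perl location and the two paths are placeholders, not from the question):

import scala.sys.process._

object RunCopiedScript {
  val destDir = System.getProperty("java.io.tmpdir") + "/"
  val inputPath = "/path/to/input.bin"    // placeholder: absolute path to the binary input
  val outputPath = "/path/to/output.bin"  // placeholder: path/name of the produced output
  // Runs: perl <tmpdir>/aPerlScript.pl <input> <output> and returns the exit code
  val exitCode: Int = Seq("perl", destDir + "aPerlScript.pl", inputPath, outputPath).!
}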
This question may sound a bit stupid, but I couldn't figure out how to start a Scala method from the command line.
I compiled the following file Test.scala :
package example
object Test {
  def print() {
    println("Hello World")
  }
}
with scalac Test.scala.
Then, I can run the method print with scala in two steps:
C:\Users\John\Scala\Examples>scala
Welcome to Scala version 2.9.2 (Java HotSpot(TM) Client VM, Java 1.6.0_32).
Type in expressions to have them evaluated.
Type :help for more information.
scala> example.Test.print
Hello World
But what I would really like to do is run the method directly from the command line with one command, like scala example.Test.print.
How can I achieve this goal?
UPDATE:
The solution suggested by ArikG does not work for me. What am I missing?
C:\Users\John\Scala\Examples>scala -e 'example.Test.print'
C:\Users\John\AppData\Local\Temp\scalacmd1874056752498579477.scala:1: error: unclosed character literal
'example.Test.print'
^
one error found
C:\Users\John\Scala\Examples>scala -e "example.Test.print"
C:\Users\John\AppData\Local\Temp\scalacmd1889443681948722298.scala:1: error: object Test in package example cannot be accessed in package example
example.Test.print
^
one error found
where
C:\Users\John\Scala\Examples>dir example
Volume in drive C has no label.
Volume Serial Number is 4C49-8C7F
Directory of C:\Users\John\Scala\Examples\example
14.08.2012 12:14 <DIR> .
14.08.2012 12:14 <DIR> ..
14.08.2012 12:14 493 Test$.class
14.08.2012 12:14 530 Test.class
2 File(s) 1.023 bytes
2 Dir(s) 107.935.760.384 bytes free
UPDATE 2 - Possible solutions:
As ArikG correctly suggested, scala -e "import example.Test._; print" works well on Windows 7.
See Daniel's answer to get it working without the import statement.
Let me expand on this solution a bit:
scala -e 'example.Test.print'
Instead, try:
scala -cp path-to-the-target-directory -e 'example.Test.print'
Where the target directory is the directory that scalac used as the destination for whatever it compiled. In your example, it is not C:\Users\John\Scala\Examples\example, but C:\Users\John\Scala\Examples. The directory example is where Scala will look for classes belonging to the package example.
This is why things did not work: it expected to find the package example under a directory example, but there was no such directory under the current directory in which you ran scala, and the class files that were present in the current directory were expected to be in the default package.
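Concretely, with the layout from the question, something along these lines should work (exact quoting depends on the Windows shell, as the UPDATE above shows):

C:\Users\John\Scala\Examples>scala -cp C:\Users\John\Scala\Examples -e "example.Test.print"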
The best way to do this is to extend App, which is a slightly special class (or at least DelayedInit, which underlies it, is):
package example

object Test extends App {
  println("Hello World")
}
It's still possible to add methods to this as well; the body of the object is executed on startup.
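For reference, after compiling this version with scalac, the object can be run directly by name instead of going through -e (a quick sketch, assuming the class files end up under the current directory):

C:\Users\John\Scala\Examples>scalac Test.scala
C:\Users\John\Scala\Examples>scala example.Test
Hello World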
Here you go:
scala -e 'example.Test.print'