The entry point of some.scala is defined below:
object MyApp extends App {
  println("Hello, World!")
}
If I run
$ scala some.scala
Scala exits quietly without printing anything. If I instead compile it via
$ scalac some.scala
...
MyApp.class
MyApp$delayedInit$body.class
...
If I then run
$ scala MyApp
it works.
Does the delayedInit class above prevent case 1 from running?
From the scala man page:
If -howtorun: is left as the default (guess), then the scala command
will check whether a file of the specified name exists. If it does,
then it will treat it as a script file ...
So in your case scala is processing some.scala as a script file, which is not much different from typing it into the REPL. It will define the object MyApp but won't execute it. Try putting a single line in some.scala:
println("Hello, World!")
and run it as scala some.scala
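Alternatively, if you want to keep the object definition, you can call it explicitly at the end of the script. A minimal sketch, assuming the Scala 2 script runner evaluates top-level statements in order:
// some.scala, run with: scala some.scala
object MyApp extends App {
  println("Hello, World!")
}

// explicit call so that the App body is actually executed
MyApp.main(Array.empty[String])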
Related
In the book, Programming in Scala 5th Edition, the author says the following for two classes:
Neither ChecksumAccumulator.scala nor Summer.scala are scripts, because they end in a definition. A script, by contrast, must end in a result expression.
The ChecksumAccumulator.scala is as follows:
import scala.collection.mutable

class CheckSumAccumulator:
  private var sum = 0
  def add(b: Byte): Unit = sum += b
  def checksum(): Int = ~(sum & 0xFF) + 1

object CheckSumAccumulator:
  private val cache = mutable.Map.empty[String, Int]
  def calculate(s: String): Int =
    if cache.contains(s) then
      cache(s)
    else
      val acc = new CheckSumAccumulator
      for c <- s do
        acc.add((c >> 8).toByte)
        acc.add(c.toByte)
      val cs = acc.checksum()
      cache += (s -> cs)
      cs
whereas the Summer.scala is as follows:
import CheckSumAccumulator.calculate

object Summer:
  def main(args: Array[String]): Unit =
    for arg <- args do
      println(arg + ": " + calculate(arg))
But when I run the Summer.scala file, I get a different error from the one mentioned by the author:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scala Summer.scala
-- [E006] Not Found Error: /Users/avirals/dev/learning-scala/Summer.scala:1:7 --
1 |import CheckSumAccumulator.calculate
| ^^^^^^^^^^^^^^^^^^^
| Not found: CheckSumAccumulator
longer explanation available when compiling with `-explain`
1 error found
Error: Errors encountered during compilation
➜ learning-scala git:(main)
The author suggests that the error would be about the file not ending in a result expression.
I also tried to compile CheckSumAccumulator only and then run Summer.scala as a script without compiling it:
➜ learning-scala git:(main) ./scala3-3.0.0-RC3/bin/scalac CheckSumAccumulator.scala
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala
<No output, given no input>
➜ learning-scala git:(main) ✗ ./scala3-3.0.0-RC3/bin/scala Summer.scala Summer of love
Summer: -121
of: -213
love: -182
It works.
Obviously, when I compile both and then run Summer.scala, it works as expected. However, the distinction between treating Summer.scala as a script and as a normal file is unclear to me.
Let's start top-down...
The most regular way to compile Scala is to use a build tool like SBT/Maven/Mill/Gradle/etc. The build tool helps with a few things: downloading dependencies/libraries, downloading the Scala compiler (optionally), setting up the classpath and, most importantly, running the scalac compiler and passing all flags to it. Additionally it can package compiled class files into JARs and other formats and do much more. The most relevant parts here are the classpath and the compilation flags.
If you strip off the build tool you can compile your project by manually invoking scalac with all required arguments and making sure your working directory matches the package structure, i.e. you are in the right directory. This can be tedious because you need to download all libraries manually and make sure they are on the classpath.
So far, the build tool and manual compiler invocation are very similar to what you can also do in Java.
If you want an ad-hoc way of running some Scala code, there are two options: scala lets you run scripts or the REPL by simply compiling your uncompiled code before executing it.
However, there are some caveats. Essentially the REPL and scripts are the same: Scala wraps your code in some anonymous object and then runs it. This way you can write any expression without having to follow the convention of defining a main method or extending the App trait (which provides main). It will compile the script you are trying to run, but it will have no idea about imported classes. You can either compile them beforehand or make one large script that contains all the code. Of course, if it starts getting too large it's time to make a proper project.
So in a sense there is no such thing as a script vs a normal file, because both contain Scala code. The file you are running with scala is a script if it's uncompiled code (XXX.scala) and a "normal" compiled class (XXX.class) otherwise. If you ignore the object wrapping mentioned above, the rest is the same; only the steps to compile and run them differ.
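To make the wrapping concrete, here is a rough sketch of what the Scala 2 script runner generates for a one-line script; this is only an approximation, the actual generated names and plumbing differ:
// what println("Hello, World!") in some.scala roughly turns into
object Main {
  def main(args: Array[String]): Unit = {
    new AnyRef {
      // the original script body runs as the initializer of this anonymous object
      println("Hello, World!")
    }
  }
}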
Here is the traditional Scala 2.x runner code snippet with all possible options:
def runTarget(): Option[Throwable] = howToRun match {
  case AsObject =>
    ObjectRunner.runAndCatch(settings.classpathURLs, thingToRun, command.arguments)
  case AsScript if isE =>
    ScriptRunner(settings).runScriptText(combinedCode, thingToRun +: command.arguments)
  case AsScript =>
    ScriptRunner(settings).runScript(thingToRun, command.arguments)
  case AsJar =>
    JarRunner.runJar(settings, thingToRun, command.arguments)
  case Error =>
    None
  case _ =>
    // We start the repl when no arguments are given.
    if (settings.Wconf.isDefault && settings.lint.isDefault) {
      // If user is agnostic about -Wconf and -Xlint, enable -deprecation and -feature
      settings.deprecation.value = true
      settings.feature.value = true
    }
    val config = ShellConfig(settings)
    new ILoop(config).run(settings)
    None
}
This is what's getting invoked when you run scala.
In Dotty/Scala 3 the idea is similar but split into multiple classes, and the classpath logic might be different: REPL, Script runner. The script runner invokes the REPL.
I have an Ammonite script that I want to deliver in a JAR.
In another project I want to use this script, but so far with no success.
I tried according to the documentation (sol_local_build.sc):
import $ivy.`mycompany:myproject_2.12:2.1.0-SNAPSHOT`, local_build
@main
def doit(): Unit =
  println(local_build.curl("http://localhost:8080"))
local_build.sc is the script in the JAR that I want to use.
This is the exception I get:
sol_local_build.sc:2: '.' expected but eof found.
^
The script must be compiled on the fly.
Put your script in a standard sbt project
inside a directory, for example a directory named "test1".
Put your external script (example name: "script.sc")
// script.sc
println("Hello world!")
into the resources directory ("test1\src\main\resources\script.sc") of the test1 project.
Publish the project locally, i.e. sbt publishLocal.
It is published to the ".ivy2\local\default\test1_2.12\0.1-SNAPSHOT\ ..." directory.
Now you can use the following Ammonite script, "test.sc".
It reads "script.sc" from the jar in the local ivy repository,
writes it to the local directory (must have read/write access), and then executes an external process
which calls the scala "interpreter" and executes the script.
// test.sc
import $ivy.`default:test1_2.12:0.1-SNAPSHOT`
val scriptCode = scala.util.Try {scala.io.Source.fromResource("script.sc").mkString} getOrElse """println("Script-file not found!")"""
println("*" * 30)
println(scriptCode)
println("*" * 30)
println()
java.nio.file.Files.write(java.nio.file.Paths.get("script.sc"), scriptCode.getBytes(java.nio.charset.StandardCharsets.UTF_8))
val cmd = Seq("cmd.exe", "/c", "scala", "script.sc")
val output = sys.process.Process(cmd).!!
println(output)
Executing the script in the Ammonite REPL, you get:
******************************
// script.sc
println("Hello world!")
******************************
Hello world!
The script has no error handling and leaves the file in the running directory.
You can speed up the execution with the "-savecompiled" compiler switch, i.e.:
val cmd = Seq("cmd.exe", "/c", "scala", "-savecompiled", "script.sc")
An additional .jar file is created then in the running directory.
Scala scripts are not really interpreted, but are compiled "under the hood"
like every normal Scala program.
Therefore all code must be reachable at compile time,
and you cannot call a function defined inside the other script from the jar file!
But Ammonite has a built-in multi-stage feature.
It compiles one part, executes it, and then compiles the next part!
A slightly improved Ammonite script.
It's not error-free but it runs.
Maybe there is a better way to get the script out of the jar.
You should ask Li Haoyi!
// test_ammo.sc
// using ammonite ops
// in subdirectoy /test1
// Ammonite REPL:
// import $exec.test1.test_ammo
// @ Ammonite-multi-stage
import $ivy.`default::test1:0.1-SNAPSHOT`
//import scala.util.Properties
import scala.sys.process.Process
val scriptFileName = "script.sc"
write.over(pwd/"test1"/scriptFileName, read(resource(getClass.getClassLoader)/scriptFileName))
val cmd = Seq("cmd.exe", "/c", "scala", scriptFileName)
val output = Process(cmd).!!
println(output)
@
import $exec.script // no .sc suffix
ppp() // is a function inside script.sc
script.sc inside the resources folder of the project,
published locally with "sbt publishLocal":
// script.sc
println("Hello world!")
def ppp() = println("Hello world from ppp!")
For completeness, I could solve my problem as follows:
just create a Scala file in this project
and copy the script content into an object.
package mycompany.myproject

object LocalBuild {
  def curl(..)...
}
Add the dependencies to your sbt file (e.g. ammonite.ops)
Use it like:
import $ivy.`mycompany:myproject_2.12:2.1.0-SNAPSHOT`, mycompany.myproject.LocalBuild
@main
def doit(): Unit =
  println(LocalBuild.curl("http://localhost:8080"))
I'm studying Advanced Analytics with Spark.
Here's what happens: I follow the tutorial on spark-shell, and I type pretty long lines of code into it. When I close the lid of my laptop, it goes to sleep, and when I turn it back on, the code is gone.
As a solution, as suggested in the book, I am trying to put my code in a .scala file, compile it into a JAR, and load it whenever I restart spark-shell. The book even provides a simple example of that: https://github.com/sryza/aas/tree/master/simplesparkproject
So I git cloned the project, ran mvn package, and started spark-shell with spark-shell --jars target/simplesparkproject-0.0.1.jar --master local, just as in the directions.
If you see the git repo for this example, the code contains an object MyApp with two functions in it.
package com.cloudera.datascience

import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("My App"))
    println("num lines: " + countLines(sc, args(0)))
  }

  def countLines(sc: SparkContext, path: String): Long = {
    sc.textFile(path).count()
  }
}
From what I understood, this object and its functions should be referenceable in spark-shell because the JAR was specified with the --jars option.
However, when I type MyApp on the spark-shell,
scala> MyApp
<console>:23: error: not found: value MyApp
MyApp
^
What am I doing wrong, and how can I make this work?
Just import the object and call the required methods. Note that main expects an Array[String], so pass the arguments explicitly (the path below is a placeholder):
import com.cloudera.datascience.MyApp
MyApp.main(Array("path/to/input.txt"))
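Since spark-shell already provides a SparkContext as sc, you can equally call the helper directly. A small sketch; the input path is just a placeholder:
import com.cloudera.datascience.MyApp

// sc is the SparkContext created by spark-shell
val n = MyApp.countLines(sc, "some/input.txt")
println("num lines: " + n)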
I want to use Jcurses with Scala on a 64-bit Ubuntu.
Unfortunately I didn't find any tutorial on this subject. Can anybody help me?
My test program "testjcurses.scala"
import jcurses.system._

object TestJcurses {
  def main(args: Array[String]) {
    println("okay")
    Toolkit.init()
  }
}
I processed it the following way:
fsc -cp ~/software/Java/jcurses/lib/jcurses.jar:~/software/Java/jcurses/src -d . -Djava.library.path=~/software/Java/jcurses/lib testjcurses.scala
scala -cp ~/software/Java/jcurses/lib/jcurses.jar:~/software/Java/jcurses/src:. -Djava.library.path=~/software/Java/jcurses/lib TestJcurses
The result is:
okay
java.lang.NullPointerException
at jcurses.system.Toolkit.getLibraryPath(Toolkit.java:97)
at jcurses.system.Toolkit.<clinit>(Toolkit.java:37)
at TestJcurses$.main(testjcurses.scala:9)
at TestJcurses.main(testjcurses.scala)
..........
Can anybody help me?
Unfortunately you can't use ~ in bash like that: ~ is expanded to your home directory only right after an (unquoted) space (technically, at the beginning of a bash word, but "after a space" is the simple version). Look at how your command line is expanded:
$ echo scala -cp ~/software/Java/jcurses/lib/jcurses.jar:~/software/Java/jcurses/src:. -Djava.library.path=~/software/Java/jcurses/lib TestJcurses
scala -cp /Users/pgiarrusso/software/Java/jcurses/lib/jcurses.jar:~/software/Java/jcurses/src:. -Djava.library.path=~/software/Java/jcurses/lib TestJcurses
As you can see, the ~ after the colon and the one after = are still there in the expanded version and will arrive unchanged at your program, which cannot interpret them, since tilde expansion is a job for the shell.
Also, you shouldn't need the source directory ~/software/Java/jcurses/src in your classpath (since source files aren't needed to run the program). So try:
scala -cp ~/software/Java/jcurses/lib/jcurses.jar:. -Djava.library.path=$HOME/software/Java/jcurses/lib TestJcurses
This question may sound a bit stupid, but I couldn't figure out how to start a Scala method from the command line.
I compiled the following file Test.scala :
package example

object Test {
  def print() {
    println("Hello World")
  }
}
with scalac Test.scala.
Then, I can run the method print with scala in two steps:
C:\Users\John\Scala\Examples>scala
Welcome to Scala version 2.9.2 (Java HotSpot(TM) Client VM, Java 1.6.0_32).
Type in expressions to have them evaluated.
Type :help for more information.
scala> example.Test.print
Hello World
But what I really like to do is, to run the method directly from the command line with one command like scala example.Test.print.
How can I achieve this goal?
UPDATE:
The solution suggested by ArikG does not work for me. What am I missing?
C:\Users\John\Scala\Examples>scala -e 'example.Test.print'
C:\Users\John\AppData\Local\Temp\scalacmd1874056752498579477.scala:1: error: u
nclosed character literal
'example.Test.print'
^
one error found
C:\Users\John\Scala\Examples>scala -e "example.Test.print"
C:\Users\John\AppData\Local\Temp\scalacmd1889443681948722298.scala:1: error: o
bject Test in package example cannot be accessed in package example
example.Test.print
^
one error found
where
C:\Users\John\Scala\Examples>dir example
Volume in drive C has no label.
Volume Serial Number is 4C49-8C7F
Directory of C:\Users\John\Scala\Examples\example
14.08.2012 12:14 <DIR> .
14.08.2012 12:14 <DIR> ..
14.08.2012 12:14 493 Test$.class
14.08.2012 12:14 530 Test.class
2 File(s) 1.023 bytes
2 Dir(s) 107.935.760.384 bytes free
UPDATE 2 - Possible SOLUTIONs:
As ArikG correctly suggested, scala -e "import example.Test._; print" works well on Windows 7.
See Daniel's answer to get it to work without the import statement.
Let me expand on this solution a bit:
scala -e 'example.Test.print'
Instead, try:
scala -cp path-to-the-target-directory -e 'example.Test.print'
Where the target directory is the directory that scalac used as the destination for whatever it compiled. In your example, it is not C:\Users\John\Scala\Examples\example, but C:\Users\John\Scala\Examples. The directory example is where Scala will look for classes belonging to the package example.
This is why things did not work: Scala expected to find the package example under a directory example, but there was no such directory under the current directory in which you ran scala, and the class files that were present in the current directory were expected to be in the default package.
The best way to do this is to extend App, which is a slightly special trait (or at least DelayedInit, which underlies it, is):
package example

object Test extends App {
  println("Hello World")
}
It's still possible to add methods to this as well; the body of the object is executed on startup.
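For instance, a minimal sketch combining a method with the App body (the method name and greeting are just illustrative):
package example

object Test extends App {
  def print() = println("Hello World")
  // the object body runs on startup, so this executes when you run: scala example.Test
  print()
}
Compile it with scalac Test.scala and run it with scala example.Test.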
Here you go:
scala -e 'example.Test.print'