Why is the main function not running in the REPL? - scala

This is a simple program. I expected main to run in interpreted mode, but the presence of another object causes it to do nothing. If the QSort object were not present, the program would have executed.
Why is main not called when I run this in the REPL?
object MainObject {
  def main(args: Array[String]) = {
    val unsorted = List(8, 3, 1, 0, 4, 6, 4, 6, 5)
    print("hello" + unsorted.toString)
    //val sorted = QSort(unsorted)
    //sorted foreach println
  }
}
//this must not be present
object QSort {
  def apply(array: List[Int]): List[Int] = {
    array
  }
}
EDIT: Sorry for causing confusion, I am running the script as scala filename.scala.

What's happening
If the parameter to scala is an existing .scala file, it is compiled in memory and run. When there is a single top-level object, a main method is searched for and, if found, executed. Otherwise, the top-level statements are wrapped in a synthetic main method, which is executed instead.
This is why removing the top-level QSort object allows your main method to run.
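For instance, a file containing only the single object runs as you expected (a minimal sketch; the file name main.scala is mine):
// main.scala -- a single top-level object, so scala runs its main method
object MainObject {
  def main(args: Array[String]): Unit = {
    val unsorted = List(8, 3, 1, 0, 4, 6, 4, 6, 5)
    print("hello" + unsorted.toString)
  }
}
Running scala main.scala prints the greeting and the list.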
If you're going to expand this into a full program, I advise compiling it (with a build tool like sbt) and running the compiled .class files:
scalac main.scala && scala MainObject
If you're writing a single-file script, just drop the main method (and its object) and write the statements you want executed in the outer scope, like:
// qsort.scala
object QSort {
  def apply(array: List[Int]): List[Int] = {
    array
  }
}

val unsorted = List(8, 3, 1, 0, 4, 6, 4, 6, 5)
print("hello" + unsorted.toString)
val sorted = QSort(unsorted)
sorted foreach println
and run with: scala qsort.scala
A little context
The scala command is meant for executing both Scala "scripts" (single-file programs) and complex Java-like programs (with a main object and a bunch of classes on the classpath).
From man scala:
The scala utility runs Scala code using a Java runtime environment.
The Scala code to run is specified in one of three ways:
1. With no arguments specified, a Scala shell starts and reads commands interactively.
2. With -howtorun:object specified, the fully qualified name of a top-level Scala object may be specified. The object should previously have been compiled using scalac(1).
3. With -howtorun:script specified, a file containing Scala code may be specified.
If not explicitly specified, the howtorun mode is guessed from the arguments passed to scala.
When given a fully qualified name of an object, scala will guess -howtorun:object and expect a compiled object with that name on the path.
Otherwise, if the parameter to scala is an existing .scala file, -howtorun:script is guessed and the entry point is selected as described above.
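If the guess goes wrong, you can also force the mode explicitly with the flags quoted from the man page above, for example:
scala -howtorun:object MainObject
scala -howtorun:script qsort.scala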

Any method of an object can be run in the REPL by calling it explicitly and giving it the arguments it requires, if any. For example:
scala> object MainObject{
| def main(args: Array[String])={
| val unsorted = List(9,3,1,0,7,5,9,3,11)
| print("sorted: " + unsorted.sorted)
| }
| def fun = println("fun here")
| }
defined module MainObject
scala> MainObject.main(Array(""))
sorted: List(0, 1, 3, 3, 5, 7, 9, 9, 11)
scala> MainObject.fun
fun here
In some cases this can be useful for quick testing and troubleshooting.

Related

How to pass variable arguments to my scala program?

I am very new to Scala and Spark. Here I have a word-count program wherein I pass the input file as an argument instead of hardcoding it and reading it. But when I run the program I get an error Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0.
I think it's because I have not supplied the argument that main expects, but I don't know how to do so.
I tried running the program as-is and also tried changing the run configurations. I do not know how to pass the filename (in code) as an argument in my main class.
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.types.{StructType, StructField, StringType}
import org.apache.spark.sql.Row

object First {
  def main(args: Array[String]): Unit = {
    val filename = args(0)
    val cf = new SparkConf().setAppName("Tutorial").setMaster("local")
    val sc = new SparkContext(cf)
    val input = sc.textFile(filename)
    val w = input.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    w.collect.foreach(println)
    w.saveAsTextFile(args(1))
  }
}
I wish to run this program by passing the right arguments (the input file and the output location as arguments) to my main class. I am using the Scala Eclipse IDE. I do not know what changes to make in my program; please help me out here as I am new.
In the run configuration for the project, there is a tab right next to 'Main' called '(x)= Arguments', where you can pass in arguments to main in the 'Program Arguments' section.
Additionally, you may print args.length to see the number of arguments your code is actually receiving after doing the above.
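As a sketch, a defensive check at the top of main makes the failure mode obvious instead of throwing ArrayIndexOutOfBoundsException (the message text is mine):
// hypothetical guard at the start of main: fail fast with a clear message
if (args.length < 2) {
  System.err.println(s"expected 2 arguments (input file, output dir), got ${args.length}")
  sys.exit(1)
}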
It appears you are running Spark on Windows, so I'm not sure if this will work exactly as-is, but you can definitely pass arguments like in any normal command-line application. The only difference is that you have to pass the arguments AFTER specifying the Spark-related parameters.
For example, if the JAR filename is the.jar and the main object is com.obrigado.MyMain, then you could run a Spark submit job like so: spark-submit --class com.obrigado.MyMain the.jar path/to/inputfile. I believe args(0) should then be path/to/inputfile.
However, like any command-line program, it's generally better to use POSIX-style arguments (or at least named arguments), and there are several good argument-parsing libraries out there. Personally, I love using Scallop as it's easy to use and doesn't seem to interfere with Spark's own CLI parsing.
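A rough sketch of what Scallop usage could look like (the class name Conf and the option names are mine; check Scallop's documentation for the current API):
import org.rogach.scallop._

// hypothetical CLI definition: trailArg captures positional arguments in order
class Conf(arguments: Seq[String]) extends ScallopConf(arguments) {
  val input = trailArg[String](descr = "input file to read")
  val output = trailArg[String](descr = "output directory to write to")
  verify()
}

object First {
  def main(args: Array[String]): Unit = {
    val conf = new Conf(args)
    val filename = conf.input() // typed access instead of raw args(0)
    // ... rest of the Spark job as before
  }
}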
Hopefully this fixes your issue!

Preserve method parameter names in scala macro

I have an interface:
trait MyInterface {
  def doSomething(usefulName: Int): Unit
}
I have a macro that iterates over the methods of the interface and does stuff with the method names and parameters. I access the parameter names by doing something like this:
val tpe = typeOf[MyInterface]
// Get lists of parameter names for each method
val listOfParamLists = tpe.decls
  .filter(_.isMethod)
  .map(_.asMethod.paramLists.head.map(sym => sym.asTerm.name))
If I print out the names for doSomething's parameters, usefulName has become x$1. Why is this happening and is there a way to preserve the original parameter names?
I am using scala version 2.11.8, macros paradise version 2.1.0, and the blackbox context.
The interface is actually java source in a separate sbt project that I control. I have tried compiling with:
javacOptions in (Compile, compile) ++= Seq("-target", "1.8", "-source", "1.8", "-parameters")
The -parameters flag is supposed to preserve the names, but I still get the same result as before.
This has nothing to do with macros and everything to do with Scala's runtime reflection system. In a nutshell, Java 8 and Scala 2.11 both wanted to be able to look up parameter names and each implemented their reflection system to do it.
This works just fine if everything is Scala and you compile it together (duh!). Problems arise when you have a Java class that has to be compiled separately.
Observations and Problem
The first thing to notice is that the -parameters flag only exists since Java 8, which is about as old as Scala 2.11. So Scala 2.11 is probably not using this feature to look up method names... Consider the following:
MyInterface.java compiled with javac -parameters MyInterface.java
public interface MyInterface {
    public int doSomething(int bar);
}
MyTrait.scala compiled with scalac MyTrait.scala
trait MyTrait {
  def doSomething(bar: Int): Int
}
Then, we can use MethodParameterSpy to inspect the parameter name information that the Java 8 -parameters flag is supposed to give us. Running it on the Java-compiled interface, we get (and here I abbreviated some of the output):
public abstract int MyInterface.doSomething(int)
Parameter name: bar
but on the Scala-compiled trait, we only get
public abstract int MyTrait.doSomething(int)
Parameter name: arg0
Yet, Scala has no problem looking up its own parameter names. That tells us that Scala is actually not relying on this Java 8 feature at all - it constructs its own runtime system for keeping track of parameter names. Then, it comes as no surprise that this doesn't work for classes from Java sources. It generates the names x$1, x$2, ... as placeholders, the same way that Java 8 reflection generates the names arg0, arg1, ... as placeholders when we inspected a compiled Scala trait. (And if we had not passed -parameters, it would have generated those names even for MyInterface.java.)
Solution
The best solution (that works in 2.11) I can come up with to get the parameter names of a Java class is to use Java reflection from Scala. Something like
$ javac -parameters MyInterface.java
$ jar -cf MyInterface.jar MyInterface.class
$ scala -cp MyInterface.jar
scala> :pa
// Entering paste mode (ctrl-D to finish)
import java.lang.reflect._
Class.forName("MyInterface")
.getDeclaredMethods
.map(_.getParameters.map(_.getName))
// Exiting paste mode, now interpreting.
res: Array[Array[String]] = Array(Array(bar))
Of course, this will only work if the class was compiled with the -parameters flag (else you get arg0).
I should probably also mention that if you don't know whether your method was compiled from Java or from Scala, you can always call .isJava (for example: typeOf[MyInterface].decls.filter(_.isMethod).head.isJava) and then branch on that to either your initial solution or what I propose above.
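Putting the two together, a rough sketch of that branching (the helper name paramNames is mine, the method lookup by name is simplified - it assumes no overloads - and the class must be on the runtime classpath):
import scala.reflect.runtime.universe._

def paramNames(tpe: Type): Iterable[List[String]] =
  tpe.decls.filter(_.isMethod).map { sym =>
    if (sym.isJava) {
      // Java reflection honours javac's -parameters flag
      val cls = Class.forName(tpe.typeSymbol.fullName)
      val method = cls.getDeclaredMethods.find(_.getName == sym.name.toString).get
      method.getParameters.map(_.getName).toList
    } else {
      // Scala reflection keeps the source names for Scala-compiled code
      // (like the original, this assumes each method has a parameter list)
      sym.asMethod.paramLists.head.map(_.asTerm.name.toString)
    }
  }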
Future
Mercifully, this is all a thing of the past in Scala 2.12. If I am correctly reading this ticket, that means that in 2.12 your code will work for Java classes compiled with -parameters, and my Java reflection hack will also work for Scala classes.
All's well that ends well?

The difference between scala script and application

What is the difference between a Scala script and a Scala application? Please provide an example.
The book I am reading says that a script must always end in a result expression, whereas an application ends in a definition. Unfortunately, no clear example is shown.
Please help clarify this for me.
I think what the author means is that a regular Scala file needs to define a class or an object in order to work/be useful; you can't use top-level expressions (because the entry points to a compiled file are pre-defined). For example:
println("foo")
object Bar {
// Some code
}
The println statement is invalid at the top level of a .scala file, because the only logical interpretation would be to run it at compile time, which doesn't really make sense.
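To make that statement compile in a regular .scala file, it has to live inside an entry point, e.g. (a minimal sketch):
object Bar {
  def main(args: Array[String]): Unit =
    println("foo")
}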
Scala scripts, in contrast, can contain expressions at the top level, because those are executed when the script is run, which makes sense again. If a Scala script file only contains definitions, on the other hand, it would be useless as well, because the script wouldn't know what to do with them. If you use the definitions in some way, however, that's okay again, e.g.:
object Foo {
  def bar = "test"
}

println(Foo.bar)
The latter is valid as a Scala script, because the last statement is an expression using the previous definition, not a definition itself.
Comparison
Features of scripts:
Like applications, scripts get compiled before running. Actually, the compiler translates scripts to applications before compiling, as shown below.
No need to run the compiler yourself - scala does it for you when you run your script.
The feel is very similar to scripting languages like Bash, Python, or Ruby - you directly see the results of your edits, and get a very quick debug cycle.
You don't need to provide a main method, as the compiler will add one for you.
Scala scripts tend to be useful for smaller tasks that can be implemented in a single file.
Scala applications, on the other hand, are much better suited when your project starts to grow more complex. They allow you to split tasks into different files and namespaces, which is important for maintaining clarity.
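As a rough sketch of that multi-file layout (the file, package, and object names here are mine):
// util/StringUtils.scala
package util

object StringUtils {
  def shout(s: String): String = s.toUpperCase + "!"
}

// Main.scala
object Main {
  def main(args: Array[String]): Unit =
    println(util.StringUtils.shout("hello"))
}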
Example
If you write the following script:
#!/usr/bin/env scala
println("foo")
Scala 2.11.1 compiler will pretend (source on github) you had written:
object Main {
  def main(args: Array[String]): Unit =
    new AnyRef {
      println("foo")
    }
}
Well, I always thought this is a Scala script:
$ cat script
#!/usr/bin/scala
!#
println("Hello, World!")
Run it simply with:
$ ./script
An application, on the other hand, has to be compiled to .class files and executed explicitly using the Java runtime.
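For comparison, the application version of the same one-liner and its workflow look like this:
// Hello.scala
object Hello {
  def main(args: Array[String]): Unit = println("Hello, World!")
}
compiled and run with:
scalac Hello.scala
scala Hello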

What are the limitations and workarounds of the Scala interpreter?

What kinds of constructs need a 'scalac' compile, and how can I make an equivalent that will work in the interpreter?
Edit: I want to use Scala instead of Python as a scripting language (with #!/usr/bin/scala).
You ought to be able to do anything in the REPL that you can do in outside code. Keep in mind that:
Things that refer to each other circularly need to be inside one block. So the following can't be entered as-is; you have to wrap it inside some other object:
class C(i: Int) { def succ = C(i + 1) }
object C { def apply(i: Int) = new C(i) }
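One workaround is the REPL's :paste command, which compiles everything entered in one go as a single unit, so a class and its companion object really are companions. A session might look like:
scala> :paste
// Entering paste mode (ctrl-D to finish)

class C(i: Int) { def succ = C(i + 1) }
object C { def apply(i: Int) = new C(i) }

// Exiting paste mode, now interpreting.

defined class C
defined object C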
The execution environment is somewhat different, so benchmarking timings will not always come out the same way as if you run them from compiled code.
You enter the execution path a different way; if you want to call a main method, though, you certainly can from inside the REPL.
You can't just cut-and-paste an entire library into the REPL and have it work exactly like the library did; the REPL has a different structure than normal packages do. So drop the "package" declarations during testing.
EDIT: after reading your question again, I have to admit, that I didn't really answer it ;). But maybe it still helps.
I know of two limitations of the interpreter (or REPL) when it comes to loading Scala files (in order to test them interactively):
You can't load Scala files with package definitions in them. The REPL complains and does not load all the classes in the Scala file. I read that it has to do with the fact that files loaded into the REPL are treated as an object... which of course can't have any package definitions in it.
The REPL is strange (or a little unpredictable) when there are class files of loaded Scala files on the classpath. Check out this question of mine, and especially my last 2 comments on the second answer.
Furthermore, there is a problem with circular dependencies that I don't know a workaround for: suppose there is a class A that uses a class B which in turn needs A to do its job. Of course you can't define A since there is no definition of B, and vice versa. Providing a dummy for one of them doesn't work either:
scala> class A {
| def alterString(s:String) = s
| def printStuff(s:String) = println(alterString(s))
| }
defined class A
scala> class B {
| val prefix = "this is a test: "
| def doJob() = new A() printStuff "1 2 3"
| }
defined class B
scala> class A {
| def alterString(s:String) = new B().prefix + s
| def printStuff(s:String) = println(alterString(s))
| }
defined class A
scala> new B().doJob()
1 2 3
scala>
Although I had already provided a newer definition of A, class B still used the one that was present when B was defined.
I think everything in Scala will work from the interpreter as well (which I believe just calls the compiler under the hood anyway).
Do you suspect something will not work? Or is this a trick/interview question? (I suppose anything that wants to interact directly with the class loader or class files may behave differently in the two environments, but I have no idea really.)

How to reload a class or package in Scala REPL?

I almost always have a Scala REPL session or two open, which makes it very easy to give Java or Scala classes a quick test. But if I change a class and recompile it, the REPL continues with the old one loaded. Is there a way to get it to reload the class, rather than having to restart the REPL?
Just to give a concrete example, suppose we have the file Test.scala:
object Test { def hello = "Hello World" }
We compile it and start the REPL:
~/pkg/scala-2.8.0.Beta1-prerelease$ bin/scala
Welcome to Scala version 2.8.0.Beta1-prerelease
(Java HotSpot(TM) Server VM, Java 1.6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> Test.hello
res0: java.lang.String = Hello World
Then we change the source file to
object Test {
  def hello = "Hello World"
  def goodbye = "Goodbye, Cruel World"
}
but we can't use it:
scala> Test.goodbye
<console>:5: error: value goodbye is not a member of object Test
Test.goodbye
^
scala> import Test;
<console>:1: error: '.' expected but ';' found.
import Test;
There is an alternative to reloading the class if the goal is to not have to repeat previous commands. The REPL has the command
:replay
which restarts the REPL environment and plays back all previous valid commands. (The invalid ones are skipped, so if it was wrong before, it won't suddenly work.) When the REPL is reset, it does reload classes, so new commands can use the contents of recompiled classes (in fact, the old commands will also use those recompiled classes).
This is not a general solution, but is a useful shortcut to extend an individual session with re-computable state.
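After recompiling Test.scala, a session might look roughly like this (the exact output format varies by version):
scala> :replay
Replaying: Test.hello
res0: java.lang.String = Hello World

scala> Test.goodbye
res1: java.lang.String = Goodbye, Cruel World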
Note: this applies to the bare Scala REPL. If you run it from SBT or some other environment, it may or may not work depending on how SBT or the other environment packages up classes--if you don't update what is on the actual classpath being used, of course it won't work!
Class reloading is not an easy problem. In fact, it's something that the JVM makes very difficult. You do have a couple options though:
Start the Scala REPL in debug mode. The JVM debugger has some built-in reloading which works on the method level. It won't help you with the case you gave, but it would handle something simple like changing a method implementation.
Use JRebel (http://www.zeroturnaround.com/jrebel). JRebel is basically a super-charged class reloading solution for the JVM. It can handle member addition/removal, new/removed classes, definition changes, etc. Just about the only thing it can't handle is changes in the class hierarchy (adding a super-interface, for example). It's not a free tool, but they do offer a complimentary license which is limited to Scala compilation units.
Unfortunately, both of these are limited by the Scala REPL's implementation details. I use JRebel, and it usually does the trick, but there are still cases where the REPL will not reflect the reloaded class(es). Still, it's better than nothing.
There is a command that meets your requirement:
:load path/to/file.scala
which will reload the Scala source file and recompile it to classes; then you can replay your code.
This works for me....
If your new source file Test.scala looks something like this...
package com.tests

object Test {
  def hello = "Hello World"
  def goodbye = "Goodbye, Cruel World"
}
You first have to load the new changes into Scala console (REPL).
:load src/main/scala/com/tests/examples/Test.scala
Then re-import the package so you can reference the new code in Scala console.
import com.tests.Test
Now enjoy your new code without restarting your session :)
scala> Test.goodbye
res0: String = Goodbye, Cruel World
If the .scala file is in the directory where you start the REPL, you can omit the full path: just type :load myfile.scala, and then import.