Evaluating Scala with Twitter Eval and scala-notebook

I've been using Twitter's Eval to compile Scala functions, but I would prefer to evaluate functions the way the scala-notebook project does (https://github.com/Bridgewater/scala-notebook).
Is scala-notebook hooking into the internal REPL of the machine it's running on?
How is it different from the Twitter Eval library (https://twitter.github.io/util/docs/index.html#com.twitter.util.Eval)?

Both scala-notebook and twitter-eval use the scala-compiler tool under the hood to compile and interpret text as Scala code, so technically there is no difference between the two with regard to how they compile source code.
To shed some light on how they both do that, check out the following files:
scala-notebook: https://github.com/Bridgewater/scala-notebook/blob/master/kernel/src/main/scala/com/bwater/notebook/kernel/Repl.scala
twitter-eval: https://github.com/twitter/util/blob/develop/util-eval/src/main/scala/com/twitter/util/Eval.scala
As you can see, both of them use the scala-compiler. The compiler classes and utilities live in the 'scala.tools' package.
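For instance, a minimal use of Twitter's Eval looks like this (the expression string is arbitrary):

import com.twitter.util.Eval

val eval = new Eval()            // wraps the embedded scala-compiler
val answer = eval[Int]("21 * 2") // compile the string and evaluate it as an Int
println(answer)                  // prints 42

scala-notebook drives the same compiler through the Repl class linked above, keeping an interpreter session alive between cells instead of compiling one expression at a time.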

Related

Can Scala REPL on SBT call people's own scripts? (Like OptiML?)

I know sbt console will open an interactive Scala REPL and load in all the library dependencies, so people can test Scala code right there. However, I wonder if there's any way to set this up so that people can interact with my program directly, instead of interacting with the libraries.
For example, if I write a Vector class, how can someone call it from sbt console or any other Scala REPL interface?
Think of it as writing a Scala library while providing a simple REPL interface for people to interact with it, like R, instead of asking them to add the library as a dependency.
The effect is similar as described here: http://stanford-ppl.github.io/Delite/optiml/getting_started.html
Maybe this will help: you can use the initialCommands key in sbt to do this.
So in build.sbt if you put
initialCommands in console := """import my.project._
val myObj = MyObject("Hello", "World")
"""
after you type 'console', you can start using myObj or the classes in my.project
http://www.scala-sbt.org/0.13.5/docs/Howto/scala.html#initial
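For instance, assuming MyObject is a case class defined in my.project, a session could look like this:

$ sbt console
scala> myObj
res0: my.project.MyObject = MyObject(Hello,World)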
Yes you can, but you cannot use modified code without reloading the REPL. Just run:
sbt "~ ; console"
And then import your classes with import your.package._ and use them from there.
If you make any changes to your library code, just hit CTRL+D or :quit and it will detect file changes, compile them and enter the REPL again. You can then use the history (navigating with the arrows up/down) to execute anything from the previous session again.

Why does IntelliJ IDEA throw a compilation error?

Compiling Spark gives this compile error:
To fix it, I modified the Utils.classIsLoadable method to just return true:
def classIsLoadable(clazz: String): Boolean = {
  // Try { Class.forName(clazz, false, getContextOrSparkClassLoader) }.isSuccess
  true
}
I realise this is not a good fix, but so far Spark seems to be running correctly from source. Has this compile error been seen before, and is there a fix? Will returning true suffice for now? I'm not sure what impact modifying this return value may have.
I suggest compiling Spark from the command-line using Maven or SBT instead of trying to use your IDE's compiler. Many of the core Spark developers use IntelliJ for editing Spark's source code but still use the command-line compilers, largely because it's been difficult to get the project to build correctly inside IDEs. Even if you're using an external compiler, you should still be able to benefit from IntelliJ's syntax highlighting, type checking, etc.
Here's a relevant discussion from the Spark developer mailing list: http://apache-spark-developers-list.1001551.n3.nabble.com/IntelliJ-IDEA-cannot-compile-TreeNode-scala-td7090.html
Note that Spark users should be able to use IntelliJ to compile applications that depend on Spark; this issue only affects developers who want to build Spark itself.
If you're interested in fixing the build to work with IntelliJ, I recommend opening a ticket on the Spark issue tracker.
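For reference, a command-line build from the Spark source root looks something like this (the exact commands depend on your Spark version; see Spark's own building documentation):

mvn -DskipTests clean package

or, with the bundled sbt launcher:

sbt/sbt compile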

Working with Linux shared objects in Scala

How can I access a *.so object and its methods in Scala? Here is a Python example: https://github.com/soulseekah/py-libgfshare/blob/master/gfshare.py where the ctypes library is used to interact with libgfshare.so. What tools do I need to achieve the same in Scala?
If you would like to interact with a native library that doesn't support JNI (Java Native Interface), that is, one not designed specifically for interacting with the Java VM, try JNA (Java Native Access). There's also the Scala Native Access project on Google Code, which seems to provide a more "Scala-friendly" API, but it appears inactive (the last commit was in 2010).
The previous answer is quite correct that JNI is the way to go, but getting it all to work requires a little perseverance. An example of a multi-platform Scala interface to a real-world native library can be found here. In summary, the steps you need to take are detailed below.
First, define the Scala interface that you want to use to access your native library with. An example of a native interface declaration in Scala is:
package foo.bar

object NativeAPI {
  @native def test(data: Array[Byte]): Long
}
Compile this file using scalac and then use javah to output a suitable C/C++ header file for the JNI native stub. The javah header generator (part of the Java SDK) needs to be invoked as:
javah -classpath <path-to-scala-libs> foo.bar.NativeAPI$
Note that the $ is added to Scala objects by the Scala compiler. The functions generated in this header will be called when you call the API from the JVM. For this example, the javah generated C header's declaration for this function would look like this:
JNIEXPORT jlong JNICALL Java_foo_bar_NativeAPI_00024_test(JNIEnv *, jobject, jbyteArray);
At this point, you need to create a C file that maps this function from your JVM API to the native library you intend to use. The resulting C file needs to be compiled to a shared library (.so on Linux) which references the native library you want to call. The C interface into the JVM is described here.
For this example, let's call this library libjni-native.so and assume it references a third-party library called libfoo.so.0. If both these libraries are available in the dynamic library search path of your OS, then you need to instruct the JVM to load the library and call the function as follows:
System.loadLibrary("jni-native") // loadLibrary takes the bare name, without the "lib" prefix or ".so" suffix
val data = new Array[Byte](100)
// Populate 'data'
NativeAPI.test(data)
On Linux and OS X machines, the dynamic linker will know that libfoo.so.0 (.dylib on OS X) is a dependency of libjni-native.so and will load it automatically. You should now be able to call foo.bar.NativeAPI.test() and have the native function executed.
In the real world, you will probably need to bundle the .so libraries into a JAR file for distribution. To do this, place the shared libraries in a suitable directory under your project's resources. At run time, they need to be copied from the JAR file to a temporary directory and then loaded using System.load("/tmppath/libjni-native.so"). LibLoader from the example shows one way this can be achieved.
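A minimal sketch of that extract-and-load approach (the object name and resource path here are illustrative, not the actual LibLoader from the example):

import java.nio.file.{Files, StandardCopyOption}

object NativeLoader {
  def loadFromJar(resource: String): Unit = {
    val in = getClass.getResourceAsStream(resource) // e.g. "/native/libjni-native.so"
    val tmp = Files.createTempFile("jni-native", ".so")
    Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING)
    in.close()
    System.load(tmp.toAbsolutePath.toString) // System.load needs an absolute path
  }
}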

How can I perform dynamic reconfiguration in Scala (like Dyre or XMonad)?

A fairly common method of configuration for Haskell applications is to ship the program as a library whose main function takes a bunch of optional configuration parameters. Upon being run, the executable looks for a dotfile containing a main function built on this default one, which it then compiles and runs instead. This sort of configuration scheme allows the user to add arbitrarily complex functionality without recompiling the entire program. Examples of this are the Dyre library and the XMonad window manager. How can this be done cleanly in Scala? It appears that SBT does something similar internally.
Using SBT externally would require having the sources of the whole program somewhere, and lacks the cleanliness of just having a single dotfile. Typesafe Config, Configrity, Bee Config, and fig all seem to be meant only for normal string-based configuration.
https://github.com/typesafehub/config is a great config library. It supports files in three formats: Java properties, JSON, and a human-friendly JSON superset.
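A minimal example of reading values with it (the app.name key is just for illustration):

import com.typesafe.config.ConfigFactory

val conf = ConfigFactory.load()       // loads application.conf from the classpath
val name = conf.getString("app.name") // throws ConfigException.Missing if the key is absent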

Why is it faster to call external scala compiler than use the runtime interpreter library?

In the code below it takes almost 2s to warm up the interpreter.
import java.io.{FileOutputStream, PrintStream, PrintWriter}

val out = new PrintStream(new FileOutputStream("/dev/null"))
val flusher = new PrintWriter(out)
val interpret = {
  val settings = new scala.tools.nsc.GenericRunnerSettings(println _)
  settings.usejavacp.value = true
  new scala.tools.nsc.interpreter.IMain(settings, flusher)
}
interpret.interpret(" ") // <-- warming up
interpret.interpret(" Hello World ")
On the other hand, running the Scala compiler from the command line, as in a shell session:
scala HelloWorld.scala
takes less than 0.5s to print Hello World.
I am trying to parse and execute some Java, Scala, or similar code given as a string at runtime (it is a script interpreter, i.e. it will be run only once during my app's execution).
Scala code would obviously be better, but only if it can be as fast as the Java option.
Is there any faster alternative than nsc.interpreter and external compiler to execute code from a string at runtime?
The best I could find was Janino; it is faster than the Scala compiler and does not require the JDK (a very interesting feature).
As a last resort, how fast are Java scripting engines compared to reflected or bytecode-compiled Java code? I found that, at least, they can be compiled: Compiling oft-used scripts.
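For reference, that compiled-script approach looks roughly like this, assuming a Scala script engine is registered on your classpath (Scala 2.11+ ships one under the name "scala"):

import javax.script.{Compilable, ScriptEngineManager}

val engine = new ScriptEngineManager().getEngineByName("scala") // null if no engine is registered
val compiled = engine.asInstanceOf[Compilable].compile("1 + 1") // compile once...
println(compiled.eval())                                        // ...then evaluate cheaply as often as needed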
Chosen solution:
runtimecompilescala.
There are many things left unstated (like memory settings), but you're comparing apples and oranges.
The command-line script runner is not a REPL session; instead, it wraps your code in a simple object with a main method, compiles and runs that.
By contrast, each interpreted line (or compilable thing) in the REPL is wrapped in an object (with the session history imported so you can refer to past results).
Even modulo REPL start-up, this has performance consequences; see this issue.
The simple wrap-it logic for the script runner is built into the parser. Here is how the script runner runs the compilation. Or, it looks like this is how -e is handled.
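Conceptually, the script runner turns HelloWorld.scala into something like the following before compiling it (the real template lives in the parser and differs in detail):

object Main {
  def main(args: Array[String]): Unit = {
    println("Hello World") // the script body is pasted here
  }
}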
Edit: your comment to your question implies that you really want fsc compile server behavior. Fire up fsc and use the compile client.
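For example, fsc keeps a resident compile server warm, so repeated compiles skip the JVM and compiler start-up cost:

fsc HelloWorld.scala
scala HelloWorld

The first invocation starts the compile server; subsequent fsc calls reuse it.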