Is it possible to use Scala's import without specifying a main function in an object, and without using the package keyword in the source file with the code you wish to import?
Some explanation: In Python, I can define some functions in some file "Lib.py", write
from Lib import *
in some other file "Run.py" in the same directory, use the functions from Lib in Run, and then run Run with the command python Run.py. This workflow is ideal for small scripts that I might write in an hour.
In Scala, it appears that if I want to include functions from another file, I need to start wrapping things in superfluous objects. I would rather not do this.
Writing Python in Scala is unlikely to yield satisfactory results. Objects are not "superfluous" -- it's your program that is not written in an object-oriented way.
First, methods must be inside objects. You can place them inside a package object, and they'll then be visible to anything else that is inside the package of the same name.
Second, if one considers only objects and classes, then all package-less objects and classes whose class files are present on the classpath, or whose Scala files are compiled together, will be visible to each other.
This is as minimal as I could get it:
[$]> cat foo.scala
object Foo {
    def foo(): Boolean = {
        return true
    }
}
// vim: set ts=4 sw=4 et:
[$]> cat bar.scala
object Bar extends App {
    import Foo._
    println(foo)
}
// vim: set ts=4 sw=4 et:
[$]> fsc foo.scala bar.scala
[$]> export CLASSPATH=.:$CLASSPATH # Or else it can't find Bar.
[$]> scala Bar
true
When you just write simple scripts, use Scala's REPL. There, you can define functions and call them without having any enclosing object or package, and without a main method.
Objects/classes don't have to be in packages, though it's highly recommended. That said, you can also treat singleton objects like packages, i.e., as namespaces for standalone functions, and import their contents as if they were packages.
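For instance, a rough sketch of that idea (the names MathUtils and Main are made up):

object MathUtils {
  def square(x: Int): Int = x * x
}

object Main extends App {
  import MathUtils._   // the object's members are imported as if it were a package
  println(square(3))   // prints 9
}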
If you define your application as an object that extends App, then you don't have to define a main method. Just write your code in the body of the object, and the App trait (which extends the special DelayedInit trait) will provide a main method that will execute your code.
If you just want to write a script, you can forgo the object altogether and just write code without any container, then pass your source file to the interpreter (REPL) in non-interactive mode.
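As a rough sketch, a script file (the name greet.scala is made up) could contain nothing but definitions and statements:

// greet.scala -- no enclosing object and no main method
def greet(name: String): String = s"Hello, $name"

println(greet("world"))

and be run with scala greet.scala.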
Related
I have a package object defined in both the main and the test code tree, as shown below. When I execute the program with sbt run, the one in the main code tree takes effect, whereas when I run the test cases (sbt test), the package object defined in the test code tree takes effect. For example:
src/main/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Hello World"
}
src/test/scala/com/example/package.scala
package com.example

package object core {
  val foo = "Goodbye World"
}
On sbt run the value of com.example.core.foo is Hello World; on sbt test it is Goodbye World.
Is this just a quirk of sbt, or is it well-defined Scala/sbt behaviour? I currently use this behaviour for dependency injection by defining my module bindings for production and test in their corresponding package objects. Is this an advisable approach?
This is well-defined behaviour rather than a quirk: the package object that takes effect is the one found first on the classpath for the scope you are running in. Since your test and main code reside in different source trees, and the test classpath puts the test classes ahead of the main ones, each scope resolves a different val foo.
The way you are using this mechanism is very similar to using implicits, and the general advice with implicits and implicit resolution is not to abuse them. I think that in this case it's not the best way of providing dependencies.
You always have to consider what scope you are in: if you are using a class defined in main from the test scope, how do you get foo from main, and how do you get foo from test, whenever you need one or the other? You already have to think about how this will work and consider various scenarios. What if your test class is in a different package? Which foo would you get, and does that depend on where the tested class is declared?
Make the dependency injection more explicit, so that you don't spend mental cycles on it or risk confusing someone.
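For illustration, a minimal sketch of passing the dependency explicitly rather than relying on whichever package object wins on the classpath (the names Config, ProdConfig, TestConfig and Service are made up):

trait Config { def foo: String }

object ProdConfig extends Config { val foo = "Hello World" }
object TestConfig extends Config { val foo = "Goodbye World" }

// The dependency is now visible in the constructor; production code passes
// ProdConfig, the test suite passes TestConfig.
class Service(config: Config) {
  def greet: String = config.foo
}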
The following example fails because the definition for Stuff can't be found:
package com.example
import javax.script.ScriptEngineManager
object Driver5 extends App {
  case class Stuff(s: String, d: Double)

  val e = new ScriptEngineManager().getEngineByName("scala")
  println(e.eval("""import Driver5.Stuff; Stuff("Hello", 3.14)"""))
}
I'm unable to find any import statement that allows me to use my own classes inside of the eval statement. Am I doing something wrong? How does one import classes to be used during eval?
EDIT: Clarified example code to elicit more direct answers.
The scripting engine does not know the context. It surely can't access all the local variables and imports in the script, since they are not available in the class files. (Well, variable names may optionally be available as debug information, but it is virtually impossible to use them for this purpose.)
I am not sure if there is a special API for that. Imports are different across various languages, so bringing an API that should fit them all can be difficult.
You should be able to add the imports to the eval-ed String. I am not sure if there is a better way to do this.
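For example, a sketch that fully qualifies the import inside the evaluated string (whether the engine can actually resolve the class depends on its classpath setup):

println(e.eval("""import com.example.Driver5.Stuff; Stuff("Hello", 3.14)"""))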
I know programmers are supposed to wrap their code in an application object:
object Hello extends App {
println("Hello, World")
}
It is required in Eclipse if I ever want to get any output. However, when I tried to write some code (very casually) in Emacs, I wrote something like this:
class Pair[+T](val first: T, val second: T)
trait Friend[-T] {
  def befriend(someone: T)
}
def makeFriendWith(s: Student, f: Friend[Student]) {
  f.befriend(s)
}
It seems like there is no universal object or class that wraps the function makeFriendWith. Is Scala like JavaScript, where everything is attached to a global object? If not, what is this function attached to?
Also, why does this work in the console (I compiled it with the scala command and it worked) but not in Eclipse? What's the use of the application object?
Scala doesn't have top-level defs, but your script can be run by either the REPL or the scala script runner.
The precise behavior of your script depends on which way you run it.
The REPL can run scripts line-by-line or whole hog. (Compare :paste and :paste -raw versus :load or -i init.script and the future option -I init.script.)
There is an issue about sensitive scripting. The script runner should realize if you're trying to run an App.
There is another effort to make scripting a compiler phase that is easily customized. Scroll to Scripter.scala for code comments about its current heuristics.
In short, your defs must be wrapped in a top-level entity, but exactly how that happens is context-dependent.
There was a recent effort to make an alternative baked-in wrapping scheme available for the REPL.
None of this is mandated by the language spec, any more than special rules pertaining to sbt build files are defined by the language.
You can define methods like this only in the console, which (behind the scenes) automatically wraps them in an anonymous class for you.
Outside of the console, there's no such luxury.
As a JVM language, Scala cannot truly create any top-level entities other than classes and interfaces.
It does, however, have the notion of a "package object", which creates the illusion of value entities (val, var and def) not enclosed in a class or trait.
See http://www.scala-lang.org/docu/files/packageobjects/packageobjects.html for information on package objects.
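As a rough sketch (the package name mylib is made up), a package object is defined like this:

// usually placed in mylib/package.scala
package object mylib {
  def greet(name: String): String = s"Hello, $name"
  val answer = 42
}

Elsewhere, after import mylib._, greet and answer can be used as if they were top-level definitions.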
You can run code like this directly in Eclipse: use Scala worksheet. IntelliJ IDEA Scala plugin supports it as well.
What is the difference between a Scala script and a Scala application? Please provide an example.
The book I am reading says that a script must always end in a result expression whereas the application ends in a definition. Unfortunately no clear example is shown.
Please help clarify this for me
I think what the author means is that a regular Scala file needs to define a class or an object in order to be useful; you can't use top-level expressions (because the entry points of a compiled file are pre-defined). For example:
println("foo")
object Bar {
  // Some code
}
The println statement is invalid at the top level of a .scala file, because the only logical interpretation would be to run it at compile time, which doesn't really make sense.
Scala scripts, in contrast, can contain expressions at the top level, because those are executed when the script is run, which does make sense. If a Scala script file only contained definitions, on the other hand, it would be useless as well, because the script wouldn't know what to do with them. If the definitions are used in some way, however, that is fine again, e.g.:
object Foo {
def bar = "test"
}
println(Foo.bar)
The latter is valid as a Scala script, because the last statement is an expression using the previous definition, not a definition itself.
Comparison
Features of scripts:
Like applications, scripts get compiled before running. Actually, the compiler translates scripts to applications before compiling, as shown below.
No need to run the compiler yourself - scala does it for you when you run your script.
The feel is very similar to scripting languages like bash, Python, or Ruby - you directly see the results of your edits and get a very quick debug cycle.
You don't need to provide a main method, as the compiler will add one for you.
Scala scripts tend to be useful for smaller tasks that can be implemented in a single file.
Scala applications, on the other hand, are much better suited when your project starts to grow more complex. They allow you to split tasks into different files and namespaces, which is important for maintaining clarity.
Example
If you write the following script:
#!/usr/bin/env scala
println("foo")
the Scala 2.11.1 compiler will pretend (source on GitHub) that you had written:
object Main {
  def main(args: Array[String]): Unit =
    new AnyRef {
      println("foo")
    }
}
Well, I always thought this is a Scala script:
$ cat script
#!/usr/bin/scala
!#
println("Hello, World!")
and run it simply with:
$ ./script
An application on the other hand has to be compiled to .class and executed explicitly using java runtime.
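For comparison, a rough sketch of the application form of the same program (the file name Hello.scala is made up):

object Hello extends App {
  println("Hello, World!")
}

It would be compiled with scalac Hello.scala and then run explicitly with scala Hello.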
Because of an issue with package name aux under Windows, I am moving a helper class within the package hierarchy of my library from
de.sciss.scalainterpreter.aux
to
de.sciss.scalainterpreter
The class is private to the library, i.e. private[scalainterpreter] object Helper.
Now using the Typesafe Migration-Manager, obviously it reports that the change is not compatible:
Found 2 binary incompatibilities
===============================
* class de.sciss.scalainterpreter.aux.Helper does not have a correspondent
in new version
* object de.sciss.scalainterpreter.aux.Helper does not have a correspondent
in new version
But I suspect that if client code does not call into either object, the interfaces are still compatible, and thus I can use a minor version increase to indicate the change, and allow those two versions to be used interchangeably.
Correct?
You are not specifying if Helper was already package private before the move. So I'll treat both cases:
If it was already package private:
I suspect that the Migration Manager reports an incompatibility only because it must stay conservative: packages are open in Scala (as in Java), which means that client code might very well have defined its own class inside the package de.sciss.scalainterpreter (or its sub-packages), and such a class would have access to the package-private Helper.
So by moving Helper, you would indeed break that class.
However, let's be pragmatic: de.sciss.scalainterpreter.aux is your package (and so are its sub-packages), and nobody else should define their own classes there.
With this additional prerequisite, moving Helper is indeed a binary-compatible change with respect to client Scala code.
As for client Java code, it's a bit different: even if Helper is package private at the Scala level, its visibility is still public as far as the JVM is concerned, so the Java compiler will happily let client code access Helper (thus client Java code might very well already be accessing Helper, despite it being declared package private).
If it was not package private before the move:
Well, tough luck. Client code could very well already access Helper, and the move will certainly break it. As a side note, you can employ a little trick to make the change source-compatible, though alas not binary-compatible. Just add the following file:
package de.sciss
package object scalainterpreter {
  object aux {
    val Helper = _root_.de.sciss.scalainterpreter.Helper
  }
}
With the above, you can still access Helper as de.sciss.scalainterpreter.aux.Helper, and it still compiles under Windows (unlike defining a package named aux, which does not compile there because aux is a reserved file name).
But again, this is not binary compatible, only source compatible.
It's easy to see how inlining can break client code, since the inlined code essentially bleeds into the client interface. This example really asks for a linkage error; we can experiment and do things like javap | grep Helper, but at some level you have to let scalac do its job.
package lib {
  object Lib {
    //import util.Helper
    @inline def result = Helper.help
  }
  //package util {
  private[lib] object Helper {
    @inline def help = "Does this help?"
  }
  //}
}
A sample innocent-bystander client:
package client
object Test {
  import lib.Lib
  def main(args: Array[String]) {
    println(Lib.result)
  }
}
Changing package of package-private class:
$ scala -cp "classes;target" client.Test
Does this help?
apm@halyard ~/tmp/taking-it-private
$ vi lib.scala
apm@halyard ~/tmp/taking-it-private
$ rm -rf classes/*
apm@halyard ~/tmp/taking-it-private
$ smalac -d classes -optimise lib.scala
apm@halyard ~/tmp/taking-it-private
$ smala -cp "classes;target" client.Test
java.lang.ClassNotFoundException: lib.util.Helper$
Javap shows why. [Namely, the call is inlined but it still wants to init the module.]
I haven't followed the discussions, but for example there are links at:
https://github.com/scala/scala/pull/1133
and other discussions on the mailing list about which expectations about binary compatibility are valid.
https://groups.google.com/forum/?fromgroups=#!topic/scala-internals/sJ-xnWL_8PE
Simply put, there is no reason why it wouldn't be. Linkage happens around signatures; since the object in question is scoped to the library's own package, clients cannot (or rather, should not) be using it, and binary compatibility is therefore not an issue.