I have this scala code
object S extends App {
  println("This is trait program")
}
When I execute scala S.scala it executes fine.
Now I want to know how it can execute the code without compiling it and creating a .class file.
Scala is a compiled language: it needs to compile the code, and the compiled .class output is what gets executed.
Maybe you are thinking of using the REPL, where you can code interactively: https://www.scala-lang.org/documentation/getting-started.html#run-it-interactively
But, under the hood, the REPL is compiling your code and executing the compiled .class.
The scala command that you are launching starts the Scala REPL, and if you provide a file as an argument, it will execute the content of the file as if it had been bulk-pasted into the REPL.
It's true that Scala is a compiled language, but that does not mean a .class file is necessary. All the Scala compiler needs to do is generate the relevant JVM bytecode and call the JVM with that bytecode. It does not have to explicitly create a .class file in the directory from which you called it; it can do everything in memory and temporary storage and just call the JVM with the generated bytecode.
If you are looking to explicitly generate class files with Scala that you can later execute by calling java manually, you should use the Scala compiler CLI (command: scalac).
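For example, a minimal session might look like this (a sketch, assuming the object above is saved as S.scala; scalac writes S.class and the companion S$.class to the current directory):
$ scalac S.scala
$ scala -cp . S
This is trait program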
Please note that the Scala compiler has interfaces to check and potentially compile Scala code on the fly, which is very useful for IDEs (check out IntelliJ and Ensime).
Just call main() on the object (which inherits this method from App):
S.main(Array())
main() expects an Array[String], so you can just provide an empty array.
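For example, from the REPL (a sketch, assuming the compiled S is on the classpath):
scala> S.main(Array())
This is trait program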
Scala is a compiled language in terms of the source-code-to-Java-bytecode transition; however, some tricks can be taken to make it resemble an interpreted language. A naive implementation is that, when you run scala Myscript.scala, it follows these steps:
scalac Myscript.scala. This generates S.class (the entry class that contains the main method) and potentially other class files.
scala -cp . S. This runs/interprets from the main entry of the class file. -cp . specifies the classpath, and S is the entry class name without the .class file extension. Note that run/interpret here means interpreting (Java) bytecode (rather than Scala/Java source code), which is done by the JVM runtime.
Remove all the temporarily generated class files. This step is optional as long as the users are not aware of the temporary files (i.e., they are transparent to users).
That is to say, scala acts as a driver that may handle 0) initialization, 1) compilation (scalac), 2) execution (scala), and 3) cleanup.
The actual procedure may be different (e.g., due to performance concerns some files are only kept in memory or cached, not generated, or not deleted, by using lower-level APIs of the scala driver, etc.), but the general idea should be similar.
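A manual equivalent of these steps might look like the following sketch (assuming S is the only entry class generated):
$ scalac Myscript.scala      # 1) compile to JVM bytecode in the current directory
$ scala -cp . S              # 2) run the compiled entry class on the JVM
$ rm S.class 'S$.class'      # 3) cleanup (optional)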
On Linux machines, you might find some evidence in the /tmp folder. For me:
$ tree /tmp
/tmp
├── hsperfdata_hongxu
│ └── 64143
└── scala-develhongxu
├── output-redirects
│ ├── scala-compile-server-err.log
│ └── scala-compile-server-out.log
└── scalac-compile-server-port
└── 34751
4 directories, 4 files
It is also noteworthy that this way of running Scala is not full-fledged. For example, package declarations are not supported.
// Myscript.scala
package hw

object S extends App {
  println("Hello, world!")
}
And it emits an error:
$ scala Myscript.scala
.../Myscript.scala:1: error: illegal start of definition
package hw
^
one error found
Others have also mentioned the REPL (read–eval–print loop), which is quite similar. Essentially, almost every language can have an (interactive) interpreter. Here is an excerpt from Wikipedia:
REPLs can be created to support any language. REPL support for compiled languages is usually achieved by implementing an interpreter on top of a virtual machine which provides an interface to the compiler. Examples of REPLs for compiled languages include CINT (and its successor Cling), Ch, and BeanShell
However, interpreted languages (Python, Ruby, etc.) typically offer a better interactive experience, due to their dynamic nature and their runtime VMs/interpreters.
Additionally, the gap between compiled and interpreted languages is not that big. You can see that Scala actually has some interpreter-like behavior (at least on the surface), since it makes you feel that you can run it like a scripting language.
Related
I have a file called Parser.fs with module Parser at the top of the file. It compiles. I have another module in the same directory, Main, that looks like this:
module Main
open Parser
let _ = //do stuff
I tried to compile Main.fs with $ fsharpc Main.fs (I don't know if there's another way to compile). The first error is module or namespace 'Parser' is not defined; all the other errors stem from the fact that the functions in Parser are not in scope.
I don't know if it matters, but I did try compiling Main after Parser, and it still didn't work. What am I doing wrong?
F#, unlike Haskell, does not have separate compilation. Well, it does at the assembly level, but not at the module level. If you want both modules to be in the same assembly, you need to compile them together:
fsharpc Parser.fs Main.fs
Another difference from Haskell: order of compilation matters. If you reverse the files, it won't compile.
Alternatively, you could compile Parser into its own assembly:
fsharpc Parser.fs -o:Parser.dll
And then reference that assembly when compiling Main:
fsharpc Main.fs -r:Parser.dll
That said, I would recommend using an .fsproj project file (the analog of a .cabal file). Less headache, more control.
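In an .fsproj file, the compile order is simply the order of the Compile items, so the Parser-before-Main constraint is expressed declaratively; a hypothetical fragment:
<ItemGroup>
  <Compile Include="Parser.fs" />
  <Compile Include="Main.fs" />
</ItemGroup>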
I'm currently using sbt-native-packager to generate a start script for my scala application. I'm using packageArchetype.java_application. I create the script in sbt:
sbt clean myproject/stage
and then "install" the application by copying the created lib and bin directories to the installation directory. I'm not distributing it to anyone, so I'm not creating an executable jar or tarball or anything like that. I'm just compiling my classes, and putting my jar and all the library dependency jars in one place so the start script can execute.
Now I want to add a second main class to my application, so I want a second start script to appear in target/universal/stage/bin when I run sbt stage. I expect it will be the same script but with a different name and app_mainclass set to the different class. How would I do this?
The sbt-native-packager generated script allows you to pass in a -main argument to specify the main class you want to run. Here's what I do for a project named foo:
Create a run.sh script with whatever common options you want that calls the sbt-native-packager generated script:
#!/bin/bash
./target/universal/stage/bin/foo -main "$@"
Then I create a separate script for each main class I want to run. For example first.sh:
#!/bin/bash
export JAVA_OPTS="-Xms512m -Xmx512m"
./run.sh com.example.FirstApp -- "$@"
and second.sh:
#!/bin/bash
export JAVA_OPTS="-Xms2048m -Xmx2048m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
./run.sh com.example.SecondApp -- "$@"
Having multiple main classes is... supported now (Q4 2016, native-packager 1.2.0).
See "SBT Native Packager 1.2.0" by Muki Seiler
Single Project — Multiple Apps
A major pain point for beginners is the start script creation. The bash and bat start scripts are only generated when there is either
Exactly one main class, or
An explicitly set main class with
mainClass in Compile := Some("com.example.MainClass")
For 1.2.x we will extend the implementation and support multiple main classes by default.
Native packager will generate a start script for each main class found on the classpath.
SBT provides them via the discoveredMainClasses in Compile task.
If there is only one main class, SBT will assign it to the mainClass in Compile setting. This leads to three cases:
Exactly one main class.
In this case native-packager will behave like previous versions and just generate a single start script, using the executableScriptName setting for the script name.
Multiple main classes and mainClass in Compile := None.
This is the default behaviour defined by SBT. In this case native-packager will generate the same start script for each main class.
Multiple main classes and mainClass in Compile := Some(…).
The user has set a specific main class, which will lead to a main start script being generated using the executableScriptName setting. For all other main classes native-packager generates forwarder scripts.
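For the third case, a hypothetical build.sbt line (using the setting quoted above, with a made-up class name) might look like:
// build.sbt (sketch): pick the class for the main start script explicitly;
// native-packager then generates forwarder scripts for the other main classes
mainClass in Compile := Some("com.example.FirstApp")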
Having multiple main classes is not supported right now. As a workaround, you could use a single main class and dispatch on the command-line args.
Starting your app:
myApp prog1
In your main class:
def main(args: Array[String]): Unit = {
  if (args(0) == "prog1")
    Programm1.start()
  else
    Programm2.start()
}
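A slightly more defensive sketch of the same idea (keeping the Programm1/Programm2 placeholder objects from above) avoids an ArrayIndexOutOfBoundsException when no argument is passed:
def main(args: Array[String]): Unit =
  args.headOption match {
    case Some("prog1") => Programm1.start() // placeholder object from above
    case _             => Programm2.start() // default program
  }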
I've inherited a Fortran 77 code which implements several subroutines which are run through a program block which requires a significant amount of user-input via an interactive command prompt every time the program is run. Since I'd like to automate running the code, I moved all the subroutines into a module and wrote a wrapper code through F2PY. Everything works fine after a 2-step compilation:
gfortran -c my_module.f90 -o my_module.o -ffixed-form
f2py -c my_module.o -m my_wrapper my_wrapper.f90
This ultimately creates four files: my_module.o, my_wrapper.o, my_module.mod, and my_wrapper.so. The my_wrapper.so is the module which I import into Python to access the legacy Fortran code.
My goal is to include this code in a larger package of scientific codes, which already has a setup.py using distutils to build a Cython module. Totally ignoring the Cython code for the moment, how am I supposed to translate the 2-step build into an extension in the setup.py? The closest I've been able to figure out looks like:
from numpy.distutils.core import setup, Extension

wrapper = Extension('my_wrapper', ['my_wrapper.f90'])

setup(
    libraries = [('my_module', dict(sources=['my_module.f90'],
                                    extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules = [wrapper]
)
This doesn't work, though. My compiler throws many warnings on my_module.f90, but it still compiles (it throws no warnings if I use the compiler invocation above). When it tries to compile the wrapper, though, it fails to find my_module.mod, even though that file is successfully created.
Any thoughts? I have a feeling I'm missing something trivial, but the documentation just doesn't seem fleshed out enough to indicate what it might be.
It might be a little late, but your problem is that you are not linking in my_module when building my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90'], libraries=['my_module'])
setup(
    libraries = [('my_module', dict(sources=['my_module.f90'],
                                    extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules = [wrapper]
)
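With a setup.py along these lines, the extension can then be built in place with the usual distutils invocation:
$ python setup.py build_ext --inplace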
If your only use of my_module is for my_wrapper, you could simply add it to the sources of my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90', 'my_module.f90'],
                    extra_f90_compile_args=["-ffixed-form"])

setup(
    ext_modules = [wrapper]
)
Note that this will also export everything in my_module to Python, which you probably don't want.
I am dealing with such a two-layer library structure outside of Python, using cmake as the top level build system. I have it setup so that make python calls distutils to build the Python wrappers. The setup.pys can safely assume that all external libraries are already built and installed. This strategy is advantageous if one wants to have general-purpose libraries that are installed system-wide, and then wrapped for different applications such as Python, Matlab, Octave, IDL,..., which all have different ways to build extensions.
I'm trying to compile a program with the two simplest possible classes:
class BaseClass
placed in BaseClass.scala and
class Test extends BaseClass
placed in Test.scala. Issuing the command scalac Test.scala fails, because BaseClass is not found.
I don't want to compile the classes one by one or use scalac *.scala.
The same operation works in Java: javac Test.java. Where am I wrong?
Let's see first what Java does:
dcs#dcs-132-CK-NF79:~/tmp$ ls *.java
BaseClass.java Test.java
dcs#dcs-132-CK-NF79:~/tmp$ ls *.class
ls: cannot access *.class: No such file or directory
dcs#dcs-132-CK-NF79:~/tmp$ javac -cp . Test.java
dcs#dcs-132-CK-NF79:~/tmp$ ls *.class
BaseClass.class Test.class
So, as you can see, Java actually compiles BaseClass automatically when you do that. Which begs the question: how can it do that? How does it know which file to compile?
Well, when you write extends BaseClass in Java, the compiler actually knows a few things. It knows the directory where these files are found, from the package name. It also knows that BaseClass is either in the current file or in a file called BaseClass.java. If you doubt either of these facts, try moving the file to another directory or renaming it, and see if Java can still compile it.
So, why can't Scala do the same? Because it assumes neither thing! Scala's files can be in any directory, irrespective of the package they declare. In fact, a single Scala file can even declare more than one package, which would make the directory rule impossible. Also, a Scala class can be in any file whatsoever, irrespective of its name.
So, while Java dictates to you what directory the file should be in and what the file is called, and then reaps the benefit by letting you omit filenames from the javac command line, Scala lets you organize your code in whatever way seems best to you, but requires you to tell it where that code is.
Take your pick.
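In practice, telling scalac where the code is can be as simple as listing the related sources together on the command line:
$ scalac BaseClass.scala Test.scala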
You need to compile BaseClass.scala first:
$ scalac Test.scala
Test.scala:1: error: not found: type BaseClass
class Test extends BaseClass
^
one error found
$ scalac BaseClass.scala
$ scalac Test.scala
$
EDIT: So, the question is now why you have to compile the files one by one. Well, because the Scala compiler just doesn't do this kind of dependency handling. Its authors probably expect you to use a build tool like sbt or Maven, so that they don't have to bother.
fsc recompiles my .scala files every time even when there is no need: I can compile twice without editing anything between the attempts, and it still recompiles them!
For example, I have 2 files
Hello.scala
class Hello {
  print("hello")
}
And Tokens.scala:
abstract class Token(val str: String, val start: Int, val end: Int) {
  override def toString = getClass.getSimpleName + "(" + "[" + start + "-" + end + "]" + str + ")"
}

class InputToken(str: String, start: Int, end: Int)
  extends Token(str, start, end)

class ParsedToken(str: String, start: Int, end: Int, val invisible: Boolean)
  extends Token(str, start, end)
When I ask ant to compile project from scratch I see following output:
ant compile
init:
[mkdir] Created dir: D:\projects\Test\build\classes
[mkdir] Created dir: D:\projects\Test\build\test\classes
compile:
[fsc] Base directory is `D:\projects\Test`
[fsc] Compiling source files: somepackage\Hello.scala, somepackage\Tokens.scala to D:\projects\Test\build\classes
BUILD SUCCESSFUL
Then I don't edit anything and run ant compile again:
ant compile
init:
[mkdir] Created dir: D:\projects\Test\build\classes
[mkdir] Created dir: D:\projects\Test\build\test\classes
compile:
[fsc] Base directory is `D:\projects\Test`
[fsc] Compiling source files: somepackage\Tokens.scala to D:\projects\Test\build\classes
BUILD SUCCESSFUL
As you can see, fsc acts smart in the case of Hello.scala (no recompilation) and dumb in the case of Tokens.scala. I suspect that the problem is somehow related to inheritance, but that is all.
So what is wrong?
Tokens.scala is recompiled because there isn't a class file matching its basename; that is, compiling it doesn't produce a Tokens.class file. When deciding whether a source file should be compiled, fsc looks for a class file with the same basename; if that class file does not exist, or the modification time of the source file is later than that of the class file, the source file is rebuilt. If you can, I suggest you look into the Simple Build Tool (sbt): its continuous compile mode accurately tracks the source-to-class-file mapping and won't recompile Tokens.scala.
For extra laughs, think about what the compiler might do if you have a different source file that has class Tokens in it.
Although Scala allows arbitrary public classes/objects in any source file, there's still quite a bit of tooling that assumes you will roughly follow the Java convention and have at least one class/object in the file with the same name as the source file's basename.
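Given that heuristic, one hypothetical workaround is to make sure each source file defines a class or object whose name matches the file's basename, so that a matching class file exists for the timestamp check; a sketch for Tokens.scala:
// Tokens.scala: marker class matching the file basename, so a Tokens.class
// file exists for the basename check described above
// (hypothetical workaround; a build tool like sbt remains the better fix)
class Tokens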
I don't much like posting stuff written by others, but I think this question merits a more complete answer than what was strictly asked.
So, first of all, fsc recompiles everything by default, period. It is ant, not fsc, that is leaving Hello.scala out, because the file name matches the class name. It is not leaving Tokens.scala out because there is no compiled class called Tokens -- so, in the absence of a Tokens.class, it recompiles Tokens.scala.
That is the wrong thing to do with Scala. Scala differs from Java in one fundamental aspect: because of technical limitations of the JVM, a change in a trait requires recompilation of every class, object or instantiation that uses it.
Now, one can fix the ant task to do something smarter, starting with Scala 2.8. I'm taking this information from blogtrader.net by Caoyuan, of Scala plugin for NetBeans fame. You define the scalac task in the build target like below:
<scalac srcdir="${src.dir}"
destdir="${build.classes.dir}"
classpathref="build.classpath"
force="yes"
addparams="-make:transitive -dependencyfile ${build.dir}/.scala_dependencies"
>
<src path="${basedir}/src1"/>
<!--include name="compile/**/*.scala"/-->
<!--exclude name="forget/**/*.scala"/-->
</scalac>
It tells ant to recompile everything, as ant simply isn't smart enough to figure out what needs recompiling. It also tells Scala to build a file containing compilation dependencies and to use a transitive dependency algorithm to figure out what does need to be recompiled.
You also need to change the init target to include the build directory in the build classpath, as Scala will need that to recompile other classes. It should look like this:
<path id="build.classpath">
<pathelement location="${scala-library.jar}"/>
<pathelement location="${scala-compiler.jar}"/>
<pathelement location="${build.classes.dir}"/>
</path>
For more details, please refer to Caoyuan's blog.