How to import in F#

I have a file called Parser.fs with module Parser at the top of the file. It compiles. I have another module in the same directory, Main, that looks like this:
module Main
open Parser
let _ = //do stuff
I tried to compile Main.fs with $ fsharpc Main.fs (I don't know if there's another way to compile). The first error is module or namespace 'Parser' is not defined; all the other errors follow from the functions in Parser not being in scope.
I don't know if it matters, but I did try compiling Main after Parser, and it still didn't work. What am I doing wrong?

F#, unlike Haskell, does not have separate compilation. Well, it does at the assembly level, but not at the module level. If you want both modules to be in the same assembly, you need to compile them together:
fsharpc Parser.fs Main.fs
Another difference from Haskell: the order of the source files matters. If you reverse them, it won't compile.
Alternatively, you could compile Parser into its own assembly:
fsharpc Parser.fs -o:Parser.dll
And then reference that assembly when compiling Main:
fsharpc Main.fs -r:Parser.dll
That said, I would recommend using an fsproj project file (the analog of a .cabal file). Less headache, more control.
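For reference, a minimal SDK-style fsproj is sketched below (this assumes the modern dotnet tooling rather than the standalone fsharpc; the target framework is only an example). The Compile item order plays the same role as the argument order to fsharpc:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Order matters: Parser.fs must precede Main.fs -->
    <Compile Include="Parser.fs" />
    <Compile Include="Main.fs" />
  </ItemGroup>
</Project>
With that in place, dotnet build compiles both files, in order, into a single assembly.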

Related

SwiftPM: How to set up a Swift module.map referring to two connected C libraries

I'm trying to build a Swift Package Manager system package (a module.modulemap)
making available two system C libraries where one includes the other.
That is, one (say libcurl) is a base module and the other C library is including
that (like so: #include "libcurl.h"). On the regular C side this works, because
the makefiles pass in proper -I flags and all is good (and I could presumably
do the same in SPM, but I'd like to avoid extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
    header "/usr/include/curl.h"
    link "curl"
    export *
}
module CMyLib [system] {
    use CBase
    header "/usr/include/mylib.h"
    link "mylib"
    export *
}
I got importing CBase in a Swift package working fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
Which is kinda understandable because the compiler doesn't know where to look
(though I assumed that use CBase would help).
Is there a way to get this to work w/o having to add -Xcc -I flags to the
build process?
Update 1: To a degree this is covered in Swift SR-145 and SE-0063: SwiftPM System Module Search Paths. The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken-and-egg problem if there is no .pc file. I tried embedding my own .pc file in the package, but the system package directory isn't added to PKG_CONFIG_PATH (and hence won't be considered during the compilation of a dependent module). So the question stands: how to accomplish this in an environment where the libs are installed, but without a .pc file (just the header and the lib)?

How does the Scala interpreter execute a source file directly without creating a class file?

I have this Scala code:
object S extends App {
  println("This is trait program")
}
When I execute scala S.scala, it runs fine.
Now I want to know how it can execute the code without compiling and creating a .class file.
Scala is a compiled language: the code has to be compiled, and the resulting .class files are what actually get executed.
Maybe you are thinking of using the REPL, where you can code interactively: https://www.scala-lang.org/documentation/getting-started.html#run-it-interactively
But, under the hood, the REPL is compiling your code and executing the compiled .class files.
The scala command you are launching starts the Scala REPL, and if you provide a file as an argument, it will execute the contents of the file as if it had been bulk-pasted into the REPL.
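One quick way to convince yourself that compilation really happens: a Scala 2 REPL running on a JDK can disassemble the bytecode it just produced via :javap (a sketch; the class Foo is made up, and the exact output varies by version):
scala> class Foo { def x = 1 }
defined class Foo

scala> :javap -p Foo
...
The disassembly shown is of the .class the REPL compiled for Foo, which wouldn't exist if the source were interpreted directly.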
It's true that Scala is a compiled language, but that does not mean a .class file is necessary. All the Scala compiler needs to do is generate the relevant JVM bytecode and invoke the JVM with it. It does not have to explicitly create a .class file in the directory from which you called it; it can do everything in memory and temporary storage and just hand the generated bytecode to the JVM.
If you are looking to explicitly generate class files with Scala that you can later execute by calling java manually, you should use the Scala compiler CLI (command: scalac).
Please note that the Scala compiler has interfaces to check and potentially compile Scala code on the fly, which is very useful for IDEs (check out IntelliJ and Ensime).
Just call main() on the object (which inherits this method from App):
S.main(Array())
main() expects an Array[String], so you can just provide an empty array.
Scala is a compiled language in terms of the source-to-Java-bytecode transition, but some tricks can be used to make it resemble an interpreted language. A naive implementation is that when you run scala MyScript.scala, the driver follows these steps:
1. scalac MyScript.scala. This generates S.class (the entry class that contains the main method) and potentially other class files.
2. scala -cp . S. This runs the main entry of the class file. -cp . specifies the classpath, and S is the entry class, without the file extension .class. Note that "run/interpret" here means interpreting (Java) bytecode (rather than Scala/Java source code), which is done by the JVM runtime.
3. Remove all the temporarily generated class files. This step is optional as long as the users are not aware of the temporary files (i.e., they are transparent to users).
That is to say, scala acts as a driver that may handle 0) initialization, 1) compilation (scalac), 2) execution (scala), and 3) cleanup.
The actual procedure may differ (e.g., due to performance concerns some files are kept only in memory or a cache, or are never generated or deleted, using lower-level APIs of the scala driver, etc.), but the general idea is similar.
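For instance, assuming MyScript.scala contains the object S from the question, the manual equivalent of what the driver automates would be roughly:
$ scalac MyScript.scala   # step 1: produces S.class and S$.class
$ scala -cp . S           # step 2: runs the compiled entry point
This is trait program
$ rm S.class S$.class     # step 3: the cleanup the driver does for you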
On Linux machines, you might find some evidences in /tmp folder. For me,
$ tree /tmp
/tmp
├── hsperfdata_hongxu
│   └── 64143
└── scala-develhongxu
├── output-redirects
│   ├── scala-compile-server-err.log
│   └── scala-compile-server-out.log
└── scalac-compile-server-port
└── 34751
4 directories, 4 files
It is also noteworthy that this way of running Scala is not full-fledged. For example, package declarations are not supported.
// MyScript.scala
package hw

object S extends App {
  println("Hello, world!")
}
And it emits an error:
$ scala MyScript.scala
.../MyScript.scala:1: error: illegal start of definition
package hw
^
one error found
Others have also mentioned the REPL (read–eval–print loop), which is quite similar. Essentially, almost every language can have an (interactive) interpreter. Here is a quote from Wikipedia:
REPLs can be created to support any language. REPL support for compiled languages is usually achieved by implementing an interpreter on top of a virtual machine which provides an interface to the compiler. Examples of REPLs for compiled languages include CINT (and its successor Cling), Ch, and BeanShell.
However, interpreted languages (Python, Ruby, etc.) lend themselves to this more naturally, thanks to their dynamic nature and runtime VMs/interpreters.
Additionally, the gap between compiled and interpreted languages is not that big, and Scala clearly has an interpreted feel to it (at least on the surface), since it makes you feel you can run it like a scripting language.

Why does the scala-ide not allow multiple package definitions at the top of a file?

In Scala it is common practice to stack package statements to allow shorter imports, but when I load a file using stacked packages into the Scala IDE and attempt an import starting with the same organization, I get a compiler error from what appears to be the presentation compiler. The code compiles fine in sbt outside of the IDE.
An example code snippet is as follows:
package com.coltfred
package util
package time
import com.github.nscala_time.time.Imports._
On the import I get the error object github is not a member of package com.coltfred.util.com.
If I move the import to a single line the error will go away, but we've used this practice frequently in our code base so changing them all to be single line package statements would be a pain.
Why is this happening and is there anything I can do to fix it?
Edit:
I used the eclipse-sbt plugin to generate the eclipse project file for this. The directory structure is what it should be and all of the dependencies are in the classpath.
Edit 2:
It turns out there was a file in the test tree of the util package (which should have been in the same package) that had a duplicate package statement at the top. I didn't check the test tree because it shouldn't affect the compilation of the main tree, but apparently I was wrong.
Not sure why the Scala IDE is not liking this, but you can force the import to start at the top level using _root_:
import _root_.com.github.nscala_time.time.Imports._
See if that avoids irritating the IDE.
This is a common annoyance that annoyed paulp into an attempt to fix it. His idea was that a dir that doesn't contribute class files shouldn't be taken as a package. If you can take util as scala.util, you should do so in preference to foo.util where that util is empty.
The util dir is the usual suspect, because who doesn't have a util dir lying around, and in particular, ./util?
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/time
apm@mara:~/tmp/coltfred$ mkdir -p com/coltfred/util/com
apm@mara:~/tmp/coltfred$ vi com/coltfred/util/time/test.scala
apm@mara:~/tmp/coltfred$ scalac com/coltfred/util/time/test.scala
./com/coltfred/util/time/test.scala:5: error: object github is not a member of package com.coltfred.util.com
import com.github.nscala_time.time._
^
one error found
apm@mara:~/tmp/coltfred$ cat com/coltfred/util/time/test.scala
package com.coltfred
package util
package time
import com.github.nscala_time.time._
class Test
apm@mara:~/tmp/coltfred$
To debug, find out where the offending package is getting loaded from.
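A low-tech way to do that, using the layout from the transcript above, is simply to search the classpath root for the offending package path:
$ find . -path '*com/coltfred/util/com*'
./com/coltfred/util/com
Whatever turns up (here the stray ./com/coltfred/util/com directory) is what the compiler is resolving the com in the import against.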

Translate F2PY compile steps into setup.py

I've inherited a Fortran 77 code that implements several subroutines, driven by a program block that requires a significant amount of user input via an interactive command prompt every time the program is run. Since I'd like to automate running the code, I moved all the subroutines into a module and wrote a wrapper through F2PY. Everything works fine after a 2-step compilation:
gfortran -c my_module.f90 -o my_module.o -ffixed-form
f2py -c my_module.o -m my_wrapper my_wrapper.f90
This ultimately creates four files: my_module.o, my_wrapper.o, my_module.mod, and my_wrapper.so. The my_wrapper.so is the module which I import into Python to access the legacy Fortran code.
My goal is to include this code in a larger package of scientific codes, which already has a setup.py using distutils to build a Cython module. Totally ignoring the Cython code for the moment, how am I supposed to translate the 2-step build into an extension in the setup.py? The closest I've been able to figure out looks like:
from numpy.distutils.core import setup, Extension

wrapper = Extension('my_wrapper', ['my_wrapper.f90'])

setup(
    libraries=[('my_module', dict(sources=['my_module.f90'],
                                  extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules=[wrapper]
)
This doesn't work, though. My compiler throws many warnings on my_module.f90, but it still compiles (it throws no warnings with the compiler invocation above). When it tries to compile the wrapper, though, it fails to find my_module.mod, even though that file is successfully created.
Any thoughts? I have a feeling I'm missing something trivial, but the documentation just doesn't seem fleshed out enough to indicate what it might be.
It might be a little late, but your problem is that you are not linking in my_module when building my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90'],
                    libraries=['my_module'])

setup(
    libraries=[('my_module', dict(sources=['my_module.f90'],
                                  extra_f90_compile_args=["-ffixed-form"]))],
    ext_modules=[wrapper]
)
If your only use of my_module is for my_wrapper, you could simply add it to the sources of my_wrapper:
wrapper = Extension('my_wrapper', sources=['my_wrapper.f90', 'my_module.f90'],
                    extra_f90_compile_args=["-ffixed-form"])

setup(
    ext_modules=[wrapper]
)
Note that this will also export everything in my_module to Python, which you probably don't want.
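Either way, the build-and-import cycle is the standard numpy.distutils one (a sketch; what my_wrapper exports depends on your Fortran code):
$ python setup.py build_ext --inplace
$ python -c "import my_wrapper; print(my_wrapper.__doc__)"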
I am dealing with such a two-layer library structure outside of Python, using CMake as the top-level build system. I have it set up so that make python calls distutils to build the Python wrappers. The setup.py scripts can safely assume that all external libraries are already built and installed. This strategy is advantageous if one wants general-purpose libraries that are installed system-wide and then wrapped for different applications such as Python, Matlab, Octave, IDL, ..., which all have different ways of building extensions.

Scalac doesn't find dependent classes

I'm trying to compile a program with the two simplest classes:
class BaseClass
placed in BaseClass.scala and
class Test extends BaseClass
placed in Test.scala. Issuing the command scalac Test.scala fails because BaseClass is not found.
I don't want to compile the classes one by one or use scalac *.scala.
The same operation works in Java: javac Test.java. Where am I wrong?
Let's see first what Java does:
dcs@dcs-132-CK-NF79:~/tmp$ ls *.java
BaseClass.java Test.java
dcs@dcs-132-CK-NF79:~/tmp$ ls *.class
ls: cannot access *.class: No such file or directory
dcs@dcs-132-CK-NF79:~/tmp$ javac -cp . Test.java
dcs@dcs-132-CK-NF79:~/tmp$ ls *.class
BaseClass.class Test.class
So, as you can see, Java actually compiles BaseClass automatically when you do that. Which raises the question: how can it do that? How does it know what file to compile?
Well, when you write extends BaseClass in Java, the compiler actually knows a few things. It knows the directory where these files are found, from the package name. It also knows BaseClass is either in the current file or in a file called BaseClass.java. If you doubt either of these facts, try moving the file out of its directory or renaming it, and see if Java can still compile it.
So, why can't Scala do the same? Because it assumes neither thing! Scala's files can be in any directory, irrespective of the package they declare. In fact, a single Scala file can even declare more than one package, which would make the directory rule impossible. Also, a Scala class can be in any file whatsoever, irrespective of its name.
So, while Java dictates what directory the file should be in and what the file is called, and then reaps the benefit by letting you omit filenames from the javac command line, Scala lets you organize your code in whatever way seems best to you, but requires you to tell it where that code is.
Take your pick.
You need to compile BaseClass.scala first:
$ scalac Test.scala
Test.scala:1: error: not found: type BaseClass
class Test extends BaseClass
^
one error found
$ scalac BaseClass.scala
$ scalac Test.scala
$
EDIT: So, the question now is why you have to compile the files one by one. Well, because the Scala compiler just doesn't do this kind of dependency handling. Its authors probably expect you to use a build tool like sbt or Maven, so that they don't have to bother.
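For completeness, the sbt route is minimal (a sketch; the Scala version is only an example): put BaseClass.scala and Test.scala under src/main/scala, add a build.sbt next to them, and run sbt compile. sbt passes all the sources to scalac together, so the dependency resolves itself.
// build.sbt
scalaVersion := "2.13.14"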