Help with Haskell dynamic plugin load - plugins

I'm a Haskell beginner, and I'm trying to use the dynamic loading in the 'plugins' package. I'm kind of lost. Here is a minimal program with two files.
Main.hs:
module Main (main) where

import System.Plugins

main :: IO ()
main = do
  putStrLn "Loading"
  mv <- dynload "Plug.o" [] [] "thing" -- also try 'load' here
  putStrLn "Loaded"
  case mv of
    LoadFailure msgs -> putStrLn "fail" >> print msgs
    LoadSuccess _ v  -> putStrLn "success" >> print (v :: Integer)
And Plug.hs:
module Plug (thing) where
thing :: Integer
thing = 1234000
I compile Plug with ghc -c Plug.hs, which produces Plug.o. I then compile Main.hs with ghc -o Main Main.hs and run Main. I also try replacing dynload with load, and running with runhaskell instead of compiling. Only one of these four combinations works. What am I doing wrong?
with dynload
  compiled → prints "Loaded", then segfaults
  runhaskell → prints "Loading", then "Main.hs: Prelude.undefined"
with load
  compiled → succeeds and prints the Integer
  runhaskell → prints "Loading", hangs for 5-10 seconds, then disappears
I'm on Mac OS X, GHC version 7.0.2.
thanks,
Rob
Update
I can fix the compiled dynload by changing Plug.hs to the following...
module Plug (thing) where
import Data.Dynamic
thing :: Dynamic
thing = toDyn (1234000::Integer)
It would be nice if it didn't segfault on error. I guess there isn't enough metadata in Plug.o to check the type. Anyway, that leaves the runhaskell cases. I filed a bug for those.
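For reference, this is the check that Data.Dynamic provides: fromDynamic returns Nothing on a type mismatch instead of crashing. A minimal sketch, independent of plugins:

import Data.Dynamic

-- fromDynamic performs the runtime type check that a bare Plug.o
-- cannot: a mismatch yields Nothing rather than a segfault.
asInteger :: Dynamic -> Maybe Integer
asInteger = fromDynamic

main :: IO ()
main = do
  print (asInteger (toDyn (1234000 :: Integer)))  -- Just 1234000
  print (asInteger (toDyn "oops"))                -- Nothing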

I've tried your example under Ubuntu 10.10 with GHC 6.12.1, and the results are: both dynload and load, whether compiled or run through runhaskell, give me the "Prelude.undefined" error, so I think you should report a bug to the developers.
I cannot see any special cases or conditions in the module's Haddock documentation, so I don't think you're doing anything wrong.

You might want to take a look at a similar problem with the GHC API on haskell ghc dynamic compilation only works on first compile.

Related

How to load multiple modules implementing the same behaviour

I do not understand how one is supposed to use multiple modules that each implement the same behaviour, since I get this error at compile time:
Function is already imported from
In my case I have two modules implementing the gen_event behaviour, and I am trying to import them in a third module.
I get the error message whenever I try to compile this code:
-module(mgr).
-import(h1, [init/1]). % implements gen_event
-import(h2, [init/1]). % implements gen_event
You can't do that. Import is a simple trick to avoid writing the fully qualified name of a function. It does nothing but tell the compiler: when you see init(P) in this module, replace it with h1:init(P).
Thus it is not possible to import several functions with the same name/arity.
For short names, I do not see any benefit to using import.
If you are using module:function with long names and you want to shorten the lines in your code, you can use macros instead, and there is no such limitation (but also little chance that the function names are the same :o):
-define(Func1(Var1,...,VarN), module1:func(Var1,...,VarN)).
-define(Func2(Var1,...,VarN), module2:func(Var1,...,VarN)).
...
?Func1(A1,...,AN);
...
?Func2(B1,...,BN);
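For instance, a concrete sketch with the two handler modules from the question (assuming h1 and h2 each export an init/1 that returns {ok, State}):

-define(H1Init(Args), h1:init(Args)).
-define(H2Init(Args), h2:init(Args)).

start() ->
    %% Both handlers are now callable under distinct macro names,
    %% which -import/2 cannot provide for the same name/arity.
    {ok, S1} = ?H1Init([]),
    {ok, S2} = ?H2Init([]),
    {S1, S2}.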
Edit
The next example illustrates how it works. First I create the module mod1 as follows:
-module(mod1).
-export([test/1]).

test(P) ->
    case P of
        1 -> ok;
        2 -> mod2:test()
    end.
and I test it in the shell:
1> c(mod1).
{ok,mod1}
2> mod1:test(1).
ok
3> mod1:test(2).
** exception error: undefined function mod2:test/0
4> % this call failed because mod2 was not defined.
4> % let's define it and compile.
mod2 is created as:
-module(mod2).
-export([test/0]).

test() ->
    io:format("now it works~n").
continue in the shell:
4> c(mod2).
{ok,mod2}
5> mod1:test(1).
ok
6> mod1:test(2).
now it works
ok
7>
As you can see, it is not necessary to modify mod1, only to create and compile mod2 (note that it would be the same if mod2 already existed but the function test/0 was not exported).
If you want to verify that your code does not call undefined functions, you can use external tools. As I use rebar3 to manage my projects, I run the command rebar3 xref to perform this check. Note that calling an undefined function is only a warning; it is meaningful in the context of application upgrades. This verification is not bulletproof: it is done at build time, so it does not guarantee that the modules you need will be present, with the right version, on a production system. That opens up a lot of more interesting questions about versioning, code loading, and so on.

Instantiate a module in OCaml dynamically

I have several modules implementing the same interface. I want to load only one of these modules, depending on an argument given on the command line.
I was thinking of using first-class modules, but the problem is that I want to execute some functions before the module is instantiated.
For the moment I have this :
module Arch = (val RetrolixAbstractArch.get_arch() : RetrolixAbstractArch.AbstractArch)

let get_arch () =
  let arch = Options.get_arch () in
  if arch = "" then
    Error.global_error "During analysis of compiler's architecture"
      "No architecture specified"
  else if arch = "mips" then
    (module MipsArch : AbstractArch)
  else
    Error.global_error "During analysis of compiler's architecture"
      (Printf.sprintf "Architecture %s not supported or unknown" arch)
But since the command line is not parsed yet, Options.get_arch gives me the empty string.
I would like the command-line parsing to happen before this function is executed (without adding the parsing to the function itself). Is that possible? Should I find another way to achieve this?
It is possible, but you must use local modules. This is a minor issue that basically requires only a little refactoring.
let arch_of_name = function
  | "mips" -> (module MipsArch : AbstractArch)
  | "arm" -> (module Arm)
  | _ -> invalid_arg "unknown arch"

let main () =
  ...
  let arch_name = get_arch () in
  let module Arch = (val arch_of_name arch_name) in
  (* here you can use module Arch as usual *)
Another approach is to functorize your modules over an arch structure and instantiate the functors as soon as you know the architecture. You can see a full-fledged example here (see the function target_of_arch, which creates a first-class module for a particular architecture).
If your AbstractArch interface doesn't contain type definitions, then you can use other abstractions instead of modules: records of functions, or objects. They may work more smoothly, and may even allow you to override the arch instance dynamically (by making the arch instance a reference, although I wouldn't suggest this, as it is quite unclean, imo).
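To illustrate the record-of-functions alternative, here is a minimal sketch; the arch type, its fields, and the two toy values are invented for the example:

(* A record standing in for the module interface. *)
type arch = {
  name : string;
  registers : string list;
}

let mips = { name = "mips"; registers = ["$a0"; "$v0"] }
let arm = { name = "arm"; registers = ["r0"; "r1"] }

let arch_of_name = function
  | "mips" -> mips
  | "arm" -> arm
  | _ -> invalid_arg "unknown arch"

(* A reference even allows swapping the arch at runtime,
   though as noted above this is quite unclean. *)
let current_arch = ref mips
let () = current_arch := arch_of_name "arm"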

How to use logFunc with openSimpleConn in Persistent Postgresql module?

I am learning Yesod and was looking for postgresql usage examples in ghci when I ran into this
How to perform database queries in GHCi in Yesod Application
pcon <- openSimpleConn con
The package has changed since that answer was given, and openSimpleConn now requires a LogFunc in addition to the Connection. Reading the docs for openSimpleConn and LogFunc does not yield any examples of where to get a LogFunc or how to use one (I am still new to Haskell).
Assuming it wants some kind of logging function, I tried doing
pcon <- openSimpleConn runStdoutLoggingT con
But this was met with
<interactive>:22:9: Not in scope: ‘runStdoutLoggingT’
At which point I decided I needed some help.
So, my questions are, what is a LogFunc and what is the proper way to get and use one?
The simplest implementation you can use is \_ _ _ _ -> return (), i.e., ignore all four arguments and do nothing. For more details on what's going on, check out the monad-logger package.
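Putting that together in GHCi, a minimal sketch (assuming con is an already-opened postgresql-simple Connection, as in the linked answer):

-- A no-op LogFunc: ignore location, source, level and message.
let noLog _ _ _ _ = return ()
pcon <- openSimpleConn noLog con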

How does the “scala.sys.process” from Scala 2.9 work?

I just had a look at the new scala.sys and scala.sys.process packages to see if there is something helpful here. However, I am at a complete loss.
Has anybody got an example on how to actually start a process?
And, which is most interesting for me: Can you detach processes?
A detached process will continue to run when the parent process ends and is one of the weak spots of Ant.
UPDATE:
There seems to be some confusion about what detach means. Here is a real-life example from my current project, once with Z shell and once with Take Command:
Z-Shell:
if ! ztcp localhost 5554; then
    echo "[ZSH] Start emulator"
    emulator \
        -avd Nexus-One \
        -no-boot-anim \
        1>~/Library/Logs/${PROJECT_NAME}-${0:t:r}.out \
        2>~/Library/Logs/${PROJECT_NAME}-${0:t:r}.err &
    disown
else
    ztcp -c "${REPLY}"
fi;
Take-Command:
IFF %#Connect[localhost 5554] lt 0 THEN
    ECHO [TCC] Start emulator
    DETACH emulator -avd Nexus-One -no-boot-anim
ENDIFF
In both cases it is fire and forget: the emulator is started and will continue to run even after the script has ended. Of course, having to write the scripts twice is a waste, so I am now looking into Scala for unified process handling without Cygwin or XML syntax.
First import:
import scala.sys.process.Process
then create a ProcessBuilder:
val pb = Process("""ipconfig.exe""")
Then you have two options:
run and block until the process exits
val exitCode = pb.!
run the process in background (detached) and get a Process instance
val p = pb.run
Then you can get the exit code from the process (if the process is still running, this blocks until it exits):
val exitCode = p.exitValue
If you want to handle the input and output of the process you can use ProcessIO:
import scala.sys.process.ProcessIO
val pio = new ProcessIO(_ => (),
                        stdout => scala.io.Source.fromInputStream(stdout)
                                    .getLines.foreach(println),
                        _ => ())
pb.run(pio)
I'm pretty sure detached processes work just fine, considering that you have to explicitly wait for a process to exit, and that you need threads to babysit stdout and stderr. This is pretty basic, but it's what I've been using:
/** Run a command, collecting the stdout, stderr and exit status */
def run(in: String): (List[String], List[String], Int) = {
  val qb = Process(in)
  var out = List[String]()
  var err = List[String]()
  val exit = qb ! ProcessLogger(s => out ::= s, s => err ::= s)
  (out.reverse, err.reverse, exit)
}
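A hypothetical call, assuming ls is on the PATH:

val (out, err, code) = run("ls -l")
out.foreach(println)
println("exit code: " + code)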
Process was imported from SBT. Here's a thorough guide on how to use the process library as it appears in SBT.
https://github.com/harrah/xsbt/wiki/Process
Has anybody got an example on how to actually start a process?
import sys.process._ // Package object with implicits!
"ls"!
And, which is most interesting for me: Can you detach processes?
"/path/to/script.sh".run()
Most of what you'll do is related to sys.process.ProcessBuilder, the trait. Get to know that.
There are implicits that make usage less verbose, and they are available through the package object sys.process. Import its contents, as shown in the examples, and take a look at the package object's scaladoc.
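A short sketch of what those implicits buy you (the command strings are examples only):

import scala.sys.process._

val listing: String = "ls -l".!!               // run, capture stdout as a String
val status: Int = ("ls -l" #| "grep scala").!  // shell-style pipe, wait for exit
val proc = "sleep 60".run()                    // start without blocking
proc.destroy()                                 // stop it later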
The following function allows easy use of here documents:
def #<<< (command: String)(hereDoc: String) = {
  val process = Process(command)
  val io = new ProcessIO(
    in  => { in.write(hereDoc getBytes "UTF-8"); in.close },
    out => { scala.io.Source.fromInputStream(out).getLines.foreach(println) },
    err => { scala.io.Source.fromInputStream(err).getLines.foreach(println) })
  process run io
}
Sadly I was not able (did not have the time) to make it an infix operation. The suggested calling convention is therefore:
#<<< ("command") {"""
Here Document data
"""}
It would be cool if anybody could give me a hint on how to make it a more shell-like call (one possibility is sketched below):
"command" #<<< """
Here Document data
""" !
Documenting process a little better was second on my list for probably two months. You can infer my list from the fact that I never got to it. Unlike most things I don't do, this is something I said I'd do, so I greatly regret that it remains as undocumented as it was when it arrived. Sword, ready yourself! I fall upon thee!
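Following up on the #<<< answer above: one way to get the shell-like infix call is to enrich String (a sketch, assuming Scala 2.10+ for implicit classes; the trailing ! is dropped because the method runs the process itself):

import scala.sys.process._

// Hypothetical wrapper enabling: "command" #<<< """...here doc..."""
implicit class HereDocSyntax(command: String) {
  def #<<<(hereDoc: String): Process = {
    val io = new ProcessIO(
      in  => { in.write(hereDoc getBytes "UTF-8"); in.close() },
      out => scala.io.Source.fromInputStream(out).getLines.foreach(println),
      err => scala.io.Source.fromInputStream(err).getLines.foreach(println))
    Process(command).run(io)
  }
}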
If I understand the dialog so far, one aspect of the original question is not yet answered:
how to "detach" a spawned process so it continues to run independently of the parent scala script
The primary difficulty is that all of the classes involved in spawning a process must run on the JVM, and they are unavoidably terminated when the JVM exits. However, a workaround is to indirectly achieve the goal by leveraging the shell to do the "detach" on your behalf. The following scala script, which launches the gvim editor, appears to work as desired:
import scala.sys.process._ // for the implicit Seq[String] => ProcessBuilder conversion

val cmd = List(
  "scala",
  "-e",
  """import scala.sys.process._ ; "gvim".run ; System.exit(0);"""
)
val proc = cmd.run
It assumes that scala is in the PATH, and it does (unavoidably) leave a JVM parent process running as well.

How to reload a class or package in Scala REPL?

I almost always have a Scala REPL session or two open, which makes it very easy to give Java or Scala classes a quick test. But if I change a class and recompile it, the REPL continues with the old one loaded. Is there a way to get it to reload the class, rather than having to restart the REPL?
Just to give a concrete example, suppose we have the file Test.scala:
object Test { def hello = "Hello World" }
We compile it and start the REPL:
~/pkg/scala-2.8.0.Beta1-prerelease$ bin/scala
Welcome to Scala version 2.8.0.Beta1-prerelease
(Java HotSpot(TM) Server VM, Java 1.6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> Test.hello
res0: java.lang.String = Hello World
Then we change the source file to
object Test {
  def hello = "Hello World"
  def goodbye = "Goodbye, Cruel World"
}
but we can't use it:
scala> Test.goodbye
<console>:5: error: value goodbye is not a member of object Test
Test.goodbye
^
scala> import Test;
<console>:1: error: '.' expected but ';' found.
import Test;
There is an alternative to reloading the class if the goal is to not have to repeat previous commands. The REPL has the command
:replay
which restarts the REPL environment and plays back all previous valid commands. (The invalid ones are skipped, so if it was wrong before, it won't suddenly work.) When the REPL is reset, it does reload classes, so new commands can use the contents of recompiled classes (in fact, the old commands will also use those recompiled classes).
This is not a general solution, but is a useful shortcut to extend an individual session with re-computable state.
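For example, a hypothetical session after recompiling Test.scala with goodbye added:

scala> :replay
Replaying: Test.hello
res0: java.lang.String = Hello World

scala> Test.goodbye
res1: java.lang.String = Goodbye, Cruel World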
Note: this applies to the bare Scala REPL. If you run it from SBT or some other environment, it may or may not work, depending on how SBT or the other environment packages up classes: if you don't update what is on the actual classpath being used, of course it won't work!
Class reloading is not an easy problem. In fact, it's something that the JVM makes very difficult. You do have a couple of options though:
Start the Scala REPL in debug mode. The JVM debugger has some built-in reloading which works at the method level. It won't help you with the case you gave, but it would handle something simple like changing a method implementation.
Use JRebel (http://www.zeroturnaround.com/jrebel). JRebel is basically a super-charged class reloading solution for the JVM. It can handle member addition/removal, new/removed classes, definition changes, etc. Just about the only thing it can't handle is changes in the class hierarchy (adding a super-interface, for example). It's not a free tool, but they do offer a complimentary license which is limited to Scala compilation units.
Unfortunately, both of these are limited by the Scala REPL's implementation details. I use JRebel, and it usually does the trick, but there are still cases where the REPL will not reflect the reloaded class(es). Still, it's better than nothing.
There is a command that meets your requirement:
:load path/to/file.scala
which will reload the Scala source file and recompile it to classes; then you can replay your code.
This works for me....
If your new source file Test.scala looks something like this...
package com.tests

object Test {
  def hello = "Hello World"
  def goodbye = "Goodbye, Cruel World"
}
You first have to load the new changes into Scala console (REPL).
:load src/main/scala/com/tests/examples/Test.scala
Then re-import the package so you can reference the new code in Scala console.
import com.tests.Test
Now enjoy your new code without restarting your session :)
scala> Test.goodbye
res0: String = Goodbye, Cruel World
If the .scala file is in the directory where you start the REPL, you can omit the full path: just put :load myfile.scala, and then import.