Does the scala ! subprocess command wait for the subprocess to finish? - scala

So, I have the following kind of code running in each map tasks on Spark.
@volatile var res = (someProgram + fileName) !
var cmdRes = ("rm " + fileName) !;
The file names for each map task are unique. The basic idea is that once the first command finishes, the second command deletes the file. However, I notice that the program sometimes complains that the file does not exist. It seems that the subprocess call is not synchronous, that is, it does not wait for the subprocess to complete. Is that correct? And if it is indeed the case, how can we correct that?

As you can see in the docs, the ! method blocks until exit. The docs say that it:
Starts the process represented by this builder, blocks until it exits, and returns the exit code.
You should probably also be checking the exit code, both to interpret the result and to deal with exceptional cases.
When creating process commands by concatenation, you are often better off using the Seq extensions (as opposed to the String ones) to create a ProcessBuilder. The docs even include this helper, which might help you:
// This uses ! to get the exit code
def fileExists(name: String) = Seq("test", "-f", name).! == 0
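For instance, here is a minimal sketch of the pattern from the question, using the Seq extensions instead of string concatenation and checking both exit codes (the paths are hypothetical placeholders):

import scala.sys.process._

// Hypothetical placeholders standing in for the values used in the question
val someProgram = "/path/to/someProgram"
val fileName    = "/tmp/input-123.txt"

val res = Seq(someProgram, fileName).!   // blocks until the program exits
if (res == 0) {
  val cmdRes = Seq("rm", fileName).!     // delete the file only after the first command succeeded
  if (cmdRes != 0) println(s"rm exited with code $cmdRes")
} else {
  println(s"$someProgram exited with code $res")
}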

Related

How to issue a command that produces infinite output and return immediately

When I write the following code (in Ammonite, but I don't think it matters)
("tail -f toTail.txt" lineStream) foreach(println(_)), the program gives me the last lines as intended but then hangs, and even if I write more to the file, nothing comes out.
How does the API support processes that have unbounded output?
I tried to write val myStream = ("tail -f toTail.txt" lineStream_!)
but it still does not return right away.
Here is what the scala doc says:
lineStream: returns immediately like run, and the output being generated is provided through a Stream[String]. Getting the next element of that Stream may block until it becomes available.
Hence I don't understand why it blocks.
By the way, I am having exactly the same behavior with the Ammonite API:
if I type %%("tail", "-f", "toTail.txt"), once again the method just hangs and does not return immediately.
There is no issue with the ProcessBuilder (at least not one that stems from your use case). From the ProcessBuilder documentation:
Starting Processes
To execute all external commands associated with a ProcessBuilder, one may use one of four groups of methods. Each of these methods have various overloads and variations to enable further control over the I/O. These methods are:
run: the most general method, it returns a scala.sys.process.Process immediately, and the external command executes concurrently.
!: blocks until all external commands exit, and returns the exit code of the last one in the chain of execution.
!!: blocks until all external commands exit, and returns a String with the output generated.
lineStream: returns immediately like run, and the output being generated is provided through a Stream[String]. Getting the next element of that Stream may block until it becomes available. This method will throw an exception if the return code is different than zero -- if this is not desired, use the lineStream_! method.
The documentation clearly states that lineStream might block until the next line becomes available. Since tail -f produces an unbounded stream of lines, the program will block waiting for the next line to appear.
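To make the distinction concrete, here is a rough sketch of the four groups in use (the commands are only illustrative):

import scala.sys.process._

val detached: Process     = "sleep 5".run()   // returns immediately, the command runs concurrently
val exitCode: Int         = "ls".!            // blocks until exit, returns the exit code
val captured: String      = "ls".!!           // blocks until exit, returns the captured output
val lines: Stream[String] = "ls".lineStream   // returns immediately; reading an element may block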
For the following assume I have a file: /Users/user/tmp/sample.txt and it's contents are:
boom
bar
cat
Why lineStream_! isn't wrong
import scala.language.postfixOps
import scala.sys.process._

object ProcessBuilder extends App {
  val myStream: Stream[String] = ("tail /Users/user/tmp/sample.txt" lineStream_!)
  println("I'm after tail!")
  myStream.filter(_ != null).foreach(println)
  println("Finished")
  System.exit(0)
}
Outputs:
I'm after tail!
boom
bar
cat
Finished
So you see that lineStream_! returned immediately, because the output of this command is finite.
How to return immediately from a command that produces infinite output:
Let's try this with tail -f. You need more control over your process. Again, as the documentation states:
If one desires full control over input and output, then a scala.sys.process.ProcessIO can be used with run.
So, for example:
import java.io.{BufferedReader, InputStreamReader}
import scala.language.postfixOps
import scala.sys.process._

object ProcessBuilder extends App {
  var reader: BufferedReader = _
  try {
    var myStream: Stream[String] = Stream.empty
    val processIO = new ProcessIO(
      (os: java.io.OutputStream) => ??? /* stdin handler: send things to the process here */,
      (in: java.io.InputStream) => {
        // stdout handler: lazily collect lines until the terminating condition is met
        reader = new BufferedReader(new InputStreamReader(in))
        myStream = Stream.continually(reader.readLine()).takeWhile(_ != "ff")
      },
      (in: java.io.InputStream) => ??? /* stderr handler, unused in this example */,
      true // daemonize the I/O threads
    )
    "tail -f /Users/user/tmp/sample.txt".run(processIO)
    println("I'm after the tail command...")
    Thread.sleep(2000)
    println("Such computation performed while tail was active!")
    Thread.sleep(2000)
    println("Such computation performed while tail was active again!")
    println(
      s"Captured these lines while computing: ${myStream.print(System.lineSeparator())}")
    Thread.sleep(2000)
    println("Another computation!")
  } finally {
    Option(reader).foreach(_.close())
  }
  println("Finished")
  System.exit(0)
}
Outputs:
I'm after the tail command...
Such computation performed while tail was active!
Such computation performed while tail was active again!
boom
bar
cat
It still returns immediately and now it just hangs there waiting for more input. If I do echo 'fff' >> sample.txt from the tmp directory, the program outputs:
Another computation!
Finished
You now have the power to perform any computation you want after issuing a tail -f command and the power to terminate it depending on the condition you pass to the takeWhile method (or other methods that close the input stream).
For more details, check the ProcessIO documentation.
I think this has to do with how you are adding data to the file, not with the ProcessBuilder. If you are using an editor, for example, to add data, it rewrites the entire file (with a different inode) every time you save, and tail isn't detecting that.
Try doing tail -F instead of tail -f, that should work.

D receives location in args?

I'm pretty new to looking at D (like...yesterday, after looking for Kotlin benchmarks...) and currently trying to decide if it's a language I want to cope with.
I'm trying to pass some arguments from the command line and I'm a little surprised. Let's say I pass "-Foo -Bar".
My program is quite simple:
import std.stdio;

void main(string[] args) {
    foreach (arg; args) {
        writeln(arg);
    }
}
Coming from Java, I expected it to print
-Foo
-Bar
But my D program seems to receive its location as the first argument?
The output is:
/home/(username)/Java_Projects/HelloD/hellod
-Foo
-Bar
I tried searching for this, but all Google hits refer to Java's -D switch...
So, is this intended behaviour? If yes, does anyone know why?
That's normal in D, inherited from C and C++. The first argument is the name of the program so you can use it to determine which function you want in a multi-use program.
The busybox unix toolset https://busybox.net/ uses this (well, at least used to, I'm not sure if it has changed) so one program, busybox, can be called as various unix commands like ls or cp.
Using args[0], it can tell which one it was called as, though they all point to the same binary program, and respond accordingly.
TIP: if you're not interested in this, you can loop just your args with foreach(arg; args[1 .. $]) {}

What does Some(string.!!) mean in Scala?

I have
val str = s"""/bin/bash -c 'some command'"""
job = Some(str.!!)
It is meant to execute the bash command I assume.
Can someone explain this syntax?
Googling for '.!!' doesn't help much, and neither does 'dot exclamation exclamation', so I hope someone can explain this one and/or point me to the docs.
The job doesn't run and I'm trying to debug the code, but when I put this in a
try {
  command = Some(str.!!)
}
catch {
  case e: Exception =>
    println(e.toString)
}
e is actually not an Exception for some reason...
I'm trying to figure out what this really does and how to find out what is happening.
There is an implicit conversion from String to ProcessBuilder. When you import scala.sys.process._, Scala will automatically perform the conversion when needed, thus making the method !! available on String instances. You can find the methods of ProcessBuilder here: http://www.scala-lang.org/api/current/index.html#scala.sys.process.ProcessBuilder
The documentation for !! says that "If the exit code is non-zero, an exception is thrown." It appears that bash in this case does return 0, so the command was for some reason successful.
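For illustration, here is a small sketch of how !! behaves once the implicit conversion is in scope (the shell commands are only examples):

import scala.sys.process._

// !! captures stdout as a String and throws if the exit code is non-zero
val out: String = Seq("/bin/bash", "-c", "echo hello").!!

// A non-zero exit code surfaces as a RuntimeException
try {
  Seq("/bin/bash", "-c", "exit 7").!!
} catch {
  case e: RuntimeException => println(s"command failed: ${e.getMessage}")
}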

sbt parsers and dynamic completions

I am trying to build a git command parser in sbt.
The goal of the parser is not so much to validate the actual git command but rather to provide auto-completion within the sbt console.
The parser relies on bash completion scripts, so it's fair to say that generating the completions is fairly expensive, as a process has to be spawned every time. That's why I'd like to minimize the number of calls made to the bash-completion process.
I have a working solution, that looks like this:
def autoCompleteParser(state: State) = {
  val extracted = Project.extract(state)
  import extracted._
  val dir = extracted.get(baseDirectory)

  def suggestions(args: Seq[String]): Seq[String] = {
    // .. calling Process and collecting the completions into a Seq[String]
  }

  val gitArgsParser: Parser[Seq[String]] = {
    def loop(previous: Seq[String]): Parser[Seq[String]] =
      token(Space) ~> NotSpace.examples(suggestions(previous): _*).flatMap(res => loop(previous :+ res))
    loop(Vector())
  }

  gitArgsParser
}
val test = Command("git-auto-complete")(autoCompleteParser _)(autoCompleteAction)
However I have two problems:
the completion process is called for every character, which is more than I'd like
the potential completions seem to be passed as a parameter to another round of completions, which means even more calls to the external process.
My question is the following: how do I tell sbt to reuse/cache the completions it has already got for the rest of an argument, without calling the process for each character? For example:
completions for 'git a' are:
dd m nnotate pply rchive
Then completions for 'git ad' are:
d
without the need to call the suggestions method again. I have tried to implement an ExampleSource, but I could not obtain the behavior I was looking for from it.
Any pointer would be welcome. And if someone understands why the potential completions seem to be passed into another completion round, that would help me a lot too.

How does the “scala.sys.process” from Scala 2.9 work?

I just had a look at the new scala.sys and scala.sys.process packages to see if there is something helpful here. However, I am at a complete loss.
Has anybody got an example on how to actually start a process?
And, which is most interesting for me: Can you detach processes?
A detached process will continue to run when the parent process ends and is one of the weak spots of Ant.
UPDATE:
There seems to be some confusion about what detach means. Here is a real-life example from my current project, once with Z shell and once with Take Command:
Z-Shell:
if ! ztcp localhost 5554; then
    echo "[ZSH] Start emulator"
    emulator \
        -avd Nexus-One \
        -no-boot-anim \
        1>~/Library/Logs/${PROJECT_NAME}-${0:t:r}.out \
        2>~/Library/Logs/${PROJECT_NAME}-${0:t:r}.err &
    disown
else
    ztcp -c "${REPLY}"
fi;
Take-Command:
IFF %#Connect[localhost 5554] lt 0 THEN
    ECHO [TCC] Start emulator
    DETACH emulator -avd Nexus-One -no-boot-anim
ENDIFF
In both cases it is fire and forget: the emulator is started and will continue to run even after the script has ended. Of course, having to write the scripts twice is a waste, so I am now looking into Scala for unified process handling without Cygwin or XML syntax.
First import:
import scala.sys.process.Process
then create a ProcessBuilder
val pb = Process("""ipconfig.exe""")
Then you have two options:
run and block until the process exits
val exitCode = pb.!
run the process in background (detached) and get a Process instance
val p = pb.run
Then you can get the exit code from the process (if the process is still running, this blocks until it exits):
val exitCode = p.exitValue
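Putting those steps together, a small sketch (reusing the ipconfig.exe example from above):

import scala.sys.process.Process

val pb = Process("""ipconfig.exe""")
val p  = pb.run              // returns immediately; the command runs concurrently
// ... do other work while the process runs ...
val exitCode = p.exitValue   // blocks here until the process has exited
println(s"process finished with exit code $exitCode")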
If you want to handle the input and output of the process you can use ProcessIO:
import scala.sys.process.ProcessIO

val pio = new ProcessIO(
  _ => (),
  stdout => scala.io.Source.fromInputStream(stdout).getLines.foreach(println),
  _ => ())
pb.run(pio)
I'm pretty sure detached processes work just fine, considering that you have to explicitly wait for a process to exit, and you need to use threads to babysit its stdout and stderr. This is pretty basic, but it's what I've been using:
/** Run a command, collecting the stdout, stderr and exit status */
def run(in: String): (List[String], List[String], Int) = {
  val qb = Process(in)
  var out = List[String]()
  var err = List[String]()
  val exit = qb ! ProcessLogger((s) => out ::= s, (s) => err ::= s)
  (out.reverse, err.reverse, exit)
}
Process was imported from SBT. Here's a thorough guide on how to use the process library as it appears in SBT.
https://github.com/harrah/xsbt/wiki/Process
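A possible usage of that helper (the command is just an example and assumes ls is on the PATH):

val (out, err, exitCode) = run("ls -la")
out.foreach(println)
if (exitCode != 0) err.foreach(Console.err.println)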
Has anybody got an example on how to actually start a process?
import sys.process._ // Package object with implicits!
"ls"!
And, which is most interesting for me: Can you detach processes?
"/path/to/script.sh".run()
Most of what you'll do is related to sys.process.ProcessBuilder, the trait. Get to know that.
There are implicits that make usage less verbose, and they are available through the package object sys.process. Import its contents, like shown in the examples. Also, take a look at its scaladoc as well.
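As a rough sketch of what those implicits buy you (the commands are only illustrative):

import scala.sys.process._

// Strings and Seqs act as ProcessBuilders, so they can be combined and run concisely
val lineCount: String   = ("cat /etc/passwd" #| "wc -l").!!.trim  // pipe one command into another
val background: Process = "sleep 60".run()                        // fire and forget; returns immediately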
The following function will allow easy use of here documents:
def #<<< (command: String)(hereDoc: String) = {
  val process = Process(command)
  val io = new ProcessIO(
    in  => { in.write(hereDoc getBytes "UTF-8"); in.close() },
    out => { scala.io.Source.fromInputStream(out).getLines.foreach(println) },
    err => { scala.io.Source.fromInputStream(err).getLines.foreach(println) })
  process run io
}
Sadly I was not able to (did not have the time to) make it an infix operation. The suggested calling convention is therefore:
#<<< ("command") {"""
Here Document data
"""}
It would be cool if anybody could give me a hint on how to make it a more shell-like call:
"command" #<<< """
Here Document data
""" !
Documenting process a little better was second on my list for probably two months. You can infer my list from the fact that I never got to it. Unlike most things I don't do, this is something I said I'd do, so I greatly regret that it remains as undocumented as it was when it arrived. Sword, ready yourself! I fall upon thee!
If I understand the dialog so far, one aspect of the original question is not yet answered:
how to "detach" a spawned process so it continues to run independently of the parent scala script
The primary difficulty is that all of the classes involved in spawning a process must run on the JVM, and they are unavoidably terminated when the JVM exits. However, a workaround is to indirectly achieve the goal by leveraging the shell to do the "detach" on your behalf. The following scala script, which launches the gvim editor, appears to work as desired:
import scala.sys.process._

val cmd = List(
  "scala",
  "-e",
  """import scala.sys.process._ ; "gvim".run ; System.exit(0);"""
)
val proc = cmd.run
It assumes that scala is in the PATH, and it does (unavoidably) leave a JVM parent process running as well.