Execute external command - scala

I'm not sure whether this is a Scala or a Play! question. I want to execute an external command from my Play application, capture the command's output, and show the user a report based on it. Can anyone help?
For example, when I run my-command from a shell it prints output like the following, which I want to capture and display on the web:
Id Name IP
====================
1 A x.y.z.a
2 B p.q.r.s
Please don't worry about the format and parsing of the output. Functionally, I am looking for something like PHP's exec. I know about Java's Runtime.getRuntime().exec("command"), but is there a Scala/Play version that serves the purpose?

The !! method of the scala.sys.process package does what you need: it executes the command and captures its text output. For example:
import scala.sys.process._
val cmd = "uname -a" // Your command
val output = cmd.!! // Captures the output

scala> import scala.sys.process._
scala> Process("cat temp.txt")!
This assumes there is a temp.txt file in your home directory. ! executes the command and returns its exit code. See scala.sys.process for more info.

You can use the Process library. For instance:
import scala.sys.process.Process
Process("ls").!!
to get the list of files in the folder as a string. The !! method captures the output of the command.
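If you also need the exit code or stderr (for example, to report failures to the user), a minimal sketch using ProcessLogger from the same package could look like this ("my-command" is a placeholder for your binary):

import scala.sys.process._

val out = new StringBuilder
val err = new StringBuilder
// collect stdout and stderr line by line while the command runs
val logger = ProcessLogger(line => out.append(line).append('\n'),
                           line => err.append(line).append('\n'))
val exitCode = Process(Seq("my-command")).!(logger)
// out.toString now holds the text to parse and render in your view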


Spark-shell -i path/to/filename alternative

We have:
spark-shell -i path/to/script.scala
to run a Scala script. Is it possible to add something like this to the spark-defaults.conf file so that the script is always loaded when spark-shell starts and does not have to be added to the command line?
I would like to use this to store import statements, credentials, and user-defined functions that I use regularly, so that I don't have to enter those commands every time I start spark-shell.
Thanks,
Shane
You can go to the Spark /bin directory, create a file spark-shell-new.cmd, and paste
spark-shell -i path/to/script.scala
into it, then run spark-shell-new in cmd like the default spark-shell.
You can do something like this from inside the shell:
:load <path_to_script>
Write all the required lines of code in that script.
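For illustration, a minimal sketch of such a startup script (the UDF and the path constant are hypothetical; spark is the SparkSession that spark-shell already provides):

// init.scala - load with spark-shell -i init.scala or :load init.scala
import org.apache.spark.sql.functions._

// register a user-defined function once so it is available in every session
spark.udf.register("to_upper", (s: String) => s.toUpperCase)

// a constant you use regularly
val dataRoot = "/data/shared"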

Execute the scala script through spark-shell in silent mode

I need to execute a Scala script through spark-shell in silent mode. When I use spark-shell -i "file.scala", after the execution I am dropped into the Scala interactive mode, and I don't want to end up there.
I have tried executing spark-shell -i "file.scala", but I don't know how to run the script in silent mode.
spark-shell -i "file.scala"
after execution, I get into
scala>
I don't want to end up at the scala> prompt.
Updating (October 2019) for a script that terminates
This question is also about running a script that terminates, that is, a "Scala script" run by spark-shell -i script.scala > output.txt that stops by itself (an internal System.exit(0) instruction terminates the script). See this question for a good example.
It also needs a "silent mode": the script is expected not to pollute output.txt.
Suppose Spark v2.2+.
PS: there are a lot of cases (typically small tools and module/algorithm tests) where the Spark interpreter can be better than the compiler... Please, "let's compile!" is not an answer here.
spark-shell -i file.scala keeps the interpreter open at the end, so System.exit(0) is required at the end of your script. The most appropriate solution is to place your code in a try {} block and put System.exit(0) in a finally {} block.
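A minimal sketch of a script structured this way (the DataFrame logic is a placeholder):

// file.scala
try {
  val df = spark.range(10)  // your actual job goes here
  df.show()
} finally {
  System.exit(0)  // prevents spark-shell -i from dropping into the scala> prompt
}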
If logging is required, you can use something like this:
spark-shell < file.scala > test.log 2>&1 &
If you have limitations on editing the file and you can't add System.exit(0), use:
echo :quit | spark-shell -i file.scala
UPD
If you want to suppress everything in the output except printlns, you have to turn off logging for spark-shell. A sample of the configs is here. Disabling all logging in $SPARK_HOME/conf/log4j.properties should allow you to see only printlns. But I would not follow this approach with printlns: general logging with log4j should be used instead. You can configure it to obtain the same results as with printlns; it boils down to configuring a pattern. This answer provides an example of a pattern that solves your issue.
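As an illustration, a script could log through log4j (bundled with Spark 2.x) instead of println; the logger name and message here are placeholders:

import org.apache.log4j.{Level, Logger}

val log = Logger.getLogger("my.script")  // arbitrary logger name
log.setLevel(Level.INFO)
log.info("script finished")  // formatted by the pattern configured in log4j.properties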
The best way is definitely to compile your Scala code to a jar and use spark-submit, but if you're simply looking for a quick iteration loop, you can issue a :quit after parsing your Scala code:
echo :quit | spark-shell -i yourfile.scala
Adding onto @rluta's answer: you can place the spark-shell call inside a shell script, say like the below:
spark-shell < yourfile.scala
But this would require you to keep each statement on a single line, since statements written across multiple lines can break when piped through stdin.
OR
echo :quit | spark-shell -i yourfile.scala
This should also work, quitting the shell once the script has run.

Currently testing file name in pytest [duplicate]

What I'm trying to do
I'm writing a small framework in python using pytest, and as part of the teardown I am taking a screenshot.
Now, I want that screenshot to be named according to the running test, not conftest.py
So, for example, my code right now is:
driver.save_screenshot(os.path.basename(__file__)+'.png')
The problem
This prints the name "conftest.py".
I want to print the calling test's name if possible. I am guessing I should use the request fixture?
You can access the current file path via request.node.fspath. Example:
# conftest.py
import pytest

@pytest.fixture
def currpath(request):
    # request.node is the collected test item; fspath is the file it came from
    return str(request.node.fspath)

Printing to Console in scalding script

I am trying to display some content on the console in a Scalding script. When I run the same logic in the Scalding shell I get the desired output, but when I run the script I get an error:
scripttest.scala:4: error: value dump is not a member of com.twitter.scalding.typed.TypedPipe[String]
The script is:
import com.twitter.scalding._

class scripttest(args: Args) extends Job(args) {
  val hello = TypedPipe.from(TextLine("tutorial/data/hello.txt"))
  hello.dump
}
When I ran the same logic in the console, it ran successfully. The output in the console:
Hello world
Goodbye world
Please explain why this occurs and how to print to console in a scalding script.
After looking closely at the documentation, you will see in section "2.6 REPL Reference", subsection "2.6.1 Enrichments available on TypedPipe/Grouped/CoGrouped objects" :
.dump: Print the contents to stdout (uses .toIterator)
hence dump is available only in the REPL.
I don't see a "scalding way" to write to the console, nor do I think it would make sense: you are running a pipeline, so the only guaranteed milestone is the end of the pipeline, where you can simply write your results into a file, as is done in all the tutorial scripts.
If it's just a matter of printing "hello my job started", remember it's just a Scala file and use println (for more advanced logging, Logback is your friend).
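For instance, a sketch of the same job writing its result to a file sink instead of dumping to stdout (the output path is arbitrary):

import com.twitter.scalding._

class ScriptTest(args: Args) extends Job(args) {
  // write the pipe's contents to a TSV file instead of calling the REPL-only .dump
  TypedPipe.from(TextLine("tutorial/data/hello.txt"))
    .write(TypedTsv[String]("tutorial/data/output.tsv"))
}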
To run the script locally, after having cloned the repository:
> ./sbt assembly
> ./scripts/scald.rb --local MyScript.scala
The first command runs all the tests and builds the assembly jar used by scald.rb, the script invoked in the second command to run your Scalding script.

Simultaneously displaying and capturing standard out in IPython?

I'm interested in implementing a behavior in IPython that would be like a combination of ! and !!. I'm trying to use an IPython terminal as an adjunct to my (Windows) shell. For a long-running command (e.g., a build script) I would like to be able to watch the output as it streams by, as ! does. I would also like to capture the output of the command into the output history, as !! does, but !! defers printing anything until all output is available.
Does anyone have any suggestions as to how to implement something like this? I'm guessing that a IPython.utils.io.Tee() object would be useful here, but I don't know enough about IPython to hook this up properly.
Here is a snippet of code I just tried in IPython Notebook v2.3, which seems to do what was requested:
import sys
import IPython.utils.io

# Tee writes to the log file and echoes to the notebook's stdout at the same time
outputstream = IPython.utils.io.Tee("outputfile.log", "w", channel="stdout")
outputstream.write("Hello worlds!\n")
outputstream.close()

# read the log back to confirm the output was captured
logstream = open("outputfile.log", "r")
sys.stdout.write("Read back from log file:\n")
sys.stdout.write(logstream.read())
The log file is created in the same directory as the IPython notebook file, and the output from running this cell is displayed thus:
Hello worlds!
Read back from log file:
Hello worlds!
I haven't tried this in the IPython terminal, but I see no reason it wouldn't work there as well.
(Researched and answered as part of the Oxford participation in http://aaronswartzhackathon.org)