I'm writing a bot in Scala for a game that uses text input and output, so I want to work with the process interactively - that is, my code receives output from the process, works with it, and only then sends its next input to the process. That means I need to give a function access to the process's input streams and its output stream simultaneously.
This doesn't seem to fit into any of the factories in scala.sys.process.BasicIO or the constructor for scala.sys.process.ProcessIO (three functions, each of which has access to only one stream).
Here's how I'm doing it at the moment.
import java.io.{InputStream, OutputStream, PrintWriter}
import java.util.Scanner
import scala.sys.process._

private var rogue_input: OutputStream = _
private var rogue_output: InputStream = _
private var rogue_error: InputStream = _

// Each ProcessIO callback simply captures its stream into the matching field
Process("python3 /home/robin/IdeaProjects/Rogomatic/python/rogue.py --rogomatic").run(
  new ProcessIO(rogue_input = _, rogue_output = _, rogue_error = _)
)

try {
  val rogue_scanner = new Scanner(rogue_output)
  val rogue_writer = new PrintWriter(rogue_input, true)
  // Play the game
} finally {
  rogue_input.close()
  rogue_output.close()
  rogue_error.close()
}
This works, but it doesn't feel very Scala-like. Is there a more idiomatic way to do this?
In general, this is traditionally solved by expect. There exist libraries and tools inspired by expect for various languages, including for Scala: https://github.com/Lasering/scala-expect.
The README of the project gives various examples. While I don't know exactly what your rogue.py expects in terms of stdin/stdout interactions, here's a quick "hello world" example showing how you could interact with a Python interpreter (using the Ammonite REPL, which conveniently has library-importing capabilities):
import $ivy.`work.martins.simon::scala-expect:6.0.0`
import work.martins.simon.expect.core._
import work.martins.simon.expect.core.actions._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
val timeout = 5 seconds
val e = new Expect("python3 -i -", defaultValue = "?")(
new ExpectBlock(
new StringWhen(">>> ")(
Sendln("""print("hello, world")""")
)
),
new ExpectBlock(
new RegexWhen("""(.*)\n>>> """.r)(
ReturningWithRegex(_.group(1).toString)
)
)
)
e.run(timeout).onComplete(println)
The code above "expects" >>> to be sent to stdout, and when it finds that, it sends print("hello, world") followed by a newline. From then on, it reads and returns everything until the next prompt (>>>) using a regex.
Amongst other debug information, the above should result in Success(hello, world) being printed to your console.
The library has various other styles, and there may also exist other similar libraries out there. My main point is that an expect-inspired library is likely what you're looking for.
Related
So I have some code which uses xlwings for writing data to an Excel (xlsm) file.
After I've done writing, I press a certain button to calculate.
Sometimes an error/message pops up in Excel, which is fine, but I want to catch this message in Python and write it to a log/print it later.
I also need to interact with this message, in this case to press "Ok" in the message box.
Attached image of the message box
I've been able to solve this with an external Python library.
Here is the code:
from pywinauto import application as autoWin
app = autoWin.Application()
con = app.connect(title = 'Configuration Error')
msgText = con.Dialog.Static2.texts()[0]
con.Dialog.Button.click()
con.Dialog.Button.click()
print(msgText)
Basically, what it does is connect to the app and search for the window title, in this case "Configuration Error".
It needs to click twice in order to press "Ok" and close the message.
Secondly, it gets the text from the message and can forward it wherever I want.
An important part to remember, though: because this should be an automated task, it needs to run concurrently, which means threading.
So, a simple Thread class is below:
import time
from threading import Thread, Event
from pywinauto import application as autoWin

class ButtonClicker(Thread):
    def __init__(self):
        Thread.__init__(self)
        self._stop_event = Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

    def run(self) -> None:
        while True:
            time.sleep(3)
            try:
                app = autoWin.Application()
                con = app.connect(title='Configuration Error')
                msg_data = con.Dialog.Static2.texts()[0]
                while True:
                    con.Dialog.Button.click()
                    # con.Dialog.Button.click()
                    # print(msg_data)
                    return msg_data
            except Exception:
                print("Excel didn't get stuck")
                break
And of course, to actually use it:
event_handle = ButtonClicker()
event_handle.start()
Some manipulation is needed for it to work in different codes/scenarios, but I hope this will at least help others in the future, because this seems to be a very common question.
@Danny's solution, i.e. pywinauto and Thread, works perfectly on my local machine, but it seems it can't catch the message box when Excel is running in server mode, e.g. in my case, the automation is triggered locally and started by a system service installed on the server.
pywinauto.findwindows.ElementNotFoundError:
{'title': '<my-wanted-title>', 'backend': 'win32', 'visible_only': False}
It was finally solved with another third-party Python library, pywin32, so I'm providing a backup solution here.
'''
Keep finding the message box with the specified title and clicking a button
to close it, until stopped by the main thread.
'''
import time
from threading import Thread, Event

import win32gui
import win32con


class ButtonClicker(Thread):
    def __init__(self, title: str, interval: int):
        Thread.__init__(self)
        self._title = title
        self._interval = interval
        self._stop_event = Event()

    def stop(self):
        '''Stop thread.'''
        self._stop_event.set()

    @property
    def stopped(self):
        return self._stop_event.is_set()

    def run(self):
        while not self.stopped:
            try:
                time.sleep(self._interval)
                self._close_msgbox()
            except Exception as e:
                print(e, flush=True)

    def _close_msgbox(self):
        # find the top window by title
        hwnd = win32gui.FindWindow(None, self._title)
        if not hwnd:
            return

        # find the child button
        h_btn = win32gui.FindWindowEx(hwnd, None, 'Button', None)
        if not h_btn:
            return

        # show the button text
        text = win32gui.GetWindowText(h_btn)
        print(text)

        # click the button
        win32gui.PostMessage(h_btn, win32con.WM_LBUTTONDOWN, None, None)
        time.sleep(0.2)
        win32gui.PostMessage(h_btn, win32con.WM_LBUTTONUP, None, None)
        time.sleep(0.2)


if __name__ == '__main__':
    t = ButtonClicker('Configuration Error', 3)
    t.start()
    time.sleep(10)
    t.stop()
Is it possible with Spark to "wrap" and run an external process managing its input and output?
The process is represented by a normal C/C++ application that usually runs from the command line. It accepts a plain text file as input and generates another plain text file as output. As I need to integrate the flow of this application with something bigger (still in Spark), I was wondering if there is a way to do this.
The process can easily be run in parallel (at the moment I use GNU Parallel) just by splitting its input into (for example) 10 part files, running 10 instances of it in memory, and re-joining the resulting 10 output part files into one file.
The simplest thing you can do is to write a simple wrapper which takes data from standard input, writes it to a file, executes the external program, and outputs the results to standard output. After that, all you have to do is use the pipe method:
rdd.pipe("your_wrapper")
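For illustration, here's a minimal sketch of what such a wrapper could look like in Scala (the tool path /usr/local/bin/my_tool and its input-file/output-file calling convention are my own placeholders, not part of the answer above):

import java.nio.file.Files
import scala.collection.JavaConverters._
import scala.io.Source
import scala.sys.process._

// Hypothetical wrapper: stdin -> temp input file -> external tool -> stdout
object Wrapper extends App {
  val in  = Files.createTempFile("wrapper-in-", ".txt")
  val out = Files.createTempFile("wrapper-out-", ".txt")
  // 1. Dump everything from standard input into the temp input file
  Files.write(in, Source.stdin.getLines().toSeq.asJava)
  // 2. Run the external program on it (path and arguments are placeholders)
  Seq("/usr/local/bin/my_tool", in.toString, out.toString).!
  // 3. Print the tool's output file on standard output for Spark to collect
  Files.readAllLines(out).asScala.foreach(println)
  Files.delete(in)
  Files.delete(out)
}

You'd then ship the wrapper to the executors (for example via spark-submit's --files option) so that rdd.pipe("your_wrapper") can find it.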
The only serious consideration is IO performance. If possible, it would be better to adjust the program you want to call so it can read and write data directly without going through disk.
Alternatively, you can use mapPartitions combined with process and standard IO tools to write to a local file, call your program, and read the output.
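Not tested on a real cluster, but the mapPartitions variant could look roughly like this, again assuming an RDD[String] and the same hypothetical file-in/file-out tool:

import java.nio.file.Files
import scala.collection.JavaConverters._
import scala.sys.process._

// Assumes rdd: RDD[String], and a hypothetical external tool that takes
// an input path and an output path as its two arguments.
val processed = rdd.mapPartitions { iter =>
  val in  = Files.createTempFile("part-in-", ".txt")
  val out = Files.createTempFile("part-out-", ".txt")
  Files.write(in, iter.toSeq.asJava)                            // spill this partition to local disk
  Seq("/usr/local/bin/my_tool", in.toString, out.toString).!    // run the external program
  val lines = Files.readAllLines(out).asScala.toList            // read its output back
  Files.delete(in)
  Files.delete(out)
  lines.iterator
}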
If you end up here based on the question title from a Google search, but you don't have the OP's restriction that the external program needs to read from a file--i.e., if your external program can read from stdin--here is a solution. For my use case, I needed to call an external decryption program for each input file.
import org.apache.commons.io.IOUtils
import sys.process._
import scala.collection.mutable.ArrayBuffer
val showSampleRows = true
val bfRdd = sc.binaryFiles("/some/files/*,/more/files/*")
val rdd = bfRdd.flatMap{ case(file, pds) => { // pds is a PortableDataStream
val rows = new ArrayBuffer[Array[String]]()
var errors = List[String]()
val io = new ProcessIO (
in => { // "in" is an OutputStream; write the encrypted contents of the
// input file (pds) to this stream
IOUtils.copy(pds.open(), in) // open() returns a DataInputStream
in.close
},
out => { // "out" is an InputStream; read the decrypted data off this stream.
// Even though this runs in another thread, we can write to rows, since it
// is part of the closure for this function
for(line <- scala.io.Source.fromInputStream(out).getLines) {
// ...decode line here... for my data, it was pipe-delimited
rows += line.split('|')
}
out.close
},
err => { // "err" is an InputStream; read any errors off this stream
// errors is part of the closure for this function
errors = scala.io.Source.fromInputStream(err).getLines.toList
err.close
}
)
val cmd = List("/my/decryption/program", "--decrypt")
val exitValue = cmd.run(io).exitValue // blocks until subprocess finishes
println(s"-- Results for file $file:")
if (exitValue != 0) {
// TBD write to string accumulator instead, so driver can output errors
    // string accumulator from @zero323: https://stackoverflow.com/a/31496694/215945
println(s"exit code: $exitValue")
errors.foreach(println)
} else {
// TBD, you'll probably want to move this code to the driver, otherwise
// unless you're using the shell, you won't see this output
// because it will be sent to stdout of the executor
println(s"row count: ${rows.size}")
if (showSampleRows) {
println("6 sample rows:")
rows.slice(0,6).foreach(row => println(" " + row.mkString("|")))
}
}
rows
}}
scala> :paste "test.scala"
Loading test.scala...
...
rdd: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[62] at flatMap at <console>:294
scala> rdd.count // action, causes Spark code to actually run
-- Results for file hdfs://path/to/encrypted/file1: // this file had errors
exit code: 255
ERROR: Error decrypting
my_decryption_program: Bad header data[0]
-- Results for file hdfs://path/to/encrypted/file2:
row count: 416638
sample rows:
<...first row shown here ...>
...
<...sixth row shown here ...>
...
res43: Long = 843039
References:
https://www.scala-lang.org/api/current/scala/sys/process/ProcessIO.html
https://alvinalexander.com/scala/how-to-use-closures-in-scala-fp-examples#using-closures-with-other-data-types
I'm currently creating some Gatling simulations to test a REST API. I don't really understand Scala.
I've created a scenario with several execs and pauses:
object MyScenario {
val ccData = ssv("cardcode_fr.csv").random
val nameData = ssv("name.csv").random
val mobileData = ssv("mobile.csv").random
val emailData = ssv("email.csv").random
val itemData = ssv("item_fr.csv").random
val scn = scenario("My use case")
.feed(ccData)
.feed(nameData)
.feed(mobileData)
.feed(emailData)
.feed(itemData)
.exec(
http("GetCustomer")
.get("/rest/customers/${CardCode}")
.headers(Headers.headers)
.check(
status.is(200)
)
)
.pause(3, 5)
.exec(
http("GetOffers")
.get("/rest/offers")
.queryParam("customercode", "${CardCode}")
.headers(Headers.headers)
.check(
status.is(200)
)
)
}
And I have a simple Simulation:
class MySimulation extends Simulation {
setUp(MyScenario.scn
.inject(
      constantUsersPerSec(1) during (1)))
.protocols(EsbHttpProtocol.httpProtocol)
.assertions(
global.successfulRequests.percent.is(100))
}
The application I'm trying to simulate is a multi-locale mobile app, so I've prepared a set of sample data for each locale (US, FR, IT...).
My REST API handles all the locales, therefore I want to make the simulation concurrently execute several instances of MyScenario, each with a different locale sample, to simulate the global load.
Is it possible to execute my simulation without having to create/duplicate the scenario and change the val ccData = ssv("cardcode_fr.csv").random for each one?
Also, each locale has its own load, how can I create a simulation that takes a single scenario and executes it several times concurrently with a different load and feeders?
Thanks in advance.
From what you've said, I think this may be a good approach:
Start by grouping your data in such a way that you can look up each item you want to send based on the current locale. For this, I would recommend using a Map that matches a locale string (such as "FR") to the item for that locale for the field you're looking to fill in. Then, at the start of each iteration of the scenario, you just pick which locale to use for the current iteration from a list. It would look something like this:
import scala.util.Random

val rand = new Random()
val locales = List("US", "FR", "IT")
val names = Map("US" -> "John", "FR" -> "Pierre", "IT" -> "Guillaume")

object MyScenario {
  // These two lines pick a random locale from your list
  val random_index = rand.nextInt(locales.length)
  val currentLocale = locales(random_index)
  // This line gets the name
  val name = names(currentLocale)
  // Do the rest of your logic here
}
This is a very simplified example - you'll have to figure out how you actually want to retrieve the data from files and put it into a Map structure, as I assume you don't want to hard code every item for every field into your code.
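To make that concrete, here's an untested sketch that loads one feeder per locale and, picking up the second part of the question, gives each locale its own injection profile; the per-locale file names (e.g. cardcode_us.csv) and the rates are assumptions on my part:

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val locales = List("us", "fr", "it")

// One feeder per locale; file names assumed to follow the
// "cardcode_fr.csv" pattern from the question.
val ccFeeders = locales.map(loc => loc -> ssv(s"cardcode_$loc.csv").random).toMap

// A scenario parameterised by locale, so the same definition can be
// injected several times with different feeders.
def scn(locale: String) = scenario(s"My use case ($locale)")
  .feed(ccFeeders(locale))
  .exec(
    http("GetCustomer")
      .get("/rest/customers/${CardCode}")
      .check(status.is(200))
  )

// Inside your Simulation class: each locale gets its own injection
// profile (the rates here are made up).
setUp(
  scn("us").inject(constantUsersPerSec(5) during (60 seconds)),
  scn("fr").inject(constantUsersPerSec(2) during (60 seconds)),
  scn("it").inject(constantUsersPerSec(1) during (60 seconds))
).protocols(EsbHttpProtocol.httpProtocol)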
I'm trying to parse a macro similar to this one:
annoying!({
hello({
// some stuff
});
})
I'm trying to do this with a procedural macro definition similar to the following, but I'm getting behaviour I didn't expect, and I'm not sure whether I'm doing something I'm not supposed to or I've found a bug. In the following example, I'm trying to find the line where each block is:
for the first block (the one just inside annoying!) it reports the correct line, but for the inner block, when I try to print its position, the line is always 1, no matter where the code is.
#![crate_type="dylib"]
#![feature(macro_rules, plugin_registrar)]
extern crate syntax;
extern crate rustc;
use macro_result::MacroResult;
use rustc::plugin::Registry;
use syntax::ext::base::{ExtCtxt, MacResult};
use syntax::ext::quote::rt::ToTokens;
use syntax::codemap::Span;
use syntax::ast;
use syntax::parse::tts_to_parser;
mod macro_result;
#[plugin_registrar]
pub fn plugin_registrar(registry: &mut Registry) {
registry.register_macro("annoying", macro_annoying);
}
pub fn macro_annoying(cx: &mut ExtCtxt, _: Span, tts: &[ast::TokenTree]) -> Box<MacResult> {
let mut parser = cx.new_parser_from_tts(tts);
let lo = cx.codemap().lookup_char_pos(parser.span.lo);
let hi = cx.codemap().lookup_char_pos(parser.span.hi);
println!("FIRST LO {}", lo.line); // real line for annoying! all cool
println!("FIRST HI {}", hi.line); // real line for annoying! all cool
let block_tokens = parser.parse_block().to_tokens(cx);
let mut block_parser = tts_to_parser(cx.parse_sess(), block_tokens, cx.cfg());
block_parser.bump(); // skip {
block_parser.parse_ident(); // hello
block_parser.bump(); // skip (
// block lines
let lo = cx.codemap().lookup_char_pos(block_parser.span.lo);
let hi = cx.codemap().lookup_char_pos(block_parser.span.hi);
println!("INNER LO {}", lo.line); // line 1? wtf?
println!("INNER HI {}", hi.line); // line 1? wtf?
MacroResult::new(vec![])
}
I think the problem might be the fact that I'm creating a second parser to parse the inner block, and that might be making the Span types inside it go crazy, but I'm not sure that's the problem or how to keep going from here. The reason I'm creating this second parser is so I can recursively parse what's inside each of the blocks. I might be doing something I'm not supposed to, in which case a better suggestion would be very welcome.
I believe this is #15962 (and #16472), to_tokens has a generally horrible implementation. Specifically, anything non-trivial uses ToSource, which just turns the code to a string, and then retokenises that (yes, it's not great at all!).
Until those issues are fixed, you should just handle the original tts directly as much as possible. You could approximate the right span using the .span of the parsed block (i.e. return value of parse_block), which will at least focus the user's attention on the right area.
I have to generate a big file on the fly, reading from the database and sending it to the client.
I read some documentation and did this:
val streamContent: Enumerator[Array[Byte]] = Enumerator.outputStream {
os =>
// new PrintWriter() read from database and for each record
// do some logic and write
// to outputstream
}
Ok.stream(streamContent.andThen(Enumerator.eof)).withHeaders(
CONTENT_DISPOSITION -> s"attachment; filename=someName.csv"
)
I'm rather new to Scala in general (only a week), so don't judge me by my reputation.
My questions are:
1) Is this the best way? I found that if I have a big file, this will load it in memory; also, I don't know what the chunk size is in this case, and sending a chunk for each write() is not very convenient.
2) I found the method Enumerator.fromStream(data: InputStream, chunkedSize: Int), which is a little better because it has a chunk size, but I don't have an InputStream because I'm creating the file on the fly.
There's a note in the docs for Enumerator.outputStream:
Not [sic!] that calls to write will not block, so if the iteratee that is being fed to is slow to consume the input, the OutputStream will not push back. This means it should not be used with large streams since there is a risk of running out of memory.
Whether this can happen depends on your situation. If you can and will generate gigabytes in seconds, you should probably try something different. I'm not exactly sure what, but I'd start at Enumerator.generateM() (there's a rough sketch further down). For many cases, though, your method is perfectly fine. Have a look at this example by Gaëtan Renaudeau for serving a Zip file that's generated on the fly in the same way you're using it:
import java.util.zip.{ZipEntry, ZipOutputStream}
import scala.util.Random

val r = new Random()
val enumerator = Enumerator.outputStream { os =>
  val zip = new ZipOutputStream(os)
  Range(0, 100).map { i =>
    zip.putNextEntry(new ZipEntry("test-zip/README-" + i + ".txt"))
    zip.write("Here are 100000 random numbers:\n".map(_.toByte).toArray)
    // Let's do 100 writes of 1'000 numbers
    Range(0, 100).map { j =>
      zip.write((Range(0, 1000).map(_ => r.nextLong).map(_.toString).mkString("\n")).map(_.toByte).toArray)
    }
    zip.closeEntry()
  }
  zip.close()
}
Ok.stream(enumerator >>> Enumerator.eof).withHeaders(
"Content-Type"->"application/zip",
"Content-Disposition"->"attachment; filename=test.zip"
)
Please keep in mind that Ok.stream has been replaced by Ok.chunked in newer versions of Play, in case you want to upgrade.
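As for the Enumerator.generateM() idea mentioned earlier, here's the rough, untested sketch I promised; readNextChunk is a hypothetical stand-in for your database access, not part of Play's API:

import play.api.libs.iteratee.Enumerator
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical: fetch the next chunk of CSV bytes from the database,
// returning None once every record has been written.
def readNextChunk(): Future[Option[Array[Byte]]] = ???

// generateM asks for the next chunk only when the consumer is ready to
// take it, so a slow client cannot make chunks pile up in memory.
val streamContent: Enumerator[Array[Byte]] =
  Enumerator.generateM(readNextChunk())

Ok.chunked(streamContent).withHeaders(
  CONTENT_DISPOSITION -> "attachment; filename=someName.csv"
)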
As for the chunk size, you can always use Enumeratee.grouped to gather a bunch of values and send them as one chunk.
val grouper = Enumeratee.grouped(
Traversable.take[Array[Double]](100) &>> Iteratee.consume()
)
Then you'd do something like
Ok.stream(enumerator &> grouper >>> Enumerator.eof)