Reactive tasks sometimes appear in the IOI programming competition. Unlike batch tasks, a reactive solution reads input from another program and writes output back to it. The program typically 'queries' the judge program a certain number of times, then outputs a final answer.
An example
The client program accepts lines one by one and simply echoes each one back. When it encounters a line containing "done", it exits immediately.
The client program in Java looks like this:
import java.util.*;

class Main {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String s;
        while (!(s = in.nextLine()).equals("done"))
            System.out.println(s);
    }
}
The judge program provides the input and processes the output from the client program. In this example, it feeds the client a predefined input and checks whether the client has echoed it back correctly.
A session might go like this:
Judge      Client
------------------
Hello
           Hello
World
           World
done
I'm having trouble writing the judge program and having it judge the client program. I'd appreciate it if someone could write a judge program for my example.
You get programs to talk to each other via the command prompt.
On Windows, you'd write:
java judge | java client
So it's piping the output of judge to the input of client.
That is to say, as long as judge is writing to the standard output stream (which it will) and client is reading from the standard input stream (which yours is) then it will work.
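A pipe only carries data one way, though, so if the judge must also read the client's output in order to verify it, one option is to have the judge launch the client itself and hold both ends of its standard streams. Here is a minimal sketch in Java for your echo example; it assumes your client is compiled as the Main class above and is available on the classpath:

import java.io.*;

class Judge {
    public static void main(String[] args) throws IOException {
        // Lines the judge feeds to the client, one at a time
        String[] tests = { "Hello", "World" };

        // Launch the client so the judge holds both its stdin and its stdout
        Process client = new ProcessBuilder("java", "Main").start();
        PrintWriter toClient = new PrintWriter(
                new OutputStreamWriter(client.getOutputStream()), true);
        BufferedReader fromClient = new BufferedReader(
                new InputStreamReader(client.getInputStream()));

        for (String expected : tests) {
            toClient.println(expected);            // feed one line (autoflushed)
            String echoed = fromClient.readLine(); // wait for the echo
            if (!expected.equals(echoed)) {
                System.out.println("WRONG: sent \"" + expected
                        + "\", got \"" + echoed + "\"");
                client.destroy();
                return;
            }
        }

        toClient.println("done"); // tell the client to exit
        System.out.println("OK");
    }
}

With this approach you run only java Judge; the judge starts the client by itself, so no shell piping is needed.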
Is it possible to tail call eBPF programs that use different modes?
For example, if I wrote a program that does printk("hello world") using a kprobe,
would I be able to tail call an XDP program afterwards, or vice versa?
I wrote an eBPF program that uses a socket buffer, and it seems like when I try to tail call another program that uses a kprobe, the program doesn't load.
I wanted to tail call a program that uses XDP_PASS after one using BPF.SOCKET_FILTER mode, but the tail call doesn't seem to work.
I've been trying to figure this out, but I can't find any documentation about tail calling programs that use different modes :P
Thanks in advance!
No, it is not.
Have a look at kernel commit 04fd61ab36ec, which introduced tail calls: the first piece of code in that commit (in the internal kernel header bpf.h) defines struct bpf_array with an owner_prog_type member, and explains it in the following comment:
/* 'ownership' of prog_array is claimed by the first program that
* is going to use this map or by the first program which FD is stored
* in the map to make sure that all callers and callees have the same
* prog_type and JITed flag
*/
So once the program type associated with a BPF program array (used for tail calls) has been defined, it is not possible to use that array with other program types. This makes sense: different program types work with different contexts (packet data vs. traced function context vs. ...), can use different helpers, have return values with different meanings, and require different checks from the verifier, so it is hard to see how jumping from one type to another could work. How could you start by processing a network packet and all of a sudden jump to a piece of code that is supposed to trace some internals of the kernel? :)
Note that it is also impossible to mix JIT-ed and non-JIT-ed programs, as indicated by the owner_jited member of the struct.
Is it possible to catch signals received (specifically SIGSEGV, SIGABRT) by child processes of a program without actually modifying it (or with minimal modification)?
The program I'm talking about is a pretty complex tool whose low-level implementation details I don't know, although I do have access to its source code. I can start it using a command like:
$ ./tool_name start # tool_name is an executable created after compiling and building its source code
It forks many child processes and I want to see if those child processes are being killed by a signal or not.
What I have thought about is to create a simple C program that calls the above command (using system()), write a signal handler for the signals I'm looking for, and do other stuff. Is this the right way to keep track of signals received by the child processes? Is there a better way to do it?
I'm writing some Scala code that needs to make use of an external command line program for string translation. The external program takes many minutes to start up, then listens for data on stdin (terminated by newline), converts the data, and prints the converted data to stdout (again terminated by newline). It will remain alive until it receives a SIGINT.
For simplicity, let's assume the external command runs like this:
$ convert
input1
output1
input2
output2
$
convert, input1, and input2 were all typed by me; output1 and output2 were written by the program to stdout. I typed Control-C at the end to return to the shell.
In my Scala code, I'd like to start up this external program, and keep it running in the background (because it is costly to startup, but cheap to keep running once it's initialized), while providing three methods to the rest of my program with an API like:
def initTranslation(): Unit
def translate(input: String): String
def stopTranslation(): Unit
initTranslation should start up the external program and keep it running in the background.
translate should put the input argument on the stdin of the external program (followed by newline), wait for output (followed by newline), and then return the output.
stopTranslation should send SIGINT to the external program.
I've worked with Java and Scala external process management before, but I don't have much experience with Java pipes and am not 100% sure how to hook this all up. In particular, I've read that there are subtle gotchas with regard to deadlocks when I/O pipes get hooked up in situations similar to this. I'm sure I'll need some Thread to start up and watch over the background process in initTranslation, some piping to send a String to stdin followed by blocking to wait for data and a newline on stdout in translate, and then some sort of termination of the external program in stopTranslation.
I'd like to achieve this with as much pure Scala as possible, though I realize that this may require some bits of the Java I/O library. I also do not want to use any third party Scala or Java libraries (anything outside java.*, javax.* or scala.*)
What would these three methods look like?
It turns out that this is quite a bit easier than I first expected. I had been misled by various posts and recommendations (off SO) which had suggested that this would be more complex.
Caveats to this solution:
All Java. Yes, I know I mentioned that I'd rather use the Scala standard library, but this is sufficiently succinct that I think it warrants an answer.
Limited error handling - among other things, if the external program explodes and reports errors to stderr, I'm not handling that. Certainly, that could be added on later.
Usage of var for storage of local variables. Clearly, var is frowned upon for best-practice Scala use, but this example illustrates the object state needed, and you can structure your variables in your own programs as you like.
No thread-safety. If you need thread-safety, because multiple threads might call any of the following methods, use some synchronization constructs (like the synchronized keyword in the translate method) to protect yourself.
Solution:
import java.io.BufferedReader
import java.io.InputStreamReader
import java.lang.Process
import java.lang.ProcessBuilder
var process: Process = _
var outputReader: BufferedReader = _
def initTranslation(): Unit = {
process = new ProcessBuilder("convert").start()
outputReader = new BufferedReader(new InputStreamReader(process.getInputStream()))
}
def translate(input: String): String = {
// write input to external program
process.getOutputStream.write(input.getBytes)
process.getOutputStream.write(System.lineSeparator.getBytes)
process.getOutputStream.flush()
// wait for input from program
outputReader.readLine()
}
def stopTranslation(): Unit = {
process.destroy()
}
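If it helps to map the Scala onto the underlying Java APIs, here is the same wiring as a plain-Java sketch, under the same assumptions (a long-lived convert command with line-oriented stdin and stdout); the Translator class name is just for illustration:

import java.io.*;

class Translator {
    private Process process;
    private PrintWriter toProcess;
    private BufferedReader fromProcess;

    void initTranslation() throws IOException {
        process = new ProcessBuilder("convert").start();
        // Writer with autoflush, so each line actually reaches the child's stdin
        toProcess = new PrintWriter(
                new OutputStreamWriter(process.getOutputStream()), true);
        fromProcess = new BufferedReader(
                new InputStreamReader(process.getInputStream()));
    }

    String translate(String input) throws IOException {
        toProcess.println(input);      // send one line; flushing avoids the classic pipe deadlock
        return fromProcess.readLine(); // block until the converted line comes back
    }

    void stopTranslation() {
        process.destroy(); // on Unix this typically delivers SIGTERM rather than SIGINT
    }
}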
OUTPUT TO "logfile.txt".
FOR EACH ...:
...
PUT "Some log data". OUTPUT CLOSE. OUTPUT TO "logfile.txt" APPEND.
...
END.
I haven't found an appropriate statement to save the file at a given point. I don't want to use UNBUFFERED APPEND because it is supposedly slower. Maybe there are built-in logging tools? Maybe STREAMS could help me? The problem with my solution is that I have to specify the log filename each time I open it with the OUTPUT TO statement; a nested procedure may not have a clue about the filename.
The question as it stands is still ambiguous.
If you want a way to route the output through a standard "service" similar to what LOG-MANAGER does, you can do that by using
static members of a class,
by using an API in a persistent procedure and PUBLISHing to it,
by using an API in a session super-procedure and calling its API
STREAMS will give you a way to segregate output for a single procedure or class to a single file and keep that output from getting mingled with the production output. However, this is limited to the current program, which means it's not a general solution for an application-wide logging facility.
There is no "save" option.
However... you can force output to be flushed with:
put control null(0).
"Supposedly slower" is awfully vague. Yes, there is potentially more IO with unbuffered output. But whether or not that really matters depends heavily on what you are doing and how it will be used. It is very unlikely that it actually matters.
A STREAM would certainly help to keep things organized and make it so that you don't have to know the name of the file in nested procedures.
Yes, there are built in logging tools. Look at the LOG-MANAGER system handle.
The code in the question would be better written as:
define stream logStream.
output stream logStream to value( "log.txt" ) append unbuffered.
for each customer no-lock:
put stream logStream custName skip.
/* put stream logStream control null(0). */ /* if you want to try fooling with buffered output... */
end.
output stream logStream close.
Is there any way to pause/resume the work of an embedded Python interpreter at the point where I need it to? For example:
C++ pseudo-code part:
main()
{
script = "python_script.py";
...
RunScript(script); //-- python script runs till the command 'stop'
while(true)
{
//... read values from some variables in python-script
//... do some work ...
//... write new value to some other variables in python-script
ResumeScript(script); //-- python script resumes its work where
// it was stopped, not from the beginning!
}
...
}
Python script pseudo-code part:
#... do some init-work
while true:
#... do some work
stop # - here script stops and C++-function RunScript()
# returns control to C++-part
#... After calling C++-function ResumeScript
# the work continues from this line
Is this possible to do with Python/C API?
Thanks
I too have recently been searching for a way to manually "drive" an embedded language, and I came across this question and figured I'd share a potential workaround.
I would implement the "blocking" behavior through either a socket or some kind of messaging system. Instead of actually stopping the whole Python interpreter, just have it block while it waits for C++ to do its evaluations.
C++ will start the embedded runtime, then enter a loop of some sort that waits for Python to "throw the signal" that it's ready. For instance: C++ listens on port 5000 and starts Python; Python does its work and then connects to port 5000 on localhost; C++ sees the connection, grabs the data from Python, performs work on it, and shuffles the data back over the socket to Python, where Python receives it and leaves its blocking loop.
I still need a way to fully pause the virtual runtime, but in your case you could achieve the same thing with a socket and some blocking behavior that uses the socket to coordinate the two pieces of code.
Good luck :)
EDIT: You may be able to use the "injection" functionality from this answer to completely stop Python; just modify it to inject a wait-loop, perhaps:
Stopping embedded Python