Read serial port: how to interrupt a blocking read operation - Perl

I have a Perl app that launches a thread for serial communication...
I am using the Device::SerialPort module:
while ($bRun)
{
    print "Waiting for serial port data...";
    my ($count, $msg) = $SerialPort->read(255); # will read _up to_ 255 chars
    if ($count > 0)
    {
        # Process
    }
    #sleep(1);
}
But it must be possible to stop that thread even during a read operation.
I already have a method that sets $bRun to false, but how do I unblock the thread if it is blocked inside read()?
Thanks a lot
Alex

http://search.cpan.org/dist/Device-SerialPort/SerialPort.pm isn't very clear on how to do non-blocking I/O. It sounds like it uses Perl's select internally to see if characters are available.
I haven't written a multi-threaded Perl program before. Can you send a signal to another thread, which would interrupt a system call?
If you just want to stop the thread, not get it to break out of a read and do something else, that should be easier. Just kill it normally.
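One practical workaround worth sketching (not part of the original answer): Device::SerialPort can bound how long read() blocks via its read_const_time/read_char_time settings, so the loop wakes up periodically and re-checks $bRun instead of needing to be interrupted. The device path and the 500 ms timeout below are assumptions for illustration.
use strict;
use warnings;
use Device::SerialPort;

# Assumed device path; adjust for your hardware
my $SerialPort = Device::SerialPort->new('/dev/ttyS0')
    or die "Cannot open serial port: $!";
$SerialPort->read_char_time(0);     # no inter-character timeout
$SerialPort->read_const_time(500);  # read() gives up after ~500 ms

my $bRun = 1;
while ($bRun)
{
    my ($count, $msg) = $SerialPort->read(255);  # no longer blocks indefinitely
    if ($count > 0)
    {
        # Process $msg
    }
    # Timed out with no data: fall through and re-check $bRun
}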

Related

Tcl/Tk - How to keep other buttons usable while a separate function is still running?

I'm very new to Tcl/Tk and have been dealing with an issue for the last couple of days. Basically I have a server written in C and a client GUI written in Tcl/Tk. So far it doesn't do a ton. To test it, I start up the server so that it's listening for connections, then run my GUI. When I click one of the buttons, the GUI should open up a separate toplevel window with a text widget embedded in it. (This part works.) Then, my client connects to the server and gives it a couple of settings, and through this the server decides what info to send back. The server's response is what gets printed to that second window's text widget.
What I'm trying to add in now is a Stop button. Right now, my server is set up to wait a couple of seconds, then write the same message to the client. This is set up inside a loop that is waiting to hear a "Stop" command from my client. I have a Stop button in the GUI with a command set up to write that command to the server when clicked. However, all of my buttons get frozen as soon as I hit the begin button and messages are written to the client.
Basically, how can I keep allowing my server to write to my client while still keeping the rest of my GUI usable? I want my client to write a new line to the text widget on my separate window whenever it receives a new message from the server, but I still want the main GUI window that has all my command buttons to behave independently.
In general, it depends on whether what you are doing is CPU-intensive (where reading from a plain file counts as CPU-intensive) or I/O-intensive (where running things in another process counts as I/O-intensive; database calls often count as CPU-intensive here despite not really needing to). I'm only going to summarize what's going on, as you aren't quite providing enough information.
For I/O-intensive code, you want to structure your code to be event-driven. Tcl has good tools for this, in that fileevent works nicely on sockets, terminals and pipelines on all supported platforms. The coroutine system of Tcl 8.6 can help a lot with preventing the callbacks required from turning your code into a tangled mess!
For CPU-intensive code, the main option is to run in another thread. That thread won't be able to touch the GUI directly (which in turn will be free to be responsive), but will be able to do all the work and send messages back to the main thread with whatever UI updates it wants done. (Technically you can do this with I/O-intensive code too, but it's more irritating than using a coroutine.) Farming things out to a subprocess is just another variation on this where the communications are more expensive (but much isolation is enforced by the OS).
If you're dealing with sockets, you're probably I/O-intensive. Assume that until you show otherwise. Here's a simple example:
proc gets_async {sock} {
    fileevent $sock readable [info coroutine]
    while {[gets $sock data] < 0 && [fblocked $sock]} {
        yield
    }
    fileevent $sock readable {}
    return $data
}
proc handler {socket} {
    set n 0
    while {![eof $socket]} {
        # Write to the server
        puts $socket "this is message [incr n] to the server"
        # Read from the server
        puts [gets_async $socket]
    }
    close $socket
}
proc launchCommunications {host port} {
    set sock [socket $host $port]
    fconfigure $sock -blocking 0 -buffering line -encoding utf-8
    coroutine comms($host:$port) handler $sock
}
Note that gets_async is much like coroutine::util::gets in Tcllib.

Using `chan pending output` instead of writable fileevent

Yo, I've written a server with a simple protocol: the client sends a line, the server sends a line back in response, repeat. To prevent a client from filling Tcl's output buffer by sending lots of lines but not accepting data back, can I just check chan pending output instead of using the writable fileevent?
proc respond {stream msg} {
    if {[chan pending output $stream] <= 1024} {
        puts $stream $msg
    } else {
        #close $stream
    }
}
For output, chan pending output will correctly describe the number of bytes waiting in the output queue. Normally, that value will be bounded by the -buffersize value that you chan configure (or fconfigure) it to have.
That value will only be exceeded when the channel is non-blocking; with a blocking channel, when the value would go over it, instead there's a blocking write to the underlying device (socket, pipe, file, serial line, whatever) so by the time you could see that it went over, it's back under the limit again.
But if you're using non-blocking channels, you really should use chan event (or fileevent). Luckily, for the actual writes Tcl already does this for you automatically; the single most useful thing you could want from a writable event is built in. In practice, the most common use of a writable event is detecting when an async socket connection becomes ready for service.
So what you are doing will work, but you'll have to think carefully about what to do if the output buffer is “getting full”; the idea that a message can need to be delayed is a place where a simple abstraction tends to become leaky. With 8.6's coroutines, you could (probably) do a transparent suspend or something like that, but getting that sort of thing right can take a little thought. (For example, a GUI client might need to show a busy indicator and put things into a state where the user can't enter more requests.)

trapping SIGABRT from perl on VMS

Given kill.pl:
$SIG{INT} = sub { print "int\n" };
$SIG{TERM} = sub { print "term\n" };
$SIG{ABRT} = sub { print "abort\n" };
print "sleeping...\n";
sleep 60;
And kill.com:
$ perl kill.pl
And launching and aborting it like so:
submit /log_file=kill.log kill.com
delete /entry=XXXXXX/noconfirm
The signal handlers do not get called. Similar code works on Linux when the process is killed.
kill.log just shows:
(19:58)$ perl kill.pl
sleeping...
%JBC-F-JOBABORT, job aborted during execution
I read the vmsperl documentation and tried some things from http://perldoc.perl.org/sigtrap.html. Is there a way to do this?
Note that if I run:
@kill.com
and press CTRL+C, SIGINT is handled by kill.pl.
I added the perl tag in case someone knows if there is a way to tell perl to trap every signal which might be the one I'm interested in. My attempt was:
$SIG{$_} = \&subroutine for keys(%SIG);
You're not sending a signal to the process -- you're instructing the queue manager to delete the process, which it does. I think the easiest way to do what you want is to use Perl to send the signal. Submit your job as before and use:
$ show system/batch
to find the pid of the job. You'll see something like this when the queue manager has assigned an entry of 572:
Pid Process Name State Pri I/O CPU Page flts Pages
00003EA1 BATCH_572 HIB 1 259 0 00:00:00.05 511 626 B
Send your signal like so to pid 0x3ea1, noting that the job notification indicates it completed rather than aborted:
$ perl -e "kill 'ABRT', 0x3ea1;"
$
Job KILL (queue SYS$BATCH, entry 572) completed
Look at your log file and you'll see these two lines at the end:
sleeping...
abort
Is this a VAX or an Alpha system? I believe your 'delete' call may not be throwing an abort signal to your running job. It's been too long since I've used it, but I can't remember a tool that would throw a specific signal to a batch job - LIB$SIGNAL went from a process, not to it. You should try trapping the remaining signals from the 'error-signals' list in the sigtrap doc.
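As a sketch of that "trap the rest of the error signals" suggestion, the sigtrap pragma has a predefined error-signals list that can be routed to a single handler; the handler name below is just for illustration.
# kill.pl variant: route sigtrap's predefined "error-signals" list
# (ABRT, BUS, EMT, FPE, ILL, QUIT, SEGV, SYS, TRAP) to one handler
use sigtrap 'handler' => \&on_fatal_signal, 'error-signals';

sub on_fatal_signal {
    my ($sig) = @_;
    print "caught SIG$sig\n";
}

print "sleeping...\n";
sleep 60;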

Perl IPC - FIFO and daemons & CPU Usage

I have a mail parser Perl script which is called every time a mail arrives for a user (using .qmail). It extracts a calendar attachment out of the mail and places the "path" of the file in a FIFO queue implemented using the Directory::Queue module.
Another Perl script, which reads the path of the calendar attachment and performs certain file operations on the local system as well as on the remote CalDAV server, is run as a daemon, as explained here. So basically this script looks like:
my $declarations

sub foo {
    .
    .
}

sub bar {
    .
    .
}

while ($keep_running) {
    for (keep-checking-the-queue-for-new-entries) {
        sub caldav_logic1 {
            .
            .
        }
        sub caldav_logic2 {
            .
            .
        }
    }
}
I am using Proc::Daemon to run the script as a daemon. The problem is that this process sits at almost 100% CPU usage. What are the suggested ways to implement the daemon in a more standard, safer way? I am using pretty much the same code as in the linked Proc::Daemon example.
I bet it is your for loop constantly checking for new queue entries.
There are ways to watch a directory for file changes. These ways are OS dependent but there might be a Perl module that wraps them up for you. Use that instead of busy looping. Even with a sleep delay, the looping is inefficient when you can have your program told exactly when to wake up by an OS event.
File::ChangeNotify looks promising.
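A minimal sketch of that idea, assuming the queue lives in a directory such as /var/spool/calendar-queue and that process_queue_entry() stands in for your existing handling code:
use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['/var/spool/calendar-queue'],   # assumed queue directory
);

while (1) {
    # wait_for_events() blocks until the directory changes, so the CPU stays idle
    for my $event ($watcher->wait_for_events) {
        process_queue_entry($event->path);           # hypothetical handler
    }
}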
Maybe you don't want truly continuous polling. Is keep-checking-the-queue-for-new-entries a CPU-intensive part of the code, even when the queue is empty? That would explain why your processor is always busy.
Try putting a sleep 1 statement at the very top (or very bottom) of the while loop to let the processor rest between queue checks. If that doesn't degrade the program performance too much (i.e., if everyone can tolerate waiting an extra second before the company calendars get updated) and if the CPU usage still seems high, try sleep 2, sleep 5, etc.
cpan Linux::Inotify2
The kernel knows when files change and notifies your program, which then runs the sub. This should be better because the sub runs only when a file actually changes.
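Along the same lines, a rough sketch with Linux::Inotify2 (Linux-only; the directory path and the process_queue_entry() handler are assumptions):
use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "Unable to create inotify object: $!";

# Fire the callback whenever a new file shows up in the queue directory
$inotify->watch('/var/spool/calendar-queue', IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    process_queue_entry($event->fullname);   # hypothetical handler
});

# poll() blocks until at least one event arrives, then runs the callbacks
1 while $inotify->poll;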

How do I call a perl process that is already running from another script?

Problem:
scriptA.cgi is sitting in an infinite loop and handling an open socket to a Flash client.
scriptB.cgi is called from the web, does what it needs to do and then needs to inform scriptA to send a message to the client.
Is this possible? I'm stuck on how to have scriptB identify the instance of scriptA that is sitting there with the socket connection, rather than launching one of its own.
all thoughts appreciated.
If the communication needs are simple, this is a good application for signals.
Edited to have scriptA store its process id and have scriptB read it -- scripts A and B have to agree on the file name.
# script B
do_scriptB_job();
if (open(my $PID_FILE, "<", "scriptA.pid.file")) {
    my $process_id_for_scriptA = <$PID_FILE>;
    close $PID_FILE;
    kill 'USR1', $process_id_for_scriptA;   # makes scriptA run the SIGUSR1 handler
}
# script A
open(my $PID_FILE, ">", "scriptA.pid.file");
print $PID_FILE $$;
close $PID_FILE;
my $signaled = 0;
$SIG{"USR1"} = sub { $signaled = 1 };   # simple SIGUSR1 handler: set a flag
while ( in_infinite_loop ) {
    if ($signaled) {
        # this block runs only if SIGUSR1 was received
        # since the last time this block was run
        send_a_message_to_the_client();
        $signaled = 0;
    } else {
        do_something_else();
    }
}
unlink "scriptA.pid.file";   # cleanup
When script A receives a SIGUSR1 signal, the script will be interrupted to run the USR1 signal handler, setting $signaled. The thread of execution will then resume and the script can use the information.
Have scriptA store its pid somewhere (in a db with some kind of id), then scriptB can look up the pid in the db and send a signal to scriptA.
Edit:
Answering the question asked in a comment:
The pid of the process can be found in Perl's built-in variable $$ (also available as $PID or $PROCESS_ID via the English module).
See perlvar for details.
I hope this is the ID you were looking for. If not, you'll have to find a way to tell the different scriptA instances apart (perhaps by session id or socket; here I can't help you further).
Other people have mentioned how to get the PID: if you didn't fork() it yourself, just have the other process write it somewhere both processes know about, or walk the process table (a horrible solution that doesn't scale beyond a singleton).
Since you note that any thoughts are welcome, note that perldoc perlipc explains a variety of mechanisms you might use for the actual communication:
NAME
perlipc - Perl interprocess communication (signals, fifos, pipes, safe
subprocesses, sockets, and semaphores)
DESCRIPTION
The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and SysV
IPC calls. Each is used in slightly different situations.
Domain sockets: http://www.perl.com/doc/FMTEYEWTK/IPC/unix.html
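For example, a Unix-domain socket gives scriptB a fixed rendezvous point with no PID bookkeeping at all. A rough sketch, under the assumption that a one-line "notify" request is enough; the socket path, the message, and the send_a_message_to_the_client() call are illustrative:
# scriptA side: listen on a well-known socket path
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;

my $path = '/tmp/scriptA.sock';   # assumed rendezvous path
unlink $path;
my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $path,
    Listen => 5,
) or die "Cannot listen on $path: $!";

while (my $peer = $listener->accept) {
    my $request = <$peer>;
    send_a_message_to_the_client() if defined $request && $request =~ /^notify/;
    close $peer;
}

# scriptB side: connect and send the one-line request
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;

my $sock = IO::Socket::UNIX->new(
    Type => SOCK_STREAM,
    Peer => '/tmp/scriptA.sock',
) or die "scriptA is not listening: $!";
print $sock "notify\n";
close $sock;
In the real scriptA you would multiplex this listener with the existing Flash socket (for example with IO::Select) rather than blocking in accept.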
I was tempted to answer "send signals" or "use some kind of IPC to talk between apps", but a far easier and more scalable approach is to use an SQLite (or other) database that all the scripts can talk to.
ScriptA.cgi would poll the database by doing something like 'SELECT event FROM events WHERE clientID=?'.
ScriptB.cgi would simply insert a row into the events table with the right clientID.
That avoids all of the "find the pid" mess and also means you don't get the blocking I/O problems you would have with named pipes, or the trouble that follows if one script crashes.
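A hedged sketch of that table-based hand-off with DBI and DBD::SQLite; the database file, the events schema (including an id column), and send_to_flash_client() are all assumptions:
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=events.db", "", "",
                       { RaiseError => 1, AutoCommit => 1 });
my $client_id = 42;   # whatever identifies the Flash client (assumed)

# scriptB.cgi: queue an event for the client that scriptA is serving
$dbh->do("INSERT INTO events (clientID, event) VALUES (?, ?)",
         undef, $client_id, "refresh");

# scriptA.cgi: inside its existing loop, pick up and clear pending events
my $rows = $dbh->selectall_arrayref(
    "SELECT id, event FROM events WHERE clientID = ?", undef, $client_id);
for my $row (@$rows) {
    my ($id, $event) = @$row;
    send_to_flash_client($event);                     # hypothetical
    $dbh->do("DELETE FROM events WHERE id = ?", undef, $id);
}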