I have a Perl script which listens on a port and filters messages and, based on them, proposes to take action or to ignore the event.
I'd like it to show a notification window (not a dialogue window) with 'take action' and 'ignore' buttons, which would disappear after a certain timeout.
So far I have something like this:
my @react = ("somecommand", "someoptions"); # based on what regex the message matched
my $cmd = "xmessage";
my $cmd_args = "-print -timeout 7 -buttons React,Dismiss $message"; # raw message from port
open XMSG, "$cmd $cmd_args |";
while (<XMSG>) {
    if ($_ eq "React\n") {
        # do something...
    }
}
But this handles only one notification at a time: the next message would not appear until the previous one is dismissed, reacted to, or timed out, so it's quite a bad solution. I cannot do anything until I get the return code from xmessage, and I can't make xmessage run a command itself. Well, I probably could if I introduced event IDs and listened on a socket where xmessage prints, but that would make things too complicated, I guess.
So I wonder: is there a library or a utility for Linux to draw notification-like windows with buttons, each of which triggers a command?
I'm sorry I didn't see this one when it was first posted. There are several GUI toolkits which could do something along these lines. Prima is a toolkit built especially for Perl and has no external library dependencies.
For when you just need a popup dialog, there is the Ask module which delegates the task of popping up windows to any available library.
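If you want to stick with xmessage itself, another option is to fork a child per notification, so the listener never blocks while a window is on screen. This is only a sketch built around the flags from your example; how the reaction command is chosen is up to you:

use strict;
use warnings;
use POSIX ":sys_wait_h";

# Reap finished notification children so they don't pile up as zombies.
$SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };

sub notify {
    my ($message, @react_cmd) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    return if $pid;    # parent goes straight back to listening on the port

    # Child: show the notification and wait for the user's choice.
    open my $xmsg, "-|", "xmessage", "-print", "-timeout", "7",
        "-buttons", "React,Dismiss", $message
        or die "cannot run xmessage: $!";
    my $choice = <$xmsg> // '';
    close $xmsg;
    chomp $choice;
    exec @react_cmd if $choice eq 'React' and @react_cmd;
    exit 0;
}

Each notification lives in its own process, so several of them can be on screen at once.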
In case anyone's interested, I ended up writing a small Tcl/Tk program for that. The full code (all 48 lines) can be found here: http://cloudcabin.org/read/twobutton_notify; you can ignore the text in Russian around it.
I'm very new to Tcl/Tk and have been dealing with an issue for the last couple of days. Basically I have a server written in C and a client GUI written in Tcl/Tk. So far it doesn't do a ton. To test it, I start up the server so that it's listening for connections, then run my GUI. When I click one of the buttons, the GUI should open up a separate toplevel window with a text widget embedded in it. (This part works.) Then, my client connects to the server and gives it a couple of settings, and through this the server decides what info to send back. The server's response is what gets printed to that second window's text widget.
What I'm trying to add in now is a Stop button. Right now, my server is set up to wait a couple of seconds, then write the same message to the client. This is set up inside a loop that is waiting to hear a "Stop" command from my client. I have a Stop button in the GUI with a command set up to write that command to the server when clicked. However, all of my buttons get frozen as soon as I hit the begin button and messages are written to the client.
Basically, how can I keep allowing my server to write to my client while still keeping the rest of my GUI usable? I want my client to write a new line to the text widget on my separate window whenever it receives a new message from the server, but I still want the main GUI window that has all my command buttons to behave independently.
In general, it depends on whether what you are doing is CPU-intensive (where reading from a plain file counts as CPU-intensive) or I/O-intensive (where running things in another process counts as I/O-intensive; database calls often count as CPU-intensive here despite not really needing to be). I'm only going to give summaries of what's going on, as you aren't quite providing enough information.
For I/O-intensive code, you want to structure your code to be event-driven. Tcl has good tools for this, in that fileevent works nicely on sockets, terminals and pipelines on all supported platforms. The coroutine system of Tcl 8.6 can help a lot with preventing the callbacks required from turning your code into a tangled mess!
For CPU-intensive code, the main option is to run in another thread. That thread won't be able to touch the GUI directly (which in turn will be free to be responsive), but will be able to do all the work and send messages back to the main thread with whatever UI updates it wants done. (Technically you can do this with I/O-intensive code too, but it's more irritating than using a coroutine.) Farming things out to a subprocess is just another variation on this where the communications are more expensive (but much isolation is enforced by the OS).
If you're dealing with sockets, you're probably I/O-intensive. Assume that until you show otherwise. Here's a simple example:
proc gets_async {sock} {
    # Suspend this coroutine until a complete line is available.
    fileevent $sock readable [info coroutine]
    while {[gets $sock data] < 0 && [fblocked $sock]} {
        yield
    }
    fileevent $sock readable {}
    return $data
}
proc handler {socket} {
    set n 0
    while {![eof $socket]} {
        # Write to the server
        puts $socket "this is message [incr n] to the server"
        # Read from the server
        puts [gets_async $socket]
    }
    close $socket
}
proc launchCommunications {host port} {
    set sock [socket $host $port]
    fconfigure $sock -blocking 0 -buffering line -encoding utf-8
    coroutine comms($host:$port) handler $sock
}
Note that gets_async is much like coroutine::util gets in Tcllib.
To get started: I am inexperienced at scripting in Perl and at using GTK, but I've been googling and researching how to for the past two or so weeks. It was difficult just figuring out where I could find the Perl modules (.pm files) for Gtk2 on Windows, and even more so getting them to some semblance of 'working'. However, there have of course still been problems.
Skipping the above, I have two problems. For a bit of relevant background, I am trying to port a mIRC script over to XChat, but to do that I obviously need to learn a whole other language... but anyway. The two problems are these:
The window consists of several buttons, labels, and text areas. However, the window is 'frozen' unless one clicks and holds on the title bar. Clicking a button does nothing, not even showing that it has been clicked, unless of course the title bar is being clicked and held.
I have no idea how to initialize multiple instances of the same window. I have of course tried researching, but either it's not out there or I just haven't found it yet. To be specific: my mIRC script requires that multiple instances be allowed to exist, but I need the buttons of a specific instance to affect only that instance, and so on.
With regard to problem 1, I do not know whether the Glade .xml file is important, so I won't post it immediately. I will, however, post the code that calls it:
my $glade_file = "window3.xml";
my $glade = Gtk2::Builder->new();
$glade->add_from_file($glade_file);

sub charopen {
    my $window = $glade->get_object('window1');
    $glade->connect_signals(undef, $window);
    my $hp_cur = $glade->get_object('HP_Cur');
    $window->set("title" => $_[0][1]);
    $hp_cur->set("label" => $ini->val($_[0][1], "HPC"));
    $window->show();
}
Graphical interface design relies on event processing. To work properly, it is important to reserve a thread to process user events (keyboard, mouse clicks, ...). That is the purpose of calling Gtk2->main() when the user interface is ready to accept user interaction.
To make the event thread exit the event loop, an event callback may invoke Gtk2->main_quit().
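For problem 1, a minimal skeleton (assuming the top-level window in your .xml is still called window1) looks like this:

use strict;
use warnings;
use Gtk2 '-init';

my $builder = Gtk2::Builder->new();
$builder->add_from_file('window3.xml');

my $window = $builder->get_object('window1');
# Leave the event loop when the window is closed.
$window->signal_connect(destroy => sub { Gtk2->main_quit });
$window->show();

# Hand control to the event loop; without this the window appears frozen.
Gtk2->main();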
Gtk2::Builder creates a Gtk widget hierarchy from the XML. To get multiple instances of the same window, you have to create a builder for each one.
Then your event callback methods have to find out which window sent the event; the $user_data parameter can be used for that purpose.
Here is a code proposal with a simple button-click callback which uses a Perl hash reference, so you can pass as much information as you want between the window-creation code and the event callbacks:
sub createWindow($) {
    my $windowTitle = $_[0];

    my $windowBuilder = Gtk2::Builder->new();
    $windowBuilder->add_from_file($glade_file);
    my $window = $windowBuilder->get_object('window1');
    my $hp_cur = $windowBuilder->get_object('HP_Cur');

    # Create hash with data (alternative: use Class::Struct for better code)
    my %window_user_data = (
        "title"  => $windowTitle,
        "window" => $window,
        "hp_cur" => $hp_cur,
    );

    # Pass hash reference as user data
    $windowBuilder->connect_signals(\%window_user_data);

    # prepare interface: set data model into view and then...
    $window->show();
}
# Click callback method defined on a button in the window
sub button_click_callback($$) {
    my $button = $_[0];
    my $window_user_data_ref = $_[1];

    # get back data model from view
    print "Click received from button on "
        . $window_user_data_ref->{"title"} . "\n";
}
There is another way to handle callbacks per window, but it requires more Perl skills: you can design a Perl package that creates an object instance for a specific window, and use $windowbuilder->connect_signals($user_data, $windowcallbackinstance). In that case such an object is called a controller, and you have built your graphical interface on the Model-View-Controller (MVC) pattern.
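A rough sketch of that controller approach could look like the following (MyWindowController and the handler name are invented for the example; the exact arguments your handler receives are worth checking against the Gtk2::Builder documentation):

package MyWindowController;
use strict;
use warnings;
use Gtk2;

sub new {
    my ($class, $glade_file, $title) = @_;
    my $self = bless { title => $title }, $class;

    # One builder per window instance, as described above.
    my $builder = Gtk2::Builder->new();
    $builder->add_from_file($glade_file);
    $self->{window} = $builder->get_object('window1');
    $self->{hp_cur} = $builder->get_object('HP_Cur');

    # Handlers named in the Glade file are dispatched to this instance.
    $builder->connect_signals(undef, $self);

    $self->{window}->set(title => $title);
    $self->{window}->show();
    return $self;
}

# Handler name must match the one declared in the Glade file.
sub button_click_callback {
    my ($self, $button) = @_;
    print "Click received from button on ", $self->{title}, "\n";
}

package main;

# One controller (and one builder) per window instance.
my $first  = MyWindowController->new('window3.xml', 'First character');
my $second = MyWindowController->new('window3.xml', 'Second character');
Gtk2->main();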
This is a weird one. :)
I have a script running under Apache 1.3, with the Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly accessed script on a busy server, accessed over HTTPS.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However we started getting support requests this week from our customers who were accessing this script and getting blank pages. We already had a line like the following that put the current URL into a form we use for customers to report an issue about a page...
$cgi->url(-query => 1);
And when we view source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
Which also references CGI.pm params not being cleared?
CGI.pm registers a cleanup handler in order to clean up all of its cache (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In pre-1.31 mod_perl, post_connection and register_cleanup appear to push onto the handler stack, while in 1.31 it appears that the GTopLimit one clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has grown too large, then CGI.pm won't be cleaned up, leaving it open to returning the same parameters the next time you use it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
Which explicitly pushes the handler onto the list.
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume it has appeared over the past few weeks because we have increased the size of the Apache processes through new developments which pulled in something that wasn't loaded before.
All tests so far point to this being the issue, so fingers crossed it is!
I have a mail parser perl script which is called every time a mail arrives for a user (using .qmail). It extracts a calendar attachment out of the mail and places the "path" of the file in a FIFO queue implemented using the Directory::Queue module.
Another Perl script, which reads the path of the calendar attachment and performs certain file operations on the local system as well as on the remote CalDAV server, is run as a daemon, as explained here. So basically this script looks like:
my $declarations

sub foo {
    .
    .
}

sub bar {
    .
    .
}

while ($keep_running) {
    for (keep-checking-the-queue-for-new-entries) {
        sub caldav_logic1 {
            .
            .
        }
        sub caldav_logic2 {
            .
            .
        }
    }
}
I am using Proc::Daemon to run the script as a daemon. Now the problem is that this process has almost 100% CPU usage. What are the suggested ways to implement the daemon in a more standard, safer way? I am using pretty much the same code as in the link mentioned above for Proc::Daemon usage.
I bet it is your for loop, busy-checking for new queue entries.
There are ways to watch a directory for file changes. These ways are OS dependent but there might be a Perl module that wraps them up for you. Use that instead of busy looping. Even with a sleep delay, the looping is inefficient when you can have your program told exactly when to wake up by an OS event.
File::ChangeNotify looks promising.
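A minimal sketch with File::ChangeNotify (the directory and the filename filter are placeholders; point them at wherever Directory::Queue writes):

use strict;
use warnings;
use File::ChangeNotify;

# Watch the queue directory instead of polling it in a tight loop.
my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['/var/spool/calendar-queue'],   # hypothetical path
    filter      => qr/\.ics$/,                      # hypothetical pattern
);

# wait_for_events blocks until something changes, so the daemon sleeps in between.
while ( my @events = $watcher->wait_for_events() ) {
    for my $event (@events) {
        next unless $event->type eq 'create';
        process_attachment( $event->path );   # stand-in for your CalDAV logic
    }
}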
Maybe you don't want truly continuous polling. Is keep-checking-the-queue-for-new-entries a CPU-intensive part of the code, even when the queue is empty? That would explain why your processor is always busy.
Try putting a sleep 1 statement at the very top (or very bottom) of the while loop to let the processor rest between queue checks. If that doesn't degrade the program performance too much (i.e., if everyone can tolerate waiting an extra second before the company calendars get updated) and if the CPU usage still seems high, try sleep 2, sleep 5, etc.
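For example (get_next_queue_entry and handle_calendar_file stand in for your existing queue-reading and CalDAV code):

while ($keep_running) {
    sleep 1;   # let the processor rest; try 2 or 5 if the delay is tolerable
    while ( my $path = get_next_queue_entry() ) {
        handle_calendar_file($path);
    }
}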
cpan Linux::Inotify2
The kernel knows when files change and sends this information to your program, which then runs the sub. This may be better because the program will run the sub only when a file has changed.
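Something along these lines, with placeholder names for the directory and the handling sub:

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# Run the callback when a new queue file appears or finishes being written.
$inotify->watch(
    '/var/spool/calendar-queue',   # hypothetical queue directory
    IN_CLOSE_WRITE | IN_MOVED_TO,
    sub {
        my $event = shift;
        process_attachment( $event->fullname );   # your existing sub
    },
);

# Blocks until events arrive, then dispatches the callbacks.
1 while $inotify->poll;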
I have a locked down "kiosk" terminal server.
This terminal server has a perl script as its .Xsession, and launches a Tk interface. When that Tk interface is done, the perl script launches "process2" and lets the user interact with "process2" (which is a graphical application).
If a user tampers with "process2" and makes it crash, the user might be able to access the underlying desktop, so I want to check whether "process2" is running, and if it is not running on $display, I want to just execute logout (which logs out the display the Perl script is currently running on).
Since the system is running 10 instances of "process2" for 10 different users simultaneously, I can't just check whether "process2" is running on the system with "ps" or something like that. I need to check whether "process2" is running under that specific display $display.
Note that all 10 users log on with the same username in all sessions, so I cannot check all processes run by a specific user; that would return all 10 instances too.
Like:
system("process2 &");
while(1) {
sleep(1);
if (is_it_running("process2", $display) == false) {
system("logout &");
}
}
It's the function "is_it_running" that I need to figure out how to write.
$display can either contain the raw display number, like this: ":1.0", or it can contain the display number parsed out, like this: "1".
If you use fork and exec instead of system("...&"), you can store the Process IDs of your child processes and more directly check their status. See also perlipc.
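A sketch of that idea, with the display handling kept deliberately simple:

use strict;
use warnings;
use POSIX ":sys_wait_h";

my $display = ":1.0";   # or however you already determine the session's display

# Start process2 ourselves so we know its PID.
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    $ENV{DISPLAY} = $display;
    exec "process2" or die "exec failed: $!";
}

# Parent: poll the child; when it exits (or crashes), log the session out.
while (1) {
    sleep 1;
    if (waitpid($pid, WNOHANG) == $pid) {
        system("logout");
        last;
    }
}

Because you hold the PID of your own child, there is no need to grep ps output or to tell the ten sessions apart.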
Why not just run process2 in the foreground? Then your perl script won't get control back until it's done executing, at which point it can exit:
system("process2");
system("logout");
Of course, if that's the entire script, maybe a bash script would make more sense.
I solved it after many attempts.
I did a piped open:
$pidofcall = open(HANDLE, "process2|");
Then I did whatever I needed to do, and I made the server send me a signal if it loses the connection with process2. If I needed to bail out, I simply did a "goto killprocess;". Then I simply had:
killprocess:
kill(9,$pidofcall);
close(HANDLE);
$mw->destroy;
system("logout");