Interprocess Mutex In Perl

I have a Perl CGI program that executes under mod_perl. Within the program, I would like to prevent a resource from being accessed by multiple processes at the same time.
# Semaphore initialization code
# 10023 is a unique id, and this id is the same across the different Apache processes.
# 1: only one semaphore is created in the set.
# 0722: all processes execute under the apache account, so they all get the owner's '7' permission.
my $sem = new IPC::Semaphore(10023, 1, 0722 | IPC_CREAT); # Code(1)
# Set the 0th (one and only) semaphore's value to 1, as I want to use this semaphore as a mutex.
$sem->setval(0, 1); # Code(2)
The problem is:
How can I make Code(1) create a new semaphore ONLY when a semaphore with the 10023 id has never been created before, whether by the same process or by another process?
How can I execute Code(2) ONLY the first time the semaphore with the 10023 id is created? A semaphore should be initialized only ONCE.
Another approach is to create an empty file for locking purposes. However, that ends up leaving thousands of temporary files.

Adding the IPC_EXCL flag causes the underlying semget to either create a new semaphore or fail. You can use this to get the effect you want.
This should work for you:
# Attempt to create (but not get an existing) semaphore
my $sem = IPC::Semaphore->new(10023, 1, 0722 | IPC_CREAT | IPC_EXCL);
if ($sem) {
    # Success: the semaphore was created, proceed to set its initial value.
    print "new semaphore\n";
    $sem->setval(0, 1);
}
else {
    # Obtain the existing semaphore normally.
    print "existing semaphore\n";
    $sem = IPC::Semaphore->new(10023, 1, 0722); # no IPC_CREAT here
    die "could not obtain semaphore?" unless $sem;
}

Related

Perl: Value of global variable not getting updated when changed in child

Code Snippet:
my $kill = 0;
my @array1 = ("abc", "def", "ghi");
&runSmokesAndMonitor;

sub runSmokesAndMonitor {
    foreach my $smokeTestVarDirName (@array1) {
        if (my $pid = fork()) {
            print "parent\n"; # Have some other action items as well here in parent
        }
        else {
            $kill++;
            print "Value of kill is $kill\n";
            exit 0;
        }
    }
}
Here, I am getting output:
parent
Value of kill is 1
parent
Value of kill is 1
parent
Value of kill is 1
Required/expected output (since $kill is a global variable, its value should be updated wherever a new value is assigned):
parent
Value of kill is 1
parent
Value of kill is 2
parent
Value of kill is 3
parent
Why is the output not as expected, and how can I achieve it?
A child created by fork is a new process with its own address space. Global variables are per-process only, not global per user or per system, or even global between all instances of a program running in the world. That's why changes to a global variable are only reflected in the current process.
If you need to share information between processes you need IPC (inter-process communication), i.e. things like sockets, pipes, shared memory etc. - see perlipc for more. There are ways to make sharing variables across processes easier, like IPC::Shareable.
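For example, here is a minimal sketch of the code from the question using IPC::Shareable to back $kill with shared memory; the 'klck' glue key and the shlock/shunlock calls are illustrative assumptions, not from the original post:
use strict;
use warnings;
use IPC::Shareable;

# Tie $kill to a SysV shared memory segment visible to parent and children.
tie my $kill, 'IPC::Shareable', 'klck', { create => 1, destroy => 1 };
$kill = 0;

my @array1 = ("abc", "def", "ghi");
my @pids;
foreach my $smokeTestVarDirName (@array1) {
    if (my $pid = fork()) {
        push @pids, $pid;       # parent
    } else {
        (tied $kill)->shlock;   # serialize the read-modify-write
        $kill++;
        print "Value of kill is $kill\n";
        (tied $kill)->shunlock;
        exit 0;
    }
}
waitpid($_, 0) for @pids;
print "Final value of kill in the parent: $kill\n"; # prints 3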

Perl - Net::WebSocket::Server with external periodic event

In my server program I need the ability to iterate over all open connections every 5 minutes and see which ones are really "active".
I know the best approach is to use a "heartbeat", but the server still needs some way to check whether a connection is "off" so it can delete the "user parameters" attached to that connection.
My first approach was to use the "Async" module, but that runs in a separate process, so I cannot delete any element from the main process unless I use some technique to invoke a subroutine in the main process from the child process (I don't know how; any help will be warmly welcomed).
Another possibility using Async is to create a static client that is always on (also inside the server) and keeps sending "commands" to the server, but that looks excessive to me: such a client wastes memory on the server and eats CPU time (much more, I think, than a simple event equivalent to "setTimeout" in JS).
Yet another approach is to use EV. But when I call EV::run, it will NOT RUN anything ELSE than this periodic event, meaning it never reaches the next line, where the server's ->start is.
Placing it after ->start makes the event useless too: while the server is working, the program never gets past the ->start line.
Using EV::run EV::RUN_NOWAIT; makes the server work, but then EV somehow does not work, for a reason that is strange to me (does anyone know how I can still make it work?).
I prefer not to use Net::WebSocket::EV because, per its example script, it does not do the handshake automatically, and I would have to handle many things manually (including the SSL connection I have), which would mean changing a lot in my program.
PROBLEM SUMMARY:
How to make the code in EV run every 5 minutes, together with the server?
my %CON; # Connections (and user data) hash
my $__ConChk = EV::periodic 0, 300, 0, sub {
    for my $cid (keys %CON) {
        # I give one hour to be completely offline (for different reasons)
        if ($CON{$cid}{"T"} + 3600 < time()) {
            $CON{$cid}{"C"}->disconnect();
            delete $CON{$cid};
        }
    }
};
EV::run EV::RUN_NOWAIT; # This is not working - seems to be ignored!
Net::WebSocket::Server->new(
    listen      => $ssl, # Earlier preset
    silence_max => 60,   # Time to just shut the connection off, but don't delete user data
    on_connect  => sub {
        my ($serv, $conn) = @_;
        my $cid; # Connection ID (for the hash)
        $conn->on(
            handshake => sub {
                my ($conn, $handshake) = @_;
                # Create user data in $CON{$cid}
            },
            binary => sub {
                $CON{$cid}{"T"} = time();
                # Handling of a single incoming message (command)
            },
            disconnect => sub {
                # Do NOT DELETE THE ENTRY!! Maybe the connection dropped due to instability!!
            },
        );
    },
)->start; # This will run - but it ignores EV::run - what to do?
undef $__ConChk;
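One possible alternative, if EV and the server's loop won't cooperate: Net::WebSocket::Server documents tick_period and on_tick options, which let the server itself fire a periodic callback, so no separate EV loop is needed. A sketch along those lines (untested against the original setup):
Net::WebSocket::Server->new(
    listen      => $ssl,
    silence_max => 60,
    tick_period => 300, # call on_tick every 5 minutes
    on_tick     => sub {
        for my $cid (keys %CON) {
            if ($CON{$cid}{"T"} + 3600 < time()) {
                $CON{$cid}{"C"}->disconnect();
                delete $CON{$cid};
            }
        }
    },
    on_connect  => sub {
        # ... handshake/binary/disconnect handlers as before ...
    },
)->start;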

trapping SIGABRT from perl on VMS

Given kill.pl:
$SIG{INT} = sub { print "int\n" };
$SIG{TERM} = sub { print "term\n" };
$SIG{ABRT} = sub { print "abort\n" };
print "sleeping...\n";
sleep 60;
And kill.com:
$ perl kill.pl
And launching+aborting like so:
submit /log_file=kill.log kill.com
delete /entry=XXXXXX/noconfirm
The signal handlers do not get called. Similar code works on Linux when the process is killed.
kill.log just shows:
(19:58)$ perl kill.pl
sleeping...
%JBC-F-JOBABORT, job aborted during execution
I read the vmsperl documentation and tried some things from http://perldoc.perl.org/sigtrap.html. Is there a way to do this?
Note that if I call:
@kill.com
And do a CTRL+C, SIGINT is handled by kill.pl.
I added the perl tag in case someone knows a way to tell Perl to trap every signal that might be the one I'm interested in. My attempt was:
$SIG{$_} = \&subroutine for keys(%SIG);
You're not sending a signal to the process -- you're instructing the queue manager to delete the process, which it does. I think the easiest way to do what you want is to use Perl to send the signal. Submit your job as before and use:
$ show system/batch
to find the pid of the job. You'll see something like this when the queue manager has assigned an entry of 572:
Pid Process Name State Pri I/O CPU Page flts Pages
00003EA1 BATCH_572 HIB 1 259 0 00:00:00.05 511 626 B
Send your signal like so to pid 0x3ea1, noting that the job notification indicates it completed rather than aborted:
$ perl -e "kill 'ABRT', 0x3ea1;"
$
Job KILL (queue SYS$BATCH, entry 572) completed
Look at your log file and you'll see these two lines at the end:
sleeping...
abort
Is this a VAX or Alpha system? I believe your 'delete' call may not be throwing an abort signal to your running job. It's been too long since I've used it, but I can't remember a tool that would throw a specific signal to a batch job - LIB$SIGNAL went from a process, not to it. You should try trapping the remaining signals from the 'error-signals' list in the sigtrap doc.

How to implement a file based persistent stack?

I have an API to log in to a system. It doesn't support concurrent logins with the same user id (I guess due to licensing). However, this code can be called by different processes/clients launched by different users from another system - in my case, a ClearCase trigger.
my $conn = new BuildForge::Services::Connection('ccbuildforged01', 3966);
my $token = $conn->authUser('bldforge', 'password');
I have two choices.
The $token returned can be shared by different clients. So how can I persist this $token?
I have 10 licenses, so I can create 10 users. How can I create a file-based persistent stack for all clients to share these user ids?
I googled a bit and found this:
A single, simple file and a lock seem to be all you need. You push by lock, append, unlock. You pop by lock, seek, read, truncate, unlock.
Can someone give me a code sample?
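A rough sketch of that push/pop protocol, assuming a pre-existing stack.txt with one user id per line (the file name and sub names are illustrative, not from the post):
use strict;
use warnings;
use Fcntl qw(:flock SEEK_SET SEEK_END);

# Push: lock, append, unlock (closing the handle releases the lock).
sub push_id {
    my ($id) = @_;
    open my $fh, '+<', 'stack.txt' or die "open: $!";
    flock($fh, LOCK_EX) or die "flock: $!";
    seek($fh, 0, SEEK_END);
    print {$fh} "$id\n";
    close $fh;
}

# Pop: lock, read, truncate, write back all but the last id, unlock.
sub pop_id {
    open my $fh, '+<', 'stack.txt' or die "open: $!";
    flock($fh, LOCK_EX) or die "flock: $!";
    chomp(my @ids = <$fh>);
    my $id = pop @ids; # undef if the stack is empty
    seek($fh, 0, SEEK_SET);
    truncate($fh, 0) or die "truncate: $!";
    print {$fh} "$_\n" for @ids;
    close $fh;
    return $id;
}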
I would maintain ten files (say 1.conf through 10.conf) with the user information.
To get an available user id, look for a .conf file with no corresponding .pid file (e.g. 1.pid). If you find one, try to get an exclusive lock on the file, and then create a corresponding .pid file with an exclusive lock on it. (If any of these fail, look for another file.)
When you are done, release the lock on the .conf file, then release the lock and delete the .pid file.
If you want to avoid a possible race condition, you could have a queue.lock file that you try to lock exclusively before looking for an available user id. If it's already locked, wait for the lock to be released.
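A sketch of that search, using the naming above; the LOCK_NB flag is an assumption here so that flock fails immediately instead of blocking when an id is taken:
use strict;
use warnings;
use Fcntl qw(:flock);

# Try each user id in turn; return the first one we can claim.
sub grab_user_id {
    for my $n (1 .. 10) {
        open my $conf, '<', "$n.conf" or next;
        unless (flock($conf, LOCK_EX | LOCK_NB)) {
            close $conf;
            next; # someone else holds this id
        }
        open my $pid, '>', "$n.pid" or do { close $conf; next };
        flock($pid, LOCK_EX | LOCK_NB) or do { close $pid; close $conf; next };
        print {$pid} $$;
        return ($n, $conf, $pid); # keep both handles open to hold the locks
    }
    return; # no free user id right now
}

# When done: release the .conf lock, then release and delete the .pid file.
sub release_user_id {
    my ($n, $conf, $pid) = @_;
    close $conf;
    close $pid;
    unlink "$n.pid";
}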
Why do we need the extra .pid file? Isn't a lock on the .conf file enough?
Using the following code, if I start two instances of this program at the same time, the second one waits for the first to unlock, then locks the first file, id01.txt. It waits instead of moving on. How can I make it go to the next file if one is locked?
use FileHandle;
use Fcntl qw(:flock);

for (my $count = 1; $count <= 8; $count++) {
    if (open SELF, "<", "id0$count.txt") {
        if (flock(SELF, LOCK_EX)) {
            # Exclusive lock
            print "Locked id0$count.txt...\n";
            sleep(10);
            close SELF;
        } else {
            next;
        }
    } else {
        next;
    }
    print "Unlocked id0$count.txt...\n";
}
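Adding LOCK_NB to the flock call makes it return false immediately instead of waiting, so the loop can skip ids that are already locked. A minimal variant of the code above:
use Fcntl qw(:flock);

for my $count (1 .. 8) {
    open(my $fh, '<', "id0$count.txt") or next;
    if (flock($fh, LOCK_EX | LOCK_NB)) {
        print "Locked id0$count.txt...\n";
        sleep(10);
        close $fh; # closing the handle releases the lock
        print "Unlocked id0$count.txt...\n";
    } else {
        close $fh; # already locked elsewhere: try the next id
    }
}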

How do I call a perl process that is already running from another script?

Problem:
scriptA.cgi is sitting in an infinite loop and handling an open socket to a Flash client.
scriptB.cgi is called from the web, does what it needs to do and then needs to inform scriptA to send a message to the client.
Is this possible? I'm stuck on how to have scriptB identify the instance of scriptA that is sitting there with the socket connection, rather than launching one of its own.
All thoughts appreciated.
If the communication needs are simple, this is a good application for signals.
Edited to store process id from scriptA and read it in scriptB -- scripts A and B have to agree on a name.
# script B
do_scriptB_job();
if (open(my $PID_FILE, "<", "scriptA.pid.file")) {
    chomp(my $process_id_for_scriptA = <$PID_FILE>);
    close $PID_FILE;
    kill 'USR1', $process_id_for_scriptA; # makes scriptA run the SIGUSR1 handler
}

# script A
open(my $PID_FILE, ">", "scriptA.pid.file");
print $PID_FILE $$;
close $PID_FILE;

my $signaled = 0;
$SIG{"USR1"} = sub { $signaled = 1 }; # simple SIGUSR1 handler: set a variable

while ( in_infinite_loop() ) {
    if ($signaled) {
        # this block runs only if SIGUSR1 was received
        # since the last time this block was run
        send_a_message_to_the_client();
        $signaled = 0;
    } else {
        do_something_else();
    }
}
unlink "scriptA.pid.file"; # cleanup
When script A receives a SIGUSR1 signal, the script will be interrupted to run the USR1 signal handler, setting $signaled. The thread of execution will then resume and the script can use the information.
Have scriptA store its pid somewhere (in a db with some kind of id); then scriptB can look up the pid in the db and send a signal to scriptA.
Edit:
Answering question asked in comment
The pid of the current process can be found in Perl's builtin variable $$ (also available as $PID or $PROCESS_ID if you use the English module).
See perlvar for details.
I hope this is the ID you were looking for. If not, you'll have to find a way to tell the different scriptA instances apart (perhaps by session id or socket; there I can't help you further).
Other people have mentioned how to get the PID (if you didn't fork() the process yourself, have it write its pid somewhere that both processes know how to find; you could also walk the process table, but that's a horrible solution and completely unscalable beyond a singleton).
Since you note that any thoughts are welcome, note that perldoc perlipc explains a variety of mechanisms you might use for the actual communication:
NAME
perlipc - Perl interprocess communication (signals, fifos, pipes, safe
subprocesses, sockets, and semaphores)
DESCRIPTION
The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and SysV
IPC calls. Each is used in slightly different situations.
Domain sockets: http://www.perl.com/doc/FMTEYEWTK/IPC/unix.html
I was tempted to answer 'send signals' or 'use some kind of IPC to talk between apps', but a far easier and more scalable approach is to use an SQLite (or other) database that all the scripts can talk to.
ScriptA.cgi would poll the database by doing something like 'SELECT event FROM events WHERE clientID=?'.
ScriptB.cgi would simply insert a row into the events table with the right clientID.
That avoids all of the 'find the pid' mess, and it also means you don't get the blocking I/O problems you would have with named pipes, or lose messages if one script crashed.
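A rough sketch of that scheme with DBD::SQLite; the events table layout and helper names are assumptions for illustration:
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=events.db", "", "",
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do("CREATE TABLE IF NOT EXISTS events (clientID TEXT, event TEXT)");

# scriptB.cgi: queue a message for a client.
sub queue_event {
    my ($client_id, $event) = @_;
    $dbh->do("INSERT INTO events (clientID, event) VALUES (?, ?)",
             undef, $client_id, $event);
}

# scriptA.cgi: inside its loop, drain any pending events for its client.
sub pending_events {
    my ($client_id) = @_;
    my $rows = $dbh->selectcol_arrayref(
        "SELECT event FROM events WHERE clientID = ?", undef, $client_id);
    $dbh->do("DELETE FROM events WHERE clientID = ?", undef, $client_id);
    return @$rows;
}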