I have a perl script which calls another perl script using backticks. I want to instead call this script and have it daemonize. How do I go about doing this?
edit:
I don't care to communicate back with the process/daemon. I'll most likely just stick the results in an SQLite table or something.
You refer to backticks, so I suppose you want to communicate with the daemon after it's started? Since daemons do not use STDOUT, you will have to think of some other way of passing information to and from them.
The Perl interprocess communication man page (perlipc) has several good examples of this, especially the section "Complete dissociation of child from parent".
The Proc::Daemon module contains convenient functions for daemonizing a process.
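For example, a minimal sketch using Proc::Daemon's ability to exec another program in the detached child (the worker-script path is a placeholder):

use Proc::Daemon;

# Fork, detach from the terminal, and exec the worker script in the child
my $daemon = Proc::Daemon->new(
    work_dir     => '/',
    exec_command => '/path/to/worker.pl',   # hypothetical script to daemonize
);
my $pid = $daemon->Init;   # parent gets the child's PID and can record it
print "started daemon as PID $pid\n";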
I have a Perl script that has (as of now) 10 subs, and growing.
Each sub makes a different LWP call and works with a variable I set in the first sub.
As each sub takes some time, I'm looking for a smart way to make the script run faster.
What would you recommend?
Should I:
Put each sub into a separate script?
Call the subs (except the first) all at once?
Use a different solution (that I don't know of)?
Thanks in advance!
EDIT:
The script is fetching some XML and SOAP data from the web, then extracting the necessary parts.
You probably want to check out the LWP::Parallel module.
ParallelUserAgent is an extension to the existing libwww module. It allows you to take a list of URLs (it currently supports HTTP, FTP, and FILE URLs; HTTPS might work, too) and connect to all of them in parallel, then wait for the results to come in.
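A rough sketch of how that might look (the @urls list and the timeout value are illustrative):

use LWP::Parallel::UserAgent;
use HTTP::Request;

my $pua = LWP::Parallel::UserAgent->new;
$pua->timeout(10);                            # per-connection timeout in seconds

# Register one request per URL; wait() fetches them concurrently
for my $url (@urls) {
    $pua->register(HTTP::Request->new(GET => $url));
}
my $entries = $pua->wait;                     # blocks until all requests finish
for my $entry (values %$entries) {
    my $res = $entry->response;
    print $res->request->url, ' => ', $res->code, "\n";
}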
You don't have to split up the file if you don't want to. Just take a list of things to run on STDIN, then execute your script multiple times, putting each process in the background, or use cron if it's an ongoing thing.
I would recommend using the operating system's processes until you have a good reason not to do so anymore.
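A minimal sketch of that approach using fork (process_url is a hypothetical stand-in for the real work):

my @pids;
for my $url (@urls) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {            # child: do one unit of work, then exit
        process_url($url);
        exit 0;
    }
    push @pids, $pid;           # parent: remember the child
}
waitpid($_, 0) for @pids;       # wait for all children to finish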
I've had success using Coro and there is already a module to do LWP requests.
http://metacpan.org/pod/LWP::Protocol::Coro::http
Coro provides cooperative routines, so it is not multithreaded. There is also an AnyEvent variation.
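Roughly (assuming @urls holds the endpoints to fetch):

use Coro;
use LWP::Protocol::Coro::http;   # makes LWP yield instead of block on I/O
use LWP::UserAgent;

my @coros = map {
    my $url = $_;
    async {
        my $res = LWP::UserAgent->new->get($url);
        print "$url => ", $res->code, "\n";
    };
} @urls;
$_->join for @coros;             # wait for every cooperative thread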
I am new to Perl and I have the following problem.
I have some log output, and I have found where it comes from: the subroutine in some module that prints it.
In Java, via Eclipse, I would use e.g. the Call Hierarchy view and other utilities to see how/when/who calls the method, and figure out how to reproduce what I need and debug.
How can I do this in Perl? Via grep? If I grep for the module name, I get hundreds of lines, ranging from use A, require A, C::B::A, B::A, C::B::A::some_routine, C::B::A::some_other_routine, etc.
On top of this, I am worried that the routine I am interested in is not called directly, but that some script runs the module in a manner that is obscure to me (due to my ignorance of Perl).
So how would I go about debugging something in Perl in the most efficient way? What do you Perl gurus suggest I do to become more efficient?
Run the program under the Perl debugger:
perl -d scriptname arguments...
Set a breakpoint in the function you care about, and when the program stops at the breakpoint use the T debugger command to display a stack trace, which will show where the function was called from.
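A typical session might look like this (the sub name is illustrative):

perl -d myscript.pl args
  DB<1> b My::Module::print_log    # breakpoint on the sub that emits the output
  DB<2> c                          # continue until the breakpoint is hit
  DB<3> T                          # print a stack trace of how we got here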
From your comments, I'm not sure this actually addresses what you're looking for. Maybe what you want is a cross-reference of the Perl application? See the FAQ How do I cross-reference my Perl programs?
Most of the time, getting a stack trace (along with some debugging info) is a good start. One can use the standard Carp module to generate stack traces:
use Carp;
print_to_log(Carp::longmess("We're here"));
Or there's an object-oriented module for that as well, such as Devel::StackTrace.
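For instance (a minimal sketch, reusing the hypothetical print_to_log from above):

use Devel::StackTrace;

my $trace = Devel::StackTrace->new;   # captures the call stack at this point
print_to_log($trace->as_string);      # one frame per line, similar to longmess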
To get a dump of the call stack without modifying any code, you can use the perl command line to run your program under Carp::Always:
perl -MCarp::Always my_program.pl
Should one really use external commands while coding in Perl? I see several disadvantages: it's not system independent, and there may be security risks as well. What do you think? If there is no alternative and you have to use shell commands from Perl, what is the safest way to execute that particular command (like checking the PID, UID, etc.)?
It depends on how hard it is going to be to replicate the functionality in Perl. If I needed to run the m4 macro processor on something, I'd not think of trying to replicate that functionality in Perl myself, and since there's no module on http://search.cpan.org/ that looks suitable, it would appear others agree with me. In that case, then, using the external program is sensible. On the other hand, if I needed to read the contents of a directory, then the combination of readdir() et al plus stat() or lstat() inside Perl is more sensible than futzing with the output of ls.
If you need to execute commands, think very carefully about how you invoke them. In particular, you probably want to avoid the shell interpreting the arguments, so use the array form of system (see also exec), etc, rather than a single string for the command plus arguments (which means the shell is used to process the command line).
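For instance (filename and pattern are illustrative):

# Risky: the shell parses this string, so metacharacters in $file are live
system("grep -c foo $file");

# Safer: the list form bypasses the shell; arguments reach grep verbatim
system('grep', '-c', 'foo', $file) == 0
    or die "grep failed: $?";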
Executing external commands can be expensive simply because it involves forking a new process and watching its output if you need it.
Probably more importantly, should the external process fail for any reason, it may be difficult for your script to work out what happened. Worse still, surprisingly often an external process gets stuck forever, and so does your script. You can use special tricks like opening a pipe and watching for output in a loop, but this is itself error-prone.
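One such trick, sketched here under the assumption that a 10-second timeout is acceptable (the command path is a placeholder):

my $output = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 10;                                  # don't wait forever for the child
    open(my $pipe, '-|', '/usr/bin/somecmd', 'arg') or die "open: $!\n";
    local $/;                                  # slurp the whole output
    my $out = <$pipe>;
    close($pipe) or die "command failed: $?\n";
    alarm 0;
    $out;
};
die "external command problem: $@" if $@;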
Perl is very capable of doing many things. So if you stick to using only native Perl constructs and modules to accomplish your tasks, not only will it be faster because you never fork, it will also be more reliable and easier to catch errors, by inspecting the native Perl objects and structures returned by library routines. And of course, it will be automatically portable to different platforms.
If your script runs with elevated permissions (like root or under sudo), you should be very careful about which external programs you execute. One simple way to ensure basic security is to always specify commands by full path, like /usr/bin/grep (but still think twice, and just do the grep in Perl itself!). However, even this may not be enough if an attacker is using the LD_PRELOAD mechanism to inject rogue shared libraries.
If you are willing to go very secure, it is suggested to enable taint checking with the -T flag, like this:
#!/usr/bin/perl -T
The taint flag will also be enabled automatically by Perl if your script is determined to have different real and effective user or group IDs.
Taint mode will severely limit your ability to do many things (like calling system()) without Perl complaining; see more at http://perldoc.perl.org/perlsec.html#Taint-mode. But it will give you much higher confidence in your security.
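Under -T, anything that reaches the outside world must first be untainted, typically by extracting it through a regex you trust (the pattern below is only an example):

#!/usr/bin/perl -T
use strict;
use warnings;

$ENV{PATH} = '/usr/bin:/bin';              # taint mode insists on a clean PATH
my ($file) = $ARGV[0] =~ m{^([\w./-]+)\z}
    or die "unsafe file name\n";           # the regex capture untaints the value
system('/usr/bin/grep', 'foo', $file) == 0
    or die "grep failed: $?\n";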
Should one really use external commands while coding in Perl?
There's no single answer to this question. It all depends on what you are doing within the wide range of potential uses of Perl.
Are you using Perl as a glorified shell script on your local machine, or just trying to find a quick-and-dirty solution to your problem? In that case, it makes a lot of sense to run system commands if that is the easiest way to accomplish your task. Security and speed are not that important; what matters is the ability to code quickly.
On the other hand, are you writing a production program? In that case, you want secure, portable, efficient code. It is often preferable to write the functionality in Perl (or use a module), rather than calling an external program. At least, you should think hard about the benefits and drawbacks.
In Perl, how do I test if a file is open by another Perl program? I need the first program to be done before the second program can start.
FILES DO NOT WORK THAT WAY!
But seriously, folks, advisory locks with flock are generally the best you can do. There's no way to guarantee that no other program wants to read or write a file at the same time as you.
You may be able to coordinate the two programs using flock: The first program would lock the file, and the second program would also try to acquire a lock on it, and it would block until the first program releases the lock.
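A minimal sketch (both programs must agree on the same file to lock; the name here is illustrative):

use Fcntl qw(:flock);

# Program 1: hold an exclusive lock for the duration of its work
open(my $fh, '<', 'shared.dat') or die "open: $!";
flock($fh, LOCK_EX)             or die "flock: $!";
# ... process the file ...
close($fh);                     # closing the handle releases the lock

# Program 2: this flock blocks until program 1 has finished
open(my $fh2, '<', 'shared.dat') or die "open: $!";
flock($fh2, LOCK_EX)             or die "flock: $!";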
Is flock() available on your system? Otherwise, the two programs have to be synchronized; they can communicate through a pipe or a socket, or via the presence/absence of a file.
Another direction, if you are on a Unix-like system, could be to use lsof output.
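For example (hedged on lsof(8)'s documented behavior of exiting non-zero when nothing is listed):

# lsof exits 0 if at least one process has the file open, non-zero otherwise
system('lsof', '/path/to/shared.dat');
my $in_use = ($? >> 8) == 0;
print $in_use ? "file is in use\n" : "file is free\n";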
I assume having the first program start the second one is not feasible.
In my experience, flock works fine on local systems on both Windows and Linux.
You could also, presumably, have the first program exec the second program when it's done processing the file.
If you are running on Windows, you could call CreateFile directly with a dwShareMode of 0.
According to MSDN:
Prevents other processes from opening a file or device if they request delete, read, or write access.
Win32API::File gives access to this call.
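A sketch of that check (assuming an empty share string in Win32API::File's createFile maps to dwShareMode 0; the filename is illustrative):

use Win32API::File qw(createFile fileLastError CloseHandle);

# Empty share string = no sharing: the open fails if any other process
# already has the file open
my $h = createFile('data.txt', 'r', '')
    or die 'file busy or inaccessible: ' . fileLastError();
CloseHandle($h);   # we only wanted to know whether the open succeeds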
To specifically look for whether or not a file is open or in use, if you're on Unix there is a wrapper around the lsof command to list open files: Unix::Lsof.
If you're on Unix, you could also call fuser.
I might be misunderstanding the context, and my comment may have limited usability in your case, but depending on what you are using the code for/on, using serial queues to ensure that tasks execute in a predictable order may be an option. Your application (written in Perl) will need to explicitly create and manage the serial queues. For more details, refer to the following link: GCD
It is generally advised not to use additional Linux tools in Perl code; e.g. if someone wants to print the last line of a text file, they can do:
$last_line = `tail -1 $file`;
or otherwise, open the file and read it line by line:
open(my $info, '<', $file) or die "Can't open $file: $!";
while (<$info>) {
    $last_line = $_ if eof;
}
What are the pitfalls of using the previous and why should I avoid using shell tools in my code?
Thanks!
Efficiency - you don't have to spawn a new process
Portability - you don't have to worry about an executable not existing, accepting different switches, or having different output
Ease of use - you don't have to parse the output, the results are already in a usable form
Error handling - you have finer-grained control over errors and what to do about them in Perl.
It's better to keep all the action in Perl because it's faster and because it's more secure. It's faster because you're not spawning a new process, and it's more secure because you don't have to worry about shell metacharacter trickery.
For example, in your first case, if $file contained "afilename ; rm -rf ~", you would be a very unhappy camper.
P.S. The best all-Perl way to do the tail is to use File::ReadBackwards.
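Something like this (a minimal sketch):

use File::ReadBackwards;

my $bw = File::ReadBackwards->new($file)
    or die "can't read $file: $!";
my $last_line = $bw->readline;   # lines come back in reverse, so this is the last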
One of the primary reasons (besides portability) for not executing shell commands is that it introduces overhead by spawning another process. That's why much of the same functionality is available via CPAN in Perl modules.
One reason is that your Perl code might be running in an environment where there is no shell tool called 'tail'.
It's a personal call depending on the project:
Is it going to be always used in shell environments with tail?
Do you care about only using pure Perl code?
Using tail? Fine. But that's really a special case, since it's so easy to use and since it is so trivial.
The problem in general is not really efficiency or portability; that is largely irrelevant. The issue is ease of use. To run an external utility, you have to find out what arguments it accepts, write code to transform your program's data structures into that format, quote them properly, build the command line, and run the application. Then, you might have to feed it data and read data back from it (involving complexity like an event loop, worrying about deadlocking, etc.), and finally interpret its return value. (UNIX processes treat an exit status of 0 as success and anything else as failure, but Perl treats 0 as false, so system("foo") and die reads backwards.) This is a lot of work to do, and that's why people avoid it. It's much easier to create an instance of a class and call methods on it to get the data you need.
(You can abstract away processes this way; see Crypt::GpgME for an example. It handles the complexity associated with invoking gpg, which would normally involve creating multiple filehandles beyond STDOUT, STDIN, and STDERR, among other things.)
The main reason I see for doing it all in Perl would be for robustness. Your use of tail will fail if the filename has shell metacharacters or spaces or doesn't exist or isn't accessible. From Perl, characters in the filename aren't an issue, and you can distinguish between errors in accessing the file. Sometimes being robust is more important than speedy coding and sometimes it's not.