I usually think of UNIX pipes as a quick and dirty way to interact with the console, doing things such as:
ls | grep '\.pdf$' #list all pdfs
I understand that it's possible to create named pipes using mkfifo and mknod.
Are named pipes still used significantly today, or are they a relic of the past?
They are still used, although you might not notice. As a first-class file-system object provided by the operating system, a named pipe can be used for interprocess communication by any program that can read and write files, regardless of the language it is written in.
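As a minimal sketch (the FIFO path here is made up for illustration), two otherwise unrelated Perl processes could talk through one like this:
# writer.pl - creates the FIFO if needed and sends one line into it
use strict;
use warnings;
use POSIX qw(mkfifo);

my $fifo = '/tmp/demo_fifo';                       # hypothetical path
-p $fifo or mkfifo($fifo, 0600) or die "mkfifo $fifo: $!";
open my $out, '>', $fifo or die "open $fifo: $!";  # blocks until a reader opens the other end
print {$out} "hello over the pipe\n";
close $out;

# reader.pl - could just as well be cat(1) or a C program
open my $in, '<', '/tmp/demo_fifo' or die "open: $!";
print while <$in>;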
Specific to bash (and other shells), process substitution can be expressed with named pipes, and on some platforms that is how it is actually implemented. The following
command < <( some_other_command )
is roughly identical to
mkfifo named_pipe
some_other_command > named_pipe &
command < named_pipe
and so the explicit named-pipe form is useful for things like POSIX-compliant shell code, which does not recognize process substitution.
And it works in the other direction: command > >( some_other_command ) is
mkfifo named_pipe
some_other_command < named_pipe &
command > named_pipe
pianobar, the command-line Pandora Radio player, uses a named pipe to provide a mechanism for arbitrary control input. If you want to control pianobar with another app, such as PianoKeys, you set it up to write command strings to the FIFO file, which pianobar watches for incoming commands.
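For instance, something along these lines would skip to the next song from Perl (this assumes pianobar's default FIFO location and default keybindings, which may differ on your setup):
# Assumes the control FIFO is at ~/.config/pianobar/ctl and that
# 'n' is still bound to "next song"; adjust for your configuration.
my $ctl = "$ENV{HOME}/.config/pianobar/ctl";
open my $fh, '>', $ctl or die "can't open $ctl: $!";
print {$fh} "n";
close $fh;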
I wrote a MAMP stack app in college that we used to manage users in an LDAP database on a file server. The PHP scripts would write commands to a FIFO that a shell script running in launchd would read from and interact with the LDAP database. I had no idea what I was doing, so I don’t know if there was a better or more correct way to do it, but it worked great at the time.
Named pipes are useful where a program expects a path to a file as an argument rather than being willing to read from stdin or write to stdout. Modern versions of bash can get around this with process substitution, <( foo ), but it is still sometimes useful to have a file-like object that is usable as a pipe.
In a C program I have a file descriptor (actual example: 14, but of course the program doesn't know this in advance) open for writing. I wish to call system(3) to run something and send its standard output to this file descriptor. Of course, this calls /bin/sh, which is the Bourne shell, which doesn't recognize constructs of the form 1>&14. Is there an alternative syntax (perhaps using braces or something) which I can use to let the Bourne shell see the 14 and use it? I could of course do one of these:
Do a fork/exec combo instead of system(3) and redirect by hand.
Redirect the output to a file and then copy the data therefrom to file descriptor 14.
Since I have root, have /bin/sh point to Bash.
The most elegant way would be to discover a syntax by which the Bourne shell will accept a multiple-digit file descriptor number. Is there such?
Consider my favorite rules about optimization: (1) Do not optimize. (2) For experts: do not optimize yet.
The situation arising in the question will (for me) arise in code which will be executed at most 100 times a day. So this will suffice:
char big_string[200];

/* Let bash, rather than /bin/sh, parse the multi-digit redirection. */
sprintf(big_string, "bash -c \"somethingsomething 1>&%d\"", the_fd);
system(big_string);
Is there any way to capture the text on the terminal screen inside a Perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd / (or ls), and after that I run my Perl script, which should read what was written on the terminal screen (in this case, the script would capture cd / or ls, whichever was given to the terminal). One solution I came up with is to pass the commands typed in the terminal as command-line arguments to the script, but is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.
Should one really use external commands while coding in Perl? I see several disadvantages: it's not system-independent, and there may be security risks as well. What do you think? If there is no way around it and you have to use shell commands from Perl, what is the safest way to execute that particular command (checking the pid, uid, etc.)?
It depends on how hard it is going to be to replicate the functionality in Perl. If I needed to run the m4 macro processor on something, I'd not think of trying to replicate that functionality in Perl myself, and since there's no module on http://search.cpan.org/ that looks suitable, it would appear others agree with me. In that case, then, using the external program is sensible. On the other hand, if I needed to read the contents of a directory, then the combination of readdir() et al plus stat() or lstat() inside Perl is more sensible than futzing with the output of ls.
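For instance, a directory listing that might otherwise tempt you to parse the output of ls can be done natively along these lines (a rough sketch, not anyone's production code):
use strict;
use warnings;

my $dir = shift // '.';
opendir my $dh, $dir or die "opendir $dir: $!";
for my $name (sort readdir $dh) {
    next if $name eq '.' || $name eq '..';
    my @st = lstat "$dir/$name" or next;    # size, mtime, etc. without running ls
    printf "%-30s %10d bytes\n", $name, $st[7];
}
closedir $dh;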
If you need to execute commands, think very carefully about how you invoke them. In particular, you probably want to avoid the shell interpreting the arguments, so use the array form of system (see also exec), etc, rather than a single string for the command plus arguments (which means the shell is used to process the command line).
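Concretely, the difference looks like this (the pattern and filename are placeholders):
my ($pattern, $file) = ('needle', 'haystack.txt');   # placeholder values

# String form: the shell parses this line, so metacharacters in $file are dangerous.
system("grep -F '$pattern' $file");

# List form: no shell involved; each element is passed as a literal argument.
# (grep exits 1 when nothing matches, so interpret the status accordingly.)
system('grep', '-F', $pattern, $file) == 0
    or warn "grep exited with status $?";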
Executing external commands can be expensive simply because it involves forking a new process and watching its output if you need it.
Probably more importantly, should the external process fail for any reason, it may be difficult for your script to work out what happened. Worse still, surprisingly often an external process gets stuck forever, and then so does your script. You can use special tricks like opening a pipe and watching for output in a loop, but this is itself error-prone.
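One common form of that trick (a sketch only; cleanly killing a stuck child is a further problem it does not solve) wraps the piped read in an alarm:
use strict;
use warnings;

my $output = '';
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 10;                                    # give the command 10 seconds
    open my $fh, '-|', 'some_external_command'   # hypothetical command
        or die "cannot start command: $!";
    $output .= $_ while <$fh>;
    close $fh;
    alarm 0;
};
die "external command hung or failed: $@" if $@;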
Perl is very capable of doing many things. So, if you stick to using only Perl's native constructs and modules to accomplish your tasks, not only will it be faster because you never fork, but it will also be more reliable and easier to catch errors by looking at the native Perl objects and structures returned by library routines. And of course, it will be automatically portable to different platforms.
If your script runs under elevated permissions (like root or under sudo), you should be very careful about which external programs you execute. One simple way to ensure basic security is to always specify commands by full path, like /usr/bin/grep (but still think twice, and consider just doing the grep in Perl itself!). However, even this may not be enough if an attacker is using the LD_PRELOAD mechanism to inject rogue shared libraries.
If you want to go very secure, it is suggested that you enable taint checking with the -T flag, like this:
#!/usr/bin/perl -T
The taint flag will also be enabled automatically by Perl if your script is found to have different real and effective user or group IDs.
Taint mode will severely limit your ability to do many things (such as calling system()) without Perl complaining - see http://perldoc.perl.org/perlsec.html#Taint-mode for more - but it will give you much higher confidence in your script's security.
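Under -T, anything that comes from outside the program must be explicitly untainted before it can reach system() and friends. A minimal sketch (the grep pattern and the filename whitelist are arbitrary examples):
#!/usr/bin/perl -T
use strict;
use warnings;

# Taint mode insists on a trustworthy PATH before any external command runs.
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};

defined $ARGV[0] or die "usage: $0 filename\n";
my ($file) = $ARGV[0] =~ /\A([\w.\/-]+)\z/    # untaint via an explicit whitelist
    or die "unsafe filename\n";

system('/usr/bin/grep', 'pattern', $file) == 0
    or die "grep exited with status $?\n";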
Should one really use external commands while coding in Perl?
There's no single answer to this question. It all depends on what you are doing within the wide range of potential uses of Perl.
Are you using Perl as a glorified shell script on your local machine, or just trying to find a quick-and-dirty solution to your problem? In that case, it makes a lot of sense to run system commands if that is the easiest way to accomplish your task. Security and speed are not that important; what matters is the ability to code quickly.
On the other hand, are you writing a production program? In that case, you want secure, portable, efficient code. It is often preferable to write the functionality in Perl (or use a module), rather than calling an external program. At least, you should think hard about the benefits and drawbacks.
In Perl, how do I test if a file is open by another Perl program? I need the first program to be done before the second program can start.
FILES DO NOT WORK THAT WAY!
But seriously, folks, advisory locks with flock are generally the best you can do. There's no way to guarantee that no other program wants to read or write a file at the same time as you.
You may be able to coordinate the two programs using flock: The first program would lock the file, and the second program would also try to acquire a lock on it, and it would block until the first program releases the lock.
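A rough sketch of that arrangement (the file name is arbitrary; both programs are shown in one listing for brevity):
use strict;
use warnings;
use Fcntl qw(:flock);

# Program 1: hold an exclusive lock for as long as it needs the file.
open my $fh, '<', 'shared.dat' or die "open: $!";
flock($fh, LOCK_EX) or die "flock: $!";
# ... do the work ...
close $fh;                        # closing the handle releases the lock

# Program 2: this call blocks until program 1 has released its lock.
open my $fh2, '<', 'shared.dat' or die "open: $!";
flock($fh2, LOCK_EX) or die "flock: $!";
# ... now safe to proceed ...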
Is flock() available on your system? Otherwise, the two programs have to be synchronized; they can communicate through a pipe or a socket, or via the presence/absence of a file.
Another direction, if you are on a Unix-like system, could be to use the output of lsof.
I assume having the first program starting the second one is not feasible.
In my experience, flock works fine on local systems on both Windows and Linux.
You could also, presumably, have the first program exec the second program when it's done processing the file.
If you are running on Windows, you could call CreateFile directly with a dwShareMode of 0.
According to MSDN:
Prevents other processes from opening a file or device if they request delete, read, or write access.
Win32API::File gives access to this call.
To specifically check whether a file is open or in use on Unix, there is a wrapper around the lsof command for listing open files: Unix::Lsof
If you're on Unix, you could also call fuser.
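For example, fuser's exit status can be checked straight from Perl (this assumes the Linux fuser from psmisc, which exits 0 when some process has the file open and supports a silent -s flag; other systems may differ):
# An exit status of 0 from fuser -s means the file is currently in use.
my $in_use = system('fuser', '-s', '/path/to/file') == 0;
print $in_use ? "file is open by some process\n" : "file appears free\n";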
I might be misunderstanding the context, and my comment may have limited usability in your case, but depending on what you are using the code for/on, using serial queues to ensure that tasks execute in a predictable order may be an option. Your application (written in Perl) would need to explicitly create and manage the serial queues. For more details, refer to the following link: GCD
It is generally advised not to use external Linux tools in Perl code;
e.g. if someone intends to print the last line of a text file, they can do:
$last_line = `tail -1 $file` ;
or, alternatively, open the file and read it line by line:
open(my $info, '<', $file) or die "can't open $file: $!";
while (<$info>) {
    $last_line = $_ if eof;
}
What are the pitfalls of using the previous and why should I avoid using shell tools in my code?
thanx,
Efficiency - you don't have to spawn a new process
Portability - you don't have to worry about an executable not existing, accepting different switches, or having different output
Ease of use - you don't have to parse the output, the results are already in a usable form
Error handling - you have finer-grained control over errors and what to do about them in Perl.
It's better to keep all the action in Perl because it's faster and because it's more secure. It's faster because you're not spawning a new process, and it's more secure because you don't have to worry about shell metacharacter trickery.
For example, in your first case if $file contained "afilename ; rm -rf ~" you would be a very unhappy camper.
P.S. The best all-Perl way to do the tail is to use File::ReadBackwards
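Something along these lines, reusing the $file from the question (a sketch; see the module's documentation for details):
use File::ReadBackwards;

my $bw = File::ReadBackwards->new($file)
    or die "can't read $file: $!";
my $last_line = $bw->readline;   # last line of the file, newline still attached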
One of the primary reasons (besides portability) for not executing shell commands is that it introduces overhead by spawning another process. That's why much of the same functionality is available via CPAN in Perl modules.
One reason is that your Perl code might be running in an environment where there is no shell tool called 'tail'.
It's a personal call depending on the project:
Is it going to be always used in shell environments with tail?
Do you care about only using pure Perl code?
Using tail? Fine. But that's really a special case, since it's so easy to use and since it is so trivial.
The problem in general is not really efficiency or portability; that is largely irrelevant. The issue is ease of use. To run an external utility, you have to find out what arguments it accepts, write code to transform your program's data structures into that format, quote them properly, build the command line, and run the application. Then, you might have to feed it data and read data from it (involving complexity like an event loop, worrying about deadlocking, etc.), and finally interpret the return value. (A UNIX process signals success with an exit status of 0 and failure with anything else, but Perl treats 0 as false, so system(...) and die reads backwards.) This is a lot of work, and that's why people avoid it. It's much easier to create an instance of a class and call methods on it to get the data you need.
(You can abstract away processes this way; see Crypt::GpgME for example. It handles the complexity associated with invoking gpg, which would normally involve creating multiple filehandles other than STDOUT, STDIN, and STDERR, among other things.)
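As an aside, the exit-status mismatch mentioned above trips people up on its own; for instance:
# system returns the exit status: 0 (shell "success") is false to Perl,
# so checking for failure needs "and die" rather than the usual "or die":
system('true') and die "command failed";

# Spelling out the comparison makes the intent much clearer:
system('true') == 0 or die "command failed with status $?";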
The main reason I see for doing it all in Perl would be for robustness. Your use of tail will fail if the filename has shell metacharacters or spaces or doesn't exist or isn't accessible. From Perl, characters in the filename aren't an issue, and you can distinguish between errors in accessing the file. Sometimes being robust is more important than speedy coding and sometimes it's not.