Writing into a pipe in Perl

I am passing commands to an application from a Perl script, writing them over a pipe. My problem is that the pipe does not wait until the application has finished executing a command before taking the next one; it does not block further input until execution is over. I need my Perl script to work like a UNIX shell, but it behaves as if the process were running in the background. I use Term::ReadLine to read the input.
#!/usr/bin/perl -w
use strict;
use FileHandle;
use Term::ReadLine;
open (GP, "|/usr/bin/gnuplot -noraise") or die "no gnuplot";
GP->autoflush(1);
# define readline; under strict, $term must be declared with my
my $term = Term::ReadLine->new('ProgramName');
while (defined( $_ = $term->readline('plot>') )) {
    print GP "$_\n";
}
close(GP);

I recommend using a dedicated CPAN module such as Chart::Gnuplot, which gives you a high level of control.

Related

Printing the result of the script(running in background) on the terminal

I am writing a Perl script which runs another Tcl script. The terminal doesn't print anything and just waits for the Tcl script to complete.
`chmod +x runme.tcl`; `./runme.tcl 2>&1`;
Can anyone please help me on how to print the results of the tcl script on the terminal instead of just waiting for it to get completed?
Thank you
system('chmod +x runme.tcl');
system('./runme.tcl 2>&1');
You can run tcl scripts directly from perl using the Tcl module without having to mess around with qx or system:
#!/usr/bin/env perl
use warnings;
use strict;
use Tcl;
Tcl->new->EvalFile("runme.tcl");
It'll share the same standard output as the perl script.
If you're using a new enough version of Tcl, you can easily create a safe interpreter to evaluate the script in case it tries to do anything nasty:
#!/usr/bin/env perl
use warnings;
use strict;
use Tcl v1.05;
my $interp = Tcl->new;
my $safeinterp = $interp->CreateSlave("safeinterp", 1);
$interp->Eval('interp share {} stdout safeinterp');
$interp->Eval('interp share {} stderr safeinterp');
$safeinterp->EvalFile("runme.tcl");
Backticks capture the output of an external command. You can print that output by putting a print in front of the backticks.
`chmod +x runme.tcl`; print `./runme.tcl 2>&1`;
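If you also need to know whether the Tcl script succeeded, $? holds the status after the backticks return. A minimal sketch, using echo as a stand-in for ./runme.tcl:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Capture a command's output with backticks and check its exit status.
# 'echo' is only a stand-in here for ./runme.tcl.
my $output = `echo hello 2>&1`;
my $status = $? >> 8;    # the high byte of $? is the exit status
print $output;
print "exit status: $status\n";
```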

Writing to a text file using Perl CGI

I am using CGI to call a Perl function, and inside the Perl function I want it to write something to a text file. But it's not working at the moment. I have tried to run the CGI script on my Apache server. The CGI prints the web page as I required, but I can't see the file that I want it to write. It seems like the Perl script is not executed by the server.
The Perl script is quite simple. The file name is predict_seq_1.pl:
#!/usr/bin/perl -w
use strict;
use warnings;
my $seq = $ARGV[0];
my $txtfile = "test.txt";
open(my $FH, '>', $txtfile);
print $FH "test $seq success\n";
close $FH;
And the cgi part, I just call it by
system('perl predict_seq_1.pl', $geneSeq);
Your CGI program is going to be run by a user who has very low permissions on your system - certainly lower than your own user account on that same system.
It seems likely that one of two things is happening:
Your CGI process can't see the program you're trying to run.
Your CGI program doesn't have permission to execute the program you're trying to run.
You should check the return value from system() and look at the value of $?.
Either give system a single string, or give it a list of individual arguments. Your script is attempting (and failing) to run a program called perl predict_seq_1.pl, rather than having perl run the script predict_seq_1.pl.
system('perl', 'predict_seq_1.pl', $geneSeq);
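As suggested above, the return value of system and the value of $? are worth checking; a small sketch, using the running perl binary ($^X) as a stand-in for the real script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run a command in list form and inspect the result, per perldoc -f system.
# $^X (the currently running perl) stands in for predict_seq_1.pl here.
my @cmd = ($^X, '-e', 'exit 0');
my $rc = system(@cmd);
if ($rc == -1) {
    die "failed to start @cmd: $!";
}
elsif ($? & 127) {
    die sprintf "child died with signal %d\n", $? & 127;
}
my $exit = $? >> 8;
print "exit status: $exit\n";
```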

Invoke perl script from another perl script [duplicate]

What would be an example of how I can call a shell command, say 'ls -a' in a Perl script and the way to retrieve the output of the command as well?
How to run a shell script from a Perl program
1. Using system
system($command, @arguments);
For example:
system("sh", "script.sh", "--help" );
system("sh script.sh --help");
system will execute $command with
@arguments and return to your script when finished. You may check $?
for the child's exit status, and $! when system returns -1 because the
command could not be started. Read the documentation for system for the
nuances of how various invocations are slightly different.
2. Using exec
This is very similar to system, but it replaces the current
process and never returns if the command is started successfully.
Again, read the documentation for exec for more.
3. Using backticks or qx//
my $output = `script.sh --option`;
my $output = qx/script.sh --option/;
The backtick operator and its equivalent qx//, execute the command inside the operator and return that command's standard output as a string (or as a list of lines in list context).
There are also ways to run external applications through creative use of open, but this is advanced use; read the documentation for more.
From Perl HowTo, the most common ways to execute external commands from Perl are:
my $files = `ls -la` — captures the output of the command in $files
system "touch ~/foo" — if you don't want to capture the command's output
exec "vim ~/foo" — if you don't want to return to the script after executing the command
open(my $file, '|-', "grep foo"); print $file "foo\nbar" — if you want to pipe input into the command
Examples
`ls -l`;
system("ls -l");
exec("ls -l");
Look at the open function in Perl - especially the variants using a '|' (pipe) in the arguments. Done correctly, you'll get a file handle that you can use to read the output of the command. The back tick operators also do this.
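For example, the three-argument form of open with '-|' returns a handle reading from the command's output, and the list form skips the shell entirely. A minimal sketch:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# '-|' opens a handle reading from the command's standard output.
# Passing the command as a list avoids shell quoting problems.
open(my $ls, '-|', 'ls', '-a') or die "cannot run ls: $!";
my @entries;
while (my $line = <$ls>) {
    chomp $line;
    push @entries, $line;
}
close($ls);
print scalar(@entries), " entries\n";
```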
You might also want to review whether Perl has access to the C functions that the command itself uses. For example, for ls -a, you could use the opendir function, and then read the file names with the readdir function, and finally close the directory with (surprise) the closedir function. This has a number of benefits - precision probably being more important than speed. Using these functions, you can get the correct data even if the file names contain odd characters like newline.
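A sketch of that pure-Perl approach to ls -a:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pure-Perl equivalent of `ls -a`: no shell involved, so file names
# containing spaces or newlines come through intact.
opendir(my $dh, '.') or die "cannot open .: $!";
my @names = sort readdir($dh);
closedir($dh);
print "$_\n" for @names;
```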
As you become more experienced with using Perl, you'll find that there are fewer and fewer occasions when you need to run shell commands. For example, one way to get a list of files is to use Perl's built-in glob function. If you want the list in sorted order you could combine it with the built-in sort function. If you want details about each file, you can use the stat function. Here's an example:
#!/usr/bin/perl
use strict;
use warnings;
foreach my $file ( sort glob('/home/grant/*') ) {
my($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks)
= stat($file);
printf("%-40s %8u bytes\n", $file, $size);
}
There are a lot of ways you can call a shell command from a Perl script, such as:
back tick
`ls` — captures the output of the command and gives it back to you.
system
system('ls');
open
Refer #17 here: Perl programming tips
You might want to look into open2 and open3 in case you need bidirectional communication.
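IPC::Open2 (a core module) gives you handles on both the command's input and its output. A minimal sketch using sort, where the write handle must be closed before reading so the child sees end-of-file:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;

# open2 returns the child's pid and attaches $out to its stdout
# and $in to its stdin.
my $pid = open2(my $out, my $in, 'sort');
print $in "banana\napple\ncherry\n";
close $in;               # sort reads until EOF, then writes its output
my @sorted = <$out>;
close $out;
waitpid $pid, 0;
print @sorted;
```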
I have been using system and qq() to run Linux programs inside Perl, and it has worked well.
#!/usr/bin/perl
# The line above is the shebang line
use strict;   # It can save you a lot of time and headache
use warnings; # It helps you find typing mistakes
# The my keyword declares the listed variables
my $adduser = '/usr/sbin/adduser';
my $edquota = '/usr/sbin/edquota';
my $chage = '/usr/bin/chage';
my $quota = '/usr/bin/quota';
# Example values; in practice, assign these from your own input
my $fullname = 'John Doe';
my $username = 'jdoe';
my $home = "/home/$username";
# system() executes a system shell command;
# qq() can be used in place of double quotes
system qq($adduser --home $home --gecos "$fullname" $username);
system qq($edquota -p john $username);
system qq($chage -E \$(date -d +180days +%Y-%m-%d) $username);
system qq($chage -l $username);
system qq($quota -s $username);

Simple Perl script to loop very quickly

I'm trying to get a perl script to loop very quickly (in Solaris).
I have something like this:
#! /bin/perl
while ('true')
{
use strict;
use warnings;
use Time::HiRes;
system("sh", "shell script.sh");
Time::HiRes::usleep(10);
}
I want the perl script to execute a shell script every 10 microseconds. The script doesn't fail but no matter how much I change the precision of usleep within the script, the script is still only being executed approx 10 times per second. I need it to loop much faster than that.
Am I missing something fundamental here? I've never used perl before but I can't get the sleep speed I want in Solaris so I've opted for perl.
TIA
Huskie.
EDIT:
Revised script idea thanks to user comments - I'm now trying to do it all within perl and failing miserably!
Basically I'm trying to run the ps command to capture processes - if the process exists I want to capture the line and output it to a text file.
#! /bin/perl
while ('true')
{
use strict;
use warnings;
use Time::HiRes;
open(PS,"ps -ef | grep <program> |egrep -v 'shl|grep' >> grep_out.txt");
Time::HiRes::usleep(10);
}
This returns the following error:
Name "main::PS" used only once: possible typo at ./ps_test_loop.pl line 9.
This is a pure perl program (not launching any external process) that looks for processes running some particular executable:
#!/usr/bin/perl
use strict;
use warnings;
my $cmd = 'lxc-start';
my $cmd_re = qr|/\Q$cmd\E$|;
$| = 1;
while (1) {
opendir PROC, "/proc" or die $!;
while (defined(my $pid = readdir PROC)) {
next unless $pid =~ /^\d+$/;
if (defined(my $exe = readlink "/proc/$pid/exe")) {
if ($exe =~ $cmd_re) {
print "pid: $pid\n";
}
}
}
closedir PROC;
# sleep 1;
}
On my computer this runs at 250 times/second.
The bottleneck is the creation of processes, pipes, and opening the output file. You should be doing that at most once, instead of doing it in each iteration. That's why you need to do everything in Perl if you want to make this faster. Which means: don't call the ps command, or any other command. Instead, read from /proc or use Proc::ProcessTable, as the comments suggest.
Incidentally: the use statement is executed only once (it is essentially a shorthand for a require statement wrapped in a BEGIN { } clause), so you might as well put that at the top of the file for clarity.

How to run node.js script without a file, from Perl?

I'm generating a node.js script inside a Perl program, and I want to run it through node.js as a JavaScript interpreter. How can I run the code in $script without writing it to disk and then calling node, afterwards capturing the output?
I'm using the system command, which I think is good for this purpose.
Use IPC::Run or IPC::Open3.
use strictures;
use IPC::Run qw(run);
use autodie qw(:all run);
my $in = '… JavaScript goes here …';
my $out;
run ['node'], \$in, \$out;
Use open instead of system
#!/usr/bin/perl
open(my $node, '|-', 'node') or die "cannot run node: $!";
print $node "console.log('hello world');";
close($node);
Or if you don't need to do it from inside the perl script, just from your shell:
$ ./myscript.pl | node
Where myscript.pl exits after printing the javascript code