Writing to a text file using Perl CGI

I am using CGI to call a Perl function, and inside the Perl function I want it to write something to a text file, but it's not working so far. I have tried running the CGI script on my Apache server. The CGI prints the web page as I required, but I can't see the file that I want it to write. It seems the Perl script is not executed by the server.
The Perl script is quite simple. The file name is predict_seq_1.pl:
#!/usr/bin/perl -w
use strict;
use warnings;
my $seq = $ARGV[0];
my $txtfile = "test.txt";
open(my $FH, '>', $txtfile) or die "Cannot open $txtfile: $!";
print $FH "test $seq success\n";
close $FH;
And in the CGI part, I just call it with:
system('perl predict_seq_1.pl', $geneSeq);

Your CGI program is going to be run by a user who has very low permissions on your system, certainly lower than your own user account on that same system.
It seems likely that one of two things is happening:
Your CGI process can't see the program you're trying to run.
Your CGI program doesn't have permission to execute the program you're trying to run.
You should check the return value from system() and look at the value of $?.
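As a minimal sketch (reusing the call from the question, in the corrected list form shown in the next answer), the standard idiom from the system documentation looks like this:
my $rc = system('perl', 'predict_seq_1.pl', $geneSeq);
if ($rc != 0) {
    if ($? == -1) {
        print "failed to execute: $!\n";
    }
    elsif ($? & 127) {
        printf "child died with signal %d\n", $? & 127;
    }
    else {
        printf "child exited with value %d\n", $? >> 8;
    }
}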

Either give system a single string, or give it individual arguments. Your script is attempting (and failing) to run a program called "perl predict_seq_1.pl", rather than having perl run the script predict_seq_1.pl.
system('perl', 'predict_seq_1.pl', $geneSeq);

Related

How to make the output file of one script the input file for another script?

I am currently working on a project in which I am using Perl to create a command-line version of an online tool.
There are nine modules in total (each module has its own Perl script).
The command-line application should work in the following way:
Out of these nine modules, the user can select any number of modules (in short, a pipeline should be built).
After running the first selected module, output files are generated.
The output file of the first module should be taken as an input file by the next module the user selected.
My question is how we can make the output file of the first module an input file for the next selected module.
It would be a great help if you could clear this up, as I am new to Perl programming.
Thanking you!
Tamar is right. You can use the pipe command: "|". You can do this whether you're on Windows or a Unix-based operating system.
Here's a simple example of what you're doing:
Code to output data
out.pl
#!/usr/bin/perl
use strict;
use warnings;
my $file = "output.txt";
my $data = "gasp";
unless (-e $file) {
    open(my $fh, '>', $file) or die "Cannot open $file: $!";
    print $fh $data;
    close $fh;
}
Code that takes input file
in.pl
#!/usr/bin/perl
use strict;
use warnings;
my $gaspage = <STDIN>;
chomp $gaspage;
print $gaspage."\n";
Then you just run it with the commands below, which you can run from within your Perl application or just in the terminal:
perl out.pl
cat output.txt | perl in.pl
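If you are driving the whole pipeline from a wrapper script instead, one minimal sketch (the module script names and output file names here are hypothetical) is to run each module with system and hand the previous module's output file to the next one:
use strict;
use warnings;

# Hypothetical module scripts selected by the user, in order,
# paired with the output file each one is assumed to write.
my @pipeline = (
    { script => 'module1.pl', output => 'module1_out.txt' },
    { script => 'module2.pl', output => 'module2_out.txt' },
);

my $input;    # the first module gets no input file
for my $stage (@pipeline) {
    my @cmd = ('perl', $stage->{script});
    push @cmd, $input if defined $input;
    system(@cmd) == 0 or die "$stage->{script} failed: $?";
    $input = $stage->{output};    # the next module reads this file
}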

How to read a .conf file in Perl

I just created a text file test.conf with some information. How can I read it in Perl?
I am new to Perl and I am not sure what I need to do.
I tried the following:
C:\Perl\Perl_Project>perl
#!/usr/local/bin/perl
open (MYFILE, 'test.conf');
while (<MYFILE>)
{ chomp; print "$_\n"; }
close (MYFILE);
I installed Perl on my laptop, which runs Windows 7, and I am using the command line.
Instead of typing your program at the command line, write it in a file (you can use any editor; I would suggest Notepad++) and save it as myprogram.pl in the same directory as your .conf file.
use warnings;
use strict;
open my $fh, "<", "test.conf" or die $!;
while (<$fh>)
{
    chomp;
    print "$_\n";
}
close $fh;
Now open a command prompt, go to the path where you have both files (myprogram.pl and test.conf), and execute your program by typing this:
perl myprogram.pl
You can also give the full path of your input file inside the program and then run your program from any directory by giving the full path to the program:
perl path\to\myprogram.pl
Side note: Always put use warnings; and use strict; at the top of your program, and always open files with a lexical filehandle, the three-argument form of open, and error handling.
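If your test.conf happens to contain simple key=value lines, a minimal parsing sketch (the format is an assumption; adjust it to your actual file) could look like this:
use strict;
use warnings;

my %config;
open my $fh, "<", "test.conf" or die $!;
while (my $line = <$fh>) {
    chomp $line;
    next if $line =~ /^\s*(#|$)/;    # skip comments and blank lines
    my ($key, $value) = split /\s*=\s*/, $line, 2;
    $config{$key} = $value;
}
close $fh;
print "$_ = $config{$_}\n" for sort keys %config;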
This is an extended comment more than an answer, as I believe @serenesat has given you everything you need to execute your program.
When you do "command line" Perl, it's typically stuff that is relatively brief or trivial, such as:
perl -e "print 2 ** 16"
Anything that goes beyond a few lines, and you're probably better off putting that in a file and having Perl run the file. You certainly can put larger programs on the command line, but when it comes to going back in and editing lines, it becomes more of a hassle than a shortcut.
Also, for what it's worth, the -n and -p switches allow you to process the contents of a stream, meaning you could do something like this:
perl -ne "print if /oracle/i" test.conf

How can I automatically run a large number of perl scripts?

I need to run over 100 Perl scripts (written by a former employee) on Windows for our system stability testing. Each script has several functions, and each function sends certain Linux commands to our back-end system and gets results back. The results are written to a log file (currently each script has its own log file). The results are "Success" or "Fail".
Running these Perl scripts one by one is killing my time. I am thinking about writing a batch file to automate it, but I would have to parse the result files to generate a test report. I searched online, and several testing frameworks, such as Test::Harness, Test::More, and Test::Most, look like good choices. But as far as I understand, they only take .t files, and our scripts are normal Perl scripts (.pl), not standard Perl test scripts (.t scripts). If using, say, Test::Harness, should I rename all the Perl scripts from .pl to .t and put them under the t folder? How would I call my functions in Test::Harness? Can someone suggest a better way to automate the testing process and generate a test report like Test::Harness does? I think an example would be very helpful.
Test::Harness and friends aren't really an appropriate choice for this task, unless you want to modify all 100 of your scripts to emit TAP data instead of a log file.
Why not just write a Perl script to run all your Perl scripts?
use strict;
use warnings;
my $script_dir = "/path/to/dir/full/of/scripts";
opendir my $dh, $script_dir or die "Can't open dir $script_dir: $!";
my @scripts = grep { /\.pl$/ } readdir $dh;
closedir $dh;
foreach my $script (@scripts) {
    print "Running $script\n";
    system 'perl', "$script_dir/$script";
}
You could even parallelize this using fork and exec (or Parallel::ForkManager, even better), assuming that makes sense for your system.
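For instance, a minimal sketch with Parallel::ForkManager (the directory path and the degree of parallelism are assumptions) might look like this:
use strict;
use warnings;
use Parallel::ForkManager;

my $script_dir = "/path/to/dir/full/of/scripts";
opendir my $dh, $script_dir or die "Can't open dir $script_dir: $!";
my @scripts = grep { /\.pl$/ } readdir $dh;
closedir $dh;

my $pm = Parallel::ForkManager->new(4);    # run up to 4 scripts at once
foreach my $script (@scripts) {
    $pm->start and next;                   # parent: move on to the next script
    system('perl', "$script_dir/$script") == 0
        or warn "$script failed: $?";
    $pm->finish;                           # child exits here
}
$pm->wait_all_children;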
One of us is confused here. These (100+) Perl scripts aren't unit tests, right?
If I'm correct, keep reading.
The Test::* modules you mentioned aren't really what you're looking for.
It sounds to me like you just need a main.pl, or a .bat file, to run each test .pl script.
So it seems you're on the right path. If it's possible to have all the tests in the same directory, you can do something like this:
my $tests_directory = "/some/path/test_dir";
opendir my $dh, $tests_directory or die "$!";
my @tests = grep { $_ !~ /^\.{1,2}$/ } readdir $dh;
closedir $dh;
for my $test (@tests) {
    system('perl', "$tests_directory/$test");
}

Writing into a pipe in perl

I am passing commands to an application from a Perl script through a pipe.
So I write the commands to the pipe. But my problem is that the pipe does not wait until the application has finished executing one command before it accepts the next, so the input is not blocked until the command execution is over. I need my Perl script to work like a UNIX shell, but it behaves as if the process were running in the background. I use Term::ReadLine to read the input.
#!/usr/bin/perl -w
use strict;
use Term::ReadLine;
use FileHandle;
open(GP, "|/usr/bin/gnuplot -noraise") or die "no gnuplot";
GP->autoflush(1);
# define readline
my $term = Term::ReadLine->new('ProgramName');
while (defined( $_ = $term->readline('plot>') )) {
    print GP "$_\n";
}
close(GP);
I recommend using a dedicated CPAN module, like Chart::Gnuplot, so you can have a higher level of control.
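As a minimal sketch (the output file name and the data are made up; see the Chart::Gnuplot documentation for the full interface):
use strict;
use warnings;
use Chart::Gnuplot;

# Hypothetical data to plot
my @x = (1, 2, 3, 4, 5);
my @y = (1, 4, 9, 16, 25);

my $chart = Chart::Gnuplot->new(
    output => "plot.png",
    title  => "Sample plot",
    xlabel => "x",
    ylabel => "y",
);
my $data = Chart::Gnuplot::DataSet->new(
    xdata => \@x,
    ydata => \@y,
    style => "linespoints",
);
$chart->plot2d($data);    # runs gnuplot and writes plot.png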

How do I run shell commands in a CGI program as the nobody user?

I want to run shell commands in a CGI program (written in Perl). My program doesn’t have root permission. It runs as nobody. I want to use this code:
use strict;
system <<'EEE';
awk '{a[$1]+=$2;b[$1]+=$3}END{for(i in a)print i, a[i], b[i]|"sort -nk 3"}' s.txt
EEE
I can run my code successfully with perl from the command line but not as a CGI program.
Based on the code in your question, there are at least four possibilities for failure.
The nobody user does not have permission to execute your program.
The Perl code in your question has no shebang (#!) line. You are trying to run awk, so I assume you are running on some form of Unix. If your code is missing this line, then your operating system does not know how to run your program.
The file s.txt is either not in the executing program’s working directory, or it is not readable by the nobody user.
For whatever reason, awk is not reachable via the PATH of your executing program’s environment.
To quickly diagnose such low-level problems, try to have all error output to show up in the browser. One way to do this is adding the following just after the shebang line in your code.
BEGIN {
    print "Content-type: text/plain\n\n";
    open STDERR, ">&", \*STDOUT or print "$0: dup: $!";
}
The output will render as plain text rather than HTML, but this is a temporary measure to see your program's output. By wrapping it in a BEGIN block, the code executes as soon as it has been parsed. Redirecting STDERR means your browser also gets anything written to the standard error stream.
Another way to do this is with the CGI::Carp module.
use CGI::Carp 'fatalsToBrowser';
This way, errors go to the browser and also to the web server’s error log.
If you still see 500-series errors from your server, the problem is happening at a lower level: probably some failure to start perl. Go examine your server’s error log. Once your program is executing, you can remove this temporary redirection of error output.
Finally, I recommend changing your program to
#! /usr/bin/perl -T
BEGIN { print "Content-type: text/plain\n\n"; }
use strict;
use warnings;
$ENV{PATH} = "/bin:/usr/bin";
my $input = "/path/to/your/s.txt";
my $buckets = <<'EOProgram';
{ a[$1] += $2; b[$1] += $3 }
END { for (i in a) print i, a[i], b[i] }
EOProgram
open STDIN, "-|", "awk", $buckets, $input or die "$0: open: $!";
exec "sort", "-nk", 3 or die "$0: exec: $!";
The -T switch enables a security dataflow analysis called taint mode that prevents you from using unsanitized input on system operations such as open, exec, and so on that an attacker (or benign user supplying unexpected input) could use to harm your system. You should always add -T to CGI programs and any other code that runs on behalf of another user.
Given the nature of your awk program, a content type of text/plain seems reasonable. Output it as soon as possible.
With taint mode enabled, be explicit about the value of your PATH environment variable. If instead you stick with whatever untrusted PATH your program inherits, attempting to run external programs will fail.
Nail down the full path of your input. This will eliminate surprises.
Using the multi-argument forms of open and exec eliminates the shell and its argument parsing. (For completeness, system also has a similar multi-argument form.) Yes, writing it this way can mean being a little more deliberate (such as breaking out the arguments and setting up the pipeline yourself), but it also avoids nasty surprises.
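For example, the multi-argument form of system (a sketch reusing the $buckets and $input variables from the program above) bypasses the shell in the same way:
system("/usr/bin/awk", $buckets, $input) == 0
    or die "$0: awk failed: $?";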
I'm sure nobody is allowed to run shell commands. The problem is that nobody doesn't have permission to open the file s.txt. Add read permission for everyone to s.txt, and add execute permission for everyone on every directory in the path to s.txt.
I would suggest finding out the fully qualified path of awk and specifying it directly. Most likely the nobody user that launched httpd has a very minimal path in its $ENV{PATH}; displaying $ENV{PATH} will probably confirm this.
That is a good thing. I wouldn't modify the path; just specify the full path, e.g. /usr/bin/awk or whatever it is.
If you have shell access and it works there, type 'which awk' to find this out.
"I can run my code successfully in a perl file but not in a cgi file."
What web server are you running under? For instance, Apache requires printing a CGI header first, i.e. print "Content-type: text/plain; charset=utf-8\n\n", or:
use CGI;
my $q = CGI->new();
print $q->header('text/html');
(See CGI)
Apache will complain in the log (error.log) about "premature end of script headers" if what I said is the case.
You could just do it inline without having to fork out to another process...
if ( open my $fh, '<', 's.txt' ) {
    my %data;
    while (<$fh>) {
        my ($c1, $c2, $c3) = split;
        $data{a}{$c1} += $c2;   # sum of column 2, keyed by column 1
        $data{b}{$c1} += $c3;   # sum of column 3, keyed by column 1
    }
    # print keys ordered by their column-3 totals, like "sort -nk 3"
    foreach ( sort { $data{b}{$a} <=> $data{b}{$b} } keys %{ $data{b} } ) {
        print "$_ $data{a}{$_} $data{b}{$_}\n";
    }
} else {
    warn "Unable to open s.txt: $!\n";
}