When starting my HTTP server I don't want to see "Dancer2 v0.201000 server <pid> listening on http://0.0.0.0:<port>" printed to STDERR. That's why I added the STDERR redirection below before calling start(). For context, this is one of my routes:
get "/pwd" => sub {
my $pwd = cwd;
print STDERR "\n\n[PWD] : $pwd\n"; # this line is not being printed
print "\n\n[STDOUT::PWD] : $pwd\n";
my %responseHash = ( pwd => $pwd );
my $response = encode_json \%responseHash;
return $response;
};
my $dancerStartErr;
sub startServer {
open (local *STDERR, ">", \$dancerStartErr)
or die "Dup err to variable error: $!\n";
start();
}
startServer();
The problem is that later I can't print anything to STDERR. How can I reopen STDERR? (open(STDERR, ">", \*STDERR); doesn't help.)
If you don't want your application to log anything, you can change the logging engine to Dancer2::Logger::Null. You do that by editing your config.yml, or one of your environment configs. For example, to turn logging off in production, edit appdir/environments/production.yml:
logger: 'null'
The default is the logging engine 'console', which prints stuff to your terminal.
There are other Dancer2::Logger:: classes available, both bundled with Dancer2 and on CPAN in their own distributions. A better solution than just dumping everything into a black hole might be to log to a file instead. Documentation on how to configure logging further can be found in Dancer2::Core::Role::Logger.
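For instance, a file-based setup in the environment config could look roughly like this (a sketch assuming the bundled Dancer2::Logger::File engine; check its documentation for the exact option names):
# appdir/environments/production.yml (sketch)
logger: 'file'
engines:
  logger:
    File:
      log_dir: 'logs'
      file_name: 'production.log'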
Also note that instead of printing to STDERR in your code, you should use the logging keywords with the appropriate log level.
print STDERR "\n\n[PWD] : $pwd\n"; # this line is not being printed
This is not a good idea, because you cannot distinguish if this is an error, or a warning, or just debugging output. That's why there are different log levels built into Dancer2.
core
debug
info
warning
error
All of them are available as keywords. There is documentation on it in Dancer2::Manual.
Since the working directory is probably not relevant in production, but only during development, you'd go with debug.
debug "[PWD] : $pwd";
That's it. It takes care of newlines and such for you automatically.
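As a rough sketch, the route from the question could then look like this (assuming the Dancer2 DSL is imported and encode_json comes from a JSON module such as JSON::MaybeXS):
use Dancer2;
use Cwd qw(cwd);
use JSON::MaybeXS qw(encode_json);   # assumption: any encode_json provider works here

get "/pwd" => sub {
    my $pwd = cwd;
    debug "[PWD] : $pwd";            # goes to the configured logger, not raw STDERR
    return encode_json({ pwd => $pwd });
};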
You could use select before redirecting, to save the currently selected filehandle in a variable:
my $oldfh = select(STDERR);
and then restore it later:
select($oldfh);
Also check out:
Capture::Tiny::Extended
How to redirect and restore STDOUT/STDERR
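A minimal sketch of the dup-and-restore approach from the linked answer, assuming you want the real STDERR back once start() has returned:
my $dancerStartErr;

open(my $saved_stderr, '>&', \*STDERR)    # keep a duplicate of the real STDERR
    or die "Can't dup STDERR: $!";
open(STDERR, '>', \$dancerStartErr)       # capture Dancer2's startup chatter
    or die "Can't redirect STDERR: $!";

start();

open(STDERR, '>&', $saved_stderr)         # put the original STDERR back
    or die "Can't restore STDERR: $!";
print STDERR "STDERR works again\n";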
I am one day old to Perl and was going through the API doc here; I have a few basic questions.
$sftp = Net::SFTP::Foreign->new($host, autodie => 1);
my $ls = $sftp->ls("/bar");
# dies as: "Couldn't open remote dir '/bar': No such file"
Questions:
With autodie, will the connection be auto-closed?
We see in the above example how to use a folder; does similar syntax also work for a file?
Or does something like this make more sense?
my $sftp = Net::SFTP::Foreign->new($host, autodie => 1);
$sftp->find("/sdfjkalshfl",   # nonexistent directory
    on_error => sub { print "foo!\n"; $sftp->disconnect(); exit; });
I was trying to run the following code on my Windows machine:
use Net::SFTP::Foreign;
my $host = "demo.wftpserver.com";
my $sftp = Net::SFTP::Foreign->new($host ,ssh_cmd => 'plink',autodie => 1);
my $ls = $sftp->ls("/bar");
But I get this error:
'plink' is not recognized as an internal or external command,
However, when I run plink from the Windows command line, it works fine!
with autodie will the connection be auto closed ?
Yes. When the program ends, everything is destroyed and connections are closed. That is also the case when the $sftp variable goes out of scope. Modules like this usually implement a DESTROY sub, which is invoked when the object (which is just a blessed reference in Perl) goes out of scope, and can do some cleanup there. Other examples that behave this way are DBI handles, and of course lexical filehandles (like $fh from an open call).
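A bare-bones illustration of the mechanism (not Net::SFTP::Foreign's actual code):
package My::Connection;

sub new {
    my ($class, %args) = @_;
    return bless { %args, connected => 1 }, $class;
}

# Perl calls DESTROY automatically when the last reference to the object
# goes away, e.g. when $conn below leaves its scope or the program ends.
sub DESTROY {
    my $self = shift;
    $self->{connected} = 0;   # real modules close sockets, log out, etc. here
}

package main;

{
    my $conn = My::Connection->new(host => 'example.com');
}   # DESTROY runs here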
we see in above example how to use folder , similar syntax also works for file ?
No. The docs say ls is for a directory:
Fetches a listing of the remote directory $remote. If $remote is not given, the current remote working directory is listed.
But you can just do ls for the directory that the file you want is in, and use the wanted option.
my $ls = $sftp->ls( '/home/foo', wanted => qr/^filename\.txt$/ );
Though with autodie turned on, that call will die if it fails, so if you don't want it to actually die here, you should wrap it in a Try::Tiny call or an eval.
use feature 'say';
use Try::Tiny;
# ...
my $ls = try {
    return $sftp->ls( '/home/foo', wanted => qr/^filename\.txt$/ );
} catch {
    return;   # returns undef on failure
};
say 'Found file "filename.txt" on remote server' if $ls and @$ls;
As for plink not being found: the PATH that your Perl process sees is probably different from the one your interactive Windows command prompt uses.
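One way around that is to give Net::SFTP::Foreign the full path to plink rather than relying on PATH (the path below is only a placeholder; point it at wherever plink.exe actually lives):
my $sftp = Net::SFTP::Foreign->new(
    $host,
    ssh_cmd => 'C:/Program Files/PuTTY/plink.exe',   # placeholder path; adjust
    autodie => 1,
);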
I have a few Perl CGI scripts which have been up and running for the past few years. All of a sudden, for the past few days, they have been going up and down intermittently. There are no syntax errors in them: they sometimes work, they have been there for quite a while, and I did not change them recently. I can also run them from the Linux shell without any problem, and the file permissions are 755, so everything seems to be set up properly. They are hosted by a web hosting company and I have no access to the server log files.
The error message is the generic server error page:
"Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator and inform them of the time the error occurred, and the actions you performed just before this error.
More information about this error may be available in the server error log."
Add use CGI::Carp qw( fatalsToBrowser ); early in your program to have the error returned to the browser.
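Placement matters: load it near the top of the script, before anything that can die, along these lines:
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Carp qw( fatalsToBrowser );   # fatal errors now come back as a page in the browser

# ... rest of the CGI script ...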
Alternatively, you could use the same technique CGI::Carp uses, or wrap your script so that errors are saved in your own log file.
Add the following to the start of a script to have it log errors and warnings to a log file of your choice.
sub self_wrap {
my $log_qfn = "$ENV{HOME}/.web.log"; # Adjust as needed.
open(my $log_fh, '>>', $log_qfn)
or warn("Can't append to log file \"$log_qfn\": $!"), return;
require IPC::Open3;
require POSIX;
my $prefix = sprintf("[%s] [client %s] ",
POSIX::strftime('%Y-%m-%d %H:%M:%S', localtime),  # strftime (not strptime): format a timestamp
$ENV{REMOTE_ADDR} || '???'
);
my $suffix = $ENV{HTTP_REFERER} ? ", $ENV{HTTP_REFERER}" : '';
my $pid = IPC::Open3::open3(
'<&STDIN',
'>&STDOUT',
local *CHILD_STDERR,
$^X, $0, @ARGV
);
while (<CHILD_STDERR>) {
print(STDERR $_);
chomp;
print($log_fh $prefix, $_, $suffix, "\n");
}
waitpid($pid, 0);
POSIX::_exit(($? & 0x7F) ? ($? & 0x7F) | 0x80 : $? >> 8);
}
BEGIN { self_wrap() if !$ENV{WRAPPED}++; }
If your site has recently been transferred to a different server by your hosting company, or the server settings have recently been changed, try saving the file with HTML-Kit using 'Save as extra' >> 'Save as UNIX format' (i.e. with Unix line endings) and then upload it again.
I want to run shell commands in a CGI program (written in Perl). My program doesn’t have root permission. It runs as nobody. I want to use this code:
use strict;
system <<'EEE';
awk '{a[$1]+=$2;b[$1]+=$3}END{for(i in a)print i, a[i], b[i]|"sort -nk 3"}' s.txt
EEE
I can run my code successfully with perl from the command line but not as a CGI program.
Based on the code in your question, there are at least four possibilities for failure.
The nobody user does not have permission to execute your program.
The Perl code in your question has no shebang (#!) line. You are trying to run awk, so I assume you are running on some form of Unix. If your code is missing this line, then your operating system does not know how to run your program.
The file s.txt is either not in the executing program’s working directory, or it is not readable by the nobody user.
For whatever reason, awk is not reachable via the PATH of your executing program’s environment.
To quickly diagnose such low-level problems, try to have all error output to show up in the browser. One way to do this is adding the following just after the shebang line in your code.
BEGIN {
print "Content-type: text/plain\n\n";
open STDERR, ">&", \*STDOUT or print "$0: dup: $!";
}
The output will render as plain text rather than HTML, but this is a temporary measure to see your program's output. By wrapping it in a BEGIN block, the code executes as soon as it is parsed. Redirecting STDERR means your browser also gets anything written to the standard error stream.
Another way to do this is with the CGI::Carp module.
use CGI::Carp 'fatalsToBrowser';
This way, errors go to the browser and also to the web server’s error log.
If you still see 500-series errors from your server, the problem is happening at a lower level: probably some failure to start perl. Go examine your server’s error log. Once your program is executing, you can remove this temporary redirection of error output.
Finally, I recommend changing your program to
#! /usr/bin/perl -T
BEGIN { print "Content-type: text/plain\n\n"; }
use strict;
use warnings;
$ENV{PATH} = "/bin:/usr/bin";
my $input = "/path/to/your/s.txt";
my $buckets = <<'EOProgram';
{ a[$1] += $2; b[$1] += $3 }
END { for (i in a) print i, a[i], b[i] }
EOProgram
open STDIN, "-|", "awk", $buckets, $input or die "$0: open: $!";
exec "sort", "-nk", 3 or die "$0: exec: $!";
The -T switch enables a security dataflow analysis called taint mode that prevents you from using unsanitized input on system operations such as open, exec, and so on that an attacker (or benign user supplying unexpected input) could use to harm your system. You should always add -T to CGI programs and any other code that runs on behalf of another user.
Given the nature of your awk program, a content type of text/plain seems reasonable. Output it as soon as possible.
With taint mode enabled, be explicit about the value of your PATH environment variable. If instead you stick with whatever untrusted PATH your program inherits, attempting to run external programs will fail.
Nail down the full path of your input. This will eliminate surprises.
Using the multi-argument forms of open and exec eliminates the shell and its argument parsing. (For completeness, system also has a similar multi-argument form.) Yes, writing it this way can mean being a little more deliberate (such as breaking out the arguments and setting up the pipeline yourself), but it also avoids nasty surprises.
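For instance, the list form of system hands the arguments straight to the program with no shell in between (the awk program here is only illustrative):
# No shell is involved, so whitespace or metacharacters in the arguments
# cannot be reinterpreted as shell syntax.
my @cmd = ('/usr/bin/awk', '{ a[$1] += $2 } END { for (i in a) print i, a[i] }', $input);
system(@cmd) == 0
    or die "$0: awk exited with status ", $? >> 8, "\n";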
I'm sure the nobody user is allowed to run shell commands. The problem is that nobody doesn't have permission to open the file s.txt. Add read permission for everyone on s.txt, and add execute permission for everyone on every directory leading to s.txt.
I would suggest finding out the fully qualified path to awk and specifying it directly. The nobody user that launched httpd likely has a very minimal $ENV{PATH}; displaying $ENV{PATH} will, I am guessing, confirm this.
That minimal PATH is a good thing. I wouldn't modify it; just specify the full path, /usr/bin/awk or whatever it is.
If you have shell access and it works, type 'which awk' to find this out.
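A quick temporary diagnostic in the CGI itself (assuming plain-text output is acceptable while debugging):
# Show the PATH the CGI process actually inherits from the web server.
print "Content-type: text/plain\n\n";
print "PATH: $ENV{PATH}\n";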
I can run my code successfully as a perl file but not as a CGI file.
What web server are you running under? For instance, Apache requires that you print a CGI header first, i.e. print "Content-type: text/plain; charset=utf-8\n\n";, or
use CGI;
my $q = CGI->new();
print $q->header('text/html');
(See CGI)
Apache will complain in the log (error.log) about "premature end of script headers" if that is the case.
You could just do it inline without having to fork out to another process...
if ( open my $fh, '<', 's.txt' ) {
my %data;
while (<$fh>) {
my ($c1,$c2,$c3) = split;
$data{a}{$c1} += $c2;
$data{b}{$c1} += $c3;
}
foreach ( sort { $data{b}{$a} <=> $data{b}{$b} } keys %{ $data{b} } ) {
print "$_ $data{a}{$_} $data{b}{$_}\n";
}
} else {
warn "Unable to open s.txt: $!\n";
}
I just discovered wkhtmltopdf and I'm trying to use it in a perl CGI script to generate PDFs. Basically, the perl script writes an HTML file, calls wkhtmltopdf via system() to create a pdf, then downloads the pdf and deletes the temporary files.
open NNN, ">$path_to_files/${file}_pdf.html" or die "can't write file: $!";
print NNN $text;
close NNN;
my @pdfSettings = (
"d:/very/long/path/wkhtmltopdf",
"$path_to_files/${file}_pdf.html",
"$path_to_files/$file.pdf"
);
system(@pdfSettings);
open(DLFILE, '<', "$path_to_files/$file.pdf");
print $q->header(
-type=> 'application/x-download',
-attachment => "$file.pdf",
-filename => "$file.pdf",
'Content-length' => -s "$path_to_files/$file.pdf",
);
binmode DLFILE;
print while <DLFILE>;
close (DLFILE);
unlink("$path_to_files/${file}_pdf.html");
unlink("$path_to_files/${file}.pdf");
This works fine on my local server. However, when I upload it to my public server, it gets as far as creating the pdf file and then dies with "The specified CGI application misbehaved by not returning a complete set of HTTP headers."
Moving the "print $q->header" to before the system() call causes the pdf to generate with wkhtmltopdf's console output ("Loading pages (1/6)," etc.) at the top of the file, so I think what's happening is that wkhtmltopdf is spewing that information headerless to the server and causing it to fail. But I can't find any options in the wkhtmltopdf docs to turn off the console output, and I can't figure out a Perl method to suppress/redirect that output.
(Yes, I'm aware of WKHTMLTOPDF.pm, but I was having trouble installing it for my flavor of ActivePerl and I wanted to avoid switching if possible.)
How about executing via qx or backticks instead of system(), and redirecting the output to NUL:?
qx("d:/very/long/path/wkhtmltopdf" "$path_to_files/${file}_pdf.html" "$path_to_files/$file.pdf" > NUL: 2> NUL:);
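An alternative sketch that keeps system(): temporarily point STDOUT and STDERR at the null device around the call (reusing @pdfSettings from the question), then restore them before printing the download header.
use File::Spec;
my $devnull = File::Spec->devnull;   # NUL on Windows, /dev/null elsewhere

# keep duplicates of the real handles
open my $saved_out, '>&', \*STDOUT or die "dup STDOUT: $!";
open my $saved_err, '>&', \*STDERR or die "dup STDERR: $!";

# silence wkhtmltopdf's console chatter
open STDOUT, '>', $devnull or die "redirect STDOUT: $!";
open STDERR, '>', $devnull or die "redirect STDERR: $!";

system(@pdfSettings);

# put the originals back before sending the HTTP header and the PDF
open STDOUT, '>&', $saved_out or die "restore STDOUT: $!";
open STDERR, '>&', $saved_err or die "restore STDERR: $!";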
I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors.
The code looks something like this:
use IPC::Open3;
my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php');   # any website will do here
my ($wget_pid,$wget_in,$wget_out,$wget_err);
if (!($wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd))) {
print STDERR "failed to run open3\n";
exit(1)
}
close($wget_in);
my @wget_outs = <$wget_out>;
my @wget_errs = <$wget_err>;
print STDERR "wget stderr: " . join('', @wget_errs);
# page and errors outputted on the next line, seems wrong
print STDERR "wget stdout: " . join('', @wget_outs);
#clean up after this, not shown is running the filtering command, closing and waitpid'ing
When I run that wget command directly from the command-line and redirect stderr to a file, something sane happens - the stdout will be the downloaded page, the stderr will contain the info about opening the given page.
wget -O - http://10.10.1.72/index.php 2> stderr_test_file
When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another.
I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything, went wrong.
Other important things:
I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output.
It's key that I be able to see what went wrong, not just reading $? >> 8 (i.e. I have to tell the user, hey, that IP address is wrong, or isn't the right kind of website, or whatever).
Finally, I'm choosing system/open3/exec over other perl-isms (i.e. backticks) because some of the input is provided by untrustworthy users.
You are passing an undefined value as the error handle argument to open3, and as IPC::Open3 says:
If CHLD_ERR is false, or the same file descriptor as CHLD_OUT, then STDOUT and STDERR of the child are on the same filehandle (this means that an autovivified lexical cannot be used for the STDERR filehandle, see SYNOPSIS) ...
The workaround is to initialize $wget_err to something before calling open3:
my ($wget_pid, $wget_in, $wget_out, $wget_err);
use Symbol qw(gensym);
$wget_err = gensym();
if (!($wget_pid = open3( ... ))) { ...
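Putting it together with the code from the question, a sketch of the fix might look like this (note that slurping all of stdout before stderr is fine for small pages, but could block if wget writes a lot to stderr first):
use IPC::Open3;
use Symbol qw(gensym);

my @cmd = ('wget', '-O', '-', 'http://10.10.1.72/index.php');
my ($wget_in, $wget_out);
my $wget_err = gensym();   # pre-created glob, so STDERR gets its own handle

my $wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd);
close($wget_in);

my @wget_outs = <$wget_out>;   # the downloaded page
my @wget_errs = <$wget_err>;   # wget's diagnostics, now on their own stream

waitpid($wget_pid, 0);
print STDERR "wget stderr: " . join('', @wget_errs);
print STDERR "wget stdout: " . join('', @wget_outs);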