Why is gnuplot failing with "Unknown device: pngalpha"?

I'm running gnuplot on RH7 through a perl script using the Chart::Gnuplot perl module.
The perl version is 5.8.8.
The gnuplot version is a bit less obvious, but $VERSION in Gnuplot.pm is set equal to '0.23' (although I get the same results with ver 3.2)
Anyway, when I run this on RH6 it works fine; RH7 is the problem. The error is:
Unknown device: pngalpha
I tried different versions of Gnuplot.pm with no success. But from googling around, I think the problem may reside in a utility (a different install) that gnuplot is using to generate png-formatted output. I suspect there's something lacking in the RH7 environment for that.
Does anyone know what gnuplot uses to translate its native graphic format to png?

I realize this isn't what you asked, but nevertheless I suggest it as a way forward.
Having now looked at the documentation and issue tracker for the perl module Chart::Gnuplot, my take is that it is both too old and too narrowly focused on a limited set of gnuplot capabilities to be worth fixing or working around. You can see issues about inadequate png support, still open on the tracker, from 7 years ago.
I have done a fair amount of perl coding using gnuplot for graphics. Early on I looked into custom modules like the one you mention, but I soon found that it was much preferable to simply open a pipe to gnuplot and send the commands directly. I append below a simple example.
#!/usr/bin/perl -w
#
# open pipe to gnuplot and set terminal type
my $gnuplot = "/usr/local/bin/gnuplot";
open(GNUPLOT, "|$gnuplot") or die "can't open pipe to gnuplot: $!";
binmode GNUPLOT, ":encoding(UTF-8)";
my $outfile = $ARGV[0];
# send some simple commands one at a time
print GNUPLOT "set term pngcairo font 'arial,10' size 600,400\n";
print GNUPLOT "set output '$outfile'\n";
# send a block of commands
print GNUPLOT <<EOFgnu;
set title 'Example of gnuplot from perl'
sinc(x) = (x==0) ? 1.0 : sin(x) / x
plot sinc(x) with lines linecolor 'blue' linewidth 3
EOFgnu
# That's it. We're done.
close GNUPLOT;
And here is the file created by running perl example.pl foo.png: [resulting sinc(x) plot image omitted]
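On the original question itself: "Unknown device: pngalpha" usually means the gnuplot build on RH7 was compiled without that terminal. As far as I know, gnuplot does not hand off to an external converter; it renders png itself, via libgd (the png and pngalpha terminals) or cairo (pngcairo). Here is a minimal sketch that asks gnuplot which terminals it supports and picks a png-capable one to splice into the "set term" line above; the fallback order is my own choice:
# Sketch: "set terminal" with no argument makes gnuplot print the
# terminal list for this build; pick the first png driver it offers.
my $gnuplot = "/usr/local/bin/gnuplot";
my $terms   = `echo "set terminal" | $gnuplot 2>&1`;
my ($term)  = grep { $terms =~ /^\s*\Q$_\E\s/m } qw(pngcairo pngalpha png);
die "no png-capable terminal in this gnuplot build:\n$terms" unless $term;
print "using terminal: $term\n";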

Related

How to read text on the terminal inside a perl script

Is there any way to capture the text on the terminal screen inside a perl script? I know there are functions like system, exec, and backticks, but the problem is that they execute commands FROM the script. For example: in the terminal I type cd/ (or) ls, and after that I run my perl script, which should read what was written on the terminal screen (in this case, the script will capture cd/ (or) ls, whichever was given to the terminal). I came up with one solution: passing the commands you typed in the terminal as command line arguments to the script. But is there any other way?
Like this maybe:
history | perl -ne 'print $_'
As I understand it, in a situation where you've typed some stuff into a terminal like this:
[tai@littlerobot ~] echo "Hello"
Hello
[tai@littlerobot ~] perl myscript.pl
You want myscript.pl to be able to access the echo "Hello" part, and possibly also the Hello that was that command's output.
Perl does not provide such a feature. No programming language does or can provide such a feature because the process in which your script/program runs has no intrinsic knowledge about what happened in the same terminal before it was run. The only way it could access this text would be if it could ask the currently running terminal, which will have some record of this information (i.e. the scrollback buffer), even if it cannot distinguish between which characters in the text were typed by you, and which are output. However, I know of no terminal that exposes that information via any kind of public API.
So if you want myscript.pl to be able to access that echo "Hello", you'll need to pass it to your script. Piping history to your script (as shown by Mark Setchell in his answer) is one technique. history is a shell built-in, so it has as much knowledge as your shell has (which is not quite the same knowledge as your terminal has). In particular it can give you a list of what commands have been typed in this shell session. However, it cannot tell you about the output generated by those commands. And it cannot tell you about other shell sessions, so doing this in Perl is fairly useless:
my @history = `tcsh -c history`;
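What does work is the pipe shown in the first answer: have the shell hand its own history to the script on standard input. A minimal sketch of the receiving script (the name myscript.pl matches the example above; the number-stripping regex assumes bash's history format):
#!/usr/bin/perl
# myscript.pl -- invoke as:  history | perl myscript.pl
# The shell supplies its own history on STDIN, since only the shell
# (not Perl) knows what was typed in this session.
use strict;
use warnings;
while (my $line = <STDIN>) {
    $line =~ s/^\s*\d+\s+//;   # strip bash's leading history number
    print "you previously ran: $line";
}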
The last thing you could try (though it would be incredibly complicated to do) would be to ask the X server (or Windows if running on that operating system) for a screen shot and then attempt to locate which rectangle the current terminal is running in and perform OCR on it. This would be fraught with problems though, such as dealing with overlapping windows.
So, in summary, you cannot do this. It's nothing to do with Perl. You cannot do this in any programming language.

Cleanup huge Perl Codebase

I am currently working on a roughly 15 years old web application.
It contains mainly CGI perl scripts with HTML::Template templates.
It has over 12 000 files and roughly 260 MB of total code. I estimate that no more than 1500 perl scripts are needed and I want to get rid of all the unused code.
There are practically no tests written for the code.
My questions are:
Are you aware of any CPAN module that can help me get a list of only used and required modules?
What would be your approach if you wanted to get rid of all the extra code?
I was thinking of the following approaches:
try to override the use and require perl builtins with ones that output the loaded file name to a specific location (see the @INC-hook sketch after this list)
override the warnings and/or strict modules import function and output the file name in the specific location
study the Devel::Cover perl module and take the same approach and analyze the code when doing manual testing instead of automated tests
replace the perl executable with a custom one, which will log each name of file it reads (I don't know how to do that yet)
some creative use of lsof (?!?)
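For the first idea above, you don't actually need to replace the builtins: Perl allows a code reference at the front of @INC, and it is consulted for every use/require before the normal file search. A minimal sketch, with a made-up log path; returning nothing from the hook means "decline", so module loading proceeds normally:
# Log every file Perl is asked to load, via an @INC hook.
BEGIN {
    unshift @INC, sub {
        my (undef, $file) = @_;
        if (open my $fh, '>>', '/tmp/loaded_modules.log') {
            print {$fh} "$0\t$file\n";
            close $fh;
        }
        return;   # decline; Perl searches the rest of @INC as usual
    };
}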
Devel::Modlist may give you what you need, but I have never used it.
The few times I have needed to do something like this, I have opted for the more brute-force approach of inspecting %INC at the end of the program.
END {
    open my $log_fh, ...;
    print $log_fh "$_\n" for sort keys %INC;
}
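To apply that %INC dump across 1500 CGI scripts without editing each one, one option is to move the END block into a tiny module and load it everywhere through the PERL5OPT environment variable. The module name and log path below are made up, and your web server must propagate the variable (e.g. SetEnv under Apache) for it to reach CGI processes:
# ModLog.pm -- hypothetical helper; activate with PERL5OPT=-MModLog
package ModLog;
use strict;
use warnings;
END {
    # Append the script name plus every loaded module to one shared log.
    if (open my $fh, '>>', '/tmp/modlog.txt') {
        print {$fh} "$0\t$_\n" for sort keys %INC;
        close $fh;
    }
}
1;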
As a first approximation, I would simply run
egrep -r '\<(use|require)\>' /path/to/source/*
Then spend a couple of days cleaning up the output from that. That will give you a list of all of the modules used or required.
You might also be able to play around with @INC to exclude certain library paths.
If you're trying to determine execution path, you might be able to run the code through the debugger with 'trace' (i.e. 't' in the debugger) turned on, then redirect the output to a text file for further analysis. I know that this is difficult when running CGI...
Assuming the relevant timestamps are turned on, you could check access times on the various script files - that should rule out any top-level script files that aren't being used.
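If access times are reliable on your filesystems (many mounts use noatime or relatime, so treat the result as a hint only), here's a quick sketch of that scan; the source path and 90-day cutoff are placeholders:
# Flag scripts that have not been read in over 90 days.
use strict;
use warnings;
use File::Find;
find(sub {
    return unless /\.pl$/ && -f $_;
    my $days_since_access = -A $_;   # -A: days since last access
    print "$File::Find::name\n" if $days_since_access > 90;
}, '/path/to/source');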
Might be worth adding some instrumentation to CGI.pm to log the current script-name ($0) to see what's happening.

In Perl scripts, should we use shell commands or call Perl functions that imitate shell operations?

I want to know about the best practices here. Suppose I want to get the content of some line of a file. I can use a one-line shell command to get my answer, or write a subroutine, as shown in the code below.
A text file named some_text:
She laughed. Then both continued eating in silence, like strangers,
but after dinner they walked side by side; and there sprang up
between them the light jesting conversation of people who are free
and satisfied, to whom it does not matter where they go or what
they talk about.
Code to get the content of line 5 of the file:
#!perl
use warnings;
use strict;
my $file = "some_text";
my $lnum = 5;
my $shellcmd = "awk 'NR==$lnum' $file";
print qx($shellcmd);
print getSrcLine($file, $lnum);
sub getSrcLine {
    my ($file, $lnum) = @_;
    open FILE, $file or die "$!";
    my @ray = <FILE>;
    return $ray[$lnum - 1];
}
I ask this because I see a lot of Perl scripts where at some point, a shell command was called, while at some later point, the same task was done by a call to a (library or handwritten) function, for example, rm -rf versus File::Path::rmtree. I just want to make it consistent.
What is the recommended thing to do?
If there's a Perl function for the operation, Perl thinks you should use its version. However, you give an example of a Perl module providing a pure Perl way to do it. That's much different. There's no single answer (as in most things), so you have to decide for yourself what to do:
Does the pure Perl approach do it correctly? For example, File::Copy has some limitations because it makes some awkward decisions for the user, so many people think it's broken. See, for instance, File::Copy versus cp/mv.
Does the pure Perl approach do it in an acceptable time? Sometimes the external program is orders of magnitude faster. Sometimes it's a lot slower.
External commands usually are portable within a family of systems (e.g. all linux-like systems) but probably not across families (e.g. Windows and linux). Your tolerance for that might affect your answer. Even if you think you are running the same command, the different flavors of unix-like systems might have different switches for the operations.
Passing complicated arguments—spaces, quotes, and special characters—to external commands can make you cry. You have to do a lot of fiddly work to make sure you're handling arguments correctly. Perl subroutines don't care though.
You have to pay much more attention to what you are doing when you are using the external command. If you just call rm, Perl is going to search through your PATH and use the first thing called rm. That doesn't mean it's the program you think it is. I write about this quite a bit in the "Secure Programming Techniques" chapter of Mastering Perl.
If the pure Perl approach requires a module, especially if that module has many complicated dependencies, you might be in for dependency or distribution hell down the road.
Personally, I start with the pure Perl approach until it doesn't work for the situation.
For your particular examples, I'd use Perl. Shelling out to awk, which is a proto-Perl, is just odd. You should be able to do everything awk does right in Perl. If you have an awk program, you can convert it to Perl with the a2p program:
NR==5
a2p turns that into (modulo some setup bits at the start):
while (<>) {
    print $_ if $. == 5;
}
Notice that it still scans the entire file even though it already has the fifth line. However, you can use the translated program as a start:
while (<>) {
    if ($. == 5) {
        print;
        last;
    }
}
I don't think you should shell out to some other program to avoid that Perl code.
To remove a directory tree, I like File::Path. It has some dependencies, but they are all in the Perl Standard Library. There's very little pain, if any, associated with that module. I'd use it until I ran into a problem where it didn't work.
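For completeness, a minimal sketch of the File::Path call I mean (the directory path is made up); remove_tree collects per-file failures into a structure instead of dying halfway through:
# Recursive delete with error collection.
use File::Path qw(remove_tree);
remove_tree('/tmp/build-scratch', { safe => 1, error => \my $errors });
if ($errors && @$errors) {
    for my $err (@$errors) {
        my ($file, $message) = %$err;
        warn "could not remove $file: $message\n";
    }
}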
If you want your app to be portable to non-unix systems, then definitely code everything in Perl.
If not, it's really up to you... creating a new process is slower, but if that's not important for the task, then it doesn't matter. Personally, I would pick the solution I can implement more quickly.
It seems to me that code that works should be the first priority. Yours fails if the file name has a space in it, for example.
Using the shell makes it harder to code correctly since your program needs to properly generate another program to be run by sh. (This problem goes away if you use the multi-arg version of system to avoid the shell.)
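To make that concrete, here is a small sketch of the difference (the file name is invented): the single-string form goes through /bin/sh, which splits on the spaces; the list form does not.
my $file = 'my report (draft).txt';
# Fragile: the shell splits this into several arguments.
# system("rm -f $file");
# Safe: $file reaches rm as exactly one argument, no shell involved.
system('rm', '-f', $file) == 0
    or warn "rm exited with status ", $? >> 8, "\n";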
Furthermore, using external tools can make it hard to handle errors. You didn't even attempt to do so!
On the flip side, there are multiple reasons for using external tools. For example, Perl doesn't provide as good a file copy utility as cp; using the sort tool allows you to sort arbitrarily large files with limited RAM; etc.

What are the best-practices for implementing a CLI tool in Perl?

I am implementing a CLI tool using Perl.
What are the best-practices we can follow here?
As a preface, I spent 3 years engineering and implementing a pretty complicated command line toolset in Perl for a major financial company. The ideas below are basically part of our team's design guidelines.
User Interface
Command line options: allow as many as possible to have default values.
NO positional parameters for any command that has more than 2 options.
Have readable option names. If the length of the command line is a concern for non-interactive calling (e.g. some unnamed legacy shells have short limits on command lines), provide short aliases - Getopt::Long allows that easily.
At the very least, print all options' default values in '-help' message.
Better yet, print all the options' "current" values (e.g. if a parameter and a value are supplied along with "-help", the help message will print the parameter's value from the command line). That way, people can assemble the command line for a complicated command and verify it by appending "-help" before actually running it.
Follow Unix standard convention of exiting with non-zero return code if program terminated with errors.
If your program may produce useful (e.g. worth capturing/grepping/whatnot) output, make sure any error/diagnostic messages go to STDERR so they are easily separable.
Ideally, allow the user to specify input/output files via command line parameters instead of forcing "<" / ">" redirects - this makes life MUCH simpler for people who need to build complicated pipes using your command. Ditto for error messages - have a logfile option.
If a command has side effects, having a "whatif/no_post" option is usually a Very Good Idea.
Implementation
As noted previously, don't re-invent the wheel. Use standard command line parameter handling modules - MooseX::Getopt or Getopt::Long.
For Getopt::Long, assign all the parameters to a single hash as opposed to individual variables. One useful pattern is passing that CLI-args hash straight to an object constructor (see the sketch after this list).
Make sure your error messages are clear and informative... e.g. include "$!" in any IO-related error messages. It's worth spending an extra minute and two lines in your code to have separate "file not found" vs. "file not readable" errors, as opposed to spending 30 minutes in a production emergency because a non-readable-file error was misdiagnosed by Production Operations as "No input file" - this is a real-life example.
Not really CLI-specific, but validate all parameters, ideally right after getting them.
CLI doesn't allow for a "front-end" validation like webapps do, so be super extra vigilant.
As discussed above, modularize the business logic. Among other reasons already listed, the number of times I have had to re-implement an existing CLI tool as a web app is vast - and that's not difficult if the logic is already in a properly designed perl module.
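As a concrete illustration of the single-hash pattern from the second bullet, here's a minimal sketch; the option names and the MyApp class are invented:
use strict;
use warnings;
use Getopt::Long;
my %opt = (retries => 3, logfile => '-');   # defaults declared up front
GetOptions(\%opt, 'retries=i', 'logfile=s', 'whatif!', 'help')
    or die "error parsing command line options\n";
# The whole hash can now be handed to a constructor in one go.
my $app = MyApp->new(%opt);   # MyApp is hypothetical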
Interesting links
CLI Design Patterns - I think this is ESR's
I will try to add more bullets as I recall them.
Use POD to document your tool, follow the guidelines of manpages; include at least the following sections: NAME, SYNOPSIS, DESCRIPTION, AUTHOR. Once you have proper POD you can generate a man page with pod2man, view the documentation at the console with perldoc your-script.pl.
Use a module that handles command line options for you. I really like using Getopt::Long in conjunction with Pod::Usage; this way, invoking --help will display a nice help message.
Make sure that your script returns a proper exit value indicating whether it was successful or not.
Here's a small skeleton of a script that does all of these:
#!/usr/bin/perl

=head1 NAME

simple - simple program

=head1 SYNOPSIS

    simple [OPTION]... FILE...

        -v, --verbose    use verbose mode
        --help           print this help message

Where I<FILE> is a file name.

Examples:

    simple /etc/passwd /dev/null

=head1 DESCRIPTION

This is a simple program.

=head1 AUTHOR

Me.

=cut

use strict;
use warnings;
use Getopt::Long qw(:config auto_help);
use Pod::Usage;

exit main();

sub main {
    # Argument parsing
    my $verbose;
    GetOptions(
        'verbose' => \$verbose,
    ) or pod2usage(1);
    pod2usage(1) unless @ARGV;

    my @files = @ARGV;
    foreach my $file (@files) {
        if (-e $file) {
            print "File $file exists\n" if $verbose;
        }
        else {
            print "File $file doesn't exist\n";
        }
    }

    return 0;
}
Some lessons I've learned:
1) Always use Getopt::Long
2) Provide help on usage via --help, ideally with examples of common scenarios. It helps people who don't know or have forgotten how to use the tool (i.e., you in six months).
3) Unless it's pretty obvious to the user why, don't go for long periods (>5s) without output to the user. Something like 'print "Row $row...\n" unless ($row % 1000)' goes a long way.
4) For long-running operations, allow the user to recover if possible. It really sucks to get through 500k of a million, die, and have to start over again (see the sketch after this list).
5) Separate the logic of what you're doing into modules and leave the actual .pl script as barebones as possible; parsing options, display help, invoking basic methods, etc. You're inevitably going to find something you want to reuse, and this makes it a heck of a lot easier.
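For point 4, one simple recovery scheme is to persist the last completed row in a state file so a rerun resumes instead of restarting. A sketch, where job.resume and process_row are stand-ins for the real thing:
use strict;
use warnings;
my $state = 'job.resume';            # invented state-file name
my $start = 0;
if (open my $fh, '<', $state) {
    chomp($start = <$fh>);           # resume after the last checkpoint
}
for my $row ($start + 1 .. 1_000_000) {
    process_row($row);               # hypothetical work function
    if ($row % 1000 == 0) {
        open my $out, '>', $state or die "$state: $!";
        print {$out} $row;
        close $out;
        print "Row $row...\n";       # progress output, per point 3
    }
}
unlink $state;                       # clean finish: nothing to resume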
The most important thing is to have standard options.
Don't try to be clever; simply be consistent with already existing tools.
How to achieve this is also important, but only comes second.
Actually, this is quite generic to all CLI interfaces.
There are a couple of modules on CPAN that will make writing CLI programs a lot easier:
App::CLI
App::Cmd
If your app is Moose-based, also have a look at MooseX::Getopt and MooseX::Runnable.
The following points aren't specific to Perl but I've found many Perl CL scripts to be deficient in these areas:
Use common command line options. To show the version number, implement -v or --version, not --ver. For recursive processing, use -r (or perhaps -R, although in my Gnu/Linux experience -r is more common), not --rec. People will use your script if they can remember the parameters. It's easy to learn a new command if you can remember "it works like grep" or some other familiar utility.
Many command line tools process "things" (files or directories) within the "current directory". While this can be convenient, make sure you also add command line options for explicitly identifying the files or directories to process. This makes it easier to put your utility in a pipeline without developers having to issue a bunch of cd commands and remember which directory they're in.
You should use Perl modules to make your code reusable and easy to understand.
You should also have a look at Perl Best Practices.

Why does IIS crash when I print to stderr in Perl?

This has been driving me crazy. We have IIS (6) and Windows 2008 and ActiveState Perl 5.10. For some reason, whenever we do a warn or a carp, it eventually corrupts the app pool. Of course, that's a pretty big deal, since it means that our errors actually cause problems.
This happened with the previous versions of Perl (5.8), Windows (2003), and IIS (5). Anyway, basically I put in a carp or a warn, and I get an error message and then some garbage text. Any thoughts?
Check to make sure that IIS and the perl DLL are linked with the same version of the C runtime library. (Use depends.exe or dumpbin /dependents).
To expand: the problem may be that IIS has its FILE* table in one place, and the perl DLL thinks it's going to be in a slightly different place. When perl goes to find the stderr handle, it treats random memory as a file handle, with predictable results.
Try adding the following to the top of your scripts:
BEGIN {
    open STDERR, '>>', 'c:/iisError.log'
        or die "Can't write to c:/iisError.log: $!\n";
    binmode STDERR;
}
I'm not sure why you would have this problem. But several "wild" guesses as to sources for such a problem would be addressed by the above code.
(It has been a while since I read the source code for appending to files in Win32, but, as I recall, the >> mode plus binmode means that writes to the file from different processes are unlikely to collide, preventing overlapping text in the log.)
A couple of suggestions:
Make sure that the id of the worker process has write permission to the directory/file you are writing. I probably wouldn't give it full control of C:, though. Better to make a sub-directory.
Write to the event log instead of a file, using Win32::EventLog.
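A sketch of the Win32::EventLog route, as best I recall its interface (double-check the module docs before relying on it):
# Report an error to the Windows Application event log instead of
# writing to STDERR (which is what is upsetting IIS here).
use Win32::EventLog;
my $log = Win32::EventLog->new('Application')
    or die "can't open the Application event log";
$log->Report({
    EventType => EVENTLOG_ERROR_TYPE,
    Category  => 0,
    EventID   => 0,
    Data      => '',
    Strings   => "error in $0: something went wrong",
});
$log->Close;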
Update: I discovered that this error only happens when you have a variable in the warn. If the warn is just literal text, there are no issues. Also, the variable cannot be empty, and it looks like you have to have two warns with non-empty variables to hit the bug.