A module to generate a table automatically in perl - perl

I have some content on my STDOUT and I want that content arranged into a decent table.
Can anyone suggest a Perl module that handles this kind of requirement?
Thanks in advance, any small help is appreciated.
Thanks!
Aditya

Text::Table and Text::ASCIITable make two different outputs, the latter having outlines. I'm sure there are more hanging around CPAN. You also might look at formats, a little-used bit of Perl functionality, meant for formatting reports.

From CPAN, you can use Text::Table
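For example, here is a minimal sketch of how Text::Table could be used; the column titles and rows are made-up placeholders, so substitute your own data:
#!/usr/bin/perl
use strict;
use warnings;
use Text::Table;

# Column titles are given to the constructor, rows are loaded afterwards.
my $tb = Text::Table->new("Name", "Size", "Status");
$tb->load(
    [ "foo.txt", 1024, "ok"    ],
    [ "bar.log",   52, "stale" ],
);
print $tb;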

Assuming you want to pipe the STDOUT from the existing program into something else to format it, you can do something like this using printf.
Create a Perl script called process.pl:
#!/usr/bin/perl
use strict;
use warnings;
while (<>) {
    chomp;
    my $unformatted_input = $_;
    # Assuming you want to split on spaces; adjust if the input is in a fixed format.
    my @elements = split / +/, $unformatted_input, 4;
    # printf format string; you can adjust the lengths here. This takes the
    # items in the @elements array and makes each field 10 characters wide.
    # See http://perldoc.perl.org/functions/sprintf.html for options.
    my $format_string = "%10s%10s%10s%10s\n";
    printf($format_string, @elements);
}
Then, pipe your STDOUT to this and it will format it to screen:
$ yourProcessThatDoesStdout | process.pl


Convert Perl Script to VBA Script

I have a working Perl script that I would like to convert to VBA to run in an Excel macro so that it can easily be shared with other PCs. I have a shell script (where I pass the parameters) driving the Perl script.
I used the Perl script to read each specified column of all rows of data from a fixed-width file (below, the start and length are 54, 63) and compare that data with another file and print the difference. I pass parameters in the shell script as runpro.pl filea.txt fileb.csv > myoutput.txt
Any assistance would be great! Especially if someone can point me in the right direction, since the code is fairly simple. Thanks!
#!/usr/bin/perl
#Perl Script runpro.pl
#***************************************************
use strict;
use warnings;
my ($fa, $fb) = @ARGV;

@ARGV = ($fa);
my %codes;
while (<>) {
    s/[\r\n]+\z//;
    $_ = substr($_, 54, 63);
    s/\s+\z//;
    next if $_ eq "";
    $codes{$_} = 1;
}

@ARGV = ($fb);
my %descrip;
while (<>) {
    s/[\r\n]+\z//;
    s/,.*//;
    s/"//g;
    $descrip{$_} = 1 if s/^1234//;
}

for (sort keys %codes) {
    print "$_\n" unless ($descrip{$_});
}
A couple of points:
1) VBA is very different from Perl - so things that are one-liners in one will be tricky in the other
2) If you haven't used VBA in Excel, I suggest you start by "recording" a macro (first make the "Developer" tab in the ribbon visible, then select "record macro"), and start doing things like opening files and importing them (fixed width). After you stop the recording you will see the syntax for doing these things - that should help a lot
3) You will have to decide how you want to pass arguments to VBA - cells on a worksheet, dialog box... There is no such thing as "running VBA from the command line".
I wonder if you really need / want VBA or if you would be better off compiling a standalone program (.exe). Is this meant to run on a PC (Windows), Mac OS, or both? See for example this earlier question - maybe that's what you actually need (if not what you asked for...)?

How can I have Perl take input from STDIN one character at a time?

I am somewhat of a beginner at Perl (compared to people here). I know enough to be able to write programs that do many things through the command prompt. At one point, I decided to write a command-prompt game that constructed a maze and let me solve it. Besides quality graphics, the only thing it was missing was the ability to use the WASD controls without pressing Enter after every move I made in the maze.
To make my game work, I want Perl to take a single character as input from STDIN, without requiring something to separate my input, like the default \n. How would I accomplish this?
I have tried searching for a simple answer online and in a book that I have, but I didn't seem to find anything. I tried setting $/ = "", but that seemed to bypass all input. I think there may be a really simple answer to my question, but I am also afraid that it might be impossible.
Also, does $/ = "" actually bypass input, or does it take input so quickly that it assumes there isn't any input if I'm not already pressing the key?
IO::Prompt can be used:
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Prompt;
my $key = prompt '', -1;
print "\nPressed key: $key\n";
Relevant excerpt from perldoc -v '$/' related to setting $/ = '':
The input record separator, newline by default. This influences Perl's
idea of what a "line" is. Works like awk's RS variable, including
treating empty lines as a terminator if set to the null string (an empty line cannot contain any spaces or tabs).
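In other words, setting $/ to the empty string does not bypass input; it puts Perl into "paragraph mode", where each read returns a blank-line-separated chunk. Here is a tiny sketch using a made-up in-memory file just to show the effect:
use strict;
use warnings;

$/ = "";    # paragraph mode: records are separated by one or more blank lines
open my $fh, '<', \"first line\nsecond line\n\nnext paragraph\n" or die $!;
while (my $para = <$fh>) {
    print "--- one record ---\n$para";
}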
The shortest way to achieve your goal is to use this special construct:
$/ = \1;
This tells perl to read one character at a time. The next time you read from any stream (not just STDIN)
my $char = <STREAM>;
it will read 1 character per assignment. From perlvar "Setting $/ to a reference to an integer, scalar containing an integer, or scalar that's convertible to an integer will attempt to read records instead of lines, with the maximum record size being the referenced integer number of characters."
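A minimal sketch of that approach is below. Note that when reading interactively from a terminal, the terminal driver usually still buffers until Enter is pressed, so a game will typically also need the terminal put into raw/cbreak mode, e.g. with one of the modules mentioned below; quitting on 'q' here is just an example.
use strict;
use warnings;

$/ = \1;    # each read returns a record of exactly one character
while (defined(my $char = <STDIN>)) {
    last if $char eq 'q';    # example quit key
    print "got: $char\n";
}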
If you are using *nix, you will find Curses useful.
It has a getch method that does what you want.
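A rough sketch of the Curses approach (cbreak and noecho put the terminal into the unbuffered, non-echoing state a game usually wants):
use strict;
use warnings;
use Curses;

initscr();            # take over the screen
cbreak();             # deliver keypresses immediately, no Enter needed
noecho();             # don't echo the key back
my $key = getch();    # read a single keypress
endwin();             # restore the terminal
print "You pressed: $key\n";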
Term::TermKey also looks like a potential solution.
IO::Prompt is no longer maintained, but IO::Prompter has a nice example (quoted from that site):
use IO::Prompter;
# This call has no automatically added options...
my $assent = prompt "Do you wish to take the test?", -yn;
{
use IO::Prompter [-yesno, -single, -style=>'bold'];
# These three calls all have: -yesno, -single, -style=>'bold' options
my $ready = prompt 'Are you ready to begin?';
my $prev = prompt 'Have you taken this test before?';
my $hints = prompt 'Do you want hints as we go?';
}
# This call has no automatically added options...
scalar prompt 'Type any key to start...', -single;

How can I read a continuously updating log file in Perl?

I have an application generating logs every 5 seconds. The logs are in the format below.
11:13:49.250,interface,0,RX,0
11:13:49.250,interface,0,TX,0
11:13:49.250,interface,1,close,0
11:13:49.250,interface,4,error,593
11:13:49.250,interface,4,idle,2994215
and so on for other interfaces...
I am working to convert these into the CSV format below:
Time,interface.RX,interface.TX,interface.close....
11:13:49,0,0,0,....
Simple as of now, but the problem is that I have to get the data in CSV format online, i.e. as soon as the log file is updated the CSV should also be updated.
What I have tried, to read the output and make the header, is:
#!/usr/bin/perl -w
use strict;
use File::Tail;

my $head          = ["Time"];
my $pos           = {};
my $last_pos      = 0;
my $current_event = [];
my $events        = [];

my $file = shift;
$file = File::Tail->new($file);
while (defined($_ = $file->read)) {
    next if $_ =~ /some filter/;    # skip the lines you want to ignore
    my ($time, $interface, $count, $eve, $value) = split /[,\n]/, $_;
    my $key = $interface . "." . $eve;
    if (not defined $pos->{$key}) {
        $last_pos += 1;
        $pos->{$key} = $last_pos;
        push @$head, $key;
    }
    print join(",", @$head) . "\n";
}
Is there any way to do this using Perl?
Module Text::CSV will allow you to both read and write CSV format files. Text::CSV will internally use Text::CSV_XS if it's installed, or it will fall back to using Text::CSV_PP (thanks to Brad Gilbert for improving this explanation).
Grouping the related rows together is something you will have to do; it is not clear from your example where the source date goes to.
Making sure that the CSV output is updated is primarily a question of ensuring that you have the output file line buffered.
As David M suggested, perhaps you should look at the File::Tail module to deal with the continuous reading aspect of the problem. That should allow you to continually read from the input log file.
You can then use the 'parse' method in Text::CSV to split up the read line, and the 'print' method to format the output. How you combine the information from the various input lines to create an output line is a mystery to me - I cannot see how the logic works from the example you give. However, I assume you know what you need to do, and these tools will give you the mechanisms you need to handle the data.
No-one can do much more to spoon-feed you the answer. You are going to have to do some thinking for yourself. You will have a file handle that can be continuously read via File::Tail; you will have a CSV structure for reading the data lines; you will probably have another CSV structure for the written output; you will have an output file handle that you ensure is flushed every time you write. Connecting these dots is now your problem.
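As a starting point, here is a rough sketch of how those pieces could hang together. The output file name converted.csv is an assumption, the accumulation of per-interface columns is left as a placeholder, and the columns actually combined are just an example:
#!/usr/bin/perl
use strict;
use warnings;
use File::Tail;
use Text::CSV;
use IO::Handle;

my $csv = Text::CSV->new({ binary => 1 });

# Autoflush the output handle so the CSV file is updated as soon as
# each row is printed.
open my $out, '>>', 'converted.csv' or die "converted.csv: $!";
$out->autoflush(1);

my $tail = File::Tail->new(shift @ARGV);
while (defined(my $line = $tail->read)) {
    $csv->parse($line) or next;
    my ($time, $interface, $count, $event, $value) = $csv->fields;
    # ... accumulate the per-interface columns here, then emit a row:
    $csv->combine($time, $value) or next;    # placeholder column set
    print {$out} $csv->string, "\n";
}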

How can I extract fields from a CSV file in Perl?

I want to extract particular fields from a CSV file (830k records) and store them into a hash. Is there any fast and easy way to do this in Perl without using any external modules?
How can I achieve that?
Use Text::CSV_XS. It's fast, moderately flexible, and extremely well-tested. The answer to many of these questions is something on CPAN. Why spend the time to make something not as good as what a lot of people have already perfected and tested?
If you don't want to use external modules, which is a silly objection, look at the code in Text::CSV_XS and do that. I'm constantly surprised that people who think they can't use a module won't even use a known and tested solution as example code for the same task.
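If you can use the module, here is a rough sketch of the Text::CSV_XS approach; the file name data.csv and the choice of key column 0 and value column 2 are assumptions for illustration:
use strict;
use warnings;
use Text::CSV_XS;

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

my %hash;
open my $fh, '<', 'data.csv' or die "data.csv: $!";
while (my $row = $csv->getline($fh)) {
    # Key on column 0, store column 2 -- adjust to the fields you need.
    $hash{ $row->[0] } = $row->[2];
}
close $fh;

printf "stored %d records\n", scalar keys %hash;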
Assuming normal CSV (i.e., no embedded commas), to get the 2nd field, for example:
$ perl -F"," -lane 'print $F[1];' file
See also this code fragment taken from the Perl Cookbook, which is a great book in itself for Perl solutions to common problems.
Using the split command would do the job, I guess (guessing columns are separated by commas and no commas are present in the fields):
while (my $line = <INPUTFILE>) {
    my @columns = split(',', $line);    # field separator is ","
}
and then from the elements of the @columns array you can construct whatever hash you like.

How can I filter out specific column from a CSV file in Perl?

I am just a beginner in Perl and need some help in filtering columns using a Perl script.
I have about 10 columns separated by commas in a file and I need to keep 5 columns in that file and get rid of all the other columns. How do we achieve this?
Thanks a lot for anybody's assistance.
cheers,
Neel
Have a look at Text::CSV (or Text::CSV_XS) to parse CSV files in Perl. It's available on CPAN or you can probably get it through your package manager if you're using Linux or another Unix-like OS. In Ubuntu the package is called libtext-csv-perl.
It can handle cases like fields that are quoted because they contain a comma, something that a simple split command can't handle.
CSV is an ill-defined, complex format (weird issues with quoting, commas, and spaces). Look for a library that can handle the nuances for you and also give you conveniences like indexing by column names.
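For instance, here is a small sketch using Text::CSV's column-name support; the input file name and the list of columns to keep are made up for illustration:
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });

open my $in, '<', 'input.csv' or die "input.csv: $!";

# Use the header row to index fields by name.
$csv->column_names(@{ $csv->getline($in) });

# The columns to keep are example names only.
my @keep = qw(id name status price qty);
print join(',', @keep), "\n";
while (my $row = $csv->getline_hr($in)) {
    $csv->combine(@{$row}{@keep}) or next;
    print $csv->string, "\n";
}
close $in;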
Of course, if you're just looking to split a text file by commas, look no further than @Pax's solution.
Use split to pull the line apart, then output the ones you want (say, every second column). Create the following xx.pl file:
while (<STDIN>) {
    chomp;
    my @fields = split(",", $_);
    print "$fields[1],$fields[3],$fields[5],$fields[7],$fields[9]\n";
}
then execute:
$ echo 1,2,3,4,5,6,7,8,9,10 | perl xx.pl
2,4,6,8,10
If you are talking about CSV files in Windows (e.g., generated from Excel), you will need to be careful to handle fields that contain commas themselves but are enclosed by quotation marks.
In this case, a simple split won't work.
Alternatively, you could use Text::ParseWords, which is in the standard library. Add
use Text::ParseWords;
to the top of Pax's example above, and then substitute
my @fields = parse_line(q{,}, 0, $_);
for the split.
You can use some of Perl's built in runtime options to do this on the command line:
$ echo "1,2,3,4,5" | perl -a -F, -n -e 'print join(q{,}, $F[0], $F[3]).qq{\n}'
1,4
The above will -a(utosplit) using the -F(ield) of a comma. It will then join the fields you are interested in and print them back out (with a line separator). This assumes simple data without nested commas. I was doing this with an unprintable field separator (\x1d) so this wasn't an issue for me.
See http://perldoc.perl.org/perlrun.html#Command-Switches for more details.
I went looking and didn't find a nice CSV-compliant filter program flexible enough to be useful for more than just a one-off, so I wrote one. Enjoy.
Basic usage is:
bash$ csvfilter [-r <columnTitle>]* [-quote] <csv.file>
#!/usr/bin/perl
use strict;
use warnings;

use Getopt::Long;
use Text::CSV;

my $always_quote = 0;
my @remove;
if ( ! GetOptions('remove:s'     => \@remove,
                  'quote-always' => sub { $always_quote = 1; }) ) {
    die "$0: invalid option (use --remove [--quote-always])";
}

# Zero-based indexes of the columns to drop, worked out from the header row.
my @cols2remove;

# Return the fields with the columns listed in @cols2remove taken out.
sub filter {
    my @fields = @_;
    my %drop = map { $_ => 1 } @cols2remove;
    return map { $drop{$_} ? () : $fields[$_] } 0 .. $#fields;
}

# Create just one of these; it is reused for every output line.
my $csvOut = Text::CSV->new({ always_quote => $always_quote });

sub printLine {
    my @fields = @_;
    $csvOut->combine(filter(@fields)) or return;
    my $str = $csvOut->string();
    print "$str\n" if length($str);
}

my $csv = Text::CSV->new();

while (<>) {
    $csv->parse($_) or next;
    if ( $. == 1 ) {
        # First line is the header: map the --remove column titles to indexes.
        my @cols = $csv->fields;
        for my $rm (@remove) {
            for my $c (0 .. $#cols) {
                push(@cols2remove, $c) if $cols[$c] eq $rm;
            }
        }
        @cols2remove = sort { $a <=> $b } @cols2remove;
    }
    printLine($csv->fields);
}

exit(0);
In addition to what people here said about processing comma-separated files, I'd like to note that one can extract the even (or odd) array elements using an array slice and/or map:
@myarray[map { $_ * 2 } (0 .. 4)]
Hope it helps.
My personal favorite way to do CSV is using the AnyData module. It seems to make things pretty simple, and removing a named column can be done rather easily. Take a look on CPAN.
This answers a much larger question, but seems like a good relevant bit of information.
The unix cut command can do what you want (and a whole lot more). It has been reimplemented in Perl.