How do you read user input from the terminal in Perl?

I have a script that starts off by asking the user for a filename.
I want to add a feature so that I can supply a filename on the command line when invoking the program.
To the point:
If I start my program with "perl wordcounter.pl text.txt",
how can I access the string after the name of the program (i.e. text.txt) from within the code?
I started doing something like:
$filename = "Filename supplied in the commandline";
if ($filename == 0) {
print "Word frequancy counter.\nEnter the name of the file that you wish to analyze.\n";
chomp ($filename = <STDIN>);
}
Basically, if no text file was supplied on the command line, $filename would still be 0 and the script would proceed to ask for a file.
Does anyone know how I can access the "Filename supplied in the commandline" in the code?

Try using the built-in array @ARGV:
my $x1 = $ARGV[0] || 'NONE';
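For example, a minimal sketch of the fall-back-to-prompt behaviour described in the question (the prompt text and variable names are just placeholders):

#!/usr/bin/perl
use strict;
use warnings;

# Take the filename from the command line if one was given,
# otherwise prompt for it on STDIN.
my $filename = shift @ARGV;
unless (defined $filename) {
    print "Word frequency counter.\nEnter the name of the file that you wish to analyze.\n";
    chomp($filename = <STDIN>);
}

open my $fh, '<', $filename or die "Cannot open '$filename': $!";
while (my $line = <$fh>) {
    # ... count words in $line here ...
}
close $fh;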

Although @ARGV is a promising option, I would rather use Getopt::Long.
Here is some sample code that you can adapt to your needs.
Try it like this: perl sample.pl -file text.txt, and again without the file argument: perl sample.pl
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long;

# set up my defaults
my ( $file, $help );
GetOptions(
    'file=s' => \$file,
    'help!'  => \$help,
) or die "Incorrect usage!\n";

if ($help) {
    print "Come on, it's really not that hard.\n";
} else {
    if ( defined $file && -e $file ) {
        print "Reading $file.\n";
        open( my $fh, '<', $file ) or die "unable to open \$file = $file\n";
        while (<$fh>) {
            print $_;
        }
        close $fh;
    } else {
        print "Enter the name of the file to read:\n";
        chomp( $file = <STDIN> );
        open( my $fh, '<', $file ) or die "unable to open \$file = $file\n";
        while (<$fh>) {
            print $_;
        }
        close $fh;
    }
}

Related

How can I combine the code so that the output is on the same line?

Can you help me combine both of these programs to display the output in rows with two columns? The first column is for the $1 captured by code 1 and the second column is for the $1 captured by code 2.
Kindly help me to solve this. Thank you :)
This is my code 1.
#!/usr/local/bin/perl
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
while (<FILE>) {
    my $line = $_;
    chomp ($line);
    if ($line=~ m/^\s+Timing Path Group \'(\S+)\'/) {
        $line = $1;
        print ("$1\n");
    }
}
close (FILE);
This is my code 2.
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
while (<FILE>) {
    my $line = $_;
    chomp ($line);
    if ($line=~ m/^\s+Levels of Logic:\s+(\S+)/) {
        $line = $1;
        print ("$1\n");
    }
}
close (FILE);
This is my output for code 1, which contains 26 lines of data:
**async_default**
**clock_gating_default**
Ddia_link_clk
Ddib_link_clk
Ddic_link_clk
Ddid_link_clk
FEEDTHROUGH
INPUTS
Lclk
OUTPUTS
VISA_HIP_visa_tcss_2000
ckpll_npk_npkclk
clstr_fscan_scanclk_pulsegen
clstr_fscan_scanclk_pulsegen_notdiv
clstr_fscan_scanclk_wavegen
idvfreqA
idvfreqB
psf5_primclk
sb_nondet4tclk
sb_nondetl2tclk
sb_nondett2lclk
sbclk_nondet
sbclk_sa_det
stfclk_scan
tap4tclk
tapclk
The output of code 2 also has the same number of lines.
paste is useful for this. Assuming your shell is bash, you can use process substitutions:
paste <(perl script1.pl) <(perl script2.pl)
That emits columns separated by a tab character. For prettier output, you can pipe the output of paste to column:
paste <(perl script1.pl) <(perl script2.pl) | column -t -s $'\t'
And with this, you don't need to try to "merge" your Perl programs.
To combine the two scripts and output the two items of data on the same line, you need to hold on to the values until the end of the file (or until you have both data items) and then output them at once. So you need to combine both loops into one:
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $input = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $output = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $input => $output
or die "gunzip failed: $GunzipError\n";
open (FILE, '<',"$output") or die "Cannot open $output\n";
my( $levels, $timing );
while (<FILE>) {
    my $line = $_;
    chomp ($line);
    if ($line=~ m/^\s+Levels of Logic:\s+(\S+)/) {
        $levels = $1;
    }
    if ($line=~ m/^\s+Timing Path Group \'(\S+)\'/) {
        $timing = $1;
    }
}
print "$levels, $timing\n";
close (FILE);
You still haven't given us one vital piece of information - what the input data looks like. Most importantly, are the two pieces of information you're looking for on the same line?
[Update: Looking more closely at your regexes, I see it's not possible for both pieces of information to be on the same line, as they are each supposed to be the first item on the line. It would be helpful if you were clearer about that in your question.]
I think this will do the right thing, no matter what the answer to your question is. I've also added the improvements I suggested in my answer to your previous question:
#!/usr/bin/perl
use strict ;
use warnings ;
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
my $zipped = "par_disp_fabric.all_max_lowvcc_qor.rpt.gz";
my $unzipped = "par_disp_fabric.all_max_lowvcc_qor.txt";
gunzip $zipped => $unzipped
or die "gunzip failed: $GunzipError\n";
open (my $fh, '<', $unzipped) or die "Cannot open '$unzipped': $!\n";
my ($levels, $timing);
while (<$fh>) {
    chomp;
    if (m/^\s+Levels of Logic:\s+(\S+)/) {
        $levels = $1;
    }
    if (m/^\s+Timing Path Group \'(\S+)\'/) {
        $timing = $1;
    }
    # If we have both values, then print them out and
    # set the variables to 'undef' for the next iteration
    if ($levels and $timing) {
        print "$levels, $timing\n";
        undef $levels;
        undef $timing;
    }
}
close ($fh);

Return file handle from subroutine and pass to other subroutine

I am trying to create a couple of functions that will work together. getFH should take in the mode to open the file (either '>' or '<') and the file itself (from the command line). It should do some checking to see if the file is okay to open, then open it and return the file handle. doSomething should take in the file handle, loop over the data, and do whatever. However, when the program gets to the while loop, I get the error:
readline() on unopened filehandle 1
What am I doing wrong here?
#! /usr/bin/perl
use warnings;
use strict;
use feature qw(say);
use Getopt::Long;
use Pod::Usage;
# command line param(s)
my $infile = '';
my $usage = "\n\n$0 [options] \n
Options
-infile Infile
-help Show this help message
\n";
# check flags
GetOptions(
    'infile=s' => \$infile,
    help       => sub { pod2usage($usage) },
) or pod2usage(2);
my $inFH = getFh('<', $infile);
doSomething($inFH);
## Subroutines ##
## getFH ##
## #params:
## How to open file: '<' or '>'
## File to open
sub getFh {
    my ($read_or_write, $file) = @_;
    my $fh;
    if ( ! defined $read_or_write ) {
        die "Read or Write symbol not provided", $!;
    }
    if ( ! defined $file ) {
        die "File not provided", $!;
    }
    unless ( -e -f -r -w $file ) {
        die "File $file not suitable to use", $!;
    }
    unless ( open( $fh, $read_or_write, $file ) ) {
        die "Cannot open $file", $!;
    }
    return($fh);
}

# Take in filehandle and do something with data
sub doSomething {
    my $fh = @_;
    while ( <$fh> ) {
        say $_;
    }
}
my $fh = @_;
This line does not mean what you think it means. It sets $fh to the number of items in @_ rather than the filehandle that is passed in - if you print the value of $fh, it will be 1 instead of a filehandle.
Use my $fh = shift, my $fh = $_[0], or my ($fh) = @_ instead.
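For example, a tiny demonstration of the difference (show_context is a made-up name):

sub show_context {
    my $count   = @_;    # scalar context: number of elements in @_, here 1
    my ($first) = @_;    # list context: the first element of @_
    print "$count\n";    # prints 1
    print "$first\n";    # prints the filehandle, e.g. GLOB(0x...)
}
show_context($inFH);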
As has been pointed out, my $fh = @_ will set $fh to 1, which is not a file handle. Use
my ($fh) = @_
instead, which uses list assignment.
In addition
-e -f -r -w $file will not do what you want. You need
-e $file and -f $file and -r $file and -w $file
And you can make this more concise and efficient by using underscore _ in place of the file name, which will re-use the information fetched for the previous file test
-e $file and -f _ and -r _ and -w _
However, note that you will be rejecting a request if a file isn't writeable, which makes no sense if the request is to open a file for reading. Also, -f will return false if the file doesn't exist, so -e is superfluous
It is good to include $! in your die strings as it contains the reason for the failure, but your first two tests don't set this value up, and so should be just die "Read or Write symbol not provided"; etc.
In addition, die "Cannot open $file", $! should probably be
die qq{Cannot open "$file": $!}
to make it clear if the file name is empty, and to add some space between the message and the value of $!
The lines read from the file will have a newline character at the end, so there is no need for say. Simply print while <$fh> is fine
Perl variable names are conventionally snake_case, so get_fh and do_something would be more usual.
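Putting those suggestions together, one possible sketch of the two subroutines (just an illustration of the advice above; the file-existence checks are left to open itself, which reports the reason in $!):

sub get_fh {
    my ($read_or_write, $file) = @_;
    die "Read or Write symbol not provided" unless defined $read_or_write;
    die "File not provided"                 unless defined $file;
    open my $fh, $read_or_write, $file
        or die qq{Cannot open "$file": $!};
    return $fh;
}

sub do_something {
    my ($fh) = @_;        # list assignment, so $fh really is the filehandle
    print while <$fh>;    # the lines already end in a newline
}

my $in_fh = get_fh('<', $infile);
do_something($in_fh);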

How to read from a file and direct output to a file if a file name is given on the command line, and print to the console if no argument is given

I made a file, "rootfile", that contains paths to certain files, and the Perl program mymd5.perl gets the md5sum for each file and prints it in a certain order. How do I redirect the output to a file if a name is given on the command line? For instance, if I do
perl mymd5.perl md5file
then it will feed output to md5file. And if I just do
perl mymd5.perl
it will just print to the console.
This is my rootfile:
/usr/local/courses/cs3423/assign8/cmdscan.c
/usr/local/courses/cs3423/assign8/driver.c
/usr/local/courses/cs3423/assign1/xpostitplus-2.3-3.diff.gz
This is my program right now:
open($in, "rootfile") or die "Can't open rootfile: $!";
$flag = 0;
if ($ARGV[0]){
open($out,$ARGV[0]) or die "Can't open $ARGV[0]: $!";
$flag = 1;
}
if ($flag == 1) {
select $out;
}
while ($line = <$in>) {
$md5line = `md5sum $line`;
#md5arr = split(" ",$md5line);
if ($flag == 0) {
printf("%s\t%s\n",$md5arr[1],$md5arr[0]);
}
}
close($out);
If you don't give a FILEHANDLE to print or printf, the output will go to STDOUT (the console).
There are several ways you can redirect the output of your print statements.
select $out;   # everything you print after this line will go to the file specified by the filehandle $out
...            # your print statements come here
close $out;    # close the handle when done to avoid confusing the rest of the program

# or you can give the filehandle directly in the print statement, as in:
print $out "Hello World!\n";
You can direct output to a filename influenced by the value in @ARGV as follows.
This will take the name of the file in $ARGV[0] and use it to name a new file, edit.$ARGV[0]:
#!/usr/bin/perl
use warnings;
use strict;

my $file = $ARGV[0];
open my $input, '<', $file or die $!;

my $editedfile = "edit.$file";
open my $name_change, '>', $editedfile or die $!;

if ($file eq "md5file") {
    while (<$input>) {
        # Do something...
        print $name_change $_;
    }
}
Perhaps the following will be helpful:
use strict;
use warnings;
while (<>) {
    my $md5line = `md5sum $_`;
    my @md5arr  = split( " ", $md5line );
    printf( "%s\t%s\n", $md5arr[1], $md5arr[0] );
}
Usage: perl mymd5.perl rootfile [> md5file]
The optional redirection at the end sends the output to the file md5file; if it is absent, the results are printed to the console.

Perl - empty rows while writing CSV from Excel

I want to convert Excel files to CSV files with Perl. For convenience, I like to use the module File::Slurp for read/write operations. I need it in a subfunction.
While printing to the screen the program generates the desired output, but the generated CSV files unfortunately just contain one row of semicolons; the fields are empty.
Here is the code:
#!/usr/bin/perl
use File::Copy;
use v5.14;
use Cwd;
use File::Slurp;
use Spreadsheet::ParseExcel;
sub xls2csv {
my $currentPath = getcwd();
my @files = <$currentPath/stage0/*.xls>;
for my $sourcename (@files) {
print "Now working on $sourcename\n";
my $outFile = $sourcename;
$outFile =~ s/xls/csv/g;
print "Output CSV-File: ".$outFile."\n";
my $source_excel = new Spreadsheet::ParseExcel;
my $source_book = $source_excel->Parse($sourcename)
or die "Could not open source Excel file $sourcename: $!";
foreach my $source_sheet_number ( 0 .. $source_book->{SheetCount} - 1 )
{
my $source_sheet = $source_book->{Worksheet}[$source_sheet_number];
next unless defined $source_sheet->{MaxRow};
next unless $source_sheet->{MinRow} <= $source_sheet->{MaxRow};
next unless defined $source_sheet->{MaxCol};
next unless $source_sheet->{MinCol} <= $source_sheet->{MaxCol};
foreach my $row_index (
$source_sheet->{MinRow} .. $source_sheet->{MaxRow} )
{
foreach my $col_index (
$source_sheet->{MinCol} .. $source_sheet->{MaxCol} )
{
my $source_cell =
$source_sheet->{Cells}[$row_index][$col_index];
if ($source_cell) {
print $source_cell->Value, ";"; # correct output!
write_file( $outFile, { binmode => ':utf8' }, $source_cell->Value, ";" ); # only one row of semicolons with empty fields!
}
}
print "\n";
}
}
}
}
xls2csv();
I know it has something to do with the parameter passing in the write_file function, but I couldn't manage to fix it.
Does anybody have an idea?
Thank you very much in advance.
write_file will overwrite the file unless the append => 1 option is given. So this:
write_file( $outFile, { binmode => ':utf8' }, $source_cell->Value, ";" );
will write a new file for each new cell value. It does, however, not match your description of "only one row of semicolons with empty fields", as it should be only one semicolon and one value.
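For instance, a one-line sketch with the append option (untested against the rest of your script):

write_file( $outFile, { binmode => ':utf8', append => 1 }, $source_cell->Value, ";" );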
I am doubtful about this sentiment of yours: "For convenience I like to use the module File::Slurp". While the print statement works as it should, using File::Slurp does not. So how is that convenient?
What you should do, if you still want to use write_file, is to gather all the lines to print and then print them all at once at the end of the loop. E.g.:
$line .= $source_cell->Value . ";"; # use concatenation to build the line
...
push @out, "$line\n"; # store in array
...
write_file(...., \@out); # print the array
Another simple option would be to use join, or to use the Text::CSV module.
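As a sketch of the Text::CSV route (the module and the semicolon separator are my assumption; $source_sheet and $outFile are the variables from your script), the whole sheet can be written with the output file opened only once:

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1, sep_char => ';', eol => "\n" } )
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag;

open my $out_fh, '>:utf8', $outFile or die "Cannot open $outFile: $!";

for my $row_index ( $source_sheet->{MinRow} .. $source_sheet->{MaxRow} ) {
    my @row;
    for my $col_index ( $source_sheet->{MinCol} .. $source_sheet->{MaxCol} ) {
        my $cell = $source_sheet->{Cells}[$row_index][$col_index];
        push @row, defined $cell ? $cell->Value : '';
    }
    $csv->print( $out_fh, \@row );    # one CSV line per spreadsheet row
}

close $out_fh;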
Well, in this particular case, File::Slurp was indeed complicating this for me. I just wanted to avoid repeating myself, which I did in the following clumsy working solution:
#!/usr/bin/perl
use warnings;
use strict;
use File::Copy;
use v5.14;
use Cwd;
use File::Basename;
use File::Slurp;
use Tie::File;
use Spreadsheet::ParseExcel;
use open qw/:std :utf8/;
# ... other functions
sub xls2csv {
my $currentPath = getcwd();
my @files = <$currentPath/stage0/*.xls>;
my $fh;
for my $sourcename (@files) {
say "Now working on $sourcename";
my $outFile = $sourcename;
$outFile =~ s/xls/csv/gi;
if ( -e $outFile ) {
unlink($outFile) or die "Error: $!";
print "Old $outFile deleted.";
}
my $source_excel = new Spreadsheet::ParseExcel;
my $source_book = $source_excel->Parse($sourcename)
or die "Could not open source Excel file $sourcename: $!";
foreach my $source_sheet_number ( 0 .. $source_book->{SheetCount} - 1 )
{
my $source_sheet = $source_book->{Worksheet}[$source_sheet_number];
next unless defined $source_sheet->{MaxRow};
next unless $source_sheet->{MinRow} <= $source_sheet->{MaxRow};
next unless defined $source_sheet->{MaxCol};
next unless $source_sheet->{MinCol} <= $source_sheet->{MaxCol};
foreach my $row_index (
$source_sheet->{MinRow} .. $source_sheet->{MaxRow} )
{
foreach my $col_index (
$source_sheet->{MinCol} .. $source_sheet->{MaxCol} )
{
my $source_cell =
$source_sheet->{Cells}[$row_index][$col_index];
if ($source_cell) {
print $source_cell->Value, ";";
open( $fh, '>>', $outFile ) or die "Error: $!";
print $fh $source_cell->Value, ";";
close $fh;
}
}
print "\n";
open( $fh, '>>', $outFile ) or die "Error: $!";
print $fh "\n";
close $fh;
}
}
}
}
xls2csv();
I'm actually NOT happy with it, since I'm opening and closing the files so often (I have many files with many lines). That's not very clever in terms of performance.
Currently I still don't know how to use join or Text::CSV in this case, in order to put everything into an array and to open, write, and close each file only once.
Thank you for your answer TLP.

Opening a file in Perl gives errors?

I am writing a Perl script which reads a text file (containing absolute paths of many files, one per line), extracts the file names from the absolute paths, and then appends all the file names, separated by spaces, to the same file. So, consider a test.txt file:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
So after running the script the same file will contain:
D:\work\project\temp.txt
D:\work/tests/test.abc
C:/office/work/files.xyz
temp.txt test.abc files.xyz
I have this script revert.pl:
use strict;
foreach my $arg (@ARGV)
{
    open my $file_handle, '>>', $arg or die "\nError trying to open the file $arg : $!";
    print "Opened File : $arg\n";
    my @lines = <$file_handle>;
    my $all_names = "";
    foreach my $line (@lines)
    {
        my @paths = split(/\\|\//, $line);
        my $last = @paths;
        $last = $last - 1;
        my $name = $paths[$last];
        $all_names = "$all_names $name";
    }
    print $file_handle "\n\n$all_names";
    close $file_handle;
}
When I run the script I am getting the following error:
>> perl ..\revert.pl .\test.txt
Too many arguments for open at ..\revert.pl line 5, near "$arg or"
Execution of ..\revert.pl aborted due to compilation errors.
What is wrong over here?
UPDATE: The problem is that we are using a very old version of Perl. So I changed the code to:
use strict;
for my $arg (@ARGV)
{
print "$arg\n";
open (FH, ">>$arg") or die "\nError trying to open the file $arg : $!";
print "Opened File : $arg\n";
my $all_names = "";
my $line = "";
for $line (<FH>)
{
print "$line\n";
my @paths = split(/\\|\//, $line);
my $last = @paths;
$last = $last - 1;
my $name = $paths[$last];
$all_names = "$all_names $name";
}
print "$line\n";
if ($all_names == "")
{
print "Could not detect any file name.\n";
}
else
{
print FH "\n\n$all_names";
print "Success!\n";
}
close FH;
}
Now it's printing the following:
>> perl ..\revert.pl .\test.txt
.\test.txt
Opened File : .\test.txt
Could not detect any file name.
What could be wrong now?
Perhaps you are running an old Perl version, so you have to use the two-argument version of open:
open(File_handle, ">>$arg") or die "\nError trying to open the file $arg : $!";
Note that I wrote File_handle without the $. Also, read and write operations on the file will be:
@lines = <File_handle>;
#...
print File_handle "\n\n$all_names";
#...
close File_handle;
Update: reading file lines:
open FH, "+>>$arg" or die "open file error: $!";
#...
while( $line = <FH> ) {
#...
}