How to read an Excel file in Perl CGI?
my $book = ReadData('$lBASEPATH/macro/$gProj/TCO_Excel_Export/TCO_Data_201931316467.xlsx');
say 'A1: ' . $book->[1]{A1};
print $book;
Enclose the path to the .xlsx file in double quotes inside the ReadData call.
my $book = ReadData("$lBASEPATH/macro/$gProj/TCO_Excel_Export/TCO_Data_201931316467.xlsx");
say 'A1: ' . $book->[1]{A1};
print $book;
Variable interpolation does not happen inside single quotes.
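For illustration, a minimal sketch of the difference (the variable name is just a placeholder):

my $dir = '/tmp/export';
print 'file: $dir/data.xlsx', "\n";   # prints the literal text: file: $dir/data.xlsx
print "file: $dir/data.xlsx\n";       # prints: file: /tmp/export/data.xlsx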
I am trying to use Perl to interact with Quickbase. I used the query below to export a data table to a text file, but I am not getting the format I want. Any thoughts? Or is there another language that is easier to use with Quickbase?
@records = $qdb->doQuery($dbid,"{0.CT.''}","6.7.8.9");
$record_count = @records;
foreach $record (@records) {
print MYFILE "|";
foreach $field (keys %$record){
if ($field eq "ColumnA") {
print MYFILE "\"";
print MYFILE " $field : $record->{$field}";
print MYFILE "\"";
}
if ($field eq "ColumnB") {
print MYFILE "\"";
print MYFILE "$field : $record->{$field}";
print MYFILE "\"";
}
if ($field eq "ColumnC") {
print MYFILE "\"";
print MYFILE "$field : $record->{$field}";
print MYFILE "\"";
}
if ($field eq "ColumnD") {
print MYFILE "\"";
print MYFILE "$field : $record->{$field}";
print MYFILE "\"";
}
}
print MYFILE "\n";
}
close LOGFILE;
I wonder what kind of answer you are looking for. But...
I am trying to use Perl to interact with Quickbase,
That's great. Perl is very powerful and suitable for (nearly) any task.
I used the query below to export a data table to a text file
This is not very concise code. It is probably legacy code from an Excel or BASIC person. Some comments:
the code does the same thing for every field, so why do you need the if statements?
also, why break each print into three separate prints?
why do you need the | at the beginning of the line?
you probably want to close MYFILE instead of LOGFILE.
other points:
it is strange to print field_name: field_value into every cell instead of creating a column header, but YMMV, so maybe you need this.
it is better to use lexical filehandles, like $myfile instead of MYFILE (see the sketch after this list)
the foreach could be written as for :)
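As a minimal sketch of the last two points, assuming the same $record hash reference as in the question:

open my $myfile, '>', 'export.txt' or die "Cannot open export.txt: $!";
for my $field (keys %$record) {
    print $myfile qq("$field : $record->{$field}");
}
close $myfile;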
but I am not getting the format I want, any thoughts?
I'm unable to tell anything about your desired format, mainly because:
you didn't say what format you want to get
and, unfortunately, my crystal ball is down for scheduled maintenance. :)
Or is there another language that is easier to use with Quickbase?
Probably not.
Quickbase has an API for access (you can learn about it here), and every language (with some library) just bridges to it. For Perl it is the HTTP::QuickBase module. Did you read the docs?
Perl is extremely powerful, so anyone can write very concise code. You just need to learn the language (as with any other). (Unfortunately, I'm also closer to a beginner than an expert.)
The above code could be reduced to:
for my $record ($qdb->doQuery($dbid,"{0.CT.''}","6.7.8.9")) {
    print MYFILE '|',
        join('|', map {
            '"' . $_ . ': ' . $record->{$_} . '"'
        } keys %$record),
        "\n";
}
And it will do essentially the same as the above.
But I have to say, it is still the wrong solution. For example:
you need to cope with quoting, e.g. the "cell content".
also, the cell contents could themselves contain the " character, so you need to escape it. There are several escaping techniques for CSV files; one of them is doubling the quote character (usually the "), another is prepending a \. And there are many more possible problems, like "new line" characters (\n) in the cells, and so on.
To avoid CSV quoting/escaping hell and other possible problems with CSV generation, you should use the Text::CSV module. It has been developed over the last 20 years, so it is a thoroughly time/stress-tested module. You could use it like this:
use Text::CSV;
use autodie;

# sep_char => '|' only if you really want '|' instead of the standard comma;
# eol => "\n" makes print() append a newline after every row
my $csv = Text::CSV->new( { sep_char => '|', binary => 1, eol => "\n" } )
    or die "Cannot use CSV: " . Text::CSV->error_diag();

open my $fh, '>', 'some.csv';
for my $record (@$records) {
    $csv->print( $fh, [ map { $record->{$_} } keys %$record ] );
}
close $fh;
Of course, the code is not tested. So, what next?
learn about the Quickbase API module
learn about and install the Text::CSV module
read some tutorials and docs about the Perl language itself.
I am reading a file into an array and printing the contents like this:
open (FILE, "ans.txt");
@file = <FILE>;
print "@file\n";
The file looks like this:
51.5440622646247 - 31.2571428571429
51.5440622646247 - 48.0616834439923
But the output has an extra space at the beginning of every line after the first:
51.5440622646247 - 31.2571428571429
51.5440622646247 - 48.0616834439923
What causes this and how can I fix it?
Pass your file lines to print as a list, instead of interpolating them into a string:
print @file, "\n";
Your problem arises because when you interpolate an array, as in "@file\n", it is equivalent to the following:
print join($", @file) . "\n";
Search for $LIST_SEPARATOR in perlvar for more info.
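If you do want the string interpolation, another option is to localize $" (a.k.a. $LIST_SEPARATOR), which defaults to a single space. A minimal sketch:

{
    local $" = '';     # join array elements with nothing instead of a space
    print "@file\n";   # no extra space at the start of each line
}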
I have two very large XML files that have different kinds of line endings.
File A has CR LF at the end of each XML record. File B has only CR at the end of each XML record.
In order to read File B properly, I need to set the built-in Perl variable $/ to "\r".
But if I'm using the same script with File A, the script does not read each line in the file and instead reads it as a single line.
How can I make the script compatible with text files that have various line-ending delimiters? In the code below, the script reads the XML data and then uses a regex to split records on a specific record-ending XML tag like </record>. Finally it writes the requested records to a file.
open my $file_handle, '+<', $inputFile or die $!;
local $/ = "\r";
while(my $line = <$file_handle>) { #read file line-by-line. Does not load whole file into memory.
$current_line = $line;
if ($spliceAmount > $recordCounter) { #if the splice amount hasn't been reached yet
push (@setofRecords,$current_line); #start adding each line to the set of records array
if ($current_line =~ m|$recordSeparator|) { #check for the node to splice on
$recordCounter ++; #if the record separator was found (end of that record) then increment the record counter
}
}
#don't close the file because we need to read the last line
}
$current_line =~/(\<\/\w+\>$)/;
$endTag = $1;
print "\n\n";
print "End Tag: $endTag \n\n";
close $file_handle;
While you may not need it for this, in theory, to parse XML you should use an XML parser. I'd recommend XML::LibXML, or perhaps XML::Simple to start off with.
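For example, a minimal XML::LibXML sketch, assuming the records really are <record> elements (adjust the XPath to your document); the parser handles CR, LF, and CRLF line endings by itself:

use XML::LibXML;

my $doc = XML::LibXML->load_xml(location => $inputFile);
for my $record ($doc->findnodes('//record')) {
    print $record->toString, "\n";
}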
If the file isn't too big to hold in memory, you can slurp the whole thing into a scalar and split it into the correct lines yourself with a suitably flexible regular expression. For example,
local $/ = undef;
my $data = <$file_handle>;
my @lines = split /(?<=\r\n)|(?<=\r)(?!\n)|(?<=\n)/, $data;
foreach my $line (@lines) {
...
}
The look-behind assertions ((?<=...)) make split keep the end-of-line characters attached to each line, like the regular <> operator does. If you were just going to chomp them anyway, you can save yourself a step by passing /\r\n|\r|\n/ to split instead.
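Another approach, not from the answer above but worth mentioning: normalize the line endings right after slurping, so the rest of the script can assume plain \n:

local $/ = undef;
my $data = <$file_handle>;
$data =~ s/\r\n?/\n/g;               # turn CRLF and lone CR into LF
my @lines = split /\n/, $data;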
I am writing data to a file. The file will look like this:
[section1] [section2] [section3]
[section1] [section2] [section3]
I am not writing data to the file directly.
I am first appending rows to a string and then writing that string to the file.
$str .= "section1_data section2_data section3_data\n";
$str .= "section1_more_data section2_more_data section3_more_data\n";
Now what I want is that all the sections should be 30 chars long.
The data inside all sections will always be less than or equal to 30 chars.
Is there a way to do this in Perl?
I am using the following syntax to write to the file:
open FH,">>filename";
print FH $str;
close FH;
$str .= sprintf("[%-30s] [%-30s] [%-30s]\n",
    $section1_data,
    $section2_data,
    $section3_data,
);
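As a quick illustration (the sample values are made up), each bracketed field comes out left-justified and space-padded to exactly 30 characters:

my $line = sprintf("[%-30s] [%-30s] [%-30s]\n", 'abc', 'de', 'fghij');
# $line contains '[abc' followed by 27 spaces and ']', then the same for 'de' and 'fghij'

Note that %-30s pads but does not truncate; if a value could ever exceed 30 characters, %-30.30s would also cut it down to 30.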
I have a problem with my regex.
My script is written in Perl:
#!/usr/bin/perl
# Swap columns 1 and 2
while(<>){
my @cols = split (/\|/);
print "$cols[-3]/$cols[-4]\n";
}
exit;
I create an alias using the command:
alias inverseur="perl /laboratoire10/inverseur_colonnes.pl"
I am hoping to accomplish the following:
Write a "bash" script that creates a file container for each movie title (.avi) in the file.
The original file is: http://www.genxvideo.com/genxinventory-current.xls
but I have since renamed it to liste_films.csv .
All quotation marks, spaces, dashes, and other strange characters must be replaced by an underscore, "_".
The group becomes the directory name, and the movie title becomes the file name, followed by the .avi suffix. In order to do this, the code must process the fields "title" and "class" in reverse. You can reverse the fields "title" and "class" with the alias "inverseur" created earlier.
The script will obviously create each directory in "/laboratoire10" before creating the .avi files. There should be 253 valid directories in total. Directories are created through a pipe ("|") to the command "xargs mkdir -pv".
I need help augmenting my current code with a command to find .avi files whose name contains the string "wood" (case-insensitive).
It is very hard to understand what exactly you are trying to do. Under the assumption that you have a |-separated CSV and want a directory tree with CATEGORY/TITLE and a file named "cans.avi" under each of those directories, here is a one-liner Perl script.
perl -mText::CSV -e '$csv = Text::CSV->new({ sep_char=>"|",binary=>1,auto_diag => 1 } ) || die; open my $fh, "<", $ARGV[0] or die; while (my $row = $csv->getline($fh)) { $file = cleaner($row->[1])."/".cleaner($row->[0]); print "mkdir $file; touch $file/cans.avi\n"; } sub cleaner($) { my($f) = @_; $f =~ s/\W/_/g; $f;}' ~/tmp/genxinventory-current.csv
I converted the XLS file to | separated CSV using libreoffice, so your conversion mileage (kilometerage?) may vary.
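For readability, here is the same logic expanded into a short script (same assumptions: a |-separated CSV with the title in column 0 and the category in column 1, and the generated shell commands are printed rather than executed):

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ sep_char => '|', binary => 1, auto_diag => 1 })
    or die Text::CSV->error_diag;

open my $fh, '<', $ARGV[0] or die "Cannot open $ARGV[0]: $!";
while (my $row = $csv->getline($fh)) {
    my $dir = cleaner($row->[1]) . '/' . cleaner($row->[0]);
    print "mkdir -p $dir; touch $dir/cans.avi\n";   # pipe this output to sh to actually create them
}
close $fh;

# replace anything that is not a word character with an underscore
sub cleaner {
    my ($f) = @_;
    $f =~ s/\W/_/g;
    return $f;
}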