Perl Net::FTP::Recursive: Can't download folder, but can download files of subfolders

I can successfully download files from a subfolder of the FTP server. But if I want to download the folders from the upper level, it does not work.
Here is the folder structure:
folder              rwx r-x r-x
    subfolder1      rwx r-x r-x
        file1       rw- r-- r--
        file2       rw- r-- r--
    subfolder2      rwx r-x r-x
        file3       rw- r-- r--
        file4       rw- r-- r--
If I use this:
$f1->cwd("/folder/subfolder1");
$f1->rget();
$f1->quit;
the files file1 and file2 will be downloaded.
If I use this:
$f1->cwd("/folder");
$f1->rget();
$f1->quit;
nothing is downloaded and the program terminates with a timeout. I expected it to download subfolder1 and subfolder2 together with their contents. Is there any explanation for this, and how can I solve it so that I can download both the subfolders and their files?
A detailed description of the code is here
UPDATE 1: Debugging
Debugging with
my $f1 = Net::FTP::Recursive->new($host1, Debug => 1) or die "Can't open $host1\n";
gives the following:
Net::FTP::Recursive=GLOB(0x312bf50)>>> CWD /folder
Net::FTP::Recursive=GLOB(0x312bf50)<<< 250 CWD command successful
Net::FTP::Recursive=GLOB(0x312bf50)>>> PWD
Net::FTP::Recursive=GLOB(0x312bf50)<<< 257 "/folder" is the current directory
Net::FTP::Recursive=GLOB(0x312bf50)>>> PASV
Net::FTP::Recursive=GLOB(0x312bf50)<<< 227 Entering Passive Mode (188,40,220,103,255,187).
Net::FTP::Recursive=GLOB(0x312bf50)>>> LIST
Net::FTP::Recursive=GLOB(0x312bf50)<<< 150 Opening BINARY mode data connection for file list
Timeout at C:/Strawberry/perl/lib/Net/FTP.pm line 1107.
UPDATE 2: Timeout at C:/Strawberry/perl/lib/Net/FTP.pm line 1107.
_list_cmd is the function containing the line mentioned in the debug output. I also add the lines where _list_cmd is used, and have wrapped long lines to make them more readable while preserving line numbers.
671 # Try to delete the contents
672 # Get a list of all the files in the directory, excluding
# the current and parent directories
673 my @filelist = map { /^(?:\S+;)+ (.+)$/ ? ($1) : () }
    grep { !/^(?:\S+;)*type=[cp]dir;/i } $ftp->_list_cmd("MLSD", $dir);
925 sub ls { shift->_list_cmd("NLST", @_); }
926 sub dir { shift->_list_cmd("LIST", @_); }
1087 sub _list_cmd {
1088 my $ftp = shift;
1089 my $cmd = uc shift;
1090
1091 delete ${*$ftp}{'net_ftp_port'};
1092 delete ${*$ftp}{'net_ftp_pasv'};
1093
1094 my $data = $ftp->_data_cmd($cmd, @_);
1095
1096 return
1097 unless (defined $data);
1098
1099 require Net::FTP::A;
1100 bless $data, "Net::FTP::A"; # Force ASCII mode
1101
1102 my $databuf = '';
1103 my $buf = '';
1104 my $blksize = ${*$ftp}{'net_ftp_blksize'};
1105
1106 while ($data->read($databuf, $blksize)) {
1107 $buf .= $databuf;
1108 }
1109
1110 my $list = [split(/\n/, $buf)];
1111
1112 $data->close();
1114 if (EBCDIC) {
1115 for (@$list) { $_ = $ftp->toebcdic($_) }
1116 }
1117
1118 wantarray
1119 ? @{$list}
1120 : $list;
1121 }

To download the directories and files, I tried the following workaround: looping over all subdirectories and applying rget to each. It does the job I want. Nevertheless, the question of why rget does not work at the upper level remains unanswered. At least it is now clear that it is not a permission problem.
# ftp-server directory
my $ftpdir = "folder";
# Define local download folder
my $download = "C:/local";
chdir($download);
# Change to remote directory
$f1->cwd($ftpdir) or die "Can't cwd to $ftpdir\n", $f1->message;
# Grab all folders of the top level
my @ftp_subdir = $f1->ls;
# Remove . and ..
@ftp_subdir = grep ! /^\.+$/, @ftp_subdir;
foreach my $sd (@ftp_subdir) {
    # Make folder on local computer
    my $localdir = catfile($download, $sd);
    mkdir $localdir;
    # Change local working directory
    chdir $localdir;
    # Change to remote sub directory to be downloaded
    $f1->cwd($sd) or die "Can't cwd to $sd\n";
    # Download files
    $f1->rget();
    # Change back to upper level
    $f1->cwd("..");
}
$f1->quit;


Why does Perl file test operator "-l" not detect symlinks?

Why does the Perl file test operator "-l" fail to detect symlinks under the following conditions?
System Info
john@testbed-LT:/temp2/test$ uname -a
Linux Apophis-LT 4.13.0-37-generic #42-Ubuntu SMP Wed Mar 7 14:13:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
john@testbed-LT:/temp2/test$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 17.10
Release: 17.10
Codename: artful
Perl Info
john@testbed-LT:/temp2/test$ perl -v
This is perl 5, version 26, subversion 0 (v5.26.0) built for x86_64-linux-gnu-thread-multi (with 56 registered patches, see perl -V for more detail)
Test Resources
john@testbed-LT:/temp2/test$ touch regular_file
john@testbed-LT:/temp2/test$ mkdir dir
john@testbed-LT:/temp2/test$ ln -s regular_file symlink
john@testbed-LT:/temp2/test$ ls -al
total 12
drwxrwxr-x 3 john john 4096 May 6 02:29 .
drwxrwxrwx 6 john john 4096 May 6 02:29 ..
drwxrwxr-x 2 john john 4096 May 6 02:29 dir
-rw-rw-r-- 1 john john 0 May 6 02:29 regular_file
lrwxrwxrwx 1 john john 12 May 6 02:29 symlink -> regular_file
Script Containing Failing "-l" Operator
john@testbed-LT:/temp2/test$ cat ~/.scripts/test.pl
#!/usr/bin/perl
use strict;
use warnings;
use Cwd 'abs_path';

my $targetDir = "/temp2/test";

opendir(DIR, $targetDir) || die "Can't open $targetDir: $!";
while (readdir DIR) {
    my $file = "$_";
    if ($file =~ m/^\.{1,2}/) {
        next;
    }
    $file = abs_path($file);
    if (-l "$file") {
        print "Link: $file\n";
    }
    elsif (-d "$file") {
        print "Dir: $file\n";
    }
    elsif (-f "$file") {
        print "File: $file\n";
    }
    else {
        print "\n\n *** Unhandled file type for file [$file]!\n\n";
        exit 1;
    }
}
closedir(DIR);
Script Output
john@testbed-LT:/temp2/test$ perl ~/.scripts/test.pl
File: /temp2/test/regular_file
Dir: /temp2/test/dir
File: /temp2/test/regular_file
Problem I'm Trying to Solve
Note in the above output that the symlink (named "symlink") is not listed while the file, "regular_file," is listed twice (I want "symlink" listed -- the actual link and not the file it points to).
When I change ... if(-l "$file") ... to ... if(lstat "$file") ... in the script, again "symlink" is not listed while "regular_file" is listed twice, but they are being listed from within the block meant to catch symlinks, i.e.:
john@testbed-LT:/tem2/test$ perl ~/.scripts/test.pl
Link: /temp2/test/regular_file
Link: /temp2/test/dir
Link: /temp2/test/regular_file
Goal
The output I'm trying to achieve (which is faked below -- not actually generated by the script, but by hand) is:
john@testbed-LT:/temp2/test$ perl ~/.scripts/test.pl
File: /temp2/test/regular_file
Dir: /temp2/test/dir
Link: /temp2/test/symlink
...but not necessarily in that order (I don't care about the order of the listing).
Why is the above-shown script not achieving the above-stated goal (why is the "-l" operator not working)?
perldoc Cwd:
abs_path
my $abs_path = abs_path($file);
Uses the same algorithm as getcwd(). Symbolic links and relative-path components ("." and "..") are resolved to return the canonical pathname, just like realpath(3). On error returns undef, with $! set to indicate the error.
(Emphasis mine.)
If you want to see symlinks, don't use abs_path.
What you want to do instead is
$file = "$targetDir/$file";
i.e. prepend the name of the directory you read $file from.
Additional notes:
opendir(DIR, $targetDir) || die "Can't open $targetDir: $!";
while (readdir DIR) {
my $file = "$_";
should be
opendir(my $dh, $targetDir) || die "Can't open $targetDir: $!";
while (my $file = readdir $dh) {
Why use bareword filehandles when you can just use normal variables (that are scoped properly)?
There's no reason to quote "$_" here.
Why first assign to $_ when you're just going to copy the string to $file in the next step?
Note in the above output that the symlink (named "symlink") is not listed while the file, "regular_file," is listed twice
Yeah, because you used abs_path to turn symlink into /temp2/test/regular_file. Get rid of that line.
By the way, you are missing
$file = "$targetDir/$file";
The only reason your program worked without it is that $targetDir happened to be the current working directory.
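Putting the advice together, a minimal corrected version of the script could look like this (a sketch, not tested against your setup; the dot check is also tightened to skip only "." and ".." rather than every dotfile):

#!/usr/bin/perl
use strict;
use warnings;

my $targetDir = "/temp2/test";

opendir(my $dh, $targetDir) || die "Can't open $targetDir: $!";
while (my $file = readdir $dh) {
    # Skip only the . and .. entries
    next if $file =~ m/^\.\.?$/;
    # Prepend the directory instead of calling abs_path, so a symlink
    # stays a symlink; -l must run before -d/-f, which follow the link
    my $path = "$targetDir/$file";
    if (-l $path) {
        print "Link: $path\n";
    }
    elsif (-d $path) {
        print "Dir: $path\n";
    }
    elsif (-f $path) {
        print "File: $path\n";
    }
    else {
        print "\n\n *** Unhandled file type for file [$path]!\n\n";
        exit 1;
    }
}
closedir($dh);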

How can I optimise the search of unique lines?

I have large tab separated files like the following example:
scaffold1443 182629 182998 chr1.1.1.241051.241420 367 99.80
scaffold1443 131948 132412 chr1.1.2.291778.292242 462 99.80
scaffold1443 96142 96474 chr1.1.3.327471.327803 330 99.70
scaffold1443 53153 53479 chr1.1.4.370342.370668 322 99.40
scaffold526 2870014 2870523 chr1.1.5.488372.488881 507 99.90
scaffold526 2865956 2866314 chr1.1.6.490869.491234 357 98.10
scaffold526 2867666 2868024 chr1.1.6.490869.491234 357 98.10
scaffold526 2485557 2485867 chr1.1.7.610677.610987 310 100.00
I want to print in a new file only the lines that the 4th column is unique.
In the previous example, all the lines should be printed except the 2 lines that have the "chr1.1.6.490869.491234" in the 4th column.
The following script that I wrote (it is a part of a larger pipeline) does the job but it is extremely slow, especially when the input file is very big.
#!/usr/bin/perl
use strict;
use warnings;
# This script takes the best-hits output and finds the unique elements
# that map to only one scaffold.
my $target = $ARGV[0];
my $chromosome = $ARGV[1];
my @mykeys = `cat OUTPUT_$target/psl_score_byname_$target/$chromosome.table | awk '{print \$4}' | sort -u`;
foreach (@mykeys)
{
    my $key = $_;
    chomp($key);
    my $command = "cat OUTPUT_$target/psl_score_byname_$target/$chromosome.table | grep -w $key";
    my @belongs = `$command`;
    chomp(@belongs);
    my $count = scalar(@belongs);
    if ($count == 1)
    {
        open FILE, ">>OUTPUT_$target/unique_hces_$target/$chromosome.txt" or die $!;
        print FILE "@belongs\n";
        @belongs = ();
    }
    else {
        @belongs = ();
    }
}
Is there any smarter and faster way to do it?
Thank you very much in advance.
Given that you do not want to print lines that have duplicates at all, you need to see the whole file before any printing, to first find the lines with duplicates. Then go back and print the others.
This can be done by keeping the whole file in memory along with ancillary data structures, or by making two passes. Since the file is "very big", here is a less memory-straining way:
use warnings;
use strict;

my $file = 'skip.txt';
open my $fh, '<', $file or die "Can't open $file: $!";

my (%seen, %dupe);
while (<$fh>)
{
    my $patt = (split)[3];
    # Record line numbers if the 4th field has been seen
    if (exists $seen{$patt}) {
        $dupe{ $seen{$patt} }++;   # number of the line where it was first seen, with count
        $dupe{$.} = 1;             # this line's number as well
    }
    else { $seen{$patt} = $. }     # first time this 4th field is seen
}

# Now we know all lines which carry a duplicate fourth field
my $outfile = 'filtered_' . $file;
open my $fh_out, '>', $outfile or die "Can't open $outfile: $!";

seek $fh, 0, 0;   # rewind to the beginning
$. = 0;           # seek doesn't reset $.

while (<$fh>) {
    print $fh_out $_ if not exists $dupe{$.};
}
close $fh_out;
The first time a duplicate is found, the line where its field was first seen must also be recorded; that is the $dupe{ $seen{$patt} }++ in that branch. This needs to be done only once. While we could check whether it has already been recorded, simply incrementing also picks up a potentially useful count of the duplicates.
I've added a few more duplicates (some more than twice) to your posted sample and this produces the correct output.
Comment on the posted code
The posted code checks the fourth field of each line against the whole file, thus processing the file as many times as there are lines. That is a lot of work and it has to take time, especially for big files.
Also, there is no reason to use external programs for that job.
As oneliner:
perl -F"\t" -lanE 'push #l,[#F];$s{$F[3]}++}{say join"\t",#$_ for grep{$s{$_->[3]}==1}#l' <<EOF
scaffold1443 182629 182998 chr1.1.1.241051.241420 367 99.80
scaffold1443 131948 132412 chr1.1.2.291778.292242 462 99.80
scaffold1443 96142 96474 chr1.1.3.327471.327803 330 99.70
scaffold1443 53153 53479 chr1.1.4.370342.370668 322 99.40
scaffold526 2870014 2870523 chr1.1.5.488372.488881 507 99.90
scaffold526 2865956 2866314 chr1.1.6.490869.491234 357 98.10
scaffold526 2867666 2868024 chr1.1.6.490869.491234 357 98.10
scaffold526 2485557 2485867 chr1.1.7.610677.610987 310 100.00
EOF
output
scaffold1443 182629 182998 chr1.1.1.241051.241420 367 99.80
scaffold1443 131948 132412 chr1.1.2.291778.292242 462 99.80
scaffold1443 96142 96474 chr1.1.3.327471.327803 330 99.70
scaffold1443 53153 53479 chr1.1.4.370342.370668 322 99.40
scaffold526 2870014 2870523 chr1.1.5.488372.488881 507 99.90
scaffold526 2485557 2485867 chr1.1.7.610677.610987 310 100.00
more readable:
perl -F"\t" -lanE '
push #lines, [ #F ]; $seen{ $F[3] }++;
END {
say join("\t",#$_) for grep { $seen{ $_->[3] } == 1 } #lines
}
'
You can translate it to a full script if you want; I created this as a one-liner because you said it is a part of a larger pipeline.
Also note, the above reads the whole file into memory first, so very big files could cause problems.
The simple approach involves using an associative array to identify duplicates.
perl -F'\t' -lane'
    push @{ $h{ $F[3] } }, $_;
    END {
        for (values(%h)) {
            print(@$_) if @$_ == 1;
        }
    }
' file.tsv
The above approach requires roughly as much memory as the size of the file. That's a no-go if your files are truly large.
If you have truly large files, the simple approach is to sort the file using the sort command-line utility (which is rather fast and can handle arbitrarily large files). By first rearranging the file so that duplicates are next to each other, we can easily filter out the duplicates without worrying about memory issues.
sort -t$'\t' -k 4,4 file.tsv | perl -F'\t' -lane'
    if ($key ne $F[3]) {
        print(@buf) if @buf == 1;
        @buf = ();
    }
    $key = $F[3];
    push @buf, $_;
    END { print(@buf) if @buf == 1; }
'
If you have truly large files, another relatively simple approach is to load the data into a database (e.g. an sqlite3 database). You could easily maintain the original order with this approach.
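For illustration, here is a rough sketch of that database idea using DBI with DBD::SQLite; the database file name, table, and column names are invented for this example, and it reads the TSV from standard input or from a file named on the command line:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:dbname=lines.db", "", "",
    { RaiseError => 1 });
$dbh->do("CREATE TABLE IF NOT EXISTS hits (field4 TEXT, line TEXT)");

# Load every line, keyed by its 4th field; one transaction for
# all inserts is much faster than autocommitting each row
my $ins = $dbh->prepare("INSERT INTO hits (field4, line) VALUES (?, ?)");
$dbh->begin_work;
while (my $line = <>) {
    chomp $line;
    $ins->execute((split /\t/, $line)[3], $line);
}
$dbh->commit;

# Print only lines whose 4th field occurs exactly once;
# ordering by rowid preserves the original input order
my $sth = $dbh->prepare(q{
    SELECT line FROM hits
    WHERE field4 IN (SELECT field4 FROM hits
                     GROUP BY field4 HAVING COUNT(*) = 1)
    ORDER BY rowid
});
$sth->execute;
while (my ($line) = $sth->fetchrow_array) {
    print "$line\n";
}
$dbh->disconnect;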

Perl: how to compare array to hash and print out results

I'm quite new to Perl, so I'm sorry if this is somewhat rudimentary.
I'm working with a Perl script that acts as a wrapper for some Python, text formatting, etc., and I'm struggling to get my desired output.
The script takes a folder; for this example, the folder contains 6 text files (test1.txt through test6.txt). The script then extracts some information from the files, runs a series of command-line programs, and outputs a tab-delimited result. However, that output contains only those results that made it through the rest of the processing by the script.
Here are some snippets of what I have so far:
use strict;
use warnings;
## create array to capture all of the file names from the folder
opendir(DIR, $folder) or die "couldn't open $folder: $!\n";
my @filenames = grep { /\.txt$/ } readdir DIR;
closedir DIR;
# here I run some subroutines; the last one looks like this
my $results = `blastn -query $shortname.fasta -db DB/$db -outfmt "6 qseqid sseqid score evalue" -max_target_seqs 1`;
# now I would like to compare what is in the @filenames array with $results
Example of tab delimited result - stored in $results:
test1.txt 200 1:1-20 79 80
test3.txt 800 1:1-200 900 80
test5.txt 900 1:1-700 100 2000
test6.txt 600 1:1-1000 200 70
I would like the final output to include all of the files that were run through the script, so I think I need a way to compare two arrays or perhaps compare an array to a hash?
Example of the desired output:
test1.txt 200 1:1-20 79 80
test2.txt 0 No result
test3.txt 800 1:1-200 900 80
test4.txt 0 No result
test5.txt 900 1:1-700 100 2000
test6.txt 600 1:1-1000 200 70
Update
Ok, so I got this to work with suggestions by @terdon by reading the file into a hash and then comparing. So I was trying to figure out how to do this without writing to a file and then reading the file back in - I still can't seem to get the syntax correct. Here's what I have; however, it seems like I'm not able to match the array to the hash, meaning the hash must not be correct:
#!/usr/bin/env perl
use strict;
use warnings;
# create variable to mimic blast results
my $blast_results = "file1.ab1 9 350 0.0 449 418 418 403479 403042 567
file3.ab1 2 833 0.0 895 877 877 3717226 3718105 984";
# create array to mimic filename array
my @filenames = ("file1.ab1", "file2.ab1", "file3.ab1");
# header for file
my $header = "Query\tSeq_length\tTarget found\tScore (Bits)\tExpect(E-value)\tAlign-length\tIdentities\tPositives\tChr\tStart\tEnd\n";
# initialize hash
my %hash;
# split blast results into array
my @row = split(/\s+/, $blast_results);
$hash{$row[0]} = $_;
print $header;
foreach my $file (@filenames){
    ## If this filename has an associated entry in the hash, print it
    if(defined($hash{$file})){
        print "$row[0]\t$row[9]\t$row[1]:$row[7]-$row[8]\t$row[2]\t$row[3]\t$row[4]\t$row[5]\t$row[6]\t$row[1]\t$row[7]\t$row[8]\n";
    }
    ## If not, print this.
    else{
        print "$file\t0\tNo Blast Results: Sequencing Rxn Failed\n";
    }
}
print "-----------------------------------\n";
print "$blast_results\n"; # test what results look like
print "-----------------------------------\n";
print "$row[0]\t$row[1]\n"; # test if array is getting split correctly
print "-----------------------------------\n";
print "$filenames[2]\n"; # test if other array is present
The result from this script is (the @filenames array is not matching the hash):
Query Seq_length Target found Score (Bits) Expect(E-value) Align-length Identities Positives Chr Start End
file1.ab1 0 No Blast Results: Sequencing Rxn Failed
file2.ab1 0 No Blast Results: Sequencing Rxn Failed
file3.ab1 0 No Blast Results: Sequencing Rxn Failed
-----------------------------------
file1.ab1 9 350 0.0 449 418 418 403479 403042 567
file3.ab1 2 833 0.0 895 877 877 3717226 3718105 984
-----------------------------------
file1.ab1 9
-----------------------------------
file3.ab1
I'm not entirely sure what you need here but the equivalent of awk's A[$1]=$0 is done using hashes in Perl. Something like:
my %hash;
## Open the input file
open(my $fh, "<", "text_file");
while(<$fh>){
    ## remove newlines
    chomp;
    ## split the line
    my @A = split(/\s+/);
    ## Save this in a hash whose keys are the 1st fields and whose
    ## values are the associated lines.
    $hash{$A[0]} = $_;
}
close($fh);
## Now, compare the file to @filenames
foreach my $file (@filenames){
    ## Print the file name
    print "$file\t";
    ## If this filename has an associated entry in the hash, print it
    if(defined($hash{$file})){
        print "$hash{$file}\n";
    }
    ## If not, print this.
    else{
        print "0\tNo result\n";
    }
}
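For the in-memory variant the update above is aiming for, the missing step is to split $blast_results into lines first, and only then split each line into fields. A sketch reusing the variables from the update (untested against real blastn output):

my %hash;
foreach my $line (split /\n/, $blast_results) {
    # Key on the file name, keep all the fields of the line
    my @row = split /\s+/, $line;
    $hash{ $row[0] } = \@row;
}
foreach my $file (@filenames) {
    if (defined $hash{$file}) {
        my @row = @{ $hash{$file} };
        print "$row[0]\t$row[9]\t$row[1]:$row[7]-$row[8]\t$row[2]\t$row[3]\t$row[4]\t$row[5]\t$row[6]\t$row[1]\t$row[7]\t$row[8]\n";
    }
    else {
        print "$file\t0\tNo Blast Results: Sequencing Rxn Failed\n";
    }
}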

Getting member size from zip using Archive::Zip::MemberRead

I am trying to read each member file size from a zip without actually extracting. I iterate through all member names, then use Archive::Zip::MemberRead to get a file handle for each member, against which I was hoping to be able to use the stat method to get the size. However, stat on a file handle from a zip file element returns an empty array so I can't get my file size. Here is my code:
my $zip = Archive::Zip->new($zipFilePath);
my @mbrs = $zip->memberNames();
foreach my $mbrName (@mbrs)
{
    my $fh = Archive::Zip::MemberRead->new($zip, $mbrName);
    my @fileStats = stat($fh);
    my $size = $fileStats[7];
    print "\n" . $mbrName . " -- " . $size;
}
However, the output I get does not display any file size:
dir/fileName1.txt --
dir/fileName2.txt --
The question is how to retrieve member file sizes without actually extracting them.
Why not just use the Archive::Zip module itself? This seems to work for me:
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES :CONSTANTS);
my $filename = "somezipfile.zip";
# Read in the ZIP file
my $zip = Archive::Zip->new();
unless ($zip->read($filename) == AZ_OK) {
    die "Read error\n";
}
# Loop through the members, printing their name,
# compressed size, and uncompressed size.
my @members = $zip->members();
foreach (@members)
{
    print " - " . $_->fileName() . ": " . $_->compressedSize() .
        " (" . $_->uncompressedSize() . ")\n";
}
Here is one way, but only if you have 7-zip installed:
#!/usr/bin/env perl
use warnings;
use strict;
## List files from zip file provided as first argument to the script, the format
## is like:
# Date Time Attr Size Compressed Name
#------------------- ----- ------------ ------------ ------------------------
#2012-10-19 16:56:38 ..... 139 112 1.txt
#2012-10-19 16:56:56 ..... 126 105 2.txt
#2012-10-19 16:57:24 ..... 71 53 3.txt
#2012-10-03 14:39:54 ..... 155 74 A.txt
#2012-09-29 17:53:44 ..... 139 70 AA.txt
#2011-12-08 10:41:16 ..... 30 30 AAAB.txt
#2011-12-08 10:41:16 ..... 18 18 AAAC.txt
# ...
for ( map { chomp; $_ } qx/7z l $ARGV[0]/ ) {
    # Omit headers and footers with this flip-flop.
    if ( my $l = ( m/^(?:-+\s+){2,}/ ... m/^(?:-+\s+){2,}/ ) ) {
        ## Don't match flip-flop boundaries.
        next if $l == 1 || $l =~ m/E0$/;
        ## Extract file name and its size.
        my @f = split ' ';
        printf qq|%s -- %d bytes\n|, $f[5], $f[3];
    }
}
I run it like:
perl script.pl files.zip
That yields in my test (with some output suppressed):
1.txt -- 139 bytes
2.txt -- 126 bytes
3.txt -- 71 bytes
A.txt -- 155 bytes
AA.txt -- 139 bytes
AAAB.txt -- 30 bytes
AAAC.txt -- 18 bytes
B.txt -- 40 bytes
BB.txt -- 131 bytes
C.txt -- 4 bytes
CC.txt -- 184 bytes
File1.txt -- 177 bytes
File2.txt -- 250 bytes
aaa.txt -- 30 bytes
...

cpan2rpm can't stat /tmp folder

I'm trying to build perl-Heap-Priority for RHEL6. The weird thing is that when I run
cpan2rpm Heap::Priority it shows the following:
...
Tarball extraction: [/root/rpm/SOURCES/Heap-Priority-0.01.tar.gz]
Can't stat /tmp/CldQkErG6r/18:51: No such file or directory
at /usr/bin/cpan2rpm line 392
get_meta(): No such file or directory at /usr/bin/cpan2rpm line 396.
...
Practically, this temporary folder is not created. But why?
My /tmp folder's permissions are 777:
drwxrwxrwt. 3 root root 4096 May 29 16:35 tmp
Known problem, see https://rt.cpan.org/Ticket/Display.html?id=72421. The problem is the space in the user column of the tar output.
$ tar -tzvf $HOME/rpmbuild/SOURCES/Heap-Priority-0.01.tar.gz |head -1
drwxr-xr-x James Freeman/544 0 2002-05-07 14:51 Heap-Priority-0.01/
Apply the following patch to fix the problem for this module. To get the name, instead of accessing the fifth column, we're accessing the last one. I do not know what else this patch might break, but it should be less wrong than the original code on average.
diff --git a/cpan2rpm b/cpan2rpm
index 28e8b01..6a36b68 100755
--- a/cpan2rpm
+++ b/cpan2rpm
@@ -1259,7 +1259,7 @@ sub untar($) {
;
chomp($_ = qx/$cmd/);
- $_ = (split)[5] unless $zip;
+ $_ = (split)[-1] unless $zip;
$dst .= "/$1" if m|^(\S+)/?|;
$dst =~ s|/*$||; # path shouldn't end in / or tardir gets wiped
$dst =~ s|\./||; # paths in tarballs shouldn't be relative
You could have found all of this out by yourself by using the debugger. Learn to use this tool; it is invaluable.
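For example, a hypothetical debugger session for this case might look like the following (the breakpoint target is the untar sub from the patch above):

$ perl -d /usr/bin/cpan2rpm Heap::Priority
  DB<1> b untar        # stop when the untar sub is entered
  DB<2> c              # run until the breakpoint fires
  DB<3> n              # step over lines until the tar listing is captured
  DB<4> x $_           # inspect the captured listing line
  DB<5> x (split)[5]   # see which column (split)[5] actually picks up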
I think this might be a slightly cleaner way to do it:
--- /usr/bin/cpan2rpm.orig 2017-10-20 14:45:57.000000000 -0700
+++ /usr/bin/cpan2rpm 2017-10-23 12:29:07.006118950 -0700
@@ -1258,7 +1258,7 @@
my $cmd = $zip
? "unzip -l $_ | grep -P -o '\\S+/\$' |tail -1"
- : "tar -t${z}vf $_ |head -1"
+ : "tar --numeric-owner -t${z}vf $_ |head -1"
;
chomp($_ = qx/$cmd/);
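If I read the tar listing correctly, --numeric-owner makes tar print numeric UIDs and GIDs instead of names; a numeric ID cannot contain a space, so the owner field no longer splits in two and (split)[5] lands on the file name again.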