Perl File::Copy is not actually copying the file - perl

Quick synopsis: Let's say there are multiple files of the same type in one directory (in this example, 10 .txt files). I am trying to use Perl's copy function to copy 5 of them into a new directory, then zip up that directory.
The code runs...except the folder that is supposed to receive the copied .txt files doesn't actually have anything in it, and I don't know why. Here is my complete code:
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;
my $source_dir = "C:\\Users\\user\\Documents\\test";
my $dest_dir = "C:\\Users\\user\\Desktop\\files";
my $count = 0;
opendir(DIR, $source_dir) or die $!;
system('mkdir files');
while (my $file = readdir(DIR)) {
    print "$file\n";
    if ($count eq 5) {
        last;
    }
    if ($file =~ /\.txt/) {
        copy("$file", "$dest_dir/$file");
        $count++;
    }
}
closedir DIR;
system('"C:\Program Files\Java\jdk1.8.0_66\bin\jar.exe" -cMf files.zip files');
system('del /S /F /Q files');
system('rmdir files');
Everything appears to work...the files directory is created, then zipped up into files.zip...but when I open the zip file, the files directory is empty, so it's as if the copy statement didn't copy anything over.
In the $source_dir are 10 .txt files, like this (for testing purposes):
test1.txt
test2.txt
test3.txt
test4.txt
test5.txt
test6.txt
test7.txt
test8.txt
test9.txt
test10.txt
The files don't actually get copied over...NOTE: the print "$file\n" line was added for testing, and it is indeed printing out test1.txt, test2.txt, etc. up to test6.txt, so I know it is finding the files, just not copying them over.
Any thoughts as to where I'm going wrong?

I think there is a typo in your script:
system('mkdir files');
should be:
system("mkdir $dest_dir");
but the real issue is that you are not using the full path of the source file. Change your copy call to:
copy("$source_dir/$file", $dest_dir);
and see if that helps.
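As an aside (this is a sketch, not part of the original answer), checking copy's return value would have made the failure visible immediately; the corrected loop, reusing the question's variables, could look like this:
while (my $file = readdir(DIR)) {
    last if $count == 5;
    next unless $file =~ /\.txt$/;
    copy("$source_dir/$file", "$dest_dir/$file")
        or warn "Could not copy $file: $!";
    $count++;
}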
You might also want to look at File::Path and Archive::Zip; they would eliminate the system calls.
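For reference, a minimal sketch of what that could look like (reusing $dest_dir from the question; an illustration, not a drop-in replacement):
use File::Path qw(make_path remove_tree);
use Archive::Zip qw(:ERROR_CODES);

make_path($dest_dir);                # replaces system('mkdir ...')

# ...copy the files into $dest_dir as shown above...

my $zip = Archive::Zip->new();
$zip->addTree($dest_dir, 'files');   # store the directory under the name 'files'
$zip->writeToFileNamed('files.zip') == AZ_OK
    or die "Failed to write files.zip";

remove_tree($dest_dir);              # replaces the del/rmdir calls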

Related

What order does Perl use by default to read all files in a directory?

I have .gz files inside a directory and I am reading them with Perl. Everything is OK, but what I don't understand is the order in which these files are being read. I can tell for sure that it is not alphabetical. So my question is: what order does Perl use by default to read files from a directory?
Below is a snippet of my code
# Open the source file
my $dir = "/home/myname/mydir";
# Open directory and loop through
opendir(DIR, $dir) or die $!;
while (my $file = readdir(DIR)) {
    # We only want files
    next unless (-f "$dir/$file");
    # Use a regular expression to find files ending in .gz
    next unless ($file =~ m/\.gz$/);
    my $gzip_file = "./mydir/$file";
    open ( my $gunzip_stream, "-|", "gzip -dc $gzip_file") or die $!;
    while (my $line = <$gunzip_stream> ) {
        print ("$line\n");
    }
}
readdir returns the files in the same order as the system returns them. I'm not aware of any guarantee of order from any OS. I imagine different drives might even behave differently.
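If a predictable order matters, the usual fix is to collect the entries and sort them before processing; a minimal sketch based on the question's snippet:
opendir(my $dh, $dir) or die $!;
my @gz_files = sort grep { /\.gz$/ && -f "$dir/$_" } readdir($dh);
closedir($dh);

for my $file (@gz_files) {
    print "$file\n";   # process in alphabetical order
}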

Build array of the contents of the working directory in perl

I am working on a script which utilizes files in surrounding directories using a path such as
"./dir/file.txt"
This works fine as long as the working directory is the one containing the script. However, the script is going out to multiple users, and some people may not change their working directory and will instead run the script by typing its entire path, like this:
./path/to/script/my_script.pl
This poses a problem: when the script tries to access ./dir/file.txt it looks for the ./dir directory relative to the home directory and, of course, can't find it.
I am trying to utilize readdir and chdir to correct the directory if it isn't the right one, here is what I have so far:
my $working_directory = $ENV{PWD};
print "Working directory: $working_directory\n"; # accurately prints working directory
my @directory = readdir $working_directory;      # crashes script
if (!("my_script.pl" ~~ @directory)) {            # if my_script.pl isn't in @directory, do this
    print "Adjusting directory so I work\n";
    print "Your old directory: $ENV{PWD}\n";
    chdir $ENV{HOME};                             # make the directory home
    chdir "./path/to/script/my_script.pl";        # make the directory correct
    print "Your new directory: $ENV{PWD}\n";
}
The line containing readdir crashes my script with the following error
Bad symbol for dirhandle at ./path/to/script/my_script.pl line 250.
which I find very strange, because I am running this from the home directory, which prints out properly right beforehand and has nothing to do with a "bad symbol"
I'm open to any solutions
Thank you in advance
readdir operates on a directory handle, not a path string. You need to do something like:
opendir(my $dh, $working_directory) || die "can't opendir: $!";
my @directory = readdir($dh);
Check perldoc for both readdir and opendir.
I think you're going about this the wrong way. If you're looking for a file that travels with your script, then what you should probably consider is the FindBin module - it lets you figure out the path to your script, for use when building paths.
So e.g.
use FindBin;
my $script_path = $FindBin::Bin;
open ( my $input, '<', "$script_path/dir/file.txt" ) or warn $!;
That way you don't have to faff about with chdir and readdir etc.

Perl: how to read files one by one from a directory, without building an array first?

How can I read log files one by one from a directory, without first building an array of filenames? I tried the array approach but it didn't meet my requirements, because log files keep being added to the working directory, and with a pre-built array the latest log files are missed. Is there a better solution for this? Below is the code I have tried; here the array contains all the files of the directory.
opendir ( DIR, $readDir ) || die "Error in opening dir $readDir\n";
my @files = grep { !/^\.\.?$/ } readdir DIR;
print STDERR "files: @files \n\n";
If you are using Linux:
my $log_content = `cat /log/dir/*.log`;
This will combine all the log file contents into one string.
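If the goal is instead to process files individually and pick up logs that appear later, one possible approach (a sketch only, not part of the original answer; the directory path, the %seen bookkeeping, and the sleep interval are assumptions) is to re-read the directory on every pass and skip files that have already been handled:
use strict;
use warnings;

my $readDir = '/log/dir';   # assumed path
my %seen;

while (1) {
    opendir(my $dh, $readDir) or die "Error in opening dir $readDir: $!";
    my @logs = sort grep { /\.log$/ && -f "$readDir/$_" } readdir($dh);
    closedir($dh);

    for my $file (@logs) {
        next if $seen{$file}++;                        # handle each file only once
        open(my $fh, '<', "$readDir/$file") or die $!;
        print while <$fh>;                             # replace with real processing
        close($fh);
    }
    sleep 5;                                           # poll for newly added logs
}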

rename the txt file extension using perl

I am doing the below steps:
1. Read all the text files in a directory and store them in an array named @files.
2. Run a foreach loop over each text file: extract the file name (stripping off .txt) using a split operation and create a folder with that particular filename. Rename the file to Test.txt (so it can work as input for another Perl executable), then execute test.pl for each file by adding the line require "test.pl";
It works fine for only one file, but for no more than that. Here is my code:
opendir DIR, ".";
my #files = grep {/\.txt/} readdir DIR;
foreach my $files (#files) {
#fn = split '\.', $files;
mkdir "$fn[0]"
or die "Unable to create $fn[0] directory <$!>\n";
rename "$files", "Test.txt";
require "test3.pl";
rename "Test.txt", "$files";
system "move $files $fn[0]";
}
You don't want the file to be loaded only once (which is what require does); you want it executed every time through the loop.
So, replace
require "test3.pl";
with
do "test3.pl";
You could also glob for the files in that directory.
Replace:
opendir DIR, ".";
my @files = grep {/\.txt/} readdir DIR;
with:
my @files = <*.txt>;
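Putting both suggestions together, the loop might look something like this (a sketch based on the question's code; test3.pl and the move call are kept as they were):
my @files = <*.txt>;                    # glob instead of opendir/readdir

foreach my $file (@files) {
    my ($name) = split /\./, $file;     # file name without the .txt extension
    mkdir $name
        or die "Unable to create $name directory <$!>\n";

    rename $file, "Test.txt";
    do "test3.pl";                      # runs on every iteration, unlike require
    rename "Test.txt", $file;

    system "move $file $name";
}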

How to recursively copy with wildcards in perl?

I've modified a script that I wrote so that it now only copies .jpg files.
The script seems to work: it copies all of the .jpg files from one folder to another, and it is meant to loop continually every X seconds.
However, if I add a new .jpg file to the source folder after I have already started the script, it will not copy over the newly added file. If I stop and restart the script, it will copy the new .jpg file, but I want the script to copy items as they are put into the folder without having to stop and restart it.
Before I added the glob to copy only .jpg files, the script would copy anything in the folder, even if it was moved into the folder while the script was still running.
Why is this happening? Any help would be awesome.
Here is my code:
use File::Copy;
use File::Find;
my @source = glob ("C:/sorce/*.jpg");
my $target = q{C:/target};
while (1) {
    sleep (10);
    find(
        sub {
            if (-f) {
                print "$File::Find::name -> $target";
                copy($File::Find::name, $target)
                    or die(q{copy failed:} . $!);
            }
        },
        @source
    );
}
Your @source array contains a list of file names. It should contain a list of folders to start your search in. So simply change it to:
my $source = "C:/source";
I changed it to a scalar, because it only holds one value. If you want to add more directories at a later point, an array can be used instead. Also, of course, why mix a glob and File::Find? It makes little sense, as File::Find is recursive.
The file checking is then done in the wanted subroutine:
if (-f && /\.jpg$/i)
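A sketch of the loop with those two changes applied (directory as the starting point, extension check inside the wanted sub; this illustrates the suggestion rather than reproducing the answer's own code) could look like this:
use File::Copy;
use File::Find;

my $source = 'C:/source';
my $target = 'C:/target';

while (1) {
    sleep 10;
    find(
        sub {
            # only plain files whose names end in .jpg
            if (-f && /\.jpg$/i) {
                print "$File::Find::name -> $target\n";
                copy($File::Find::name, $target)
                    or die "copy failed: $!";
            }
        },
        $source
    );
}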
The glob won't refresh its list of files if you only run it once.
I prefer File::Find::Rule, and would run it against the directory on each iteration so the list is updated.
use File::Find::Rule;
use File::Copy;

my $source_dir = 'C:/source';
my $target_dir = 'C:/target';

while (1) {
    sleep 10;
    my @files = File::Find::Rule->file()
                                ->name( '*.jpg' )
                                ->in( $source_dir );
    for my $file (@files) {
        copy $file, $target_dir
            or die "Copy failed on $file: $!";
    }
}