I am doing the following steps:
1. Read all the text files in a directory and store them in an array named @files.
2. Run a foreach loop over each text file. Extract the file name (stripping off .txt) with a split operation and create a folder with that particular filename.
3. Rename that file to Test.txt (so it can work as input for another Perl executable).
4. Execute test.pl for each file by adding the line require "test.pl";
It works fine for the first file, but not for any after that. Here is my code:
opendir DIR, ".";
my @files = grep {/\.txt/} readdir DIR;

foreach my $files (@files) {
    @fn = split '\.', $files;
    mkdir "$fn[0]"
        or die "Unable to create $fn[0] directory <$!>\n";
    rename "$files", "Test.txt";
    require "test3.pl";
    rename "Test.txt", "$files";
    system "move $files $fn[0]";
}
You don't want the file loaded just once (which is what require does); you want it executed every time through the loop.
So, replace
require "test3.pl";
with
do "test3.pl";
You could also glob for the files in that directory.
Replace,
opendir DIR, ".";
my @files = grep {/\.txt/} readdir DIR;
with,
my @files = <*.txt>;
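Putting the two suggestions together, the loop might look something like this (an untested sketch; it keeps the question's test3.pl script and Windows move command):

#!/usr/bin/perl
use strict;
use warnings;

# glob picks up every .txt file in the current directory
my @files = <*.txt>;

foreach my $file (@files) {
    # strip the .txt extension to get the folder name
    my ($name) = split /\./, $file;

    mkdir $name or die "Unable to create $name directory <$!>\n";

    # present the file under the name the other script expects
    rename $file, "Test.txt" or die "Cannot rename $file: $!";

    # do() executes test3.pl on every iteration, unlike require()
    do "test3.pl";

    rename "Test.txt", $file or die "Cannot rename back: $!";
    system "move $file $name";
}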
I have .gz files inside a directory and I am reading them with Perl. Everything is OK, but what I don't understand is the order in which these files are being read. I can tell for sure that it is not alphabetical. So my question is: what order does Perl use by default to read files from a directory?
Below is a snippet of my code
# Open the source file
my $dir = "/home/myname/mydir";

# Open directory and loop through
opendir(DIR, $dir) or die $!;
while (my $file = readdir(DIR)) {
    # We only want files
    next unless (-f "$dir/$file");

    # Use a regular expression to find files ending in .gz
    next unless ($file =~ m/\.gz$/);

    my $gzip_file = "./mydir/$file";
    open ( my $gunzip_stream, "-|", "gzip -dc $gzip_file") or die $!;
    while (my $line = <$gunzip_stream> ) {
        print ("$line\n");
    }
}
readdir returns the files in whatever order the system returns them. I'm not aware of any OS that guarantees a particular order; it generally reflects the underlying filesystem, and I imagine different drives might even behave differently. If you need a predictable order, sort the entries yourself.
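For example, a sketch based on the question's own loop, sorting the entries before iterating:

opendir(DIR, $dir) or die $!;
my @files = sort readdir(DIR);   # alphabetical (ASCII) order
closedir(DIR);

foreach my $file (@files) {
    # We only want .gz files
    next unless -f "$dir/$file";
    next unless $file =~ m/\.gz$/;
    # ... process "$dir/$file" as before ...
}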
Quick synopsis: let's say there are multiple files of the same type in one directory (in this example, 10 .txt files). I am trying to use Perl's copy function to copy 5 of them into a new directory, then zip up that directory.
The code works...except the folder that is supposed to have the .txt files copied, doesn't actually have anything in it, and I don't know why. Here is my complete code:
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;
my $source_dir = "C:\\Users\\user\\Documents\\test";
my $dest_dir = "C:\\Users\\user\\Desktop\\files";
my $count = 0;
opendir(DIR, $source_dir) or die $!;
system('mkdir files');
while (my $file = readdir(DIR)) {
    print "$file\n";
    if ($count eq 5) {
        last;
    }
    if ($file =~ /\.txt/) {
        copy("$file", "$dest_dir/$file");
        $count++;
    }
}
closedir DIR;
system('"C:\Program Files\Java\jdk1.8.0_66\bin\jar.exe" -cMf files.zip files');
system('del /S /F /Q files');
system('rmdir files');
Everything appears to work: the files directory is created, then zipped up into files.zip. But when I open the zip file, the files directory is empty, so it's as if the copy statement didn't copy anything over.
In the $source_dir are 10 .txt files, like this (for testing purposes):
test1.txt
test2.txt
test3.txt
test4.txt
test5.txt
test6.txt
test7.txt
test8.txt
test9.txt
test10.txt
The files don't actually get copied over...NOTE: the print "$file\n" was added for testing, and it indeed is printing out test1.txt, test2.txt, etc. up to test6.txt so I know that it is finding the files, just not copying them over.
Any thoughts as to where I'm going wrong?
I think there is a typo in your script:
system('mkdir files');
should be:
system("mkdir $dest_dir");
But the real issue is that you are not using the full path of the source file. Change your copy to:
copy("$source_dir/$file", $dest_dir);
and see if that helps.
You might also want to look at File::Path and Archive::Zip; they would eliminate the system calls.
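For illustration, a rough sketch of how those modules could replace the system calls (untested; it reuses the paths from the question):

use strict;
use warnings;
use File::Copy;
use File::Path qw(make_path remove_tree);
use Archive::Zip qw(:ERROR_CODES);

my $source_dir = "C:\\Users\\user\\Documents\\test";
my $dest_dir   = "C:\\Users\\user\\Desktop\\files";

make_path($dest_dir);                 # replaces system('mkdir files')

opendir(my $dh, $source_dir) or die $!;
my $count = 0;
while (my $file = readdir($dh)) {
    last if $count == 5;
    next unless $file =~ /\.txt$/;
    copy("$source_dir/$file", $dest_dir) or die "Copy failed: $!";
    $count++;
}
closedir $dh;

# zip the directory, then remove it (replaces the jar/del/rmdir calls)
my $zip = Archive::Zip->new();
$zip->addTree($dest_dir, 'files');
$zip->writeToFileNamed('files.zip') == AZ_OK or die "Zip write error";
remove_tree($dest_dir);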
I wrote a program to check for misspellings or unused data in a text file. Now, I want to check all files in a directory using the same process.
Here are a few lines of the script that I run for the first file:
open MYFILE, 'checking1.txt' or die $!;
@arr_file = <MYFILE>;
close (MYFILE);
open FILE_1, '>text1' or die $!;
open FILE_2, '>Output' or die $!;
open FILE_3, '>Output2' or die $!;
open FILE_4, '>text2' or die $!;
for ($i = 0; $i <= $#arr_file; $i++) {
    if ( $arr_file[$i-1] =~ /\s+\S+\_name\s+ (\S+)\;/ ) {
        print FILE_1 "name : $i $1\n";
    }
...
I used only one file, checking1.txt, to execute the script, but now I want to run the same process on all files in all_file_directory.
Use an array to store the file names and then loop over them. At the end of each iteration, rename the output files or copy them somewhere else so that they do not get overwritten in the next iteration.
my @files = qw(checking1.txt checking2.txt checking3.txt checking4.txt checking5.txt);
foreach my $filename (@files) {
    open (my $fh, "<", $filename) or die $!;
    # perform operations on $filename using filehandle $fh
    # rename output files
}
Now for the above to work you need to make sure the files are in the same directory. If not then:
Provide the absolute path to each file in the @files array
Traverse the directory to find the desired files
If you want to traverse the directory then see:
How do I read in the contents of a directory in Perl?
How can I recursively read out directories in Perl?
Also:
Use the three-argument form of open
Always use strict; and use warnings; in your Perl program
and give proper names to the variables. For example:
@arr_file = <MYFILE>;
should be written as
@lines = <MYFILE>;
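As a rough sketch of how these points fit together (the per-file output name report_$filename is only illustrative, not taken from the question):

#!/usr/bin/perl
use strict;
use warnings;

my @files = qw(checking1.txt checking2.txt checking3.txt);

foreach my $filename (@files) {
    # three-argument open with lexical filehandles
    open my $in,  "<", $filename            or die "Cannot open $filename: $!";
    open my $out, ">", "report_$filename"   or die "Cannot open report_$filename: $!";

    my @lines = <$in>;
    close $in;

    # ... run the misspelling/unused-data checks on @lines,
    #     printing the results to $out so nothing is overwritten ...

    close $out;
}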
If all your files are in the same directory, put the program inside that directory and then run it.
To read the files from a directory, use glob:
while (my $filename = <*.txt>)   # change the file extension to whatever you want
{
    open my $fh, "<", $filename or die "Error opening $filename: $!\n";
    # do your stuff here
}
Why not use File::Find? It makes changing files in directories very easy; just supply the start directory.
It's not always the best choice, depends on your needs, but it's useful and easy almost every time I need to modify a lot of files all at once.
Another option is to just loop through the files, but in this case you'll have to supply the file names.
As mkHun pointed out, a glob can be helpful.
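To make the File::Find suggestion concrete, a minimal sketch might look like this (the start directory and the .txt filter are placeholders, not taken from the question):

use strict;
use warnings;
use File::Find;

my $start_dir = 'all_file_directory';   # assumed starting point

find(sub {
    return unless -f;           # skip directories
    return unless /\.txt$/;     # only look at .txt files
    my $path = $File::Find::name;
    # open $path here and run the same checks as before
}, $start_dir);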
I have a few sub-folders in the main folder, and my program does some calculations in each sub-folder. First, the code creates a "result" folder in the main folder for all the calculations. Then, for the calculation in each sub-folder, I want to create a folder inside the "result" folder with the same name as that sub-folder.
My working directory is "/home/abc/Desktop/test"; "test" is my main folder, and it contains the sub-folders "a", "b" and "c". My code creates the "result" folder in the "test" main folder, but it should also create the "a", "b" and "c" sub-folders inside the "result" folder. How can I fix my code?
#!/usr/bin/env perl
use strict;
use warnings;
use File::Path qw/make_path/;
use Cwd;
my $dir = cwd();

opendir (DIR, $dir) or die "Unable to open current directory! $!\n";
my @subdirs = readdir (DIR) or die "Unable to read directory! $!\n";
closedir DIR;

my $result_path = "$dir/results";
make_path("$result_path");

foreach my $subdir ( sort @subdirs ) {
    chdir($subdir) or die "Cannot cd to $dir: $!\n";
    make_path("$result_path/$subdir");
    system("echo '1 0' | program -f data.mol -o $result_path/$subdir outfile.txt");
    chdir("..");
}
I don't think File::Find::Rule is a good choice for this problem. The module's speciality is recursively searching directory trees, and here you just want a list of all the directories in a single folder. That can very simply be done with grep -d, glob '*'
Here's a version that uses the File::chdir module as per your previous question. It avoids the need for Cwd and File::Basename, and it allows you to localise the current working directory so that there is no need for chdir '..' at the end of each loop.
use strict;
use warnings;
use File::chdir;
my @folders = grep -d, glob '*';

my $result_path = "$CWD/result";
mkdir $result_path;

for my $folder ( @folders ) {
    my $result_folder = "$result_path/$folder";
    mkdir $result_folder;

    local $CWD = $folder;

    system("echo '1 0' | program -f data.mol -o $result_folder/output.txt");
}
File::Find::Rule->directory->in( $dir );
finds all directories recursively down the directory tree with starting point $dir. For each directory it finds, you are taking the basename.
So, when it comes across $dir/test/a, the basename of that is a, and your code goes ahead and creates result/a.
I suspect you do not need to find all the directories in a tree -- but given your jumbled problem description it is not easy to be certain.
Maybe you just want to opendir the directory, readdir all the entries (keeping only directories other than . and ..), and closedir when you are done, instead of traversing the entire tree under $dir.
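A minimal sketch of that opendir/readdir approach (variable names are only illustrative):

use strict;
use warnings;
use Cwd;

my $dir = cwd();

opendir my $dh, $dir or die "Unable to open $dir: $!";
# keep only real sub-directories, skipping . and ..
my @subdirs = grep { -d "$dir/$_" && $_ ne '.' && $_ ne '..' } readdir $dh;
closedir $dh;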
I've modified a script I wrote so that it now only copies .jpg files.
The script seems to work. It copies all of the .jpg files from one folder to another, but it is meant to keep looping every X seconds.
If I add a new .jpg file to the source folder after I have already started the script, it will not copy over the newly added file. If I stop and restart the script, it will copy the new .jpg file, but I want the script to copy items as they are put into the folder, without having to stop and restart it.
Before I added the glob call to restrict the copy to .jpg files, the script would copy anything in the folder, even files moved into it while the script was still running.
Why is this happening? Any help would be awesome.
Here is my code:
use File::Copy;
use File::Find;
my @source = glob ("C:/sorce/*.jpg");
my $target = q{C:/target};
while (1) {
    sleep (10);
    find(
        sub {
            if (-f) {
                print "$File::Find::name -> $target";
                copy($File::Find::name, $target)
                    or die(q{copy failed:} . $!);
            }
        },
        @source
    );
}
Your @source array contains a list of file names. It should contain a list of folders to start your search in. So simply change it to:
my $source = "C:/source";
I changed it to a scalar, because it only holds one value. If you want to add more directories at a later point, an array can be used instead. Also, of course, why mix a glob and File::Find? It makes little sense, as File::Find is recursive.
The file checking is then done in the wanted subroutine:
if (-f && /\.jpg$/i)
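Pulling those changes together, the loop might look something like this (a sketch based on the suggestions above, not tested):

use strict;
use warnings;
use File::Copy;
use File::Find;

my $source = "C:/source";
my $target = "C:/target";

while (1) {
    sleep 10;
    # File::Find rescans the directory on every pass,
    # so newly added .jpg files are picked up
    find(
        sub {
            if (-f && /\.jpg$/i) {
                print "$File::Find::name -> $target\n";
                copy($File::Find::name, $target)
                    or die "copy failed: $!";
            }
        },
        $source
    );
}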
It won't refresh its list of files if you only glob the list once.
I prefer to use File::Find::Rule, and I would rerun it on the directory for each iteration so the list is refreshed.
use File::Find::Rule;
use File::Copy;
my $source_dir = 'C:/source';
my $target_dir = 'C:/target';
while (1) {
    sleep 10;

    my @files = File::Find::Rule->file()
                                ->name( '*.jpg' )
                                ->in( $source_dir );

    for my $file (@files) {
        copy $file, $target_dir
            or die "Copy failed on $file: $!";
    }
}