Delete files in a folder in Perl

I'm trying to delete all the files in the directory called spool, but it doesn't work. I'm trying to use unlink:
unlink glob "$dir/*home/roz/newfolder/spool*";
is the code I'm trying to use, but it doesn't work.

First of all, spool* does not match the files inside the 'spool' folder; rather, it matches all files in the 'newfolder' folder whose names start with 'spool'.
To get all the files in a folder named 'spool' use:
glob("$dir/*home/roz/newfolder/spool/*");
To get all hidden files in the 'spool' folder use:
glob("$dir/*home/roz/newfolder/spool/.*");
And finally, are you sure that '*home' is what you really want?
If it is a typo and you meant 'home', it will be clearer and less error-prone (you don't have to care about hidden files or files with spaces in the name) to use
my $path = "$dir/home/roz/newfolder/spool";
opendir(my $sdir, $path) or die "Unable to open $path: $!";
unlink grep { -f } map { "$path/$_" } readdir $sdir;
closedir $sdir;
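For completeness, here is a corrected version of the original one-liner as a minimal sketch; it assumes the intended path really is /home/roz/newfolder/spool (dropping the '*home' wildcard) and removes plain files only:
use strict;
use warnings;

my $spool = '/home/roz/newfolder/spool';   # assumed path

# glob "$spool/*" skips dotfiles; add glob "$spool/.*" if hidden files must go too
my $deleted = unlink grep { -f } glob "$spool/*";
print "Deleted $deleted file(s) from $spool\n";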


How to open a file from different directory in perl?

I am very new to Perl, so I would like to know if there is a way to
- open a file from a different directory (not the same directory as the Perl script.pl, for example)
- open multiple files that have the same name, e.g. sameName.txt, under the same parent directory but in different subdirectories, e.g.
directory:
- /alias/a/1/sameName.txt
- /alias/b/1/sameName.txt
- /alias/c/1/sameName.txt
for example as above, but at the same time there are also files with the same name, sameName.txt, in other directories which I don't want, e.g.
directory:
- /alias/a/2/sameName.txt
- /alias/b/2/sameName.txt
- /alias/c/2/sameName.txt
How can I automatically search the directory the user wants, using user input from <STDIN> rather than hard-coding it into the script perl.pl? For example, the user wants all the sameName.txt files that live under a /1/ directory but with different parents, namely the a, b and c folders. I want the script to read those sameName.txt files automatically, so that the user doesn't need to adjust the script every time a new path like d/1/sameName.txt is created.
If I want the data in these files with the same name in different directories, should I loop over them and save the contents into arrays, or should I copy all the contents and append them to a single file? I need to match the data between the files, which I have already scripted.
You can open a file from anywhere that you like.
The filename argument is a path and you associate that with a filehandle to access its data:
my $path = '/alias/a/1/sameName.txt';
open my $fh, '<', $path or die "Could not open $path: $!";
Perl doesn't care if another file in a different directory has the same name.
You distinguish them with a different file handle:
my $path2 = '/alias/a/2/sameName.txt';
open my $fh2, '<', $path2 or die "Could not open $path2: $!";
You can construct that second path by grabbing the filename portion of the first path and putting it together with the other directory. These are core Perl modules that should already be there:
use File::Basename;
use File::Spec::Functions;
my $other_dir = '/alias/a/2';
my $basename = basename( $path ); # sameName.txt
my $path2 = catfile( $other_dir, $basename );
Not quite sure what you are trying to do.
You might be interested in Learning Perl or the other resources at learn.perl.org.
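To address the part about finding every sameName.txt under a /1/ subdirectory regardless of the parent, one option is a glob with a wildcard for the parent directory and the subdirectory name taken from <STDIN>. A minimal sketch, assuming the /alias layout from the question:
use strict;
use warnings;

print "Which subdirectory (e.g. 1)? ";
chomp( my $sub = <STDIN> );

# '*' stands in for the parent directory (a, b, c, and any future d)
my @paths = glob "/alias/*/$sub/sameName.txt";

for my $path (@paths) {
    open my $fh, '<', $path or die "Could not open $path: $!";
    while ( my $line = <$fh> ) {
        # ... match data between the files here ...
    }
    close $fh;
}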

Build array of the contents of the working directory in perl

I am working on a script which utilizes files in surrounding directories using a path such as
"./dir/file.txt"
This works fine as long as the working directory is the one containing the script. However, the script is going out to multiple users, and some people may not change their working directory, instead running the script by typing its entire path like this:
./path/to/script/my_script.pl
This poses a problem: when the script tries to access ./dir/file.txt, it looks for the /dir directory in the home directory and, of course, can't find it.
I am trying to use readdir and chdir to correct the directory if it isn't the right one. Here is what I have so far:
my $working_directory = $ENV{PWD};
print "Working directory: $working_directory\n"; #accurately prints working directory
my @directory = readdir $working_directory; #crashes script
if (!("my_script.pl" ~~ @directory)){ #if my_script.pl isn't in @directory, do this
print "Adjusting directory so I work\n";
print "Your old directory: $ENV{PWD}\n";
chdir $ENV{HOME}; #make the directory home
chdir "./path/to/script/my_script.pl"; #make the directory correct
print "Your new directory: $ENV{PWD}\n";
}
The line containing readdir crashes my script with the following error
Bad symbol for dirhandle at ./path/to/script/my_script.pl line 250.
which I find very strange because I am running this from the home directory which prints out properly right beforehand and contains nothing to do with the "bad symbol"
I'm open to any solutions
Thank you in advance
readdir operates on a directory handle, not a path string. You need to do something like:
opendir(my $dh, $working_directory) || die "can't opendir: $!";
my @directory = readdir($dh);
Check perldoc for both readdir and opendir.
I think you're going about this the wrong way. If you're looking for a file that's travelling with your script, then what you should probably consider is the FindBin module - that lets you figure out the path to your script, for use in building file paths.
So e.g.
use FindBin;
my $script_path = $FindBin::Bin;
open ( my $input, '<', "$script_path/dir/file.txt" ) or warn $!;
That way you don't have to faff about with chdir and readdir etc.
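And if the script really must change its working directory, it can chdir to the script's own directory instead of a hard-coded path; a one-line sketch using the same module:
use FindBin;
chdir $FindBin::Bin or die "Cannot chdir to $FindBin::Bin: $!";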

perl how to read files one by one from directory other than array concept?

How can I read log files one by one from a directory, rather than with the array approach? I tried that approach, but it didn't meet my requirements, because log files keep being added to the current working directory, and with the array approach the latest log files are missed. Is there any better solution for this? Below is the code I tried; here the array contains all the files of the directory.
opendir ( DIR, $readDir ) || die "Error in opening dir $readDir\n";
my @files = grep { !/^\.\.?$/ } readdir DIR;
print STDERR "files: @files \n\n";
If you are using Linux:
my $log_content = `cat /log/dir/*.log`;
This will combine all the log file contents as one.
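Since the directory keeps gaining files, another option is to rescan it on every pass and remember which files have already been processed. A minimal polling sketch; the directory path and poll interval here are assumptions, not values from the question:
use strict;
use warnings;

my $readDir = '/var/log/myapp';   # assumed log directory
my %seen;                         # names already processed

while (1) {
    opendir my $dh, $readDir or die "Error in opening dir $readDir: $!";
    my @new = grep { /\.log$/ && !$seen{$_}++ } readdir $dh;
    closedir $dh;

    for my $log (@new) {
        open my $fh, '<', "$readDir/$log" or do { warn "Cannot open $log: $!"; next };
        while ( my $line = <$fh> ) {
            # ... process one line at a time ...
        }
        close $fh;
    }
    sleep 5;   # poll interval
}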

How to open multiple files in Perl

I'm really confused now. I'm new to Perl. The book I've been reading sometimes shows Perl code and sometimes Linux commands.
Is there any connection between them (Perl code and Linux commands)?
I want to open multiple files using Perl code. I know how to open a single file in Perl using:
open (MYFILE,'somefileshere');
and I know how to view multiple files in Linux using the ls command.
So how do I do this? Can I use ls in Perl? I want to open certain files only (Perl files) which don't have a visible file extension (I can't use *.txt etc., I guess).
A little help, guys?
Use the system function to execute a Linux command, and glob to get a list of files.
http://perldoc.perl.org/functions/system.html
http://perldoc.perl.org/functions/glob.html
Like:
my @files = glob("*.h *.m"); # matches all files with a .h or .m extension
system("touch a.txt"); # linux command "touch a.txt"
Directory handles are also quite nice, particularly for iterating over all the files in a directory. Example:
opendir(my $directory_handle, "/path/to/directory/") or die "Unable to open directory: $!";
while (my $file_name = readdir $directory_handle) {
    next if $file_name =~ /some_pattern/; # Skip files matching the pattern
    open (my $file_handle, '>', "/path/to/directory/$file_name") or warn "Could not open file '$file_name': $!";
    # Write something to $file_name. See perldoc -f open.
    close $file_handle;
}
closedir $directory_handle;
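As for opening only Perl files that have no visible extension, neither answer covers that directly; one possibility is to check each file's shebang line. A sketch under that assumption:
use strict;
use warnings;

opendir my $dh, '.' or die "Cannot open directory: $!";
my @candidates = grep { -f && !/\./ } readdir $dh;   # plain files with no dot in the name
closedir $dh;

for my $name (@candidates) {
    open my $fh, '<', $name or next;
    my $first_line = <$fh> // '';
    close $fh;
    print "$name looks like a Perl script\n" if $first_line =~ /^#!.*\bperl\b/;
}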

How to recursively copy with wildcards in perl?

I've modified some script that I've written to now only copy .jpg files.
The script seems to work: it will copy all of the .jpg files from one folder to another, but the script is meant to loop continually, copying every X seconds.
If I add a new .jpg file to the source folder after I have started the script, it will not copy over the newly added file. If I stop and restart the script, it will copy the new .jpg file, but I want the script to copy items as they are put into the folder, without having to stop and restart it.
Before I added the glob to copy only .jpg files, the script would copy anything in the folder, even files moved into the folder while the script was still running.
Why is this happening? Any help would be awesome.
Here is my code:
use File::Copy;
use File::Find;

my @source = glob("C:/sorce/*.jpg");
my $target = q{C:/target};

while (1) {
    sleep(10);
    find(
        sub {
            if (-f) {
                print "$File::Find::name -> $target";
                copy($File::Find::name, $target)
                    or die(q{copy failed:} . $!);
            }
        },
        @source
    );
}
Your @source array contains a list of file names. It should contain a list of folders to start your search in. So simply change it to:
my $source = "C:/source";
I changed it to a scalar because it only holds one value. If you want to add more directories at a later point, you can use an array instead. And why mix a glob and File::Find at all? It makes little sense, as File::Find is recursive.
The file checking is then done in the wanted subroutine:
if (-f && /\.jpg$/i)
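Putting both fixes together, the corrected loop might look like this sketch (paths assumed from the question):
use strict;
use warnings;
use File::Copy;
use File::Find;

my $source = 'C:/source';
my $target = 'C:/target';

while (1) {
    sleep 10;
    # find() walks $source recursively on each pass, so .jpg files
    # added while the script is running are picked up next time round
    find(
        sub {
            if ( -f && /\.jpg$/i ) {
                print "$File::Find::name -> $target\n";
                copy( $File::Find::name, $target )
                    or die "copy failed: $!";
            }
        },
        $source,
    );
}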
It won't refresh its list of files if you only glob the list once.
I prefer to use File::Find::Rule, and would use that for each iteration on the directory instead to update the list.
use File::Copy;
use File::Find::Rule;

my $source_dir = 'C:/source';
my $target_dir = 'C:/target';

while (1) {
    sleep 10;
    # Rescan the directory on every iteration so newly added files are seen.
    my @files = File::Find::Rule->file()
                                ->name( '*.jpg' )
                                ->in( $source_dir );
    for my $file (@files) {
        copy $file, $target_dir
            or die "Copy failed on $file: $!";
    }
}