I am new to Perl scripting. I want to write a Perl script which deletes the previous backup files, extracts the new backup file from Dropbox, and renames it with a specific file name.
Example:
backup location:
D:\Database\store_name\ containing .bak files
Actual folder data:
D:\Database\Mahavir Dhanya Bhandar\ contains .bak files
D:\Database\Patel General Store\ contains .bak files
...and so on
How can I write Perl code which will:
1. delete the *.bak files in each store folder recursively
2. extract the new backup file from Dropbox and rename it with a specific file name.
Have you looked into walking your file tree? http://rosettacode.org/wiki/Walk_a_directory/Recursively. Combine this with simple file operations (copying, deleting, etc.) and you should be good.
use File::Find qw(find);
my $dir = "D:\Database\Store_Name";
find sub {unlink $File::Find::name if /\.bak$/}, $dir;
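If you want to clear the .bak files under every store folder in one pass, a minimal sketch could look like this (it assumes the top-level folder is D:/Database and that all the store folders sit beneath it):
use strict;
use warnings;
use File::Find qw(find);

# Assumption: all of the store folders live under this top-level directory.
my $top = 'D:/Database';

find(sub {
    return unless -f && /\.bak$/i;    # only plain files ending in .bak
    unlink $_ or warn "Could not delete $File::Find::name: $!";
}, $top);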
For the Dropbox part, and assuming that connectToDropbox() connects to your Dropbox:
use File::Copy;
use File::Find qw(find);
my $backup = connectToDropbox();
my $dir = "D\Database\Store_Name";
find sub {copy($backup -> getFile("file"), "newFile")} $dir;
Of course, this assumes that you can already set up a connection to Dropbox. If not, there is a good CPAN library you can check out.
I usually get a bunch of files whose names start with a dash '-'. This causes all sorts of problems when I run Linux commands, because anything after - is interpreted as a flag.
What is the fastest way to rename these files so there is no dash character at the front? I can manually rename each file by adding '--' before the file name. For example, '-File1' can be renamed with:
mv -- -File1 File1
But this is not ideal when I have to rename hundreds of files on the fly. Currently I have to export them and use a Windows program so I can batch rename them, then upload them back to the Linux box.
The easiest way to refer to such a file is ./-File1. (You only have the problem if the file is in the current directory, anyway.) Maybe if you get used to that it's not so bad.
To bulk rename them, you could do something like:
for f in -*; do mv "./$f" "renamed$f"; done
or, as @shellter suggests in a comment, to reproduce the example in the OP:
for f in -*; do mv "./$f" "${f#-}"; done
Note: the above will only remove a single - from the name.
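If a name can start with several dashes, a small Perl one-liner sketch (assuming the affected files are all in the current directory) can strip them all at once:
perl -e 'for (glob "./-*") { (my $new = $_) =~ s{^\./-+}{}; rename $_, $new or warn "rename $_: $!" }'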
If you have the util-linux package (most do?):
rename - '' ./-*
man rename
It might be easier to do this in the shell, but if you're worried about special cases, or if you would just rather use Perl, there are a couple of ways to do it. One is to use File::Copy's mv:
use strict;
use warnings;
use feature 'say';
use File::Copy qw(mv);

opendir(my $dir, ".") or die "Can't open current directory: $!";
foreach my $file (readdir($dir)) {
    my $new_name = $file =~ s/^-+//r;   # works if the filename begins with multiple '-'s
    if ($new_name ne $file) {
        say "$file -> $new_name";
        mv $file, $new_name;
    }
}
closedir($dir);
or use the rename builtin, though on some systems it cannot move files across filesystems:
rename $file, $new_name; #instead of mv $file, $new_name;
In either case, if a file with the new name already exists, it will be silently overwritten by this code. You might need some logic to take care of that:
# Stick inside the "if" clause above
if (-e $new_name) {
    say "$new_name already exists!";
    next;
}
Using find:
find . -name '-*' -exec rename -- - '' {} \;
Hi, I have written a Perl script which copies an entire directory structure from a source to a destination. Now I have to create a restore script from within that Perl script, i.e. generate a shell script which can use bash features to undo what the Perl script has done and restore the contents from the destination back to the source. I am struggling to find the correct function or command which can copy recursively (not strictly a requirement), but I want exactly the same structure as it was before.
Below is the way I am trying to create a file called restore to do the restoration process.
I am particularly looking for the algorithm.
Also, restore should restore the structure to a directory supplied on the command line if one is given; if not, you can assume the default inputs supplied to the Perl script:
$source
$target
In this case we would want to copy from target back to source.
So we have two different parts in one script:
1. Copy from source to destination.
2. Create a script file which will undo what part 1 has done.
I hope this makes it very clear.
unless(open FILE, '>'."$source/$file")
{
    # Die with an error message
    # if we can't open it.
    die "\nUnable to create $file\n";
}

# Write some text to the file.
print FILE "#!/bin/sh\n";
print FILE "$1=$target;\n";
print FILE "cp -r \n";

# Close the file.
close FILE;

# Here we change the permissions of the file.
chmod 0755, "$source/$file";
The last problem I have is that I couldn't get a literal $1 into my restore file, because inside the Perl script it refers to a Perl variable; I need it in the generated script so it can pick up a command-line input when I run restore, e.g. ./restore /home/xubuntu/User (so $0 is ./restore and $1 is /home/xubuntu/User).
First off, the standard way in Perl for doing this:
unless(open FILE, '>'."$source/$file") {
die "\nUnable to create $file\n";
}
is to use the or statement:
open my $file_fh, ">", "$source/$file"
    or die "Unable to create '$file'";
It's just easier to understand.
A more modern way would be to use autodie, which will handle all I/O problems when opening or writing to files.
use strict;
use warnings;
use autodie;
open my $file_fh, '>', "$source/$file";
You should look at the Perl Modules File::Find, File::Basename, and File::Copy for copying files and directories:
use File::Find;
use File::Basename;
use File::Copy;

my @file_list;
find( sub {
        return unless -f;
        push @file_list, $File::Find::name;
    },
    $directory );
Now, @file_list will contain all the files in $directory.
for my $file ( @file_list ) {
    my $directory = dirname $file;
    mkdir $directory unless -d $directory;
    copy $file, ...;
}
Note that autodie will also terminate your program if the mkdir or copy commands fail.
I didn't fill in the copy command because where you want to copy and how may differ. Also you might prefer use File::Copy qw(cp); and then use cp instead of copy in your program. The copy command will create a file with default permissions while the cp command will copy the permissions.
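For example, a short sketch contrasting the two (the file names are purely illustrative):
use File::Copy qw(copy cp);

# copy() creates the target with default permissions ...
copy("$source/data.txt", "$target/data.txt") or die "copy failed: $!";

# ... while cp() also carries over the source file's permission bits.
cp("$source/run.sh", "$target/run.sh") or die "cp failed: $!";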
You didn't explain why you wanted a bash shell command. I suspect you wanted to use it for the directory copy, but you can do that in Perl anyway. If you still need to create a shell script, the easiest way is via a here document:
print {$file_fh} <<END_OF_SHELL_SCRIPT;
Your shell script goes here
and it can contain as many lines as you need.
Since there are no quotes around `END_OF_SHELL_SCRIPT`,
Perl variables will be interpolated
This is the last line. The END_OF_SHELL_SCRIPT marks the end
END_OF_SHELL_SCRIPT
close $file_fh;
See Here-docs in Perldoc.
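Applied to this question, a minimal sketch of generating the restore script with a here-doc might look like this (the default destination and the cp invocation are assumptions about what restore should do; $source and $target come from the question):
open my $restore_fh, '>', "$source/restore"
    or die "Unable to create restore script: $!";

print {$restore_fh} <<END_OF_SHELL_SCRIPT;
#!/bin/sh
# Restore to the directory given on the command line,
# or back to the original source directory by default.
dest=\${1:-$source}
cp -r $target/. "\$dest"
END_OF_SHELL_SCRIPT

close $restore_fh;
chmod 0755, "$source/restore";
Because the shell's $1 is escaped as \$ in the here-doc, Perl leaves it alone and the generated script can read its own first argument.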
First, I see that you want to make a copy script, because if you only need to copy files, you can use:
system("cp -r /sourcepath /targetpath");
Second, if you need to copy subfolders, you can use the -r switch, can't you?
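As a side note, the list form of system() avoids shell-quoting surprises if the paths contain spaces (the paths here are just the placeholders from above):
system("cp", "-r", "/sourcepath/.", "/targetpath") == 0
    or die "cp failed: $?";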
I want to uncompress zipped file say, files.zip, to a directory that is different from my working directory.
Say, my working directory is /home/user/address and I want to unzip files in /home/user/name.
I am trying to do it as follows
#!/usr/bin/perl
use strict;
use warnings;
my $files= "/home/user/name/files.zip"; #location of zip file
my $wd = "/home/user/address" #working directory
my $newdir= "/home/user/name"; #directory where files need to be extracted
my $dir = `cd $newdir`;
my #result = `unzip $files`;
But when I run the above from my working directory, all the files get unzipped into the working directory. How do I redirect the uncompressed files to $newdir?
unzip $files -d $newdir
Use the Perl command
chdir $newdir;
and not the backticks
`cd $newdir`
which will just start a new shell, change the directory in that shell, and then exit.
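A minimal version of that fix, reusing the variables from the question:
chdir $newdir or die "Cannot chdir to $newdir: $!";
my @result = `unzip $files`;   # unzip now runs with $newdir as the working directory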
Though for this example the -d option to unzip is probably the simplest way to do what you want (as mentioned by ennuikiller), for other kinds of directory changing I like the File::chdir module, which allows you to localize directory changes when combined with Perl's local operator:
#!/usr/bin/perl
use strict;
use warnings;
use File::chdir;
my $files= "/home/user/name/files.zip"; #location of zip file
my $wd = "/home/user/address" #working directory
my $newdir= "/home/user/name"; #directory where files need to be extracted
# doesn't work, since cd is inside a subshell: my $dir = `cd $newdir`;
{
local $CWD = $newdir;
# Within this block, the current working directory is $newdir
my #result = `unzip $files`;
}
# here the current working directory is back to what it was before
You can also use the Archive::Zip module. Look specifically at the extractToFileNamed:
"extractToFileNamed( $fileName )
Extract me to a file with the given name. The file will be created with default modes. Directories will be created as needed. The $fileName argument should be a valid file name on your file system. Returns AZ_OK on success. "
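A sketch of how that could look here (assuming $files and $newdir as defined in the question; error handling kept minimal):
use Archive::Zip qw(:ERROR_CODES);

my $zip = Archive::Zip->new();
$zip->read($files) == AZ_OK or die "Cannot read $files";

for my $member ($zip->members()) {
    next if $member->isDirectory();   # directories get created as needed
    $member->extractToFileNamed("$newdir/" . $member->fileName()) == AZ_OK
        or die "Failed to extract " . $member->fileName();
}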
Recently there have been a few attackers trying malicious things on my server, so I've decided to somewhat "track" them, even though I know they won't get very far.
Now, I have an entire directory containing the server logs, and I need a way to search through every file in the directory and return the filename if a string is found. So I thought to myself: what better language to use for text and file operations than Perl? My friend is helping me with a script to scan all files for a certain IP and return the filenames that contain the IP, so I don't have to search for the attacker through every log manually. (I have hundreds.)
#!/usr/bin/perl
$dir = ".";
opendir(DIR, "$dir");
#files = grep(/\.*$/,readdir(DIR));
closedir(DIR);
foreach $file(#files) {
open FILE, "$file" or die "Unable to open files";
while(<FILE>) {
print if /12.211.23.200/;
}
}
However, it is giving me directory read errors. Any assistance is greatly appreciated.
EDIT: Code edited; it is still saying "permission denied, cannot open directory" on line 10. In case you are questioning the directory change to ".": I am just going to run the script from within the logs directory.
Mike.
Can you use grep instead?
To get all the lines with the IP, I would directly use grep, no need to show a list of files, it's a simple command:
grep 12\.211\.23\.200 *
I like to pipe it to another file and then open that file in an editor...
If you insist on wanting the filenames, it's also easy
grep -l 12\.211\.23\.200 *
grep is available on all Unix/Linux systems with the GNU tools, or on Windows using one of the many implementations (unxutils, cygwin, etc.).
You have to concatenate $dirname with $filname when using files found through readdir; remember you haven't chdir'ed into the directory where those files reside.
open FH, "<", "$dirname/$filname" or die "Cannot open $filname:$!";
Incidentally, why not just use grep -r to recursively search all subdirectories under your log dir for your string?
EDIT: I see your edits, and two things. First, this line:
@files = grep(/\.*$/,readdir(DIR));
is not effective, because you are searching for zero or more . characters at the end of the string. Since it's zero or more, it'll match everything in the directory. If you're trying to exclude files ending in ., try this:
@files = grep(!/\.$/,readdir(DIR));
Note the ! sign for negation if you're trying to exclude those files. Otherwise (if you only want those files and I'm misunderstanding your intent), leave the ! out.
In any case, if you're getting your die message on line 10, most likely you're hitting a file that has permissions such that you can't read it. Try putting the filename in the die output so you can see which file it's failing on:
open FILE, "$file" or die "Unable to open file: $file";
But as with other answers, and to reiterate: Why not use grep? The unix command, not the Perl function.
This will get the file names you are looking for in Perl, and will probably do it much faster than looping over every file and applying a Perl regex yourself.
@files = `find ~/ServerLogs -name "*.log" | xargs grep -l "<ip address>"`;
Although, this will require a *nix compliant system, or Cygwin on Windows.
Firstly get a list of files within your source directory:
opendir(DIR, "$dir");
#files = grep(/\.log$/,readdir(DIR));
closedir(DIR);
And then loop through those files:
foreach $file (@files)
{
    # file processing code
}
My first suggestion would be to use grep instead. The right tool for the job, as they say...
But to answer your question:
readdir just returns the filenames from the directory. You'll need to concatenate the directory name and filename together.
$path = "$dirname/$filname";
open FH, $path or die ...
Then you should ignore files that are actually directories, such as "." and "..". After getting the $path, check to see if it's a file.
if (-f $path) {
    open FH, $path or die ...
    while (<FH>)
BTW, I thought I would throw in a mention for File::Next. To iterate over all files in a directory (recursively):
use feature 'say';
use Path::Class; # always useful.
use File::Next;

my $files = File::Next::files( dir(qw/path to files/) ); # look in path/to/files

while ( defined( my $file = $files->() ) ) {
    $file = file( $file );
    say "Examining $file";
    say "found foo" if $file->slurp =~ /foo/;
}
File::Next is taint-safe.
~ doesn't auto-expand in Perl.
opendir my $fh, '~/' or die("Doin It Wrong");        # Doing It Wrong: the '~' is not expanded.
opendir my $fh, glob('~/') or die("That's right!");  # glob() expands the '~'.
Also, if you must use readdir(), make sure you guard the expression thus:
while (defined(my $filename = readdir(DH))) {
...
}
If you don't do the defined() test, the loop will terminate if it finds a file called '0'.
Have you looked on CPAN for log parsers? I searched with 'log parse' and it yielded over 200 hits. Some (probably many) won't be relevant - some may be. It depends, in part, on which web server you are using.
Am I reading this right? Your line 10 that gives you the error is
open FILE, "$file" or die "Unable to open files";
And the $file you are trying to read, according to line 6,
@files = grep(/\.*$/,readdir(DIR));
is a file whose name ends with zero or more dots. Is this what you really wanted? This basically matches every file in the directory, including "." and "..". Maybe you don't have enough permission to open the parent directory for reading?
EDIT: if you only want to read all files (including hidden ones), you might want to use something like the following:
opendir(DIR, ".");
#files = readdir(DIR);
closedir(DIR);
foreach $file (#files) {
if ($file ne "." and $file ne "..") {
open FILE, "$file" or die "cannot open $file\n";
# do stuff with FILE
}
}
Note that this doesn't take care of sub directories.
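If you do need to descend into subdirectories as well, a File::Find-based sketch (using the IP address from the question) might look like:
use File::Find;

find(sub {
    return unless -f;                 # skip directories and other non-files
    open my $fh, '<', $_ or return;   # quietly skip files we cannot read
    while (<$fh>) {
        if (/12\.211\.23\.200/) {
            print "$File::Find::name\n";   # report the file once and move on
            last;
        }
    }
    close $fh;
}, ".");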
I know I am way late to this discussion (I ran across it while searching for grep-related posts), but I am going to answer anyway:
It isn't specified clearly whether these are web server logs (Apache, IIS, W3SVC, etc.), but the best tool for mining them for data is Microsoft's LogParser tool. See logparser.com for more info.
LogParser will allow you to write SQL-like statements against the log files. It is very flexible and very fast.
Use perl from the command line, like a better grep:
perl -wnl -e '/12\.211\.23\.200/ and print;' *.log > output.txt
The benefit here is that you can chain logic far more easily:
perl -wnl -e '(/12\.211\.23\.2(0[1-9]|1[01])/ or /denied/i) and print;' *.log
If you are feeling wacky, you can also use more advanced command-line options to feed the result of one Perl one-liner into other one-liners.
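For example (the second filter is just an illustration):
perl -wnl -e '/12\.211\.23\.200/ and print;' *.log | perl -wnl -e '/denied/i and print;'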
You really need to read "Minimal Perl: For UNIX and Linux People", awesome book on this very sort of thing.
First, use grep.
But if you don't want to, here are two small improvements you can make that I haven't seen mentioned yet:
1) Change:
@files = grep(/\.*$/,readdir(DIR));
to
@files = grep({ !-d "$dir/$_" } readdir(DIR));
This way you will exclude not just "." and ".." but also any other subdirectories that may exist in the server log directory (which the open downstream would otherwise choke on).
2) Change:
print if /12.211.23.200/;
to
print if /12\.211\.23\.200/;
"." is a regex wildcard meaning "any character". Changing it to "\." will reduce the number of false positives (unlikely to change your results in practice but it's more correct anyway).