I usually get a bunch of files whose names start with a dash ('-'). This causes all sorts of problems with Linux commands, because anything after a '-' is interpreted as a flag.
What is the fastest way to rename these files so the dash is no longer at the front? I can manually rename each file by passing '--' before the file name. For example, '-File1' is renamed with:
mv -- -File1 File1
But this is not ideal when I have to rename hundreds of files on the fly. Currently I export them, use a Windows program to batch-rename them, and then upload them back to the Linux box.
The easiest way to refer to such a file is ./-File1. (You only have the problem if the file is in the current directory, anyway.) Maybe if you get used to that it's not so bad.
To bulk rename them, you could do something like:
for f in -*; do mv "./$f" "renamed$f"; done
or, as @shellter suggests in a comment, to reproduce the example in the OP:
for f in -*; do mv "./$f" "${f#-}"; done
Note: the above will only remove a single - from the name.
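If some names carry several leading dashes, a small POSIX-shell sketch (file names here are made up) can strip the whole run of dashes, not just the first one:

```shell
# Strip ALL leading dashes from each name (plain POSIX shell, no extglob needed)
cd "$(mktemp -d)"              # demo directory; use your own instead
touch -- -File1 --File2        # made-up example files
for f in ./-*; do
    base=${f#./}               # "-File1" without the "./" prefix
    dashes=${base%%[!-]*}      # the run of leading dashes
    mv -- "$f" "${base#"$dashes"}"
done
ls
```

Note this sketch does not handle a file whose name is nothing but dashes (the target name would be empty).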
If you have the util-linux package (most do?):
rename - '' ./-*
man rename
It might be easier to do this in the shell, but if you're worried about special cases, or if you would just rather use Perl, there are a couple of ways to do it. One is to use File::Copy's mv:
use strict;
use warnings;
use feature 'say';
use File::Copy qw(mv);

opendir(my $dir, ".") or die "Can't open directory: $!";
foreach my $file (readdir($dir)) {
    my $new_name = $file =~ s/^-+//r;  # works if the filename begins with multiple '-'s
    if ($new_name ne $file) {
        say "$file -> $new_name";
        mv $file, $new_name;
    }
}
or use the rename builtin, though that may fail on some systems (for example, when source and target are on different filesystems):
rename $file, $new_name; #instead of mv $file, $new_name;
In either case, if a file with the new name exists it will get silently overwritten with this code. You might need some logic to take care of that:
# Stick inside the "if" clause above
if (-e $new_name) {
    say "$new_name already exists!";
    next;
}
Using find:
find -name '-*' -exec rename -- - '' {} \;
I am creating a Perl script to deploy web code (Windows Server 2008). First I need to copy all of the old files out of the destination folder into an archive directory whose name carries a trailing timestamp (arc_dir.20131217), moving the files into the archive. Then I need to copy the code from the source directory into the destination folder. However, it is not working at all and I am absolutely clueless as to why.
Two things: I am very green with Perl, as will shortly be seen, and I do not want someone to write the code for me, since that defeats the purpose of learning. Direction and a dialogue would be great. I am willing to learn and I want to write only clean code.
use strict;
use warnings;
use autodie;
use File::Copy; #Gives you access to the "move" command
use File::Path; #Copies data recursively from one dir and creates a new dir
use POSIX;
#Set Directories
use constant {
TIMESTAMP => strftime("%Y%M%d%H%M%S", localtime);
Source_Dir => "C:\Users\Documents\Source_Dir",
destination_Dir => "C:\Users\Documents\Destination_Dir",
ARCHIVE => "C:\Users\Documents\arc_dir.TIMESTAMP",
};
#Creates new directory to archive old files
make_path('C:\Users\Shaun\Documents\arc_dir.TIMESTAMP');
#Need to copy destination dir, create archive dir and paste data to it
#Opens destination_Dir, so I can read from it
opendir my $dir, destination_Dir;
# Loop through directory and grab all of the files and store in var
while (my $file = readdir $dir) {
my $destination_Dir = destination_Dir . "/" . "$file";
move $destination_Dir, ARCHIVE;
#Loop through directory and copy all webcode to destination_Dir
opendir my $dir, Source_Dir;
while (my $file = readdir $dir) {
my $Source_Dir = Source_Dir . "/" . "$file";
move $Source_Dir, destination_Dir;
There are some syntax errors in the script. I would use perl's -c switch on the command line to check the syntax (such as perl -c script.pl). Please make sure your () and {} are matched.
The other item is that a backslash (\) needs to be escaped with another backslash when inside double quotes ("). Otherwise it just escapes the next character in the string, which is probably not what you want in a path name. Strings in double quotes are interpolated before being processed, whereas single-quoted strings are not (a nice explanation of the difference between single and double quotes: http://www.perlmonks.org/?node_id=401006). You may want to change the single quotes in your make_path call to double quotes; otherwise TIMESTAMP will not be interpolated and the directory will literally be named TIMESTAMP.
I would also suggest putting in some print statements to indicate what is being done and to give feedback that things are progressing. Printing something like "moving $destination_Dir to the archive" and "moving $Source_Dir to the destination" would let you know files are being moved.
I am trying to chdir in Perl, but I just cannot get my head around what's going wrong.
This code works.
chdir('C:\Users\Server\Desktop')
But when trying to use the user's input, it doesn't work. I even tried using chomp to remove any trailing whitespace that might come in.
print "Please enter the directory\n";
$p=<STDIN>;
chdir ('$p') or die "sorry";
system("dir");
Also, could someone please explain how I could use the system command in this same situation, and how it differs from chdir?
The final aim is to access two folders, check for files that are named the same (eg: if both the folders have a file named "water") and copy the file that has the same name into a third folder.
chdir('$p') tries to change to a directory literally named $p. Drop the single quotes:
chdir($p)
Also, after reading it in, you probably want to remove the newline (unless the directory name really does end with a newline):
$p = <STDIN>;
chomp($p);
But if you are just chdiring to be able to run dir and get the results in your script, you probably don't want to do that. First of all, system runs a command but doesn't capture its output. Secondly, you can just do:
opendir my $dirhandle, $p or die "unable to open directory $p: $!\n";
my @files = readdir $dirhandle;
closedir $dirhandle;
and avoid the chdir and running a command prompt command altogether.
I would use it this way:
chdir "C:/Users/Server/Desktop"
The above works for me; forward slashes avoid the backslash-escaping problem entirely.
I am new to Perl and have created a simple Perl program. However, it never seems to find a file on the file system. The code is:
my $filename = 'test.txt';
if (-e $filename)
{
print "exists";
}
else
{
print "not found";
}
I have also tried to use the exact path of the file "test.txt" but it still does not work; it never finds the file. Does anybody know what I'm doing wrong?
Your code seems correct, which means that test.txt really doesn't exist, or if it does, it's not in the working directory.
For example, if you have this:
/home/you/code/test.pl
/home/you/test.txt
And run your code like this:
$ cd code
$ perl test.pl
Then your test file won't be found.
It may help to make your script print the current working directory before it does anything:
use Cwd;
print getcwd();
...
Write the full path to your file; it should work. For example:
folder/files/file.txt
You should probably also use " instead of '.
Here are some possibilities for what might be wrong:
Regarding the full path: you are using Windows and just copied the full path into your string. In this case, don't forget to escape the backslashes in your path. For example, C:\myFolder\test.txt must be put into the variable like this: my $filename = "C:\\myFolder\\test.txt"
Your script uses another directory than the one your file is in. Here's how you can find out where your script is executed and where it looks for the relative file path test.txt:
use strict;
use Cwd;
print getcwd;
If you are in the wrong filepath you have to switch to the right one before you execute your script. Use the shell command cd for this.
You are in the right directory and/or are using the right full path but the file has another name. You can use perl to find out what the actual name is. Change into the directory where the file is before you execute this script:
use strict;
opendir my $dirh, '.';
print "'", join ("'\n'", grep $_ ne '.' && $_ ne '..', readdir $dirh), "'\n";
closedir $dirh;
This prints all files in the current directory in single quotes. Copy the filename from your file and use it in the code.
Good luck! :)
Use this script:
my $filename = glob('*.txt');
print $filename;
if (-e $filename)
{
print "exists";
}
else
{
print "not found";
}
Recently a few attackers have been trying malicious things on my server, so I've decided to somewhat "track" them even though I know they won't get very far.
Now, I have an entire directory containing the server logs and I need a way to search through every file in the directory, returning a filename if a string is found. So I thought to myself: what better language for text and file operations than Perl? My friend is helping me with a script to scan all the files for a certain IP and return the filenames that contain it, so I don't have to search for the attacker through every log manually. (I have hundreds.)
#!/usr/bin/perl
$dir = ".";
opendir(DIR, "$dir");
@files = grep(/\.*$/, readdir(DIR));
closedir(DIR);

foreach $file (@files) {
    open FILE, "$file" or die "Unable to open files";
    while (<FILE>) {
        print if /12.211.23.200/;
    }
}
although it is giving me directory read errors. Any assistance is greatly appreciated.
EDIT: Code edited; it still says "permission denied, cannot open directory" on line 10. In case you are wondering about the change to ".", I am just going to run the script from within the logs directory.
Mike.
Can you use grep instead?
To get all the lines with the IP, I would directly use grep, no need to show a list of files, it's a simple command:
grep 12\.211\.23\.200 *
I like to pipe it to another file and then open that file in an editor...
If you insist on wanting the filenames, it's also easy
grep -l 12\.211\.23\.200 *
grep is available on every Unix/Linux with the GNU tools, or on Windows using one of the many implementations (unxutils, cygwin, etc.).
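A quick sketch of the difference (the log files and their contents are made up): plain grep prints the matching lines, while grep -l prints only the names of the files that match:

```shell
d=$(mktemp -d)
printf '12.211.23.200 - GET /\n' > "$d/access.log"
printf 'nothing to see here\n'   > "$d/error.log"
grep -l '12\.211\.23\.200' "$d"/*.log   # prints only the access.log path
```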
You have to concatenate $dirname with $filename when using files found through readdir; remember, you haven't chdir'ed into the directory where those files reside.
open FH, "<", "$dirname/$filename" or die "Cannot open $filename: $!";
Incidentally, why not just use grep -r to recursively search all subdirectories under your log dir for your string?
EDIT: I see your edits, and two things. First, this line:
@files = grep(/\.*$/,readdir(DIR));
Is not effective, because you are searching for zero or more . characters at the end of the string. Since it's zero or more, it'll match everything in the directory. If you're trying to exclude files ending in ., try this:
@files = grep(!/\.$/,readdir(DIR));
Note the ! sign for negation if you're trying to exclude those files. Otherwise (if you only want those files and I'm misunderstanding your intent), leave the ! out.
In any case, if you're getting your die message on line 10, most likely you're hitting a file that has permissions such that you can't read it. Try putting the filename in the die output so you can see which file it's failing on:
open FILE, "$file" or die "Unable to open file: $file";
But as with other answers, and to reiterate: Why not use grep? The unix command, not the Perl function.
This will get the file names you are looking for in Perl, and probably do it much faster than looping and matching with a Perl regex:
@files = `find ~/ServerLogs -name "*.log" | xargs grep -l "<ip address>"`;
This does require a *nix-compliant system, though, or Cygwin on Windows.
First, get a list of files within your source directory:
opendir(DIR, "$dir");
@files = grep(/\.log$/, readdir(DIR));
closedir(DIR);
And then loop through those files:
foreach $file (@files)
{
    # file processing code
}
My first suggestion would be to use grep instead. The right tool for the job, as they say...
But to answer your question:
readdir just returns the filenames from the directory. You'll need to concatenate the directory name and the filename together:
$path = "$dirname/$filename";
open FH, $path or die ...
Then you should ignore files that are actually directories, such as "." and "..". After getting the $path, check to see if it's a file.
if (-f $path) {
    open FH, $path or die ...;
    while (<FH>) {
        ...
    }
}
BTW, I thought I would throw in a mention for File::Next. To iterate over all files in a directory (recursively):
use feature 'say';
use Path::Class;  # always useful.
use File::Next;

my $files = File::Next::files( dir(qw/path to files/) );  # look in path/to/files

while ( defined( my $file = $files->() ) ) {
    $file = file( $file );
    say "Examining $file";
    say "found foo" if $file->slurp =~ /foo/;
}
File::Next is taint-safe.
~ doesn't auto-expand in Perl.
opendir my $fh, '~/' or die "Doing It Wrong";         # fails: '~/' is taken literally
opendir my $fh, glob('~/') or die "cannot open: $!";  # glob() expands the ~
Also, if you must use readdir(), make sure you guard the expression thus:
while (defined(my $filename = readdir(DH))) {
    ...
}
If you don't do the defined() test, the loop will terminate if it finds a file called '0'.
Have you looked on CPAN for log parsers? I searched with 'log parse' and it yielded over 200 hits. Some (probably many) won't be relevant - some may be. It depends, in part, on which web server you are using.
Am I reading this right? Your line 10 that gives you the error is
open FILE, "$file" or die "Unable to open files";
And the $file you are trying to read, according to line 6,
@files = grep(/\.*$/,readdir(DIR));
is a file that ends with zero or more dots. Is this what you really wanted? It basically matches every file in the directory, including "." and "..". Maybe you don't have enough permission to open the parent directory for reading?
EDIT: if you only want to read all files (including hidden ones), you might want to use something like the following:
opendir(DIR, ".");
@files = readdir(DIR);
closedir(DIR);

foreach $file (@files) {
    if ($file ne "." and $file ne "..") {
        open FILE, "$file" or die "cannot open $file\n";
        # do stuff with FILE
    }
}
Note that this doesn't take care of subdirectories.
I know I am way late to this discussion (ran across it while searching for grep related posts) but I am going to answer anyway:
It isn't specified clearly if these are web server logs (Apache, IIS, W3SVC, etc.) but the best tool for mining those for data is the LogParser tool from Microsoft. See logparser.com for more info.
LogParser will allow you to write SQL-like statements against the log files. It is very flexible and very fast.
Use Perl from the command line, like a better grep:
perl -wnl -e '/12\.211\.23\.200/ and print;' *.log > output.txt
The benefit here is that you can chain logic far more easily:
perl -wnl -e '(/12\.211\.23\.2(0[1-9]|1[01])/ or /denied/i) and print;' *.log
If you are feeling wacky, you can also use more advanced command-line options to feed one Perl one-liner's results into another Perl one-liner.
You really need to read "Minimal Perl: For UNIX and Linux People", awesome book on this very sort of thing.
First, use grep.
But if you don't want to, here are two small improvements you can make that I haven't seen mentioned yet:
1) Change:
@files = grep(/\.*$/,readdir(DIR));
to
@files = grep({ !-d "$dir/$_" } readdir(DIR));
This way you will exclude not just "." and ".." but also any other subdirectories that may exist in the server log directory (which the open downstream would otherwise choke on).
2) Change:
print if /12.211.23.200/;
to
print if /12\.211\.23\.200/;
"." is a regex wildcard meaning "any character". Changing it to "\." will reduce the number of false positives (unlikely to change your results in practice but it's more correct anyway).