How to execute a cleartool command within Perl

Sorry if this is too naive. I want to write a simple Perl script that accepts two arguments (here, commit labels), passes them as inputs to a cleartool command, and gives me suitable output.
My Code:
#!/usr/bin/perl
$file1 = system('cleartool find . -version "lbtype($ARGV[0])" -print > filename1');
$file2 = system('cleartool find . -version "lbtype($ARGV[1])" -print > filename2');
$file3 = system('diff filename1 filename2 > changeset');
print $ARGV[0];
print $ARGV[1];
print $file3;
close filename1;
close filename2;
close changeset
The output now is three empty files: filename1, filename2, and changeset.
But I need the files committed between the two labels.
Could anyone shed light on where I am going wrong?
Thanks in advance.

Try this:
$file1 = system("cleartool find . -version 'lbtype($ARGV[0])' -print > filename1");
instead of
$file1 = system('cleartool find . -version "lbtype($ARGV[0])" -print > filename1');
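Note also that system() returns the command's exit status, not its output, so $file1, $file2 and $file3 only ever hold return codes; the listings end up solely in the redirected files. If you would also like the output available inside Perl, here is a minimal sketch (assuming cleartool is on your PATH and the two labels arrive as the script's arguments, as in the question):
use strict;
use warnings;

my ($label1, $label2) = @ARGV;

# Backticks capture stdout (one line per element in list context),
# unlike system(), which only reports the exit status.
my @v1 = `cleartool find . -version "lbtype($label1)" -print`;
my @v2 = `cleartool find . -version "lbtype($label2)" -print`;

open my $out1, '>', 'filename1' or die "filename1: $!";
print {$out1} @v1;
close $out1;

open my $out2, '>', 'filename2' or die "filename2: $!";
print {$out2} @v2;
close $out2;

system('diff filename1 filename2 > changeset');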

There are also Perl packages that facilitate the execution of cleartool commands:
the CCCmd package, illustrated in "How can I interact with ClearCase from Perl?", and
ClearCase::CtCmd.
Either would allow you to get the result directly in an array instead of a file (even though you can still dump that array to a file if you need to):
@res = ClearCase::CtCmd::exec("find . -version 'lbtype($ARGV[0])' -print");

find command problems in perl script

I am writing a script that will create a new tar file containing only those files that were created after the previous tar.gz file was created.
my $path_to_logs = "/home/myscripts/";
my $FNAME= `ls -t *.tar.gz | head -n1`;
my $FILENAME = $path_to_logs.$FNAME;
chomp ($FILENAME);
if (-e $FILENAME){
my $changed= `find . -name '*.log' -newer $FILENAME`;
chomp $changed;
$command = "tar -cvzT ". $changed." -f deleteme-$(date +%Y-%m-%d-%H-%M-%S).tar.gz";
chomp $command;
print $command;
}
However, the output for $command shows that each of the find results is on a new line, so I don't get one concatenated command for tar. Any idea why?
Thanks.
How about this to solve your immediate problem:
my $find_cmd = "find . -name '*.log' -newer $FILENAME";
open my $in, '-|', $find_cmd or die "Couldn't run command. $!\n";
while(<$in>) {
chomp;
print "Do something with file: $_\n";
}
If you need the files on a single line, you can collect them into a variable and concatenate them. I mainly wanted to show you a better way to call a system command (it would be even better if you could call the command directly without the shell expansion, but you are relying on the shell expansion here).
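For instance, a small sketch of the same loop that collects the names into a single space-separated string (the variable names are just illustrative):
my @changed;
while (<$in>) {
    chomp;
    push @changed, $_;               # collect each file name
}
my $file_list = join ' ', @changed;  # all names on one line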
In the long run you might want to learn how to use Perl's own File::Find (with a wanted routine) and how to do directory globbing, instead of relying so much on the system.
Just transform the output from find into a single line:
my $changed= `find . -name '*.log' -newer $FILENAME`;
chomp $changed;
$changed =~ s/\n/ /g;
$command = "tar -cvzT -f deleteme-$(date +%Y-%m-%d-%H-%M-%S).tar.gz " . $changed;
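One caveat with that last line: inside a Perl double-quoted string, $( is Perl's own special variable (the real group ID), so the $(date ...) part never reaches the shell as a command substitution. A small sketch of building the timestamp in Perl instead, using the core POSIX module:
use POSIX qw(strftime);
my $stamp = strftime('%Y-%m-%d-%H-%M-%S', localtime);
# ...then interpolate $stamp into the tar command instead of $(date ...)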
Btw, in general, it's often better to reduce one's dependency on OS-specific features. You can duplicate all of the commands that you're shelling out to the OS for in Perl without too much effort:
use strict;
use warnings;
use autodie;
use File::Find::Rule;
use File::stat;
use Time::Piece;
my $path_to_logs = "/home/myscripts/";
my ($FILENAME) = sort {
stat($b)->mtime <=> stat($a)->mtime # newest first
} glob('/home/myscripts/*.tar.gz');
if (-e $FILENAME){
my $modified = stat($FILENAME)->mtime;
my @files = File::Find::Rule->file()
->name('*.log')
->modified(">$modified")
->in('.');
my $datenow = localtime->strftime('%Y-%m-%d-%H-%M-%S');
my $command = "tar -cvzT -f deleteme-${datenow}.tar.gz ". join(' ', @files);
}
You could even use Archive::Tar instead of /bin/tar, but that would potentially have a performance hit.
However, regardless, these simple changes make your script much more portable, and didn't require that much additional code.
It doesn't make much sense to do this in Perl anyway. Regardless of the wrapper language, it would be simpler and more robust to pipe the find output straight to tar, which knows how to handle it.
Anyway, your use of tar's -T option is wrong. It expects the name of a file containing file names, one per line, not a list of file names on the command line.
Also, your FNAME contains the newest file in the current directory, whereas the intent apparently is to find the newest file in /home/myscripts.
Finally, the $(date ...) construct will not be interpolated by Perl the way you expect, but it trivially works if you convert this (back?) to a shell script.
#!/bin/sh
path_to_logs="/home/myscripts/"
FNAME=$(cd "$path_to_logs"; ls -t *.tar.gz | head -n1)
FILENAME="$path_to_logs/$FNAME"
if [ -e "$FILENAME" ]; then
find . -name '*.log' -newer "$FILENAME" |
tar -c -v -z -T - -f deleteme-$(date +%Y-%m-%d-%H-%M-%S).tar.gz
fi

Unix commands in Perl?

I'm very new to Perl, and I would like to make a program that creates a directory and moves a file into that directory, using Unix commands like:
mkdir test
Which I know would make a directory called "test". From there I would like to add more options, like:
mv *.jpg test
That would move all .jpg files into my new directory.
So far I have this:
#!/usr/bin/perl
print "Folder Name:";
$fileName = <STDIN>;
chomp($fileType);
$result=`mkdir $fileName`;
print"Your folder was created \n";
Can anyone help me out with this?
Try doing this :
#!/usr/bin/perl
use strict; use warnings;
print "Folder Name:";
my $dirName = <STDIN>;
chomp($dirName);
mkdir($dirName) && print "Your folder was created \n";
rename $_, "$dirName/$_" for <*.jpg>;
You will have better control using built-in Perl functions than using Unix commands. That's the point of my snippet.
Most (if not all) Unix commands have a corresponding Perl function,
e.g.
mkdir - see perldoc -f mkdir
mv - see perldoc -f rename
Etc. Either print out the various manual pages, or take a trip down to the bookshop; the O'Reilly Nutshell book is quite good, along with others.
In Perl you can run shell commands in backticks. However, what happens when the directory isn't created by the mkdir command? Your program doesn't get notified of this and continues on its merry way, thinking that everything is fine.
You should use the built-in Perl functions that do the same thing.
http://perldoc.perl.org/functions/mkdir.html
http://perldoc.perl.org/functions/rename.html
It is much easier to trap errors with those functions and fail gracefully. In addition, they run faster because you don't have to fork a new process for each command you run.
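As a rough sketch of that approach (the directory name read from STDIN and the *.jpg pattern are taken from the question), with errors trapped on each call:
use strict;
use warnings;

print "Folder Name: ";
chomp(my $dir = <STDIN>);

mkdir $dir or die "Could not create '$dir': $!\n";   # fail loudly instead of silently

for my $jpg (glob '*.jpg') {
    rename $jpg, "$dir/$jpg" or warn "Could not move '$jpg': $!\n";
}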
Perl has some functions similar to those of the shell. You can just use
mkdir $filename;
You can use backquotes to run a shell command, but it is only useful if the command writes something to its standard output, which mkdir does not. For commands without output, use system:
0 == system "mv *.jpg $folder" or die "Cannot move: $?";

How can I search and replace all files recursively to remove some rogue code injected into php files on a wordpress installation?

How can I search and replace all files recursively to remove some rogue code injected into PHP files on a WordPress installation? The hacker added some code (below) to ALL of the .php files in my WordPress installation; it happens fairly often to many sites, and I spend hours manually removing the code.
Today I tried a number of techniques I found online, but had no luck, because the code snippet is long and contains many special characters that mess up the delimiters. I tried using different delimiters with perl:
perl -p -i -e 's/rogue_code//g' *
to
perl -p -i -e 's{rogue_code}{}g' *
and tried using backslashes to escape the slashes in the code, but nothing seems to work. I'm working on a shared server, so I don't have full access to all the directories outside my own.
Thanks a lot...here's the code:
< ?php /**/ eval(base64_decode("aWYoZnVuY3
... snip tons of this ...
sgIH1lbHNleyAgICB9ICB9"));? >
Without having a chance to poke around the files myself, it's hard to be sure; but it sounds like you need:
find -name '*.php' -exec perl -i -pe 's{<\?php /\*\*/ eval\(base64_decode\("[^"]+"\)\);\?>}{}g' '{}' ';'
(That said, I agree with the commenters above that trying to undo the damage, piecemeal, after it happens is not the best strategy.)
and it happens fairly often to many sites, and I spend hours manually removing the code....
Sounds like you need to do a better job of cleaning up the hack, or change hosts. Replace all WP core files and folders and all plugins; then all you have to do is search the theme files and wp-config.php for the injected scripts.
See "How to completely clean your hacked WordPress installation", "How to find a backdoor in a hacked WordPress", "Hardening WordPress" (WordPress Codex), and "Recommended WordPress Web Hosting".
I have the same problem (Dreamhost?) and first run this clean.pl script:
#!/usr/bin/perl
$file0 =$ARGV[0];
open F0,$file0 or die "error opening $file0 : $!";
$t = <F0>;
$hacked = 0;
if($t =~ s#.*base64_decode.*?;\?>##) {
$hacked=1;
}
print "# $file0: " . ($hacked ? "HACKED" : "CLEAN") . "\n";
if(! $hacked) {
close F0;
exit 0;
}
$file1 = $file0 . ".clean";
open F1,">$file1" or die "error opening $file1 for write : $!";
print F1 $t;
while(<F0>) {
print F1;
}
close F0;
close F1;
print "mv -f $file0 $file0.bak\n"; #comment this if you don't want backup files.
print "mv -f $file1 $file0\n";
with find . -name '*.php' -exec perl clean.pl '{}' \; > cleanfiles.sh
and then I run . cleanfiles.sh
I also found that there were other, differently infected files ("bootstrap" infectors, the ones that triggered the other infection), which instead of the base64_decode call contained some hex-escaped command... To detect those, I use this suspicious_php.sh:
#!/bin/sh
# prints filename if first 2 lines has more than 5000 bytes
file=$1
bytes=`head -n 2 "$file" | wc --bytes`
if [ "$bytes" -gt 5000 ]
then
echo $file
fi
And then: find . -name '*.php' -type f -exec ./suspicious_php.sh '{}' \;
Of course, all this is not foolproof at all.

How do I run a Perl script on multiple input files with the same extension?

How do I run a Perl script on multiple input files with the same extension?
perl scriptname.pl file.aspx
I'm looking to have it run for all aspx files in the current directory
Thanks!
In your Perl file,
my @files = <*.aspx>;
for my $file (@files) {
# do something.
}
The <*.aspx> is called a glob.
You can pass those files to perl with a wildcard.
In your script:
foreach (@ARGV){
print "file: $_\n";
# open your file here...
#..do something
# close your file
}
on command line
$ perl myscript.pl *.aspx
You can use glob explicitly, to take shell-style wildcard parameters without depending too much on the shell's behaviour.
for my $file ( map {glob($_)} @ARGV ) {
print $file, "\n";
};
You may need to handle the possibility of duplicate file names when more than one parameter expands to the same file.
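For example, a %seen hash can act as a filter that keeps only the first occurrence of each expanded name (a small sketch):
my %seen;
my @files = grep { !$seen{$_}++ } map { glob } @ARGV;   # drop duplicates
print "$_\n" for @files;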
For a simple one-liner with -n or -p, you want
perl -i~ -pe 's/foo/bar/' *.aspx
The -i~ says to modify each target file in place, and leave the original as a backup with an ~ suffix added to the file name. (Omit the suffix to not leave a backup. But if you are still learning or experimenting, that's a bad idea; removing the backups when you're done is a much smaller hassle than restoring the originals from a backup if you mess something up.)
If your Perl code is too complex for a one-liner (or just useful enough to be reusable) obviously replace -e '# your code here' with scriptname.pl ... though then maybe refactor scriptname.pl so that it accepts a list of file name arguments, and simply use scriptname.pl *.aspx to run it on all *.aspx files in the current directory.
If you need to recurse a directory structure and find all files with a particular naming pattern, the find utility is useful.
find . -name '*.aspx' -exec perl -pi~ -e 's/foo/bar/' {} +
If your find does not support -exec ... + try with -exec ... \; though it will be slower and launch more processes (one per file you find instead of as few as possible to process all the files).
To only scan some directories, replace . (which names the current directory) with a space-separated list of the directories to examine, or even use find to find the directories themselves (and then perhaps explore -execdir for doing something in each directory that find selects with your complex, intricate, business-critical, maybe secret list of find option predicates).
Maybe also explore find2perl to do this directory recursion natively in Perl.
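If you would rather keep the recursion in Perl itself, here is a sketch using the core File::Find module (the *.aspx pattern is just the one from the question):
use strict;
use warnings;
use File::Find;

my @aspx;
find(sub { push @aspx, $File::Find::name if /\.aspx$/ }, '.');

for my $file (@aspx) {
    print "$file\n";    # open/edit each file here instead of printing
}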
If you are on a Linux machine, you could try something like this:
for i in /tmp/*.aspx; do perl scriptname.pl "$i"; done
For example, to handle perl scriptname.pl *.aspx *.asp:
On Linux, the shell expands wildcards, so the Perl can simply be
for (@ARGV) {
operation($_); # do something with each file
}
Windows doesn't expand wildcards, so expand the wildcards in each argument in Perl as follows. The for loop then processes each file in the same way as above:
for (map {glob} @ARGV) {
operation($_); # do something with each file
}
For example, this will print the expanded list under Windows
print "$_\n" for (map {glob} @ARGV);
You can also pass the path where you have your aspx files and read them one by one.
#!/usr/bin/perl -w
use strict;
my $path = shift;
my @files = split /\n/, `ls $path/*.aspx`;
foreach my $file (@files) {
# do something with $file...
}

How can I scan multiple log files to find which ones have a particular IP address in them?

Recently there have been a few attackers trying malicious things on my server so I've decided to somewhat "track" them even though I know they won't get very far.
Now, I have an entire directory containing the server logs, and I need a way to search through every file in the directory and return the filename if a string is found. So I thought to myself, what better language for text and file operations than Perl? My friend is helping me with a script that scans all the files for a certain IP and returns the filenames that contain it, so I don't have to search for the attacker through every log manually. (I have hundreds.)
#!/usr/bin/perl
$dir = ".";
opendir(DIR, "$dir");
@files = grep(/\.*$/,readdir(DIR));
closedir(DIR);
foreach $file(@files) {
open FILE, "$file" or die "Unable to open files";
while(<FILE>) {
print if /12.211.23.200/;
}
}
However, it is giving me directory read errors. Any assistance is greatly appreciated.
EDIT: Code edited; it still says "permission denied, cannot open directory" on line 10. In case you are wondering about the directory change to ".": I am just going to run the script from within the logs directory.
Mike.
Can you use grep instead?
To get all the lines with the IP, I would directly use grep, no need to show a list of files, it's a simple command:
grep 12\.211\.23\.200 *
I like to pipe it to another file and then open that file in an editor...
If you insist on wanting the filenames, it's also easy
grep -l 12\.211\.23\.200 *
grep is available on all Unix/Linux systems with the GNU tools, or on Windows using one of the many implementations (unxutils, cygwin, etc.).
You have to concatenate $dirname with $filname when using files found through readdir; remember you haven't chdir'ed into the directory where those files reside.
open FH, "<", "$dirname/$filname" or die "Cannot open $filname:$!";
Incidentally, why not just use grep -r to recursively search all subdirectories under your log dir for your string?
EDIT: I see your edits, and two things. First, this line:
@files = grep(/\.*$/,readdir(DIR));
Is not effective, because you are searching for zero or more . characters at the end of the string. Since it's zero or more, it'll match everything in the directory. If you're trying to exclude files ending in ., try this:
@files = grep(!/\.$/,readdir(DIR));
Note the ! sign for negation if you're trying to exclude those files. Otherwise (if you only want those files and I'm misunderstanding your intent), leave the ! out.
In any case, if you're getting your die message on line 10, most likely you're hitting a file that has permissions such that you can't read it. Try putting the filename in the die output so you can see which file it's failing on:
open FILE, "$file" or die "Unable to open file: $file";
But as with other answers, and to reiterate: Why not use grep? The unix command, not the Perl function.
This will get the file names you are looking for in Perl, and will probably do it much faster than looping over every file and applying a Perl regex yourself.
@files = `find ~/ServerLogs -name "*.log" | xargs grep -l "<ip address>"`;
Although, this will require a *nix compliant system, or Cygwin on Windows.
Firstly get a list of files within your source directory:
opendir(DIR, "$dir");
@files = grep(/\.log$/,readdir(DIR));
closedir(DIR);
And then loop through those files
foreach $file(@files)
{
# file processing code
}
My first suggestion would be to use grep instead. The right tool for the job, they say...
But to answer your question:
readdir just returns the filenames from the directory. You'll need to concatenate the directory name and filename together.
$path = "$dirname/$filname";
open FH, $path or die ...
Then you should ignore files that are actually directories, such as "." and "..". After getting the $path, check to see if it's a file.
if (-f $path) {
open FH, $path or die ...
while (<FH>)
BTW, I thought I would throw in a mention of File::Next. To iterate over all files in a directory (recursively):
use Path::Class; # always useful.
use File::Next;
use feature qw(say); # needed for say() below (or: use 5.010)
my $files = File::Next::files( dir(qw/path to files/) ); # look in path/to/files
while( defined ( my $file = $files->() ) ){
$file = file( $file );
say "Examining $file";
say "found foo" if $file->slurp =~ /foo/;
}
File::Next is taint-safe.
~ doesn't auto-expand in Perl.
opendir my $fh, '~/' or die("Doin It Wrong"); # Doing It Wrong.
opendir my $fh, glob('~/') or die( "That's right!" );
Also, if you must use readdir(), make sure you guard the expression thus:
while (defined(my $filename = readdir(DH))) {
...
}
If you don't do the defined() test, the loop will terminate if it finds a file called '0'.
Have you looked on CPAN for log parsers? I searched with 'log parse' and it yielded over 200 hits. Some (probably many) won't be relevant - some may be. It depends, in part, on which web server you are using.
Am I reading this right? Your line 10 that gives you the error is
open FILE, "$file" or die "Unable to open files";
And the $file you are trying to read, according to line 6,
@files = grep(/\.*$/,readdir(DIR));
is a file whose name ends with zero or more dots. Is this what you really wanted? This basically matches every file in the directory, including "." and "..". Maybe you don't have enough permission to open the parent directory for reading?
EDIT: if you only want to read all files (including hidden ones), you might want to use something like the following:
opendir(DIR, ".");
@files = readdir(DIR);
closedir(DIR);
foreach $file (@files) {
if ($file ne "." and $file ne "..") {
open FILE, "$file" or die "cannot open $file\n";
# do stuff with FILE
}
}
Note that this doesn't take care of sub directories.
I know I am way late to this discussion (ran across it while searching for grep related posts) but I am going to answer anyway:
It isn't specified clearly whether these are web server logs (Apache, IIS, W3SVC, etc.), but the best tool for mining those for data is the LogParser tool from Microsoft. See logparser.com for more info.
LogParser will allow you to write SQL-like statements against the log files. It is very flexible and very fast.
Use perl from the command line, like a better grep
perl -wnl -e '/12.211.23.200/ and print;' *.log > output.txt
The benefit here is that you can chain logic far more easily:
perl -wnl -e '(/12.211.23.20[1-11]/ or /denied/i ) and print;' *.log
If you are feeling wacky, you can also use more advanced command-line options to feed one Perl one-liner's results into other Perl one-liners.
You really need to read "Minimal Perl: For UNIX and Linux People"; it's an awesome book on this very sort of thing.
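And since the original goal was a list of matching file names, here is a sketch of the same one-liner style that emulates grep -l, printing each matching file once ($ARGV holds the current file name under -n, and closing ARGV skips to the next file):
perl -wnl -e 'if (/12\.211\.23\.200/) { print $ARGV; close ARGV }' *.log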
First, use grep.
But if you don't want to, here are two small improvements you can make that I haven't seen mentioned yet:
1) Change:
@files = grep(/\.*$/,readdir(DIR));
to
@files = grep({ !-d "$dir/$_" } readdir(DIR));
This way you will exclude not just "." and ".." but also any other subdirectories that may exist in the server log directory (which the open downstream would otherwise choke on).
2) Change:
print if /12.211.23.200/;
to
print if /12\.211\.23\.200/;
"." is a regex wildcard meaning "any character". Changing it to "\." will reduce the number of false positives (unlikely to change your results in practice but it's more correct anyway).