Perl - Trouble with my unzip system call for zip file crack

I am a junior currently taking a scripting languages class that is supposed to turn us out with intermediate-level bash, Perl, and Python in one semester. Since the class is accelerated, we speed through topics quickly, and our professor endorses using forums to supplement our learning if we have questions.
I am currently working on our first assignment. The requirement is to create a very simple dictionary attack using a provided wordlist, "linux.words", plus a basic brute-force attack. The brute force needs to account for every combination of four-letter strings.
I have used print statements to check if my logic is sound, and it seems it is. If you have any suggestions on how to improve my logic, I am here to learn and I am all ears.
This is on Ubuntu v12.04 in case that is relevant.
I have tried replacing the scalar within the call with a literal word like unicorn, and it runs fine: it is obviously the wrong password, and it returns correctly. I have done this both in the terminal and in the script itself. My professor looked over this for the good 15 minutes he could spare before referring me to a forum, and said it looked fine. He suspected that, since I wrote the code in Notepad++, there might be hidden characters. I rewrote the code straight in the terminal using vim and it gave the same errors shown below. The code pasted below is from vim.
My actual issue is that my system call is giving me problems: it returns the help output for unzip, showing usage and other help material.
Here is my code.
#!/usr/bin/perl
use strict;
use warnings;
#Prototypes
sub brute();
sub dict();
sub AddSlashes($);
### ADD SLASHES ###
sub AddSlashes($)
{
my $text = shift;
$text =~ s/\\/\\\\/g;
$text =~ s/'/\\'/g;
$text =~ s/"/\\"/g;
$text =~ s/\\0/\\\\0/g;
return $text;
}
### BRUTEFORCE ATTACK ###
sub brute()
{
print "Bruteforce Attack...\n";
print "Press any key to continue.\n";
if (<>)
{
#INCEPTION START
my #larr1 = ('a'..'z'); #LEVEL 1 +
foreach (#larr1)
{
my $layer1 = $_; #LEVEL 1 -
my #larr2 = ('a'..'z'); #LEVEL 2 +
foreach (#larr2)
{
my $layer2 = $_; # LEVEL 2 -
my#larr3 = ('a'..'z'); #LEVEL 3 +
foreach (#larr3)
{
my $layer3 = $_; #LEVEL 3 -
my#larr4 = ('a'..'z'); #LEVEL 4 +
foreach (#larr4)
{
my $layer4 = $_;
my $pass = ("$layer1$layer2$layer3$layer4");
print ($pass); #LEVEL 4 -
}
}
}
}
}
}
### DICTIONARY ATTACK ###
sub dict()
{
    print "Dictionary Attack...\n"; #Prompt User
    print "Provide wordlist: ";
    my $uInput = "";
    chomp($uInput = <>); #User provides wordlist
    (open IN, $uInput) #Bring in wordlist
        or die "Cannot open $uInput, $!"; #If we cannot open file, alert
    my @dict = <IN>; #Throw the wordlist into an array
    foreach (@dict)
    {
        print $_; #Debug, shows what word we are on
        #next; #Debug
        my $pass = AddSlashes($_); #To store the $_ value for later use
        #Check pass call
        my $status = system("unzip -qq -o -P $pass secret_file_dict.zip > /dev/null 2>&1"); #Return unzip system call set to var
        #Catch the correct password
        if ($status == 0)
        {
            print ("Return of unzip is ", $status, " and pass is ", $pass, "\n"); #Print out value of return as well as pass
            last;
        }
    }
}
### MAIN ###
dict();
exit (0);
Here is my error output:
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
aerify
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.
Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]
Default action is to extract files in list, except those in xlist, to exdir;
file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).
-p extract files to pipe, no messages -l list files (short format)
-f freshen existing files, create none -t test compressed archive data
-u update files, create if necessary -z display archive comment only
-v list verbosely/show version info -T timestamp archive to latest
-x exclude files that follow (in xlist) -d extract files into exdir
modifiers:
-n never overwrite existing files -q quiet mode (-qq => quieter)
-o overwrite files WITHOUT prompting -a auto-convert any text files
-j junk paths (do not make directories) -aa treat ALL files as text
-U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields
-C match filenames case-insensitively -L make (some) names lowercase
-X restore UID/GID info -V retain VMS version numbers
-K keep setuid/setgid/tacky permissions -M pipe through "more" pager
-O CHARSET specify a character encoding for DOS, Windows and OS/2 archives
-I CHARSET specify a character encoding for UNIX and other archives
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
aerifying
It is obviously not complete. In the main I will swap brute(); for dict(); as needed to test. Once I get the system call working I will add it to the brute section as well.
If you need me to elaborate more on my issue, please let me know. I am focused here on learning, so please add idiot-proof comments to anything you respond to me with.

First: DO NOT USE PERL'S PROTOTYPES. They don't do what you or your professor might wish they do.
Second: Don't write homebrew escaping routines such as AddSlashes. Perl has quotemeta. Use it.
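For instance, a tiny illustration of what quotemeta gives you (the input string here is just a made-up example):
my $word = q{pa$s'word};          # example input containing shell metacharacters
my $escaped = quotemeta($word);   # backslash-escapes every character that is not A-Za-z0-9_
print "$escaped\n";               # prints: pa\$s\'word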
Your problem is not with the specific programming language. How much time your professor has spent on your problem and how many classes you take are irrelevant to the problem. Focus on the actual problem, not all the extraneous "stuff".
For example, what is the point of sub brute? You are not calling it in this script and it is not relevant to your problem, so don't post it. Narrow your problem down to the smallest relevant piece.
Don't prompt for the wordlist file in the body of dict. Separate the functionality into bite-sized chunks so that in each context you can focus on the problem at hand. Your dict_attack subroutine should expect to receive either a filehandle or a reference to an array of words. To keep the memory footprint low, we'll assume it's a filehandle (so you don't have to keep the entire wordlist in memory).
So, your main looks like:
sub main {
# obtain name of wordlist file
# open wordlist file
# if success, call dict_attack with filehandle
# dict_attack returns password on success
}
Now, you can focus on dict_attack.
#!/usr/bin/perl
use strict;
use warnings;

main();

sub dict_attack {
    my $dict_fh = shift;
    while (my $word = <$dict_fh>) {
        $word =~ s/\A\s+//;
        $word =~ s/\s+\z//;
        print "Trying $word\n";
        my $pass = quotemeta( $word );
        my $cmd = "unzip -qq -o -P $pass test.zip";
        my $status = system $cmd;
        if ($status == 0) {
            return $word;
        }
    }
    return;
}

sub main {
    my $words = join("\n", qw(one two three four five));
    open my $fh, '<', \$words or die $!;
    if (my $pass = dict_attack($fh)) {
        print "Password is '$pass'\n";
    }
    else {
        print "Not found\n";
    }
    return;
}
Output:
C:\...> perl y.pl
Trying one
Trying two
Trying three
Trying four
Trying five
Password is 'five'
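A side note, not from the answer above but offered as a hedged alternative: the list form of Perl's system bypasses the shell entirely, so the password needs no escaping at all (file name assumed to match the question):
# List form: no shell is involved, so metacharacters in $word are harmless.
# (You lose shell redirection, but -qq already keeps unzip quiet.)
my $status = system 'unzip', '-qq', '-o', '-P', $word, 'secret_file_dict.zip';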


How to print result STDOUT to a temporary blank new file in the same directory in Perl?

I'm new to Perl, so maybe this is a very basic case that I still can't understand.
Case:
The program tells the user to type the file name.
The user types the file name (one or more files).
The program reads the content of the input file(s).
If it's a single-file input, it just prints the entire content.
If it's a multi-file input, it combines the contents of each file in sequence.
It then prints the result to a temporary new file, located in the same directory as program.pl.
file1.txt:
head
a
b
end
file2.txt:
head
c
d
e
f
end
SINGLE INPUT program ioSingle.pl:
#!/usr/bin/perl
print "File name: ";
$userinput = <STDIN>; chomp ($userinput);
#read content from input file
open ("FILEINPUT", $userinput) or die ("can't open file");
#PRINT CONTENT selama ada di file tsb
while (<FILEINPUT>) {
print ; }
close FILEINPUT;
SINGLE RESULT in cmd:
>perl ioSingle.pl
File name: file1.txt
head
a
b
end
I found tutorial code that combines content from multi-file input, but I cannot adapt its while loop to the code above:
while ($userinput = <>) {
print ($userinput);
}
I am stuck on making it work for multi-file input.
How am I supposed to rework the code so my program gives a result like this?
EXPECTED MULTIFILES RESULT in cmd:
>perl ioMulti.pl
File name: file1.txt file2.txt
head
a
b
end
head
c
d
e
f
end
I appreciate your response :)
A good way to start working on a problem like this is to break it down into smaller sections.
Your problem seems to break down to this:
get a list of filenames
for each file in the list
display the file contents
So think about writing subroutines that do each of these tasks. You already have something like a subroutine to display the contents of the file.
sub display_file_contents {
    # filename is the first (and only) argument to the sub
    my $filename = shift;
    # Use lexical filehandle and three-arg open
    open my $filehandle, '<', $filename or die $!;
    # Shorter version of your code
    print while <$filehandle>;
}
The next task is to get our list of files. You already have some of that too.
sub get_list_of_files {
    print 'File name(s): ';
    my $files = <STDIN>;
    chomp $files;
    # We might have more than one filename. Need to split input.
    # Assume filenames are separated by whitespace
    # (Might need to revisit that assumption - filenames can contain spaces!)
    my @filenames = split /\s+/, $files;
    return @filenames;
}
We can then put all of that together in the main program.
#!/usr/bin/perl
use strict;
use warnings;

my @list_of_files = get_list_of_files();

foreach my $file (@list_of_files) {
    display_file_contents($file);
}
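The question also asks for the combined output to land in a temporary new file next to program.pl rather than on STDOUT. A hedged sketch of that last step (the use of File::Temp and the template name are my assumptions, not part of the answer above):
use File::Basename qw(dirname);
use File::Temp qw(tempfile);

# Create (and keep) a temporary file in the same directory as this script.
my ($tmp_fh, $tmp_name) = tempfile('combinedXXXX',
                                   DIR    => dirname($0),
                                   SUFFIX => '.txt',
                                   UNLINK => 0);
print "Writing combined output to $tmp_name\n";

foreach my $file (get_list_of_files()) {
    open my $in, '<', $file or die "Can't open $file: $!";
    print {$tmp_fh} $_ while <$in>;
    close $in;
}
close $tmp_fh;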
By breaking the task down into smaller tasks, each one becomes easier to deal with. And you don't need to carry the complexity of the whole program in you head at one time.
p.s. But like JRFerguson says, taking the list of files as command line parameters would make this far simpler.
The easy way is to use the diamond operator <> to open and read the files specified on the command line. This would achieve your objective:
while (<>) {
chomp;
print "$_\n";
}
Thus: ioSingle.pl file1.txt file2.txt
If this is the sole objective, you can reduce this to a command line script using the -p or -n switch like:
perl -pe '1' file1.txt file2.txt
perl -ne 'print' file1.txt file2.txt
These switches create implicit loops around the -e commands. The -p switch prints $_ after every loop as if you had written:
LINE:
while (<>) {
# your code...
} continue {
print;
}
Using -n creates:
LINE:
while (<>) {
# your code...
}
Thus, -p adds an implicit print statement.

ksh perl script.. if condition

Friends...
I have a bash script which calls a Perl script and emails the logfile result every time.
I want to change my bash script so that it only emails if the Perl subroutine's row counter (rcounter++) has a value, and not every time.
Any tips on how to change the .ksh file?
.ksh
#!/bin/ksh
d=`date +%Y%m%d`
log_dir=$HOME
output_file=log.list
if ! list_tables -login /@testdb -outputFile $output_file
then
    mailx -s "list report : $d" test@mail < $output_file
fi
=======Below if condition also works for me=============================
list_tables -login /@testdb -outputFile $output_file
if [ "$?" -ne "0" ];
then
    mailx -s "list report : $d" test@mail < $output_file
fi
========================================================================
Perl Script: list_tables
use strict;
use Getopt::Long;
use DBI;
use DBD::Oracle qw(:ora_types);
my $exitStatus = 0;
my %options = ()
my $oracleLogin;
my $outputFile;
my $runDate;
my $logFile;
my $rcounter;
($oracleLogin, $outputFile) = &validateCommandLine();
my $db = &attemptconnect($oracleLogin);
&reportListTables($outputFile);
$db->$disconnect;
exit($rcounter);
#---------------------------
sub reportListTables {
my $outputFile = shift;
if ( ! open (OUT,">" . $outputfile)) {
&logMessage("Error opening $outputFile");
}
print OUT &putTitle;
my $oldDB="DEFAULT";
my $dbcounter = 0;
my $i;
print OUT &putHeader();
#iterate over results
for (my $i=0; $i<=$lstSessions; $i++) {
# print result row
print OUT &putRow($i);
$dbCounter++;
}
print OUT &putFooter($dbCounter);
print OUT " *** Report End \n";
closeOUT;
}
#------------------------------
sub putTitle {
my $title = qq{
List Tables: Yesterday
--------------
};
#------------------------------
sub putHeader {
my $header = qq{
TESTDB
==============
OWNER Table Created
};
#------------------------------
sub putRow {
my $indx = shift;
my $ln = sprintf "%-19s %-30s %-19s",
$lstSessions[$indx]{owner},
$lstSessions[$indx]{object_name},
$lstSessions[$indx]{created};
return "$ln\n";
}
#------------------------------
sub getListTables {
my $runDt = shift;
$rcounter = 0;
my $SQL = qq{
    select owner, object_name, to_char(created,'MM-DD-YYYY') from dba_objects
};
my $sth = $db->prepare ($SQL) or die $db->errstr;
$sth->execute() or die $db->errstr;
while (my @row = $sth->fetchrow_array) {
$lstSessions[$rcounter] {owner} =$row[0];
$lstSessions[$rcounter] {object_name} =$row[1];
$lstSessions[$rcounter] {created} =$row[2];
&logMessage(" Owner: $lstSessions[$rcounter]{owner}");
&logMessage(" Table: $lstSessions[$rcounter]{object_name}");
&logMessage(" created: $lstSessions[$rcounter]{created}");
$rcounter++;
}
&logMessage("$rcounter records found...");
}
thanks..
Also happy to include the mailx part in Perl if that makes life easier.
I am not sure I understood your question correctly. Also, your code is incomplete. So there's some guessing involved.
You cannot check the value of a local Perl variable from the caller's side.
But if your question is whether the Perl code added anything to the logfile, the solution is simple: delete the "rcounter records found..." line (which doesn't make sense anyway, since it is always executed whether the query returned results or not). Then let the shell script back up the logfile before the call to Perl and run a diff afterwards, sending the mail only if diff tells you output has been added to the logfile.
If this doesn't help you, please clarify the question.
EDIT (from comments below):
Shell scripting isn't that difficult. Right now, your Perl script ends with:
exit($rcounter);
That is your exit code. Since $rcounter is declared with my at the top of the script, the value set inside getListTables() is still visible there, so that part is fine. What matters is what the shell sees: an exit status of 0 means success, anything else means failure, and exit statuses are truncated to the range 0-255, so a large row count could even wrap around to 0. A more robust choice is to exit with a simple flag:
exit($rcounter ? 1 : 0);
Voila, your Perl script now tells the shell whether any data rows were written (exit status 1) or not (exit status 0).
The shell's if runs its body when the command exits with status 0, so to send the mail only when rows were found (a non-zero exit status) you negate the test, which is exactly what your first .ksh version already does:
if ! list_tables -login /@testdb -outputFile $output_file
then
    mailx -s "list report : $d" test@mail < $output_file
fi
A comment on the side: It looks to me as if your programming skill isn't up to par with the scope of the problem you are trying to solve. If returning a value from Perl to bash gives you that much trouble, you should probably spend your time with tutorials, not with getting input from a database and sending emails around. Learn to walk before you try to fly...

How to find a file which exists in different directories under a given path in Perl

I'm looking for a method to look for a file which resides in a few directories under a given path. In other words, those directories will have files with the same filename across them. My script seems to have a hierarchy problem when looking into the correct path to grep the filename for processing. I have a fixed path as input, and the script needs to look into that path and find files from there, but my script seems to stop two tiers up and process from there rather than looking into the last directories in the tier (in my case it processes on "ln" and "nn" and starts the subroutine there).
The fixed input path is:
/nfs/disks/version_2.0/
The files that I want to post-process with a subroutine exist under several directories, as below. Basically I want to check whether file1.abc exists in all of the directories temp1, temp2 & temp3 under the ln directory, and likewise whether file2.abc exists in temp1, temp2 and temp3 under the nn directory.
The files that I want to check, with full paths, look like this:
/nfs/disks/version_2.0/dir_a/ln/temp1/file1.abc
/nfs/disks/version_2.0/dir_a/ln/temp2/file1.abc
/nfs/disks/version_2.0/dir_a/ln/temp3/file1.abc
/nfs/disks/version_2.0/dir_a/nn/temp1/file2.abc
/nfs/disks/version_2.0/dir_a/nn/temp2/file2.abc
/nfs/disks/version_2.0/dir_a/nn/temp3/file2.abc
My script is below:
#! /usr/bin/perl -w
my $dir = '/nfs/fm/disks/version_2.0/' ;
opendir(TEMP, $dir) || die $! ;
foreach my $file (readdir(TEMP)) {
next if ($file eq "." || $file eq "..") ;
if (-d "$dir/$file") {
my $d = "$dir/$file";
print "Directory:- $d\n" ;
&getFile($d);
&compare($file) ;
}
}
Note that I put the print "Directory:- $d\n" ; there for debugging purposes, and it printed this:
/nfs/disks/version_2.0/dir_a/
/nfs/disks/version_2.0/dir_b/
So I knew it got into the wrong path for processing the following subroutine.
Can somebody help point out where the error is in my script? Thanks!
To be clear: the script is supposed to recurse through a directory and look for files with a particular filename? In this case, I think the following code is the problem:
if (-d "$dir/$file") {
my $d = "$dir/$file";
print "Directory:- $d\n" ;
&getFile($d);
&compare($file) ;
}
I'm assuming the &getFile($d) is meant to step into a directory (i.e., the recursive step). This is fine. However, it looks like the &compare($file) is the action that you want to take when the object that you're looking at isn't a directory. Therefore, that code block should look something like this:
if (-d "$dir/$file") {
&getFile("$dir/$file"); # the recursive step, for directories inside of this one
} elsif( -f "$dir/$file" ){
&compare("$dir/$file"); # the action on files inside of the current directory
}
The general pseudo-code should look like this:
sub myFind {
    my $dir = shift;
    opendir my $dh, $dir or die "$dir: $!";
    foreach my $file ( readdir $dh ){
        next if $file eq "." || $file eq "..";
        my $obj = "$dir/$file";
        if( -d $obj ){
            myFind( $obj );
        } elsif( -f $obj ){
            doSomethingWithFile( $obj );
        }
    }
    closedir $dh;
}
myFind( "/nfs/fm/disks/version_2.0" );
As a side note: this script is reinventing the wheel. You only need to write a script that does the processing on an individual file. You could do the rest entirely from the shell:
find /nfs/fm/disks/version_2.0 -type f -name "the-filename-you-want" -exec your_script.pl {} \;
Wow, it's like reliving the 1990s! Perl code has evolved somewhat, and you really need to learn the new stuff. It looks like you learned Perl in version 3.0 or 4.0. Here's some pointers:
Use use warnings; instead of -w on the command line.
Use use strict;. This will require you to predeclare variables using my which will scope them to the local block or the file if they're not in a local block. This helps catch a lot of errors.
Don't put & in front of subroutine names.
Use and, or, and not instead of &&, ||, and !.
Learn about Perl Modules which can save you a lot of time and effort.
When someone says detect duplicates, I immediately think of hashes. If you use a hash based upon your file's name, you can easily see if there are duplicate files.
Of course a hash can only have a single value for each key. Fortunately, in Perl 5.x, that value can be a reference to another data structure.
So, I recommend you use a hash that contains a reference to a list (array in old parlance). You can push each instance of the file to that list.
Using your example, you'd have a data structure that looks like this:
%file_hash = (
    'file1.abc' => [
        '/nfs/disks/version_2.0/dir_a/ln/temp1',
        '/nfs/disks/version_2.0/dir_a/ln/temp2',
        '/nfs/disks/version_2.0/dir_a/ln/temp3',
    ],
    'file2.abc' => [
        '/nfs/disks/version_2.0/dir_a/nn/temp1',
        '/nfs/disks/version_2.0/dir_a/nn/temp2',
        '/nfs/disks/version_2.0/dir_a/nn/temp3',
    ],
);
And, here's a program to do it:
#! /usr/bin/env perl
#
use strict;
use warnings;
use feature qw(say);   #Can use `say`, which is like `print` plus a trailing "\n"
use File::Basename;    #imports `dirname` and `basename` commands
use File::Find;        #Implements Unix `find` command.

use constant DIR => "/nfs/disks/version_2.0";

# Find all duplicates
my %file_hash;
find (\&wanted, DIR);

# Print out all the duplicates
foreach my $file_name (sort keys %file_hash) {
    if (scalar (@{$file_hash{$file_name}}) > 1) {
        say qq(Duplicate File: "$file_name");
        foreach my $dir_name (@{$file_hash{$file_name}}) {
            say " $dir_name";
        }
    }
}

sub wanted {
    return if not -f $_;
    if (not exists $file_hash{$_}) {
        $file_hash{$_} = [];
    }
    push @{$file_hash{$_}}, $File::Find::dir;
}
Here are a few things about File::Find:
The work takes place in the subroutine wanted.
The $_ is the name of the file, and I can use this to see if this is a file or a directory.
$File::Find::name is the full name of the file including the path.
$File::Find::dir is the name of the directory.
If the array reference doesn't exist, I create it with the $file_hash{$_} = [];. This isn't necessary, but I find it comforting, and it can prevent errors. To use $file_hash{$_} as an array, I have to dereference it. I do that by putting a @ in front of it, as in @{$file_hash{$_}}.
Once all the files are found, I can print out the entire structure. The only thing I do is check to make sure there is more than one member in each array. If there's only a single member, then there are no duplicates.
Response to Grace
Hi David W., thank you very much for your explanation and sample script. Sorry, maybe I wasn't really clear in defining my problem statement. I think I can't use a hash keyed by path for the data structure, since there are a few hundred file*.abc files, the exact number is undetermined, and files with the same name actually differ in content across the directory structures.
For example, the file1.abc that resides under "/nfs/disks/version_2.0/dir_a/ln/temp1" does not have the same content as the file1.abc under "/nfs/disks/version_2.0/dir_a/ln/temp2" and "/nfs/disks/version_2.0/dir_a/ln/temp3". My intention is to grep the list of file*.abc in each of the directory structures (temp1, temp2 and temp3) and compare the filename list with a master list. Could you help shed some light on how to solve this? Thanks. – Grace yesterday
I'm just printing the file in my sample code, but instead of printing the file, you could open them and process them. After all, you now have the file name and the directory. Here's the heart of my program again. This time, I'm opening the file and looking at the content:
foreach my $file_name (sort keys %file_hash) {
    if (scalar (@{$file_hash{$file_name}}) > 1) {
        #say qq(Duplicate File: "$file_name");
        foreach my $dir_name (@{$file_hash{$file_name}}) {
            #say " $dir_name";
            open (my $fh, "<", "$dir_name/$file_name")
                or die qq(Can't open file "$dir_name/$file_name" for reading);
            # Process your file here...
            close $fh;
        }
    }
}
If you are only looking for certain files, you could modify the wanted function to skip over files you don't want. For example, here I am only looking for files which match the file*.txt pattern. Note I use a regular expression of /^file.*\.txt$/ to match the name of the file. As you can see, it's the same as the previous wanted subroutine. The only difference is my test: I'm looking for something that is a file (-f) and has the correct name (file*.txt):
sub wanted {
    return unless -f $_ and /^file.*\.txt$/;
    if (not exists $file_hash{$_}) {
        $file_hash{$_} = [];
    }
    push @{$file_hash{$_}}, $File::Find::dir;
}
If you are looking at the file contents, you can use the MD5 hash to determine if the file contents match or don't match. This reduces a file to a mere string of 16 to 28 characters which could even be used as a hash key instead of the file name. This way, files that have matching MD5 hashes (and thus matching contents) would be in the same hash list.
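As a rough sketch of that idea (not part of the original answer; file selection logic is omitted), the wanted callback could key the hash by the MD5 digest of the contents instead of by the file name:
use Digest::MD5;

my %content_hash;   # MD5 hex digest => list of paths with identical contents

sub wanted_by_content {
    return unless -f $_;
    open my $fh, '<', $_ or return;
    binmode $fh;
    my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;
    push @{ $content_hash{$digest} }, $File::Find::name;
}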
You talk about a "master list" of files and it seems you have the idea that this master list needs to match the content of the file you're looking for. So, I'm making a slight mod in my program. I am first taking that master list you talked about, and generating MD5 sums for each file. Then I'll look at all the files in that directory, but only take the ones with the matching MD5 hash...
By the way, this has not been tested.
#! /usr/bin/env perl
#
use strict;
use warnings;
use feature qw(say);   #Can use `say`, which is like `print` plus a trailing "\n"
use File::Find;        #Implements Unix `find` command.
use Digest::file qw(digest_file_hex);

use constant DIR             => "/nfs/disks/version_2.0";
use constant MASTER_LIST_DIR => "/some/directory";

# First, I'm going through the MASTER_LIST_DIR directory
# and finding all of the master list files. I'm going to take
# the MD5 hash of those files, and store them in a Perl hash
# that's keyed by the name of the file. Thus, when I find a
# file with a matching name, I can compare the MD5 of that file
# and the master file. If they match, the files are the same. If
# not, they're different.
# In this example, I'm inlining the function I use to find the files
# instead of making it a separate function.
my %master_hash;
find (
    sub {
        $master_hash{$_} = digest_file_hex($_, "MD5") if -f;
    },
    MASTER_LIST_DIR
);
# Now I have the MD5 of all the master files, I'm going to search my
# DIR directory for the files that have the same MD5 hash as the
# master list files did. If they do have the same MD5 hash, I'll
# print out their names as before.
my %file_hash;
find (\&wanted, DIR);
# Print out all the duplicates
foreach my $file_name (sort keys %file_hash) {
    if (scalar (@{$file_hash{$file_name}}) > 1) {
        say qq(Duplicate File: "$file_name");
        foreach my $dir_name (@{$file_hash{$file_name}}) {
            say " $dir_name";
        }
    }
}
# The wanted function has been modified since the last example.
# Here, I'm only going to put files in the %file_hash if they
# have the same MD5 hash as the master list file of the same name.
sub wanted {
    if (-f $_ and exists $master_hash{$_}
            and digest_file_hex($_, "MD5") eq $master_hash{$_}) {
        $file_hash{$_} //= []; #Using TLP's syntax hint
        push @{$file_hash{$_}}, $File::Find::dir;
    }
}

Perl script to find all unowned files and directories on Unix - How can I optimize further? [closed]

Following my findings and suggestions in my other post How to exclude a list of full directory paths in find command on Solaris, I have decided to write a Perl version of this script and see how I could optimize it to run faster than a native find command. So far, the results are impressive!
The purpose of this script is to report all unowned files and directories on a Unix system for audit compliance. The script has to accept a list of directories and files to exclude (either by full path or wildcard name), and must take as little processing power as possible. It is meant to be run on hundreds of Unix systems that we (the company I work for) support, and has to be able to run on all those Unix systems (multiple OSes, multiple platforms: AIX, HP-UX, Solaris and Linux) without us having to install or upgrade anything first. In other words, it has to run with the standard libraries and binaries we can expect on all systems.
I have not yet made the script argument-aware, so all arguments are hard-coded in the script. I plan on having the following arguments in the end and will probably use getopts to do it:
-d = comma delimited list of directories to exclude by path name
-w = comma delimited list of directories to exclude by basename or wildcard
-f = comma delimited list of files to exclude by path name
-i = comma delimited list of files to exclude by basename or wildcard
-t:list|count = Defines the type of output I want to see (list of all findings, or summary with count per directory)
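A minimal Getopt::Long sketch of that interface (the option letters come from the list above; everything else, including the comma-splitting, is an assumption on my part):
use strict;
use warnings;
use Getopt::Long;

my (@excl_dir_paths, @excl_dir_globs, @excl_file_paths, @excl_file_globs);
my $output_type = 'list';

GetOptions(
    'd=s' => sub { push @excl_dir_paths,  split /,/, $_[1] },
    'w=s' => sub { push @excl_dir_globs,  split /,/, $_[1] },
    'f=s' => sub { push @excl_file_paths, split /,/, $_[1] },
    'i=s' => sub { push @excl_file_globs, split /,/, $_[1] },
    't=s' => \$output_type,
) or die "Usage: $0 [-d paths] [-w globs] [-f paths] [-i globs] [-t list|count]\n";

die "-t must be 'list' or 'count'\n"
    unless $output_type eq 'list' or $output_type eq 'count';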
Here is the source I have so far:
#! /usr/bin/perl
use strict;
use File::Find;
# Full paths of directories to prune
my #exclude_dirs = ('/dev','/proc','/home');
# Basenames or wildcard names of directories I want to prune
my $exclude_dirs_wildcard = '.svn';
# Full paths of files I want to ignore
my #exclude_files = ('/tmp/test/dir3/.svn/svn_file1.txt','/tmp/test/dir3/.svn/svn_file2.txt');
# Basenames of wildcard names of files I want to ignore
my $exclude_files_wildcard = '*.tmp';
my %dir_globs = ();
my %file_globs = ();
# Results will be stored in this hash
my %found = ();
# Used for storing uid's and gid's present on system
my %uids = ();
my %gids = ();
# Callback function for find
sub wanted {
    my $dir = $File::Find::dir;
    my $name = $File::Find::name;
    my $basename = $_;
    # Ignore symbolic links
    return if -l $name;
    # Search for wildcards if dir was never searched before
    if (!exists($dir_globs{$dir})) {
        @{$dir_globs{$dir}} = glob($exclude_dirs_wildcard);
    }
    if (!exists($file_globs{$dir})) {
        @{$file_globs{$dir}} = glob($exclude_files_wildcard);
    }
    # Prune directory if present in exclude list
    if (-d $name && in_array(\@exclude_dirs, $name)) {
        $File::Find::prune = 1;
        return;
    }
    # Prune directory if present in dir_globs
    if (-d $name && in_array(\@{$dir_globs{$dir}}, $basename)) {
        $File::Find::prune = 1;
        return;
    }
    # Ignore excluded files
    return if (-f $name && in_array(\@exclude_files, $name));
    return if (-f $name && in_array(\@{$file_globs{$dir}}, $basename));
    # Check ownership and add to the hash if unowned (uid or gid does not exist on system)
    my ($dev,$ino,$mode,$nlink,$uid,$gid) = stat($name);
    if (!exists $uids{$uid} || !exists($gids{$gid})) {
        push(@{$found{$dir}}, $basename);
    } else {
        return;
    }
}

# Standard in_array perl implementation
sub in_array {
    my ($arr, $search_for) = @_;
    my %items = map {$_ => 1} @$arr;
    return (exists($items{$search_for})) ? 1 : 0;
}
# Get all uid's that exists on system and store in %uids
sub get_uids {
while (my ($name, $pw, $uid) = getpwent) {
$uids{$uid} = 1;
}
}
# Get all gid's that exists on system and store in %gids
sub get_gids {
while (my ($name, $pw, $gid) = getgrent) {
$gids{$gid} = 1;
}
}
# Print a list of unowned files in the format PARENT_DIR,BASENAME
sub print_list {
    foreach my $dir (sort keys %found) {
        foreach my $child (sort @{$found{$dir}}) {
            print "$dir,$child\n";
        }
    }
}

# Prints a list of directories with the count of unowned children in the format DIR,COUNT
sub print_count {
    foreach my $dir (sort keys %found) {
        print "$dir,".scalar(@{$found{$dir}})."\n";
    }
}
# Call it all
&get_uids();
&get_gids();
find(\&wanted, '/');
print "List:\n";
&print_list();
print "\nCount:\n";
&print_count();
exit(0);
If you want to test it on your system, simply create a test directory structure with generic files, chown the whole tree with a test user you create for this purpose, and then delete the user.
I'll take any hints, tips or recommendations you could give me.
Happy reading!
Try starting with these, then see if there's anything more you can do.
Use hashes instead of the arrays that need to be searched using in_array(). This is so you can do a direct hash lookup in one step instead of converting the entire array to a hash for every iteration.
You don't need to check for symlinks because they will be skipped since you have not set the follow option.
Maximise your use of _; avoid repeating IO operations. _ is a special filehandle where the file status information is cached whenever you call stat() or any file test. This means you can call stat _ or -f _ instead of stat $name or -f $name. (Calling -f _ is more than 1000x faster than -f $name on my machine because it uses the cache instead of doing another IO operation.)
Use the Benchmark module to test out different optimisation strategies to see if you actually gain anything. E.g.
use Benchmark;
stat 'myfile.txt';
timethese(100_000, {
a => sub {-f _},
b => sub {-f 'myfile.txt'},
});
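To make the first and third suggestions concrete, here is a hedged sketch (variable names taken from the question's script, exclusion logic simplified to path lookups only) of how the hash lookups and the cached stat buffer might look inside wanted():
use strict;
use warnings;
use File::Find;

# Build the lookup hashes once, up front (paths copied from the question).
my %exclude_dirs  = map { $_ => 1 } ('/dev', '/proc', '/home');
my %exclude_files = map { $_ => 1 } ('/tmp/test/dir3/.svn/svn_file1.txt',
                                     '/tmp/test/dir3/.svn/svn_file2.txt');

sub wanted {
    my $name = $File::Find::name;

    # One stat(); every later -d/-f on _ reuses the cached result.
    my ($dev, $ino, $mode, $nlink, $uid, $gid) = stat($name) or return;

    if (-d _ && exists $exclude_dirs{$name}) {   # direct hash lookup, no in_array()
        $File::Find::prune = 1;
        return;
    }
    return if -f _ && exists $exclude_files{$name};

    # ... ownership check on $uid/$gid as in the original script ...
}

find(\&wanted, '/');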
A general principle of performance tuning is to find out exactly where the slow parts are before you try to tune them (because the slow parts might not be where you expect them to be). My recommendation is to use Devel::NYTProf, which can generate an html profile report for you. From the synopsis, on how to use it (from the command line):
# profile code and write database to ./nytprof.out
perl -d:NYTProf some_perl.pl
# convert database into a set of html files, e.g., ./nytprof/index.html
# and open a web browser on the nytprof/index.html file
nytprofhtml --open

How can I remove non-unique lines from a large file with Perl?

Duplicate data removal using Perl, called via a batch file within Windows.
A DOS window in Windows is opened via a batch file.
The batch file calls the Perl script which carries out the actions. I have the batch file.
The script I have works for duplicate data removal so long as the data file is not too big.
The problem that needs resolving is with larger data files (2 GB or more): with files of this size a memory error occurs when trying to load the complete file into an array for duplicate data removal.
The memory error occurs in the subroutine at:
@contents_of_the_file = <INFILE>;
(A completely different method is acceptable so long as it solves this issue, please suggest).
The subroutine is:
sub remove_duplicate_data_and_file
{
    open(INFILE,"<" . $output_working_directory . $output_working_filename) or dienice ("Can't open $output_working_filename : INFILE :$!");
    if ($test ne "YES")
    {
        flock(INFILE,1);
    }
    @contents_of_the_file = <INFILE>;
    if ($test ne "YES")
    {
        flock(INFILE,8);
    }
    close (INFILE);
    ### TEST print "$#contents_of_the_file\n\n";
    @unique_contents_of_the_file = grep(!$unique_contents_of_the_file{$_}++, @contents_of_the_file);
    open(OUTFILE,">" . $output_restore_split_filename) or dienice ("Can't open $output_restore_split_filename : OUTFILE :$!");
    if ($test ne "YES")
    {
        flock(OUTFILE,1);
    }
    for($element_number=0;$element_number<=$#unique_contents_of_the_file;$element_number++)
    {
        print OUTFILE "$unique_contents_of_the_file[$element_number]\n";
    }
    if ($test ne "YES")
    {
        flock(OUTFILE,8);
    }
}
You are unnecessarily storing a full copy of the original file in @contents_of_the_file and -- if the amount of duplication is low relative to the file size -- nearly two other full copies in %unique_contents_of_the_file and @unique_contents_of_the_file. As ire_and_curses noted, you can reduce the storage requirements by making two passes over the data: (1) analyze the file, storing information about the line numbers of non-duplicate lines; and (2) process the file again to write non-dups to the output file.
Here is an illustration. I don't know whether I've picked the best module for the hashing function (Digest::MD5); perhaps others will comment on that. Also note the 3-argument form of open(), which you should be using.
use strict;
use warnings;
use Digest::MD5 qw(md5);
my (%seen, %keep_line_nums);
my $in_file = 'data.dat';
my $out_file = 'data_no_dups.dat';
open (my $in_handle, '<', $in_file) or die $!;
open (my $out_handle, '>', $out_file) or die $!;
while ( defined(my $line = <$in_handle>) ){
my $hashed_line = md5($line);
$keep_line_nums{$.} = 1 unless $seen{$hashed_line};
$seen{$hashed_line} = 1;
}
seek $in_handle, 0, 0;
$. = 0;
while ( defined(my $line = <$in_handle>) ){
print $out_handle $line if $keep_line_nums{$.};
}
close $in_handle;
close $out_handle;
You should be able to do this efficiently using hashing. You don't need to store the data from the lines, just identify which ones are the same. So...
Don't slurp - Read one line at a time.
Hash the line.
Store the hashed line representation as a key in a Perl hash of lists. Store the line number as the first value of the list.
If the key already exists, append the duplicate line number to the list corresponding to that value.
At the end of this process, you'll have a data-structure identifying all the duplicate lines. You can then do a second pass through the file to remove those duplicates.
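A rough sketch of that first pass (hedged; it hashes with Digest::MD5, which the answer above also uses, and keeps only line numbers in memory; the file name is assumed):
use strict;
use warnings;
use Digest::MD5 qw(md5);

my %lines_by_digest;    # md5 of line content => [ line numbers with that content ]

open my $in, '<', 'data.dat' or die $!;
while ( defined(my $line = <$in>) ) {
    push @{ $lines_by_digest{ md5($line) } }, $.;   # $. is the current line number
}
close $in;

# Every array with more than one element lists a set of duplicate lines;
# a second pass over the file can then skip all but the first of each set.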
Perl does heroic things with large files, but 2GB may be a limitation of DOS/Windows.
How much RAM do you have?
If your OS doesn't complain, it may be best to read the file one line at a time, and write immediately to output.
I'm thinking of something using the diamond operator <> but I'm reluctant to suggest any code because on the occasions I've posted code, I've offended a Perl guru on SO.
I'd rather not risk it. I hope the Perl cavalry will arrive soon.
In the meantime, here's a link.
Here's a solution that works no matter how big the file is. But it doesn't use RAM exclusively, so it's slower than a RAM-based solution. You can also specify the amount of RAM you want this thing to use.
The solution uses a temporary file that the program treats as a database with SQLite.
#!/usr/bin/perl
use DBI;
use Digest::SHA 'sha1_base64';
use Modern::Perl;
my $input= shift;
my $temp= 'unique.tmp';
my $cache_size_in_mb= 100;
unlink $temp if -f $temp;
my $cx= DBI->connect("dbi:SQLite:dbname=$temp");
$cx->do("PRAGMA cache_size = " . $cache_size_in_mb * 1000);
$cx->do("create table x (id varchar(86) primary key, line int unique)");
my $find= $cx->prepare("select line from x where id = ?");
my $list= $cx->prepare("select line from x order by line");
my $insert= $cx->prepare("insert into x (id, line) values(?, ?)");
open(FILE, $input) or die $!;
my ($line_number, $next_line_number, $line, $sha)= 1;
while($line= <FILE>) {
$line=~ s/\s+$//s;
$sha= sha1_base64($line);
unless($cx->selectrow_array($find, undef, $sha)) {
$insert->execute($sha, $line_number)}
$line_number++;
}
seek FILE, 0, 0;
$list->execute;
$line_number= 1;
$next_line_number= $list->fetchrow_array;
while($line= <FILE>) {
$line=~ s/\s+$//s;
if($next_line_number == $line_number) {
say $line;
$next_line_number= $list->fetchrow_array;
last unless $next_line_number;
}
$line_number++;
}
close FILE;
Well, you could use the in-place edit mode of command-line Perl:
perl -i~ -ne 'print unless $seen{$_}++' uberbigfilename
In the "completely different method" category, if you've got Unix commands (e.g. Cygwin):
cat infile | sort | uniq > outfile
This ought to work - no need for Perl at all - which may, or may not, solve your memory problem. However, you will lose the ordering of the infile (as outfile will now be sorted).
EDIT: An alternative solution that's better able to deal with large files would be to use the following algorithm:
Read INFILE line-by-line
Hash each line to a small hash value (e.g. a hash number mod 10)
Append each line to a file unique to the hash number (e.g. tmp-1 to tmp-10)
Close INFILE
Open and sort each tmp-# to a new file sortedtmp-#
Mergesort sortedtmp-[1-10] (i.e. open all 10 files and read them simultaneously), skipping duplicates and writing each iteration to the final output file
This will be safer, for very large files, than slurping.
Parts 2 & 3 could be changed to use a random number instead of a hash number mod 10.
Here's a script BigSort that may help (though I haven't tested it):
# BigSort
#
# sort big file
#
# $1 input file
# $2 output file
#
# equ sort -t";" -k 1,1 $1 > $2
BigSort()
{
if [ -s $1 ]; then
rm $1.split.* > /dev/null 2>&1
split -l 2500 -a 5 $1 $1.split.
rm $1.sort > /dev/null 2>&1
touch $1.sort1
for FILE in `ls $1.split.*`
do
echo "sort $FILE"
sort -t";" -k 1,1 $FILE > $FILE.sort
sort -m -t";" -k 1,1 $1.sort1 $FILE.sort > $1.sort2
mv $1.sort2 $1.sort1
done
mv $1.sort1 $2
rm $1.split.* > /dev/null 2>&1
else
# work for empty file !
cp $1 $2
fi
}