Recursively delete files using a Perl script - perl

I would like to delete all the files in a directory using only "rmdir" in a Perl script.
I'm trying to clean up a directory first and then write a file into it.
I know I can just use rmtree("directory path");, but I'm unable to use that on an FTP server (use Net::FTP;), and rmdir only works on empty directories.
I have tried "remove_tree" and "rm -rf". I do have read/write access to the server, but I'm unable to delete files.
Perl script:
finddepth (\&remove_dir, "$path");
rmdir ( "$path" ) or die ("Could not remove $path");

sub remove_dir
{
    # for a path, this will be 0
    if ( ! (stat("$File::Find::name"))[7] )
        { $ftp->rmdir("$File::Find::name"); }
    else
        { $ftp->unlink("$File::Find::name"); }
}

Net::FTP's rmdir takes an optional RECURSE flag; when it is true, the module deletes everything inside the directory for you, so there is no need to walk the tree yourself:
$ftp->rmdir($dir_name, 1);
https://metacpan.org/pod/Net::FTP#rmdir-DIR-RECURSE
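For context, a minimal end-to-end sketch (host, credentials, and the target path are placeholders, not values from the question):

use strict;
use warnings;
use Net::FTP;

# Hypothetical connection details; substitute your own.
my $ftp = Net::FTP->new('ftp.example.com', Debug => 0)
    or die "Cannot connect: $@";
$ftp->login('user', 'password')
    or die "Cannot login: ", $ftp->message;

# RECURSE => 1 deletes the directory and everything beneath it on
# the remote server, files and subdirectories alike.
$ftp->rmdir('/path/to/directory', 1)
    or die "Cannot remove directory: ", $ftp->message;

$ftp->quit;

Note that the finddepth approach in the question walks the local filesystem while issuing remote FTP commands, which is why it never matches the server's files.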

Related

Code fails with stdin from SED

I have a bash script that finds files with a particular extension and then passes each file into a function that checks every line of the file to determine whether it imports a particular library. For example:
function testing() {
    while IFS='' read -r line; do
        if [[ "$line" =~ .*log\" ]]; then
            echo "log is imported in the file" $1
            break
        else
            echo "log is not imported in the file" $1
            break
        fi
    done < <(sed -n '/import (/,/)/p' "$1")
}
function main() {
    for file in $(find "$1" -name "*.go"); do
        if [[ $file == *test.go ]]; then
            :
        else
            var1=$(testing $file)
            echo "$var1"
        fi
    done;
}
main $1
The problem is that the script works without the else block in the testing function, but with the else block introduced it just defaults to echoing "log is not imported in the file blah", even when log is used in some of the files.
Any ideas on what the problem is?
Thanks.
Here is a sample input file:
package main

import (
    "fmt"
    "io/ioutil"
    logger "log"
    "net/http"
)

type webPage struct {
    url  string
    body []byte
    err  error
}
...
And the output is basically to echo whether log is imported or not.
You need to rewrite the logic of your testing function, as it only ever tests the first line of the input. Each branch of the if [[ "$line" =~ .*log\" ]] contains a break statement, so in practice a break is reached as soon as the first line is read. Break only on a positive match, and print the "not imported" message after the loop has finished without finding one, as in the sketch below.
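Since this page's main language is Perl, here is the same scan-then-decide pattern as a minimal Perl sketch (the filename is a placeholder):

use strict;
use warnings;

my $file = 'main.go';    # hypothetical input; substitute a real path

open my $fh, '<', $file or die "cannot open $file: $!";
my $in_imports = 0;
my $found      = 0;
while (my $line = <$fh>) {
    # Track the import ( ... ) block, like the sed range expression.
    $in_imports = 1 if $line =~ /import \(/;
    # Exit early only on a positive match; the negative verdict must
    # wait until the whole block has been scanned.
    if ($in_imports && $line =~ /log"/) {
        $found = 1;
        last;
    }
    last if $in_imports && $line =~ /^\)/;
}
close $fh;

print $found
    ? "log is imported in the file $file\n"
    : "log is not imported in the file $file\n";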

'append' option in Net::SFTP::Foreign->get() is not working as expected

I am copying a file from a remote machine to the local one, and this operation is performed once every day. When content has been appended to the remote file, I want to copy just the appended content to the local file (as the rest already exists on the local machine). I am using the Net::SFTP::Foreign module from CPAN, but it seems to copy the full file in the append case (which is not what I expected).
use strict;
use warnings;
use Net::SFTP::Foreign;

my $file        = '/home/user/temp/test.txt';
my $destination = '/home/user/dest.txt';

my $sftp = Net::SFTP::Foreign->new(
    host => 'localhost',    # using localhost for destination and source
    more => [ -o => 'Compression yes', '-v' ]
);

$sftp->get( $file, $destination, copy_perm => 1, append => 1 );
if ($sftp->error) {
    print "get operation failed for $file : " . $sftp->error . "\n";
}
I checked the Net/SFTP/Foreign.pm module for the get() implementation and found the snippet below that handles the append case:
my $flags = Fcntl::O_CREAT|Fcntl::O_WRONLY;
$flags |= Fcntl::O_APPEND if $append;
$lstart = sysseek($fh, 0, 1) if $append;
In the append case, $lstart contains only 0, which is the beginning of the file. Am I missing something here?
Thanks for your comments. I actually found the reason why it was not working properly: it kept overwriting the local file with the remote one.
But when I use the code below:
$sftp->get(
    $file,
    '/home/user/test.log',
    append    => 1,
    overwrite => 0,
);
it no longer overwrites the file, but it appends the whole remote file to the local one, while I wanted to append only the text that was added to the remote file, not the whole file.
This feature is not supported by Net::SFTP::Foreign.
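One footnote to that conclusion: the module does document a resume option for get(), which continues a transfer from the current size of the local file. When the local copy is an unmodified byte-for-byte prefix of the remote file (the pure-append case described here), something like the sketch below may achieve the desired effect; treat the exact semantics as an assumption to verify against your module version:

use strict;
use warnings;
use Net::SFTP::Foreign;

my $sftp = Net::SFTP::Foreign->new(host => 'localhost');

# resume => 1 asks get() to skip the bytes already present locally
# and fetch only the remote file's tail. This assumes the local
# file has not diverged from the remote one.
$sftp->get('/home/user/temp/test.txt', '/home/user/dest.txt',
           resume => 1);
die "get failed: " . $sftp->error if $sftp->error;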

Could not open file perl

I am trying to convert plist files into JUnit-style XML files. I have an XSL stylesheet which converts the plist to JUnit/ANT XML.
Here is the Perl code which I run to convert the plist to XML:
my $parser = XML::LibXML->new();
my $xslt = XML::LibXSLT->new();
my $stylesheet = $xslt->parse_stylesheet_file("\\\~/Hudson/build/workspace/ui-automation/automation\\\ test\\\ suite/plist2junit.xsl");
my $counter = 1;
my @plistFiles = glob('Logs/*/*.plist');

foreach (@plistFiles) {
    # Escape the file path and specify absolute path
    my $plistFile = $_;
    $plistFile =~ s/([ ()])/\\$1/g;
    $path2plist = "\\\~/Hudson/build/workspace/ui-automation/automation\\\ test\\\ suite/$plistFile";
    # transform the plist file to xml
    my $source = $parser->parse_file($path2plist);
    my $results = $stylesheet->transform($source);
    my $resultsFile = "\\\~/Hudson/build/workspace/ui-automation/automation\\\ test\\\ suite/JUnit/results$counter.xml";
    # create the output file
    unless (open FILE, '>'.$resultsFile) {
        # Die with error message
        die "\nUnable to create $resultsFile\n";
    }
    # Write results to the file.
    $stylesheet->output_file($results, FILE);
    close FILE;
    $counter++;
}
After running the Perl script on Hudson/Jenkins, it outputs this error message:
Couldn't open ~/Hudson/build/workspace/ui-automation/automation\ test\ suite/Logs/Run\ 1/Automation\ Results.plist: No such
file or directory
The error is caused by my $source = $parser->parse_file($path2plist); in the code. I am unable to figure out why it cannot find/read the file.
Anyone know what might be causing the error?
There are three obvious errors in the path mentioned in the error message.
~/Hudson/build/workspace/ui-automation/automation\ test\ suite/Logs/Run\ 1/Automation\ Results.plist
Those are:
There's no directory named ~ in the current directory. Perhaps you meant to use the value of $ENV{HOME} there?
There's no directory named automation\ test\ suite anywhere on your disk, but there is probably one named automation test suite.
Similarly, there's no directory named Run\ 1 anywhere on your disk, but there is probably one named Run 1.
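A minimal sketch of building the path without any shell-style escaping (the directory layout is taken from the question; adjust as needed):

use strict;
use warnings;
use File::Spec;
use XML::LibXML;

my $parser = XML::LibXML->new();

# Expand the home directory in Perl; "~" is a shell convention that
# parse_file() does not understand.
my $base = File::Spec->catdir(
    $ENV{HOME}, 'Hudson', 'build', 'workspace',
    'ui-automation', 'automation test suite'
);

# Spaces need no backslash escaping here: the path goes straight to
# the filesystem, not through a shell.
my $path2plist = File::Spec->catfile(
    $base, 'Logs', 'Run 1', 'Automation Results.plist'
);
my $source = $parser->parse_file($path2plist);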

What does this Perl crash mean?

Can someone tell me what this means?
if (not defined $config{'crontab'}) {
    die "no crontab defined!";
}
I want to open a file called crontab.txt, but the Perl script crashes at this line and I don't really know any Perl.
EDIT 1
It goes like this:
sub main()
{
    my %config = %{getCommandLineOptions()};
    my $programdir = File::Spec->canonpath ( (fileparse ( Win32::GetFullPathName($PROGRAM_NAME) ))[1] );
    my $logdir = File::Spec->catdir ($programdir, 'logs');
    $logfile = File::Spec->catfile ($logdir, 'cronw.log');
    configureLogger($logfile);
    $log = get_logger("cronw::cronService-pl");

    # if --exec option supplied, we are being invoked to execute a job
    if ($config{exec}) {
        execJob(decodeArgs($config{exec}), decodeArgs($config{args}));
        return;
    }

    my $cronfile = $config{'crontab'};
    $log->info('starting service');
    $log->debug('programdir: '.$programdir);
    $log->debug('logfile: '.$logfile);
    if (not defined $config{'crontab'}) {
        $log->error("no crontab defined!\n");
        die "no crontab defined!";
        # fixme: crontab detection?
    }
    $log->debug('crontab: '.$config{'crontab'});
And I'm trying to load this 'crontab.txt' file...
sub getCommandLineOptions()
{
    my $clParser = new Getopt::Long::Parser config => ["gnu_getopt", "pass_through"];
    my %config = ();
    my @parameter = ( 'crontab|cronfile=s',
                      'exec=s',
                      'args=s',
                      'v|verbose'
                    );
    $clParser->getoptions (\%config, @parameter);
    if (scalar (@ARGV) != 0) { $config{'unknownParameter'} = $true; }
    return \%config;
}
Probably I have to give the script an argument
Probably I have to give the script an argument
I would say so.
$ script --cronfile=somefile
That code looks to see whether there is a key 'crontab' in the hash %config. If not, then it calls die and terminates.
If that's not what you expect to happen, then somewhere else in your script there should be something that is setting $config{'crontab'}, but there is not currently enough information in your question to determine what that might be.
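To make that concrete, here is a minimal sketch of the same option handling (the printed message is illustrative):

use strict;
use warnings;
use Getopt::Long;

my %config;
GetOptions(\%config, 'crontab|cronfile=s') or die "bad options\n";

# Without --crontab (or its --cronfile alias) on the command line,
# the 'crontab' key is simply absent and the die below fires.
die "no crontab defined!\n" unless defined $config{'crontab'};
print "will read cron entries from $config{'crontab'}\n";

Run as perl script.pl --cronfile=crontab.txt the check passes; run with no arguments it dies exactly like the question's script.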
Probably the file path of crontab.txt is expected in the %config hash, under the 'crontab' key, but it isn't there! If so, a DIRTY solution CAN BE:
$config{'crontab'} = 'FULLPATH/crontab.txt';
#if (not defined $config{'crontab'}) {
#    die "no crontab defined!";
#}
but this may not work, because there may be something like $config{'prefix'} and what you will try to open is the concatenation of both, or simply because $config{'crontab'} is expected to hold something other than a full path.

How can my previously untainted data become tainted again?

I have a bit of a mystery here that I am not quite understanding the root cause of. I am getting an 'Insecure dependency in unlink while running with -T switch' when trying to invoke unlink from a script. That is not the mystery, as I realize that this means Perl is saying I am trying to use tainted data. The mystery is that this data was previously untainted in another script that saved it to disk without any problems.
Here's how it goes... The first script creates a binary file name using the following:
# For the binary file upload
my $extensioncheck = '';
my $safe_filename_characters = "a-zA-Z0-9_.";
if ( $item_photo )
{
    # Allowable File Type Check
    my ( $name, $path, $extension ) = fileparse ( $item_photo, '\..*' );
    $extensioncheck = lc($extension);
    if (( $extensioncheck ne ".jpg" ) && ( $extensioncheck ne ".jpeg" ) &&
        ( $extensioncheck ne ".png" ) && ( $extensioncheck ne ".gif" ))
    {
        die "Your photo file is in a prohibited file format.";
    }
    # Rename file to Ad ID for adphoto directory use and untaint
    $item_photo = join "", $adID, $extensioncheck;
    $item_photo =~ tr/ /_/;
    $item_photo =~ s/[^$safe_filename_characters]//g;
    if ( $item_photo =~ /^([$safe_filename_characters]+)$/ ) { $item_photo = $1; }
    else { die "Filename contains invalid characters"; }
}
$adID is generated by the script itself using a localtime(time) function, so it should not be tainted. $item_photo is reassigned using $adID and $extensioncheck BEFORE the taint check, so the new $item_photo is now untainted. I know this because $item_photo itself causes no problem with unlink later in the script. $item_photo is only used long enough to create three other image files using ImageMagick before it's tossed with the unlink function. The three filenames created from the ImageMagick processing of $item_photo are created simply, like so.
$largepicfilename = $adID . "_large.jpg";
$adpagepicfilename = $adID . "_adpage.jpg";
$thumbnailfilename = $adID . "_thumbnail.jpg";
The paths are prepended to the new filenames to create the URLs, and are defined at the top of the script, so they can't be tainted either. The URLs for these files are generated like so.
my $adpageURL = join "", $adpages_dir_URL, $adID, '.html';
my $largepicURL = join "", $adphotos_dir_URL, $largepicfilename;
my $adpagepicURL = join "", $adphotos_dir_URL, $adpagepicfilename;
my $thumbnailURL = join "", $adphotos_dir_URL, $thumbnailfilename;
Then I write them to the record, knowing everything is untainted.
Now comes the screwy part. In a second script I read these files in to be deleted using the unlink function, and this is where I get my 'Insecure dependency' flag.
# Read in the current Ad Records Database
open (ADRECORDS, $adrecords_db) || die("Unable to Read Ad Records Database");
flock(ADRECORDS, LOCK_SH);
seek (ADRECORDS, 0, SEEK_SET);
my @adrecords_data = <ADRECORDS>;
close(ADRECORDS);

# Find the Ad in the Ad Records Database
ADRECORD1:foreach $AdRecord (@adrecords_data)
{
    chomp($AdRecord);
    my ($adID_In, $adpageURL_In, $largepicURL_In, $adpagepicURL_In, $thumbnailURL_In) = split(/\|/, $AdRecord);
    if ($flagadAdID ne $adID_In) { $AdRecordArrayNum++; next ADRECORD1 }
    else
    {
        # Delete the Ad Page and Ad Page Images
        unlink ("$adpageURL_In");
        unlink ("$largepicURL_In");
        unlink ("$adpagepicURL_In");
        unlink ("$thumbnailURL_In");
        last ADRECORD1;
    }
}
I know I can just untaint them again, or even just pass them through knowing that the data is safe, but that is not the point. What I want is to understand WHY this is happening in the first place, as I do not see how this previously untainted data has become tainted again. Any help pointing out where I am missing the connection would be truly appreciated, because I really want to understand this rather than just write a hack to get around it.
Saving data to a file doesn't save any "tainted" bit with the data. It's just data, coming from an external source, so when Perl reads it it becomes automatically tainted. In your second script, you will have to explicitly untaint the data.
After all, some other malicious program could have changed the data in the file before the second script has a chance to read it.
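A minimal sketch of that explicit untainting, reusing the whitelist pattern from the first script (the character class is an assumption; widen it to cover every character your real paths can contain):

# Re-untaint each path read from the records file before unlink.
my $safe_path_characters = "a-zA-Z0-9_./-";
foreach my $path ($adpageURL_In, $largepicURL_In, $adpagepicURL_In, $thumbnailURL_In) {
    if ($path =~ /^([$safe_path_characters]+)$/) {
        # $1 comes from a regex capture, so Perl considers it untainted.
        unlink($1) or warn "Could not unlink $1: $!";
    }
    else {
        warn "Skipping suspicious path: $path";
    }
}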