I receive the following
yes: standard output: broken pipe
yes: write error
when executing the following line in perl
system("cd /tmp; yes | rm -r directory1 directory2 file3 > /dev/null 2>&1");
The files and directories still get deleted, but the message is annoying and I would prefer not to see it.
Any help? Thanks.
Not really a Perl question.
The error comes from yes, so you would have to redirect the error stream of yes, too.
... ; yes 2>/dev/null | rm -r ...
But why don't you forget about yes and try rm -rf ... ?
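For example, either fix applied to the original call (paths taken from the question):
# silence yes's stderr as well as the pipeline's
system("cd /tmp; yes 2>/dev/null | rm -r directory1 directory2 file3 >/dev/null 2>&1");
# or skip yes entirely; -f never prompts
system("cd /tmp; rm -rf directory1 directory2 file3 >/dev/null 2>&1");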
You say Perl, but your question says Unix shell.
If you're going to use Perl, do it in Perl:
use File::Path qw(remove_tree);
use Cwd;
my $current_directory = getcwd();
chdir "tmp";
remove_tree("directory1", "directory2, "file3");
chdir $current_directory;
This will do the same thing as your system command. But, this will work not only on Unix systems, but also on strange and peculiar operating systems that no man should be subject to (Cough, Windows! Cough).
It also will work even if someone has created an alias rm command, or has implemented their own rm command. Whenever you shell out via the system command, you leave that nice safe Perl environment and head directly into the dangerous neighborhood of the Operating System. Even worse, you're placing your life into the hands of a shell that might not be the shell you use either.
The module File::Path is a standard module that comes with Perl. The same goes for Cwd, which I used to save your original directory so we can come back to it at the end of the code.
By the way, you probably want some error checking there too:
use File::Path qw(remove_tree);
use Cwd;
my $current_directory = getcwd();
if chdir "tmp" {
remove_tree("directory1", "directory2, "file3");
chdir $current_directory;
}
else {
print "Whoops! I can't change to directory 'tmp'"
}
This way, you only do the remove_tree if you've actually changed into /tmp.
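remove_tree can also report per-file problems itself via its error option instead of dying partway through; a minimal sketch, assuming File::Path 2.07 or later:
use File::Path qw(remove_tree);

# collect errors rather than having remove_tree carp about them
remove_tree("directory1", "directory2", "file3", { error => \my $err });
for my $diag (@$err) {
    my ($file, $message) = %$diag;
    warn $file eq '' ? "general error: $message\n"
                     : "problem unlinking $file: $message\n";
}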
If this is your temporary working directory (that is, you created it so you have a place to put files), you should look at the File::Temp module.
File::Temp (which is also a standard Perl module) can create the temporary directory for you, and makes sure that it's unique, so you're not trampling over someone else running your program at the same time. Even better, it will also clean up after itself and delete that temporary directory once complete.
use File::Temp;
my $temp_dir = File::Temp->newdir;
Now, you can put all of your temporary files into $temp_dir. When the program ends, the temporary directory and all files you placed there are automatically deleted.
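A minimal sketch of putting it to use (the file name work.txt is just an illustration):
use File::Temp;
use File::Spec;

my $temp_dir = File::Temp->newdir;   # unique directory, cleaned up on exit

# the object stringifies to the directory's path
my $scratch = File::Spec->catfile("$temp_dir", "work.txt");
open my $fh, '>', $scratch or die "Cannot create $scratch: $!";
print $fh "intermediate results\n";
close $fh;
# when $temp_dir goes out of scope, the directory and its contents are deleted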
Although not part of the standard Perl functions, these modules are part of Perl's distribution and are always available. There should be no reason at all not to use them. If you're using the system command, there is probably a standard Perl module that you should be using instead.
Browse through the list of Perl modules on the Perldoc webpage. Just make sure you select the version of Perl you're using.
To prevent the prompting in the first place, use -f or --interactive=never where implemented. See man rm.
I'm very new to Perl, and I would like to make a program that creates a directory and moves a file into that directory using the Unix command like:
mkdir test
Which I know would make a directory called "test". From there I would like to give more options like:
mv *.jpg test
That would move all .jpg files into my new directory.
So far I have this:
#!/usr/bin/perl
print "Folder Name:";
$fileName = <STDIN>;
chomp($fileName);
$result=`mkdir $fileName`;
print"Your folder was created \n";
Can anyone help me out with this?
Try doing this :
#!/usr/bin/perl
use strict; use warnings;
print "Folder Name:";
my $dirName = <STDIN>;
chomp($dirName);
mkdir($dirName) && print "Your folder was created \n";
rename $_, "$dirName/$_" for <*.jpg>;
You will have better control using Perl's built-in functions than shelling out to Unix commands; that's the point of my snippet.
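For instance, the built-ins report failures through $!, so each step can be checked explicitly; a sketch along the same lines:
#!/usr/bin/perl
use strict;
use warnings;

print "Folder Name: ";
chomp(my $dirName = <STDIN>);

mkdir $dirName or die "Cannot create '$dirName': $!\n";
print "Your folder was created\n";

# move every .jpg in the current directory into the new folder
for my $jpg (<*.jpg>) {
    rename $jpg, "$dirName/$jpg" or warn "Cannot move '$jpg': $!\n";
}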
Most (if not all) Unix commands have a corresponding Perl built-in function, e.g.:
mkdir - see perldoc -f mkdir
mv - see perldoc -f rename
Etc. Either get a printout of the various manual pages, or take a trip down to the book shop (the O'Reilly Nutshell book is quite good, along with others).
In Perl you can run shell commands in backticks. However, what happens when the directory isn't created by the mkdir command? Your program doesn't get notified of this and continues on its merry way, thinking that everything is fine.
You should use the built-in Perl functions that do the same thing.
http://perldoc.perl.org/functions/mkdir.html
http://perldoc.perl.org/functions/rename.html
It is much easier to trap errors with those functions and fail gracefully. In addition, they run faster because you don't have to fork a new process for each command you run.
Perl has some functions similar to those of the shell. You can just use
mkdir $filename;
You can use backquotes to run a shell command, but it is only useful if the command writes something to its standard output, which mkdir does not. For commands without output, use system:
0 == system "mv *.jpg $folder" or die "Cannot move: $?";
I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.
It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1], but the basic point stands: Each source file is read from disk in its entirety before any code in that file is compiled or executed and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run-time
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.
For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
sleep 1;
print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;
push @ARGV, $0;   # put this script's own file on the list read by <ARGV>
$^I = '';         # turn on in-place editing for files read via <ARGV>
my $foo = 42;
my $bar = 56;
my %switch = (
foo => 'bar',
bar => 'foo',
);
while (<ARGV>) {
s/my \$(foo|bar)/my \$$switch{$1}/;
print;
}
print "\$foo: $foo, \$bar: $bar\n";
and watch the result when it is run multiple times: each run rewrites the script's own source through $^I in-place editing, swapping the $foo and $bar declarations, so the two printed values trade places from one run to the next.
The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.
As the others said, the script is read into memory, compiled, and run. GBacon shows that you can delete the file and it will keep running. The code below shows that you can change the file, load the new version with do, and get the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;
open my $ph, '>', $PROGRAM_NAME;
print $ph q[print "!!!!!!\n";];
close $ph;
do $PROGRAM_NAME;
... DON'T DO THIS!!!
Perl scripts are simple text files that are read into memory, compiled in memory, and the text file script is not read again. (Exceptions are modules that come into lexical scope after compilation and do and eval statements in some cases...)
There is a well-known utility that exploits this behavior. Look at CPAN and its many versions, which are probably in your /usr/bin directory. There is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y" it will download the newer version and respawn itself right where you left off, without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the individualized versions related to what $Config::Config{version} would generate on your system.
Cheers.
Is it possible to run a Perl script (vas.pl) with shell scripts inside it (date.sh & backlog.sh) from cron, or vice versa?
Thanks.
0 19 * * * /opt/perl/bin/perl /reports/daily/scripts/vas_rpt/vasCIO.pl 2> /reports/daily/scripts/vas_rpt/vasCIO.err
Error encountered:
date.sh: not found
backlog.sh: not found
Perl script:
#!/opt/perl/bin/perl
system("sh date.sh");
open(FH,"/reports/daily/scripts/vas_rpt/date.txt");
@date = <FH>;
close FH;
open(FH,"/reports/daily/scripts/vas_rpt/$cat1.txt");
@array = <FH>;
system("sh backlog.sh $date[0] $array[0]");
close FH;
cron runs your perl script in a different working directory than your current working directory. Use the full path of your script file:
# I'm assuming your shell script reside in the same
# dir as your perl script:
system("sh /reports/daily/scripts/date.sh");
Or, if you're allergic to hardcoding paths like I am, you can use the FindBin module from CPAN:
use FindBin qw($Bin);
system("sh $Bin/date.sh");
If your shell script also needs to start in the correct path then it's probably better to first change your working directory:
use FindBin qw($Bin);
chdir $Bin;
system("sh date.sh");
You can do what you want as long as you are careful.
The first thing to remember with cron jobs is that you get almost no environment set.
The chances are, the current directory is / or perhaps $HOME. And the value of $PATH is minimal - your profile has not been run, for example.
So, your script didn't find 'date.sh' because it wasn't in the correct directory.
To get the data from the shell script into your program, you need to pipe it there - or arrange for the 'date.sh' to dump the data into the file successfully. Of course, Perl has built-in date and time handling, so you don't need to use the shell for it.
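For example, whatever date stamp date.sh produces can likely be built directly in Perl; the format string here is an assumption:
use POSIX qw(strftime);

# e.g. "2013-05-21"; adjust the format to match what date.sh emitted
my $date = strftime('%Y-%m-%d', localtime);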
You also did not run with use warnings; or use strict; which would also help you. For example, $cat1 is not a defined variable.
Personally, I run a simple shell script from cron and let it deal with all the complexities; I don't use I/O redirection in the crontab file. That's partly a legacy of working on ancient systems - but it also leads to portable and reliable running of cron jobs.
It's possible. Just keep in mind that your working directory when running under cron may not be what you think it is; it's the value in your HOME environment variable, or the one specified in the /etc/passwd file. Consider fully qualifying the paths to your .sh scripts.
There are a lot of things that need care in your script, and I talk about most of them in the "Secure Programming Techniques" chapter of Mastering Perl. You can also find some of it in perlsec.
Since you are taking external data and passing them to other external programs, you should use taint checking to ensure that the data are what you expect. What if someone were able to sneak something extra into those files?
When you want to pass data to external programs, use system in the list form so the shell doesn't get a chance to interpret possible meta-characters.
Instead of relying on the PATH to find the programs that you expect to run, specify their full paths explicitly to ensure you are at least running the file you think you are (and not something someone snuck into a directory that is earlier in PATH). If you were really paranoid (like taint checking is), you might also check that those files and directories had suitable permissions (e.g., not world-writeable).
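Putting those points together, a hedged sketch (the argument values are hypothetical placeholders; the script path comes from the question):
#!/opt/perl/bin/perl -T
use strict;
use warnings;

# taint mode makes you set a known-safe PATH yourself
$ENV{PATH} = '/bin:/usr/bin';

# stand-ins for $date[0] and $array[0], validated and untainted beforehand
my @args = ('20130521', 'backlog1');

# list form: no shell involved, so metacharacters in @args are never interpreted
system('/reports/daily/scripts/vas_rpt/backlog.sh', @args) == 0
    or die "backlog.sh failed: $?";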
Just as a bonus note, if you only want one line from a filehandle, you can use the line-input operator in scalar context:
my $date = <$fh>;
You probably want to chomp the data too to get rid of possible ending newlines. Even if you don't think a terminating newline should be there because another program created the file, someone looking at the file with a text editor might add it.
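Applied to the question's date file, that looks like:
open my $fh, '<', '/reports/daily/scripts/vas_rpt/date.txt'
    or die "Cannot open date.txt: $!";
chomp(my $date = <$fh>);   # scalar context: first line only, newline removed
close $fh;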
Good luck, :)
In Perl, how can I create a subdirectory and, at the same time, create parent directories if they do not exist? Like UNIX's mkdir -p command?
use File::Path qw(make_path);
make_path("path/to/sub/directory");
The deprecated mkpath and preferred make_path stemmed from a discussion on the Perl 5 Porters mailing list.
In a nutshell, Perl 5.10 testing turned up awkwardness in the argument parsing of the mkpath() interface, so it was replaced with a simpler version that takes a hash reference as an optional final argument to set options for the function.
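For example, make_path accepts documented options such as verbose and mode in that trailing hash reference:
use File::Path qw(make_path);

make_path('path/to/sub/directory', {
    verbose => 1,      # print the name of each directory as it is created
    mode    => 0755,   # permissions for any directories that get created
});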
Use mkpath from the File::Path module:
use File::Path qw(mkpath);
mkpath("path/to/sub/directory");
Kindly ignore this if you are looking for a Perl module with 'mkdir -p' functionality, but the following code works:
my $dir = '/root/example/dir';
system ("mkdir -p $dir 2> /dev/null") == 0
or die "failed to create $dir. exiting...\n";
You can use a module, but then you have to install it on each server you port your code to. I usually prefer the system function for a job like mkdir, because importing and calling a module is extra overhead when I only need it once, to create a directory.
ref http://perldoc.perl.org/File/Path.html
"The make_path function creates the given directories if they don't exists [sic!] before, much like the Unix command mkdir -p"
mkdir() allows you to create directories in your Perl script.
Example:
use strict;
use warnings;
my $directory = "tmp";
unless(mkdir($directory, 0755)) {
die "Unable to create $directory\n";
}
This program creates a directory called "tmp" with permissions set to 0755 (only the owner has permission to write to the directory; group members and others can only view files and list the directory contents).
Do you think changing directories inside bash or Perl scripts is acceptable? Or should one avoid doing this at all costs?
What is the best practice for this issue?
Like Hugo said, you can't affect your parent process's cwd, so there's no problem.
Where the question is more applicable is if you don't control the whole process, like in a subroutine or module. In those cases you want to exit the subroutine in the same directory as you entered, otherwise subtle action-at-a-distance creeps in which causes bugs.
You can do this by hand...
use Cwd;
sub foo {
my $orig_cwd = cwd;
chdir "some/dir";
...do some work...
chdir $orig_cwd;
}
but that has problems. If the subroutine returns early or dies (and the exception is trapped) your code will still be in some/dir. Also, the chdirs might fail and you have to remember to check each use. Bleh.
Fortunately, there's a couple modules to make this easier. File::pushd is one, but I prefer File::chdir.
use File::chdir;
sub foo {
local $CWD = 'some/dir';
...do some work...
}
File::chdir turns changing directories into assigning to $CWD. And you can localize $CWD so it will reset at the end of your scope, no matter what. It also automatically checks whether the chdir succeeds and throws an exception otherwise. Sometimes I use it in scripts just because it's so convenient.
The current working directory is local to the executing shell, so you can't affect the user unless he is "dotting" (running it in the current shell, as opposed to running it normally creating a new shell process) your script.
A very good way of doing this is to use subshells, which I often do in aliases.
alias build-product1='(cd $working_copy/delivery; mvn package;)'
The parentheses make sure that the command is executed in a subshell and thus will not affect the working directory of my shell. It also does not affect the last working directory, so cd - works as expected.
I don't do this often, but sometimes it can save quite a bit of headache. Just be sure that if you change directories, you always change back to the directory you started from. Otherwise, changing code paths could leave the application somewhere it should not be.
For Perl, you have the File::pushd module from CPAN which makes locally changing the working directory quite elegant. Quoting the synopsis:
use File::pushd;
chdir $ENV{HOME};
# change directory again for a limited scope
{
my $dir = pushd( '/tmp' );
# working directory changed to /tmp
}
# working directory has reverted to $ENV{HOME}
# tempd() is equivalent to pushd( File::Temp::tempdir )
{
my $dir = tempd();
}
# object stringifies naturally as an absolute path
{
my $dir = pushd( '/tmp' );
my $filename = File::Spec->catfile( $dir, "somefile.txt" );
# gives /tmp/somefile.txt
}
I'll second Schwern's and Hugo's comments above. Note Schwern's caution about returning to the original directory in the event of an unexpected exit. He provided appropriate Perl code to handle that. I'll point out the shell (Bash, Korn, Bourne) trap command.
trap "cd $saved_dir" 0
will return to saved_dir on subshell exit (if you're .'ing the file).
mike
Consider also that the Unix and Windows shells have a built-in directory stack: pushd and popd. They're extremely easy to use.
Is it at all feasible to use fully-qualified paths, and not make any assumptions about which directory you're currently in? e.g.
use FileHandle;
use FindBin qw($Bin);
# ...
my $file = new FileHandle("< $Bin/somefile");
rather than
use FileHandle;
# ...
my $file = new FileHandle("< somefile");
This will probably be easier in the long run, as you don't have to worry about weird things happening (your script dying or being killed before it could put the current working directory back to where it was), and is quite possibly more portable.