Perl - Directory Management on Different Operating Systems

I am new to Perl. I am using the following command to remove a folder in Perl, under Windows:
system "del trash_folder";
But I intend to run the same script under Unix as well. I could get the OS name in the code and run a different command based on the OS. But is there a better way in Perl? I am thinking of something like an API that is OS-agnostic.

The del command is never going to create a new directory, is it? :-)
If you want to create directories, use Perl's built-in mkdir function.
If you want to remove directories, use the built-in rmdir function (note that it only removes empty directories).
Update: In general, if you have a choice between using a Perl built-in function or an external command, then the Perl function will be the better choice. Firstly, your code will be more portable and, secondly, opening a sub-shell to run the external command will slow your program down.
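For the question above, a minimal sketch of the portable approach might look like this (trash_folder is the name from the question; everything used here is a built-in or a core module):
use strict;
use warnings;
use File::Path qw(remove_tree);   # core module, no extra installation needed

my $dir = 'trash_folder';

mkdir $dir or warn "Could not create $dir: $!";   # portable built-in
rmdir $dir or warn "Could not remove $dir: $!";   # only removes empty directories

# For a directory that still contains files, remove the whole tree:
remove_tree($dir, { error => \my $errors });
warn "Problems removing $dir\n" if $errors && @$errors;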

Related

Unable to execute Perl script unless Perl is inserted before script name

Running Lubuntu
Beginner Perl programmer
Script is XXX.pl located at ~/projects/XXX/XXX.pl
First line is the shebang
#!/usr/bin/perl
Permission to run is set to Anyone.
In directory ~/projects/XXX, the command
~/projects/XXX$ perl XXX.pl
works as desired, but the command
~/projects/XXX$ XXX.pl
fails with XXX.pl: command not found.
What am I missing?
The two usual options to execute your Perl script are:
perl XXX.pl
or
./XXX.pl
Both ways assume that your current working directory contains the script XXX.pl; otherwise they won't work.
As already pointed out by jm666 in the comments, you usually cannot execute a program or script from your current working directory without prepending ./, primarily for security reasons. You may wonder why that is necessary.
Explanation:
Your shell uses the contents of an environment variable called $PATH to find out where external commands (non-builtin programs) are located in your filesystem. If you want to see what's in $PATH, just type the following in your shell:
echo $PATH
Now you can see that the $PATH variable does NOT contain your current working directory. The consequence is that your shell is not able to find the program XXX.pl. By prepending ./ you instruct the shell to execute the program which comes after.
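If you prefer to inspect it from Perl, the same information is available in %ENV; this one-liner (purely illustrative) prints one directory per line:
perl -e 'print join("\n", split /:/, $ENV{PATH}), "\n"'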
But there are two requirements if you want to execute your Perl script with ./script.pl:
1. The script has to be executable (check with ls -l).
2. The first line (the shebang line) has to be #!/path/to/your/perl, because your shell needs that information to find the perl interpreter that runs your script.
However, #1 and #2 are NOT required when you execute your script with
perl XXX.pl
because it invokes the perl interpreter directly with your script.
See the related questions on how to make Perl scripts executable on Linux and on making a script directly executable with chmod for some more details.
Can the script be found?
Is . in your path? If it's not, add it to your path, or use ./XXX.pl instead of XXX.pl.
Can the script be executed?
Do you have execute permission on the file? Fix it with chmod u+x XXX.pl.
Is the interpreter correct?
which perl will tell you which interpreter is used when you use perl XXX.pl. That's the path that should be on your shebang (#!) line.
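You can also cross-check this from Perl itself: the special variable $^X holds the path of the interpreter that is currently running your code (shown here purely as an illustration):
perl -e 'print "$^X\n"'
If that output differs from the path on your shebang line, the two invocations are using different Perl installations.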

How do I get Perl to run an alias I've defined in BASH?

I have a script that opens up different files at the same time.
My problem is that when I run system() on the alias I've defined in bash, which points to /home/user1/Software/nc, Perl tells me that it can't execute the alias because there is no such file or directory.
I know that the alias works, because when I invoke it directly in a shell, it opens fine.
Funnily enough, I can do system("firefox") within my script fine, but not the alias. How do I use this alias in this script without it breaking?
Perl won't run bash; it will execute the command directly. You can call
bash -c your_command
instead of calling the command itself in Perl.
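From Perl, that might look like the following sketch (your_command stands for whatever the alias expands to):
system('bash', '-c', 'your_command');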
As it is, this doesn't load your aliases. You need to open an interactive shell, as in MortezaLSC's answer. There is supposedly a way of loading aliases correctly in a non-interactive shell, but I can't figure it out.
But why don't you just use the command you have aliased directly in Perl? The only reason I can see not to do this is if your alias is going to change in the future, but you will still want to run whatever command it points to. That seems weird and dangerous, to say the least.
Aliases are designed to reduce the typing you do when you invoke the same commands with the same options all the time. They're not all-purpose macros for bash. Bash has functions for doing more complicated stuff, but why would you want to call non-trivial bash code from a Perl script? You don't seem to really need that here. Keep the complexity, and the potential for modification and failure, in one place (the Perl script).
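For example, if the alias simply points at /home/user1/Software/nc (the path from the question), you can call that program directly from Perl and bypass bash and its aliases entirely. A minimal sketch, with file1.txt as a made-up placeholder argument:
my @cmd = ('/home/user1/Software/nc', 'file1.txt');   # file1.txt is only a placeholder
system(@cmd) == 0
    or warn "@cmd failed: $?";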
See a couple of answers to similar questions:
https://unix.stackexchange.com/a/1499/41977
https://superuser.com/a/183980/187150
If you're smart, you made it so your alias is only defined for interactive shells, so you'll have to launch bash and specify that you want an interactive shell using -i.
system('bash', '-i', '-c', 'shell command');
Is it working?
system 'bash -i -c "your alias parameter"';

Using Perl modules vs. using system() calls

Quite recently, I wrote a few scripts in Perl for a cPanel plugin in which, though most of the code was in Perl, there were quite a lot of system() calls as well, which I used to execute shell commands directly.
I am pretty sure there are Perl modules that I could have used instead. Given the time crunch, I thought using system commands was the easier way to complete the project in time. In retrospect, I think that was bad programming practice.
My question is: is there any tradeoff, memory-wise or otherwise, between using Perl modules and using system() calls? For example, what would be the difference between using:
my $directory = "temp";
mkdir $directory;
and
system ("mkdir temp");
Also, if I am to use Perl modules, wouldn't that involve installing a whole lot of modules in the beginning?
The most obvious economy is that, in the first case, your Perl process creates the directory itself, while in the second, Perl starts a new process that runs a command shell, which parses the command line and runs its mkdir command to create the directory, after which the child process is thrown away. You would be creating and destroying a process and running the shell for every call to system: there is no caching of processes or similar economy.
The second thing that comes to mind is that, if your original mkdir fails, it is simple to handle the error in Perl, whereas shelling out to run a mkdir command puts your program at a distance from the error, and it is far more awkward to handle the many different problems that may arise.
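To make that concrete (a sketch only, reusing the temp directory from the question):
my $directory = "temp";

# Built-in: the reason for a failure is right there in $!
mkdir $directory
    or die "Could not create $directory: $!";

# External command: all you get back is an exit status
system("mkdir", $directory);
die "mkdir exited with status ", $? >> 8 if $? != 0;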
There is also the question of maintainability and portability, which will affect you even if you aren't expecting to run your program on more than one machine. Once you abandon control to a system command you have no control over what happens. I could have written a mkdir that will delete your home directory or, less disastrously, your program may find itself on a system where mkdir doesn't exist, or does something slightly different.
In the particular case of mkdir, this is a built-in Perl operator and is part of every Perl installation. There are also many core libraries that require you to put use Module in your program, but are already installed and need no further action.
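File::Copy is one example of such a core module (the file names below are made up for the illustration); it replaces shelling out to cp or mv:
use File::Copy qw(copy move);   # core module, nothing extra to install

copy('report.txt', 'report.bak')
    or die "Copy failed: $!";
move('report.bak', 'archive/report.bak')
    or die "Move failed: $!";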
I am sure others will come up with more reasons to prefer a Perl operator or module over a shell command. In general, you should prefer to keep everything you can within the language. There are only a few cases where you have to run a third-party program, and they usually involve custom software that lets you act on proprietary data formats.

Series of Perl Scripts. BASH, BATCH, Shell?

I have a series of Perl scripts that I want to run one after another on a Unix system. What type of file would this be, and how should I refer to it in documentation? A Bash script, a batch file, a shell script?
Any help would be appreciated.
Simply put the commands you would use to run them manually in a file (say, perlScripts.sh):
#!/bin/sh
perl script1.pl
perl script2.pl
perl script3.pl
Then from the command line:
$ sh perlScripts.sh
Consider using Perl itself to run all of the scripts. If the scripts don't take command line arguments, you can simply use:
do 'script1.pl';
do 'script2.pl';
etc.
do 'file_name' basically copies the file's code into the current script and executes it. It gives each file its own scope, however, so variables won't clash.
This approach is more efficient, because it starts only one instance of the Perl interpreter. It will also avoid repeated loading of modules.
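One caveat (a sketch, with made-up script names): do does not stop the calling script when something goes wrong, so check for errors after each call:
for my $script ('./script1.pl', './script2.pl', './script3.pl') {
    my $result = do $script;   # the leading ./ matters on newer Perls, where . is no longer searched via @INC
    if (!defined $result) {
        die "Could not compile $script: $@" if $@;
        die "Could not read $script: $!"    if $!;
        # otherwise the script simply returned undef, which may be fine
    }
}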
If you do need to pass arguments or capture the output, you can still do it in a Perl file with backquotes or system:
my $output = `script3.pl file1.txt`;  # if the output is needed
system("script3.pl", "file1.txt");    # if the output is not needed
This is similar to using a shell script. However, it is cross-platform compatible. It means your scripts only rely on Perl being present, and no other external programs. And it allows you to easily add functionality to the calling script.
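If the calling script should stop as soon as one of the child scripts fails, check what system returns; the child's exit status sits in the high byte of $? (a sketch):
my $status = system("script3.pl", "file1.txt");
if ($status != 0) {
    die "script3.pl failed with exit status ", $? >> 8;
}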

Sourcing shell scripts in Perl

I want to source a shell script from within Perl and have the environment variables be available in Perl, but I'm not sure if there's an elegant way to do it. Obviously, using system() won't work since it runs in a forked process, and all environment changes will be lost. I think there's a CPAN module that can do it, but I prefer not to use external modules.
I've seen two solutions that would not work in my case:
Have a wrapper that calls the shell script, and then calls the Perl script. I do not know ahead of time which of my shell scripts I need to call.
Manually opening the shell script and scraping it for ARG=VALUE pairs. This won't work either, because the shell script is not a simple list of ARG=VALUE lines; it contains a bunch of conditionals, and variables can have different values depending on certain conditions.
sh -c "source script; env" should output the environment at the end of script as name=value pairs, which you then can parse from your perl script (as Perl is a language made for parsing, this should be easy).
You can also do this with the Shell::Source module from CPAN:
use Shell::Source;
my $env_path = Shell::Source->new(shell => "tcsh", file => "../path/to/file/temp.csh");
$env_path->inherit;
Because Perl runs as its own process (effectively a child of the shell that started it), it cannot set environment variables for the main shell: a child process can never modify its parent's environment.
But for as long as the Perl process runs, all the paths set in temp.csh will be available to it through Shell::Source.