Can sh itself check if a program exists or is in path?
I.e., not with the help of the "which" program.
I don't believe sh can directly. But perhaps something like:
which() {
  save_IFS=$IFS
  IFS=:
  for d in $PATH; do
    test -x "$d/$1" && echo "$d/$1"
  done
  IFS=$save_IFS
}
and here's a nice variation that uses a subshell so that restoring IFS is not necessary:
which() (
  IFS=:
  for d in $PATH; do
    test -x "$d/$1" && echo "$d/$1"
  done
)
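Either version is then used like the external which utility (the exact path printed will of course depend on your system):
$ which ls
/bin/ls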
Also, (in bash) if the command has been executed in the past and bash has already done the PATH search, you can see what it found with hash -t.
bash-3.2$ hash -t which
bash: hash: which: not found
bash-3.2$ which foo
bash-3.2$ hash -t which
/usr/bin/which
The utility command -v $CMD is a portable option (it is part of POSIX); see also the very similar (though bash-specific) question, in particular this answer.
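For instance, a minimal POSIX sh check built on command -v (the program name git here is just an example):
#!/bin/sh
# fail early if a required program is not found in PATH
if command -v git >/dev/null 2>&1; then
  echo "git found at: $(command -v git)"
else
  echo "git not found in PATH" >&2
  exit 1
fi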
Related
I am searching for a swipl feature similar to perl -e.
In particular, I want to run Prolog code in this fashion:
swipl --wanted-flag "fact(a). message:-writeln('hello')." -g "message" -t halt
This is possible to do with
swipl -f file -g "message" -t halt
where the Prolog clauses are written in file
I am running swipl on the server side, taking user input as Prolog clauses, so writing a file on the server is not a good idea.
One thing you can do is to use load_files/2 with the option stream, and load from standard input, not from an argument (you can still pass the entry point as an argument, I guess):
Say in a file fromstdin.pl you have:
main :-
    load_files(stdin, [stream(user_input)]),
    current_prolog_flag(argv, [Goal|_]),
    call(Goal),
    halt.
main :- halt(1).
and with this you can do:
$ echo 'message :- format("hello~n").' | swipl -q -t main fromstdin.pl -- message
|: hello
The comments by @false to this answer and the question will tell you what this |: is, if you are wondering, but if it annoys you, just do:
$ echo 'message :- format("hello~n").' \
| swipl -q -t main fromstdin.pl -- message \
| cat
hello
instead.
This will let you read any Prolog from standard input and call an arbitrary predicate from it. Whether this is a clever thing to do, I don't know. I would also not be surprised if there is a much easier way to achieve the same.
If I have a basic .sh file containing the following script code:
#!/bin/sh
rm -rf "MyFolder"
How do I make this script display output in the terminal indicating whether the directory removal was successful?
You don't really need to make it say it was successful. You could have it say something only on error ✖, and then silence means success ✔.
That's how the Unix philosophy works:
The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising, interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users' attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue.
Source: http://www.linfo.org/rule_of_silence.html
That's the way rm itself behaves.
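For instance (with GNU rm; the wording of the diagnostic may differ on other systems):
$ mkdir tmpdir
$ rm -r tmpdir
$ rm -r tmpdir
rm: cannot remove 'tmpdir': No such file or directory
The first removal succeeds and prints nothing; the second fails, prints a diagnostic, and exits non-zero.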
If you are asking about the general case, as suggested by your question's title, you can run your script with sh -x scriptname to see what it's doing. It's also quite common to write diagnostic output into the script itself, and control it with an option.
#!/bin/sh
verbose=false
case $1 in
  -v | --verbose )
    verbose=true
    shift ;;
esac
# assume the directory to remove is passed as the remaining first argument
dir=$1
say () {
  $verbose || return
  echo "$0: $*" >&2
}
say "Removing $dir ..."
rm -rf "$dir" || say "Failed."
If you run this script without any options, it will run silently, like a well-behaved Unix utility should. If you run it with the -v option, it will print some diagnostics to standard error.
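For example, assuming the script above is saved as remove.sh (a made-up name) and is given the directory as its argument:
$ ./remove.sh "My Folder"
$ ./remove.sh -v "My Folder"
./remove.sh: Removing My Folder ...
The first call is silent; the second prints the diagnostic to standard error.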
rm -rf "My Folder" && echo "Done" || echo "Error!"
You can read more about lists of commands and pipelines in the bash manual.
In bash (and other similar shells), the special parameter $? gives you the exit status of the last executed command. So you can do:
#!/bin/sh
rm -rf "My Folder"
echo $?
UPDATE
If, once the rm command has been executed, the directory doesn't exist (because it has been successfully removed or because it didn't exist in the first place), the script will print 0. If the directory still exists (which means the command was unable to remove it), the script will print an exit code other than 0. If I understand the question properly, this is exactly the requested behavior. If it is not, please correct me.
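For example, a small variation that reports the result in words instead of printing the raw exit code:
#!/bin/sh
if rm -rf "My Folder"; then
  echo "Removal succeeded"
else
  echo "Removal failed" >&2
fi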
The previous answers are wrong: with the -f flag, rm doesn't exit with an error code > 0 when the directory isn't present.
Instead, I recommend using:
dir='/path/to/dir'
if [[ -d $dir ]]; then
  rm -rf "$dir"
fi
If you want rm to return a status, remove the -f flag.
Example on Linux Mint (the dir doesn't exist):
$ rm -rf /tmp/sdfghjklm
$ echo $?
0
$ rm -r /tmp/sdfghjklm
$ echo $?
1
I know of system() and qx(), but I need to execute ~15 bash commands. E.g.
mkdir, chown, edquota -p user1 -u user2, cp -r, su - username, git, rm, ln -s
Question
Is there an efficient way to execute many Bash commands in Perl?
I don't care in this case about the output.
First, I'd use the equivalent Perl function for as many of those bash commands as I could, which is most of the ones you included in your post. Then, for the rest of them I'd either use system() or qx() or backticks or one of the IPC:: modules (such as IPC::Run or IPC::Open3).
Use bash syntax for many commands. Separate them with ; or && or whatever takes your fancy (man bash).
$ perl -E 'system qq{date; date}'
In Linux, I like POE framework's POE::Wheel::Run module for running system commands (and code blocks) asynchronously. You say you do not care about the output, but if you need it in the future POE::Wheel::Run has an elegant interface allowing us to interact with the process.
my $s = <<END;
echo "1"
echo "2"
echo "3"
END
system($s);
I am calling many Perl scripts in my Bash script (sometimes from csh also).
At the start of the Bash script I want to put a test which checks if all the Perl scripts are devoid of any compilation errors.
One way of doing this would be to actually call the Perl script from the Bash script and grep for "compilation error" in the piped log file, but this becomes messy as different Perl scripts are called at different points in the code, so I want to do this at the very start of the Bash script.
Is there a way to check if the Perl script has no compilation error?
Beware!!
Using the below command to check compilation errors in your Perl program can be dangerous.
$ perl -c yourperlprogram
Randal has written a very nice article on this topic which you should check out:
Sanity-checking your Perl code (Linux Magazine Column 91, Mar 2007)
Quoting from his article:
Probably the simplest thing we can tell is "is it valid?". For this, we invoke perl itself, passing the compile-only switch:
perl -c ourprogram
For this operation, perl compiles the program, but stops just short of the execution phase. This means that every part of the program text is translated into the internal data structure that represents the working program, but we haven't actually executed any code. If there are any syntax errors, we're informed, and the compilation aborts.
Actually, that's a bit of a lie. Thanks to BEGIN blocks (including their layered-on cousin, the use directive), some Perl code may have been executed during this theoretically safe "syntax check". For example, if your code contains:
BEGIN { warn "Hello, world!\n" }
then you will see that message, even during perl -c! This is somewhat surprising to people who consider "compile only" to mean "executes no code". Consider the code that contains:
BEGIN { system "rm", "-rf", "/" }
and you'll see the problem with that argument. Oops.
Apart from perl -c program.pl, you can also enable warnings during the check by combining the two switches:
perl -wc program.pl
For details see: http://www.perl.com/pub/2004/08/09/commandline.html
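Applied to the original question, a minimal sketch of such a check at the top of the bash script could look like this (the scripts/*.pl glob is just an assumed layout):
#!/bin/bash
# abort before doing any real work if one of the Perl scripts fails to compile
for script in scripts/*.pl; do
  if ! perl -wc "$script"; then
    echo "aborting: $script has compilation errors" >&2
    exit 1
  fi
done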
I use the following part of a bash function for larger perl projects:
# foreach perl app in the src/perl dir
while read -r dir ; do
  echo -e "\n"
  echo "start compiling $dir ..."
  cd "$product_instance_dir/src/perl/$dir"
  # run the autoloader utility
  find . -name '*.pm' -exec perl -MAutoSplit -e 'autosplit($ARGV[0], $ARGV[1], 0, 1, 1)' {} \;
  # foreach perl file check the syntax by setting the correct INC dirs
  ret=0
  while read -r file ; do
    perl -MCarp::Always -I "$(pwd)" -I "$(pwd)/lib" -wc "$file"
    ret=$?
    # optionally run perltidy inline:
    # perltidy -b "$file"
    # sleep 3
    test $ret -ne 0 && break 2
  done < <(find "." -type f \( -name "*.pl" -or -name "*.pm" \))
  test $ret -ne 0 && break
  echo "stop compiling $dir ..."
  echo -e "\n\n"
  cd "$product_instance_dir"
done < <(ls -1 "src/perl")
When you need to check errors/warnings before running, but your file depends on multiple other files, you can add the -I option:
perl -I /path/to/dependency/lib -c /path/to/file/to/check
Edit: from man perlrun
Directories specified by -I are prepended to the search path for modules (@INC).
I'm trying to run a perl script from within a bash script (I'll change this design later on, but for now, bear with me). The bash script receives the argument that it will run. The argument to the script is as follows:
test.sh "myscript.pl -g \"Some Example\" -n 1 -p 45"
Within the bash script, I simply run the argument that was passed:
#!/bin/sh
$1
However, in my perl script the -g argument only gets "Some (including the quote character) instead of Some Example. Even if I quote it, it gets cut off at the whitespace.
I tried escaping the whitespace, but it doesn't work... any ideas?
To run it as posted test.sh "myscript.pl -g \"Some Example\" -n 1 -p 45" do this:
#!/bin/bash
eval "$1"
This causes the $1 argument to be parsed by the shell so the individual words will be broken up and the quotes removed.
Or if you want you could remove the quotes and run test.sh myscript.pl -g "Some Example" -n 1 -p 45 if you changed your script to:
#!/bin/bash
"$#"
The "$#" gets replaced by all the arguments $1, $2, etc., as many as were passed in on the command line.
Quoting is normally handled by the parser, which doesn't see the quotes when you substitute the value of $1 in your script.
You may have more luck with:
#!/bin/sh
eval "$1"
which gives:
$ sh test.sh 'perl -le "for (#ARGV) { print; }" "hello world" bye'
hello world
bye
Note that simply forcing the shell to interpret the quoting with "$1" won't work because then it tries to treat the first argument (i.e., the entire command) as the name of the command to be executed. You need the pass through eval to get proper quoting and then re-parsing of the command.
This approach is (obviously?) dangerous and fraught with security risks.
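A harmless illustration: with eval, everything in the argument is executed, not just the command you expected (the echo commands are made up for the example):
$ sh test.sh 'echo expected command; echo extra command sneaks in too'
expected command
extra command sneaks in too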
I would suggest you name the perl script in a separate word, then you can quote the parameters when referring to them, and still easily extract the script name without needing the shell to split the words, which is the fundamental problem you have.
test.sh myscript.pl "-g \"Some Example\" -n 1 -p 45"
and then
#!/bin/sh
$1 "$2"
If you really have to do this (for whatever reason), why not just do:
sh test.sh "'Some Example' -n 1 -p 45"
in:
test.sh
RUN=myscript.pl
echo `$RUN $1`