Why does the ash shift command cause the whole script to exit?

I have a script test.sh:
OUTPUT_FILE="$1"; shift || { echo "require arg1: output file path"; exit 1; }
But when I execute ./test.sh without arguments, it does not output "require arg1: output file path". Instead, the output is "shift: nothing to shift".
Can anyone tell me why?
My ash environment: BusyBox ash on Android 2.2~4.4

Even though I can't reproduce your problem, the cause seems clear:
when not enough parameters are present, your shift does not report the error through its exit status (which would let the || branch run and the script continue), but instead prints an error message of its own and terminates further script execution.
Rather than relying on shift to produce that exit status - which yours doesn't seem to - you can test the command line for parameters left to shift:
echo "${##}"
shows the length of the remaining (not yet shifted) command line. If it's 0, you want your warning message. Note, however, that the shells disagree on ${##}: bash produces the number of command-line parameters, while busybox ash counts the number of characters remaining on the command line.
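Because of that divergence, a safer variant is to test $# directly - in every POSIX shell, $# holds the number of remaining positional parameters. A minimal sketch of the original check rewritten that way:
# $# is 0 when no arguments remain to be shifted
[ "$#" -eq 0 ] && { echo "require arg1: output file path"; exit 1; }
OUTPUT_FILE="$1"; shift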
Alternatively, test $1 for emptiness:
[ -z "$1" ] && echo "no args"

Related

Run command in PowerShell and ignore exit code in one line

I'm trying to execute a command in PowerShell and ignore any non-zero exit code. Unfortunately, I completely fail doing this :-(
Under Linux this is done with this trivial line:
command arg1 arg2 || echo "ignore failure"
The || clause is executed only in case of a failure, and the exit code of echo then resets $?.
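You can verify that reset with a trivial POSIX-shell example (false stands in here for any failing command):
false || echo "ignore failure"   # echo runs because false failed
echo $?                          # prints 0: echo succeeded and reset $?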
I thought something like this would do the trick:
Invoke-Expression "command arg1 arg2" -ErrorAction Ignore
But $LASTEXITCODE is still set to a non-zero value.
PowerShell v7+'s pipeline-chain operators, && and ||, implicitly act on $LASTEXITCODE, but never reset it.
If you do want to reset it - which is generally not necessary - you can do the following:
command arg1 arg2 || & { "ignore failure"; $global:LASTEXITCODE = 0 }
Note that PowerShell scripts - unlike scripts for POSIX-compatible shells such as bash - do not implicitly exit with the exit code of the most recently executed command; instead, you must use exit $n explicitly, where $n is the desired exit code.
In the context of calling the PowerShell CLI from the outside, the above applies to using the -File parameter to call a script; for use with the -Command (-c) parameter, see the next section.
As for what you tried:
|| and && don't work in Windows PowerShell (versions up to v5.1) at all.
Invoke-Expression doesn't help here, and due to its inherent security risks it should generally be avoided and used only as a last resort, given that superior alternatives are usually available. If there truly is no alternative, only ever use it on input you either provided yourself or fully trust - see this answer.
If you're using the Windows PowerShell CLI with -Command (-c), and you need to make sure that the PowerShell process exits with exit code 0, do something like the following (... represents your command):
powershell.exe -noprofile -c "...; exit 0"
If you want to comment on the failure:
powershell.exe -noprofile -c "...; if ($LASTEXITCODE) { 'ignore failure' }; exit 0"
Note: In this case, ; exit 0 isn't strictly necessary: the if statement succeeds irrespective of the value of $LASTEXITCODE, and that alone is enough to make the exit code 0.
Also, note that the PowerShell CLI sends all of PowerShell's output streams - including the error stream - to stdout by default, though you can selectively redirect the error stream on demand with 2>.
This also applies to the PowerShell [Core] v7+ CLI, whose executable name is pwsh, and whose parameters are a superset of the Windows PowerShell CLI's.
For more information on PowerShell with respect to process exit codes, see this answer.

Equivalent of bash "set -o errexit" for windows cmd.exe batch file?

What's the Windows batch file equivalent of set -o errexit in a bash script?
I have a long batch file filled with different programs to run on the Windows command line... basically it's an unrolled makefile, with every compiler command that needs to be run to build an exe in one long sequence of commands.
The problem with this method is that I want the batch file to exit on the first non-zero return code generated by a command in the script.
As far as I know, Windows batch files don't automatically exit on the first error, short of adding a lot of repetitive boilerplate code between each command to check for a non-zero return code and exit the script.
What I'm wondering is: is there an option similar to bash's set -o errexit for Windows cmd.exe? Or perhaps a technique that eliminates the boilerplate error checking - you set it up once, and the script automatically exits on the first non-zero return code without a bunch of junk added to it.
(I would accept a PowerShell option as well, except PowerShell isn't very nice with old-unix-style command flags like -dontbreak -y, breaking those commands unless you add junk like quotes or escape characters... not really something I want to mess around with either.)
CMD/Batch
As Ken mentioned in the comments, CMD does not have an equivalent to the bash option -e (or the equivalent -o errexit). You'd have to check the exit status of each command, which is stored in the variable %errorlevel% (equivalent to $? in bash). Something like
if %errorlevel% neq 0 exit /b %errorlevel%
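For comparison, here is the bash/sh behavior being emulated (a minimal sketch; build_step_1 and build_step_2 are hypothetical stand-ins for your compiler commands):
#!/bin/sh
set -e          # exit immediately if any command returns non-zero
build_step_1
build_step_2    # never reached if build_step_1 fails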
PowerShell
PowerShell already automatically terminates script execution on errors in most cases. However, there are two error classes in PowerShell: terminating and non-terminating. The latter just displays an error without terminating script execution. The behavior can be controlled via the variable $ErrorActionPreference:
$ErrorActionPreference = 'Stop': terminate on all errors (terminating and non-terminating)
$ErrorActionPreference = 'Continue' (default): terminate on terminating errors, continue on non-terminating errors
$ErrorActionPreference = 'SilentlyContinue': don't terminate on any error
PowerShell also allows more fine-grained error handling via try/catch statements:
try {
    # run command here
} catch [System.SomeException] {
    # handle exception of a specific type
} catch [System.OtherException] {
    # handle exception of a different type
} catch {
    # handle all other exceptions
} finally {
    # cleanup statements that are run regardless of whether or not
    # an exception was thrown
}

perl system() exit code 36096

I am trying to find the meaning of exit code 36096 from the following system() call to a ksh script.
$proc_ret = system("/path/to/shellscript.sh");
$proc_ret returns "36096"
I have checked the output of shellscript.sh. It runs fine until another shell script (status.sh) invoked inside shellscript.sh is reached;
only the first line of that script runs, and the rest never gets invoked.
Here is the content of status.sh:
echo "a" > /tmp/a
echo "complete."
echo "b" >> /tmp/a
cat /path/to/mail.txt | mail -s "subject" email#domain
echo "mail complete."
echo "c" >> /tmp/a
I don't know why the script did not continue after the first line, and the exit code of the system call made to shellscript.sh looks strange to me.
If anyone knows the meaning of 36096, please let me know.
Note that 36096 is 141 * 256. As the system docs tell you, 141 is the exit status of the program. Note also that an exit status >128 from a shell usually means the child process was killed by a signal; the signal number is obtained by subtracting 128 from the exit status (i.e. look at the low 7 bits).
So the script got signal 13, which is SIGPIPE - write on a pipe with no reader.
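As a quick sanity check, the arithmetic (valid in any POSIX shell):
echo $(( 36096 >> 8 ))           # 141: the exit status of the script
echo $(( (36096 >> 8) - 128 ))   # 13:  the signal number, i.e. SIGPIPE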
It looks as if the mail program could not be started (is the PATH right? Cron jobs usually have a very minimal PATH, and you need to set it in your script with something like PATH=$(getconf PATH)).
Then cat pipes into a non-existing reader, and voila, there's your signal.
BTW, that's a useless use of cat, since mail -s subj recipient < /path/to/mail.txt would avoid an expensive fork and pipe.

Returning an exit code from a shell script that was called from inside a perl script

I have a perl script (verifyCopy.pl) that uses system() to call a shell script (intercp.sh).
From inside the shell script, I have set up several exits with specific exit codes, and I'd like to be able to do different things based on which exit code is returned.
I've tried using $?, and I have tried assigning the value of system("./intercp.sh") to a variable and then checking the value of that, but the value is always 0.
Is this because even though something inside the shell script fails, the actual script succeeds in running?
I tried adding a trap in the shell script (i.e. trap testexit EXIT and testexit() { exit 222; }), but that didn't work either.
$? should catch the exit code from your shell script.
$ cat /tmp/test.sh
#!/bin/sh
exit 2
$ perl -E 'system("/tmp/test.sh"); say $?'
512
Remember that $? is encoded in the traditional manner, so $? >> 8 gives the exit code, $? & 0x7F gives the signal, and $? & 0x80 is true if core was dumped. See perlvar for details.
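Applied to the 512 from the example above (decoded here with POSIX shell arithmetic for illustration; the same expressions work in Perl):
echo $(( 512 >> 8 ))    # 2: the exit code set by the script
echo $(( 512 & 127 ))   # 0: no signal
echo $(( 512 & 128 ))   # 0: no core dump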
Your problem may be one of several things: maybe your shell script isn't actually exiting with the exit code (maybe you want set -e); maybe you have a signal handler for SIGCHLD eating the exit code; etc. Try testing with the extremely simple shell script above to see if it's a problem in your perl script or your shell script.

Run a command as cron would but from the command line

I have a script that I'm trying to run from cron. When I run it from bash, it works just fine. However, when I let cron do its thing, I get:
myscript.sh: line 122: syntax error: unexpected end of file
What I want is a way to run a command as if it were a cron job, but from my shell.
As a side note: does anyone know what would be different under cron? (The script already has a #!/bin/sh line.)
To answer my own question: I added this to my crontab:
* * * * * for ((i=$(date +\%M); i==$(date +\%M) ;)) ; do find ~/.crontemp/ -name '*.run' -exec "{}" ";" ; sleep 1; done
and created this script:
#!/bin/sh
# create a uniquely named script plus a named pipe for its output
tmp=$(mktemp ~/.crontemp/cron.XXXXX)
mknod $tmp.pipe p
mv $tmp $tmp.pre
# write the requested command into the script, sending its output to the pipe
echo $* '>' $tmp.pipe '2>&1' >> $tmp.pre
echo rm $tmp.run >> $tmp.pre
chmod 700 $tmp.pre
# renaming to *.run lets the crontab loop find and execute it
mv $tmp.pre $tmp.run
# read the command's output back from the pipe, then clean up
cat $tmp.pipe
rm $tmp.pipe
With that, I can run an arbitrary command with a delay of not more than one second.
(And yes, I know there are all kinds of security issues involved in that.)
The actual problem turned out to be a fi vs. if typo. Doh!
When a script works interactively and fails in cron, it's almost always a PATH problem. The default PATH in a cron job process is much, much shorter than in an interactive session. The typical result is a "not found" error for some system utility you're trying to run that is not on the PATH in cron.
I would guess that some command you're trying to run is not on the path, therefore the file it was supposed to create is empty and the command that's trying to read that file is giving you this error message.
You may have a "%" in your crontab.
You must escape it (with "\"), or it is changed into a newline character.
There are a number of things it could be - output will be redirected elsewhere, environment variables will almost certainly be different, etc. Based on the information you've given, it could be related to a difference between bash and /bin/sh (which on some systems - including Debian/Ubuntu flavors of Linux - are different and support slightly different syntax). Cron will usually run the command you give it using /bin/sh.
Try running:
/bin/sh -c '<command>'
where <command> comes from your crontab. (Of course, if that command itself uses single quotes, you will need to modify it accordingly...)
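To get even closer to cron's real environment, a common trick is to clear the environment first with env -i (a sketch; the values shown mimic typical cron defaults, so adjust them to whatever your cron daemon actually sets):
# run the script with a cron-like minimal environment
env -i HOME="$HOME" LOGNAME="$LOGNAME" SHELL=/bin/sh PATH=/usr/bin:/bin \
    /bin/sh -c './myscript.sh'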