Capistrano run local command exit on failure - capistrano

I would like to run local commands and exit on the failure of any command. What's the best way to do this with Capistrano? run_locally will continue on failure.
Do I have to check the last command's exit status every time (or create a custom run-locally function)?

I had to create my own function like this:
task :build_backend do
  run_local("echo hello")
  run_local("abcdef")
  run_local("echo 'not run'")
end

def run_local(cmd)
  system cmd
  status = $?.exitstatus
  if status != 0
    puts "exit code: #{status}"
    exit status
  end
end

Generally, in a shell you can run multiple commands the way you want with command1 --some-argument foo && command2 && command3. The && operator stops the chain as soon as one command fails (returns a non-zero exit status).
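As a minimal sketch of that short-circuit behavior (the commands here are just illustrative stand-ins):

```shell
#!/bin/sh
# Each command runs only if the previous one succeeded (exit status 0).
# `false` always fails, so the third command never runs.
echo "step 1" && false && echo "step 3 never runs"
# The chain's exit status is that of the command that failed.
echo "chain exit status: $?"
```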

Related

AzCopy: How to know if the copy was success or not

I have a script in Ubuntu that copies one file each hour to the Storage Account. I am using azcopy filename.tar https://<storage>.blob.core.windows.net/<container>.
This script works, but I'd like to check whether the copy succeeded, for example:
validcopy = azcopy copy filename.tar https://<storage>.blob.core.windows.net/<container>
if(validcopy){
echo "Success"
} else {
echo "Failure"
}
Also, I tried using PowerShell on Linux (pwsh), but without success.
Please, can someone help me?
I found an alternative solution for this issue.
I used the exit code in the bash shell. Every Linux or Unix command executed by a shell script or user has an exit status, which is an integer. An exit status of 0 means the command succeeded without errors; a non-zero value (1-255) means it failed.
A special shell variable called $? holds the exit status of the previously executed command.
It was like this:
azcopy copy filename.tar https://<storage>.blob.core.windows.net/<container>
if [[ $? -gt 0 ]]
then
echo "Failure"
else
echo "Success"
fi
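Equivalently, since if tests a command's exit status directly, the explicit $? check can be folded into the if itself (azcopy here stands in for any command that exits non-zero on failure):

```shell
#!/bin/sh
# `if` runs the command and branches on its exit status,
# so no separate $? test is needed.
if azcopy copy filename.tar "https://<storage>.blob.core.windows.net/<container>"; then
    echo "Success"
else
    echo "Failure"
fi
```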

Equivalent of bash "set -o errexit" for windows cmd.exe batch file?

What's the Windows batch file equivalent of set -o errexit in a bash script?
I have a long batch file filled with different programs to run on the Windows command line... basically it's an unrolled makefile, with every compiler command that needs to be run to build an exe in one long sequence of commands.
The problem with this method is that I want the batch file to exit on the first non-zero return code generated by a command in the script.
As far as I know, Windows batch files have a problem where they don't automatically exit on the first error without adding a lot of repetitive boilerplate code between each command to check for a non-zero return code and to exit the script.
What I'm wondering about, is there an option similar to bash's set -o errexit for Windows cmd.exe? or perhaps a technique that works to eliminate too much boilerplate error checking code... like you set it up once and then it automatically exits if a command returns a non-zero return code without adding a bunch of junk to your script to do this for you.
(I would accept PowerShell option as well instead of cmd.exe, except PowerShell isn't very nice with old-unix-style command flags like: -dontbreak -y ... breaking those commands without adding junk to your command line like quotes or escape characters... not really something I want to mess around with either...)
CMD/Batch
As Ken mentioned in the comments, CMD does not have an equivalent to the bash option -e (or the equivalent -o errexit). You'd have to check the exit status of each command, which is stored in the variable %errorlevel% (equivalent to $? in bash). Something like
if %errorlevel% neq 0 exit /b %errorlevel%
PowerShell
PowerShell already automatically terminates script execution on errors in most cases. However, there are two error classes in PowerShell: terminating and non-terminating. The latter just displays an error without terminating script execution. The behavior can be controlled via the variable $ErrorActionPreference:
$ErrorActionPreference = 'Stop': terminate on all errors (terminating and non-terminating)
$ErrorActionPreference = 'Continue' (default): terminate on terminating errors, continue on non-terminating errors
$ErrorActionPreference = 'SilentlyContinue': don't terminate on any error
PowerShell also allows more fine-grained error handling via try/catch statements:
try {
# run command here
} catch [System.SomeException] {
# handle exception of a specific type
} catch [System.OtherException] {
# handle exception of a different type
} catch {
# handle all other exceptions
} finally {
# cleanup statements that are run regardless of whether or not
# an exception was thrown
}

How to create a sensu check

I have created a shell script. I want sensu to run that script on the selected node that are identified using the chef roles. I want to create a sensu check to monitor this particular check using the shell script.
Sensu can make use of any script that uses these exit statuses:
0 = OK
1 = Warning
2 = Critical
Write your shell script to run whichever tests you want, and exit with the correct value.
Next, configure your check to be called in a checks configuration file:
{
  "checks": {
    "<check_name>": {
      "command": "<path_to_script> <arguments>",
      ... other check definitions here...
    }
  }
}
Lastly, make sure that the check is implemented as a standalone or subscription check.
Sensu standalone check
Sensu subscription check
Hope that helps :)
Just to add to the other answer, you can use this as a template. It's a very simple check in bash, but it returns the correct exit codes for Sensu.
#!/bin/bash
CHECK="your check goes here"
# Replace this placeholder condition with your actual test.
if [ "$CHECK" = "something bad" ]; then
  echo "WARNING!"
  exit 1
else
  echo "OK!"
  exit 0
fi
# Fallback, only reached if the branches above stop exiting
echo "Unknown Error"
exit 3
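As a more concrete sketch (the df invocation and the 80%/90% thresholds are illustrative choices, not from the original answer), a disk-usage check following the same 0/1/2 convention might look like:

```shell
#!/bin/bash
# Hypothetical Sensu check: warn at 80% root-disk usage, go critical at 90%.
WARN=80
CRIT=90

# Extract the usage percentage of the root filesystem from `df`.
USAGE=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')

if [ "$USAGE" -ge "$CRIT" ]; then
    echo "CRITICAL - disk usage at ${USAGE}%"
    exit 2
elif [ "$USAGE" -ge "$WARN" ]; then
    echo "WARNING - disk usage at ${USAGE}%"
    exit 1
else
    echo "OK - disk usage at ${USAGE}%"
    exit 0
fi
```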

Catching the error status while running scripts in parallel on Jenkins

I'm running two Perl scripts in parallel on Jenkins, plus one more script that should run only if the first two succeed. If script 1 fails, script 2 still runs, and the overall exit status ends up successful.
I want to run it in such a way that if either of the parallel scripts fails, the job stops with a failure status.
Currently my setup looks like
perl_script_1 &
perl_script_2 &
wait
perl_script_3
If script 1 or 2 fails in the middle, the job should be terminated with a Failure status without executing job 3.
Note: I'm using tcsh shell in Jenkins.
I have a similar setup where I run several java processes (tests) in parallel and wait for them to finish. If any fail, I fail the rest of my script.
Each test process writes its result to a file to be tested once done.
Note - the code examples below are written in bash, but it should be similar in tcsh.
To do this, I get the process id for every execution:
test1 &
test1_pid=$!
# test1 will write pass or fail to file test1_result
test2 &
test2_pid=$!
...
Now, I wait for the processes to finish using the kill -0 PID command, which succeeds only while the process is still alive.
For example test1:
# Check test1
kill -0 $test1_pid
# Check if process is done or not
if [ $? -ne 0 ]
then
    echo process test1 finished
    # check results
    grep fail test1_result
    if [ $? -eq 0 ]
    then
        echo test1 failed
        mark_whole_build_failed
    fi
fi
Same for other tests (you can do a loop to test all running processes periodically).
Later condition the rest of the execution based on mark_whole_build_failed.
I hope this helps.

Returning an exit code from a shell script that was called from inside a perl script

I have a Perl script (verifyCopy.pl) that uses system() to call a shell script (intercp.sh).
From inside the shell script, I have set up several exits with specific exit codes, and I'd like to be able to do different things based on which exit code is returned.
I've tried using $?, and I have tried assigning the value of system("./intercp.sh") to a variable and checking that, but the value is always 0.
Is this because even though something inside the shell script fails, the script itself still succeeds in running?
I tried adding a trap in the shell script (i.e. trap testexit EXIT and testexit() { exit 222; }), but that didn't work either.
$? should catch the exit code from your shell script.
$ cat /tmp/test.sh
#!/bin/sh
exit 2
$ perl -E 'system("/tmp/test.sh"); say $?'
512
Remember that $? is encoded in the traditional manner, so $? >> 8 gives the exit code, $? & 0x7F gives the signal, and $? & 0x80 is true if core was dumped. See perlvar for details.
Your problem may be one of several things: maybe your shell script isn't actually exiting with the exit code (maybe you want set -e); maybe you have a signal handler for SIGCHLD eating the exit code; etc. Try testing with the extremely simple shell script above to see if it's a problem in your Perl script or your shell script.