I am trying to run a test case via automation testing (Sahi), so I am running the command for it repeatedly every hour (via crontab).
What I want is a way to receive an email only when my test case fails. Right now I am receiving mail whether it passes or fails.
In short, can I send mail to a person depending on the output I get in the terminal?
I want to send mail when the output is:
1 scenario (1 failed)
4 steps (3 skipped, 1 failed)
0m2.476s
Thanks.
How can you detect that the test is failing? If the command sets the process exit status, you could have something like:
if ! command ; then
    echo "Error" | mail -s "Error" address@example.com
fi
If you want to keep the output:
if ! command > results 2>&1 ; then
    cat results | mail -s "Error" address@example.com
fi
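If the test command does not set a useful exit status, another option is to key off the output text itself. Here is a minimal sketch along the same lines, assuming the run prints a line containing "failed" on failure (as in the sample output above); run_tests stands in for the actual Sahi command:

run_tests > results 2>&1    # run_tests is a placeholder for the real command
if grep -q "failed" results ; then
    mail -s "Test failed" address@example.com < results
fi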
I am trying to find the meaning of exit code 36096 from the following system() call to a ksh script.
$proc_ret = system("/path/to/shellscript.sh");
$proc_ret returns "36096"
I have checked the output of shellscript.sh. It runs fine until another shell script (status.sh) inside shellscript.sh is invoked.
Only the first line of that script runs; the rest of the script does not get executed.
Here is the content of status.sh:
echo "a" > /tmp/a
echo "complete."
echo "b" >> /tmp/a
cat /path/to/mail.txt | mail -s "subject" email@domain
echo "mail complete."
echo "c" >> /tmp/a
I don't know why the script did not continue after the first line. The exit code of the system call made to shellscript.sh looks strange to me.
If anyone knows the meaning of 36096, please let me know.
Note that 36096 is 141 * 256. As the system docs tell you, 141 is the exit status of the program. Note also that an exit status >128 from a shell usually means the child process was killed by a signal; that signal is obtained by subtracting 128 from the exit status (i.e. look at the low 7 bits).
So the script got signal 13, which is SIGPIPE - write on a pipe with no reader.
It looks as if the mail program could not be started (is the PATH right? Cron jobs usually have a very minimal PATH, so you may need to set it in your script with something like PATH=$(getconf PATH)).
Then cat pipes into a non-existing reader, and voila, there's your signal.
BTW, that's a useless use of cat, since mail -s subj recipient < /path/to/mail.txt would avoid an expensive fork and pipe.
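To make the arithmetic concrete, here is a small shell sketch that decodes the raw status from the question and shows the SIGPIPE diagnosis; the PATH fix suggested above is included as a comment:

status=36096
exit_code=$(( status >> 8 ))              # 141: the exit status of the child
if [ "$exit_code" -gt 128 ]; then
    echo "killed by signal $(( exit_code - 128 ))"    # 141 - 128 = 13 = SIGPIPE
fi

# In status.sh, set a sane PATH before calling mail, e.g.:
# PATH=$(getconf PATH)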
I'm running two Perl scripts in parallel on Jenkins, plus one more script that should only be executed if the first two succeed. If I get an error in script 1, script 2 still runs, and hence the overall exit status ends up successful.
I want to run them in such a way that if either of the parallel scripts fails, the job stops with a failure status.
Currently my setup looks like this:
perl_script_1 &
perl_script_2 &
wait
perl_script_3
If script 1 or 2 fails in the middle, the job should be terminated with a failure status without executing script 3.
Note: I'm using tcsh shell in Jenkins.
I have a similar setup where I run several java processes (tests) in parallel and wait for them to finish. If any fail, I fail the rest of my script.
Each test process writes its result to a file to be tested once done.
Note - the code examples below are written in bash, but it should be similar in tcsh.
To do this, I get the process id for every execution:
test1 &
test1_pid=$!
# test1 will write pass or fail to file test1_result
test2 &
test2_pid=$!
...
Now I wait for the processes to finish by using the kill -0 PID command.
For example, for test1:
# Check test1
kill -0 $test1_pid
# Check whether the process is done or not
if [ $? -ne 0 ]
then
    echo process test1 finished
    # check results
    grep fail test1_result
    if [ $? -eq 0 ]
    then
        echo test1 failed
        mark_whole_build_failed
    fi
fi
Do the same for the other tests (you can use a loop to check all running processes periodically).
Later, condition the rest of the execution on whether mark_whole_build_failed was called.
I hope this helps.
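An alternative sketch, again in bash (the question's tcsh setup would need adapting): wait can take a specific PID and returns that process's exit status, so the polling and result files can be skipped entirely:

perl_script_1 & pid1=$!
perl_script_2 & pid2=$!

wait $pid1 ; status1=$?
wait $pid2 ; status2=$?

# If either parallel script failed, fail the job before script 3 runs
if [ $status1 -ne 0 ] || [ $status2 -ne 0 ]; then
    exit 1
fi

perl_script_3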
I have a very small office environment, and my team sends created PDFs to an sFTP server daily.
Occasionally, I will get a call that someone can't log in to upload the files.
My normal course of action is to connect to the sFTP server myself and run a command like ls to determine whether it is responding.
I would like to be able to automate this with notification if there is a failure:
Login to the sFTP server (with credentials).
Run an LS command
Email if connection times out or login fails.
I have limited experience with writing batch files, but I can't seem to figure out a way to get only a 'failed' / no-response result to send an email.
Could anyone help with ideas? I'd like to run this as a VB or batch script in Scheduled Tasks, as I have a Server 2000 machine it could run on. I know batch has issues sending email, but I have another batch file that uses Blat.exe to send an email with passed variables, so I could use that if I could get batch to capture the failed responses.
You should be able to do this with a batch file.
Create a file called logon.ftp. This file contains the FTP logon script. Mine contains:
open Ftp_server
ftpuser
ftppassword
ls -l
quit
The testftp.bat file:
ftp.exe < logon.ftp | grep "Not connected" > nul && goto alert_someone
@echo Logon successful
goto exit
:alert_someone
@echo %date% %time% > alert.txt
@echo ftp_server appears to not be taking logins. >> alert.txt
blat alert.txt -to you -from ftp_watcher -subject "alert %date% %time% ftp_server not taking logins"
:exit
You'll need to get blat and grep so you can do the string checking. My WinXP ftp doesn't support errorlevels, so I'm using the errorlevel returned from grepping for the 'Not connected' string to figure out whether it worked or not.
You can get wget or curl to do this as well, and they do support errorlevels.
Batch files can be a bit too basic for this kind of thing.
If you are able and willing to experiment with the Python programming language ( http://www.python.org ) and additionally install the Paramiko module ( http://www.lag.net/paramiko/ ), then it would be possible to write a script along the lines of...
import paramiko
try:
    t = paramiko.Transport(('TheHostname', 22))
    t.connect(username='MyUsername', password='MyPassword')
    sftp = paramiko.SFTPClient.from_transport(t)
    dirlist = sftp.listdir('.')
except:
    print "It's Broken"
    # Send e-mails and such here
that you could then schedule to run on a regular basis.
I am using CakePHP 1.3, and I was able to successfully set up a cron job to run shells using the example given in the CakePHP Book.
*/5 * * * * /full/path/to/cakeshell myshell myparam -cli /usr/bin -console /cakes/1.2.x.x/cake/console -app /full/path/to/app >> /path/to/log/file.log
This outputs the results into a log file, but I want to receive an email when there is an error so I can try to resolve the problem.
I tried the following with no luck:
1. If I remove the >> /path/to/log/file.log, then even a successful run is emailed.
2. > /dev/null: my assumption was it would send successful output to /dev/null and errors to email.
3. 1> /dev/null: another variation of 2.
Any help is appreciated.
Thanks
Huseyin,
This is not a CakePHP error then, and it is maybe a question better suited for Server Fault, as you would script your solution.
Bash's built-in facilities are up to the task; try the Linux Documentation Project's introductory tutorials on shell scripting and man bash.
Your solution basically has to use a temporary file or variable in which you store the output of the last cron job run. If there is an error:
cat THE_TMP_FILE | mail -s "Error from Huseyin's server" huseyin@fancy_domain.com
else:
cat THE_TMP_FILE >> blah.blah.log
Unfortunately, you need an MTA available in order for the mail command to work. If you do not have access to the mail command, then set up another cron job, following the first in time, which simply runs: if [ -e THE_FILE_CONTAINING_THE_LAST_ERROR ]; then { echo $(cat THE_FILE_CONTAINING_THE_LAST_ERROR); rm -v THE_FILE... ; }; fi
Of course this is not working code, but it is pretty close, so you'll get the idea.
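To make that concrete, here is a minimal sketch of the temporary-file approach, assuming the cake shell exits non-zero on error (the paths and address are the placeholders from the question):

OUT=$(mktemp)
if ! /full/path/to/cakeshell myshell myparam -cli /usr/bin -console /cakes/1.2.x.x/cake/console -app /full/path/to/app > "$OUT" 2>&1 ; then
    mail -s "Error from Huseyin's server" huseyin@fancy_domain.com < "$OUT"
else
    cat "$OUT" >> /path/to/log/file.log
fi
rm -f "$OUT"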
I am running a script that logs in to a server and then executes the command
"passwd -n 0 -x 99999 -i -1 debug" to remove ageing for the debug user.
If the user debug is not present, then I want to create the user debug, set its password, and then execute the above command for ageing.
How can I do this?
Regards,
vasistha
From perlfunc(1):
system LIST
[...]
The return value is the exit status of the program as returned
by the "wait" call. To get the actual exit value, shift right
by eight (see below).
Therefore:
my $ret = system(qw/passwd -n 0 -x 99999 -i -1 debug/);
if ($ret != 0) {
    # failure handling code here; the actual exit value is $ret >> 8
}
Use Puppet.
If you really insist on doing it manually, use getent passwd debug to check whether the user exists:
if [ $(getent passwd debug | wc -l) = 0 ]; then
    adduser debug
fi
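Putting the pieces together for the original question, a rough sketch might look like this (it assumes root privileges, and chpasswd is just one illustrative way to set the initial password):

if [ $(getent passwd debug | wc -l) = 0 ]; then
    adduser debug
    echo 'debug:INITIAL_PASSWORD' | chpasswd    # placeholder password; set per your policy
fi
passwd -n 0 -x 99999 -i -1 debug                # remove ageing, as in the question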
I suggest using something like Expect. It handles the interactivity for you. You can log in to the server, execute commands, inspect the output, send more input, and so on. If you are doing lots of remote server administration, it's a very handy tool to know. There's even an article about it in The Perl Review Issue 4.2 (Spring 2008)