On job failure (exit code > 0), Rundeck automatically adds detailed status information to the notification attachment:
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 3709 in project test_project_1: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [host1: NonZeroResultCode: Remote command failed with exit status 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:host1)=BaseDataContext{{exec={exitCode=0}}}, ContextView(node:host1)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, Node failures: {host1=[NonZeroResultCode: Remote command failed with exit status 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:host1)=BaseDataContext{{exec={exitCode=0}}}, ContextView(node:host1)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, status: failed]
Can this message be disabled or hidden so that only the script output is sent in the attachment, as on a successful job run?
You can force an "exit 0" in your step by wrapping it in an inline script like this:
#!/bin/bash
# run the real command; discard its stderr so it stays out of the attachment
touch /root/test 2> /dev/null
if [ $? -eq 0 ]
then
    # whatever you want
    echo "Successfully created file"
    exit 0
else
    echo "Could not create file" >&2
    # use "exit 0" here too if you want Rundeck to treat the step as
    # successful and skip the verbose failure details entirely
    exit 1
fi
I can't install nodejs using the meta-nodejs layer on qemux86-64.
bitbake nodejs gives the following error:
Initialising tasks: 100% |########################################################################################################################################################################|
Time: 0:00:05
Sstate summary: Wanted 7 Found 0 Missed 7 Current 780 (0% match, 99% complete)
NOTE: Executing Tasks
ERROR: nodejs-7.10.0-r1.4 do_configure: Execution of '/home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/run.do_configure.68465' failed with exit code 127: /usr/bin/env: ‘python’: No such file or directory
WARNING: exit code 127 from a shell command.
ERROR: Logfile of failure stored in: /home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/log.do_configure.68465
Log data follows:
| DEBUG: Executing shell function do_configure
| /usr/bin/env: ‘python’: No such file or directory
| WARNING: exit code 127 from a shell command.
| ERROR: Execution of '/home/user/poky/build/tmp/work/core2-64-poky-linux/nodejs/7.10.0-r1.4/temp/run.do_configure.68465' failed with exit code 127:
| /usr/bin/env: ‘python’: No such file or directory
| WARNING: exit code 127 from a shell command.
| ERROR: Task (/home/user/poky/meta-openembedded/meta-nodejs/recipes-devtools/nodejs/nodejs_7.10.0.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 2022 tasks of which 2016 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
  /home/user/poky/meta-openembedded/meta-nodejs/recipes-devtools/nodejs/nodejs_7.10.0.bb:do_configure
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
I installed Python on both the host and the target.
Can someone help me?
meta-nodejs is outdated; use the nodejs recipe from meta-oe instead.
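If it helps, here is a minimal sketch of the switch, assuming a standard poky checkout with meta-openembedded cloned next to it (the layer paths are assumptions about your tree, adjust them to your layout):

# hypothetical commands, run from your build directory
bitbake-layers remove-layer meta-nodejs          # drop the outdated layer
bitbake-layers add-layer ../meta-openembedded/meta-oe
bitbake nodejs                                   # rebuild using the meta-oe recipe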
I have a Python script on server_A that connects to server_B via SSH and calls a local rsync command to reset a directory on B with a fresh set of files. The script on A then proceeds to rsync an additional set of files over to B. My hope was to run this on a schedule in Rundeck. However, it errors every time during the run with this output. What am I doing wrong?
Remote command failed with exit status 1
Failed: NonZeroResultCode: Remote command failed with exit status 1
Execution failed: 9 in project Test: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [server_A: NonZeroResultCode: Remote command failed with exit status 1]}, Node failures: {server_A=[NonZeroResultCode: Remote command failed with exit status 1]}, flow control: Continue, status: failed]
Exit status 1 was returned by the command you called. What are you running?
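To narrow it down, here is a minimal debugging sketch (the paths and rsync options are placeholders, not your actual script) that surfaces each stage's exit code in the Rundeck log, so you can see which command is returning 1:

#!/bin/bash
# hypothetical wrapper: run each stage separately and echo its exit code
ssh server_B "rsync -a --delete /staging/source/ /data/dir_B/"
echo "remote reset rsync exited with: $?"
rsync -a /local/extra/ server_B:/data/dir_B/
echo "push rsync exited with: $?"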
I am attempting to set up Capistrano with a SilverStripe build and am running into some trouble setting up the shared directories.
I set the linked_dirs in deploy.rb with the following:
set :linked_dirs, %w{assets vendor}
Since adding this line I get the following error:
[617afa7f] Command: /usr/bin/env mkdir -p /var/www/website/releases/20160215083713 /var/www/website/releases/20160215083713
INFO [617afa7f] Finished in 0.250 seconds with exit status 0 (successful).
DEBUG [88c3de20] Running /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [88c3de20] Command: [ -L /var/www/website/releases/20160215083713/assets ]
DEBUG [88c3de20] Finished in 0.258 seconds with exit status 1 (failed).
DEBUG [3d61c1c4] Running /usr/bin/env [ -d /var/www/website/releases/20160215083713/assets ] as capistrano@128.199.231.152
DEBUG [3d61c1c4] Command: [ -d /var/www/website/releases/20160215083713/assets ]
DEBUG [3d61c1c4] Finished in 0.254 seconds with exit status 1 (failed).
INFO [3016a8cd] Running /usr/bin/env ln -s /var/www/website/shared/assets /var/www/website/releases/20160215083713/assets as capistrano@128.199.231.152
I am a mega noob when it comes to Capistrano and a semi noob when it comes to server configuration and permissions, so any pointers would be appreciated.
It probably hasn't actually failed. One thing to know about Capistrano is that (successful) and (failed) simply report the command's exit status: (successful) if it was 0 and (failed) if it was non-zero.
If we look at the command in question, it says that /usr/bin/env [ -L /var/www/website/releases/20160215083713/assets ] failed. This command is saying "return 0 if /var/www/website/releases/20160215083713/assets exists and is a symbolic link (-L)". It fails, but that just means it returns non-zero, so the link needs to be created. Note that the next command, which asserts that the path is a directory (-d), also fails. And the last line in your output is actually creating the link in question.
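You can reproduce the same behaviour in any shell; a quick illustration with a throwaway path (nothing from your deploy):

# the test command returns an exit status rather than printing anything
[ -L /tmp/no-such-link ]; echo $?   # prints 1, logged by Capistrano as (failed)
ln -s /tmp /tmp/demo-link
[ -L /tmp/demo-link ]; echo $?      # prints 0, logged as (successful)
rm /tmp/demo-link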
You can see the test in the Capistrano codebase here: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L128
You can clean up and simplify the output with https://github.com/mattbrictson/airbrussh, which is developed by one of the primary Capistrano devs.
As a side note, the green text in your terminal is stdout and the red text is stderr, which can be similarly confusing.
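For reference, wiring in airbrussh is usually just two lines; this is sketched from its README, so double-check there for the current instructions:

# Gemfile
gem "airbrussh", require: false

# Capfile
require "airbrussh/capistrano"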
I have the following piece of code (extracted from a larger script):
Write-Output "Syncing $directory"
Push-Location $directory
git pull origin $branch
$directoryName = [IO.Path]::GetFileName($directory)
git log -n 1 --pretty=format:"%H %cd %aN%n%B" --date=short > "..\$directoryName.lastcommit.txt"
Pop-Location
Which occasionally (~50% chance) produces an error when run in a Windows Azure WebJob:
[05/06/2014 22:20:43 > e5e3ee: INFO] Syncing D:\home\site\!roslyn-sources\DeclarationExpressions
[05/06/2014 22:20:45 > e5e3ee: ERR ] From https://git01.codeplex.com/roslyn
[05/06/2014 22:20:45 > e5e3ee: ERR ] * branch DeclarationExpressions -> FETCH_HEAD
[05/06/2014 22:20:45 > e5e3ee: INFO] Already up-to-date.
[05/06/2014 22:20:45 > e5e3ee: INFO] [ERROR] Window title cannot be longer than 1023 characters.
[05/06/2014 22:20:45 > e5e3ee: INFO] Returning exit code 1
[05/06/2014 22:20:45 > e5e3ee: SYS INFO] Status changed to Failed
[05/06/2014 22:20:45 > e5e3ee: SYS ERR ] Job failed due to exit code 1
I never get this error when running locally.
What might be the reason?
Maybe the exe you are running is changing the window title.
So, instead of calling git directly in your PowerShell script, run it with the Start-Process cmdlet.
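A minimal sketch of that change (the argument handling is an assumption, adapt it to your script):

# hypothetical rewrite of the pull step: launching git via Start-Process
# keeps it from touching the console window title
$p = Start-Process -FilePath "git" -ArgumentList "pull", "origin", $branch -NoNewWindow -Wait -PassThru
if ($p.ExitCode -ne 0) {
    Write-Error "git pull failed with exit code $($p.ExitCode)"
}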
We use gridengine (specifically Open Grid Scheduler 2011.11.p1) as our batch-queuing system. I just added an execution host named host094, but when jobs are submitted there, they error out: the job status is Eqw, and the log in $SGE_ROOT/default/spool/host094/messages says:
shepherd of job 119232.1 exited with exit status = 26
can't open usage file active_jobs/119232.1/usage for job 119232.1: No such file or directory
What does this mean?