How to gracefully stop job execution when one step fails in Rundeck?

So I defined a Rundeck job which normally executes three steps:
1. run a script that checks a remote directory for .csv files and rsyncs them over
2. manipulate the csv files
3. rsync the csvs back to the remote dir
Now I set up the script in step 1 to finish with exit code 1 when there are no csv files in my remote directory, upon which steps 2 and 3 are not executed - which is great! But the whole job is marked as failed even though it simply didn't need to execute the other steps.
Is it possible to conditionally execute steps 2 and 3 of my job such that if step 1 fails this way, the job is still marked as 'succeeded'?

It is possible with Rundeck Error Handlers.
You will need to use the job context variable ${result.resultCode} in your error handler in order to get the failed step's exit code.
Since you don't want the job marked as failed after the error handler executes successfully, you need to tick Keep going on success in the web UI or add keepgoingOnSuccess="true" to your job definition.
Note that after the error handler succeeds, the job will continue with steps 2 and 3, so you may need to adapt those steps (or the handler) to cope with the no-files case.
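As a minimal sketch (the job name, script paths, and the convention that step 1 exits with code 1 for "no files" are all assumptions), the job definition could look something like this:
<joblist>
  <job>
    <name>csv-sync</name>
    <sequence keepgoing="false" strategy="node-first">
      <command>
        <!-- Step 1: exits 1 when there are no .csv files to fetch -->
        <scriptfile>/opt/scripts/fetch_csvs.sh</scriptfile>
        <errorhandler keepgoingOnSuccess="true">
          <!-- Succeed only for the "no files" exit code; any other
               failure still fails the job -->
          <exec>test "${result.resultCode}" = "1"</exec>
        </errorhandler>
      </command>
      <command>
        <scriptfile>/opt/scripts/manipulate_csvs.sh</scriptfile>
      </command>
      <command>
        <scriptfile>/opt/scripts/push_csvs.sh</scriptfile>
      </command>
    </sequence>
  </job>
</joblist>
Because the handler succeeds, the workflow then continues into steps 2 and 3 (the caveat above), so those steps need to tolerate the no-files case, e.g. by exiting early when there is nothing to process.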

Related

Task scheduler for PowerShell; last run successful but item was not copied

I need help understanding why the scheduled task is not working, even though the Last Run Result says:
The operation completed successfully.
The file is never copied to the destination folder, but if I run the .ps1 script manually, it completes successfully.
[Figure: screenshot of the scheduled task's configuration; the action shown is:]
Add-ExecutionPolicy Bypass C:\auto_p\applied.ps1

Can a PowerShell script be dependent on another script's execution?

I have a situation where I want to make the execution of my scripts smarter. I have a set of scripts that execute at given times, but because the input files are sometimes not posted at the correct times, the scripts run into errors and produce unexpected results. So one of the solutions I was thinking of is to make the execution of the scripts dependent on each other. Here is what I mean:
script 1 runs at 6 pm and validates that the file is there
if it's there, it sets a flag
the flag is set, so script 2 executes at 9 pm
if it's NOT there, the flag is not set
the flag is not set, so script 2 is not executed
Right now script 1 and script 2 are set up in the Task Scheduler at those times. I checked the Scheduler for those kinds of conditions but didn't find anything.
You can set triggers in Task Scheduler for when an event happens - essentially anything you can see in Event Viewer.
I would suggest using Write-EventLog from the script that works on the file; depending on the result, the scheduled task for the next script would then get triggered.
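For example (a minimal sketch - the source name "CsvScripts", the event IDs, and the file path are all made up for illustration), script 1 could log a distinctive event, and script 2's scheduled task would use an "On an event" trigger (Log: Application, Source: CsvScripts, Event ID: 100) instead of a fixed time:
# One-time setup (run elevated): register an event source for the scripts.
New-EventLog -LogName Application -Source "CsvScripts"

# In script 1: validate the input file and log the outcome.
$inputFile = "C:\data\input.csv"
if (Test-Path $inputFile) {
    # Event ID 100 signals "file present"; script 2's task triggers on it.
    Write-EventLog -LogName Application -Source "CsvScripts" -EventId 100 -EntryType Information -Message "Input file found."
} else {
    Write-EventLog -LogName Application -Source "CsvScripts" -EventId 101 -EntryType Warning -Message "Input file missing."
}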
Alternatively, I suggest having a single script that runs every N minutes from a single scheduled task in Task Scheduler.
This master script will analyze activity and contain all the logic that determines when and which external script to run. You can also use flag files.
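A minimal sketch of that master-script idea (paths, script names, and the hour checks are assumptions; re-run guards are omitted for brevity):
# master.ps1 - scheduled to run every N minutes from a single task.
$inputFile = "C:\data\input.csv"
$flagFile  = "C:\flags\input_ready.flag"
$hour = (Get-Date).Hour

if ($hour -ge 18 -and (Test-Path $inputFile) -and -not (Test-Path $flagFile)) {
    # After 6 pm and the input file has arrived: run script 1, set the flag.
    & "C:\scripts\script1.ps1"
    New-Item -ItemType File -Path $flagFile -Force | Out-Null
}
if ($hour -ge 21 -and (Test-Path $flagFile)) {
    # After 9 pm and the flag is set: run script 2, then clear the flag.
    & "C:\scripts\script2.ps1"
    Remove-Item $flagFile
}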

VSTS build definition - prevent PowerShell exit behavior causing processes termination

I have a PowerShell task in my definition that calls another script file, which in turn takes care of running several things on my build agent (it starts several different processes): emulators, Node.js applications, etc.
Everything is fine up until the moment this step is done and the run continues. All of the above-mentioned stuff gets closed, with most of the underlying processes killed; thus, any further execution (e.g. the test run) is doomed to fail.
My assumption is that these processes are somehow dependent on the outermost (temporary) script that VSTS generates to process the step.
I tried the -NoExit switch in the arguments list of my script, but to no avail. I've also read a suggestion somewhere to set this by default with a registry key for powershell.exe - still nothing.
The very same workflow was fine in Jenkins. How can I fix this?
These are the tasks I have:
The last PowerShell task calls a PowerShell file which in turn calls several others. They set up some local dependencies and processes needed to start executing the tests, e.g. a running Node.js application (started in a separate console, for example, and running fine).
When that task is done and successful, the final task with the tests fails, because the Node.js application has been shut down along with everything else started within the previous step. It just stops everything. That's why I'm currently running the tests within the same task until I find a way to overcome this behavior.
I am not sure how you start the dependencies and applications in your PowerShell script, but I tried the following command in a PowerShell script task to run a Node.js application:
Invoke-Expression 'cmd /c start powershell -Command {node main.js}'
The application keeps running after the PowerShell script task has finished, which should meet your requirement. Refer to this question for details: PowerShell launch script in new instance.
But you need to remember to close the process after the test is finished.
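For example (a one-liner sketch, assuming the spawned process is node.exe - adjust the process name to whatever you actually start), a cleanup step could run:
Get-Process node -ErrorAction SilentlyContinue | Stop-Process -Force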
There is also a Continue on error option (Control Options section). The build will keep going if it is checked, but the build result will be "partially succeeded".
You can also output the error or warning by using PowerShell or VSTS logging commands (uncheck the Fail on Standard Error option in the Advanced section) and terminate the current PowerShell process by using the exit keyword, for example:
Write-Warning "warning"
Write-Error "error"
Write-Host "##vso[task.logissue type=warning;]this is the warning"
Write-Host "##vso[task.logissue type=error;sourcepath=consoleapp/main.cs;linenumber=1;columnnumber=1;code=100;]this is an error"
(Note that the ##vso command must start at the beginning of the output line for the agent to pick it up.)
For more information about VSTS logging commands, refer to: Logging Commands

Azure Powershell VSO agent task not failing for non-zero exit code

When putting together a release definition in VSO, adding an Azure PowerShell task backed by a file Script1.ps1 containing only exit 1 does not fail the step when it runs - which I would expect it to do, given that the Continue on error box is not checked.
If I add the plain PowerShell task instead, writing exit 1 in the inline variant does indeed fail the step. That task also comes with an advanced configuration option where Fail on Standard Error is checked by default.
What did I miss? How would I go about making the Azure PowerShell task fail in the same manner?
Use this code instead:
[Environment]::Exit(1)
The task will also fail if the script throws an exception or writes to the stderr stream.
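A minimal sketch of those alternatives for the script file (any one of them alone is enough; the message text is illustrative):
# Option 1: hard-exit the PowerShell process with a non-zero exit code.
[Environment]::Exit(1)

# Option 2: throw a terminating error; the unhandled exception fails the task.
throw "something went wrong"

# Option 3: write to the error stream; this fails the task when the
# Fail on Standard Error option is enabled.
Write-Error "something went wrong"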

Error handling in sets of batch files running in Windows task scheduler

Let's say I have 5 batch files that run sequentially one after another (executed via the Windows task scheduler on a normal Windows XP PC):
Script1.bat
Script2.bat
Script3.bat
Script4.bat
Script5.bat
Suppose one of the scripts fails (an error condition is detected - the details of how this happens are not important for my question here). How do I stop the remaining scripts from running if they all run within the task scheduler? For example, if Script1.bat fails, I don't want to run Script2-5.bat. If Script3.bat fails, I don't want to run Script4-5.bat, etc.
I thought about writing a flag value to a temporary file that each script would read. At the beginning of each script (except the first one), it would check whether the flag is valid, and the first script would clear the flag at the beginning of each run of the batch set.
Surely there is a better way to do this, or maybe there is a standard for handling this type of situation? Thanks!
Write a master.bat file that conditionally calls each of the scripts in sequence. Then schedule the master instead of directly scheduling the 5 scripts.
@echo off
rem Run each script only if the previous one exited with code 0.
call Script1.bat
if %errorlevel%==0 call Script2.bat
if %errorlevel%==0 call Script3.bat
if %errorlevel%==0 call Script4.bat
if %errorlevel%==0 call Script5.bat
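Note that this only works if each ScriptN.bat actually returns a non-zero exit code on its error path (for example by ending that path with exit /b 1); %errorlevel% is re-evaluated after each call, so the chain stops at the first script that fails.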