Create a precheck job in JAMS PowerShell

I have set the JAMS workflow's and jobs' Retain Option to Error.
I would like to create a precheck job in JAMS PowerShell that cancels or blocks a new instance of the job or workflow from starting before the current workflow has finished.
Would anyone help me create this inside the JAMS PowerShell script source, or in any other way?
I am new to this, so I don't have an idea of how to create it.
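For what it's worth, a precheck along these lines might look like the sketch below. It assumes the JAMS PowerShell module is installed; the Get-JAMSEntry cmdlet, the JobName and CurrentState property names, and the MyWorkflow name are all assumptions to verify against your JAMS version, not confirmed API:

# Hedged sketch of a JAMS precheck script; cmdlet and property names are
# assumptions to verify against the installed JAMS PowerShell module.
Import-Module JAMS

# Look for another currently-executing instance of this workflow.
$running = Get-JAMSEntry |
    Where-Object { $_.JobName -eq 'MyWorkflow' -and $_.CurrentState -eq 'Executing' }

if ($running) {
    Write-Host 'Another instance of MyWorkflow is still executing; aborting.'
    # A non-zero exit code fails the precheck, which (combined with the
    # workflow configuration) keeps the new instance from proceeding.
    exit 1
}
exit 0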

Related

How to schedule the execution of a Python script in Azure DevOps (after successful Build)?

I have an Azure Pipeline Build. The *.yaml file correctly executes a Python script (PythonScript@0). This script itself creates (if it does not exist), executes, and publishes an Azure ML pipeline. It runs well when the Build is executed manually or is triggered by commits.
But I want to schedule the automated execution of the ML pipeline (the Python script) on a daily basis.
I tried the following approach:
from azureml.pipeline.core import Schedule, ScheduleRecurrence

pipeline_id = published_pipeline.id
recurrence = ScheduleRecurrence(frequency="Day", interval=1)
recurring_schedule = Schedule.create(ws,
                                     name=<schedule_name>,
                                     description="Title",
                                     pipeline_id=pipeline_id,
                                     experiment_name=<experiment_name>,
                                     recurrence=recurrence)
In this case the pipeline runs for 3-4 seconds and terminates successfully; however, the Python script is not executed.
I also tried to schedule the execution of the pipeline using Build, but I assume that is the wrong approach: it rebuilds the pipeline, whereas I need to execute a previously published pipeline.
schedules:
- cron: "0 0 * * *"
  displayName: Daily build
  always: true
How can I execute my published pipeline daily? Should I use a Release (and if so, which agents and which tasks)?
Assuming your Python-related task runs after many other tasks, it's not recommended to simply schedule the whole build pipeline; that would rerun everything (the other tasks plus the Python script).
Only a whole pipeline can be scheduled, not individual tasks, so I suggest you create a new build pipeline that runs just the Python script. A private (self-hosted) agent is also more suitable for this scenario.
Now we have two pipelines: the original pipeline A, and pipeline B, which is used to run the Python script.
Set B's build-completion trigger to A, so that whenever A builds successfully, B runs after it.
Add a command-line or PowerShell task as pipeline A's last task. This task (which modifies the YAML and pushes the change) is responsible for updating B's corresponding xx.yml file to add a schedule to B, as in the sketch below.
In this way, if A (the other tasks) builds successfully, B (the pipeline that runs the Python script) executes, and B then runs daily after that successful build.
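As a rough illustration, that last task in A could run a PowerShell script along these lines; the file name azure-pipelines-b.yml, the commit identity, and the branch are placeholders, not anything the original answer specifies:

# Hypothetical final task in pipeline A: append a schedule to B's YAML
# definition and push the change. File name, identity, and branch are
# placeholders for illustration only.
$yml = "azure-pipelines-b.yml"

$schedule = @"
schedules:
- cron: "0 0 * * *"
  displayName: Daily run
  always: true
"@

Add-Content -Path $yml -Value $schedule

git config user.email "build-agent@example.com"
git config user.name "Build Agent"
git add $yml
git commit -m "Schedule pipeline B to run daily"
git push origin HEAD:master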
Hope this helps; if I've misunderstood anything, feel free to correct me.

Rerun successful Oozie jobs

Is there a way to fully rerun successful Oozie jobs? Let's assume we schedule the creation of a table and want to rebuild it on demand; is there an easy way to do that in Oozie?
I tried the oozie -rerun command, but if every action was successful it produces no results; it just checks that everything succeeded and finishes the job.
Rerun with oozie.wf.rerun.failnodes set to false (it is true by default).
Example:
oozie job -rerun 0000092-141219003455004-oozie-oozi-W -config job.properties -Doozie.wf.rerun.failnodes=false
From Apache Oozie by Mohammad Kamrul Islam and Aravind Srinivasan
By default, workflow reruns start executing from the failed nodes in the prior run.... The property oozie.wf.rerun.failnodes can be set to false to tell Oozie that the entire workflow needs to be rerun.
If your job ran successfully and you want to rerun it on demand, first find the action number by running: oozie job -info xxxxx-xxxxxxxx-xxx-C
Once you have the action number, run: oozie job -rerun xxxxxxx-xxxxxxxx-C -action xx
and you should be good.

In an Azure DevOps release pipeline with several jobs, can I capture the status of the previous jobs in a script?

In my Azure DevOps release pipeline I have a PowerShell script that sends a REST request to another application with the status 'failed' or 'successful'.
I want the status to be 'failed' if any of the previous jobs failed. So basically something like this:
if (($Agent -eq "Succeeded") -and ($LastJobsFailed -ne "true")) {
    $change_status = "successful"
}
else {
    $change_status = "failed"
}
Now, I know that Azure DevOps uses this status somewhere, since you can specify whether a job starts or not based on the results of the previous jobs.
As a workaround I copied my script twice: one copy with status "successful" that only runs when all jobs succeed, and vice versa. But I'd like to do everything in one script :)
So I would expect it to be possible to find a list of all previous job statuses or something similar.
Does anyone have any ideas?
Thanks!
I think you don't need to check the status of previous jobs in the PowerShell script. A workaround is to create one job (named jobSendOK) that specifies the run condition "Only when all previous jobs have succeeded", and another job (named jobSendNG) that specifies the run condition "Only when a previous job has failed".
In jobSendOK, add a PowerShell task that sends 'successful', while jobSendNG gets a PowerShell task that sends 'failed'. Both tasks can share the same script, as sketched below.
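A minimal sketch of that shared script, assuming the receiving application takes a JSON body at some endpoint (both the URL and the payload shape here are placeholders):

# Hypothetical status-sending script: jobSendOK passes "successful",
# jobSendNG passes "failed". Endpoint and payload shape are placeholders.
param([string]$Status)

$params = @{
    Uri         = "https://example.com/api/change-status"
    Method      = "Post"
    ContentType = "application/json"
    Body        = (@{ change_status = $Status } | ConvertTo-Json)
}
Invoke-RestMethod @params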

How to run an Event (FileSystemWatcher) through Task Scheduler

I currently have a pretty simple PowerShell script that creates an IO.FileSystemWatcher object and calls an executable when the watcher's event is triggered.
I can run this script without issue from an administrator PowerShell session on my Windows Server 2012 machine; however, it runs into issues when launched from Task Scheduler.
I've attempted running the task while logged on, and on a trigger while logged off, and in both instances the task status reads "Running" when I check. However, interacting with the folder that should be watched produces no results. I've added a log file to document which parts of the code run, and the script DOES create the event; it is the triggering of the event that seems to be the issue. Has anyone heard of an issue with creating events through Task Scheduler?
I've read some forums that say it might be a domain user issue:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa
Change the REG_DWORD with value name 'disabledomaincreds' to a value of 0.
Although this was already the case, and I've tried multiple variations of settings in the Task Properties as per Scripting Guy and SpiceWorks. The general consensus I've found is that it needs to be run with a -NoExit argument for the event to work properly when the user is not logged in (see the example after the notes below).
Extra notes:
The PowerShell script is located on a network share rather than physically on the computer (\\serverName\FTP\Folder\script.ps1).
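For reference, with that -NoExit consensus in mind, the scheduled task's action might be configured along these lines; the -ExecutionPolicy flag is an assumption about the environment rather than something from the original post:

Program/script:  powershell.exe
Add arguments:   -NoExit -ExecutionPolicy Bypass -File \\serverName\FTP\Folder\script.ps1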
I came across the same problem. I don't know why this works, but in your scheduled task, when referring to the PowerShell script, instead of using
\\serverName\FTP\Folder\script.ps1
use
. \\serverName\FTP\Folder\script.ps1
(note the leading dot: it dot-sources the script, running it in the calling session's scope).
As I understand it, as a PowerShell novice, the events you register with FileSystemWatcher will only fire while the PowerShell instance that registered them is still running. I wouldn't trust that task manager says the task is running, since it is notoriously unreliable, which seems to be the Microsoft standard. I think once your script finishes executing, it kills the PowerShell instance, and all event listeners are garbage collected.
I just put my script to sleep forever and it works. At the end of my script, it has
while ($true) {sleep 1}
It probably wouldn't hurt to increase the sleep time, but this works.
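Putting the two answers together, a self-contained version of such a script might look like the following sketch; the watched UNC path and the executable are placeholders rather than the asker's actual values:

# Hedged sketch of a watcher script that keeps its own session alive.
# The UNC path and executable below are placeholders.
$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = '\\serverName\FTP\Folder'
$watcher.EnableRaisingEvents = $true

# Call the executable whenever a file is created in the watched folder.
Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    & 'C:\Tools\process.exe' $Event.SourceEventArgs.FullPath
} | Out-Null

# Keep the PowerShell process alive so the event subscription can fire.
while ($true) { Start-Sleep -Seconds 1 }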

Obtaining the DistributedTaskContext in a custom TFS Build/Release Script

I'm using TFS 2015 Update 2 along with the new Build/Release system. I have a PowerShell script I'm executing via the PowerShell task. This script needs access to the $distributedTaskContext magic variable I see in many different VSTS task code samples.
The script in question is not technically a task; it is being executed by the PowerShell task that ships with TFS.
No matter what I do, I can't seem to obtain the $distributedTaskContext variable. It's always null. Here is an example:
Import-Module "Microsoft.TeamFoundation.DistributedTask.Task.Internal"

if ($distributedTaskContext)
{
    # ... this never happens
}
Is this variable only available if the PowerShell script is being run inside an actual task?
The default PowerShell task that you are using runs the script in an entirely separate process, and the $distributedTaskContext variable is not available to that script.
It is available only to a task's own PowerShell script.
If you are going to write a custom task, I suggest you use the new vsts-task-lib SDK, which improves a lot over the old SDK.
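For orientation, the entry script of a custom task built on that SDK typically follows the pattern below; the 'targetPath' input name is a hypothetical example, not part of any real task:

# Sketch of a custom-task entry script using VstsTaskSdk (vsts-task-lib).
# The 'targetPath' input is a hypothetical example declared in task.json.
[CmdletBinding()]
param()

Trace-VstsEnteringInvocation $MyInvocation
try {
    # Read a task input declared in the task's task.json.
    $targetPath = Get-VstsInput -Name 'targetPath' -Require
    Write-Host "Target path: $targetPath"
}
finally {
    Trace-VstsLeavingInvocation $MyInvocation
}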