Run Azure Pipelines tasks based on dynamic conditions

Can we run tasks of an Azure pipeline based on dynamic conditions?
I actually want to run only some of the listed tasks, and the choice of which tasks to run may be different each time, per user requirements.

For your issue, you can set conditions through the Control Options of a task to control whether it runs. If this does not meet your needs, please give a specific example so that I can better understand your request.
Inside the Control Options of each task you can specify the conditions under which the task will run.
If the built-in conditions don't meet your needs, you can specify custom conditions.
Conditions are written as expressions. The agent evaluates the expression beginning with the innermost function and works its way out. The final result is a boolean value that determines whether the task runs. See the expressions topic for a full guide to the syntax.
The documentation provides some examples you can refer to.
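For example, here is a sketch of such a custom condition, assuming a hypothetical variable named runTaskA that the user sets at queue time to choose whether the task runs:

and(succeeded(), eq(variables['runTaskA'], 'true'))

Pasted into the Custom conditions box of a task's Control Options, this runs the task only when all previous tasks succeeded and the pipeline was queued with runTaskA set to true; a variable like this can be defined per task to pick a different subset of tasks on each run.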

Related

Trigger Date for reruns

My pipeline's activities need the date of the run as a parameter. Right now I get the current date in the pipeline from the utcnow() function. Ideally this would be something I could enter dynamically in the trigger, so that when I rerun a failed day the parameter would be set correctly; as it stands, a rerun executes my pipeline with today's date rather than the failed run's date.
I am used to Airflow, where such things are pretty easy to do, including scheduling reruns. Probably I think too much in Airflow terms, but I can't wrap my head around a better solution.
ADF does not directly support passing the trigger date of a failed run back to the trigger.
You can get the trigger time using @pipeline().TriggerTime.
This system variable gives the time at which the trigger started the pipeline run.
You can store this trigger value for every run and use it as a parameter when rerunning the pipeline that failed.
Reference: the Microsoft documentation on system variables in ADF.
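As a sketch of that pattern (the parameter name runDate is hypothetical): give the pipeline a runDate parameter, have the trigger fill it in on scheduled runs, and reference the parameter instead of utcnow() inside activities.

In the trigger definition's pipeline parameters section (for a schedule trigger): "runDate": "@trigger().scheduledTime"
Inside activities, reference: @pipeline().parameters.runDate

When a day fails, you trigger the pipeline manually and type the failed day's date into runDate, instead of relying on the current time.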
To resolve my problem I had to create a nested structure of pipelines, with the top pipeline setting a variable for the date and then calling other pipelines, passing that variable.
With this I still can't rerun the top pipeline, but rerunning Execute Pipeline1/2/3 reruns them with the right variable set. It is still not perfect, since the top pipeline run stays in an error state and it is difficult to keep track of what needs to be rerun, but it is a partial solution.

Azure DevOps - Passing Variables in release tasks

Basically I want two tasks, where the second task (Task B) looks at the status of the first task (Task A).
All the examples I see use YAML, but within the Release section for setting up deployments everything uses a user interface.
If I use Agent.JobStatus in Task A or Task B, it shows the job status of whatever we are currently in. I figure I need to capture the value between Task A and Task B (not within either one); how does one capture that? Either I can't find it, or I'm not understanding something.
I have put the following in the agent job variable expression of Task B, hoping it would pick up the last job status, but it is null:
in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')
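A hedged note on placement: that expression belongs in the Custom conditions field under Task B's Control Options, not in a variable expression. A task's condition is evaluated when the task is about to start, and at that point Agent.JobStatus reflects the outcome of the steps that have already run in the job, so on Task B it effectively reports how Task A went. For example, setting Task B's custom condition to

in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')

should run Task B only when an earlier task, such as Task A, failed or partially succeeded.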

Azure DevOps classic pipeline difference between linked parameters and variables?

What is the difference between linked task parameters (process parameters) and variables in a classic Azure DevOps build pipeline? Don't they both give you a single place to change values?
What I mean by "linked" task parameters is what you get by clicking the link icon when configuring tasks, as shown below,
which leads to a textbox for the linked value being added on the pipeline's settings page, as you can see below.
Regarding parameters in the classic pipeline, we generally use process parameters. You can link all important arguments for tasks used across the build definition as process parameters, which are then shown in one place: the Pipeline view. This means you can quickly edit these arguments without needing to click through all the tasks. Templates come with a set of predefined process parameters.
Variables give you a convenient way to get key bits of data into various parts of the pipeline. The most common use of variables is to define a value that you can then use in your pipeline. All variables are stored as strings and are mutable. The value of a variable can change from run to run or job to job of your pipeline.
The difference between them is:
Variables can be a convenient way to collect information from the user up front. You can also use variables to pass data from step to step within a pipeline. Unlike variables, pipeline parameters can't be changed by a pipeline while it's running.
Parameters have data types such as number and string, and they can be restricted to a subset of values. Restricting the parameters is useful when a user-configurable part of the pipeline should take a value only from a constrained list. The setup ensures that the pipeline won't take arbitrary data.
Process parameters also differ from variables in the kind of input they support. Variables only take string inputs, while process parameters additionally support input types like check boxes and drop-down list boxes.
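To make the distinction concrete, here is a minimal YAML sketch (YAML rather than classic, but the typed/restricted behaviour quoted above is the same idea; the names are illustrative):

parameters:
- name: configuration      # typed parameter, restricted to a fixed list of values
  type: string
  default: Debug
  values:
  - Debug
  - Release

variables:
  buildPlatform: 'x64'     # variables are always strings and can change at runtime

steps:
- script: echo Building ${{ parameters.configuration }} for $(buildPlatform)

Queueing this pipeline only offers Debug or Release for the parameter, while the variable accepts any string.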
For detailed information, please refer to the following documents:
Define variables
Process parameters
Variables and parameters

Automation for ADF V2 Pipeline

I need help implementing the following requirement:
There is one ADF pipeline that runs every two hours (with a tumbling window trigger). Now I need to create one more pipeline that will be used for a maintenance job. This pipeline is scheduled to run once a month (with a schedule trigger). Here is the requirement I'm trying to implement:
Before running the second pipeline, I need to make sure the first pipeline is not running (basically get its status and, if it is running, wait for it to complete) and then disable the trigger associated with it.
Run the second pipeline and, after its completion, enable the trigger associated with the first pipeline.
Please let me know if this can be achieved within ADF or whether some kind of custom scripting is needed to achieve the result.
First, your idea is achievable.
Second, there is no built-in Azure Data Factory feature for this.
Basically, you need an Azure Function (a simple HTTP trigger with no input, so you can hit and execute it directly) to do what ADF can't. From your description, the execution of the two pipelines is mutually exclusive, so in the Azure Function you can use the SDK to check the status of the other pipeline. If it is running, wait a few seconds and then re-check its status. (In short, put the main logic and code in the Azure Function.)
A simple Azure Function:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=csharp
Using the SDK to monitor pipelines:
https://learn.microsoft.com/en-us/azure/data-factory/monitor-programmatically#net
(The links shown are for C#; you can choose another supported language.)
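Along those lines, here is a rough sketch of the function's core logic in Python (using the azure-identity and azure-mgmt-datafactory packages rather than the C# SDK linked above); the subscription, factory, pipeline, and trigger names are placeholders:

import time
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    RunFilterParameters, RunQueryFilter,
    RunQueryFilterOperand, RunQueryFilterOperator,
)

# Placeholders for your resources.
SUB, RG, DF = "<subscription-id>", "<resource-group>", "<factory-name>"
HOURLY_PIPELINE, HOURLY_TRIGGER = "TwoHourlyPipeline", "TumblingTrigger"
MAINTENANCE_PIPELINE = "MaintenancePipeline"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUB)

def wait_for_run(run_id, poll=30):
    # Poll a pipeline run until it leaves the Queued/InProgress states.
    while client.pipeline_runs.get(RG, DF, run_id).status in ("Queued", "InProgress"):
        time.sleep(poll)

def active_runs(pipeline_name):
    # Find runs of the given pipeline from the last day that are still active.
    now = datetime.now(timezone.utc)
    params = RunFilterParameters(
        last_updated_after=now - timedelta(days=1),
        last_updated_before=now,
        filters=[RunQueryFilter(
            operand=RunQueryFilterOperand.PIPELINE_NAME,
            operator=RunQueryFilterOperator.EQUALS,
            values=[pipeline_name],
        )],
    )
    result = client.pipeline_runs.query_by_factory(RG, DF, params)
    return [r for r in result.value if r.status in ("Queued", "InProgress")]

# 1. Pause the tumbling window trigger first so no new run can start,
#    then wait for any in-flight run of the two-hourly pipeline.
client.triggers.begin_stop(RG, DF, HOURLY_TRIGGER).result()
for run in active_runs(HOURLY_PIPELINE):
    wait_for_run(run.run_id)

# 2. Run the maintenance pipeline and wait for it to finish.
maintenance = client.pipelines.create_run(RG, DF, MAINTENANCE_PIPELINE)
wait_for_run(maintenance.run_id)

# 3. Re-enable the trigger for the two-hourly pipeline.
client.triggers.begin_start(RG, DF, HOURLY_TRIGGER).result()

Stopping the trigger before waiting avoids the race where a new window starts between the status check and the disable step.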

Is a JOB card necessary?

I guess job cards are something like the global attributes of a Java class. In my work I have never used these job card parameters. So is a job card necessary in a job? Could you please look at the job card below and tell me whether it's required and why I need it?
Best Regards
//BJ03H03 JOB (BBO09272,0000),
// 'NHS-STAT $',
// USER=BPB,
// SCHENV=HDZ2PO,
// CLASS=E,
// TIME=270,
// MSGCLASS=2
What is and isn't required on a job card is system/installation dependent. The minimum requirement is that a JOB statement with a JOBNAME exist, i.e. //JOBNAME JOB (an EXEC statement is also required).
However, your installation will likely require other parameters, and it may supply defaults. In short, you need to either speak to the system programmers or experiment by omitting parameters (the latter method could end up resulting in discussions with the systems programmers, perhaps angry ones).
The system is designed to enable users to perform many types of job control in many ways. To allow this flexibility, only two job entry tasks are required:
Identification: The job must be identified in the jobname field of a JOB statement.
Execution: The program or procedure to be executed must be named in a PGM or PROC parameter on an EXEC statement.
Therefore, the following statements are the minimum needed to perform a job control task:
//jobname JOB
//        EXEC {PGM=program-name}
               {PROC=procedure-name}
               {procedure-name}
(The stacked braces show the three alternative ways of naming what to execute.)
This is from the Task Charts section of the z/OS MVS JCL Reference (SA23-1385-00), which wouldn't be the worst place to start finding out more.
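As a concrete illustration (hedged: whether this runs as-is depends on your installation's requirements, such as accounting information), the smallest complete job most people write uses IEFBR14, the standard do-nothing program:

//MYJOB    JOB
//STEP1    EXEC PGM=IEFBR14

If your installation enforces an accounting field, a CLASS, or similar, the submit will fail with a JCL error until those are added, which is exactly the experiment described above.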