How to queue multiple runs of the same Azure pipeline on one agent - azure-devops

My pipeline triggers on resources, schedule and merges. Sometimes these can happen almost at the same time and many pipeline runs can be created. I've noticed that the jobs that run don't always belong to the same run.
Example
Pipeline A includes two jobs, j.1 and j.2.
A resource triggers run A.1, which starts j.1.
Another resource triggers run A.2, which queues its own j.1.
A.1 finishes j.1, but instead of A.1's j.2 starting next, it is A.2's j.1 that starts.
How do I lock the run so that A.1's j.1 and j.2 run to completion before A.2 starts?

On the agent side, the queue is at the job level, not the pipeline level, so an agent is normally allocated to the highest-priority queued job regardless of whether that job belongs to the same pipeline run.
Currently there is no method or setting to control the order of the queued jobs.
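If the real requirement is that one run finishes before the next begins, two common workarounds exist: merge j.1 and j.2 into a single job (an agent never interleaves steps within one job), or serialize whole runs with an Exclusive Lock check. Below is a minimal sketch of the latter, assuming an environment named serial-env on which an Exclusive Lock check has already been configured; lockBehavior: sequential makes queued runs wait in arrival order instead of cancelling each other.

```yaml
# Runs queue behind the exclusive lock in arrival order instead of interleaving.
lockBehavior: sequential

stages:
- stage: All
  jobs:
  - deployment: run_everything
    environment: serial-env   # assumption: this environment has an Exclusive Lock check
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "work formerly in j.1"
          - script: echo "work formerly in j.2"
```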

Related

Waiting on job "x" to finish in pipeline "1" before running job "x" in pipeline "2" (Azure DevOps)

We're using SonarQube for tests, and there's one token it uses. As long as only one pipeline is running, everything goes fine, but if I run several different pipelines (all of them have E2E tests as their final jobs), they all fail, because they keep calling a token that expires as soon as it is used by one pipeline (job). Would it be possible to have all pipelines pause at job "x" if they detect some pipeline already running job "x"? The jobs have the same names across all pipelines. Yes, I know this is solved by just running one pipeline at a time, but that's not what my devs want to do.
The best way to make jobs run one by one is to set demands so that the agent job runs on a specific self-hosted agent: define a user-defined capability on that agent, then require it by setting a matching demand on the agent job.
This way, the agent jobs will only run on that one agent, and builds will run one by one, each waiting until the previous one completes.
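In YAML, that looks roughly like the following; SingleLane is a hypothetical user-defined capability you would add on the chosen agent, and Default is just a placeholder pool name:

```yaml
pool:
  name: Default                 # placeholder: your self-hosted pool
  demands:
  - SingleLane -equals true     # hypothetical capability set on exactly one agent

steps:
- script: echo "Only the agent with SingleLane=true can take this job"
```

Because only one agent satisfies the demand, queued jobs from all these pipelines line up behind it and run strictly one at a time.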
Besides, you could control whether a job should run by defining approvals and checks. For example, use the Invoke REST API check to call a REST API such as Builds - List, and define the success criteria as "in-progress build count is zero"; only then does the next build start.
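The Invoke REST API check itself is configured in the UI, but a rough inline equivalent (a sketch, not the check's actual implementation) is a step that polls the Builds - List endpoint until no other run of the same definition is in progress. It assumes curl and jq exist on the agent and that the project name contains no spaces:

```yaml
steps:
- bash: |
    # Poll Builds - List until this is the only in-progress run of the definition.
    url="$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds?definitions=$(System.DefinitionId)&statusFilter=inProgress&api-version=7.0"
    while true; do
      # Count in-progress builds other than the current one.
      count=$(curl -s -u ":${SYSTEM_ACCESSTOKEN}" "$url" \
        | jq '[.value[] | select(.id != $(Build.BuildId))] | length')
      if [ "$count" -eq 0 ]; then break; fi
      echo "Another run is still in progress; waiting..."
      sleep 30
    done
  displayName: Wait for other runs to finish
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```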

Azure DevOps Pipelines - how agents in an agent pool are selected for running jobs

I have the following question on how jobs are scheduled onto agents in an Agent pool.
AzDO Job Scheduling on Agents
This pertains to how the AzDO pipeline decides which of the agents from the pool will run a job.
The expectation is that jobs will be evenly distributed across the agents in the pool. However, we are noticing that only one of the agents is repeatedly the target of job executions, which skews agent usage: the rest of the agents sit idle while jobs wait.
I examined if there are any demand/capabilities placed on the agents and there are none.
Questions:
1. What is the algorithm or job-scheduling policy used to pick the agents? Is there any default stickiness, meaning once an agent is selected from a pool, do subsequent jobs stick to the same agent?
2. Why is only a single agent out of multiple agents in a pool getting used while the rest of the agents idle?
ADO does not pick an agent. The agents "ask" ADO if there is new work for them: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#communication-with-azure-pipelines
You mention "jobs". I'm not sure if you mean the technical term of an ADO job. If so: jobs belong to a stage, and each job runs on a single agent for its whole duration; different jobs, even within the same stage, may be picked up by different agents.
I assume you are not using "Capabilities"? Otherwise that might explain the behavior you are seeing, since demands restrict which agents in the pool are eligible for a job.
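To make the unit of assignment concrete: in the sketch below (the pool name MyPool is a placeholder), JobA and JobB are requested from the pool independently, so they may land on the same agent or on two different ones, depending on which agents are free when each job is queued:

```yaml
stages:
- stage: Build
  jobs:
  - job: JobA
    pool: MyPool               # placeholder pool name
    steps:
    - script: echo "JobA ran on $(Agent.Name)"
  - job: JobB                  # queued independently of JobA
    pool: MyPool
    steps:
    - script: echo "JobB ran on $(Agent.Name)"
```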

How can I kill (not cancel) an errant Azure Pipeline run, stage, job, or task?

I want to know how to kill an Azure Pipeline task (or any level of execution - run, stage, job, etc.), so that I am not blocked waiting for an errant pipeline to finish executing or timeout.
For example, canceling the pipeline does not stop it immediately if a condition is configured incorrectly. If the condition resolves to true, the task will execute even if the pipeline is cancelled.
This is really painful if your org/project only has 1 agent. :(
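For example, a step whose condition evaluates to true even after cancellation will keep executing; a minimal sketch (the script names are hypothetical):

```yaml
steps:
- script: ./build.sh           # stops when you press Cancel
- script: ./cleanup.sh
  condition: always()          # still evaluates to true after Cancel, so this step runs anyway
```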
How can I kill (not cancel) an errant Azure Pipeline run, stage, job, or task?
For the hosted agent, we cannot kill the Azure pipeline directly, since we cannot access the machine it runs on.
As a workaround, we can reduce the time that tasks keep running after the job is cancelled by setting a shorter Build job cancel timeout in minutes:
For example, suppose a pipeline has a task that keeps running for 60 minutes after the job is cancelled. Setting Build job cancel timeout in minutes to 2 makes the pipeline cancel completely after 2 minutes.
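In YAML pipelines, the equivalent knob is cancelTimeoutInMinutes on the job; a minimal sketch (the long-running script is hypothetical):

```yaml
jobs:
- job: Build
  timeoutInMinutes: 60        # normal run-time limit
  cancelTimeoutInMinutes: 2   # after Cancel, tasks get at most 2 more minutes
  steps:
  - script: ./long_running_task.sh   # hypothetical step that ignores cancellation
    condition: always()
```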
For a private (self-hosted) agent, we can run services.msc, look for "VSTS Agent (name of your agent)", right-click the entry, and choose Restart.

How to ensure that a release stage is executed by every agent of the agent pool

I have set up a multi-agent job for an Azure Release Pipeline. There are two agents in the agent pool. The job needs to be executed by every agent in the agent pool.
The multi-agent settings schedule two agent jobs whenever a release is triggered. If both agents are idle when the deployment starts, everything works as expected and both agents execute the job. But as soon as one agent is busy at that time, the behavior becomes unexpected and both jobs are executed by the same agent consecutively.
How can I ensure that every agent of the agent pool is executing the defined agent job?
This is by design. If an agent pool has two agents, A and B, and A is busy, the build job will run on B: when some agents in a pool are busy, the pipeline runs on whichever agents are free and available.
The precondition for a parallel (multi-agent) job is that enough agents are free and available. That is why the behavior becomes unexpected as soon as one agent is busy.
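If the goal is truly "run this job once on every agent", one workaround is to pin one copy of the job to each agent by name. In the classic release UI this is a demand on each agent job; in YAML it looks roughly like this (pool name, agent names, and the deploy script are placeholders):

```yaml
jobs:
- job: DeployOnAgentA
  pool:
    name: MyPool
    demands:
    - Agent.Name -equals agent-a   # placeholder agent name
  steps:
  - script: ./deploy.sh            # hypothetical deployment script
- job: DeployOnAgentB
  pool:
    name: MyPool
    demands:
    - Agent.Name -equals agent-b
  steps:
  - script: ./deploy.sh
```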

How to launch scheduled spark jobs even if previous jobs are still executing on rundeck?

Why is Rundeck not launching scheduled Spark jobs while the previous job is still executing?
Rundeck is skipping the jobs scheduled to launch during the execution of the previous job; only after that execution completes does it launch a new job based on the schedule.
But I want a scheduled job to launch even if the previous job is still executing.
Check your workflow strategy; here is an explanation of that:
https://www.rundeck.com/blog/howto-controlling-how-and-where-rundeck-jobs-execute
You can design a workflow strategy based on "Parallel" to launch the jobs simultaneously on your node.
Here is an example using the parallel strategy with a parent job.
The example uses three jobs: Job one, Job two, and a Parent Job that uses the parallel strategy to call the other two.
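In Rundeck's job-definition YAML, such a parent job could look roughly like this (job names are taken from the example above; the group values are placeholders, and multipleExecutions is my assumption for enabling the "Multiple Executions" option so scheduled runs are not skipped):

```yaml
- name: Parent Job
  description: Launches Job one and Job two simultaneously
  multipleExecutions: true      # assumption: allow a new run even while one is active
  sequence:
    strategy: parallel          # run the referenced jobs at the same time
    keepgoing: false
    commands:
    - jobref:
        name: Job one
        group: ''
    - jobref:
        name: Job two
        group: ''
```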