I currently have a release pipeline with a number of stages. In the Deployment group job I have to specify a Deployment group (which is fine), but in order to run my pipeline on a specific VM (agent) in that group I also have to specify some tags to limit the number of matching targets.
Instead of using tags in the Deployment group job I would like to use a variable to specify which VM (agent) to use. This variable could then be set when the release is created. Is this something that can be achieved in some way?
I think that the only way to do this would be through Agent capabilities.
For self-hosted agents, a capability is:
Capabilities are name-value pairs that are either automatically
discovered by the agent software, in which case they are called system
capabilities, or those that you define, in which case they are called
user capabilities.
Under Agent pools, if you select your agent you will find a Capabilities tab.
For self-hosted agents, you can define a user capability. This means the agent carries a capability that you can later check for at release time.
How do you check for that capability? In your YAML you specify:
pool:
  name: Default
  demands: SpecialSoftware # Check if SpecialSoftware capability exists
With this, only agents that meet the demands will be eligible to perform the deployment.
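If you need to check for a specific value rather than mere existence, -equals is the other supported operator. A minimal sketch, assuming the capability was given a version string as its value (1.10.0 is just an illustration):

pool:
  name: Default
  demands:
  - SpecialSoftware -equals 1.10.0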
And for anyone else who finds this answer, remember this about Microsoft-hosted agents:
Demands and capabilities apply only to self-hosted agents. When using
Microsoft-hosted agents, you select an image for the hosted agent. You
cannot use capabilities with hosted agents.
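With Microsoft-hosted agents you pick an image instead, for example:

pool:
  vmImage: ubuntu-latest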
Related
I want to use a library variable as the Azure Deployment Group name in Azure DevOps release pipelines. Is that possible, and is it good practice?
Purpose: I want to use the same pipeline and the same stage to deploy to different environments, so that if I change the deployment group name (Dev, QA, UAT) in a library variable, it will deploy to that environment.
I am afraid the Deployment Group name field doesn't support using a pipeline variable (e.g. a library variable) to define the name.
It currently only supports a hard-coded deployment group name in the Deployment group job.
As a workaround, you can add multiple deployment group jobs, each with a different deployment group name. Then you can set a condition on each job to determine which one runs, based on the library variable value.
Here is an example:
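For instance, assuming a library variable named DeploymentGroupEnv (the name is just an illustration), the custom run condition on each deployment group job could be:

Dev job:
and(succeeded(), eq(variables['DeploymentGroupEnv'], 'Dev'))
QA job:
and(succeeded(), eq(variables['DeploymentGroupEnv'], 'QA'))
UAT job:
and(succeeded(), eq(variables['DeploymentGroupEnv'], 'UAT'))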
On the other hand, I can fully understand your requirement, and I suggest that you create a suggestion ticket in the Developer Community.
I am using a Windows self-hosted agent for my Azure DevOps pipelines. Currently the pipelines are executed sequentially; if more than one pipeline is triggered from different ADO projects, the later ones have to wait in a queue for the agent. I gathered from some tutorials that we can execute pipelines in parallel if we increase the paid parallel jobs for self-hosted agents under the billing section of the Organization settings. Is my understanding correct? If so, what precautionary steps do I need to take? Do we have any control over when the pipelines are executed in parallel?
Thanks.
In order to run self-hosted parallel jobs, you need to purchase parallel jobs and register several self-hosted agents.
For parallel jobs, you can register any number of self-hosted agents in your organization. If you want to run 3 jobs in parallel, you must register at least 3 self-hosted agents in one agent pool. Azure DevOps charges based on the number of jobs you want to run at a time, not the number of agents registered. There are no time limits on self-hosted jobs. For private projects, you get one free parallel job, plus one additional job for each active Visual Studio Enterprise subscriber who is a member of your organization.
For how to purchase parallel jobs, please refer to Buy parallel jobs.
For how to control the use of parallel jobs, please refer to the following:
For classic pipelines, you can specify when to run a job through Dependencies and the Run this job option under Additional options in the agent job. The pipeline will then run in sequence according to your settings.
For YAML pipelines, you can specify the conditions under which a job should run with dependsOn and condition.
For example:
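A minimal sketch (job names are placeholders): Deploy waits for Build, so the two jobs run in sequence instead of consuming two parallel jobs at once.

jobs:
- job: Build
  pool:
    name: Default
  steps:
  - script: echo "Build runs first"

- job: Deploy
  dependsOn: Build
  condition: succeeded()  # run only after Build succeeds
  pool:
    name: Default
  steps:
  - script: echo "Deploy runs after Build"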
For more info about conditions, please refer to Specify conditions.
If you don't specify a specific order, the jobs will run in parallel based on the parallel jobs you purchased.
I don't know if my experience can help. I'll try. I started a new job and we use self-hosted TFS / Azure DevOps. I am changing our build process to create 3 product SKUs (it uses conditional compilation). Let's call them Good, Better & Best.
I edited the Build definition. First I switched to the Variables tab. I created a Process variable named SKUs and set it to Good,Better,Best. The commas are important.
Next I switched to the Tasks tab and located the agent phase; mine was called Phase 1. I selected it, and on the right, under Parallelism, I selected Multi-configuration. In the Multipliers text field I entered SKUs, and I set Maximum number of agents to 3.
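For reference, the YAML equivalent of Multi-configuration is a matrix strategy. A minimal sketch assuming the same three SKU values (the pool name and build step are placeholders):

jobs:
- job: Build
  strategy:
    matrix:
      Good:
        SKU: Good
      Better:
        SKU: Better
      Best:
        SKU: Best
    maxParallel: 3
  pool:
    name: Default
  steps:
  - script: echo "Building SKU $(SKU)"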
What I don't yet know is the TFS back-end administration and options that the company purchased beforehand.
We're using yaml pipelines with environment agents installed on local infrastructure. Each Environment is backed by a Deployment Pool which is implicitly created by AzDO. The pools reside at the org level.
Every time someone adds an environment to a pipeline, the project collection administrator has to authorize the pipeline. The devs cannot authorize the pipeline because they don't have permission at the org level. The image below shows the prompt received.
Is there any way to simplify this so the PCA is not required to authorize every on-prem pipeline?
No, there is no way to skip the permit from the PCA when a pipeline targets an environment for the first time.
In the Organization Settings and Project Settings, there are also no built-in options to permit all new environments on all pipelines by default.
This is intended to prevent environments from being abused.
I want to specify a list of agents. For example, I have agents available from agent1 to agent10 and agent40 to agent60, and from this range I would like to pick any available agent for the pipeline execution. All these agents are in the same pool.
Currently I am parameterizing the agent value and passing it at queue time as shown below:
And it is fetched in the YAML file as shown below:
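Roughly, the setup looks like this (the parameter name agentName and its default are placeholders; the demand targets a single agent by its Agent.Name):

parameters:
- name: agentName
  displayName: Agent to run on
  type: string
  default: agent1

pool:
  name: AutomationAgent
  demands:
  - Agent.Name -equals ${{ parameters.agentName }}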
Sorry, but as far as I know Azure DevOps doesn't support specifying a list of agent machines.
My understanding is supported by this note in the documentation on demands:
Checking for the existence of a capability (exists) and checking for a specific string in a capability (equals) are the only two supported operations for demands.
So exists and equals are the only two possibilities; there is nothing like an in operator.
Instead you can define a user-defined capability for the agents that you need.
For example:
If you add a "Test" capability to agent1 through agent10 and agent40 through agent60 in the AutomationAgent pool, then you can use the following demand.
pool:
  name: AutomationAgent
  demands: Test
This would be equivalent to the below, but as I said earlier, in is not available yet.
pool:
  name: AutomationAgent
  demands:
  - Agent.Name -in (agent1, agent2, agent3.... agent10, agent40.... agent60)
We currently have 4 Azure DevOps team projects that require two Deployment Groups to be created for their SIT and UAT release pipelines. All 4 team projects will share the two Deployment Groups, the idea being to create the deployment group from one team project and then share or extend it to the other 3 (which I believe is common practice).
My main concern though is that due to some budget constraints, the decision has been taken to create both SIT and UAT Deployment Groups on a single target server. Much as I strongly believe this is probably not best practice, are there any technical reasons why this cannot or shouldn't be implemented?
In simple terms, a deployment group is:
A deployment group is a logical set of deployment target machines that
have agents installed on each one. Deployment groups represent the
physical environments; for example, "Dev", "Test", "UAT", and
"Production". In effect, a deployment group is just another grouping
of agents, much like an agent pool.
Registering the same machine to multiple deployment groups is supported. However, you would need to edit the agent name in the PowerShell registration script provided in the UI, or log into the machine and execute the script in a different folder than the default one specified in the script.
Normally, we set up deployment groups with multiple agents and run the deployment only on the target agents according to requirements.
What you can do is assign tags to deployment agents and use tags to assign releases to specific agents.
In summary, you can register the same machine/server to multiple deployment groups if you insist on it.
But it is not recommended, due to server performance, environment isolation, disaster tolerance and other factors.