Why do you specify a pool in a deployment job in Azure Pipelines? - azure-devops

I find it hard to grasp the concept of deployment jobs and environments in Azure Pipelines. From what I understand, a deployment job is a sequence of steps to deploy your application to a specific environment, hence the environment field.
If so, why is there also a pool definition for agent pool for that job definition?
EDIT
What bothers me is that, from what I understand, an Environment is a collection of resources that you can run your application on. So you'll define some for dev, some for stage, prod, etc. So you want to run the job on these targets. So why do we need to specify an agent pool to run the deployment job on? Shouldn't it run on the resources that belong to the specified environment?
EDIT
Take this pipeline definition for example:
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      preDeploy:
        steps:
        - script: echo "Hello"
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
I have an environment called "Stage" with one virtual machine on it.
When I run it, I can see both the preDeploy and deploy steps run on my VM.
The agent pool specified is NOT USED at all.
However, if I target another environment with no machines in it, the job runs on a Microsoft-hosted Azure Pipelines VM.

Why do you specify a pool in a deployment job in Azure Pipelines?
That is because an environment is a collection of resources that you can target with deployments from a pipeline.
In other words, it is like the machine that hosts a private agent, except that it can now be a virtual target, such as a Kubernetes cluster or a VM.
When we specify an environment, it only provides a deployment target (you can think of it as a machine). There is not necessarily an agent installed on that target to run the pipeline, so we still need to specify an agent pool to run the job.
For example, even when we execute a pipeline against our local machine, we still need to set up a private agent; otherwise we only have the target environment, but no agent to host the pipeline run.
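A minimal sketch of that split, assuming an environment with no registered resources (the job name and environment name here are made up for illustration):

```yaml
jobs:
- deployment: DeployApp
  # The pool supplies the agent that actually executes the steps below.
  pool:
    vmImage: 'ubuntu-latest'
  # The environment is only the logical deployment target being tracked;
  # with no resourceType and no registered resources, no agent comes from it.
  environment: 'Dev'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on the pool agent, recorded against Dev"
```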

The environment field denotes the target environment to which your artifact is deployed. There are commonly multiple environments through which the artifacts flow, for example development -> test -> production. Azure DevOps uses this field to keep track of which versions are deployed to which environment, etc. From the docs:
An environment is a collection of resources that you can target with
deployments from a pipeline. Typical examples of environment names are
Dev, Test, QA, Staging, and Production.
The pool is a reference to the agent pool. The agent is the machine that executes the logic inside the job. For example, a deployment job might have several logical steps, such as scripts, file copying, etc. All of this logic is executed on the agent that comes from the agent pool. From the docs:
To build your code or deploy your software using Azure Pipelines, you
need at least one agent. As you add more code and people, you'll
eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent
is computing infrastructure with installed agent software that runs
one job at a time.

Related

Use different environment variables per deployment in GitLab

I'm trying to migrate a BitBucket pipeline to GitLab. In BitBucket we use a different set of environment variables for each deployment (staging/production etc).
I don't know how to specify this in GitLab.
I've set up just group variables and variables specific to the repository but I've not found how to override e.g. DB name for different deployments.
Thank you in advance for your help.
You can define variables and limit their scope
By default, all CI/CD variables are available to any job in a pipeline. Therefore, if a project uses a compromised tool in a test job, it could expose all CI/CD variables that a deployment job used. This is a common scenario in supply chain attacks.
GitLab helps mitigate supply chain attacks by limiting the environment scope of a variable.
GitLab does this by defining which environments and corresponding jobs the variable can be available for.
See "Scoping environments with specs" and "CI/CD variable expression"
deploy:
  script: cap staging deploy
  environment: staging
  only:
    variables:
      - $RELEASE == "staging"
      - $STAGING
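Another way to get different values per deployment, without UI-scoped variables, is to define job-level variables directly in .gitlab-ci.yml; each job targets one environment and carries its own values (the job names and the DB_NAME variable here are illustrative assumptions):

```yaml
deploy_staging:
  script: cap staging deploy
  environment: staging
  variables:
    DB_NAME: myapp_staging      # hypothetical per-environment value

deploy_production:
  script: cap production deploy
  environment: production
  variables:
    DB_NAME: myapp_production   # hypothetical per-environment value
```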

Restrict Azure DevOps Self Hosted Agent to run for 1 Job

We have deployed self-hosted agents on Docker images managed by Azure Container Instances. We run release jobs in parallel and hence want to restrict each agent to a single job. The agent should either restart or be deleted after completion.
In the release pipeline, you can set a Demand on the job to pick a specific agent to run it.
Demands: specify which capabilities the agent must have to run this pipeline.
Or you can add an additional job that runs on a hosted agent, add a PowerShell task to it, and delete the self-hosted agent that has finished running the previous job by calling the Agents - Delete REST API:
DELETE https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents/{agentId}?api-version=6.0
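A sketch of such a clean-up job in YAML, assuming the pipeline identity behind System.AccessToken has permission to manage the pool (the job names are made up, and {organization}, {poolId}, and {agentId} are placeholders you must fill in):

```yaml
- job: CleanupAgent
  dependsOn: Deploy          # hypothetical name of the job that ran on the self-hosted agent
  pool:
    vmImage: 'windows-latest'
  steps:
  - powershell: |
      # Placeholders: substitute your own organization, pool ID, and agent ID.
      $url = "https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents/{agentId}?api-version=6.0"
      Invoke-RestMethod -Uri $url -Method Delete -Headers @{
        Authorization = "Bearer $(System.AccessToken)"
      }
```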

What are the differences between an Agent Job and a Deployment Group Job in Azure DevOps?

What are the differences between an Agent Job and a Deployment Group Job in Azure DevOps? What are the reason to create one or the other?
What are the differences between an Agent Job and a Deployment Group
Job in Azure DevOps?
Agent job:
Runs steps on an agent that belongs to an agent pool.
Deployment group job:
Runs on the machines in a deployment group.
Those are their definitions. You can see that the fundamental difference between them is the target the job runs on.
An agent job can only run on one target at a time (unless you set up parallelism to run on multiple targets, but parallelism is essentially multiple jobs). A deployment group job, since a deployment group binds multiple machines into a group, can run a job on multiple machines at a time.
In terms of usage, an agent job can be used in both Build and Release pipelines, but a deployment group job can only be used in a Release pipeline, for application/project deployment.
What are the reason to create one or the other?
In a build pipeline, there is no doubt that you can only use an agent job (or an agentless job).
I think what you are concerned about is the usage in a release pipeline. As mentioned above, both job types can be used in a release pipeline, and both can deploy a project.
In terms of specific use, it depends on the tasks you will run and the number of target servers you want to deploy to.
Agent job:
If you have fewer than about 5 deployment targets and need to deploy to multiple machines at the same time, you can set up parallel agent jobs. An agent job may take a little longer than a deployment group job, but because the number of targets is small, the difference is not obvious.
Deployment group job:
For medium and large companies, there are generally more than 10, or even 100, deployment targets. A deployment group job is most appropriate here, because a single job can deploy to all the machines in the group.
In a release, I recommend a deployment group job if you have multiple targets to deploy to:
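For reference, YAML pipelines have no deployment group job; the rough equivalent is a deployment job that targets an environment containing multiple VM resources, which runs the steps on every matching machine. A sketch under that assumption (the environment name and tag are made up):

```yaml
jobs:
- deployment: DeployToFleet
  environment:
    name: 'Production'          # assumed environment with several registered VM resources
    resourceType: VirtualMachine
    tags: web                   # optional: limit the rollout to VMs carrying this tag
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on every matching VM in the environment"
```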

What is the recommended release definition for starting and stopping an Azure VM?

I'd like to enhance the release definition so that I don't need to have a separate environment that only starts an Azure VM.
If we take a scenario where we have a Test, Beta, Production environments. The client wants the application to be installed in Beta and Production on their local network. We internally want a Test environment to run E2E tests against, allow for non-technical folks to exercise the app without needing VPN access to the customer beta environment, etc.
So here we have Environment followed by where the Agent is running:
Test - Azure VM
Beta - Client machine
Production - Client machine
How we've solved this is to install the VSTS Agent on a machine at the client, which allows us to target that agent queue in the Beta and Production environments defined for that release. Then we typically build an Azure VM and target that agent queue for the Test environment.
We don't want to run that Azure VM 24/7/365. However if it's not running, then it can't respond to requests from Release Management.
What I've done is to create environments named Start Test VM and Stop Test VM that use the Azure Resource Group Deployment task to start and stop the VM. Those two additional environments can have their agent queue set to Hosted.
I'd like to figure out how to combine the first 3 environments into a logical Test instead of having to create 3 release management environments.
Start Test VM - Hosted
Test - Azure VM
Stop Test VM - Hosted
Beta - Client machine
Production - Client machine
The problem is that can be rather ugly and confusing when handing this over to one of our PM's or even myself when I circle back around 3 months later and think, "What the hell is this environment? Oh it's just there to start/stop the VM."
Options:
Stay with status quo - keep it like it is, it can't be fixed
We could open up a port on the Azure VM and use Powershell remoting. Then run on the Hosted agent or on an on-premise agent to start the VM, then deploy the application, then stop the VM. - we really dislike this because the deployment would not be the same as the client on-premise deploy. We'd like each environments' tasks to be the same, just with different variables.
You can use the "Azure PowerShell" and "Azure SQL Database Deployment" tasks to configure your Azure VM and SQL, or call another script to run on the Azure VM.
There isn't any way to set the agent per task. You can submit a feature request on VSTS User Voice for this.
Another way to reduce the number of environments: if you deploy every build linked to the release, you can add a "Start Test VM" task to your build definition to start the VM when the build succeeds, and add a "Stop Test VM" task to the "Beta" environment.
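In today's YAML pipelines the same start/deploy/stop chain can be expressed as stages, using the Azure CLI to control the VM. This is only a sketch: the stage names, service connection, resource group, and VM name are all placeholders.

```yaml
stages:
- stage: StartTestVM
  jobs:
  - job: Start
    pool: { vmImage: 'ubuntu-latest' }
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'my-service-connection'   # placeholder service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: az vm start -g my-rg -n test-vm

- stage: Test
  dependsOn: StartTestVM
  jobs: []   # deployment jobs targeting the test VM go here

- stage: StopTestVM
  dependsOn: Test
  condition: always()   # deallocate the VM even if the tests fail
  jobs:
  - job: Stop
    pool: { vmImage: 'ubuntu-latest' }
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'my-service-connection'
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: az vm deallocate -g my-rg -n test-vm
```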
What we've currently settled on is to continue with having an environment that isn't really what I would consider an environment, but more of a stage in the release pipeline that starts and/or shuts down a VM. We run that on a hosted agent so it can start the VM and make sure to check Skip artifacts download on the environment.
For a continuous integration build, we set a chain so the VM gets started, CI environment gets kicked off and then VM gets stopped. The remaining environments are then manually deployed up the chain as desired.
So here's an example:
Start CI VM
CI
Stop CI VM
Beta
Production
And here's an image of how it looks in Release Management as of 2016.06.27:
I put single quotes around 'environment' because I think I agree with this User Voice request: it's really more of a stage in the release pipeline. Much like database development, the logical and the physical don't necessarily map 1 to 1.

Executing Presto Task for QA and Production but not in Dev

I have a task that needs to run in QA and prod, but not in dev. The task stops a clustered application. The problem is that the dev servers aren't clustered, so the task to stop the cluster fails on those servers. Is there a way to handle this?
We used to have that issue as well. When the task ran to stop the cluster, it would fail in dev:
The system cannot find the path specified
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we moved the cluster commands into variables instead of putting them directly in the task. That way the dev version of "stopping the cluster" is just a no-op: cmd /exit. The QA version runs the real cluster stop command.
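Conceptually, the task stays identical across environments and only the variable value changes. A sketch of the idea (the variable name ClusterStopCommand is an assumption; the screenshots below show the actual setup):

```yaml
# Task definition, shared by every environment:
steps:
- script: $(ClusterStopCommand)   # hypothetical variable name

# Dev Server Variable Group:
#   ClusterStopCommand: cmd /exit   # no-op, since dev is not clustered
# QA Server Variable Group:
#   ClusterStopCommand: C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
```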
Task:
Dev Server Variable Group:
QA Server Variable Group: