defining replicas in docker-compose as integer variables - docker-compose

A VSTS release fails with an error when an integer variable is used for the Docker replicas setting.
I want to define different replica values for different environments in Azure VSTS release pipelines.
My docker-compose file has the following setting:
replicas: ${REPLICAS}
REPLICAS is defined as a VSTS build variable (set to 1) and as a release variable (set to 1 for dev and qa, and 3 for prod).
The build succeeds, but the release fails with the error:
[error]services.serviceName.deploy.replicas must be a integer
A successful release is the expected result.

An integer value cannot be defined for a VSTS release variable. The workaround is to add a Docker service scale task in the prod stage to scale the service to 3 or more containers. It is placed just after the Docker service deploy task in the release pipeline and runs the docker service scale command after the service is deployed. The command specified is: service update --replicas=$(REPLICA) $(SERVICENAME). This command in a VSTS stage task accepts the string value of REPLICA. REPLICA and SERVICENAME are both VSTS release variables.
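For reference, a minimal sketch of the compose fragment in question (the compose version, service name, and image name are illustrative, not from the original setup):

version: '3.4'
services:
  serviceName:                    # illustrative service name
    image: myregistry/myservice   # illustrative image
    deploy:
      # The release fails at this line with:
      # [error]services.serviceName.deploy.replicas must be a integer
      replicas: ${REPLICAS}

With the workaround above, the prod stage scales the service in a separate task after the deploy instead of relying on this substitution.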


Why do you specify a pool in a deployment job in Azure Pipelines?

I find it hard to grasp the concept of deployment jobs and environments in Azure Pipelines. From what I understand, a deployment job is a sequence of steps to deploy your application to a specific environment, hence the environment field.
If so, why is there also a pool definition, specifying an agent pool, in that job definition?
EDIT
What bothers me is that, from what I understand, an environment is a collection of resources that you can run your application on. You'll define some for dev, some for stage, prod, etc., and you want to run the job on those targets. So why do we need to specify an agent pool to run the deployment job on? Shouldn't it run on the resources that belong to the specified environment?
EDIT
Take this pipeline definition for example:
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      preDeploy:
        steps:
        - script: echo "Hello"
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
I have an environment called "Stage" with one virtual machine on it.
When I run it, I can see both jobs run on my VM.
The agent pool specified is NOT USED at all.
However, if I target another environment with no machines in it, it runs on an Azure Pipelines VM.
Why do you specify a pool in a deployment job in Azure Pipelines?
That's because an environment is a collection of resources that you can target with deployments from a pipeline.
In other words, it is like the machine that hosts our private agent, except that it can now be a virtual environment such as Kubernetes, a VM, and so on.
When we specify an environment, it only provides us with a target (you can think of it as a machine). However, there is no agent on that target to host the pipeline run, so we need to specify an agent pool to run the pipeline.
For example, if we execute our pipeline against our local machine, we still need to create a private agent; otherwise we only have the target environment, but no agent that hosts the pipeline run.
The environment field denotes the target environment to which your artifact is deployed. There are commonly multiple environments through which the artifacts flow, for example development -> test -> production. Azure DevOps uses this field to keep track of which versions are deployed to which environment, and so on. From the docs:
An environment is a collection of resources that you can target with
deployments from a pipeline. Typical examples of environment names are
Dev, Test, QA, Staging, and Production.
The pool is a reference to the agent pool. The agent is the machine executing the logic inside the job. For example, a deployment job might have several logical steps, such as scripts, file copying etc. All this logic is executed on the agent that comes from the agent pool. From the docs:
To build your code or deploy your software using Azure Pipelines, you
need at least one agent. As you add more code and people, you'll
eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent
is computing infrastructure with installed agent software that runs
one job at a time.
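A short sketch that illustrates the observed behaviour (environment, pool, and job names are illustrative): when only an environment name is given, the steps run on an agent from the pool; when a VirtualMachine resource is targeted, the steps run on the agent registered on that machine and the pool is effectively ignored for those steps.

jobs:
# No resource selected: the steps run on the hosted agent from the pool.
- deployment: DeployToEnvironmentOnly
  pool:
    vmImage: 'ubuntu-latest'
  environment: 'Dev'                  # illustrative environment name
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on the hosted agent"

# VirtualMachine resource targeted: the steps run on the environment's VM.
- deployment: DeployToVM
  pool:
    vmImage: 'ubuntu-latest'          # effectively unused for the deploy steps
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "runs on the VM registered in the environment"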

Restrict Azure DevOps Self Hosted Agent to run only 1 Job

We have deployed self-hosted agents on a Docker image managed by Azure Container Images. We run release jobs in parallel and hence want to restrict each agent to a single job. The agent should either restart or be deleted after completion.
In the release pipeline, you can set a demand on the job to specify a particular agent to run it.
Demands: Specify which capabilities the agent must have to run this pipeline.
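In YAML form, a demand that pins a job to one specific agent would look roughly like this (the pool and agent names are illustrative):

pool:
  name: SelfHostedPool                    # illustrative pool name
  demands:
  - Agent.Name -equals docker-agent-01    # only this agent can pick up the job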
Or you can add an additional job that runs on a hosted agent, add a PowerShell task to it, and delete the self-hosted agent that has finished running the previous job by calling the Agents - Delete REST API:
DELETE https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents/{agentId}?api-version=6.0
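A minimal sketch of that cleanup job, assuming the pool and agent IDs are known and a PAT with Agent Pools (Read & manage) scope is available as a secret variable (the job and variable names here are illustrative):

- job: RemoveAgent
  dependsOn: ReleaseJob               # illustrative name of the job that ran on the self-hosted agent
  pool:
    vmImage: 'ubuntu-latest'          # a hosted agent performs the cleanup
  steps:
  - pwsh: |
      # Call the Agents - Delete REST API shown above.
      $token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$env:ADMIN_PAT"))
      $uri = "https://dev.azure.com/$(organization)/_apis/distributedtask/pools/$(poolId)/agents/$(agentId)?api-version=6.0"
      Invoke-RestMethod -Method Delete -Uri $uri -Headers @{ Authorization = "Basic $token" }
    displayName: Delete the self-hosted agent that ran the previous job
    env:
      ADMIN_PAT: $(agentAdminPat)     # PAT with Agent Pools (Read & manage) scope; illustrative variable name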

Run Kubectl DevOps task with run-time specified resource details

We're building out a release pipeline in Azure DevOps which pushes to a Kubernetes cluster. The first step in the pipeline is to run an Azure CLI script which sets up all the resources - this is an idempotent script so we can run it each time we run the release pipeline. Our intention is to have a standardised release pipeline which we can run against several clusters, existing and new.
The final step in the pipeline is to run the Kubectl task with the apply command.
However, this pipeline task requires the names of the resource group and cluster to be specified in advance (at the time of building the pipeline). But the point of the idempotent script in the first step is to ensure that the resources exist, and to create them if not.
So there's the possibility that neither the resource group nor the cluster will exist before the pipeline is run.
How can I achieve this in a DevOps pipeline if the Kubectl task requires a resource group and a cluster to be specified at design time?
This Kubectl task works with the service connection type Azure Resource Manager, and it requires you to fill in the Resource group and Kubernetes cluster fields after you select the Azure subscription.
After testing, we find that these two fields support variables. Thus you can use variables in these two fields and use a PowerShell task to set the variable values before this Kubectl task runs. See Set variables in scripts for details.
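A rough sketch of that approach in YAML (the input names follow the Kubernetes@1 task; the service connection, resource group, and cluster values are illustrative):

- pwsh: |
    # Values produced by the idempotent provisioning script (illustrative).
    Write-Host "##vso[task.setvariable variable=resourceGroup]my-rg"
    Write-Host "##vso[task.setvariable variable=clusterName]my-aks-cluster"
  displayName: Resolve resource group and cluster at run time

- task: Kubernetes@1
  displayName: kubectl apply
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: '$(azureServiceConnection)'   # illustrative service connection
    azureResourceGroup: '$(resourceGroup)'
    kubernetesCluster: '$(clusterName)'
    command: 'apply'
    arguments: '-f manifests/'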

how to handle ECS deploys in CodePipeline for changes in task definition

I am deploying an ECS Fargate task with two containers: 1 reverse proxy nginx and 1 python server. For each I have an ECR repository, and I have a CI/CD CodePipeline set up with
CodeCommit -> CodeBuild -> CodeDeploy
This flow works fine for simple code changes. But what if I want to add another container? I can certainly update my buildspec.yml to add the building of the container, but I also need to 1) update my task definition, and 2) assign this task definition to my service.
Questions:
1) If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
2) This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
Thanks!
The CodePipeline's ECS Job Worker copies the Task Definition and updates the Image and ImageTag for the container specified in the 'imagedefinitions.json' file, then updates the ECS Service with this new TaskDef. The job worker cannot add a new container in the task definition.
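For context, the imagedefinitions.json file is typically produced in the CodeBuild post_build phase, roughly like this (the container names, repository URI variables, and tag are illustrative):

version: 0.2
phases:
  post_build:
    commands:
      # One entry per container whose image the ECS deploy action should update.
      - printf '[{"name":"nginx-proxy","imageUri":"%s"},{"name":"python-server","imageUri":"%s"}]' "$NGINX_REPO_URI:$IMAGE_TAG" "$PYTHON_REPO_URI:$IMAGE_TAG" > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json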
If I use the CLI in my CodeBuild stage to create a new task definition and associate it with my service, won't this trigger a deploy? And then my CodeDeploy will try to deploy again, so I'll end up deploying twice?
I don't think so. Is there a CloudWatch event rule that triggers CodeDeploy in such fashion?
This approach ends up creating a new task definition and updating the service on every single deploy. Is this bad? Should I have some logic to pull down the LATEST task revision and diff the JSON from CodeCommit version and only update if there is a difference?
The ECS Deploy Job worker creates a new task definition revision every time a deployment occurs, so given that this is the official behaviour, I wouldn't consider it bad as such.
I would question why you need to add new containers to your task definition at deploy time. Your task definition should, in general, be modified less often, and only the image:tag in it should change regularly, which is exactly what the ECS deploy action helps you achieve.

Executing Presto Task for QA and Production but not in Dev

I have a task that needs to run in QA and prod, but not dev. The task is to stop a clustered application. The problem is that the dev servers aren’t clustered and the task to stop the cluster fails on these servers. Is there a way to handle this?
We used to have that issue as well. When the task ran to stop the cluster, it would fail in dev:
The system cannot find the path specified
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we moved the cluster commands into variables instead of putting them directly in the task. That way the dev version of the "stop cluster" step just runs a no-op (cmd /exit), while the QA version runs the real cluster stop command.
Screenshots: task configuration, Dev Server variable group, QA Server variable group.
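A YAML sketch of the same idea, assuming two variable groups, DevServers and QAServers (names illustrative), that each define a StopClusterCommand variable:

# DevServers group:  StopClusterCommand = cmd /exit   (the no-op from the answer above)
# QAServers group:   StopClusterCommand = C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline

stages:
- stage: Dev
  variables:
  - group: DevServers
  jobs:
  - job: StopCluster
    steps:
    - script: $(StopClusterCommand)
      displayName: Stop clustered application (no-op in dev)

- stage: QA
  variables:
  - group: QAServers
  jobs:
  - job: StopCluster
    steps:
    - script: $(StopClusterCommand)
      displayName: Stop clustered application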