Executing Presto Task for QA and Production but not in Dev - deployment

I have a task that needs to run in QA and prod, but not dev. The task is to stop a clustered application. The problem is that the dev servers aren’t clustered and the task to stop the cluster fails on these servers. Is there a way to handle this?

We used to have that issue as well. When the task ran to stop the cluster, it would fail in dev:
The system cannot find the path specified
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we can move the cluster commands into variables instead of putting them directly in the task. That way the Dev version of the "stop cluster" command is just a no-op (cmd /c exit), while the QA version runs the real cluster stop command.
(The original answer includes screenshots of the Task, the Dev Server Variable Group, and the QA Server Variable Group.)
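In YAML form, the same idea might look roughly like the sketch below; the variable name StopClusterCommand is an assumption for illustration, with each variable group supplying its own value:

# Dev Server variable group (assumed): StopClusterCommand = cmd /c exit
# QA Server variable group (assumed):  StopClusterCommand = C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
steps:
- task: CmdLine@2
  displayName: 'Stop cluster (no-op on Dev servers)'
  inputs:
    script: '$(StopClusterCommand)'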

Related

Why do you specify a pool in a deployment job in Azure Pipelines?

I find it hard to grasp the concept of deployment jobs and environments in Azure Pipelines. From what I understand, a deployment job is a sequence of steps to deploy your application to a specific environment, hence the environment field.
If so, why is there also an agent pool definition for that job?
EDIT
What bothers me is that, from what I understand, an Environment is a collection of resources that you can run your application on. So you'll define some for dev, some for stage, prod, etc. So you want to run the job on these targets. So why do we need to specify an agent pool to run the deployment job on? Shouldn't it run on the resources that belong to the specified environment?
EDIT
Take this pipeline definition for example:
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      preDeploy:
        steps:
        - script: echo "Hello"
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
I have an environment called "Stage" with one virtual machine on it.
When I run it, I can see both jobs run on my VM, and the agent pool I specified is NOT USED at all.
However, if I target another environment with no machines registered to it, the job runs on a Microsoft-hosted Azure Pipelines VM.
Why do you specify a pool in a deployment job in Azure Pipelines?
That is because an environment is a collection of resources that you can target with deployments from a pipeline.
In other words, it is like the machine that hosts our private agent, except that it can now be a virtual environment, such as K8s, a VM and so on.
When we specify an environment, it only gives us the target environment (you can think of it as a machine). If there is no agent installed on that environment to run the pipeline, we still need to specify an agent pool to run it.
For example, when we execute our pipeline against our local machine, we still need to create a private agent; otherwise we only have the target environment, but no agent to host the running pipeline.
The environment field denotes the target environment to which your artifact is deployed. There are commonly multiple environments through which the artifacts flow, for example development -> test -> production. Azure DevOps uses this field to keep track of which versions are deployed to which environment. From the docs:
An environment is a collection of resources that you can target with
deployments from a pipeline. Typical examples of environment names are
Dev, Test, QA, Staging, and Production.
The pool is a reference to the agent pool. The agent is the machine executing the logic inside the job. For example, a deployment job might have several logical steps, such as scripts, file copying etc. All this logic is executed on the agent that comes from the agent pool. From the docs:
To build your code or deploy your software using Azure Pipelines, you
need at least one agent. As you add more code and people, you'll
eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent
is computing infrastructure with installed agent software that runs
one job at a time.
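To make the relationship concrete, here is a minimal sketch (the environment and pool names are assumptions): because the environment below has no resources registered to it, the steps execute on an agent from the specified pool; if the job instead targeted a registered VM resource, it would run on that VM's agent and the pool entry would effectively not be used, which matches the behaviour observed in the question.

jobs:
- deployment: DeployWeb
  displayName: deploy Web App
  # The pool supplies the agent that hosts the job's steps.
  pool:
    vmImage: 'ubuntu-latest'
  # 'EmptyStage' is an assumed environment with no VMs registered, so it is
  # only used to record the deployment; the steps run on the hosted agent above.
  environment: 'EmptyStage'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Runs on an agent from the pool, deployment tracked on EmptyStage"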

Cannot run ASP.NET Core Web API on Azure DevOps deployment group (self-hosted)

I'm working on a simple deployment pipeline with Azure DevOps. I created a deployment pipeline running on a self-hosted Ubuntu deployment group.
The pipeline looks like this:
Download artifacts from CI pipeline (created with dotnet publish)
Stop running deployment
Unzip the ASP.NET Core Web API to the deployment directory
Run new deployment with dotnet MyApp.dll
The first two steps work as expected. However, when the dotnet MyApp.dll command is run, the process runs for 10 seconds with the following "error" message printed at the end:
The STDIO streams did not close within 10 seconds of the exit event from process '/usr/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
The deployment task is reported as successful despite the message, and the app is not running. I tried to work around this behaviour by using nohup with & and redirecting the command output. After some research I found that all processes started by a pipeline agent are stopped after the agent's work is done, meaning this behaviour is intended and my understanding of Azure deployments/agents was wrong.
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
You are already on the right track.
All the processes launched in the pipeline are finished/cleaned up in the "Finalize Job" step when the pipeline is over.
If you don't want the process to be closed, try setting the variable Process.Clean = false, which stops the "Finalize Job" step from killing all processes.
But on the next run of the pipeline, you will need to stop the app before starting it again.
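A rough YAML sketch of that approach, assuming the published output sits in an artifact folder named drop and the process is MyApp.dll (both assumptions); Process.Clean is the variable the answer above refers to:

variables:
  # Keeps the agent's "Finalize Job" step from killing processes started below.
  Process.Clean: 'false'

steps:
- script: |
    # Stop any instance left over from the previous deployment (assumed process name).
    pkill -f 'dotnet MyApp.dll' || true
    # Detach from the agent's STDIO streams so the job can finish cleanly.
    nohup dotnet MyApp.dll > app.log 2>&1 &
  displayName: 'Start the API detached from the agent'
  workingDirectory: '$(Pipeline.Workspace)/drop'   # assumed artifact location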

VSTS/DevOps BizTalk automatic deployment to multiple servers in BizTalk Group (BizTalk Server Application Deployment)

We are thinking of moving from BTDF to the new VSTS automatic deployment mechanism.
In my test setup, deploying to a single-node BTS server worked just fine, but I wonder how it is done with a BizTalk group that has multiple servers.
In BTDF, the .msi had to be run on all nodes (once with 'this is the first server in the group' checked) in order to create the application once, and on the other nodes to just install and GAC the resources...
Is this being done automatically by the 'Deploy BizTalk Server Application' deployment task or do I have to run it once with 'Create new BizTalk Server application' and on the other servers with 'Install BizTalk Server Application' set?
If yes, do I simply run it on the deployment agent of the node with the management db or would I deploy to a deployment group/environment resource group containing all nodes?
You must run the task with "Deploy..." to import the application and install it to the GAC on a primary server (any one of your servers). This deployment creates a share with the full MSI. Then run the deploy task with "Install..." to install only to the GAC on the secondary servers. I have set up a CI/CD pipeline; here is what I created (a farm of 3 servers):
Create a deployment group with the 3 servers (one agent per server).
Tag one server as primary.
Tag the other servers as secondary.
In the pipeline, add 2 jobs: one that runs only on the primary server, filtering on the primary tag, and a second that filters only on the secondary ones.
In the first job, the deploy task runs to import into the BizTalk DB and run the MSI; the second job only runs the MSI on the secondaries.
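A hedged YAML sketch of that split, assuming an environment named BizTalkGroup whose VMs carry primary/secondary tags (the answer describes the classic release designer, and the actual BizTalk task inputs are omitted because they depend on the extension version):

jobs:
- deployment: DeployPrimary
  displayName: 'Import application on the primary node'
  environment:
    name: BizTalkGroup          # assumed environment name
    resourceType: VirtualMachine
    tags: primary               # the node tagged as primary
  strategy:
    runOnce:
      deploy:
        steps:
        # Run the "Deploy BizTalk Server Application" task here to import into
        # the management DB, GAC the resources and create the MSI share.
        - script: echo "Deploy BizTalk Server Application (primary)"

- deployment: InstallSecondaries
  dependsOn: DeployPrimary
  displayName: 'Install on the secondary nodes'
  environment:
    name: BizTalkGroup
    resourceType: VirtualMachine
    tags: secondary
  strategy:
    runOnce:
      deploy:
        steps:
        # Run the "Install BizTalk Server Application" task here to GAC the
        # resources from the shared MSI on each secondary node.
        - script: echo "Install BizTalk Server Application (secondary)"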
I hope you have already visited Microsoft's documentation page Configure automatic deployment with Visual Studio Team Services in BizTalk Server. Please see Provision deployment groups to create a deployment group of multiple servers (this is part of Step 2). Once the deployment groups are created, make use of them in the Release phase of the CI/CD pipeline, as shown in the GIF image "Release Pipeline BTS Deploy Groups" linked in the original answer.
For reference, the documentation's "Get started" steps are:
Step 1: Add Application project & update .json template
Step 2: Create the VSTS token & install the build agent
Step 3: Create the build and release definitions

'Fail on Standard Error' option ignored during deployment

I've recently had an issue using Azure DevOps pipelines when deploying a release.
I'm using a set of tasks to take a VM snapshot, install the application, and start services on multiple WebSphere application servers in parallel, and then delete the VM snapshots if the release is successful.
The 'delete snapshot' step is marked to run 'only when all previous jobs have succeeded', and the application install step has 'Fail on Standard Error' checked in the task's Advanced options.
The release deployed successfully to one server and failed on another because the service didn't start. Both checks were then ignored and the snapshot was deleted.
How can I get the Pipeline to fail when any one node has failed, instead of all of them?

What is the recommended release definition for starting and stopping an Azure VM?

I'd like to enhance the release definition so that I don't need to have a separate environment that only starts an Azure VM.
Take a scenario where we have Test, Beta, and Production environments. The client wants the application installed in Beta and Production on their local network. We internally want a Test environment to run E2E tests against, to let non-technical folks exercise the app without needing VPN access to the customer's Beta environment, etc.
So here we have Environment followed by where the Agent is running:
Test - Azure VM
Beta - Client machine
Production - Client machine
How we've solved this is to install the VSTS Agent on a machine at the client, which allows us to target that agent queue in the Beta and Production environments defined for that release. Then we typically build an Azure VM and target that agent queue for the Test environment.
We don't want to run that Azure VM 24/7/365. However if it's not running, then it can't respond to requests from Release Management.
What I've done is create environments named 'Start Test VM' and 'Stop Test VM' that use the Azure Resource Group Deployment task to start and stop the VM. Those 2 additional environments can have their agent queue set to Hosted.
I'd like to figure out how to combine the first 3 environments into a logical Test instead of having to create 3 release management environments.
Start Test VM - Hosted
Test - Azure VM
Stop Test VM - Hosted
Beta - Client machine
Production - Client machine
The problem is that can be rather ugly and confusing when handing this over to one of our PM's or even myself when I circle back around 3 months later and think, "What the hell is this environment? Oh it's just there to start/stop the VM."
Options:
Stay with status quo - keep it like it is, it can't be fixed
We could open up a port on the Azure VM and use PowerShell remoting, then run on the Hosted agent or on an on-premises agent to start the VM, deploy the application, and stop the VM. We really dislike this because the deployment would not be the same as the client's on-premises deploy. We'd like each environment's tasks to be the same, just with different variables.
You can use the "Azure PowerShell" and "Azure SQL Database Deployment" tasks to configure your Azure VM and SQL, or call another script to run on the Azure VM.
There isn't any way to set the agent per task. You can submit a feature request on VSTS User Voice for this.
Another way to reduce the number of environments: if you deploy every build linked to the release, you can add a "Start Test VM" task to your build definition to start the VM when the build succeeds, and add a "Stop Test VM" task to the "Beta" environment.
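As a hedged example, a "Start Test VM" step could be an Azure PowerShell task like the sketch below (the service connection, resource group and VM names are assumptions; a matching "Stop Test VM" step would call Stop-AzVM with -Force):

steps:
- task: AzurePowerShell@5
  displayName: 'Start Test VM'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # assumed service connection
    ScriptType: 'InlineScript'
    Inline: |
      # Resource group and VM name are placeholders for illustration.
      Start-AzVM -ResourceGroupName 'rg-test' -Name 'vm-test'
    azurePowerShellVersion: 'LatestVersion'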
What we've currently settled on is to continue with having an environment that isn't really what I would consider an environment, but more of a stage in the release pipeline that starts and/or shuts down a VM. We run that on a hosted agent so it can start the VM and make sure to check Skip artifacts download on the environment.
For a continuous integration build, we set a chain so the VM gets started, CI environment gets kicked off and then VM gets stopped. The remaining environments are then manually deployed up the chain as desired.
So here's an example:
Start CI VM
CI
Stop CI VM
Beta
Production
And here's how it looks in Release Management as of 2016-06-27 (screenshot not included here).
I put single quotes around 'environment' because I think I agree with this User Voice request in that it's really more of a stage in the release pipeline. Much like database development, the logical and the physical don't necessarily map 1 to 1.
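For reference, here is a rough YAML-stages equivalent of the Start CI VM -> CI -> Stop CI VM chain described above (the original used classic Release Management environments; the pool, service connection and resource names are assumptions):

stages:
- stage: StartCIVM
  jobs:
  - job: Start
    pool:
      vmImage: 'ubuntu-latest'      # hosted agent, so it can run while the VM is off
    steps:
    - checkout: none                # equivalent of "Skip artifacts download"
    - task: AzurePowerShell@5
      inputs:
        azureSubscription: 'my-azure-service-connection'   # assumed
        ScriptType: 'InlineScript'
        Inline: Start-AzVM -ResourceGroupName 'rg-ci' -Name 'vm-ci'
        azurePowerShellVersion: 'LatestVersion'

- stage: CI
  dependsOn: StartCIVM
  jobs:
  - job: DeployAndTest
    pool: 'CI-VM-Pool'              # assumed self-hosted pool on the Azure VM
    steps:
    - script: echo "Deploy the app and run E2E tests here"

- stage: StopCIVM
  dependsOn: CI
  condition: always()               # stop the VM even if the CI stage failed
  jobs:
  - job: Stop
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - checkout: none
    - task: AzurePowerShell@5
      inputs:
        azureSubscription: 'my-azure-service-connection'
        ScriptType: 'InlineScript'
        Inline: Stop-AzVM -ResourceGroupName 'rg-ci' -Name 'vm-ci' -Force
        azurePowerShellVersion: 'LatestVersion'

# Beta and Production stages targeting the client machines would follow here.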