Azure Pipeline Parallelism option - azure-devops

I am new to Azure Pipelines, have just started learning, and am in the process of creating my very first YAML pipeline.
My project is private. I am using a multi-stage, templated, self-hosted pipeline, as I need to concurrently deploy a Java web application to 7 VMs using the mvn tomcat7 plugin's run: command, so that I can run Selenium automation tests in parallel across all the VMs. The template pipeline, which is called 7 times to deploy to all the VMs, needs to stay running because of the embedded Tomcat instance on each VM. That in turn requires parallelism to be enabled, which I would have to pay extra for.
My question is: is there another way to achieve this without having to pay extra for parallelism or making my project public?

I think what you want is parallel jobs. Only jobs can execute publish tasks in parallel.
According to this document, you can use parallel jobs for free when you change your project to public, and each job can run for up to 360 minutes (6 hours).
What you need to do is go to Project Settings --> Overview and change Visibility to Public.
After that, in the pipeline, add the publish task under each new agent job, so that the publish tasks execute in parallel.
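As a rough sketch of how the template fan-out might look in YAML (the template file name deploy-and-test.yml, its vmName parameter, and the VM names are hypothetical), each template expansion becomes its own job, and independent jobs only run concurrently when enough parallel jobs and agents are available:

# azure-pipelines.yml (sketch): one job per VM; the jobs have no dependencies on
# each other, so they can run in parallel when parallel jobs/agents are available
jobs:
- template: deploy-and-test.yml   # hypothetical job template with the tomcat7:run and Selenium steps
  parameters:
    vmName: 'vm01'
- template: deploy-and-test.yml
  parameters:
    vmName: 'vm02'
# ...repeated for the remaining VMs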

Related

Github Action avoid approval on same environment rule within same workflow

Reusing same environment rule within same workflow
Running our workflow in GitHub, we split our tasks up into 2 jobs: building the Docker image and attaching tags, and deploying to AWS using CodeDeploy. The reason for splitting the tasks up is to avoid creating new tags whenever our deployment fails.
However, using environment protection rules creates a roadblock, as every job needs to be approved (even though we already ran against the same environment previously).
The deployment job is a conditional job, meaning it depends on the success of the build job.
Is there any way to get around this?
Github workflow
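For reference, a minimal sketch of the kind of workflow being described (the job names, the production environment name, and the individual steps are assumptions). Approval is requested per job that references the protected environment, so attaching the environment only to the deploy job limits the approval to that job:

# .github/workflows/build-and-deploy.yml (sketch)
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .   # build and tag the image

  deploy:
    needs: build                  # only runs if the build job succeeds
    runs-on: ubuntu-latest
    environment: production       # protected environment; approval is requested for this job
    steps:
      - run: echo "start the CodeDeploy deployment here"   # placeholder for the CodeDeploy step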

Can an Azure Pipeline trigger a second pipeline, run as a different user?

I'm running Azure pipelines on a Windows self-hosted agent. One of my pipelines can do both a 32-bit build and a 64-bit build. I want to use the matrix and maxParallel capabilities to do both builds at once, on the same agent, to save time.
This isn't possible, because the 32-bit build and the 64-bit build both write to the registry, and whichever gets there second errors out.
The obvious solution is to get a second Azure VM and run a second self-hosted agent on that VM. But I want to see if I can run the two build tasks as two different users, on the theory that they will then write to their own HKCU and not clobber each other.
This would require the default pipeline to trigger a second pipeline, or perhaps run a template, and run it as a different user.
Can this be done?
OTHER USEFUL INFO:
On an Azure DevOps skill-level scale of Beginner-Intermediate-Expert, I'm smack in the middle of Intermediate. Still learning.
The build step uses the built-in VSBuild task.
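For reference, a minimal sketch of the matrix/maxParallel setup being described (the pool name, solution path, and configuration value are assumptions):

jobs:
- job: Build
  pool:
    name: 'Default'               # hypothetical self-hosted pool
  strategy:
    matrix:
      x86:
        buildPlatform: 'x86'
      x64:
        buildPlatform: 'x64'
    maxParallel: 2                # try to run both builds at the same time
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: '$(buildPlatform)'
      configuration: 'Release'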
You can trigger a second pipeline (e.g. by using the Trigger Build Task), but pipelines don't have a concept of running as a user - they run on an agent. That agent runs as a specific user, and it would be tricky to try to execute code as a different user.
Running a second self-hosted agent is a good direction. You don't necessarily need another VM - you could run another agent on the same machine, but as a different user and with a different work directory.
You could use agent capabilities and demands to fine-tune which kind of build runs on which agent, as in the sketch below.
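A rough sketch of how demands could route a job to a particular agent (the pool name and the BuildUser capability are assumptions; the capability would be added manually to the agent registered under the second user account):

pool:
  name: 'Default'                 # hypothetical self-hosted pool containing both agents
  demands:
  - BuildUser -equals builder64   # user-defined capability set on the second agent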

Azure Devops Pipeline: Possible to cache task container?

I'm setting up a multi-stage Azure Devops yaml pipeline for a .Net Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My Task looks like so...
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However the AWSPowerShellModuleScript tasks are quite slow to initialise. The powershell itself is very fast, but the task requires approximately 1.5 minutes to setup.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container, and then installing the AWS Powershell tools, every time it needs to run one of these tasks.
This seems to be very wasteful and inefficient, so I was wondering if there might be some better way to handle it, for example, caching the built container after the powershell tools are installed, or use an existing image with the tools already installed etc.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it bothers me that this repetitive install process is re-run every time I need to run a simple PowerShell script.
Also I should mention that I'm using a hosted Devops Agent... vmImage: 'windows-2019'
Just in case it helps. This is from the task log output...
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name Version Source Summary
---- ------- ------ -------
nuget 2.8.5.208 https://onege... NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses nuget to install it??
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
When you use a Microsoft-hosted agent, each pipeline run gets a fresh virtual machine, so the tool needs to be installed in every run.
A stage is one or more jobs, which are units of work assignable to the same machine. With Microsoft-hosted agents, each stage generally uses a separate agent, so the tool will be installed in each stage.
In short, a Microsoft-hosted agent is not able to cache tools between runs. To pre-install the tool, or at least avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
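As a minimal sketch (the pool name is an assumption, and AWS Tools for Windows PowerShell would need to be installed on the agent machine beforehand), pointing the same task at a self-hosted pool lets the task's install check, shown in the log above, find the module and skip the download:

pool:
  name: 'SelfHostedWindows'       # hypothetical pool of agents with AWS Tools pre-installed

steps:
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'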

During a release, how to get a list of server names deployed to from a deployment group in a task to use in another job?

What is the way to get a list of server names that were deployed to so they can be used in another job with a different agent in the same deployment pipeline?
We have a number of servers in a deployment group that get deployed to. We would like to point an automated test server at each of these environments to confirm the deployment went correctly. Therefore we need a list of the servers that were deployed to.
Since the list of servers could grow or shrink we can't hard code all the servers to a variable.
As a workaround we created a Powershell step to call the REST API to get the deployment group machine details. However, we would like to achieve this using variables / outputs etc in the Azure Devops interface.
One thing to be aware of is that variables you might set by command do not persist between phases. If you want to know the deployment servers that were deployed during a phase, you will need to find those during the test agent phase you are executing.
I think you answered your own question, though. I believe most of the answers you get will be to use the API to get the information you are after. That being said, the only real sure-fire way, I think, would be for you to add a step to the deployment group phase and let it run the tests on the deployment server.
Not the cleanest solution, but you could also have the deployment group trigger a build definition passing the server name. The build task would just have the testing portion that you want to run. You could have that release step depend on the completion/status of the build definition.
Some features to keep in mind when implementing whatever you decide:
Automatically deploy to new targets in a deployment group
Deploy to failed targets in a Deployment Group
From what I can see, there is no easy way to get at what you want. As per designer documentation:
"When you specify multiple jobs in a build pipeline, they run in parallel by default. You can specify the order in which jobs must execute by configuring dependencies between jobs. Job dependencies are not yet supported in release pipelines. Multiple jobs in a release pipeline run in sequence."
I would imagine this is due to the added complexity inherent in allowing jobs to be run on x number of machines.
The YAML documentation doesn't seem to make the same distinction, but I think it is still a "not yet" feature, as YAML release pipelines as a whole seem to be a roadmap item.
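For what it's worth, in YAML pipelines (where job dependencies are supported) the usual mechanism for handing a value such as a server list from one job to the next is a job output variable; a minimal sketch, with the job names and the hard-coded server list purely illustrative:

jobs:
- job: Deploy
  steps:
  - script: echo "##vso[task.setvariable variable=serverList;isOutput=true]web01,web02"
    name: setServers              # the step name is needed to reference the output below

- job: Test
  dependsOn: Deploy
  variables:
    serverList: $[ dependencies.Deploy.outputs['setServers.serverList'] ]
  steps:
  - script: echo "Running tests against $(serverList)"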

Is it possible to have Jenkins workflows with an overlapping/shared stage?

This question concerns use of the Jenkins Workflow plugin and "synchronizing" a stage amongst independent jobs.
We have a generic workflow for multiple projects with steps:
build project
push project to test environment
run (long) end-to-end test suite
push project to production
Step 3 runs a long time. If multiple projects are built and pushed to the test environment within the same window of time, we'd like to run the end-to-end test suite only once.
Can we have the jobs somehow synchronize on step 3?
The desired orchestration can be achieved by making step 3 a build action, i.e.
build end-to-end-tests
Where end-to-end-tests is a job dedicated to running the slow end-to-end tests.
Adding a Quiet period to end-to-end-tests supports the goal of "collecting" projects updated within a time window for end-to-end testing. That is, if projects A and B are pushed to the test environment within the quiet period (in seconds), then end-to-end-tests runs only once.
JENKINS-30269 might be helpful, but your use case is indeed subtly different from the usual one that RFE would solve; you really seem to need a cross-job stage, which is not currently possible though in principle such a step could be written. In the meantime, a downstream deployment job is probably the most reasonable workaround.