I'm setting up a multi-stage Azure DevOps YAML pipeline for a .NET Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My task looks like this...
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However, the AWSPowerShellModuleScript tasks are quite slow to initialise. The PowerShell itself is very fast, but the task requires approximately 1.5 minutes to set up.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time-consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container, and then installing the AWS PowerShell tools, every time it needs to run one of these tasks.
This seems to be very wasteful and inefficient, so I was wondering if there might be some better way to handle it, for example, caching the built container after the powershell tools are installed, or use an existing image with the tools already installed etc.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it just bothers me that this is re-running this repetitive install process every time I need to run a simple PowerShell script.
Also, I should mention that I'm using a hosted DevOps agent... vmImage: 'windows-2019'
Just in case it helps. This is from the task log output...
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name      Version      Source            Summary
----      -------      ------            -------
nuget     2.8.5.208    https://onege...  NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses NuGet to install it??
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
Using a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, so the tool needs to be installed on each run.
A stage is one or more jobs, which are units of work assignable to the same machine. With a Microsoft-hosted agent, each stage generally uses a separate agent, so the tool will be installed in each stage.
In short, a Microsoft-hosted agent is not able to cache tools between runs. To pre-install the tool, or avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
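For example, here is a minimal sketch of a warm-up step you could add once you're on a self-hosted agent. Because the machine persists between runs, the Install-Module call only does real work the first time; treat the exact module name and scope as assumptions to verify against the task's documentation.

steps:
# On a self-hosted agent the module survives between runs, so the
# install below only happens once; later runs skip past the check.
- powershell: |
    if (-not (Get-Module -ListAvailable -Name AWSPowerShell)) {
      Install-Module -Name AWSPowerShell -Scope CurrentUser -Force
    }
  displayName: 'Ensure AWS Tools for Windows PowerShell is installed'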
Related
I'm running Azure pipelines on a Windows self-hosted agent. One of my pipelines can do both a 32-bit build and a 64-bit build. I want to use the matrix and maxParallel capabilities to do both builds at once, on the same agent, to save time.
This isn't possible, because the 32-bit build and the 64-bit build both write to the registry, and whichever gets there second errors out.
The obvious solution is to get a second Azure VM and run a second self-hosted agent on that VM. But I want to see if I can run the two build tasks as two different users, on the theory that they will then write to their own HKCU and not clobber each other.
This would require the default pipeline to trigger a second pipeline, or perhaps run a template, and run it as a different user.
Can this be done?
OTHER USEFUL INFO:
On an Azure DevOps skill-level scale of Beginner-Intermediate-Expert, I'm smack in the middle of Intermediate. Still learning.
The build step uses the built-in VSBuild task.
You can trigger a second pipeline (i.e. by using the Trigger Build Task), but pipelines don't have a concept of running as a user - they run on an agent. That agent runs as a specific user, and it would be tricky to try to execute code as a different user.
Running a second self-hosted agent is a good direction. You don't necessarily need another VM - you could run another agent on the same machine, but as a different user and with a different work directory.
You could use agent capabilities and demands to fine tune which kind of build runs on which agent.
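For instance, a rough sketch of the two-agents-with-demands approach; the pool name and the BuildArch capability are assumptions (you would set the capability on each agent when registering it, under the account it should run as):

jobs:
- job: Build32
  pool:
    name: 'Default'              # assumed self-hosted pool name
    demands:
    - BuildArch -equals x86      # routes to the agent running as user A
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: 'x86'
- job: Build64
  pool:
    name: 'Default'
    demands:
    - BuildArch -equals x64      # routes to the agent running as user B
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: 'x64'

Because each demand resolves to a different agent, the two jobs can run at the same time without clobbering each other's HKCU.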
I am new to Azure Pipelines, started learning, and am in the process of creating my very first YAML pipeline.
My project is private. I am using a multi-stage templated pipeline with self-hosted agents, because I need to concurrently deploy a Java web application to 7 VMs using the mvn tomcat7 plugin's run: command,
so as to run Selenium automation tests in parallel across all the VMs. The template pipeline, which is called 7 times to deploy to all the VMs, needs to stay running,
as necessitated by the embedded Tomcat instance on each of the VMs; this in turn requires parallelism to be enabled, which you have to pay extra to achieve.
My question is: is there another way to do this without having to pay extra for parallelism or turning my project into a public one?
I think what you want is parallel jobs. Only jobs can execute publish tasks in parallel.
As described in this document, you can use parallel jobs for free when you change your project to public, and each job can run for up to 360 minutes (6 hours).
What you need to do is go to Project Settings --> Overview and change Visibility to Public.
After that, in your pipeline, add the publish task to a new agent job for each VM, so that the publish tasks can execute in parallel, as sketched below.
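A rough sketch of the shape this could take; the pool name and the template file name are assumptions, and the jobs only actually run concurrently once enough parallel jobs are available (e.g. after the project is public):

jobs:
- job: DeployVM1
  pool: 'Default'                    # assumed self-hosted pool
  steps:
  - template: deploy-and-test.yml    # hypothetical steps template
    parameters:
      targetVm: 'vm-01'
- job: DeployVM2
  pool: 'Default'
  steps:
  - template: deploy-and-test.yml
    parameters:
      targetVm: 'vm-02'
# ...repeat for the remaining five VMs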
I have a nodejs web application that I build in Azure Pipelines. I am planning to deploy the generated artifacts on an Azure VM (probably a DevTest Labs VM), as part of one of the pipeline steps.
I want to now run browser tests by pointing the browser to the hosted URL in the Azure VM. I want to use the Azure windows and linux VMs in a build pipeline to run the tests on this remote Azure VM and publish the results to the pipeline. These would be karma tests essentially running on the nodejs server.
In my current design, the test results are going to be available on the Azure VM hosting the nodejs application.
What I don't understand is how I can get these test results back to the Azure pipeline for publishing.
Is there a way I can architect this solution without having to set up my Azure VM as a pipeline agent in Azure DevOps?
Is there a standard pattern for designing such continuous test infrastructure using Azure DevOps?
Thanks
According to your description, you just want to use a Microsoft-hosted agent to access a URL served from your self-hosted machine (whether it's an Azure VM or your own physical machine makes no difference to the hosted agent).
Whether that works depends on whether the URL is accessible over the public internet.
The simplest solution here is to deploy your build agent on that Azure VM directly, then run the build and tests there. You can do this with the following script and tasks (see the sketch after the list):
- run ng test or any command to launch your tests
- publish test results with the PublishTestResults task
- publish code coverage results with the PublishCodeCoverageResults task
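A minimal sketch of those steps, assuming karma is configured to emit JUnit XML (e.g. via karma-junit-reporter) and Cobertura coverage output; the file patterns are assumptions to match against your reporter configuration:

steps:
- script: npm test                         # or 'ng test --watch=false'
  displayName: 'Run karma tests'
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TESTS-*.xml'     # assumed reporter output path
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/cobertura-coverage.xml'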
The Microsoft-hosted agent pool will not work for every scenario. For many teams it is the simplest way to run jobs, so you can try it first and see if it works for your build or deployment; if not, you can use a self-hosted agent. Self-hosted agents give you more control over your builds, tests and deployments.
In your scenario, setting up your Azure VM as a pipeline agent and running the build/tests on it should be the simplest and most convenient solution.
I have a requirement to integrate the JMeter scripts, checked into a Git repository, with a DevOps pipeline so that I can run the JMeter scripts using a specific VM in Azure.
Basically, I should have all my jmxs and csvs in a git repository and when I run the pipeline, having a parameter of the script name, it should run the script on a specific VM (not with a static IP) and copy the jtl in some storage.
What is the best way to achieve this?
With a DevOps pipeline so that I can run the JMeter scripts using a specific VM in Azure. What is the best way to achieve this?
If the specific VM exists before the current pipeline runs, you can consider installing a self-hosted agent there.
To do CI/CD using Azure Pipelines, we need at least one agent. If we use a Microsoft-hosted agent, it provides one fresh VM for us to run jobs on. Since you need to run the script on your own specific VM, I suggest using a self-hosted agent. You can follow the steps here to install an agent on your own VM (the steps are quite easy and only take a few minutes).
After making your VM a self-hosted agent, the pipeline will call your VM to run the jobs. Now your original issue turns into how to run JMeter locally from the command line. See similar issues here: Five Ways To Launch a JMeter Test without Using the JMeter GUI and Run .jmx file through command line ....
1. So now we can use a command-line task in the pipeline to run the JMeter commands shared in the similar topics above. These jobs are done on your specific VM.
2. I'm not sure which location you want to copy the .jtl to, but you can use the Azure File Copy task to copy files to Microsoft Azure storage blobs or virtual machines (VMs), or a simple copy/xcopy command in your command-line task to copy files to another location on the same machine (the specific VM). A sketch combining both steps follows.
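A rough sketch under a few assumptions: JMeter is already installed and on the PATH of the self-hosted VM, the script name arrives as a pipeline parameter, and the service connection, storage account, and container names are placeholders to replace with your own:

parameters:
- name: scriptName
  type: string
  default: 'test-plan.jmx'             # hypothetical default

steps:
# Run the test in non-GUI mode and write results to a .jtl file
- script: jmeter -n -t ${{ parameters.scriptName }} -l results.jtl
  displayName: 'Run JMeter test'
# Copy the .jtl to blob storage
- task: AzureFileCopy@4
  inputs:
    SourcePath: 'results.jtl'
    azureSubscription: 'My Azure Service Connection'   # assumed connection name
    Destination: 'AzureBlob'
    storage: 'mystorageaccount'                        # assumed storage account
    ContainerName: 'jmeter-results'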
Hope all above helps :)
I have used the following task in an Azure CD pipeline.
The "Run Taurus" task is configured as follows.
Here "_WM WebClient TestArtifacts" is the Git/Azure Repos directory where the .jmx file is kept (in code).
In my current organization we are using Azure DevOps Server (on-prem) to release our products. The current setup is that we have a bunch of build and release agents running on a set of VMs.
The servers that we actually release to are different machines than the ones running the release and build agents, therefore we end up using a lot of "PowerShell on Target Machines" tasks during the release to configure and set up dependencies for our products (ASP.NET websites in this case).
However, what I find strange is that due to this setup we can't really utilize other tasks to set up our environment/target machines. For instance, let's say that we would like to use the "Extract Files" task; this would not be possible, since the extraction would happen on our agent and not on our environment/target machine.
Are we missing something, or are you actually supposed to basically only use the "PowerShell on Target Machines" task for such a scenario?
You are right: most of the tasks run on the machine where the agent is installed. If you want to run the agent on one VM and deploy with that agent to another VM, you can use the out-of-the-box PowerShell on Target Machines, Copy Files to Remote Machine, or SSH tasks.
But in the marketplace you can find many tasks that can do things to remote machines, including zipping or unzipping files.
If you can, it's really recommended to install the agents on your target machines.
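As an illustration, a rough sketch of doing the extraction on the target machine itself via PowerShell on Target Machines, since the Extract Files task would otherwise run on the agent; the machine name, credential variables, and paths are all assumptions:

- task: PowerShellOnTargetMachines@3
  inputs:
    Machines: 'web01.contoso.local'      # assumed target machine
    UserName: '$(deployUser)'            # assumed pipeline variables
    UserPassword: '$(deployPassword)'
    ScriptType: 'Inline'
    InlineScript: |
      # Unzip on the target machine rather than on the agent
      Expand-Archive -Path 'C:\drops\site.zip' -DestinationPath 'C:\inetpub\wwwroot\site' -Force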