My VSTS continuous integration flow is:
1. create a dynamic Linux VM
2. copy the latest build artifact to the new dynamic VM
3. run some scripts on the new dynamic VM
4. run tests on the new dynamic VM
5. destroy the dynamic VM
I'm using the Azure RG Deployment task to create the dynamic VM from an ARM template, but I'm not sure about the best practice for copying the latest artifact and running scripts and tests on the new VM (the dynamic VM can't be in my Service connections list, and the PowerShell tasks in VSTS don't work on Linux).
So how do I access my latest artifact, copy it to the dynamic VM, and run scripts and tests on that VM using VSTS / Azure DevOps?
You can use "Azure File Copy" -task to copy files to virtual machine and afterwards you can use "PowerShell on Target Machines" -task to run PowerShell -scripts (you didn't specify what kind of scripts, but that is how in case of PowerShell -scripts).
What kind of tests are you talking about? You should check the Azure DevOps task gallery for test tasks that come out of the box.
Then you can use the same Azure RG Deployment -task to remove the whole vm/resource group).
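A minimal sketch of what the script executed by "PowerShell on Target Machines" might look like; the paths and script names are placeholders, and since both tasks communicate over WinRM, the sketch assumes a Windows VM:

```powershell
# Hypothetical script run on the dynamic VM by the "PowerShell on
# Target Machines" task. Assumes the "Azure File Copy" task has already
# placed the build artifact at C:\deploy\drop.zip (all paths are placeholders).
$artifact   = 'C:\deploy\drop.zip'
$workingDir = 'C:\deploy\app'

# Unpack the artifact
Expand-Archive -Path $artifact -DestinationPath $workingDir -Force

# Run the setup script shipped inside the artifact
& (Join-Path $workingDir 'setup.ps1')

# Run the tests and fail the task on a non-zero exit code
& (Join-Path $workingDir 'run-tests.ps1')
if ($LASTEXITCODE -ne 0) { throw "Tests failed with exit code $LASTEXITCODE" }
```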
Related
I'm trying to use PowerShell DSC to provision virtual machines in Azure and use Azure DevOps to manage the actual code deployment onto the virtual machines. The DSC script is set up to install IIS and ASP.NET Core hosting and to configure the sites in IIS, but not to actually deploy any application code. The last step in the DSC is to add the virtual machine to Azure DevOps. I went into the Environments tab in DevOps, copied the PowerShell script provided, parameterized my PAT so it could be passed into the configuration script, and executed it. However, when the script runs a second time, it gives an error because a resource with the same name already exists in that environment. I could not find a way to check whether the VM was already registered as an environment resource in DevOps using the VSTS agent config. I'm considering using the --replace flag, but I don't know if that will lose my deployment history or have any other consequences.
Is there a way to check if a virtual machine is already registered as a resource in an Azure DevOps environment?
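One hedged option is to query the Environments REST API before running the registration script; the endpoint, expands parameter, and api-version below are assumptions based on the distributedtask preview API, and the organization, project, and environment id are placeholders:

```powershell
# Sketch: check whether this machine is already a resource in the
# environment before (re)registering it. The REST endpoint, "expands"
# parameter and api-version are assumptions (distributedtask preview API).
$org   = 'https://dev.azure.com/myorg'   # placeholder organization URL
$proj  = 'MyProject'                     # placeholder project name
$envId = 12                              # placeholder environment id
$pat   = $env:AZDO_PAT                   # the PAT passed into the DSC config

$headers = @{
    Authorization = 'Basic ' + [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(":$pat"))
}

$uri = "$org/$proj/_apis/distributedtask/environments/$envId" +
       '?expands=resourceReferences&api-version=6.0-preview.1'
$environment = Invoke-RestMethod -Uri $uri -Headers $headers

# Register only when no resource with this machine's name exists yet
if ($environment.resources.name -notcontains $env:COMPUTERNAME) {
    # ...run the registration script copied from the Environments tab...
}
```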
I have a nodejs web application that I build in Azure Pipelines. I am planning to deploy the generated artifacts to an Azure VM (probably a DevTest Labs VM) as part of one of the pipeline steps.
I now want to run browser tests by pointing the browser at the URL hosted on the Azure VM. I want to use the Azure Windows and Linux VMs in a build pipeline to run the tests on this remote Azure VM and publish the results to the pipeline. These would essentially be Karma tests running against the nodejs server.
In my current design, the test results are going to be available on the Azure VM hosting the nodejs application.
What I don't understand is: how can I get these test results back to Azure Pipelines for publishing?
Is there a way I can architect this solution without having to set up my Azure VM as a pipeline agent in Azure DevOps?
Is there a standard pattern to design such continuous test infrastructure using Azure DevOps?
Thanks
According to your description, you just want to use a Microsoft-hosted agent to access a URL served from your self-hosted machine (it makes no difference whether that is an Azure VM or your own physical machine, and the same applies to the hosted agent). Whether that works depends on whether the URL is accessible over the public internet.
The simplest solution here is to deploy your build agent on that Azure VM directly, then run the build and tests there. You can do this with the following script and tasks (see the sketch after this list):
1. run ng test or any command that starts your tests
2. publish the test results with the PublishTestResults task
3. publish the code coverage results with the PublishCodeCoverageResults task
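A minimal sketch of the test step as an inline PowerShell script on the self-hosted agent; it assumes karma.conf.js is configured with JUnit and Cobertura reporters so that the two publish tasks can pick up the output files afterwards:

```powershell
# Sketch of the test step run on the self-hosted agent (the Azure VM).
# Assumes package.json's "test" script runs karma in single-run mode and
# that karma.conf.js writes JUnit test results and Cobertura coverage XML
# for the PublishTestResults / PublishCodeCoverageResults tasks to consume.
Set-Location $env:BUILD_SOURCESDIRECTORY   # predefined pipeline variable

npm ci       # restore exact dependencies
npm test     # run the Karma tests against the local nodejs server

# Fail the step (and the pipeline) if the tests failed
if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }
```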
The Microsoft-hosted agent pool will not work for every scenario, but for many teams it is the simplest way to run jobs. You can try it first and see if it works for your build or deployment; if not, use a self-hosted agent. Self-hosted agents give you more control over your builds, tests, and deployments.
In your scenario, setting up your Azure VM as a pipeline agent and running the build and tests on it should be the simplest and most convenient solution.
I have a requirement to integrate JMeter scripts, checked into a Git repository, with a DevOps pipeline so that I can run them on a specific VM in Azure.
Basically, I should have all my .jmx and .csv files in a Git repository, and when I run the pipeline with the script name as a parameter, it should run that script on a specific VM (one without a static IP) and copy the .jtl to some storage.
What is the best way to achieve this?
With a DevOps pipeline so that I can run the JMeter scripts using a specific VM in Azure. What is the best way to achieve this?
If the specific VM exists before the current pipeline runs, you can consider installing a self-hosted agent there.
To do CI/CD with Azure Pipelines, we need at least one agent. If we use a Microsoft-hosted agent, it provides one fresh VM for us to run jobs. Since you need to run the script on your own specific VM, I suggest using a self-hosted agent. You can follow the steps here to install an agent on your own VM. (The steps are quite easy and only take several minutes.)
After making your VM a self-hosted agent, the pipeline will use your VM to run the jobs. Now your original issue turns into how to run JMeter locally from the command line. See similar issues here: Five Ways To Launch a JMeter Test without Using the JMeter GUI and Run .jmx file through command line ....
1. So now we can use a command-line task in the pipeline to run the JMeter commands shared in the similar topics above, and these jobs are done on your specific VM (see the sketch after this list).
2. I'm not sure which location you want to copy the .jtl to, but you can use the Azure File Copy task to copy files to Microsoft Azure storage blobs or virtual machines (VMs), or a simple copy/xcopy command in your command-line task to copy files to another location on the same machine (the specific VM).
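A hedged sketch of that command-line step; JMETER_HOME, the folder layout, and the SCRIPT_NAME variable are placeholders, while -n (non-GUI), -t (test plan), and -l (results log) are standard JMeter flags:

```powershell
# Sketch of the pipeline's command-line step on the self-hosted agent.
# JMETER_HOME, the folder layout and the SCRIPT_NAME variable are
# placeholders; -n, -t and -l are standard JMeter non-GUI flags.
$jmeter     = Join-Path $env:JMETER_HOME 'bin\jmeter.bat'
$scriptName = $env:SCRIPT_NAME   # hypothetical pipeline parameter
$jmx        = Join-Path $env:BUILD_SOURCESDIRECTORY "tests\$scriptName.jmx"
$jtl        = Join-Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY "$scriptName.jtl"

& $jmeter -n -t $jmx -l $jtl
if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }

# The .jtl now sits under Build.ArtifactStagingDirectory, from where the
# Azure File Copy task can push it to blob storage.
```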
Hope all above helps :)
I have used the following tasks in my Azure CD pipeline. The "Run Taurus" task is configured as shown (screenshot omitted), where "_WM WebClient TestArtifacts" is the Git/Azure Repos directory where the .jmx file is kept (in code).
I have multiple ASP.NET web applications which I want to release to different VMs. Manually installing a DevOps agent on every single VM seems really inefficient. Is there a way to make this process faster? Is it possible to create a release pipeline that pushes the code directly to the public IP of the VM?
As a workaround, you can prepare a script that registers each new agent. You can find the parameters here: Self-hosted Windows agents - Unattended config.
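A sketch of such a registration script using the documented unattended-config switches; the organization URL, pool name, and PAT variable are placeholders:

```powershell
# Sketch: register a self-hosted agent without prompts, using the
# unattended-config switches documented on the linked page. The URL,
# pool name and PAT variable are placeholders.
Set-Location 'C:\agent'   # folder where the agent package was extracted

.\config.cmd --unattended `
    --url 'https://dev.azure.com/myorg' `
    --auth pat `
    --token $env:AZDO_PAT `
    --pool 'Default' `
    --agent $env:COMPUTERNAME `
    --runAsService
```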
If you deploy your web applications with the IIS deployment tasks, you can try the Manage IIS task, which can create a website on a remote machine.
Then you can add a Windows machine file copy task to copy the build artifacts to the website's physical path on the remote machine.
Another workaround is to manage IIS with a PowerShell script: add the PowerShell on target machines task to run a script that manages the IIS website. You can refer to the example scripts at this page and this page. For more information about the IIS PowerShell commands, see here.
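A minimal sketch of such a script; the site name, physical path, and port are placeholders, and it uses the WebAdministration module covered by the pages above:

```powershell
# Sketch of a script the "PowerShell on target machines" task could run
# on the remote VM to create an IIS website. Site name, physical path
# and port are placeholders.
Import-Module WebAdministration

$siteName = 'MyWebApp'
$sitePath = 'C:\inetpub\MyWebApp'   # where the artifacts were copied to
$port     = 8080

# Create the site only if it does not exist yet
if (-not (Test-Path "IIS:\Sites\$siteName")) {
    New-Website -Name $siteName -PhysicalPath $sitePath -Port $port
}

Start-Website -Name $siteName
```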
As we know, we can add an Azure VM as a machine in our VSTS deployment group using the PowerShell script that VSTS provides. Based on that, we can create a new release definition and add our machine to the pipeline.
The question is: is there any way to add a non-Azure VM to a VSTS deployment group?
The script that is provided is agnostic to cloud providers and can be used on any machine with PowerShell and internet connectivity.
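The deployment-group switches on config.cmd make this explicit; a sketch, with the organization URL, project, and group name as placeholders:

```powershell
# Sketch: register any machine (Azure VM or not) into a deployment group
# using the agent's unattended-config switches. The organization URL,
# project and deployment group names are placeholders.
Set-Location 'C:\agent'   # folder where the agent package was extracted

.\config.cmd --unattended `
    --deploymentgroup `
    --deploymentgroupname 'MyDeploymentGroup' `
    --projectname 'MyProject' `
    --url 'https://dev.azure.com/myorg' `
    --auth pat `
    --token $env:AZDO_PAT `
    --runAsService
```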