How does a Microsoft-hosted agent relate to vmImage types?

I am a free-tier user of Azure DevOps. As indicated in https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent, each user is entitled to 10 parallel jobs.
When I log in to look at the available agents in the Azure Pipelines pool, I see the following:
I am just curious: are the agents listed here equivalent to 10 virtual machines? If so, how many of them are Windows images and how many are Mac images? Or are they just wildcards that can be provisioned as any vmImage type at run time?
Thanks!

Or are those just wildcards that can be provisioned as any vmImage type during run time?
Consider them wildcards: they can be provisioned as any vmImage type at queue time.
Details:
Azure DevOps provides some predefined variables about the agent. To confirm the statement above, I use this script in a CMD task:
echo ID: $(Agent.Id)
echo OS: $(Agent.OS)
echo Name: $(Agent.Name)
echo MachineName: $(Agent.MachineName)
We can disable some of those agents and leave others enabled, which forces the pipeline to run on one specific agent. Here's part of the result list:
So you can consider them wildcards: each agent can represent any vmImage type. Disabling these agents is not recommended in normal situations; it is only useful for testing. Normally, with all of them enabled in a public project, you can run ten pipelines at the same time, regardless of which OS each one targets.
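To see this in practice, here is a minimal sketch (the image names are the standard hosted aliases; the script step is only illustrative). One pipeline fans out into three jobs, and the same hosted pool provisions whichever image each job requests:
# one pipeline, three jobs: each job gets a fresh VM of the requested image type
strategy:
  matrix:
    linux:
      imageName: 'ubuntu-latest'
    mac:
      imageName: 'macos-latest'
    windows:
      imageName: 'windows-latest'
pool:
  vmImage: $(imageName)
steps:
- script: echo $(Agent.OS) / $(Agent.Name)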

Related

How to resolve "No hosted parallelism has been purchased or granted" in free tier?

I've just started with Azure DevOps pipelines and created a very simple pipeline with a Maven task. For now I don't care about parallelism, and I'm not sure in what way I've even added it to my pipeline. Is there any way to use the Maven task on the free tier without parallelism?
This is my pipeline:
trigger:
- master
pool:
  vmImage: ubuntu-latest
steps:
- task: Maven@3
My thought was that tasks always run in parallel? Other than that, I cannot see where the parallel step is.
First: tasks are always executed sequentially, and one sequential pipeline is documented as "1 parallel agent"; yes, the naming could be better. Due to the changes laid out below, new accounts now get zero parallel agents, and a manual request must be made to get the previous default of 1 parallel pipeline and the free build minutes.
See this:
We have temporarily disabled the free grant of parallel jobs for public projects and for certain private projects in new organizations. However, you can request this grant by submitting a request. Existing organizations and projects are not affected. Please note that it takes us 2-3 business days to respond to your free tier requests.
More background information on why these limitations are in play:
Change in Azure Pipelines Grant for Private Projects
Change in Azure Pipelines Grant for Public Projects
Changes to Azure Pipelines free grants
TLDR: people were using automation to spin up thousands of Azure DevOps organizations, adding a pipeline and using the service to send spam, mine Bitcoin, or for other nefarious purposes. The fact that they could do so free, quickly, and without any human intervention was a burden on the team. Automatic detection of nefarious behavior proved hard and turned into an endless cat-and-mouse game. The manual step is a necessary evil that has put a stop to this abuse and is in no way meant as a step towards further monetization of the service. It's actually there to ensure a free tier remains something that can be offered to real people like you and me.
This is absurd. The "free tier" is not entirely free unless you request it again!
Best option: use a self-hosted pool. It can even be the laptop you want to run the tests on.
The Microsoft docs describe how to set one up.
Then reference that pool in your YAML file:
pool: MyPool
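If you need the longer form, the pool block can be expanded; a minimal sketch, assuming a self-hosted pool named MyPool containing Linux agents:
pool:
  name: MyPool                # pool created under Project settings -> Agent pools
  demands:
  - Agent.OS -equals Linux    # optional: only pick agents whose Agent.OS capability is Linux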
Alternatively, request access from Microsoft: you can submit the grant request form, and typically it gets approved in a day or two.
##[error]No hosted parallelism has been purchased or granted. To request a free parallelism grant, please fill out the following form https://aka.ms/azpipelines-parallelism-request
The simplest solution is to change the project from public to private so that you can use the free pool. Private projects have a free pool by default.
Otherwise, consider using a self-hosted pool on your machine, as suggested above.
Here's the billing page.
If you're using a recent version of macOS with Gatekeeper, this "security enhancement" is a serious pain for the unaware: you get hundreds of errors, and each denied assembly has to be manually allowed under Security.
Don't do that.
After downloading the agent archive from DevOps and BEFORE you unpack it, run this command on it. It removes the quarantine attribute that triggers the errors and lets you continue uninterrupted.
xattr -c vsts-agent-osx-x64-V.v.v.tar.gz ## replace V.v.v with the version in the filename downloaded.
# then unpack the gzip tar file normally:
tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
Here are all the steps you need to run, including the above, so you can get past the "hosted parallelism" issue and continue testing immediately, either while you wait for authorization or to skip it entirely.
Go to Project settings -> Agent pools
Create a new agent pool and call it "local" (call it whatever you want, or do this in the Default agent pool).
Add a new agent and follow the instructions, which include downloading the agent for your OS (macOS here).
Run xattr -c vsts-agent-osx-x64-V.v.v.tar.gz on the downloaded file to remove the Gatekeeper security issues.
Unpack the archive with tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
cd into the extracted directory and run ./config.sh. The most important configuration option is Server URL, which will be https://dev.azure.com/{organization name}; the defaults are fine for the rest. Continue until you are back at the command prompt. At this point, if you look inside DevOps, in either your new agent pool or Default (depending on where you put it), you'll see your new agent as "offline", so run:
./run.sh, which brings your agent online. Your agent is now running and listening for you to start your job. Note that this ties up your terminal window.
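If you'd rather not keep a terminal occupied, the extracted agent directory also ships service scripts; a minimal sketch for macOS/Linux:
./svc.sh install   # register the agent as a background service
./svc.sh start     # start it; the agent stays online with no open terminal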
Finally, in your pipeline YAML file configure your job to use your local agent by specifying the name of the agent pool where the self-hosted agent resides, like so:
trigger:
- main
pool:
  name: local
# pool:
#   vmImage: ubuntu-latest
I faced the same issue. I changed the project visibility from Public to Private and then it worked. No need to fill in a form or purchase anything.

Is there an easy way to run Azure DevOps PowerShell scripts on my local machine?

I tried to find anything on this but I didn't succeed. Maybe I am using the wrong words for the search.
What I am trying to achieve is a script that can run in an Azure DevOps environment as well as on my local machine for debugging. As far as I can see, to execute it locally I would need some kind of wrapper for the script that behaves like the Azure DevOps task does. Does anything like that exist?
If you want more control over building your code and want to see intermediate results, you need to install a self-hosted agent on your machine; the documentation has more info on this.
Most of the tasks are simply wrappers around console tools that add authorization handling or make the output visually accessible. It may be useful for you to enable the System.Debug flag on the Microsoft-hosted agent to see in more detail what a particular task does, and thus better understand what is happening behind the scenes.
For instance, if you use variables in your script like $(someVariable), then with System.Debug set you will see your final script in the log with the values substituted.
Be aware also that secret variables are masked, so you may find *** in the logs instead of the real value.
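To have this on by default, rather than ticking the diagnostics checkbox at queue time, you can set it as a pipeline variable; a minimal YAML sketch:
variables:
  system.debug: 'true'   # verbose logging for every task in the run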
However, there is no easy way to extract and wrap what a task does so that you can repeat it on your machine without involving the Azure DevOps agent.

Using Azure PowerShell to copy data from one slot to another in pipeline/release

I've created a pipeline and release in Azure DevOps, but before I perform a swap of the slots I need to copy files from the 'production' slot to my 'staging' slot. This is because our customer uploads files to the web app itself (at least for now).
I've been doing some research and I don't think this is possible with a built-in task in Azure DevOps. I think it's possible using PowerShell though.
Has anyone done this before?
This isn't possible out of the box, but you can do it over FTP. There is an example of how to configure FTP access to your web app, and it works the same way for a slot. So what you need is:
configure FTP access on your production slot
configure FTP access on your staging slot
copy files from the production slot; unfortunately there is no out-of-the-box task for this, so you need to use PowerShell, as in the example below
upload the files to your staging slot using the FTP Upload task
It gets a bit more awkward if your slot is not long-lived and you create it automatically. I'm not 100% sure, but it may be that the credentials for your production slot also work for the other slots; if so, you can skip the second bullet, and dynamically created slots should not be an issue.
Example PowerShell task:
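A minimal sketch, assuming hypothetical pipeline variables ftpUrl, ftpUser and ftpPassword hold the production slot's FTP endpoint and deployment credentials, and a hypothetical file name customer.dat:
# connect to the production slot's FTP endpoint with the deployment credentials
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("$(ftpUser)", "$(ftpPassword)")
# download a customer-uploaded file to the agent's working directory
$client.DownloadFile("$(ftpUrl)/site/wwwroot/uploads/customer.dat", "$(System.DefaultWorkingDirectory)\customer.dat")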

Can we copy files from $(System.DefaultWorkingDirectory) to an Azure IaaS server using the agent pool "Hosted VS2017"?

We are using a CI/CD pipeline in OneITVSO. Earlier we had an internally created agent pool; now we are asked to use "Hosted VS 2017". We have a database solution, an ETL solution and a Tabular Model solution that need to be deployed, plus certain scope scripts.
We are able to build the solution using "Hosted VS 2017", but we are not able to deploy with it. In the release pipeline we have a "Windows Machine File Copy" task, which copies artifacts (dacpac/ispac/.sql files) from the build server to the dev/UAT servers.
With the earlier agent pool this pipeline deployed successfully, but now that we use "Hosted VS 2017" we get the error below:
Failed to connect to the path \\AZDEVSERVERSQL01 with the user domain\servicecredentialdwd for copying. System error 53 has occurred.
1) Can "Hosted VS 2017" be used for tasks like "Windows Machine File Copy"? (We are using a Microsoft Azure virtual machine, i.e. IaaS.)
2) If "Hosted VS 2017" can be used even for Azure IaaS machines, are we missing some credential access? Should we grant domain\servicecredentialdwd any access for the "Hosted VS 2017" agent pool? If so, what permissions have to be given, and how?
NOTE: The same pipeline deploys when a private agent is used and fails when "Hosted VS 2017" is used.
If your IaaS server has a public IP configured, then yes. If not, then no. The build agent has to be able to establish a network route to your virtual machine. If the VM is isolated in a private network, then the build server can't send traffic to it.
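If you are unsure whether the hosted agent can reach the machine, a quick check you can run from the pipeline (a YAML sketch; Windows Machine File Copy works over SMB, port 445, and in a classic release the same command goes in a PowerShell task):
- powershell: Test-NetConnection -ComputerName AZDEVSERVERSQL01 -Port 445  # reports TcpTestSucceeded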

Can I pass VCAP_SERVICES to the test stage in the IBM Cloud Continuous Delivery pipeline?

In the (unit) test stage I'm running the following commands:
echo "Installing Node Modules"
npm install
echo "Run Unit Tests"
npm run test-mocha
My problem is that I cannot access VCAP_SERVICES in the test stage (the job is set to unit test).
Is there a way to access or pass them?
The only way I see is using the cf CLI in the shell provided in that stage, but that would require authentication, and you certainly do not want to store your user credentials there.
So one way would be to store the data in the Environment Properties tab provided for that stage. You then have to update those values whenever something changes, since they are not supplied by the VCAP file, but that seems to be how it is for the test stage, at least.
As already mentioned, the best way to use VCAP_SERVICES in the test stage is to set it yourself in the stage's Environment Properties configuration.
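For example, after pasting the VCAP_SERVICES JSON (e.g. from cf env output) into an environment property named VCAP_SERVICES, the test job script can fail fast if it is missing; a minimal sketch:
# fail fast if the property was not configured for this stage
if [ -z "$VCAP_SERVICES" ]; then
  echo "VCAP_SERVICES is not set; add it in the stage's Environment Properties"
  exit 1
fi
npm install
npm run test-mocha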
The pipeline is the build environment. It needs to be able to run even if the app is not yet deployed or has crashed. We sometimes copy in values from the runtime environment, but the build environment should minimize its dependencies on the runtime environment wherever possible.
There's also the question of the pipeline workers being able to access the runtime services specified in VCAP_SERVICES. For the services I've used in my pipelines it has always worked, but it's not a guaranteed thing.