Build failure with Parallelism in Azure DevOps - azure-devops

I ran the build, but it is failing with the errors below. I have corrected the code, but I'm not sure what caused this.
Also, every time I have chosen the agent specification VS2017-Win 2016, but it seems a different one is used, which I noticed after the build ran. Why is this happening? Please help.
It is an ASP.NET build.

Recently Microsoft has been battling Bitcoin miners on their pipelines platform. To combat this, they've added additional validation to ensure you are who you say you are and that your intent isn't nefarious.
They've taken away all the free pipelines from accounts by default and you need to fill in a form to contact support to get these pipelines reinstated.
The link to the form is in the red error message that is rendered in your build log.
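Separately, once your free parallelism grant is restored, it can help to pin the hosted image explicitly in YAML instead of relying on the classic editor's dropdown. A minimal sketch, assuming the VS2017/Windows Server 2016 hosted image of that era; the solution glob and configuration are illustrative:

```yaml
# azure-pipelines.yml - pin the Microsoft-hosted image explicitly
trigger:
- main

pool:
  vmImage: 'vs2017-win2016'   # VS2017 on Windows Server 2016 (since retired by Microsoft)

steps:
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'       # illustrative glob; point this at your solution
    configuration: 'Release'
```

With the image declared in the pipeline file itself, every run records which image was requested, making mismatches easier to spot in the build log.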


What is the use of CI/CD, and how does it save me time, when I can simply push and pull my code from GitHub and put my code into production easily?

I'm trying to learn CI/CD concepts on my own. I don't understand how it helps me when I can easily push and pull my code from GitHub and put my code into production.
Continuous Integration is more a culture than a tool. So, you need to understand why it's necessary that every developer on a team integrates their code with the repository at least once a day.
Continuous Delivery likewise addresses the challenges and best practices of delivering high-quality software as soon as possible. So, teams that want to decrease the risk and problems of integrating features, and increase the speed of delivering new features, should adopt the CI/CD culture.
To ensure that every change added to the repository will work and integrate with the other parts, you need to check it. For instance, you need to make sure that the project builds successfully, the tests pass, the new changes don't break any other parts, your code passes the required code-quality checks, and so on.
After that, you have to deploy/publish that version of your software somehow. This process usually has several steps and can be done manually in small teams/projects.
Based on the first rule of Continuous Integration, every team member should integrate the code with the repository multiple times a day. Since the frequency of this integration is high, it's not a good idea to do this process manually. There are always chances that somebody forgets to run the operation. That's the main reason why it's necessary to have an automatic CI/CD pipeline.
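The checks and deployment steps described above are exactly what a pipeline automates on every push. As a minimal sketch in Azure Pipelines YAML, assuming a .NET project; the solution name and commands are illustrative:

```yaml
# azure-pipelines.yml - runs automatically on every push: that is the CI part
trigger:
- main

steps:
- script: dotnet build MySolution.sln            # fail fast if the build breaks
- script: dotnet test MySolution.sln             # fail if any test fails
- script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)  # deployable output
```

The point is not the individual commands (you run those locally too) but that nobody has to remember to run them: the pipeline does it on every integration.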

Azure DevOps - how to know a Resolved work item is actively being tested (vs. in the queue)

TLDR;
Does Azure DevOps have a recommended built-in way of marking Resolved work items as being actively tested, as opposed to being in the queue for testing?
Details
My team is using Azure DevOps with Agile workflow.
This means that out-of-the box a user story goes through the following states
New -> Implementation started ->
Active -> Code complete ->
Resolved -> Acceptance tests passed ->
Closed
This is nicely shown at learn.microsoft.com:
Testing happens when the story (or bug) is in the Resolved state.
The out of the box board has 4 lanes.
When looking at the board (or even in queries) I'm having trouble seeing what is being actively tested.
For example, if there are 2 resolved items it is not clear which one is being actively worked on and which one is waiting to be picked up.
Showing what is being tested seems like a common desire and my intuition is that the solution for my problem is built in. I want to avoid customising the workflow (and adding a new state called Testing).
Cross-post from pm.stackexchange.com
I would suggest using the Kanban board and adding a column named Testing. This won't require adding a custom state to your workflow but does give you more visibility into the state of a work item.
You can also split columns into Doing and Done so you know where an item is stuck in the flow of your work.
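The board column is also visible to queries, so "being actively tested" can show up there too. A hedged sketch in WIQL, assuming a split column named Testing; the System.BoardColumn / System.BoardColumnDone fields are the standard Kanban board fields:

```sql
SELECT [System.Id], [System.Title], [System.BoardColumn]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.BoardColumn] = 'Testing'
  AND [System.BoardColumnDone] = false   -- the 'Doing' half of the split column
```

This way the distinction between "in test" and "waiting to be picked up" is queryable without adding a custom Testing state to the workflow.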
In my company we have been in the same situation.
I like @Wouter's answer and I think it deserves upvotes, but I just want to present it in another way (it depends on your needs; maybe they are similar to mine).
In my company we have a board with a column called "Testing" (divided into "Doing" and "Done"), but this board is only for developers and their own tests. In our development lifecycle there is another step called "UAT", where the software is deployed to a test environment called "uat" and tested by business users who run their own acceptance tests.
Now, we developers don't want these business users to use Azure DevOps Boards or, worse, change work item states (we tried, it can be a mess).
We want to isolate their actions on a Test Plan.
Therefore, when the release pipeline deploys N work items to the "uat" stage, it also creates a Test Plan with N test cases (manual, not automated) that our business users must validate with Pass / No pass.
When all test cases have been validated, we can deploy the build to Production.
Maybe this suits you.

What are the limitations of the "free" VSTS?

I'm currently evaluating VSTS, but I'm concerned about some of the limitations of the "free" version.
I believe there is a 10 GB storage limit. Is this for everything (source code, build artifacts, packages, etc.), and is there any way to increase this?
I've also seen a limit of four hours of build time per month - that's only 12 minutes a day! I'm finding that even a small solution takes a few minutes to build; our "real" solutions are significantly larger and we often check in code many times during a typical day.
What happens if this build limit is exceeded? Are you prevented from building until the next billing month?
Am I right in saying that purchasing a hosted pipeline (aka "hosted CI/CD") at US$40/month would overcome this limit?
I'm not sure where you got that idea from. There are no limits on storage for source code, packages, build artifacts, or test results that I'm aware of.
There is a 10 GB limit for hosted build agents, but that just refers to the amount of source code/build output that can be present on a single instance of the hosted agent. Honestly, if your source code is anywhere near 10 GB, you're going to find the hosted build agents to be inadequate anyway.
Regarding build, refer to the documentation. You can either pay for Microsoft-hosted agents or purchase private pipelines, which enable you to set up your own build infrastructure. Every VSTS account has one free private pipeline, so you actually don't need to pay anything extra at all, assuming you take on the job of maintaining your own build/release server and only need one thing to run at a time.
The "free" VSTS, as you say, has a limit of five users with basic access. For stakeholders, you can add as many as you need.
For build, you have up to 4 h/month. But if you want to use CI, that is probably not enough. If you will use it only to build at certain points manually, it could be a start.
With your free account you could download and install a private build agent. This will have no minute limits. So you could implement a CI build, for instance.
Hosted agents have up to 10 GB of storage. But again, if you use a private one, you will not have this limit. For other things like code, work items and so on, as far as I know there are no limits.
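For reference, configuring a private agent is mostly download-and-run. A rough sketch for a Windows agent; the account URL and PAT are placeholders you must supply, and the exact package and paths vary:

```powershell
# After downloading and extracting the agent package from the Agent Pools page:
cd C:\agent
.\config.cmd --url https://{your-account}.visualstudio.com --auth pat --token {your-pat}
.\run.cmd   # runs the agent interactively; config.cmd can also register it as a Windows service
```

Once registered, the agent shows up in your pool and builds queued against it consume no hosted minutes.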
Here you can see how to buy more hosted agents.
Depending on your needs, you could go after Microsoft Action Pack, which will give you internal licenses for other Microsoft software as well as more VSTS users via an MSDN subscription.
Since you are evaluating, you can take a look at this link for more global resource limitations, but they are pretty high, since Microsoft itself uses VSTS.

What is a good tool for Build Pipelines?

I need a tool that will graphically represent our build pipeline. The below screenshots of ThoughtWorks Go and the Jenkins Pipeline plugin illustrate almost exactly what I want it to look like.
The problem is that we already use Jenkins for our builds and deployments, along with a few other custom tools for orchestration type duties. We don't want a pipeline tool to do the builds or deployments itself, it just needs to invoke Jenkins! I tried out Go, and the first thing it asked for is where my source code is and how to build it. I couldn't get Go to work in a way where Jenkins does the builds but Go creates the pipeline.
I've also experimented with the Jenkins Pipeline plugin, but it's very limiting. For one, it doesn't work with the Join plugin (so we can't have jobs run in parallel, which is a requirement). It also assumes that all of our tasks happen in Jenkins (Jenkins can't see outside of our test lab and into our production environment). I don't know if this is a viable option either.
So, does anyone have any recommendation for some pipeline tools that will do what I'm looking for?
Edit (03/2018)
Since writing this question in 2012 and answering it in 2014, numerous tools have come online to support what I originally wanted. Jenkins now supports scripted pipelines natively and has an excellent UI (Blue Ocean) for rendering them. Those stumbling on this question should consider using these for their pipeline needs.
https://jenkins.io/doc/book/pipeline/
https://jenkins.io/projects/blueocean/
End edit
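To give a flavor of what those links describe, here is a minimal declarative Jenkinsfile sketch (stage names and shell commands are illustrative); Blue Ocean renders each stage, including the parallel branches, as a node in the pipeline graph:

```groovy
// Jenkinsfile - minimal declarative pipeline; Blue Ocean draws one node per stage
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }       // illustrative build step
        }
        stage('Test') {
            parallel {                      // parallel branches, no Join plugin needed
                stage('Unit')        { steps { sh 'make unit-test' } }
                stage('Integration') { steps { sh 'make integration-test' } }
            }
        }
        stage('Deploy') {
            steps { sh 'make deploy' }
        }
    }
}
```

This covers both complaints from the original question: parallelism is first-class, and any step can shell out to external orchestration tools rather than assuming everything happens inside Jenkins.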
(Old answer)
It didn't exist when I asked the question, but Jenkins' Build Flow Plugin does exactly what I needed, and creates pipeline views very well.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
Jenkins/Hudson can certainly be used to achieve a real pipeline.
You could use Go if you used a dummy material (an empty git repo, for example), and then used the API to trigger a pipeline and upload artifacts.
But, that's quite some effort, and you should probably only do that if you have a very good reason otherwise to use Go.
You can also try a GoCD pipeline. It has very nice features for continuous delivery and a nice dashboard that shows real-time flow and status. Give it a try.

What should we do when the buildserver is treated like a goldmine?

A year ago I started to create some automated builds on our build machine (TFS2008). Not so much for combining with full scale TDD (we still have a lot of old legacy code), but for being able to detect at an early stage if builds got broken. Another objective was also to minimize the packaging/deployment work.
This has been working quite well so far, but lately some coworkers have started to treat the build server as a goldmine of quick releases, and the testing process seems to get less priority more and more often. Refactoring some of our code over 2-3 days showed that the builds on our build server could potentially reach our customers. :)
I guess our build server has over time shifted from being a 'consistency tool' for the developers into being a server producing packages that are expected to be release quality 24/7.
This is clearly a communication problem, and there should be a set of rules on this. Only problem is that I don't know where to begin. Does anyone have similar experiences with this?
You're correct, it is a communications problem. If your developers and management are expecting release-quality builds all the time, they're not understanding the process of build/test/release.
The only thing you can do is clarify the purpose of a build server: a single, centralized location for builds. You need to clarify the distinction between a build and a release. Builds should always succeed (no one should break the build) but the ability to create a build does not have any bearing whatsoever on build quality or the suitability of a given build for release.
Build quality is measured by unit, functional, and user acceptance testing. There is no replacement for these tests in preparing a build for release. The long-term costs of not doing these tests far outweigh the short-term benefits of getting a release out the door.
Our unit-test server runs the tests and tags CVS. Then we go to a build server which has a script to create a release that is ready for customer installation. This release is then installed on a test server as if it were the customer's server, and then tested.
Judging by your story, you are hoping to find some script or setting which will prevent the build server from being used as a "quick release" server. The only real way to do this is process.
Rules in our company:
Developers check into CVS, they get mails from the unittest server if it fails, and have to fix that in code. No access to the build/test server for devs.
There is 1 specific developer who can create a release which he can send to the test department.
The test department installs the release on their test server and tests it.
The testers, and only the testers, can give a "Go" for release.
The release is done by a designated person who is also the customer contact.
As you can see, the developers are separated from the testers and the customer (formally speaking). In practice it is not all that rigid of course, but people need to understand that if this process is not in place, the customer will get inferior-quality software.
The customer has to be educated that "fast" means "low quality". We can do it Fast, Good, or Cheap. Pick two.
http://www.sixside.com/fast_good_cheap.asp
I suggest that all builds created by the internal build server state in the splash screen "INTERNAL BUILD - NOT FOR CUSTOMERS", and the release build server plops in the official splash screen.