What are the limitations of the "free" VSTS? - azure-devops

I'm currently evaluating VSTS, but I'm concerned about some of the limitations of the "free" version.
I believe there is a 10 GB storage limit. Is this for everything (source code, build artifacts, packages, etc.), and is there any way to increase this?
I've also seen a limit of four hours of build time per month - that's only 12 minutes per working day! I'm finding that even a small solution takes a few minutes to build; our "real" solutions are significantly larger, and we often check in code many times during a typical day.
What happens if this build limit is exceeded? Are you prevented from building until the next billing month?
Am I right in saying that purchasing a hosted pipeline (aka "hosted CI/CD") at US$40/month would overcome this limit?

I'm not sure where you got that idea from. There are no limits on storage for source code, packages, build artifacts, or test results that I'm aware of.
There is a 10 GB limit for hosted build agents, but that just refers to the amount of source code/build output that can be present on a single instance of the hosted agent. Honestly, if your source code is anywhere near 10 GB, you're going to find the hosted build agents to be inadequate anyway.
Regarding build, refer to the documentation. You can either pay for Microsoft-hosted agents or purchase private pipelines, which enable you to set up your own build infrastructure. Every VSTS account has one free private pipeline, so you actually don't need to pay anything extra at all, assuming you take on the job of maintaining your own build/release server and only need one thing to run at a time.

The "free" VSTS, as you say, has a limit of five users with basic access. For stakeholders, you can add as many as you need.
For build, you have up to four hours per month. If you want to use CI, that is probably not enough; if you will only use it to build manually at certain points, it could be a start.
With your free account you can download and install a private build agent, which has no minute limits, so you could implement a CI build, for instance.
Hosted agents have up to 10 GB of storage, but again, a private agent does not have this limit. For other things like code, work items, and so on, there are no limits as far as I know.
Here you can see how to buy more hosted agents.
Depending on your needs, you could also consider the Microsoft Action Pack, which gives you internal licenses for other Microsoft software as well as more VSTS users via an MSDN subscription.
Since you are evaluating, you can take a look at this link for the more global resource limits, but they are quite high, since Microsoft itself uses VSTS.

Related

Installer creation is time-consuming

I develop Windows desktop applications.
Several times a month we have to create an installer for a department of about 400 people. Each time, we have to place the installer we are responsible for on the file server, receive installers from other teams, check that the version numbers are correct, prepare information materials for our members, and notify the installer creation team.
These tasks are tedious because they involve a lot of mistakes by myself and others, take a lot of time, and require a lot of attention.
I would like to reduce the manual work.
Is there a better way?
In addition, we are using TFS and Azure DevOps for code management.
(We are in the process of transitioning from TFS to Azure DevOps, so which one is used depends on the team.)

What is the use of CI/CD, and how does it save me time, when I can simply push and pull my code from GitHub and put my code into production easily?

I'm trying to learn CI/CD concepts on my own, and I don't understand how it helps me when I can easily push and pull my code from GitHub and put my code into production.
Continuous Integration is more a culture than a tool. So, you need to understand why it's necessary that every developer on a team integrates their code with the repository at least once a day.
Continuous Delivery likewise addresses the challenges and best practices of delivering high-quality software as soon as possible. Teams that want to decrease the risk and problems of integrating features, and increase the speed of delivering new features, should adopt the CI/CD culture.
To ensure that every change added to the repository works and integrates with the other parts, you need to check. For instance, you need to make sure that the project builds successfully, the tests pass, the new changes don't break any other parts, the code passes the required quality checks, and so on.
After that, you have to deploy/publish that version of your software somehow. This process usually has several steps and can be done manually in small teams/projects.
Based on the first rule of Continuous Integration, every team member should integrate code with the repository multiple times a day. Since the frequency of this integration is high, it's not a good idea to do the process manually; there is always a chance that somebody forgets to run a step. That's the main reason why an automatic CI/CD pipeline is necessary.
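To make the idea concrete, here is a minimal sketch (in Python, though any scripting language works) of what a CI server automates on every push: run each check in order and stop at the first failure. The specific step commands below are assumptions for illustration; a real pipeline would use your project's own build, test, and lint tools.

```python
import subprocess
import sys

# Each step is (name, command). These commands are placeholders; substitute
# whatever build, test, and lint tools your project actually uses.
PIPELINE_STEPS = [
    ("build", [sys.executable, "-m", "compileall", "-q", "src"]),
    ("unit tests", [sys.executable, "-m", "pytest", "-q"]),
]

def run_pipeline(steps):
    """Run each step in order; stop and report at the first failure."""
    for name, command in steps:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED: {name}")
            return False
        print(f"passed: {name}")
    return True
```

A CI server runs exactly this kind of loop on every check-in, so no one has to remember to run the checks by hand.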

VSTS limit of hosted pipelines

We're using hosted agent pipelines/pools on VSTS. We've set the hosted pipeline count to 10. It's not enough for our needs, so I tried increasing the count, but the web UI does not allow me to (and the link shown there, which is supposed to explain how to raise the limit, is dead).
Is there any official article or documentation mentioning this limit? Is there a way to remove that limitation or at least raise it?
The only mention of such a limit that I've found was in one answer to a Stack Overflow question (on how to add agents to a queue in VSTS), and it states that 5 is the max (which is obviously out-of-date information). It does not provide any source reference.
We are aware of private pipelines / private build agents (which seem to have the limit set at 1000, and the same dead link to explain how to raise it). We are using those, but for this particular case, switching from hosted to private is not a viable option.
EDIT (2018-06-27):
Microsoft staff mentioned the limit on the "Microsoft-hosted CI/CD" service page in the Visual Studio Marketplace, in the "Q & A" section, in March 2018:
Currently, we have a hard limit of 10 Hosted pipelines. If more hosted pipelines are needed, customers have to contact us and we will increase the limit. Since, hosted pipelines come with a dedicated azure agent, we have a check on the maximum anyone can buy and for those accounts with the need for higher number of hosted agents, we would like to allot them on case by case basis. We are currently planning to increase the upper limit for Hosted Pipelines. You can contact us here: "RM_Customerqueries at microsoft dot com" with your account details and the number of hosted pipelines you need. We will increase the hosted pipelines for you.
Although it mentions a way to increase the limit, a later question (2018-06-26) from another person states that it does not seem to work:
Is there an updated process for requesting more than 10 hosted agents? Getting "undeliverable address" responses to the email listed in a reply below from March.
EDIT (2018-12-31):
In the meantime I tested a suggestion posted on UserVoice and filed a support ticket via the Azure Portal. This worked (after some clarifications): the hosted agent pipeline limit was raised for our VSTS (now Azure DevOps) account.
The page for ordering the agents has also been tweaked; it now explains that you should contact support (and the previously mentioned link now points to the Azure DevOps support page).
Submit a user voice here: Hosted pipeline limit

force stable trunk/master branch

Our development department is growing, and I want to enforce a stable master/trunk.
Up to now, every developer can commit to master/trunk. In the future, developers should commit to a staging area; if all tests pass, the code gets moved to trunk automatically. If the tests fail, the developer gets a mail with the failed tests.
We have several repositories: One for the core product, several plugins and a repository for every customer.
Up to now we run SVN and git, but switching all repos to git could be done, if necessary.
Which software could help us to get this done?
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Environment: We are 10 developers, and use python and django.
Question: Which tool can help me to force a stable master branch?
Update
I was on holiday, and now the bounty has expired. I am sorry. Thank you for your answers.
Question: Which tool can help me to force a stable master branch?
Having researched this particular aspect of CI quasi-pathologically since our ~20-person PHP/ZF1-based dev team made the switch from SVN to Git over the winter (and I became the de facto git mess-fixer), I can't help but share my experience with this particular aspect of continuous integration.
While having a "critical mass of unit tests running", in combination with a slew of conditionally parameterized Jenkins jobs triggering infinitely more conditionally parameterized jobs, covering every imaginable circumstance, would (perhaps) be the best and most proper way to move towards a Continuous Integration/Delivery/Deployment model, the meatspace resources required for such a migration are not insignificant.
Some questions:
Does your team have some kind of VCS workflow or, minimally, rules defined?
What percentage of your codebase, roughly, is under some kind of behavioral (e.g. Selenium), functional, or unit testing?
Does your team (or its senior devs) actually have the time/interest to get the most out of Gerrit's peer-based code review functionality?
On average, how many times do you deploy to production in any given day / week / month?
If the answer to more than one of these questions is 'no', 'none', or 'very little/few', then I'd perhaps consider investing in some whiteboard time to think through your team's general workflow before throwing Jenkins into the mix.
Also, git-hooks. Seriously.
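As a concrete (hypothetical) example of that advice, a pre-push hook can run the test suite and block the push when it fails. This sketch assumes a Python project tested with pytest; save it as .git/hooks/pre-push and make it executable. Git aborts the push when the hook exits non-zero.

```python
#!/usr/bin/env python
# Sketch of a git pre-push hook: runs the test suite and blocks the push
# on failure. The pytest command is an assumption; substitute your
# project's real test runner.
import subprocess
import sys

def run_hook(test_command=None):
    """Return 0 to allow the push, 1 to block it."""
    command = test_command or [sys.executable, "-m", "pytest", "-q"]
    if subprocess.run(command).returncode != 0:
        print("pre-push: tests failed, push aborted")
        return 1
    return 0
```

When installing it as the actual hook, end the file with `sys.exit(run_hook())` so git sees the exit code; the logic is kept in a function here only so it is easy to exercise with a different command.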
However, if you're super keen on having a CI/Jenkins server, or you have all those basics covered already, then I'd point you to this truly remarkable gem of a blog post:
http://twasink.net/2011/09/16/making-parallel-branches-meet-regularly-with-git-and-jenkins/
And its equally savvy cousin:
http://twasink.net/2011/09/20/git-feature-branches-and-jenkins-or-how-i-learned-to-stop-worrying-about-broken-builds/
Oh, and of course, the very necessary devopsreactions tumblr.
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Gerrit is for code review.
Jenkins is a job scheduler that can run any job you want, including one for:
compiling everything
launching some unit tests.
In each case, the idea is to do a guarded commit, i.e. pushing to an intermediate repo (Gerrit, or one monitored by Jenkins), and only pushing to the final repo if the intermediate process (review or automatic build/test) passes successfully.
By adding intermediate repos, you can easily enforce one unique branch on the final "blessed" repo, to which those intermediate repos push only the commits deemed worthy.
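A minimal sketch of such a guarded-commit gate, assuming a job (e.g. run by Jenkins) in a clone with two remotes, "staging" (where developers push) and "blessed" (the protected master). The remote names and the pytest command are assumptions for illustration.

```python
import subprocess
import sys

def run_cmd(command):
    """Default runner: execute a command and return its exit code."""
    return subprocess.run(command).returncode

def gate(branch="master", run=run_cmd):
    """Promote staging/<branch> to the blessed repo only if the tests pass."""
    if run(["git", "fetch", "staging", branch]) != 0:
        return False
    if run(["git", "checkout", "staging/" + branch]) != 0:
        return False  # check out exactly what the developer pushed
    if run([sys.executable, "-m", "pytest", "-q"]) != 0:
        return False  # tests failed: nothing reaches the blessed repo
    # Tests passed: promote the candidate commits to the protected branch.
    return run(["git", "push", "blessed", "HEAD:" + branch]) == 0
```

The `run` parameter exists only so the gate logic can be exercised without real remotes; the default runner executes the commands for real.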
It sounds like you are looking to establish a standard CI capability. You will need the following essential tools:
Source version control: SVN, git (you are already covered here)
CI server: Jenkins (you will need to build and run tests with each check-in and report results; Jenkins is the de facto standard tool used for this)
Testing: PyUnit
Artifact repository: you will need a mechanism for organizing and archiving the increments created with each build. This could be a simple home-grown directory-based system. I have also used Archiva, but there are other tools.
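For the "Testing: PyUnit" entry above, a minimal self-contained test module might look like this; the `add` function is a stand-in for real project code.

```python
import unittest

def add(a, b):
    """Stand-in for real project code under test."""
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        self.assertEqual(add(-1, -1), -2)

# A CI job would run modules like this with: python -m unittest discover
# and fail the build on any error.
```

The CI server's job is simply to run every such module on each check-in and mark the build broken when any assertion fails.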
There are many additional tools that might be useful depending on your development process:
Code review: if you want to make code review a formal gate in your process, Gerrit is a good tool.
Code coverage analysis: I've used EMMA in the past for Java. I am sure there are some good tools for Python coverage.
Many others: a library of Jenkins plugins that provide a variety of useful tools is available to you. Taking some time to review the available plugins will definitely be time well spent.
In my experience, establishing the right culture is as important as finding the right tooling.
Testing: one of the ten principles of CI is "self-testing builds". In other words, you must have a critical mass of unit tests running. Developers must become test-infected. Unit testing must become a natural, highly valued part of each developer's individual development process. In my experience, establishing a culture of test infection is the hardest part of deploying CI.
Frequent check-in: developers and managers must organize their work in a way that allows for frequent small check-ins. CI calls for daily check-ins. This is sometimes a difficult habit to establish.
Responsiveness to feedback: CI is about immediate feedback. Developers must be conditioned to respond to that immediate feedback. If unit tests fail, the build is broken. Within 15 minutes of a CI build breaking, the developer responsible should either have a fix checked in, or have the original bad check-in backed out.

What should we do when the buildserver is treated like a goldmine?

A year ago I started to create some automated builds on our build machine (TFS2008). Not so much for combining with full scale TDD (we still have a lot of old legacy code), but for being able to detect at an early stage if builds got broken. Another objective was also to minimize the packaging/deployment work.
This has been working quite well so far, but lately some coworkers have started to treat the build server as a goldmine of quick releases, and the testing process more and more often gets lower priority. Two or three days of refactoring some of our code showed that builds from our build server could potentially reach our customers. :)
I guess our build server has over time shifted from being a 'consistency tool' for the developers into a server producing packages that are expected to be release quality 24/7.
This is clearly a communication problem, and there should be a set of rules on this. Only problem is that I don't know where to begin. Does anyone have similar experiences with this?
You're correct, it is a communications problem. If your developers and management are expecting release-quality builds all the time, they're not understanding the process of build/test/release.
The only thing you can do is clarify the purpose of a build server: a single, centralized location for builds. You need to clarify the distinction between a build and a release. Builds should always succeed (no one should break the build) but the ability to create a build does not have any bearing whatsoever on build quality or the suitability of a given build for release.
Build quality is measured by unit, functional, and user acceptance testing. There is no replacement for these tests in preparing a build for release. The long-term costs of not doing these tests far outweigh the short-term benefits of getting a release out the door.
Our unit test server runs the tests and tags CVS. Then we go to a build server which has a script to create a release that is ready for customer installation. This release is then installed on a test server as if it were the customer's server, and then tested.
Judging from your story, you are hoping to find some script or setting that will prevent the build server from being used as a "quick release" server. The only real way to do this is process.
Rules in our company:
Developers check into CVS, they get mails from the unittest server if it fails, and have to fix that in code. No access to the build/test server for devs.
There is 1 specific developer who can create a release which he can send to the test department.
The test department installs the release on their test server and tests it.
The testers, and only the testers, can give a "Go" for release.
The release is done by a designated person who is also the customer contact.
As you can see, the developers are separated from the testers and the customer (formally speaking). In practice it is not all that rigid, of course, but people need to understand that if this process is not in place, the customer will get inferior-quality software.
The customer has to be educated that "fast" means "low quality". We can do it Fast, Good, or Cheap. Pick two.
http://www.sixside.com/fast_good_cheap.asp
I suggest that all builds created by the internal build server state in the splash screen "INTERNAL BUILD - NOT FOR CUSTOMERS", and the release build server plops in the official splash screen.