A year ago I started setting up automated builds on our build machine (TFS 2008). Not so much to support full-scale TDD (we still have a lot of old legacy code), but to detect broken builds at an early stage. Another objective was to minimize the packaging/deployment work.
This has been working quite well so far, but lately some coworkers have started to treat the build server as a goldmine of quick releases, and the testing process often gets lower priority. A 2-3 day refactoring of some of our code showed that builds from our build server could potentially reach our customers. :)
I guess our build server has over time shifted from being a 'consistency tool' for the developers into a server producing packages that are expected to be release quality 24/7.
This is clearly a communication problem, and there should be a set of rules for this. The only problem is that I don't know where to begin. Does anyone have similar experiences with this?
You're correct, it is a communications problem. If your developers and management expect release-quality builds all the time, they don't understand the build/test/release process.
The only thing you can do is clarify the purpose of a build server: a single, centralized location for builds. You need to clarify the distinction between a build and a release. Builds should always succeed (no one should break the build) but the ability to create a build does not have any bearing whatsoever on build quality or the suitability of a given build for release.
Build quality is measured by unit, functional, and user acceptance testing. There is no replacement for these tests in preparing a build for release. The long-term costs of not doing these tests far outweigh the short-term benefits of getting a release out the door.
Our unit-test server runs the tests and tags CVS. Then we move to a build server which has a script to create a release that is ready for customer installation. This release is then installed on a test server as if it were the customer's server, and then tested.
Judging from your story, you are hoping to find some script or setting that will prevent the build server from being used as a "quick release" server. The only real way to do this is process.
Rules in our company:
Developers check into CVS; they get mail from the unit-test server if the tests fail, and have to fix that in code. Developers have no access to the build/test server.
One designated developer can create a release, which he sends to the test department.
The test department installs the release on their test server and tests it.
The testers, and only the testers, can give a "Go" for release.
The release is done by a designated person who is also the customer contact.
As you can see, the developers are separated from the testers and the customer (formally speaking). In practice it is not all that rigid, of course, but people need to understand that if this process is not in place, the customer will get inferior-quality software.
The customer has to be educated that "fast" means "low quality". We can do it Fast, Good, or Cheap. Pick two.
http://www.sixside.com/fast_good_cheap.asp
I suggest that all builds created by the internal build server state in the splash screen "INTERNAL BUILD - NOT FOR CUSTOMERS", and the release build server plops in the official splash screen.
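For instance, the internal build script could stamp that watermark into the product at build time. A minimal sketch in Python, where the template/output file names and the watermark wiring are purely illustrative:

```python
# Hypothetical build step: stamp the splash-screen text according to
# which build server produced the package. File names are illustrative.
import sys

SPLASH_TEMPLATE = "splash.txt.in"   # template containing a {watermark} placeholder
SPLASH_OUTPUT = "splash.txt"        # file that gets compiled into the product

def stamp_splash(build_kind: str) -> None:
    watermark = "" if build_kind == "release" else "INTERNAL BUILD - NOT FOR CUSTOMERS"
    with open(SPLASH_TEMPLATE) as f:
        text = f.read()
    with open(SPLASH_OUTPUT, "w") as f:
        f.write(text.format(watermark=watermark))

if __name__ == "__main__":
    # e.g. python stamp_splash.py internal   (on the internal build server)
    #      python stamp_splash.py release    (on the release build server)
    stamp_splash(sys.argv[1] if len(sys.argv) > 1 else "internal")
```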
Related
Our development department is growing, and I want to enforce a stable master/trunk.
Up to now, every developer can commit to the master/trunk. In the future, developers should commit to a staging area, and if all tests pass, the code gets moved to the trunk automatically. If the tests fail, the developer gets a mail with the failed tests.
We have several repositories: One for the core product, several plugins and a repository for every customer.
Up to now we run SVN and Git, but switching all repos to Git could be done if necessary.
Which software could help us to get this done?
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Environment: we are 10 developers, and we use Python and Django.
Question: Which tool can help me to enforce a stable master branch?
Update
I was on holiday, and now the bounty has expired. I am sorry. Thank you for your answers.
Having been researching this particular aspect of CI quasi-pathologically since our ~20-person PHP/ZF1-based dev team made the switch from SVN to Git over the winter (and I became the de facto git mess-fixer), I can't help but share my experience with this particular aspect of continuous integration.
While having a "critical mass of unit tests running", in combination with a slew of conditionally parameterized Jenkins jobs triggering infinitely more conditionally parameterized jobs covering every imaginable circumstance, would (perhaps) be the best and most proper way to move towards a Continuous Integration/Delivery/Deployment model, the meatspace resources required for such a migration are not insignificant.
Some questions:
Does your team have some kind of VCS workflow or, minimally, rules defined?
What percentage of your codebase, roughly, is under some kind of behavioral (e.g. Selenium), functional, or unit testing?
Does your team (or its senior devs) actually have the time and interest to get the most out of Gerrit's peer-based code review functionality?
On average, how many times do you deploy to production in any given day / week / month?
If the answer to more than one of these questions is 'no', 'none', or 'very little/few', then I'd consider investing in some whiteboard time to think through your team's general workflow before throwing Jenkins into the mix.
Also, git-hooks. Seriously.
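To make that concrete: a pre-push hook can run the test suite and abort the push on failure. A minimal sketch, assuming a Python project whose suite starts with a single command (the command here is a placeholder); save it as .git/hooks/pre-push and mark it executable:

```python
#!/usr/bin/env python
# Minimal sketch of a guard hook: run the tests and refuse to push on failure.
# TEST_COMMAND is a placeholder; adjust it to whatever starts your suite.
import subprocess
import sys

TEST_COMMAND = ["python", "-m", "pytest"]  # or ["python", "manage.py", "test"]

def main() -> int:
    result = subprocess.run(TEST_COMMAND)
    if result.returncode != 0:
        sys.stderr.write("pre-push: test suite failed, push aborted.\n")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```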
However, if you're super keen on having a CI/Jenkins server, or you have all those basics covered already, then I'd point you to this truly remarkable gem of a blog post:
http://twasink.net/2011/09/16/making-parallel-branches-meet-regularly-with-git-and-jenkins/
And its equally savvy cousin:
http://twasink.net/2011/09/20/git-feature-branches-and-jenkins-or-how-i-learned-to-stop-worrying-about-broken-builds/
Oh, and of course, the very necessary devopsreactions tumblr.
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Gerrit is for code review.
Jenkins is a job scheduler that can run any job you want, including one that:
compiles everything
runs only the unit tests.
In each case, the idea is to do a guarded commit: pushing to an intermediate repo (Gerrit's, or one monitored by Jenkins), and only pushing to the final repo if the intermediate process (review or automatic build/test) has passed successfully.
By adding intermediate repos, you can easily enforce one unique branch on the final "blessed" repo, to which those intermediate repositories push only the commits deemed worthy.
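A minimal sketch of such a guarded-commit job, run by the scheduler inside a working clone after each push to the intermediate repo (all URLs, branch names, and the test command are placeholders):

```python
# Sketch of a guarded-commit job: fetch the candidate commit from the
# intermediate repo, run the tests, and only push it on to the blessed
# repo if they pass. Assumes it runs inside a working clone.
import subprocess
import sys

STAGING = "git@example.com:staging.git"   # where developers push (placeholder)
BLESSED = "git@example.com:blessed.git"   # the stable master (placeholder)

def run(*cmd: str) -> None:
    subprocess.check_call(list(cmd))

def main() -> int:
    run("git", "fetch", STAGING, "master")
    run("git", "checkout", "FETCH_HEAD")          # detached HEAD at the candidate
    tests = subprocess.run(["python", "-m", "pytest"])
    if tests.returncode != 0:
        sys.stderr.write("tests failed; commit stays in staging\n")
        return 1
    run("git", "push", BLESSED, "HEAD:master")    # promote the worthy commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```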
It sounds like you are looking to establish a standard CI capability. You will need the following essential tools:
Source version control: SVN, Git (you are already covered here)
CI server: Jenkins (you will need to build and run tests with each check-in, and report results; Jenkins is the de facto standard tool used for this)
Testing: PyUnit (see the minimal example right after this list)
Artifact repository: you will need a mechanism for organizing and archiving the increments created with each build. This could be a simple home-grown, directory-based system. I have also used Archiva, but there are other tools.
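Since the environment here is Python/Django, the testing piece can start as plainly as a PyUnit (unittest) case that the CI server runs on every check-in; all names below are made up for illustration:

```python
# A minimal PyUnit (unittest) test of the kind a CI server runs on
# every check-in; the module under test and its function are illustrative.
import unittest

def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```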
There are many additional tools that might be useful depending on your development process:
Code review: if you want to make code review a formal gate in your process, Gerrit is a good tool.
Code coverage analysis: I've used EMMA in the past for Java. I am sure there are some good tools for Python coverage.
Many others: a library of Jenkins plugins that provide a variety of useful tools is available to you. Taking some time to review the available plugins will definitely be time well spent.
In my experience, establishing the right culture is as important as finding the right tooling.
Testing: one of the ten principles of CI is "self-testing builds". In other words, you must have a critical mass of unit tests running. Developers must become test-infected. Unit testing must become a natural, highly valued part of each developer's individual development process. In my experience, establishing a culture of test infection is the hardest part of deploying CI.
Frequent check-ins: developers and managers must organize their work in a way that allows for frequent small check-ins. CI calls for daily check-ins. This is sometimes a difficult habit to establish.
Responsiveness to feedback: CI is about immediate feedback. Developers must be conditioned to respond to the immediate feedback. If unit tests fail, the build is broken. Within 15 minutes of a CI build breaking, the developer responsible should either have a fix checked in or have the original, bad check-in backed out.
During a development project, the delivered code can pass through different stages and environments before it reaches production (e.g. a Development Environment for testing deployment processes, Internal Testing for QC, Pre-Production, and finally Production).
This development effort produces many candidate releases, of which one can be nominated to move upwards in the development process until it reaches production. There might also be cases where the code deployed on production requires hot-fixes in parallel to the current internal development lines (i.e. parallel development).
For a UCM project maintained in IBM Rational ClearCase (CC), what is the recommended project structure to create in "Project Explorer" to accommodate the following:
The developers should mainly connect to and deliver their work on the internal development line (or, in CC terminology, the development stream).
Once the code delivered to this development stream is considered acceptable, the Technical Team Lead (TTL) can create a baseline. This baseline can later be retrieved by the Deployment Engineer to be deployed on the local Development Environment.
If this baseline is found acceptable, it can be delivered as a whole to the Internal Testing stream to be deployed for further Quality Control (QC) tests.
If that baseline is in turn found acceptable, it can be delivered as a whole to Pre-Production, and so forth to Production, similar to what was described above.
Of course, if any of these baselines were not accepted by its receiving party, it can be rejected, and the receiving party will wait for another baseline to be recommended for their stream.
Note: The Deployment Engineer will always use a dedicated stream for each environment to get his/her files required to carry out the build/deployment activities.
My apologies to everybody here, since I understand that answering this can take long, but my question concentrates on the exact types of streams and/or views that need to be created in "Project Explorer" to meet the above objectives.
I am really trying to come up with a best-practice approach for release management using CC, and how it can best be used for this purpose.
I would appreciate your help guys and many thanks to all in advance ...
The rule of thumb is simple:
The fewer branches, the better.
I mean, if you have ever done a deliver and rebase with ClearCase, you know:
how painful it is
how poorly it scales with the number of files (merging 1,000 files is awfully long; merging 5,000 files is murder)
So the real rule of thumb is:
if you don't have to modify any file for a given development stage, don't create a branch.
For instance, for promoting code to QA, where you will only read it (and run some tests, in order to accept the code if they pass, or reject it if they fail), don't create a QA stream to which you would deliver the code: it takes too long for a non-existent added value.
Use baseline promotion levels whenever you can, and recommend your promoted baselines.
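As a rough sketch of that promotion-instead-of-delivery step (baseline, stream, and level names are placeholders, and the exact cleartool chbl/chstream flags should be checked against your ClearCase version's documentation):

```python
# Sketch: promote a ClearCase UCM baseline instead of delivering it to a
# new stream. All selectors and the promotion level are placeholders;
# verify the cleartool chbl/chstream syntax for your installation.
import subprocess

PVOB = "@/vobs/my_pvob"   # hypothetical project VOB

def promote(baseline: str, level: str, stream: str) -> None:
    # Raise the baseline's promotion level (e.g. BUILT -> TESTED)...
    subprocess.check_call(
        ["cleartool", "chbl", "-level", level, f"baseline:{baseline}{PVOB}"])
    # ...and recommend it on the stream, so consumers pick it up
    # without any merge taking place.
    subprocess.check_call(
        ["cleartool", "chstream", "-recommended",
         f"baseline:{baseline}{PVOB}", f"stream:{stream}{PVOB}"])

if __name__ == "__main__":
    promote("MYPROJ_BL_42", "TESTED", "myproj_integration")
```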
The Deployment Engineer will always use a dedicated stream for each environment to get his/her files required to carry out the build/deployment activities.
Err... no, not if you don't have any changes to make.
The Deployment Engineer doesn't care at all where the baseline comes from, only whether the code deploys and runs successfully.
I'm not sure if any of you have encountered this scenario. We have small projects or fixes that we need to push to LIVE every other day.
One camp in the team says "do a clean build and deploy on the prod server".
The other camp says we don't have to do a full deploy; we can just do a DLL drop or an ASPX drop.
They have listed a few pros and cons for each method, but I want to know which method you generally follow, and whether there are any major setbacks to each method.
I would first of all try to minimize the cost of building a new release by automating as much as possible. (Which might be easier said than done, but it is usually well-invested time and money, especially if you release often.)
The big issue I have with dropping in new binaries is that it is often a manual process, and manual processes are performed by humans, who tend to mess up the easiest tasks over and over again.
Aren't you really looking for a "patch management system" that could help you distribute the changes in a controlled manner and get rid of the manual work?
That way you would still have pretty good integrity, since the patches should be versioned and, hopefully, carefully tested, but they can be developed and deployed with much less overhead than a full release.
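To illustrate the idea (this is not a specific product; every path below is hypothetical), a scripted patch drop could copy a versioned, tested file set onto the target while keeping a rollback copy:

```python
# Sketch of a scripted "patch drop": copy a versioned set of files to the
# target, backing up whatever gets overwritten so the patch can be rolled
# back. Paths and the directory layout are hypothetical.
import shutil
from pathlib import Path

PATCH_DIR = Path("patches/1.4.2")     # versioned, tested patch contents
TARGET = Path(r"\\prodserver\app")    # deployment target
BACKUP = Path("backups/1.4.1")        # rollback copy

def deploy() -> None:
    BACKUP.mkdir(parents=True, exist_ok=True)
    for src in PATCH_DIR.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(PATCH_DIR)
        dest = TARGET / rel
        if dest.exists():
            backup_path = BACKUP / rel
            backup_path.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(dest, backup_path)   # save the file we overwrite
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

if __name__ == "__main__":
    deploy()
```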
For sites under my control, I use SVN or Git for revision control, and update the source and compile on the server directly, if necessary. That ensures the integrity of the release, which I suspect is the argument put forth by the "do a clean build and deploy on prod server" team. For servers not under my control, I do whatever I am told :)
In the past my development team has mostly done waterfall development against an existing application, and deployments were only really done towards the end of a release, which would normally mean TEST, UAT, and PROD releases, only three to five releases in total in a two-month cycle.
A release was an MSI installer, deployed via Group Policy.
We have now moved to a more agile methodology and require releases at least once per day for testing, sometimes more often.
The application is a VB6 app, and the MSI was taking care of the COM registrations for us; users do not have elevated privileges on their machines.
Does anyone have any better solutions for rapid deployment?
We have considered batch/scripted installs of the MSI, or doing COM registrations per file, both using CPAU for elevated privileges, and ClickOnce. Neither of these have been tested yet.
Edit: Thanks for suggestions.
To clarify, my pain point is that the MSI build/deployment process takes a long time; it can take up to two hours to get a new build onto the testers' desktops. The testers do not have admin rights on their machines (and will not get them), so I am looking for a better solution.
I have played around with ClickOnce, using a .NET wrapper which starts up the application and has all the OCX/DLL VB6 components as isolated dependencies, but it is having issues finding all the assemblies when it starts up, or messages to that effect.
CruiseControl and NAnt are probably your best bet for builds with flexible output. But rapid deployment?
My concern is that you are looking at the daily builds in the wrong way. The dailies do NOT need to be widely deployed. In fact, QA and Development are the only ones who should care about the builds on a day-to-day basis. Even then, the devs shouldn't be out of sync ;).
The customer team should only receive builds at the end of an iteration. That is where you show them what you have done, they provide feedback, and you move forward from there. Giving them daily builds could cause a vicious thrashing that would kill your velocity.
All that being said, a nice deployment package might be good for QA. But again, it depends on how in step they are with your development iterations. My experience, right or wrong, is that QA is one iteration back testing the deliverables from the last iteration. From that point of view, they should be testing with the last "stable" release as well.
Is this something you can do in a virtual machine? You could securely give your testers admin rights on the virtualized system and most virtualization software has some form of versioning so you can roll back to a "good" state if something goes wrong. I've found it to be very useful for testing.
I'd recommend ClickOnce with the option to update on execution. That way only people using the software receive and install the updates.
You could try registry-free COM. See this other question. ActiveX EXEs still have to be registered though.
EDIT: to clarify, using registry-free COM means the OCX/DLL components you mention don't need to be registered. Nor do any OCX/DLL components they use. You can just copy the whole application directory onto a tester's machine and it will work straightaway.
If I understand your question correctly, you need admin rights to install your product. I see three options:
1) Don't install to the tester's desktops. Get some scratch testing machines (as dmo suggested, VMWare might help) that you can safely give them admin rights to. This may mean giving them a test domain and their own group policy to edit.
2) Build a variant that doesn't require MSI installation, and can be executed directly. Obviously your testers would not be testing the deployment and installation process with this variant, but they could perform other tests of the product's functionality. I don't know if this is possible with your product; it would certainly be work.
3) Take your agile medicine: "[prefer] responding to change over following a plan". That is, if denying admin rights to your testers is interfering with their ability to do their jobs efficiently, then challenge the organization to give them admin rights. (from experience, this will mean shifting to #1, but it might be the best way to make the case). If they are expected to test the product, how can they not even be allowed to install it in the same way a customer would?
If the MSI deployment is taking velocity out of agile testing, then you should test MSI deployment less regularly.
Use XCOPY deployment wherever possible, using .local for COM components. This can be a problem with third-party components, but as third-party components are pretty stable, you should be able to build a custom MSI for these, install them once, and be done with it.
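To illustrate the .local part: once an empty <exe name>.local file sits next to the executable, Windows prefers the DLL/OCX copies in the application directory, so deployment becomes a plain copy. A minimal sketch with hypothetical paths (shutil.copytree with dirs_exist_ok needs Python 3.8+):

```python
# Sketch of XCOPY-style deployment with ".local" DLL/COM redirection:
# copy the build output next to an empty "<app>.exe.local" file so the
# loader prefers the local DLL/OCX copies. Paths are hypothetical.
import shutil
from pathlib import Path

BUILD_OUTPUT = Path("build/output")
DEST = Path(r"C:\Apps\MyVb6App")

shutil.copytree(BUILD_OUTPUT, DEST, dirs_exist_ok=True)
(DEST / "MyVb6App.exe.local").touch()   # enable DLL/COM redirection
```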
You should try an automated build/deploy process or a script that you can run manually. Try TeamCity or CruiseControl. Good luck!
I'm not sure just precisely what your pain point is.
You specifically mention registration of VB6 COM objects. Does the installer sometimes fail because of that?
Is it that the installer works but people don't know to install the new build so they are more often than not reporting bugs on an old build?
If the former, then I suspect the problem is that VB6 was very likely playing fruit-basket turnover with the GUIDs when rebuilding the solution. Try recreating your public interfaces in MIDL and have your VB6 classes implement those interfaces.
If the latter, then try Microsoft's SMS product. No, it has nothing to do with cell phones. If all the users aren't on the same domain, then you will have to build an "auto update" feature into your product. Here is a third-party offering that I've heard of but never used.
I'm using SetupBuilder (http://setupbuilder.com/products_setupbuilder_pro.htm) for all my builds. Very extensible. Excellent support.
Not sure exactly if it fits your needs, but this kind of post on the forums, "Installing as a limited account user (non-admin) on XP" (http://www.lindersoft.com/forums/showthread.php?t=11891&highlight=admin+rights), makes me think it might be.
I can think of plenty of good reasons for using it; however, what are the downsides?
(Apart from buying another server.)
What are some advantages of using a daily build instead?
(It's worth noting that by "continuous integration" I mean automated integration with an automated build process and automatically runs tests and automatically detects failure of each piece.
It's also worth noting that "continuous integration" just means to a trunk or test server. It does not mean "push every change live".
There are plenty of ways to do continuous integration wrong.)
I can't think of any reason not to do continuous integration testing. I guess I'm assuming that "continuous integration" includes testing. Just because it compiles doesn't mean it works.
If your build and/or tests take a long time, then continuous integration can get expensive. In that case, run the tests obviously related to your change before the commit (coverage analysis tools like Devel::CoverX::Covered can help discover which tests go with which code), do your integration testing after the commit using something like SVN::Notify, and alert the developers if it fails. Archive the test results using something like Smolder. That allows developers to work quickly without having to sit around watching test suites run, while still catching mistakes early.
That said, with a little work you can often speed up your build and test process. Many times, slow tests are the result of each test having to do too much setup and teardown, which points at a system that's far too coupled, requiring the whole system to be set up just to test a small piece.
Decoupling often helps, breaking out sub-systems into independent projects. The smaller scope makes for easier understanding and faster builds and tests. Each commit can do a full build and test without inconveniencing the programmer. Then all the sub-projects can be collected together to do integration testing.
One of the major advantages of running the test suite on every commit, even if it's after the commit, is you know just what broke the build. Rather than "something we did yesterday broke the build", or worse "four things we did yesterday broke the build in different ways and now we have to untangle it" it's "revision 1234 broke the build". You only have to examine that one revision to find the problem.
The advantage of doing a daily build is that at least you know there's a complete, clean build and test run happening every day. But you should be doing that anyway.
I don't think there are any downsides to it. But for the sake of the argument, here is Eric Minick's article on UrbanCode ("It's about tests not builds."). He criticises the tools based on Martin Fowler's work, saying that they don't allow enough time for tests.
"To be truly successful in CI, Fowler asserts that the build should be self-testing and that these tests include both unit and end-to-end testing. At the same time, the build should be very fast - ideally less than ten minutes - because it should run on every commit. If there are a significant number of end-to-end tests, executing them at build time while keeping the whole process under ten minutes is unrealistic.
Add in the demand for a build on every commit, and the requirements start to feel improbable. The options are either slower feedback or the removal of some tests."
James Shore had a great series of blog entries on the dangers of thinking that using a CI tool like CruiseControl meant you were doing continuous integration:
Why I Don't like CruiseControl
Continuous Integration is an Attitude not a Tool
Continuous Integration on a Dollar a Day
One danger of setting up a CI server is goal displacement, thinking that the important thing is to "keep the build passing" as opposed to "ensuring we have high quality software". So people stop caring about how long the tests take to run. Then they take too long to run all of them before checkin. Then the build keeps breaking. Then the build is always broken. So people comment out the tests to make the build pass. And the quality of the software goes down, but hey, the build is passing...
There are generally two cases where I've seen continuous integration not really make sense. Keep in mind I am a big advocate of CI and try to use it when I can.
The first one is when the ROI just doesn't make sense. I currently develop several small internal apps. The applications are normally very trivial, and the whole development lifecycle is about a week or two. To properly set everything up for CI would probably double that, and I would probably never see that investment back again. You can argue that I'll get it back in maintenance, but these apps are as likely to be discarded as they are to be updated. Keep in mind that your job is probably to ship software, not to reach 100% code coverage.
The other scenario that I have heard mentioned is that CI doesn't make sense if you're not going to do anything with the results. For example, if your software has to be sent to QA, and the QA staff can only really look at a new version every couple of days, it makes no sense to have builds every few hours. If other developers aren't going to look at code metrics and try to improve them, it makes no sense to track them. Granted this is not the fault of CI not being a good technique, it is a lack of your team willing to embrace CI. Nevertheless, implementing a CI system in such a scenario doesn't make sense.
When starting, it takes a while to set everything up.
If you add tests, coverage, static code inspections, duplicate search, documentation builds, and deploys, it can take a long time (weeks) to get it right. After that, maintaining the build can be a problem.
For example, if you add tests to the solution, you can have the build detect them automatically based on some criteria, or you have to update the build settings manually. Auto-detection is much harder to get right. The same goes for coverage. The same for documentation generation...
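With Python's unittest, for example, the build's test step can rely on discovery criteria instead of hand-maintained lists of test modules; a minimal sketch, assuming tests live under a tests/ directory and follow a test_*.py naming convention:

```python
# Minimal sketch of criteria-based test auto-detection: the build step
# discovers anything matching test_*.py under tests/ instead of listing
# test modules in the build settings by hand.
import sys
import unittest

suite = unittest.defaultTestLoader.discover("tests", pattern="test_*.py")
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```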
The only good reason not to do continuous integration comes when you've gotten your project working to the point where your integration tests hadn't identified any defect in a good long while and they're taking too much time to run every time you do a build. In other words: you've done enough continuous integration that you've proven to yourself that you no longer need it.