I am working in a small team that has just moved to Azure DevOps. When a new build is pushed out, we can't figure out a seamless and convenient way to both run regression tests and keep full historical data of the test plans that have been run. I am worried that once Azure DevOps has been in use for a while, it may become difficult to locate older test runs.
So far we have tried creating one large test suite, with sub-test suite folders for different areas of functionality. Each sub-test suite is then run individually in Test Runner. The problem with this is that when we reset the tests and run them again, we lose the historical test data, and any tests that aren't completed on the test run go missing, making it look like a higher percentage of test cases have passed or failed. Another problem is that the test plans are fragmented.
Another option we have explored is adding a label (instead of a sub-test suite) to each test case. This causes a problem when trying to run all the tests, as Test Runner appears to have a limit of no more than 100 test cases per run. On top of that, the test runs' names are indistinguishable from each other when run separately, as they all have the same name (from the test suite).
An ideal solution would be something like TestRail (which we have just moved from), where cases from a test suite can be selected for a run and the test plan is then stored indefinitely. Unfortunately we are unable to move back to TestRail.
In the Azure DevOps documentation on creating a Test Plan, the test plan is created for a specific sprint.
What you can do is create a Regression Test Area Path containing the tests you want, and then every time you want to run a regression test you can add it to the Test Plan created for the sprint.
It does seem a bit confusing, but in essence it is similar to how it's done in TestRail when you create a Test Run and select which tests you want in the run. It can be a bit of a terminology switch to create a Test Plan for every sprint, but you can do that, add whichever regression tests you want to run during the sprint, and that way you don't lose any historic test data.
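Since the original concern was losing track of older runs: runs executed this way stay queryable afterwards. As a rough sketch only (the endpoint, api-version, and the placeholder organisation, project, and PAT values are assumptions to verify against the current Azure DevOps REST documentation), something like the following lists a project's historical test runs:

    // Hedged sketch: list historical test runs via the Azure DevOps REST API.
    // Endpoint/api-version are assumptions; org, project, and PAT are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class ListTestRuns {
        public static void main(String[] args) throws Exception {
            String org = "my-organisation";          // placeholder
            String project = "my-project";           // placeholder
            String pat = System.getenv("AZDO_PAT");  // personal access token
            String auth = Base64.getEncoder().encodeToString((":" + pat).getBytes());

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://dev.azure.com/" + org + "/" + project
                            + "/_apis/test/runs?api-version=7.1"))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON list of past test runs
        }
    }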
For a new project, I want to keep track of application performance by using BenchmarkDotNet to build up the performance testing suite. Does anyone know if there is anything already built to pass/fail an Azure DevOps build based on the performance results? For example, if a performance test is suddenly three times slower than the previous run, I want to fail the build. I've heard of teams keeping running metrics of performance, but I've never heard how they go about setting something like that up.
I see that with BenchmarkDotNet I can export the results. But I'm hoping there is already something that exists, instead of having to create my own solution to this.
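There may well be an off-the-shelf option, but the roll-your-own fallback mentioned above can be quite small: export the benchmark results, keep a baseline from the previous run, and fail the build when the new mean exceeds a threshold. A minimal sketch, shown in Java purely for illustration (the one-number-per-file format, the file names, and the 3x threshold are all assumptions; BenchmarkDotNet itself would produce richer JSON/CSV output to parse):

    // Hypothetical regression gate: compares the current mean benchmark time
    // against a stored baseline and exits non-zero if it is 3x slower, which
    // a CI server treats as a failed build step. File format is an assumption.
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BenchmarkGate {
        public static void main(String[] args) throws Exception {
            double baseline = Double.parseDouble(Files.readString(Path.of(args[0])).trim());
            double current  = Double.parseDouble(Files.readString(Path.of(args[1])).trim());

            System.out.printf("baseline=%.1f ns, current=%.1f ns%n", baseline, current);
            if (current > 3.0 * baseline) {
                System.err.println("Performance regression detected; failing the build.");
                System.exit(1);
            }
        }
    }

A CI step could then publish the current number as the next baseline only when the gate passes.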
I am looking for some advice on the best practice for versioning software.
Background
Build automation with gradle.
Continuous integration with Jenkins
CVS as SCM
Semantic Versioning
Sonatype Nexus inhouse repo
Question
Let's say I make a change to some code. An automated CI job will pull it in and run some tests against it. If these tests pass, should Jenkins update the version of the code and push it to Nexus? Should it be pushed up as a "SNAPSHOT"? Should it be pushed to Nexus at all, or just left in the repository until I want to do a release?
Thanks in advance
I know you said you are using CVS, but first of all, have you looked at the git-flow methodology?
http://nvie.com/posts/a-successful-git-branching-model/
I have little experience with CVS, but the model can be applied to it, and a good versioning and CI procedure begins with well-defined branches: at minimum, one for the latest release and one for the latest in-development version.
With this you can tell the CI application what it is working with.
I didn't have time for a more detailed answer before, so I will expand now and try to give a general answer.
Branches
Once you have clearly defined branches you can control your workflow. For example, it is usual to have 'master' and 'develop' branches, where master contains the latest release and develop contains the next release.
This means you can always point to the latest release of the code (it is in the master branch), while the next version is in the develop branch. Of course, this can be more detailed, such as tagging the master branch for the various releases or having an additional branch for each main feature, but these two are enough.
Following this, if you need to change something in the code, you edit the develop branch and make sure it is all correct, keep making changes until you are happy with the current version, and then move this code to master.
Tests
Now, how do you make sure everything is correct and valid? By including tests in your project. There is a lot that can be read about testing, but let's keep it simple. There are two main types of tests:
White box tests, where you know the insides of the code, and prepare the tests for the specific implementation, making sure it is built as you want
Black box tests, where you don't know how the code is implemented (or at least, you act as if you didn't), and prepare more generic tests, meant to make sure it works as expected
Now, going to the next step, you won't hear much about these two tests, and instead people will talk about the following ones:
Unit tests, where you test the smallest piece of code possible
Integration tests, where you connect several pieces of code and test them together
"The smallest piece of code possible" has a lot of different meanings, depending on the person and project. But keeping with the simplification, if you can't make a white box test of it, then you are creating an integration test.
Integration tests involve things like database access and running servers, and they take a long time, at least much longer than unit tests. Also, because of their complexity, integration tests may require setting up a specific environment.
This means that while unit tests can be run locally with ease, integration tests may be so slow that people dislike running them, or it may simply be impossible to run them on your machine.
So what do you do? Easy: separate the tests, so unit tests can be run locally after each change, while integration tests are run (after the unit tests) by the CI server after each commit.
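As a minimal sketch of that separation (JUnit 5 assumed; the class and method names are made up for the example), tagging the slow tests lets the local build exclude them while the CI build runs everything:

    // Illustrative sketch: tag integration tests so the CI server can run them
    // in a separate, slower stage while local builds skip them.
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ExampleTests {

        // Hypothetical unit under test, included only so the sketch compiles on its own.
        static int applyDiscount(int price, int percent) {
            return price - price * percent / 100;
        }

        @Test // unit test: no I/O, milliseconds, safe to run locally after every change
        void appliesDiscount() {
            assertEquals(90, applyDiscount(100, 10));
        }

        @Test
        @Tag("integration") // filtered out locally, run by the CI server after each commit
        void writesOrderToDatabase() {
            // would start or connect to a real database here, hence the tag
        }
    }

With Gradle, for example, the local test task could use useJUnitPlatform { excludeTags("integration") } while a separate CI-only task includes that tag.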
Additional tests
Just as a comment, don't stop at this simplified vision of tests. There are several ways of handling tests, and some tests don't fit neatly into unit or integration testing. For example, it is always a good idea to validate code style rules, or you can make a test that just deploys the project onto a server to make sure nothing breaks.
CI
Your CI server should be listening for commits, and if correctly configured it will know whether each commit comes from a development version, a release, or anything else, allowing you to customise the process as you wish.
First of all, it should run all the tests. No excuses; even if it takes two hours, it should run all the tests, as this is your shield against future problems.
If there are errors, then the CI server will stop and send a warning. Fix the code and start again. If all tests passed, then congratulations.
Now it is the time to deploy.
Deploying should always be done with care. The latest version available in the dependency repository should always be the most current one.
It is nice to have a script that deploys releases into the repository after a commit, but unless you have some sort of final validation, a manual human-controlled one, you may end up releasing a bad version.
Of course, you may ignore this for development versions, as long as they are segregated from the actual releases, but otherwise it is best to handle the final deployment by hand.
It may be done with a script or whatever you prefer, but it should be you who begins the deployment of releases, not the machine.
CI customisation
Having a CI server allows for much more than just testing and building.
Do you like reports? Then generate a test coverage report, a quality metrics report, or whatever you prefer. Are you using an external service for this? Then send it the files and let it do its work.
Does your project contain files for generating documentation? Build it and store it somewhere.
We use a fairly standard CI deployment pipeline in TeamCity to package our application. We started out with the following pipeline, where each of the steps below represents a gate that the build must pass in order to advance to the next step:
Compile
Unit Tests
Back-end component integration tests
Front-end acceptance tests (Selenium based)
Package
This worked all right at the beginning of the project, when the front-end test suite was small and relatively quick (under 2 minutes). However, as the suite grew in size and length (15 minutes and growing), firing it off on every check-in quickly became untenable. We have since removed the suite from our main pipeline and kick it off four times daily on an independent pipeline:
Compile
Unit Tests
Back-end component integration tests
Package
The problem with this approach is clear: it is quite likely a build will make it all the way to the package stage even though it caused regressions in the front-end test suite. I suppose I could invalidate an already created package if the front-end test suite fails, but this seems clunky. We've looked at optimizing the test suite further, but I think that's a dead end unless we can run the tests in parallel, something which TeamCity doesn't support.
Suggestions/critiques welcome.
Not sure what development platform you are working with here, but I see two general options.
Use Selenium Grid or something like TestNG (see the sketch after these two options).
Leverage any TeamCity build agents you have, and split the tests up to run in parallel across an array of these agents. I have in the past had dedicated build agents running tests, but even so this method will eventually fall over if your test suites grow too large.
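For the first option, a rough Selenium Grid sketch (standard Selenium Java bindings assumed; the hub URL and target page are placeholders): pointing tests at a grid hub via RemoteWebDriver lets several nodes or agents execute them in parallel.

    // Illustrative sketch: route a WebDriver session through a Selenium Grid hub
    // so tests can be distributed across grid nodes. Hub URL is a placeholder.
    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class GridSmokeTest {
        public static void main(String[] args) throws Exception {
            ChromeOptions options = new ChromeOptions();
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://selenium-hub.internal:4444/wd/hub"), options);
            try {
                driver.get("https://example.com/");
                System.out.println("Title: " + driver.getTitle()); // trivial check
            } finally {
                driver.quit(); // always release the grid slot
            }
        }
    }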
You should be aiming to reduce the amount of testing you do directly through the UI. Look at how putting more emphasis on testing code further down the stack can reduce your direct UI test burden. In a recent project we refactored the JavaScript in our app to make it testable. These faster-running tests meant that we could remove a large number of our slower-running WebDriver tests. It's a bit of an investment in time and effort, but it causes much less pain than your current situation.
Scheduled builds should be avoided where possible as, like you mentioned, you could end up with packaged software that has bundled defects, plus the feedback time increases massively.
Hi, in my project we have hundreds of test cases. These test cases are part of the build process, which gets triggered on every check-in and sends mail to our developer group. This project is fairly big and has been around for more than five years.
Now we have so many test cases that the build takes more than an hour. Some of the test cases are not structured properly, and after refactoring them I was able to reduce the running time substantially, but we have hundreds of test cases and refactoring them one by one seems a bit too much.
Now I run some of the test cases (which take really long to execute) only as part of the nightly build and not as part of every check-in.
I am curious how others manage this.
I believe it was in "Working Effectively with Legacy Code" that Michael Feathers said that if your test suite takes longer than a couple of minutes, it will slow developers down too much and the tests will start getting neglected. It sounds like you are falling into that trap.
Are your test cases running against a database? Then that's most likely your biggest source of performance problems. As a general rule, test cases shouldn't ever be doing I/O, if possible. Dependency injection allows you to replace a database object with mock objects that simulate the database portion of your code. That lets you test the code without worrying whether the database is set up correctly.
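A minimal sketch of that idea (all type names here are hypothetical): the code under test depends on an interface, so unit tests can inject an in-memory fake instead of a real database connection.

    // Illustrative dependency-injection sketch: the service takes its repository
    // through the constructor, so tests can pass an in-memory fake with no I/O.
    import java.util.ArrayList;
    import java.util.List;

    interface OrderRepository {
        void save(String order);
        List<String> findAll();
    }

    class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) { // injected, not created internally
            this.repository = repository;
        }

        void placeOrder(String order) {
            repository.save(order);
        }
    }

    // In-memory fake used by unit tests; no database, no setup, runs in microseconds.
    class InMemoryOrderRepository implements OrderRepository {
        private final List<String> orders = new ArrayList<>();
        public void save(String order) { orders.add(order); }
        public List<String> findAll() { return orders; }
    }

Mocking libraries can generate such fakes for you, but even a hand-rolled one like this removes the database from the unit-test path.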
I highly recommend Working Effectively with Legacy Code by Michael Feathers. He discusses how to handle a lot of the headaches that you seem to be running into without having to refactor the code all at once.
UPDATE:
Another possible help would be something like NDbUnit. I haven't used it extensively yet, but it looks promising: http://code.google.com/p/ndbunit/
Perhaps you could consider keeping your Oracle database but running it from a RAM drive? It wouldn't need to be large, because it would only contain test data.
We have about 1,000 tests, a large percentage of them communicating through REST and hitting a database. The total run time is about 8 minutes. An hour seems excessive, but I don't know what you are doing or how complex your tests are.
But I think there is a way to help you. We are using TeamCity, and it has a nice ability to have multiple build agents. What you could do is split your test project into subprojects, with each subproject containing just a subset of the tests. You could use JUnit/NUnit categories to separate them. Then you would configure TeamCity so that each agent builds just one type of subproject. This way, you get parallel execution of tests. With a few agents (you get three for free), you should be able to get down to 20 minutes, which might even be acceptable. If you put each agent in a VM, you might not even need additional machines; you just need lots of RAM.
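As a hedged illustration of the JUnit categories part (JUnit 4 assumed; the class and interface names are invented for the example), a marker interface identifies the slow tests so each build configuration or agent can include or exclude that category:

    // Illustrative sketch: JUnit 4 categories mark slow tests so different
    // agents/build configurations can run different slices of the suite.
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    interface SlowTests {} // marker interface; no methods needed

    public class CheckoutTests {

        @Test
        public void calculatesTotal() {
            // fast in-memory test, runs in every build
        }

        @Test
        @Category(SlowTests.class) // routed to the agent configured for slow tests
        public void checkoutAgainstRealRestApi() {
            // REST + database test, kept in its own build configuration
        }
    }

Each TeamCity build configuration would then filter on its category (for instance via the test runner's include/exclude settings), giving the parallel split described above.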
I can think of plenty of good reasons to use continuous integration; however, what are the downsides?
(Apart from buying another server.)
What are some advantages of using a daily build instead?
(It's worth noting that by "continuous integration" I mean automated integration with an automated build process that automatically runs tests and automatically detects failure of each piece.
It's also worth noting that "continuous integration" just means to a trunk or test server. It does not mean "push every change live".
There are plenty of ways to do continuous integration wrong.)
I can't think of any reason not to do continuous integration testing. I guess I'm assuming that "continuous integration" includes testing. Just because it compiles doesn't mean it works.
If your build and/or tests take a long time then continuous integration can get expensive. In that case, run the tests obviously related to your change before the commit (coverage analysis tools, like Devel::CoverX::Covered can help discover what tests go with what code), do your integration testing after the commit using something like SVN::Notify, and alert the developers if it fails. Archive the test results using something like Smolder. That allows developers to work quickly without having to sit around watching test suites run, while still catching mistakes early.
That said, with a little work you can often speed up your build and test process. Many times slow tests are the result of each test having to do too much setup and teardown, pointing at a system that's far too coupled and requiring the whole system to be set up just to test a small piece.
Decoupling often helps, breaking out sub-systems into independent projects. The smaller scope makes for easier understanding and faster builds and tests. Each commit can do a full build and test without inconveniencing the programmer. Then all the sub-projects can be collected together to do integration testing.
One of the major advantages of running the test suite on every commit, even if it's after the commit, is you know just what broke the build. Rather than "something we did yesterday broke the build", or worse "four things we did yesterday broke the build in different ways and now we have to untangle it" it's "revision 1234 broke the build". You only have to examine that one revision to find the problem.
The advantage of doing a daily build is that at least you know there's a complete, clean build and test run happening every day. But you should be doing that anyway.
I don't think there are any downsides to it. But for the sake of argument, here is Eric Minick's article on UrbanCode ("It's about tests, not builds."). He criticises tools based on Martin Fowler's work, saying that they don't leave enough time for tests.
"To be truly successful in CI, Fowler asserts that the build should be self-testing and that these tests include both unit and end-to-end testing. At the same time, the build should be very fast - ideally less than ten minutes - because it should run on every commit. If there are a significant number of end-to-end tests, executing them at build time while keeping the whole process under ten minutes is unrealistic.
Add in the demand for a build on every commit, and the requirements start to feel improbable. The options are either slower feedback or the removal of some tests."
James Shore had a great series of blog entries on the dangers of thinking that using a CI tool like CruiseControl meant you were doing continuous integration:
Why I Don't like CruiseControl
Continuous Integration is an Attitude not a Tool
Continuous Integration on a Dollar a Day
One danger of setting up a CI server is goal displacement, thinking that the important thing is to "keep the build passing" as opposed to "ensuring we have high quality software". So people stop caring about how long the tests take to run. Then they take too long to run all of them before checkin. Then the build keeps breaking. Then the build is always broken. So people comment out the tests to make the build pass. And the quality of the software goes down, but hey, the build is passing...
There are generally two cases where I've seen continuous integration not really make sense. Keep in mind I am a big advocate of CI and try to use it when I can.
The first one is when the ROI just doesn't make sense. I currently develop several small internal apps. The applications are normally very trivial, and the whole development lifecycle is about a week or two. To properly set everything up for CI would probably double that, and I would probably never see that investment back. You can argue that I'll get it back in maintenance, but these apps are as likely to be discarded as they are updated. Keep in mind that your job is probably to ship software, not reach 100% code coverage.
The other scenario I have heard mentioned is that CI doesn't make sense if you're not going to do anything with the results. For example, if your software has to be sent to QA, and the QA staff can only really look at a new version every couple of days, it makes no sense to have builds every few hours. If other developers aren't going to look at code metrics and try to improve them, it makes no sense to track them. Granted, this is not the fault of CI as a technique; it is a lack of willingness in your team to embrace CI. Nevertheless, implementing a CI system in such a scenario doesn't make sense.
When starting, it takes a while to set everything up.
If you add tests, coverage, static code inspections, duplicate search, documentation build and deploys, it can take a long time (weeks) to get it right. After that, maintaining the build can be a problem.
For example, if you add tests to the solution, you can have the build detect them automatically based on some criteria, or you have to manually update the build settings. Auto-detection is much harder to get right. The same goes for coverage, and the same for documentation generation...
The only good reason not to do continuous integration comes when you've gotten your project working to the point where your integration tests haven't identified any defect in a good long while and they're taking too much time to run every time you do a build. In other words: you've done enough continuous integration that you've proven to yourself that you no longer need it.