For a new project, I want to keep track of application performance by using BenchmarkDotNet to build up the performance testing suite. Does anyone know if there is anything already built to pass/fail an Azure DevOps build based on the performance results? For example, if a performance test is suddenly three times slower than the previous run, I want to fail the build. I've heard of teams keeping running performance metrics, but I've never heard how to go about setting something like that up.
I see that with BenchmarkDotNet I can export the results, but I'm hoping something already exists instead of having to create my own solution.
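If nothing exists, I assume I'd end up writing a small comparison step myself and wiring it into the pipeline, something like the sketch below. It assumes results are exported with BenchmarkDotNet's JSON exporter and that each entry in the report's "Benchmarks" array exposes "FullName" and "Statistics.Mean" in nanoseconds; those field names are my reading of the full JSON report and may differ between versions, and the file names and threshold are placeholders.

```csharp
// Sketch only: compare the current BenchmarkDotNet JSON export against a stored
// baseline and exit non-zero (failing the pipeline step) if any benchmark's mean
// regressed by more than 3x, matching the example in the question. Assumes a
// JSON report with a top-level "Benchmarks" array whose items expose "FullName"
// and "Statistics.Mean" in nanoseconds; adjust the property names if your
// BenchmarkDotNet version emits something different.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

class RegressionGate
{
    const double Threshold = 3.0; // fail if a benchmark is 3x slower than baseline

    static Dictionary<string, double> LoadMeans(string path)
    {
        using var doc = JsonDocument.Parse(File.ReadAllText(path));
        var means = new Dictionary<string, double>();
        foreach (var b in doc.RootElement.GetProperty("Benchmarks").EnumerateArray())
        {
            means[b.GetProperty("FullName").GetString()!] =
                b.GetProperty("Statistics").GetProperty("Mean").GetDouble();
        }
        return means;
    }

    static int Main(string[] args)
    {
        var baseline = LoadMeans(args[0]); // e.g. a stored baseline report (placeholder)
        var current  = LoadMeans(args[1]); // e.g. the report from this build (placeholder)
        var failed = false;

        foreach (var (name, mean) in current)
        {
            if (baseline.TryGetValue(name, out var baseMean) && mean > baseMean * Threshold)
            {
                Console.Error.WriteLine($"{name}: {mean:F0} ns vs baseline {baseMean:F0} ns");
                failed = true;
            }
        }
        return failed ? 1 : 0; // a non-zero exit code fails the Azure DevOps step
    }
}
```

The exit code is what would fail the build step; the baseline file would have to be published and restored as a pipeline artifact between runs, which is exactly the plumbing I was hoping to avoid writing.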
Related
I'm trying to learn CI/CD concepts on my own, but I don't understand how they help me when I can easily push and pull my code from GitHub and get my code into production.
Continuous Integration is more of a culture than a tool. So you need to understand why it's necessary for every developer on a team to integrate their code with the repository at least once a day.
Continuous Delivery likewise addresses the challenges and best practices of delivering high-quality software as quickly as possible. Teams that want to reduce the risk and pain of integrating features, and increase the speed of delivering new features, should adopt the CI/CD culture.
To ensure that every change added to the repository works and integrates with the other parts, you need to check it. For instance, you need to make sure that the project builds successfully, the tests pass, the new changes don't break any other parts, the code passes any required quality checks, and so on.
After that, you have to deploy or publish that version of your software somehow. This process usually has several steps, and it can be done manually in small teams or projects.
Based on the first rule of Continuous Integration, every team member should integrate their code with the repository multiple times a day. Since the frequency of this integration is high, it's not a good idea to do the process manually. There is always a chance that somebody forgets to run one of the steps. That's the main reason why it's necessary to have an automated CI/CD pipeline.
I am looking for a solution for 20 scrum teams on how to push code through different environments:
Dev (where developers can code and run unit tests)
SIT (integration with stubbed services)
QA (where QA testing happens, with real integration points, no stubs, currently maintained by a separate team, so that they keep track of what's going in)
Stage (similar to Live, with sensitive data, maintained by a separate team)
Live (that's the live game)
The sticky point here is that many teams would be trying to push to SIT at once, things could take time to deploy, and potential bottlenecks could arise. Also, we need to ensure that our code works well with real integration points (the QA environment).
With respect to Scrum, when should we call a user story Done: when we push to SIT, or to QA?
I'm sure this has been asked before, but I couldn't find the exact terminology; feel free to point me to an existing answer.
EDIT: it's a brand new product, clean slate, no code or pipelines as of yet.
OK, your exact question was: When do you call a User Story done? In Scrum, it is Done when it is potentially shippable, so in your setup: Stage.
Now, I expect that will sound unrealistic to you and your team. And that is because you have a number of snags in your process that you'll have to solve to really accomplish CI/CD and have potentially releasable code within a sprint:
Continuous Integration. I don't mean the server and platform/tool you use. I mean actually integrating everyone's code on every checkin. If you've got 20 teams who don't do this today, they aren't going to suddenly start tomorrow. As soon as you try, you're going to run into all kinds of practice, process, and architectural challenges. You'll need to work through those in order to achieve this. My best suggestion is start by having teams in common areas continuously integrate with each other, then start breaking down the barriers between those groups. If even that is too much, maybe just have each individual team integrate with each other multiple times a day as a start. Honestly, the rest of the steps aren't very relevant if you haven't got this down.
Testing is something that happens elsewhere. It's happening at a different stage, in a different environment, and probably with a different team. This is a problem for two reasons. First, if testing happens after the story is called done, it reinforces the idea that the team's job is to write code, not to create working, usable functionality. Second, those bug reports are going to come back, and then work that was "done" and integrated has to be reworked, redone, and re-integrated. If integration was painful before, this just adds a multiplier on top of it.
Do you have cross-functional teams working on increments of value? It's a bit of a stretch for me to guess here, but service stubs and difficult integrations are often signs that different teams are working on different components. This creates a lot of opportunity for misalignment that can exacerbate your challenges.
OK, last one. You have whole teams maintaining environments. That's a big red flag. It means your system is either extremely complicated, or people are leaving a lot of loose ends, or both. If you have to build whole teams around synchronizing other teams, you may be putting a band-aid on your problem. Your environments should be predictable and stable. That means most tasks involving them should be automatable, and then those other teams can handle the odd task that isn't.
This probably isn't the answer you were hoping for, but these are likely the challenges you'll have to tackle to get to your goal.
I am working in a small team that has just moved to Azure DevOps. When a new build is pushed out, we can't seem to figure out a seamless and convenient way to both run regression tests and keep full historical data of the test plans that have been run. I am worried that once Azure DevOps has been in use for a while, it may become difficult to locate older test runs.
So far we have tried creating a large test suite, with sub-suite folders for different areas of functionality. Each sub-suite is then run individually in Test Runner. The problem with this is that when we reset the tests and run them again, we lose the historical test data, and any tests that weren't completed on that run go missing, making it look like a higher percentage of test cases have passed or failed. Another problem is that the test plans are fragmented.
Another option we have explored is adding a label (instead of a sub-suite) to each test case. This causes a problem when trying to run all the tests, as Test Runner appears to have a limit of no more than 100 test cases per run. On top of this, the test runs' names are indistinguishable from each other when run separately, as they all have the same name (taken from the test suite).
An ideal solution would be something like TestRail (which we have just moved from), where cases from a test suite can be selected for a run and the resulting test plan is stored indefinitely. Unfortunately, we are unable to move back to TestRail.
In the Azure DevOps documentation, under creating a test plan, they create the test plan for a specific sprint.
What you can do is create a Regression Test area path containing the tests you want, and then every time you want to run a regression pass you can add it to the test plan created for that sprint.
It does seem a bit confusing, but in essence it's similar to how it's done in TestRail when you create a test run and select which tests you want in the run. It can be a bit of a terminology switch to create a test plan for every sprint, but you can do that, then add whichever regression tests you want to run during the sprint, and that way you don't lose any historical test data.
Hi, in my project we have hundreds of test cases. These test cases are part of the build process, which gets triggered on every check-in and sends mail to our developer group. This project is fairly big and has been around for more than five years.
Now we have so many test cases that the build takes more than an hour. Some of the test cases are not structured properly, and after refactoring them I was able to reduce the running time substantially, but we have hundreds of test cases and refactoring them one by one seems like a bit too much.
For now, I run some of the test cases (the ones that take really long to execute) only as part of the nightly build and not on every check-in.
I am curious how others manage this.
I believe it was in "Working Effectively with Legacy Code" that Michael Feathers said that if your test suite takes longer than a couple of minutes to run, it will slow developers down too much and the tests will start getting neglected. It sounds like you are falling into that trap.
Are your test cases running against a database? Then that's most likely your biggest source of performance problems. As a general rule, test cases shouldn't be doing I/O if at all possible. Dependency Injection can let you replace a database object with mock objects that simulate the database portion of your code. That allows you to test the code without worrying about whether the database is set up correctly.
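To make that concrete, here is a minimal sketch of the idea. Every name in it (IOrderRepository, OrderService, and the fake) is made up for illustration, and the test uses NUnit only because it's mentioned elsewhere in this thread:

```csharp
// Sketch of the dependency-injection idea above: the code under test depends on
// an interface rather than a concrete database class, so a test can swap in an
// in-memory fake and never touch the database (no I/O, no setup scripts).
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderTotals(int customerId);
}

// The production implementation (omitted) would talk to the real database.
public class OrderService
{
    private readonly IOrderRepository _orders;
    public OrderService(IOrderRepository orders) => _orders = orders;

    public decimal TotalSpentBy(int customerId) =>
        _orders.GetOrderTotals(customerId).Sum();
}

// Test-only fake that simulates the database portion in memory.
public class FakeOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, decimal[]> _data;
    public FakeOrderRepository(Dictionary<int, decimal[]> data) => _data = data;

    public IEnumerable<decimal> GetOrderTotals(int customerId) =>
        _data.TryGetValue(customerId, out var totals) ? totals : new decimal[0];
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void TotalSpentBy_sums_the_customers_order_totals()
    {
        var repo = new FakeOrderRepository(
            new Dictionary<int, decimal[]> { [42] = new[] { 10m, 5m } });
        Assert.That(new OrderService(repo).TotalSpentBy(42), Is.EqualTo(15m));
    }
}
```

The point is that OrderService never knows whether it's talking to the real database or the in-memory fake, so the test needs no database at all and runs in milliseconds.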
I highly recommend Working Effectively with Legacy Code by Michael Feathers. He discusses how to handle a lot of the headaches that you seem to be running into without having to refactor the code all at once.
UPDATE:
Another possible help would be something like NDbUnit. I haven't used it extensively yet, but it looks promising: http://code.google.com/p/ndbunit/
Perhaps you could consider keeping your Oracle database but running it from a RAM drive? It wouldn't need to be large, because it would only contain test data.
We have about 1000 tests, a large percentage of which communicate over REST and hit a database. Total run time is about 8 minutes. An hour seems excessive, but I don't know what you are doing or how complex your tests are.
But I think there is a way to help you. We are using TeamCity, and it has a nice ability to run multiple build agents. What you could do is split your test project into subprojects, with each subproject containing just a subset of the tests. You could use JUnit/NUnit categories to separate them. Then you would configure TeamCity so that each agent builds just one type of subproject. This way you'd get parallel execution of the tests. With a few agents (you get three for free), you should be able to get down to 20 minutes, which might even be acceptable. If you put each agent into a VM, you might not even need additional machines; you just need lots of RAM.
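As a hedged sketch of the category idea (the category names are made up, and the console-runner filter shown in the comment is just one way to select a slice; how you wire it into each TeamCity agent's build step depends on your runner and version):

```csharp
// Illustrative sketch: tag tests with NUnit categories so each build agent can
// run just one slice of the suite in parallel. The category names are made up;
// each agent's build step would then select its slice, for example with the
// NUnit 3 console runner:  nunit3-console Tests.dll --where "cat == Fast"
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    [Test]
    [Category("Fast")]
    public void Discount_math_sanity_check()
    {
        // stands in for a quick, in-memory unit test
        Assert.That(100m * 0.9m, Is.EqualTo(90m));
    }

    [Test]
    [Category("SlowIntegration")]
    public void Order_roundtrip_against_real_services()
    {
        // stands in for a test that goes through REST and the database;
        // only the agent assigned to the "SlowIntegration" slice runs it
        Assert.Pass();
    }
}
```

Each agent then runs only the categories assigned to it, so the slices execute in parallel instead of one long serial run.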
I can think of plenty of good reasons to use it; however, what are the downsides?
(Apart from buying another server)
What are some advantages to using a daily build instead of it?
(It's worth noting that by "continuous integration" I mean automated integration with an automated build process that automatically runs tests and automatically detects failures in each piece.
It's also worth noting that "continuous integration" just means to a trunk or test server. It does not mean "push every change live".
There are plenty of ways to do continuous integration wrong.)
I can't think of any reason not to do continuous integration testing. I guess I'm assuming that "continuous integration" includes testing. Just because it compiles doesn't mean it works.
If your build and/or tests take a long time, then continuous integration can get expensive. In that case, run the tests obviously related to your change before the commit (coverage analysis tools, like Devel::CoverX::Covered, can help discover which tests go with which code), do your integration testing after the commit using something like SVN::Notify, and alert the developers if it fails. Archive the test results using something like Smolder. That allows developers to work quickly without having to sit around watching test suites run, while still catching mistakes early.
That said, with a little work you can often speed up your build and test process. Many times, slow tests are the result of each test having to do too much setup and teardown, which points at a system that's far too coupled and requires the whole thing to be set up just to test a small piece.
Decoupling often helps, breaking out sub-systems into independent projects. The smaller scope makes for easier understanding and faster builds and tests. Each commit can do a full build and test without inconveniencing the programmer. Then all the sub-projects can be collected together to do integration testing.
One of the major advantages of running the test suite on every commit, even if it's after the commit, is you know just what broke the build. Rather than "something we did yesterday broke the build", or worse "four things we did yesterday broke the build in different ways and now we have to untangle it" it's "revision 1234 broke the build". You only have to examine that one revision to find the problem.
The advantage of doing a daily build is that at least you know there's a complete, clean build and test run happening every day. But you should be doing that anyway.
I don't think there are any downsides to it. But for the sake of argument, here is Eric Minick's article on UrbanCode ("It's about tests not builds."). He criticises the tools that are based on Martin Fowler's work, saying that they don't leave enough time for tests.
"To be truly successful in CI, Fowler asserts that the build should be self-testing and that these tests include both unit and end-to-end testing. At the same time, the build should be very fast - ideally less than ten minutes - because it should run on every commit. If there are a significant number of end-to-end tests, executing them at build time while keeping the whole process under ten minutes is unrealistic.
Add in the demand for a build on every commit, and the requirements start to feel improbable. The options are either slower feedback or the removal of some tests."
James Shore had a great series of blog entries on the dangers of thinking that using a CI tool like CruiseControl meant you were doing continuous integration:
Why I Don't like CruiseControl
Continuous Integration is an Attitude not a Tool
Continuous Integration on a Dollar a Day
One danger of setting up a CI server is goal displacement, thinking that the important thing is to "keep the build passing" as opposed to "ensuring we have high quality software". So people stop caring about how long the tests take to run. Then they take too long to run all of them before checkin. Then the build keeps breaking. Then the build is always broken. So people comment out the tests to make the build pass. And the quality of the software goes down, but hey, the build is passing...
There are generally two cases where I've seen continuous integration not really make sense. Keep in mind I am a big advocate of CI and try to use it when I can.
The first one is when the ROI just doesn't make sense. I currently develop several small internal apps. The applications are normally very trivial, and the whole development lifecycle is about a week or two. Properly setting everything up for CI would probably double that, and I would probably never see that investment come back. You can argue that I'd get it back in maintenance, but these apps are as likely to be discarded as they are to be updated. Keep in mind that your job is probably to ship software, not to reach 100% code coverage.
The other scenario I have heard mentioned is that CI doesn't make sense if you're not going to do anything with the results. For example, if your software has to be sent to QA, and the QA staff can only really look at a new version every couple of days, it makes no sense to produce builds every few hours. If other developers aren't going to look at code metrics and try to improve them, it makes no sense to track them. Granted, this is not a fault of CI as a technique; it's a lack of willingness on the team's part to embrace CI. Nevertheless, implementing a CI system in such a scenario doesn't make sense.
When starting, it takes a while to set everything up.
If you add tests, coverage, static code inspections, duplicate search, documentation build and deploys, it can take a long time (weeks) to get it right. After that, maintaining the build can be a problem.
E.g., if you add tests to the solution, you can either have the build detect them automatically based on some criteria, or you have to manually update the build settings. Auto-detection is much harder to get right. The same goes for coverage. The same goes for documentation generation...
The only good reason not to do continuous integration comes when you've gotten your project working to the point where your integration tests haven't identified a defect in a good long while and they're taking too much time to run on every build. In other words: you've done enough continuous integration that you've proven to yourself that you no longer need it.