Azure DevOps Regression test case management - azure-devops

We have a very large application with nearly 2K test cases for regression. Our process is multiple sprints of work towards a single release, so we use a dedicated regression test plan.
My question is how to manage regression runs. Right now, we clone the Master Regression suite or the prior regression suite. This allows us to preserve the previous regression results, but it creates new, unique test cases, which means the associated bugs are not carried over.
If we instead reset all the tests in the current suite, I know the previous runs can still be seen at the test case level. However, I can't figure out how to call up historical aggregate results for a previous run.
How should DevOps be used for managing repeat test runs?

To repeat tests, you can insert parameters in the test steps:
Create a parameter by typing a name preceded by "@" in the actions and expected results of your test steps.
You can check the document Repeat a test with different data for more details.
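For example (the step text, parameter names and values below are made up purely for illustration), a parameterized step and its parameter values table might look like this:
Action: Add @quantity units of @product to the cart
Expected result: The cart shows @quantity units of @product
|quantity|product|
|1|notebook|
|200|notebook|
Each row of parameter values is run as one iteration of the test case, so the same steps are repeated with different data.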
For the historical aggregate results, there is a user voice suggestion about this on the Developer Community.
Hope this helps.

Related

Azure DevOps - Planning of manual testing in Test Plans - how to define planned execution date for each Test case

At our project we use Azure DevOps Test Plans for manual testing. We do not use pipelines. We have one test plan per iteration - approximately one month of testing for each test plan, though e.g. SIT or UAT will take longer. I would like to see when each test case (or test suite) is going to be tested, but there is no attribute for this.
I would also like to have reporting based on that (how many test cases should have been run by today and how many were actually run).
Can anyone help with how to approach that?
Thanks
As a simple approach, you can use iterations to plan target periods for your test activities. If you want a custom attribute, you can edit your process template (see Customize a project using an inherited process):
Select Add new field and, as an example, assign the new or existing field to the Test Case work item type.
You can then display the field for both the test suite and the test case.

Azure DevOps Test Plans

How do I use multiple shared parameter sets in a single test plan? At the moment I can only use a single shared parameter set; it would be helpful if I could use multiple shared parameter sets in one test plan.
When you write a manual test, you often want to specify that the test should be repeated several times with different test data. For example, if your users can add different quantities of a product to a shopping cart, then you want to check that a quantity of 200 works just as well as a quantity of 1.
To do this, you insert parameters in your test steps. Along with the test steps, you provide a table of parameter values. You can also share parameters and their data between test cases when you use the web portal with TFS 2015 and later or Azure DevOps. That way you can run multiple test cases with the same data.

Running Parallel Jobs and getting the aggregated results

I had a quick question about the workflow plugin. I'm trying to see if the plugin will be able to satisfy my use case:
1. We have a Jenkins job that will build our app.
2. We want to spin off a suite of test jobs that will perform various tests on the newly built app (unit, integration, etc.). These will need to run in parallel, and we want to run them on more than one Jenkins node for performance reasons.
3. We'll take the aggregated output from all our test processes in step 2 and decide whether we should deploy (everything passed) or not.
I was curious as to whether I'd be able to accomplish this with the plugin and, if so, whether you had any tips/pointers to get started.
Thanks!
You can certainly run nodes inside parallel branches. If one branch fails, the parallel step as a whole fails. If you want the build to succeed, but behave differently depending on test results, you can capture them directly as Groovy variables in various ways.
If you are using the JUnit archiver (the junit step), currently it does not provide a simple means of exposing the test results directly to the Pipeline script (JENKINS-26276), though if you just want to tell whether there are any failures, you can inspect currentBuild.result.
If you have JUnit-format test results and wish to automatically split them amongst various nodes (especially helpful in case you have a large pool of machines and it would be unmaintainable to manually divide your tests), see this demo of the Parallel Test Executor plugin’s splitTests step.
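To make that concrete, here is a rough scripted Pipeline (Groovy) sketch of the shape described above. The script names, report paths and suite names are invented, and the result handling shown is only one of several possible approaches:

node {
    checkout scm
    sh './build.sh'                          // hypothetical build script
    stash name: 'built-app', includes: '**'  // hand the built app to the test nodes
}
def branches = [:]
['unit', 'integration'].each { suite ->      // suite names are illustrative
    branches[suite] = {
        node {                               // each branch grabs its own executor
            unstash 'built-app'
            try {
                sh "./run-${suite}-tests.sh" // hypothetical test runner
            } catch (err) {
                currentBuild.result = 'UNSTABLE' // record the failure, keep collecting results
            }
            junit "reports/${suite}/*.xml"   // publish JUnit-format results
        }
    }
}
parallel branches
node {
    // deploy only if no branch marked the build UNSTABLE or FAILED
    if (currentBuild.result == null || currentBuild.result == 'SUCCESS') {
        unstash 'built-app'
        sh './deploy.sh'                     // hypothetical deploy script
    }
}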

@BeforeScenario / @AfterScenario to Specific Scenario in Test Story by using Given

I am a newbie to the JBehave and Hive frameworks.
While exploring Q&A threads, I happened to see the following phrase in an accepted answer to a question:
writing a JBehave story
That's what I've seen - and the data object should be setup/cleared
with a @BeforeScenario/@AfterScenario method.
At present I am in the process of writing test stories; I have not yet got into the steps.
From the JBehave website, I took the following sample test story. I have a question about it, considering the phrase I quoted above from Stack Overflow.
A story is a collection of scenarios
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development
Lifecycle:
Before:
Given a step that is executed before each scenario
After:
Outcome: ANY
Given a step that is executed after each scenario regardless of outcome
Outcome: SUCCESS
Given a step that is executed after each successful scenario
Outcome: FAILURE
Given a step that is executed after each failed scenario
Scenario: A scenario is a collection of executable steps of different type
Given step represents a precondition to an event
When step represents the occurrence of the event
Then step represents the outcome of the event
Scenario: Another scenario exploring different combination of events
Given a [precondition]
When a negative event occurs
Then the outcome should [be-captured]
Examples:
|precondition|be-captured|
|abc|be captured |
|xyz|not be captured|
I can see pretty much the same thing as @BeforeScenario/@AfterScenario here.
My question is: can I write Given steps that run before and after a specific Scenario: in a test story?
And is the output of that Scenario: available to the consecutive Scenario:'s in the test story?
There are a few differences between the @BeforeScenario/@AfterScenario annotations and Lifecycle: Before/After steps:
A Java method annotated with @BeforeScenario or @AfterScenario is called for all executed scenarios in all stories, while a Lifecycle Before or After step is executed only for scenarios from that one concrete story.
An @AfterScenario method is always executed, regardless of the result of the scenario. Lifecycle After steps can be called always (using the Outcome: ANY clause), only on failure (using the Outcome: FAILURE clause) or only on success (using the Outcome: SUCCESS clause).
You cannot pass any parameters from a scenario (story) to @BeforeScenario and @AfterScenario Java methods, while Lifecycle steps can have parameters, like any other ordinary steps, for example:
Lifecycle:
Before:
Given a step that is executed before each scenario with some parameter = 2
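For contrast, a rough sketch of the annotation-based hooks is shown below. The class and method names are invented; JBehave steps classes are most commonly written in Java, but any JVM language such as Scala works:

import org.jbehave.core.annotations.{AfterScenario, BeforeScenario}
import org.jbehave.core.annotations.AfterScenario.Outcome

// Hypothetical steps class: these hooks run for every scenario in every story
// this class is configured for, unlike Lifecycle steps, which are scoped to the
// one story that declares them.
class DataSetupSteps {

  @BeforeScenario
  def seedData(): Unit = {
    // set up the data object before each scenario
  }

  @AfterScenario // outcome defaults to ANY: runs whether the scenario passed or failed
  def clearData(): Unit = {
    // clear the data object after each scenario
  }

  @AfterScenario(uponOutcome = Outcome.FAILURE) // runs only after failed scenarios
  def captureDiagnostics(): Unit = {
    // e.g. dump logs or capture a screenshot
  }
}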
JBehave is a BDD (Behaviour-Driven Development) framework; the executable pieces behind a story are called steps.
To answer the question: starting a new Scenario: in a test story begins a fresh execution context, so state set up by a Given in one scenario is not implicitly carried forward into the next scenario; each scenario's preconditions have to be stated explicitly with its own Given clauses. For each new Scenario:, only the Lifecycle prerequisites that have been defined are applied before and after it.

How do I set up a multi-stage test pipeline in sbt?

Specifically, for a Scalatra project, but the question probably applies to most.
For example, I typically want to run:
unit tests
code quality checks (coverage, duplication, complexity, jsLint!)
integration tests (not too many!)
acceptance tests (usually a "pre-checkin" subset)
regression tests (basically the same as acceptance tests, but a bigger set)
performance tests
I want to run different subsets of these by context - i.e. after a simple code change I might just run the first three; before checking in I might want to run a bigger set, and the Continuous Integration server might have a "fast" and a "slow" build that have even bigger sets.
The basic sbt docs seem to assume a single "test" target - is there a recommended way to implement multiple test phases like this?
You may want to look at this blog post about test integration with SBT and Hudson:
http://henkelmann.eu/2010/11/14/sbt_hudson_with_test_integration
Then, to add your own actions you can use this page:
http://code.google.com/p/simple-build-tool/wiki/CustomActions
Basically, though, you will probably want to add a new action for each of your testing steps, in order to get the particular events you want to happen.
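The links above refer to the old simple-build-tool (sbt 0.7) and its custom actions; in later sbt versions the usual way to get separate test phases is to define extra test configurations. A rough build.sbt sketch (sbt 1.x syntax), where the configuration names, source directories and phase split are purely illustrative:

// build.sbt - hypothetical custom test configurations, one per phase
lazy val AcceptanceTest = config("acceptance") extend Test
lazy val RegressionTest = config("regression") extend Test

lazy val root = (project in file("."))
  .configs(AcceptanceTest, RegressionTest)
  .settings(
    // give each configuration the standard test tasks (test, testOnly, ...)
    inConfig(AcceptanceTest)(Defaults.testSettings),
    inConfig(RegressionTest)(Defaults.testSettings),
    // keep each phase's sources apart, e.g. src/acceptance/scala and src/regression/scala
    AcceptanceTest / scalaSource := baseDirectory.value / "src" / "acceptance" / "scala",
    RegressionTest / scalaSource := baseDirectory.value / "src" / "regression" / "scala"
  )

Each phase can then be run on its own (sbt test, sbt acceptance:test or AcceptanceTest / test, and so on), so a quick local run, a pre-checkin run and the CI server's "fast" and "slow" builds can simply invoke different subsets of the configurations.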