Running tests using DataTestMethod in MSTest in Azure DevOps - azure-devops

I have API tests that run within an Azure DevOps pipeline using DataTestMethod in MSTest. My tests run fine, but the problem is that all the tests are reported under the same name, so it is difficult to figure out which test case failed. This works fine locally in Visual Studio. Is there a way to fix this? I found an old thread about the same issue, but it has no solution.

As you can see, we cannot report the result for each sub-test at the root level, which is also mentioned in the thread you referenced. For more information about test results, you could refer to Test Analytics, which provides near real-time visibility into your test data for builds and releases and helps improve the efficiency of your pipeline by identifying repetitive, high-impact quality issues.
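One mitigation worth noting (an illustrative sketch, not part of the original answer; the class and values are hypothetical): the MSTest DataRow attribute has a DisplayName property, which gives each data row its own label in the .trx results instead of every row reporting under the shared method name:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Each DataRow carries its own DisplayName, so a failure is
    // attributable to a specific case rather than the method name.
    [DataTestMethod]
    [DataRow(1, 2, 3, DisplayName = "Add_SmallNumbers")]
    [DataRow(-1, 1, 0, DisplayName = "Add_NegativeAndPositive")]
    public void Add_ReturnsSum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}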

Related

Can I integrate web tests (written in Visual Studio) in an Azure DevOps build pipeline

I have a web API (REST) project written in .NET, and I have written a few web tests (.webtest) that test those APIs.
While those tests run fine locally from Visual Studio, I want to integrate them into my VSTS (Azure DevOps) build pipeline so as to identify any breaking changes that could break those APIs.
I am not able to find any task in the build pipeline that can run the web tests as part of the build, although I do see an option for running unit tests.
So I wanted to check what I am missing here.
You might want to find an alternative approach, as this link implies the feature has been deprecated:
Visual Studio web performance test (.webtest file) is tied to the load test functionality and is deprecated. Some customers have used .webtest for other purposes such as running API tests, even though it was not designed for that purpose. Many API testing alternatives are available in the market. SOAP UI is a free, open source alternative to consider, and is also available as a commercial option with additional capabilities.
You could use a Command Line task to run MSTest with arguments:
Add a Command Line step/task to execute the MSTest command (see the example command after this list)
Add a Publish Test Results step/task
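As a hedged illustration of the first step (the assembly and file names here are hypothetical), the legacy MSTest.exe runner can be invoked so that it writes a .trx file for the Publish Test Results task to pick up:

MSTest.exe /testcontainer:MyApi.Tests.dll /resultsfile:TestResults.trx

The Publish Test Results step would then be configured for the VSTest/TRX result format and pointed at the results file produced above.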
On the other hand, you can do this in a unit test too: just send the request and check the response (see this related thread, and the sketch below).
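A minimal sketch of that unit-test approach, assuming a hypothetical base address and endpoint (replace both with the API under test):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ApiSmokeTests
{
    // Hypothetical base address; point this at the deployed API.
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("https://example.test/") };

    [TestMethod]
    public async Task GetUsers_ReturnsOk()
    {
        // Send the request and check the response.
        HttpResponseMessage response = await Client.GetAsync("api/users");
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}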
Also, as Matt mentioned, Visual Studio web performance tests (.webtest files) are tied to the load test functionality and are deprecated. You could take a look at this blog: Cloud-based load testing service end of life.

VSTS Manual Test Fails, Bug Created, What is the Retest Trigger?

I'm puzzling out how to implement VSTS for a testing team.
Regarding the scenario where a manual test is run and a bug is created...
The bug gets prioritized and fixed at some later point.
How does the application indicate to the tester when they can run the test again because the bug has been fixed?
Each test run has its own test run ID, so each run has its own test results even if you run the same test case multiple times (they have different test run IDs).
Generally, once the bug is fixed and the fixed sources are integrated into the next release of the application, you can run the test again to check whether the bug has really been fixed.
For more information about manual tests, see Run manual tests and FAQs for manual testing.

How do I run UrbanCode Deploy FVT tests locally?

My project at work uses UrbanCode Deploy (UCD) for its continuous deployment process. My code runs locally and passes all unit tests, but the build group says my code is failing the FVT test run by UCD. Is there any way to run this FVT test locally, or at least attempt to run it, so I can figure out what is failing?
UC Deploy isn't a testing tool, so the team that set it up has it running some other testing tool at the end of the deployment (which is pretty normal).
You'll need to ask them what testing tool they're using and go from there.
If you can see how the build group is deploying your code, you should be able to see what testing they are doing and then replicate that in your own environment. Often, code changes and changes in requirements will not be reflected in the FVT tests, and you need to deliver updated FVT test scripts in conjunction with your code changes.

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on it. When we first considered it, my intention was to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that: you can define machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. The lab management workflow, on the other hand, does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect better integration between RM and MTM/LM to be a future feature. In the interim, you could investigate using Desired State Configuration (DSC) to have a single script that configures environments for you.
DSC support isn't really out of the box in RM Update 2, but RM Update 3 will have built-in support for DSC targeting both Azure and on-premises VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2; it just requires a bit more work. A sketch of the single-script idea follows below.
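As a rough illustration only (not from the original answer; the node name and paths are hypothetical), a minimal DSC configuration that both pipelines could apply might look like this in PowerShell:

Configuration QaWebServer
{
    Node "localhost"
    {
        # Make sure IIS is present on the target machine.
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Copy the application from the build drop to the web root.
        File AppFiles
        {
            Ensure          = "Present"
            Type            = "Directory"
            SourcePath      = "\\buildserver\drop\WebApp"
            DestinationPath = "C:\inetpub\WebApp"
            Recurse         = $true
        }
    }
}

# Compile the configuration to a .mof and apply it.
QaWebServer -OutputPath .\QaWebServer
Start-DscConfiguration -Path .\QaWebServer -Wait -Verbose

Because the same configuration can be applied from Release Management or from a lab environment script, this is one way to share deployment logic between the two tools.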

Salesforce.com deployment

We are currently working on a Salesforce.com custom Apex project that involves a lot of Apex classes, triggers, and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. Every time we deploy changes to the live environment, all the test methods of all the classes (including those from AppExchange apps) seem to execute, so deploying even a simple change can end up taking a couple of minutes.
Is there a way in Apex to "package" classes, by namespace or something like that, so that when we deploy a change only the test methods relevant to that package are executed? If something like that exists, our deployments could happen much faster.
Unfortunately no, there is no partial testing for deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. Among other things, this enforces code metrics (a minimum total code coverage, for instance).
IMHO, this is proving to be a double-edged sword when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, performing actual testing of the code with lots of asserts and checks. Then we started having very long deploy times, so now our tests serve one and only one function, satisfying minimum code coverage, and even with that simplification it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is way too slow a coding platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of the allowance, not to mention more.
This is possible, but you'll need to learn about Apache Ant and have a look at the Force.com Migration Tool. You can then use a build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies; I'll post it to my blog when it's done.
If you use the Apache Ant migration tool, you have several deployment options, such as the deployCodeFailingTest target, which skips the test classes. If you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
    <sf:deploy
        username="${sf.username}"
        password="${sf.password}"
        serverurl="${sf.serverurl}"
        deployRoot="codepkg">
        <runTest>SampleDeployClass</runTest>
    </sf:deploy>
</target>
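Assuming the Force.com Ant tasks are on Ant's classpath and this target is in your build.xml, you would then invoke it from that directory with:

ant deployCode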
For a detailed reference, please see the Force.com Migration Tool guide:
http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your Salesforce code
Jenkins to deploy your code (CI/CD)
PMD as the static code analyzer
sfdx as the deployment method in Jenkins (see the example command below)
Refer to this Trailhead trailmix: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment
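As a hedged illustration of the sfdx step (the source path and test class name are hypothetical), the sfdx CLI lets a deployment run only the tests you specify, which addresses the original concern about full test runs:

sfdx force:source:deploy --sourcepath force-app --testlevel RunSpecifiedTests --runtests SampleDeployClass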