We implemented UI tests for a web application using Cypress, and they are executed in a release pipeline in Azure DevOps.
We need to link the UI tests and their results with the test cases defined in our test plans.
To link the UI tests with the test cases, we retrieve the automated test results using this API URL:
_apis/tcm/ResultsByRelease?releaseId={}&publishContext=CI&%24top=20000
Once we retrieve the results, I can link the test cases in the test plan to the automated tests using Microsoft.TeamFoundation.WorkItemTracking.WebApi (method: UpdateWorkItemAsync). Our UI tests carry the ID of the corresponding test case as an attribute, so I can use that to link them.
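For illustration, here is a rough sketch of that linking step over raw REST (the organization, project, PAT, and field values below are placeholders; the .NET client's UpdateWorkItemAsync issues an equivalent JSON Patch):

```typescript
import { randomUUID } from "node:crypto";

// Sketch: mark a test case work item as automated by patching its
// Microsoft.VSTS.TCM.* fields. ORG, PROJECT, and the PAT are placeholders.
const ORG = "your-org";
const PROJECT = "your-project";
const auth =
  "Basic " + Buffer.from(":" + (process.env.AZDO_PAT ?? "")).toString("base64");

async function linkTestCaseToAutomatedTest(
  testCaseId: number,
  automatedTestName: string, // e.g. the Cypress spec/test identifier
  testStorage: string // e.g. the spec file name
): Promise<void> {
  // Work items are updated with a JSON Patch document.
  const patch = [
    { op: "add", path: "/fields/Microsoft.VSTS.TCM.AutomatedTestName", value: automatedTestName },
    { op: "add", path: "/fields/Microsoft.VSTS.TCM.AutomatedTestStorage", value: testStorage },
    { op: "add", path: "/fields/Microsoft.VSTS.TCM.AutomatedTestId", value: randomUUID() },
    { op: "add", path: "/fields/Microsoft.VSTS.TCM.AutomationStatus", value: "Automated" },
  ];
  const res = await fetch(
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/wit/workitems/${testCaseId}?api-version=6.0`,
    {
      method: "PATCH",
      headers: {
        Authorization: auth,
        "Content-Type": "application/json-patch+json",
      },
      body: JSON.stringify(patch),
    }
  );
  if (!res.ok) throw new Error(`Linking failed: ${res.status}`);
}
```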
However, we can't change the outcome of the test cases based on the retrieved results. I've found that there is the concept of a test point, but I could not figure out what it is for. In the REST API documentation, this resource holds the outcome of test cases. According to the documentation, test points cannot be created, only updated based on a run, if I understood correctly.
Any ideas how we can change the outcome of the test cases?
Thanks,
P
You can try getting the test points with the API below:
https://learn.microsoft.com/en-us/rest/api/azure/devops/testplan/test%20point/get%20points%20list?view=azure-devops-rest-5.1
You can then iterate over the test points returned by the step above to get each test point ID, and then update the outcome of each test point with the update API here.
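A rough sketch of both steps (plan and suite IDs, organization, project, and PAT are placeholders; authentication uses a PAT over basic auth):

```typescript
// Sketch: list the test points of a suite, then update every outcome.
const ORG = "your-org";
const PROJECT = "your-project";
const auth =
  "Basic " + Buffer.from(":" + (process.env.AZDO_PAT ?? "")).toString("base64");
const base = `https://dev.azure.com/${ORG}/${PROJECT}/_apis/testplan`;

async function setOutcomes(planId: number, suiteId: number, outcome: string) {
  // 1. Get the test points of the suite (Test Point - Get Points List).
  const list = await fetch(
    `${base}/Plans/${planId}/Suites/${suiteId}/TestPoint?api-version=6.0-preview.2`,
    { headers: { Authorization: auth } }
  ).then((r) => r.json());

  // 2. Patch every point with the desired outcome (Test Point - Update).
  const body = list.value.map((p: { id: number }) => ({
    id: p.id,
    results: { outcome }, // e.g. "passed" or "failed"
  }));
  await fetch(
    `${base}/Plans/${planId}/Suites/${suiteId}/TestPoint?api-version=6.0-preview.2`,
    {
      method: "PATCH",
      headers: { Authorization: auth, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    }
  );
}
```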
If your project is developed with Visual Studio, you can associate the test results and test cases via Visual Studio, and the outcome will be updated automatically when the test cases complete in the release pipeline.
To associate tests with test cases in a test plan, check here.
To run tests from a test plan, check here.
We kindly need your support regarding some reports that we need to generate using Azure DevOps queries and present in a dashboard:
Need to generate a report that gets the test case execution results from queries, using the outcome value.
Need to generate a query that gets the bugs that are not linked to a test case or user story, and also the test cases that are not linked to a user story (a sketch of such a query follows this list).
Need to generate a report at the user story or sprint level, using queries, that shows the test execution results for the test cases against the bugs reported on the same user story or in the same sprint.
Need to change the configuration so that opened bugs are linked to user stories with a parent/child relation.
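For the second item, here is a sketch of one possible link query (organization, project, and PAT are placeholders; the WIQL mirrors a "work items and direct links" query with the "does not contain" filter):

```typescript
// Sketch: run a WIQL link query that finds Bugs with no parent User Story.
const ORG = "your-org";
const PROJECT = "your-project";
const auth =
  "Basic " + Buffer.from(":" + (process.env.AZDO_PAT ?? "")).toString("base64");

const wiql = `
  SELECT [System.Id], [System.Title]
  FROM WorkItemLinks
  WHERE ([Source].[System.WorkItemType] = 'Bug')
    AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Reverse')
    AND ([Target].[System.WorkItemType] = 'User Story')
  MODE (DoesNotContain)`;

async function findUnlinkedBugs() {
  const res = await fetch(
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/wit/wiql?api-version=6.0`,
    {
      method: "POST",
      headers: { Authorization: auth, "Content-Type": "application/json" },
      body: JSON.stringify({ query: wiql }),
    }
  );
  // Link queries return relations rather than a flat work item list.
  return (await res.json()).workItemRelations;
}
```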
How can we move a set of test cases, with their execution outcomes, from one suite to another in a test plan?
We are developing a robotics product that requires quite a few manual tests in every release cycle and we want to automate the management of these tests.
We envision a process with these steps:
One or more test plans are assigned to one or more testers by a new stage in our release pipeline using the Azure DevOps REST API (with the respective build artifact related to the pipeline)
Manual testing takes place
A deployment gate keeps the release in the testing stage until all tests have passed (solvable with a function app that parses the response from GET https://dev.azure.com/{organization}/{project}/_apis/test/runs?buildUri={buildUri}; see the sketch after this list)
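A minimal sketch of that gate check (organization, project, and PAT are placeholders; it assumes the run fields passedTests and totalTests are sufficient for your definition of "passed"):

```typescript
// Sketch of the gate check from step 3: fetch the runs for a build and
// report whether every test passed.
const ORG = "your-org";
const PROJECT = "your-project";
const auth =
  "Basic " + Buffer.from(":" + (process.env.AZDO_PAT ?? "")).toString("base64");

async function allTestsPassed(buildUri: string): Promise<boolean> {
  const res = await fetch(
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/test/runs` +
      `?buildUri=${encodeURIComponent(buildUri)}&api-version=6.0`,
    { headers: { Authorization: auth } }
  );
  const runs = await res.json();
  // A run is green when it has completed and every test in it passed.
  return runs.value.every(
    (r: any) => r.state === "Completed" && r.passedTests === r.totalTests
  );
}
```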
Is it possible to assign testers (to test a specific build) using the REST API https://dev.azure.com/{organization}/{project}/_apis/test?
The answer is yes. You can use the Test Point - Update REST API to achieve this:
PATCH https://dev.azure.com/{organization}/{project}/_apis/testplan/Plans/{planId}/Suites/{suiteId}/TestPoint?api-version=6.0-preview.2
This endpoint updates test points. It is used to reset a test point to active, update the outcome of a test point, or update the tester of a test point.
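For example, a sketch of assigning a tester to a point (all identifiers and the tester identity are placeholders):

```typescript
// Sketch: assign a tester to a test point via Test Point - Update.
const ORG = "your-org";
const PROJECT = "your-project";
const auth =
  "Basic " + Buffer.from(":" + (process.env.AZDO_PAT ?? "")).toString("base64");

async function assignTester(
  planId: number,
  suiteId: number,
  pointId: number,
  testerName: string
) {
  await fetch(
    `https://dev.azure.com/${ORG}/${PROJECT}/_apis/testplan` +
      `/Plans/${planId}/Suites/${suiteId}/TestPoint?api-version=6.0-preview.2`,
    {
      method: "PATCH",
      headers: { Authorization: auth, "Content-Type": "application/json" },
      // The body is a list of point updates; tester takes an identity ref.
      body: JSON.stringify([{ id: pointId, tester: { displayName: testerName } }]),
    }
  );
}
```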
I’ve been working as an API test engineer for four months, and I’m creating an API testing framework from scratch. I use Postman to maintain and store my test scripts and Newman to run my test collection on a Jenkins server. But I don’t get good reports about the test results, and my manager requires graphical weekly and monthly reports about API testing. When I was working as a GUI test automation engineer, I used the Allure report and was more than happy with it, because I received graphical information about my tests. I really need much the same thing for my API testing. Does anybody know how I can do it? If you know how I can get a result similar to the one in the screenshot, just give me the name of the tool or a basic plan and I will be happy. Thanks!
*** The attached screenshot is an Allure report; I use it for reports about Selenium WebDriver test results. It is an example of the kind of report I expect, but for APIs.
When I was testing, my company used this software to help with testing:
https://www.soapui.org/
But it is not free.
Best
I use console.log statements in my Postman test scripts. When I run the tests with Newman, I capture these statements in a file. This is one way of reporting each failure (or whatever you wish to report). In my case, I format the output as comma-delimited text, so I can import it into Excel and organize it that way.
I compile summary reports by using Newman as a Node.js module. As the tests run, I use its events to capture statistics such as the response time of each request, and I can capture additional information about requests that timed out or failed. When the collection is finished, I calculate average response times, the overall error rate, etc., and persist the summary report to a file.
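A minimal sketch of that setup (the collection path and output file are placeholders):

```typescript
import * as newman from "newman";
import { writeFileSync } from "fs";

// Sketch: run a collection programmatically and collect per-request timings.
const timings: { name: string; ms: number }[] = [];

newman
  .run({ collection: "./collection.json", reporters: ["cli"] })
  .on("request", (err, args: any) => {
    // Fires after each request; record the response time, or -1 on failure.
    if (err || !args.response) {
      timings.push({ name: args.item.name, ms: -1 });
    } else {
      timings.push({ name: args.item.name, ms: args.response.responseTime });
    }
  })
  .on("done", (err, summary: any) => {
    // Compute aggregates once the whole collection has finished.
    const ok = timings.filter((t) => t.ms >= 0);
    const avg = ok.reduce((s, t) => s + t.ms, 0) / Math.max(ok.length, 1);
    writeFileSync(
      "./summary.json",
      JSON.stringify(
        { avgResponseMs: avg, failures: summary.run.failures.length, timings },
        null,
        2
      )
    );
  });
```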
I am trying to determine whether I am able to inject test case information at run time while leveraging the SoapUI tool. I understand that I can create test cases in the GUI, but is this my only option?
Background info, if interested: I am currently working on creating an automation framework at my company. We currently have web page testing, with SOAP testing soon to be added. Since many of these tests (at some point in the future, as I am told by the architect) could be run from both a web page and SOAP, I think it is best to store the test cases in some format (JSON, YAML, etc.) to document them all, and then inject them into test steps at run time.
However, my company enjoys working with SoapUI. I've used the tool and created test cases, assertions, et al. in the GUI (of course), but I cannot find any documentation suggesting that, instead of defining the test cases this way, I could inject the test information at run time (similar to what you can do with the Apache wsdl2java tool). Can this be done with testrunner? That way I could reuse the test cases. Is this possible? Does this even make sense? I just want to try to incorporate a tool I've been asked to use.
Any thoughts are greatly appreciated!
Here is an example of what the data may look like (as JSON):

```json
{
  "Partner": {
    "Organization": {
      "CompanyName": "",
      "CompanyURL": ""
    },
    "ContactInformation": {
      "Name": "",
      "Address": ""
    }
  }
}
```
As I stated in a comment below, I know that in the SoapUI GUI I can create a test suite and test cases and add test steps. But I want to store the test step information somewhere else, so I can reuse the test steps for different kinds of tests.
Your question is way too broad for me to even attempt a complete answer.
You use the SoapUI GUI to create the tests. Your data can be stored in, and read by SoapUI from, Excel, a database, a flat file, something generated dynamically, whatever you want. You can run everything using testrunner from the command line, or using the Maven plugin from Jenkins.
Seriously, spend some time with the documentation.
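For example, a hypothetical testrunner invocation that selects a suite and case and injects a project property pointing at an external data file (the suite, case, property, and file names are made up; a Groovy or DataSource step inside the project would read the property):

```sh
# Run one suite/case from the command line and inject a project property.
# -s / -c select the suite and case, -P sets a project property,
# -f sets the output folder, -j emits JUnit-style report XML.
./testrunner.sh -s"PartnerSuite" -c"CreatePartner" \
  -PdataFile=/data/partners.json \
  -f reports -j MyProject-soapui-project.xml
```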
Are there any frameworks or tools or other methods for automating integration tests of FQL queries?
By integration tests, I mean that I want to run the queries against the production Facebook Graph API.
I can use the Graph API Explorer, or just hit endpoints, to test a query manually against my own profile data - but I want to test a query against different test users with different volumes, types and patterns of data than exist in my personal profile, and verify that the result is what I expect in each different case.
That is, the standard automated testing pattern:
Set up test data (presumably on a test user)
Run query against test data
Verify expectations on results
...and repeat for as many test cases as necessary.
There is no widely known or used framework to achieve this.
Thus, you'll need to create a test user, populate it with whatever data you want, authenticate as that test user, and then run the FQL against it. That's your only option for running the tests you are looking for.
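A rough sketch of those steps (APP_ID and APP_TOKEN are placeholders, and it assumes the classic test-user and FQL endpoints):

```typescript
// Sketch: create a Graph API test user, run an FQL query as that user,
// and assert on the result.
const APP_ID = "your-app-id";       // placeholder
const APP_TOKEN = "your-app-token"; // placeholder app access token

async function createTestUser(): Promise<{ id: string; access_token: string }> {
  // Test users live under the app's /accounts/test-users edge.
  const res = await fetch(
    `https://graph.facebook.com/${APP_ID}/accounts/test-users` +
      `?installed=true&access_token=${APP_TOKEN}`,
    { method: "POST" }
  );
  return res.json();
}

async function runFql(query: string, token: string): Promise<any> {
  // The classic FQL endpoint takes the query in the `q` parameter.
  const res = await fetch(
    `https://graph.facebook.com/fql?q=${encodeURIComponent(query)}` +
      `&access_token=${token}`
  );
  return res.json();
}

async function testFriendCount() {
  const user = await createTestUser();
  // (Populate the test user with fixture data here before querying.)
  const result = await runFql(
    `SELECT friend_count FROM user WHERE uid = ${user.id}`,
    user.access_token
  );
  // Verify the expectation for this test case.
  if (result.data[0].friend_count !== 0) {
    throw new Error(`expected 0 friends, got ${result.data[0].friend_count}`);
  }
}

testFriendCount().catch(console.error);
```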