I’ve been working as an API test engineer for four months, and I’m creating an API testing framework from scratch. I use Postman to maintain and store my test scripts and Newman to run my test collection on a Jenkins server. But I don’t get good reports about the test results, and my manager requires graphical weekly and monthly reports about API testing. When I was working as a GUI test automation engineer I used the Allure report and was more than happy with it, because I got graphical information about my tests. I need roughly the same kind of result for my API testing. Does anybody know how I can do this? If you know how I can get a result similar to the screenshot, just give me the name of the tool or a basic plan and I will be happy. Thanks!
***The attached screenshot is an Allure report. I use it to get reports about Selenium WebDriver test results. It is an example of the kind of report I expect, but for API tests.
When I was testing, my company had this software to help me with testing:
https://www.soapui.org/
But it is not free.
Best
I use console.log statements in my Postman test scripts. When I run the tests with Newman, I capture these statements in a file. This is one way of reporting each failure (or whatever you wish to report). In my case I format the output as comma-delimited lines, so I can import them into Excel and organize them there.
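For illustration, a minimal Postman "Tests" script along those lines might look like this (which fields you log, and in which order, is entirely up to you):

```javascript
// In the Postman "Tests" tab: log one comma-delimited line per request,
// so the captured console output can be imported straight into Excel.
pm.test("Status is 200", function () {
    pm.response.to.have.status(200);
});

console.log([
    pm.info.requestName,       // which request this line belongs to
    pm.response.code,          // HTTP status code
    pm.response.responseTime   // response time in ms
].join(","));
```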
I compile summary reports by using Newman as a Node.js module. As the test is running, I use the events to capture statistics such as the response time for each request. I can capture additional information about requests that timed out or failed. When the collection is finished, I calculate average response times, the overall error rate, etc., and persist the summary report to a file.
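A minimal sketch of that approach, assuming a collection at ./my-collection.json and a summary written to summary-report.json (both paths are just placeholders):

```javascript
const newman = require('newman');
const fs = require('fs');

const stats = [];

newman.run({
    collection: require('./my-collection.json'),  // assumed collection path
    reporters: ['cli']
}, function (err) {
    if (err) { throw err; }
})
.on('request', (err, args) => {
    // Capture per-request statistics as the collection runs
    stats.push({
        name: args.item.name,
        code: args.response ? args.response.code : null,
        responseTime: args.response ? args.response.responseTime : null,
        errored: Boolean(err)
    });
})
.on('done', (err, summary) => {
    // Compute the summary figures and persist them to a file
    const times = stats.filter(s => s.responseTime !== null).map(s => s.responseTime);
    const report = {
        totalRequests: stats.length,
        failedAssertions: summary.run.failures.length,
        avgResponseTime: times.reduce((a, b) => a + b, 0) / (times.length || 1)
    };
    fs.writeFileSync('summary-report.json', JSON.stringify(report, null, 2));
});
```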
We implemented UI tests for a web application using Cypress, and they are executed in a release pipeline in Azure DevOps.
We need to link the UI tests and test results with our test cases defined in the test plans.
To link the UI tests with the test cases, we retrieve the automated test results using the API URL:
_apis/tcm/ResultsByRelease?releaseId={}&publishContext=CI&%24top=20000
...once we retrieve the results, I can link the test cases in the test plan to automated tests using the Microsoft.TeamFoundation.WorkItemTracking.WebApi client (method: UpdateWorkItemAsync). Our UI tests have the ID of the test case as an attribute, so I can use that to link them.
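For reference, a rough REST equivalent of that linking call; the field names are the standard "associated automation" fields on a test case work item, while the organization, project, storage name and test ID below are placeholders:

```javascript
// Associate a test case work item with an automated test via a JSON Patch
// update of the work item. Org/project/values are placeholders for your setup.
const axios = require('axios');

async function linkTestCase(testCaseId, automatedTestName) {
  const patch = [
    { op: 'add', path: '/fields/Microsoft.VSTS.TCM.AutomatedTestName', value: automatedTestName },
    { op: 'add', path: '/fields/Microsoft.VSTS.TCM.AutomatedTestStorage', value: 'ui-tests.dll' },  // assumed storage name
    { op: 'add', path: '/fields/Microsoft.VSTS.TCM.AutomatedTestId', value: '00000000-0000-0000-0000-000000000000' },  // assumed test GUID
    { op: 'add', path: '/fields/Microsoft.VSTS.TCM.AutomationStatus', value: 'Automated' }
  ];
  await axios.patch(
    `https://dev.azure.com/my-org/MyProject/_apis/wit/workitems/${testCaseId}`,
    patch,
    {
      auth: { username: '', password: process.env.AZDO_PAT },   // PAT as the password
      params: { 'api-version': '5.1' },
      headers: { 'Content-Type': 'application/json-patch+json' }
    }
  );
}
```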
We can't change the outcome of the test cases based on the retrieved results. I’ve found that there is the concept of a test point, but I could not find what it is for. In the REST API documentation this resource holds the outcome of the test cases. According to the documentation, a test point cannot be created, only updated based on a run, if I understood correctly.
Any ideas how we can change the outcome of the test cases?
Thanks,
P
You can try getting the test points with the API below.
https://learn.microsoft.com/en-us/rest/api/azure/devops/testplan/test%20point/get%20points%20list?view=azure-devops-rest-5.1
You can then iterate over the test point results from the step above to get each test point ID, and then try updating the outcome of each test point with the API here.
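A rough Node sketch of those two calls, assuming a PAT for authentication; note the list call comes from the newer "testplan" area and the update from the older "test" area, and the exact endpoints, api-versions and payload shapes should be double-checked against the docs linked above:

```javascript
// List the test points of a suite, then set each point's outcome.
// Organization URL, project name and IDs are placeholders.
const axios = require('axios');

const org = 'https://dev.azure.com/my-org';
const project = 'MyProject';
const auth = { username: '', password: process.env.AZDO_PAT };  // PAT as the password

async function setOutcome(planId, suiteId, outcome) {
  // 1) Test Points - Get Points List
  const list = await axios.get(
    `${org}/${project}/_apis/testplan/Plans/${planId}/Suites/${suiteId}/TestPoint`,
    { auth, params: { 'api-version': '5.1-preview.2' } }
  );

  // 2) Test Points - Update (sets the last result outcome of each point)
  for (const point of list.data.value) {
    await axios.patch(
      `${org}/${project}/_apis/test/Plans/${planId}/Suites/${suiteId}/points/${point.id}`,
      { outcome },
      { auth, params: { 'api-version': '5.1' } }
    );
  }
}

setOutcome(1, 2, 'Passed').catch(err => console.error(err.response ? err.response.data : err));
```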
If your project is developed with Visual Studio, you can associate these test results and test cases via Visual Studio, and the outcome will be updated automatically when the test cases complete in the release pipeline.
To associate tests with test cases in a test plan, check here.
To run tests from a test plan, check here.
I am required to functionally test a web app built using Angular 5. The app has a lot of chart widgets: basically, reporting is done through the charts based on the values in the SQL Server database for specific criteria given through the query bar. I have to check for dynamic changes in the charts based on updates (add/delete/change) in the database. Several different charts are affected by those changes, and I have to validate both the UI and the DB using automation. I have been reading that Protractor can be used for e2e testing; would I be able to validate data updates and changes to the charts using Protractor, or can you suggest another tool for this? Also, I am not seeing a lot of blogs about checking dynamically generated charts with Protractor. Please help me with any material you can.
This could be really difficult to do using Protractor if we are talking about a production-like environment with dynamic data. Developing the kind of e2e test you are describing will, in the best case, give you a flaky test that produces a large number of false failures.
If you are using a library such as Highcharts to generate the charts, I would divide the testing into two pieces:
A) The easier part: check that the endpoints that provide the data to the charts return the correct data by comparing it to the data present in the DB. You could use a module like protractor-intercept ( https://www.npmjs.com/package/protractor-intercept ) to handle that easily. With this, you will be testing that the data arrives properly from the DB to the client.
B) The difficult part: mock the data returned by those endpoints in a test environment (yes, you will need help from the development team). If you know the data you are expecting, it will be easier to assert that it is rendered properly in the front-end charts (see the rough spec below).
These kinds of tests are hard to deal with. A few months ago I had to design one, and in the end the team decided to cover only the API responses instead of the whole e2e flow.
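To make part B concrete, a rough Protractor spec could look like the sketch below. The page URL, CSS selectors and the fixture file are assumptions about your app, not anything Protractor or Highcharts provides out of the box:

```javascript
// Protractor spec asserting a chart widget against a known (mocked) data set.
// Selectors and the fixture shape are placeholders for your real widget markup.
describe('revenue chart widget', () => {
  const expected = require('./fixtures/revenue.json');  // hypothetical mocked data set

  beforeAll(async () => {
    await browser.get('/dashboard');                     // assumed page URL
  });

  it('renders one data label per mocked record', async () => {
    const labels = element.all(by.css('.revenue-chart .highcharts-data-label')); // assumed selector
    expect(await labels.count()).toBe(expected.records.length);
  });

  it('shows the expected total in the chart header', async () => {
    const total = expected.records.reduce((sum, r) => sum + r.amount, 0);
    const header = element(by.css('.revenue-chart .chart-total'));               // assumed selector
    expect(await header.getText()).toContain(String(total));
  });
});
```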
We are using HP ALM for QC in our projects, and we are planning to automate the following things using HP ALM's REST API.
1. Dynamically upload the test cases to Test Plan and Test Lab. From the API code sample I can understand that TCs are uploaded to Test Lab by creating test set folders, but how do we upload the TCs to Test Plan?
2. Map the test cases between Test Plan and Test Lab. Nowhere have I seen any examples explaining how to map the TCs between Test Plan and Test Lab.
3. Dynamically update the status of the test cases with success or failure.
4. How do we map the TCs to work items?
Can you please advise us whether the above items are achievable?
Of course, what you mentioned is possible. First of all, it may differ depending on the HP QC/ALM REST version you are using; for instance, not all features are enabled in the early 11.x versions, while most bugs were fixed in 12.53.
Anyway, coming to your points:
1) Always refer to the HP ALM REST reference to check the REST message you should send:
http://alm-help.saas.hpe.com/en/12.53/api_refs/REST/#Overview.htm
2) You have to choose a way to send and receive those messages. Nowadays Python is quite commonly used (I am using it). Have a look at the "requests" module, which fits this task well!
The first action is to log in/authenticate against QC/ALM; then you can send the next operations according to the reference above, keeping the correct information in the headers (for instance the LWSSO cookie).
3) Plenty of questions are already answered on Stack Overflow; always search for your specific task first.
4) Coming to your questions: just as test-set-folders is the entity for folders in Test Lab, test-folders is the entity for folders under Test Plan.
The "test-instances" entity is then used to link a test case to a test set :-)
To update a test set or test case you have to send a proper update request. (Some bugs are visible in that area in HP 11.52; for instance, you may have to create a run as "No-run" first and then update it to "pass" or "fail". You will run into this after you have implemented the first points.)
You can then create runs, run-steps, design-steps, attachments, and whatever else is mentioned in the HP ALM reference.
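The answer above recommends Python's requests; purely to illustrate the message sequence (authenticate, open a session, then send entity requests), here is roughly the same flow sketched with Node and axios. The server URL, domain, project, credentials and field values are placeholders, and the endpoints and required fields vary per ALM version, so verify them against the reference linked in point 1:

```javascript
// Sketch of the ALM 12.x REST flow: authenticate, open a site session,
// then create a test under a Test Plan folder. Cookie handling is simplified.
const axios = require('axios');

const base = 'https://alm.example.com/qcbin';   // assumed ALM server URL

async function createTestInTestPlan() {
  // 1) Authenticate (the LWSSO cookie comes back in Set-Cookie)
  const auth = await axios.post(`${base}/authentication-point/authenticate`, null, {
    auth: { username: 'alm_user', password: 'alm_password' }   // placeholders
  });
  const lwsso = auth.headers['set-cookie'].join('; ');

  // 2) Open a site session (returns the QCSession cookie)
  const session = await axios.post(`${base}/rest/site-session`, null, {
    headers: { Cookie: lwsso }
  });
  const cookies = lwsso + '; ' + session.headers['set-cookie'].join('; ');

  // 3) Create a test entity under Test Plan (parent-id = a test-folder ID)
  const body =
    '<Entity Type="test"><Fields>' +
    '<Field Name="name"><Value>My API test</Value></Field>' +
    '<Field Name="parent-id"><Value>1001</Value></Field>' +   // assumed test-folder ID
    '</Fields></Entity>';
  await axios.post(`${base}/rest/domains/DEFAULT/projects/MyProject/tests`, body, {
    headers: { Cookie: cookies, 'Content-Type': 'application/xml' }
  });
}
```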
Please vote if this solved your query ;) so that you can close the question! I wish you luck with your project! Ciao ciao!
I am trying to determine whether I am able to inject test case information at run time while leveraging the SoapUI tool. I understand that I can create test cases in the GUI, but is this my only option?
Background info if interested: currently I am working on creating an automation framework at my company. We currently have web page testing, and SOAP testing will be added soon. As many of these tests (at some point in the future, as I am told by the architect) could be run from both a web page and SOAP, I think it's best to store the test cases in some format (JSON, YAML, etc.) to document all the test cases and then inject them into test steps at run time.
However, my company enjoys working with SoapUI. I've used the tool and created test cases, assertions, etc. in the GUI (of course), but I cannot find any documentation which suggests that, instead of defining the test cases this way, I could inject the test information at run time (similar to what you can do with the Apache wsdl2java tool). Can this be done with testrunner? That way I could reuse the test cases. Is this possible? Does this even make sense? I just want to attempt to incorporate a tool I've been asked to use.
Any thoughts are greatly appreciated!
Here is an example of what data may look like:
Partner : [
    Organization : [
        Company Name:
        Company URL:
    ]
    Contact Information : [
        Name:
        Address:
    ]
]
As I stated in a comment below, I know that in the SoapUI GUI I can create a test suite and a test case and add test steps. But I want to store the test step information in a different place so I can use the test steps for different kinds of tests.
Your question is way too broad for me to even attempt a complete answer.
You use the SoapUI GUI to create the tests. Your data can be stored in, and read by SoapUI from, Excel, a database, a flat file, or generated dynamically, whatever you want. You can run everything using the testrunner from the command line, or using the Maven plugin from Jenkins.
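As a sketch of the "keep the data outside the project" idea: store your test data in a JSON file, expose what you need as project properties, and kick off the testrunner from a small Node script. The paths, property names and suite/case names below are assumptions about your setup:

```javascript
// Drive SoapUI's command-line testrunner from Node, injecting data at run time
// via project properties (-P). Paths and names are placeholders.
const { execFile } = require('child_process');
const data = require('./testdata/partner.json');   // hypothetical external test data

execFile('/opt/SoapUI/bin/testrunner.sh', [
  '-sPartnerSuite',                                   // test suite to run (assumed name)
  '-cCreatePartner',                                  // test case to run (assumed name)
  `-PcompanyName=${data.organization.companyName}`,   // project property read by the test
  `-PcompanyUrl=${data.organization.companyUrl}`,
  'partner-project.xml'                               // SoapUI project file (assumed)
], (err, stdout, stderr) => {
  if (err) {
    console.error(stderr);
    process.exit(1);
  }
  console.log(stdout);
});
```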
Seriously, spend some time with the documentation.
Are there any frameworks or tools or other methods for automating integration tests of FQL queries?
By integration tests I mean that I want to run the queries against the production Facebook Graph API.
I can use the Graph API Explorer, or just hit endpoints, to test a query manually against my own profile data - but I want to test a query against different test users with different volumes, types and patterns of data than exist in my personal profile, and verify that the result is what I expect in each different case.
I.e., the standard automated testing pattern:
Set up test data (presumably on a test user)
Run query against test data
Verify expectations on results
...and repeat for as many test cases as necessary.
There is no widely known or used framework to achieve this.
Thus, you'll need to create a test user, populate it with whatever data you want, authenticate as that test user, and then run the FQL against it. That's your only option for running the tests you are looking for.
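A rough sketch of that flow using the Graph API's test-user endpoint; the app ID, app token and the FQL query are placeholders, and since FQL has been deprecated in newer Graph API versions you may need to adjust this to whatever version you are on:

```javascript
// Create a Facebook test user, run an FQL query as that user, and
// assert on the result. App credentials and the query are placeholders.
const axios = require('axios');
const assert = require('assert');

const APP_ID = 'your-app-id';
const APP_TOKEN = 'your-app-access-token';   // app access token

async function runFqlTest() {
  // 1) Create a test user for the app (the response includes its own access token)
  const user = await axios.post(
    `https://graph.facebook.com/${APP_ID}/accounts/test-users`,
    null,
    { params: { installed: true, access_token: APP_TOKEN } }
  );

  // 2) Run the FQL query with the test user's token
  const result = await axios.get('https://graph.facebook.com/fql', {
    params: {
      q: 'SELECT uid, name FROM user WHERE uid = me()',
      access_token: user.data.access_token
    }
  });

  // 3) Verify expectations on the result
  assert.strictEqual(result.data.data.length, 1);
  assert.strictEqual(String(result.data.data[0].uid), String(user.data.id));
}

runFqlTest().catch(err => console.error(err.response ? err.response.data : err));
```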