I see multiple tutorials about Postman/Newman test scripts, however they mostly look like single requests.
What is the best way to chain Postman test requests based on previous results, e.g.:
PUT upload request
Test for, e.g., the status code. If 200, do a POST to start processing the just-uploaded file; else stop
If 200, then do a GET to query the result
If 200, check the JSON against a fixed expected JSON output.
Newman seems to run every request in a collection independently. I only want to run request 1, which then fires request 2 and request 3 based on the output of the previous request in that same collection.
You can configure this in the Tests section in Postman by using:
if (condition) {
    postman.setNextRequest("NAME OF YOUR REQUEST");
}
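For the workflow in the question, the Tests script on the PUT upload request could look something like the sketch below (the request name "POST start-processing" is a placeholder for whatever your second request is called; passing null stops the collection run):

// Tests tab of the PUT upload request
pm.test("Upload succeeded", function () {
    pm.response.to.have.status(200);
});

if (pm.response.code === 200) {
    // jump to the named request in the same collection
    postman.setNextRequest("POST start-processing");
} else {
    // null ends the collection run, so requests 2 and 3 never fire
    postman.setNextRequest(null);
}

The second request's Tests script can chain into the GET the same way, and the final request can compare the response body against the expected JSON.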
I want to run unit tests in my program whenever the API receives a request. The API returns one response as soon as it has received a valid request, and then later it sends another once it's done with the unit tests. The first response is shown in Postman, but not the second response. I can see that it is getting sent using Wireshark, though.
These are two separate responses, and since this API is what it is, I don't have the power to change it. How can I use Postman to receive the second response as well?
Edit: additional information was requested, so:
I have in my collection a POST request, and when I trigger it I get a response back with this body:
{
    "messageType" : "Response",
    "options" : "async"
}
Then the code can see that another response is incoming later, because of the async token.
A bit later another response is received:
{
    "messageType" : "Response",
    "Tests" : "11 ok tests"
}
But in Postman I can't seem to receive the second response, as the transaction is finished after the first one. How can I make Postman also receive the second response?
In the Insomnia test suite, is it possible to add or update the request body?
For example, I have a GET/POST request in a collection which takes a JSON as the request body, and I'd like to test the request by changing this JSON. By default, in the test suite I can select a request and the JSON defined under the collection is used for testing; what if I want to change this JSON request body?
An example of the code is:
const response1 = await insomnia.send();
expect(response1.status).to.equal(200);
Is it possible to change what is posted by insomnia.send() within the test suite? As of now I have to go to the collection and change the JSON there, but what if I want to test the same API endpoint with different JSONs?
I am aware that I can test it under the collection (Debug tab) by changing the JSON and testing directly there, but I am trying to leverage the test suite to write a bunch of tests against an endpoint. Insomnia uses Mocha.js for testing; is there any sample of that being used along with Insomnia?
I cannot find any pointers in the Insomnia documentation.
I need to test the load and performance of an API which is hosted in AWS API Gateway. I'm using two POST methods to get the final result. The first POST method will pass the below parameters in JSON format to the API:
{
    "propno": "xxxxx",
    "apikey": "xxxx-xxxx",
    "user": "xxx"
}
By executing this, I will get a reference number and the status of the execution:
{
    "reference": "ABxxxxxxxxxna",
    "status": "ok"
}
Then I will pass this reference number in another POST method to get the desired result:
{
    "refno": "ABxxxxxxxxxna",
    "apikey": "xxxx-xxxx",
    "user": "xxx"
}
Now I want to perform the load test in JMeter. Any help would be appreciated.
What is your question exactly?
In JMeter you can send a POST request using the HTTP Request sampler; the relevant configuration would be something like the sketch below.
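A minimal sketch of the first sampler's settings (the server name and path are placeholders for your API Gateway endpoint):

HTTP Request sampler #1:
    Method: POST
    Server Name or IP: your-api-id.execute-api.your-region.amazonaws.com
    Path: /your-stage/your-resource
    Body Data:
    {
        "propno": "xxxxx",
        "apikey": "xxxx-xxxx",
        "user": "xxx"
    }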
The refno value can be fetched using a JSON Extractor configured like this:
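A possible configuration, assuming the response body shown in the question:

JSON Extractor (added as a child of the first HTTP Request):
    Names of created variables: refno
    JSON Path expressions: $.reference
    Match No.: 1
    Default Values: NOT_FOUND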
Next, in the second HTTP Request, use the ${refno} reference to the JMeter variable.
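The second sampler's Body Data would then mirror the second payload from the question:

{
    "refno": "${refno}",
    "apikey": "xxxx-xxxx",
    "user": "xxx"
}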
You might also need to add an HTTP Header Manager and configure it to send the Content-Type header with the value of application/json.
Once done, you can:
Add more users in the Thread Group according to your NFR/SLA/common sense/whatever
Run your test in command-line non-GUI mode (see the example command below)
Generate the HTML Reporting Dashboard and analyze the results
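For instance, a non-GUI run that also generates the dashboard could look like this (test.jmx, result.jtl, and the dashboard folder are placeholder names):

jmeter -n -t test.jmx -l result.jtl -e -o dashboard

Here -n means non-GUI mode, -t points to the test plan, -l writes the results file, and -e/-o generate the HTML dashboard into the given folder when the run finishes.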
I've developed a method that does the following steps, in this order:
1) Get a report's metadata via /gdc/md//obj/
2) From that, get the report definition and use that as payload for a call to /gdc/xtab2/executor3
3) Use the result from that call as payload for a call to /gdc/exporter/executor
4) Perform a GET on the returned URI to download the generated CSV
So this all works fine, but the problem is that I often get back a blank or incomplete CSV. My workaround has been to put a sleep() between getting the URI back and actually calling GET on the URI. However, as our data grows, I have to keep increasing the delay, and even then there is no guarantee that I get complete data. Is there a way to make sure that the report has finished exporting data to the file before calling the URI?
The problem is that the export runs as an asynchronous task: the result at the URL returned in the payload of the POST to /gdc/exporter/executor (in the form /gdc/exporter/result/{project-id}/{result-id}) is available only after the exporter task finishes its job.
If the task has not finished yet, a GET to /gdc/exporter/result/{project-id}/{result-id} should return status code 202, which means "we are still exporting, please wait".
So you should periodically poll the result URL until it returns status 200 with the payload (or 40x/50x if something went wrong).
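A minimal polling sketch in JavaScript (the fetch call, the 500 ms interval, and the function name are assumptions; adapt it to whatever HTTP client your method already uses):

// Poll the exporter result URL until the CSV export has finished.
// resultUri is the /gdc/exporter/result/{project-id}/{result-id} URI
// returned by the POST to /gdc/exporter/executor.
async function waitForExport(resultUri) {
    while (true) {
        const res = await fetch(resultUri);
        if (res.status === 200) {
            return res.text(); // the finished CSV
        }
        if (res.status !== 202) {
            // anything other than "still exporting, please wait" is an error
            throw new Error("Export failed with status " + res.status);
        }
        await new Promise(resolve => setTimeout(resolve, 500)); // wait, then poll again
    }
}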
I want to pass some data within the request body, but I'm using a GET request, because I just want to modify this data and send it back.
I know that it is bad practice to use a body with GET requests.
But what should I do in this situation if I want to build a correct RESTful service?
P.S. I'm not changing any object on the server.
I'm not putting any new object on the server.
You want a POST. Something like:
POST /hashes
{
"myInput": ...
}
The response would be the hashed value. There's no rule that the created resource must be retained by the server.
From the RFC:
The action performed by the POST method might not result in a resource that can be identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate response status, depending on whether or not the response includes an entity that describes the result.
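As a purely illustrative exchange (the input and the hash value are made up):

POST /hashes
{
    "myInput": "hello"
}

200 OK
{
    "hash": "2cf24d..."
}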