How exactly does Burp scan a request - REST

I just started working with Burp Suite Professional 2.0.6 beta. After recording traffic through the proxy, I right-click a request and run a scan with the default configuration.
I want to know exactly what happens during that scan. It is supposed to cover pen testing, but how?
Does it send requests to the server and analyze the responses? If so, take a POST API call as an example: does Burp replace the input values and send the call to the server? In the UI I can't see any new request (such as a POST) being created, so how does Burp analyze the response?
In my application, when a form is submitted, the response is "Form Submitted. Submitted ID:9898" as JSON output.
Could someone please explain how exactly Burp scans a request?

You can use the Logger++ extension from the BApp store to monitor activity from Burp Scanner:
https://portswigger.net/bappstore/470b7057b86f41c396a97903377f3d81
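
Roughly speaking, an active scan replays the recorded request many times with attack payloads substituted into each insertion point (URL parameters, body fields, headers, cookies) and compares the responses against the original one; those scanner-generated requests don't pass through the Proxy listener, which is why they don't show up as new items in the Proxy history and why Logger++ is useful for watching them. As a purely conceptual sketch of that idea (not Burp's actual engine), assuming a hypothetical form-submission endpoint like the one described in the question:

```python
# Conceptual illustration only -- NOT Burp's actual scan logic.
# Assumes a hypothetical endpoint that answers with JSON like
# {"message": "Form Submitted. Submitted ID:9898"}.
import requests

URL = "https://example.test/form/submit"              # hypothetical target
ORIGINAL_DATA = {"name": "John", "comment": "hello"}  # the recorded POST body

# A few illustrative payloads a scanner might substitute into each parameter.
PAYLOADS = ["'", "<script>alert(1)</script>", "../../etc/passwd"]

def scan():
    # Send the unmodified request once so later responses can be compared to it.
    baseline = requests.post(URL, data=ORIGINAL_DATA, timeout=10)

    for param in ORIGINAL_DATA:
        for payload in PAYLOADS:
            mutated = dict(ORIGINAL_DATA, **{param: payload})
            resp = requests.post(URL, data=mutated, timeout=10)
            # Very crude "analysis": flag responses that differ from the baseline
            # or reflect the payload back (a real scanner uses far richer checks).
            if resp.status_code != baseline.status_code or payload in resp.text:
                print(f"possible issue: param={param!r} payload={payload!r} "
                      f"status={resp.status_code}")

if __name__ == "__main__":
    scan()
```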

Related

Getting all requests made while loading a website

In my Flutter app, I was wondering whether it's possible to get all the requests a website makes while loading, something like what Chrome DevTools offers.
Let's say I make an HTTP request to a website: I would then receive the requested data plus the list of requests that were made while the page was loading.
You have to study how a web server works and the client/server architecture.
Put simply, the client makes a request to a specific path on the web server, the web server returns a response, the client processes the data and, if necessary, makes further requests to complete its task.
In Chrome DevTools you only see a log/debugger of those requests.
To get all of the requests, you either need to simulate a complete, functional client, or extract the specific information you want from the response.
To start understanding what you need, here is one package in Dart and one in Node.js: web_scarper_dart, puppeteer.
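
If it helps, here is a rough sketch of the second approach (extracting what you need from the initial response yourself) rather than simulating a full client. It only finds resources referenced in the HTML source, the URL is a placeholder, and anything requested later by JavaScript needs a real headless browser such as puppeteer:

```python
# Rough sketch: list the resources referenced by a page's HTML.
# Requests triggered later by JavaScript will NOT appear here -- for those you
# need a real (headless) browser such as puppeteer.
import re
from urllib.parse import urljoin

import requests

def referenced_resources(page_url: str) -> list[str]:
    html = requests.get(page_url, timeout=10).text
    # Grab the src/href attributes of scripts, images, stylesheets, links, etc.
    urls = re.findall(r'(?:src|href)=["\']([^"\']+)["\']', html)
    return [urljoin(page_url, u) for u in urls]

if __name__ == "__main__":
    for url in referenced_resources("https://example.com/"):  # placeholder page
        print(url)
```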

Delphi REST Debugger Returns Error 429 Too Many Requests but Browser Returns JSON as Expected

What is the difference between the way a browser calls the URL and doing it via the REST Debugger or the HTTP components?
I have a 3rd-party REST web API that works every time in a browser (i.e. it returns JSON as expected), but when I GET the same URL in the Delphi REST Debugger it returns error 429 Too Many Requests.
I am not allowed to post the exact URL here (I'm sorry, the boss has the last say), but it looks like this: https://xxxx.yyyy.com.au/search/resources/store/zzzzz/productview/123456
For what it's worth, the result is consistently the 429 error when I use the NetHTTPClient and NetHTTPRequest components as well as the Delphi REST components.
I thought that setting the user agent to match my browser's might help, but alas it didn't. I'm using Delphi 10.3.3 Rio.
I'm a bit new to REST and haven't found an answer by googling for a couple of days now. Any help will be most appreciated.
Thanks,
John
The answer is cookies. When I rejected all cookies I could see the behaviour described by #RemyLebeau, where the page goes into a continuous loop. The browser sends a cookie in the request header. I'm new to all of this, so I'll try to replicate what the browser is doing and see what happens. If I get really stuck I'll post another question specifically about cookies. Many thanks to all who offered advice, most appreciated. (I put this here because someone deleted it as an answer.)
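
In case it helps anyone following along in code rather than the REST Debugger, the general pattern is just to keep the cookies the server sets on the first response and send them back on subsequent requests, the way a browser does. A small sketch of that cookie round-trip (the Delphi REST/NetHTTP components have their own cookie-manager settings, but the principle is the same, and the URL below is the masked placeholder from the question):

```python
# Sketch of the cookie round-trip only; values are placeholders.
import requests

API_URL = "https://xxxx.yyyy.com.au/search/resources/store/zzzzz/productview/123456"

session = requests.Session()  # stores any Set-Cookie values from responses

# The first request is typically answered with Set-Cookie headers.
first = session.get(API_URL, timeout=10)
print(first.status_code, session.cookies.get_dict())

# The second request automatically sends those cookies back in the Cookie header,
# which is what the browser was doing all along.
second = session.get(API_URL, timeout=10)
print(second.status_code, second.headers.get("Content-Type"))
```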

In JMeter, is there a way of testing an autocomplete that cancels requests

I'll start this question with 'this is not the same as the previous one'. I can see straight away that an almost identical question has been asked, but the answer is not what I'm after. I will explain...
I need to test an autocomplete search box on a web page. Normally I'd just do a series of requests with the HTML containing one extra letter each time (which is the answer to the other, similar question). The problem is, that's not how the page behaves. It does submit a new request each time I type a letter, but it cancels the previous one instead of letting it continue. Therefore the only request that actually gets to an HTTP 200 response is the very last one.
This blog contains an example of what I'm seeing:
Autocomplete and request cancellation
About halfway down it shows our test condition:
Client cancellation must also be supported by the search backend. Backend that doesn’t support cancellation continues processing request even after client disconnects.
I need to write a JMeter script that replicates a series of cancelled requests followed by a single successful request, such that when I look on the backend I see either multiple running queries (bad) or just the last one (good).
Edit: I've also hit a follow-up issue: how to identify cancelled requests in the web server logs. It looks like I only see a request if it is allowed to complete (i.e. if I pause between letters). If a request is cancelled, it doesn't get logged at all. So how do I verify that the cancelled requests happened? If we import the logs into a visualization tool, are we going to be missing them entirely?
"Request cancellation" is nothing more than closing the connection from the client side
The easiest way of implementing it in JMeter is setting the response timeout, the setting lives under "Advanced" tab of the HTTP Request sampler or even better HTTP Request Defaults)
Just set this timeout to be lower than the threshold configured in your frontend and JMeter will close the connection making the backend "think" that the autocomplete request has been aborted because the user is still typing.
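
If it helps to see the principle outside JMeter, the sketch below does the same thing the response timeout does: it opens a request for every prefix of the search term but abandons the connection after a fraction of a second, then lets only the final request run to completion. It's a hypothetical stand-in for the JMeter setup, assuming an autocomplete endpoint that takes the typed text as a query parameter:

```python
# Conceptual stand-in for the JMeter "response timeout" approach:
# abort each in-flight request early, then let only the last one finish.
import requests

SEARCH_URL = "https://example.test/autocomplete"  # hypothetical endpoint
TERM = "jmeter"

# Simulate typing: every prefix except the last is "cancelled" by a very short
# read timeout, which makes the client close the connection early.
for i in range(1, len(TERM)):
    try:
        requests.get(SEARCH_URL, params={"q": TERM[:i]}, timeout=(5, 0.2))
    except requests.exceptions.ReadTimeout:
        pass  # connection dropped -- the backend sees an aborted request

# The complete term is allowed to finish normally.
final = requests.get(SEARCH_URL, params={"q": TERM}, timeout=10)
print(final.status_code, final.text[:200])
```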

Why didn't Fiddler show this activity?

We have a Client Toolkit provided by our partner that allows us to access their web services. It started giving errors yesterday on any call and initially their support wanted us to provide a Fiddler log. I tried to do so, however there was no activity shown in Fiddler when the call was made.
From this I would have assumed that the error would have to have occurred before an actual web request was sent out. However, the issue turned out to be an update they did that requires an SSL connection. They rolled back the change but advised us to update our calls to use https so they can re-implement their update.
So if the change was on their end, that means that communications obviously were going on with their server. Why wouldn't that have shown up in Fiddler? Are there scenarios where communications occur but a request isn't fully created or something like that? I just assumed that if there was any communication whatsoever that "something" would show up in Fiddler.

Programmatically POSTing a form is not doing what my browser is doing. Why?

I'm trying to programmatically submit a form on a web site that I do not own. I'm trying to simulate what I would manually do with a web browser. I am issuing an HTTP POST request using an HTTP library.
For a reason that I don't know I am getting a different result (an error, a different response, ...) when I programmatically submit the form compared to a manual submission in a web browser.
How can that be and how can I find out what mistake I have made?
This question is intentionally language and library agnostic. I'm asking for the general procedure for debugging such issues.
All instances of this problem are equivalent. Here is how to resolve all of them:
The web site you are posting to cannot tell different clients apart. It cannot find out whether you are using a web browser or an HTTP library. Therefore, only what you send matters for the decision of the server on how to react.
If you observe different responses from the server this means that you are sending different requests.
A few important things that you probably have to send correctly:
URL
Verb (GET or POST)
Headers: Host, User-Agent, Content-Length
Cookies (the Cookie and Set-Cookie headers)
The request body
Use an HTTP sniffer like Fiddler to capture what you are programmatically sending and what your browser is sending. Compare the requests for differences. Eliminate the differences one by one to see which one caused the problem. You can drag an HTTP request into the Composer Window to be able to modify and reissue it.
It is impossible to still get a different result if you truly have eliminated all differences between the manual and the programmatic requests.
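
As a concrete, hypothetical example of what "sending the same thing" looks like in code: every URL, header value, and field name below is a placeholder to be replaced with whatever the sniffer shows your browser actually sending.

```python
# Hypothetical replay of a captured browser form submission.
# Every value below is a placeholder -- copy the real ones from the capture.
import requests

session = requests.Session()  # keeps cookies between requests, like a browser

# Many sites set cookies (and hidden form fields / CSRF tokens) on the page that
# contains the form, so fetch that page first.
session.get("https://example.test/form-page", timeout=10)

response = session.post(
    "https://example.test/form-submit",              # URL: exact path and query string
    headers={
        "User-Agent": "Mozilla/5.0 (placeholder)",   # copy the browser's value
        "Referer": "https://example.test/form-page",
        # Content-Type and Content-Length are set automatically for `data=`.
    },
    data={"field1": "value1", "field2": "value2"},   # the request body, field by field
    timeout=10,
)
print(response.status_code)
print(response.text[:500])
```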