How to put a breakpoint in a request sent through Fiddler?

In Fiddler, how can I terminate a request before it reaches the host? For example, I send a request and want to put a breakpoint on it so that I don't receive the response. Basically, I want to inspect the response before it is returned to the original caller, and to see how my service behaves if the connection is lost or the request otherwise never reaches the server. Any answer is highly appreciated. Sorry for any flaws, I'm a newbie with Fiddler. :)

Fiddler offers several mechanisms for interfering with requests. If your goal is simply to kill the request without returning a response, you can create a rule in Fiddler's AutoResponder with an Action of *drop or *reset.
*drop will close the client connection immediately without sending a response. The closure is graceful at the TCP/IP level, sending a FIN to the client.
*reset will close the client connection immediately without sending a response. The closure is abrupt at the TCP/IP level, sending a RST to the client.
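The TCP-level difference between the two can be reproduced in a few lines of Python (a sketch of the mechanism only, not Fiddler's actual implementation): setting SO_LINGER with a zero timeout before close() makes the OS send a RST instead of the usual FIN, so the peer's pending recv() fails with a connection-reset error rather than seeing an orderly end-of-stream.

```python
import socket
import struct
import threading

def abrupt_close_demo():
    """Return True if the client observes a TCP RST (connection reset)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # any free loopback port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()
        conn.recv(1024)                  # wait for the client's request first
        # SO_LINGER with l_onoff=1, l_linger=0: close() sends RST, not FIN
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
        conn.close()                     # abrupt close: RST goes out here

    t = threading.Thread(target=server)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    try:
        cli.sendall(b"GET / HTTP/1.1\r\n\r\n")
        cli.recv(1024)                   # a graceful FIN would return b"" here
        saw_reset = False
    except ConnectionResetError:
        saw_reset = True
    finally:
        cli.close()
        t.join()
        srv.close()
    return saw_reset
```

With a plain close() (no SO_LINGER), the client's recv() would instead return an empty byte string, which is how a graceful *drop-style closure looks to the peer.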
Alternatively, you can have Fiddler return any HTTP/4xx or HTTP/5xx to the client.
Lastly, you could use a breakpoint to allow you to manually manipulate the request before it's sent to the server, and/or manipulate the server's response before it's sent to the client. Use the bpu command in the QuickExec box to break on a given URL (e.g. bpu sample.asp).

Related

HTTP Post under the hood

We have two Windows services (on the same machine) that communicate over the HTTP protocol.
On a specific machine we see the HTTP POST sent from the client (a Windows service) arrive at the server (a Windows service listening for REST calls) twice, meaning I get two identical HTTP POST requests at the service, but on the client side we see it was executed only once.
Before going to Wireshark to analyze the HTTP protocol, I wish to understand what explains this behavior.
When going to https://www.rfc-editor.org/rfc/rfc7231#section-4.3.3
"the origin server SHOULD send a 201 (Created) response containing a Location header
field that provides an identifier for the primary resource created"
I guess we should look in Wireshark for a 201 response? And if there is no response? Is the HTTP or network framework of my C# application retrying the POST? Because we don't see two requests sent from the client code.
POST reply behavior
While that is true, more often than not the server replies with a 200 OK status code and some extra information.
Whether this is by mistake or to avoid chatty apis or some other architecture/design consideration, only the developer can tell.
So in theory you get a 201 with an identifier and then make a GET request with said identifier to retrieve details.
In practice a lot of times this does not occur. So it is not safe to assume this behavior.
Your problem
I highly doubt that there is a built-in mechanism that retries a POST. There are plenty of reasons for that:
Duplicating entries. Imagine creating a PayPal payment. If the network has an error and you just did not receive the answer, a built-in retry mechanism would charge you twice.
There are libraries that retry, but only when you are sure that the request is idempotent, that is, the POST contains some sort of identifier so the second request will be rejected as a duplicate.
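To make that last point concrete, here is a minimal sketch of the idempotency-key approach such libraries take. The function name, the `send` callable, and the `Idempotency-Key` header name are illustrative assumptions, not any specific library's API:

```python
import uuid

def post_with_retry(send, payload, max_attempts=3):
    """Retry a POST safely by attaching a client-generated idempotency key.

    `send` is any callable taking (headers, payload) and returning a status
    code, or raising ConnectionError on a network failure. Because the SAME
    key is reused on every attempt, the server can detect and deduplicate
    a retried request instead of creating the entry twice.
    """
    headers = {"Idempotency-Key": str(uuid.uuid4())}  # minted once, reused
    last_error = None
    for _ in range(max_attempts):
        try:
            return send(headers, payload)
        except ConnectionError as exc:
            # Response may have been lost mid-way; retrying is only safe
            # because the key lets the server recognize the duplicate.
            last_error = exc
    raise last_error
```

Without the key, resending after a lost response would risk exactly the double-charge scenario described above.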
First, the calls are HTTP GET (not POST).
We define the URL with a hostname/FQDN; the workaround that avoided the duplicated calls was to use the IP address instead of the hostname when sending the REST API call.
This is the long explanation of the problem; no root cause yet.
We used both Wireshark and Process Monitor to diagnose, but are not sure of the root cause.
Process Monitor: Filtering to display network calls
Wireshark: Filter to show only HTTP
The client sends a single HTTP GET request to:
/DLEManagement/API/Engine/RunLearningPeriod
The call was executed at 11:08:16.931906
We can see a second call at 11:08:54.511909 that we did not trigger.
The HTTP GET is executed from *Server.exe (in red) and the server is *Management.Webservice.exe (in red).
We see that a *Client.exe (an antivirus process, in blue) is sending TCPCopy packets in the window between when we sent and received.
Also, we can see that the first request was made over an APIPA IPv6 address and the second call over IPv4. We checked the network interface and it was disabled.
Wireshark screenshot:
Process Monitor screenshot:
Network configuration:

Redirecting/proxying an HTTP request to a secondary HTTP server

I use a C++ library called restbed as a webserver to deliver static HTML files. On the same machine I have another webserver running, and I would like to redirect some of the incoming connections to restbed. Depending on the request I would make the decision to redirect certain requests to the other server.
Is it technically possible and advised to connect two sockets with each other, if I get access to the underlying socket of the incoming HTTP connection?
If not, what would be a common approach for this? I can only have one TCP port for both services.
Yes, you can respond to an HTTP request by opening a connection to another HTTP server, forwarding the request to that server, and then forwarding the response back to the original client. In fact it's common for Internet-facing systems to include some kind of "front end" or "reverse proxy" or "L7 load balancer" or "API gateway" that does exactly this, often applying some kind of authentication, input validation, or routing logic in the process.
If you're building this yourself, it's not quite as simple as just opening a socket to the second HTTP server and forwarding the request verbatim. You should use some HTTP client library to send the request to the second server. In other words, the HTTP server that receives the original request should then turn around and be an HTTP client for the second server. When preparing the request for the second server, you should copy some but not all of the data out of the original request.
You should copy the HTTP method and URL.
You should probably not copy the scheme (http: or https:) because how the client chose to connect to the original server doesn't have to influence how that server will connect to the second server; you might be using HTTPS for the original server but forward requests using HTTP.
You should not copy the Host header unless for some reason the second server has been configured to respond to the same host name as the original server.
You should not copy headers that will confuse the HTTP client library you're using to connect to the second server. For example, if the client sends you Accept-Encoding: gzip then it is claiming to be able to accept gzipped responses, but if you forward that header, the second server will think that the HTTP client library you're using in your server can accept gzipped responses, whether it actually can or not.
You should forward conditional request headers such as If-None-Match and If-Modified-Since if you want the second server to be able to send 304 Not Modified when the client already has the file.
If you're just serving static files from the second server, then you can probably get something to work just by sending the HTTP method and URL only and ignoring the other request headers.
It's a similar story on the response side. You should probably copy some headers like Content-Type, but others, like Content-Length, will be set by your server, so you should not copy those headers. Try starting out by copying no headers and see if it works, then copy individual headers to address issues you discover. You will probably at least need to copy Content-Type.
HTTP has a lot of features, and I can't hope to go through all the possible situations here. The point I want to get across is that you can't just copy all the headers from one request or response into the other, because some of them may not apply, but you can't just copy none of them either. You have to understand what the headers do and handle them appropriately.
Which headers you should preserve depends a lot on how much handling of the request and response you're doing in the first server. The more the first server handles or interprets the request and/or response, the more its interaction with the second server becomes independent of its interaction with the client, and the fewer headers you should copy.
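As a starting point for the header handling described above, here is a small filtering helper. It is a sketch, not a complete proxy: the set contents combine the standard hop-by-hop headers (which must never be forwarded) with the request/response headers discussed above, and the names (`forwardable`, `DROP_REQUEST`, etc.) are my own:

```python
# Hop-by-hop headers apply to a single connection and must not be forwarded.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}

# Request headers to drop, per the discussion above: Host (second server has
# its own), Accept-Encoding (reflects the original client, not our client
# library), Content-Length (the client library recomputes it).
DROP_REQUEST = HOP_BY_HOP | {"host", "accept-encoding", "content-length"}

# Response headers to drop: Content-Length is set by our own server when it
# writes the response back to the original client.
DROP_RESPONSE = HOP_BY_HOP | {"content-length"}

def forwardable(headers, drop):
    """Return only the headers that are safe to copy when proxying."""
    return {k: v for k, v in headers.items() if k.lower() not in drop}
```

A proxy would then call `forwardable(request_headers, DROP_REQUEST)` before sending to the second server, and `forwardable(response_headers, DROP_RESPONSE)` before replying to the client, adding back headers like Content-Type explicitly as issues are discovered.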

Handle REST API timeout in time consuming operations

How is it possible to handle timeouts for time-consuming operations in a REST API? Let's take the following scenario as an example:
A client service sends a request to insert a resource through a REST API.
Timeout elapses. The client thinks the insertion failed.
The REST API keeps working and finishes the insertion.
The client never learns of the resource insertion, and its status is "Failed".
I can think of a solution with a message broker: send the orders to a queue and wait until they are processed.
Any other workaround?
EDIT 1:
The POST-PUT pattern, as suggested in this thread.
A message broker (adds more complexity to the system).
A callback or webhook: pass a return URL in the request that the server API can call to let the client know that the work is completed.
HTTP offers a set of properties for its methods: primarily safety, idempotency, and cacheability. While the first guarantees a client that no data is modified, the second promises that a request can be reissued when connection issues leave the client not knowing whether the initial request succeeded and only the response got lost midway. PUT, for example, provides exactly this idempotency property.
A simple POST request to "insert" some data does not have any of these properties. A server receiving a POST request furthermore processes the payload according to its own semantics. The client does not know beforehand whether a resource will be created or if the server just ignores the request. In case the server created a resource the server will inform the client via the Location HTTP response header pointing to the actual location the client can retrieve information from.
PUT is usually used only to "update" a resource, though according to the spec it can also be used in order to create a new resource if it does not yet exist. As with POST on a successful resource creation the PUT response should include such a Location HTTP response header to inform the client that a resource was created.
The POST-PUT creation pattern separates the creation of the URI from the actual persistence of the representation by first firing off POST requests to the server until a response is received containing a Location HTTP response header. This header is then used in a PUT request to actually send the payload to the server. As PUT is idempotent, the client can simply reissue the request until it receives a valid response from the server.
On sending the initial POST request to the server, a client can't be sure whether the request reached the server and only the response got lost, or the initial request didn't make it to the server. As the request is only used to create a new URI (without any content yet) the client may simply reissue the request and in worst case just create a new URI that points to nothing. The server may have a cleanup routine that frees unused URIs after a certain amount of time.
Once the client receives the URI, it simply can use PUT to reliably send data to the server. As long as the client didn't receive a valid response, it can just reissue the request over and over until it receives a response.
I therefore do not see the need to use a message-oriented middleware (MOM) using brokers and queues in order to guarantee reliable messaging.
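A sketch of the pattern from the client's point of view. The `post` and `put` callables stand in for whatever HTTP client library you use (any names here are illustrative): `post` returns the Location of a freshly minted, still-empty URI, and `put` uploads the payload to it.

```python
def create_resource(post, put, payload, max_attempts=3):
    """POST-PUT creation pattern: mint a URI first, then persist idempotently.

    `post` returns a Location string or raises ConnectionError;
    `put(location, payload)` persists the payload or raises ConnectionError.
    """
    # Phase 1: obtain a URI. Reissuing POST is harmless here: the worst
    # case is an orphaned, empty URI that the server can clean up later.
    location = None
    for _ in range(max_attempts):
        try:
            location = post()
            break
        except ConnectionError:
            continue
    if location is None:
        raise RuntimeError("could not obtain a URI from the server")

    # Phase 2: persist the payload. PUT is idempotent, so replaying it
    # until a valid response arrives cannot create duplicates.
    for _ in range(max_attempts):
        try:
            return put(location, payload)
        except ConnectionError:
            continue
    raise RuntimeError("could not persist the payload")
```

The key property is that neither retry loop can duplicate user data: the POST carries no content, and the PUT always targets the same URI.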
You could also cache the data after a successful insertion together with a previously exchanged request_id or something of that sort. But I believe a message broker with some asynchronous task runner is a much better way to deal with the problem, especially if your request threads are a scarce resource. What I mean is: if you are receiving a good amount of requests all the time, then it is a good idea to return your responses as quickly as possible so the workers are available for incoming requests.

GET rest api fails at client side

Suppose a client calls a server using a GET API; is it possible that the server sends a response but the client misses it?
If yes, how do I handle such a situation? I want to make sure the client receives the data. For now I am using a second REST call by the client as an ack of the first.
It is certainly possible. For example, if you send a request to a REST API and your internet connection dies just as the answer is supposed to arrive, then it is quite possible that the server received your request, handled it successfully, and even sent the response, but your computer never received it. It could equally be an issue on an intermediate server transmitting the request. The solution to this kind of issue is to set a timeout and, if the request times out, resend it until you get a response; this is safe because GET is idempotent.
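Sketched in Python, with the transport abstracted away (`fetch` is a stand-in for whatever HTTP client you use; the function name and parameters are illustrative):

```python
def get_with_retry(fetch, url, attempts=3, timeout=2.0):
    """Reissue a timed-out GET until a response arrives.

    `fetch` is any callable with the shape fetch(url, timeout=...) that
    raises TimeoutError when no response arrives in time. Retrying a GET
    is safe because the method is idempotent: repeating it cannot change
    server state, regardless of whether earlier attempts reached it.
    """
    for attempt in range(attempts):
        try:
            return fetch(url, timeout=timeout)
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
```

With a real client library, `fetch` would wrap the library's GET call; the retry logic itself stays the same.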

How can I tell data arrives on a HTTP keep-alive connection?

I am implementing a simple web server using libuv. Currently I am stuck with the keep-alive connection.
Based on my understanding of keep-alive, I just do not call uv_close() on the established connection (a TCP socket) after a request is processed, and reuse it later.
I am wondering how can I tell a new request arrives on that connection?
That is, when should I call uv_read_start() on that alive connection?
When you use keep-alive, the connection will not be closed after the first request. When the client wants to send a new request, it will just reuse the same connection, so your read callback will be called again. You shouldn't even need to call uv_read_start() again.
Call uv_read_start() immediately after you have finished writing the prior response.
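The same keep-alive flow can be sketched with Python's asyncio instead of libuv (a toy protocol that treats each line as a request, with no real HTTP parsing): the server simply keeps its read pending on the open connection, and the read completes again whenever the client sends the next request.

```python
import asyncio

RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"

async def handle(reader, writer):
    # Keep-alive: do not close after the first request. The pending read
    # on the same connection simply completes again when the next request
    # arrives, which mirrors libuv invoking your read callback again.
    while True:
        request = await reader.readline()    # one line per "request" (toy)
        if not request:
            break                            # empty read: client closed
        writer.write(RESPONSE)
        await writer.drain()                 # response written; keep reading
    writer.close()
    await writer.wait_closed()
```

The loop shows the shape of the answer above: reading stays armed across requests, and the only close happens when the client ends the connection.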