Why didn't Fiddler show this activity?

We have a Client Toolkit provided by our partner that allows us to access their web services. It started giving errors yesterday on any call, and initially their support wanted us to provide a Fiddler log. I tried to do so; however, there was no activity shown in Fiddler when the call was made.
From this I would have assumed that the error would have to have occurred before an actual web request was sent out. However, the issue turned out to be an update they did that requires an SSL connection. They rolled back the change but advised us to update our calls to use https so they can re-implement their update.
So if the change was on their end, that means that communications obviously were going on with their server. Why wouldn't that have shown up in Fiddler? Are there scenarios where communications occur but a request isn't fully created or something like that? I just assumed that if there was any communication whatsoever that "something" would show up in Fiddler.

Related

How to get requests coming out of Riot/League client

Fiddler doesn't show outgoing API requests.
Using an IFEO debugger shows localhost requests that are not useful to me; I need the actual domains.
Someone told me that the client ignores the Windows proxy and that I need an application-level proxy, but the client has protection against that, so there's even more to it.
Has anyone tracked LoL/Riot's requests before and knows how to do that?

In JMeter, is there a way of testing an autocomplete that cancels requests?

I'll start this question with 'this is not the same as the previous one'. I can see straightaway that an almost identical question has been asked but the answer is not what I'm after. I will explain...
I need to test an autocomplete search box in a web page. Normally I'd just do a series of requests, each containing one extra letter of the search term (which is the answer to the other question similar to this). The problem is, that's not how the page behaves. It does submit a new request each time I type a letter, but it cancels the previous one instead of letting it continue. Therefore the only one that actually gets to an HTTP 200 response is the very last one.
This blog contains an example of what I'm seeing:
Autocomplete and request cancellation
But about halfway down it shows our test condition:
Client cancellation must also be supported by the search backend. Backend that doesn’t support cancellation continues processing request even after client disconnects.
I need to write a JMeter script that replicates a series of cancelled requests followed by a single successful request, such that when I look at the backend I see either multiple running queries (bad) or just the last one (good).
Edit: I've also hit a follow-up issue: how to identify cancelled requests in the web server logs. It looks like I'm only seeing requests if they are allowed to complete (i.e. if I pause between letters). If the requests are cancelled, they don't appear in the log at all. So how do I verify that they happened? If we import the logs into a visualization tool, are we going to be missing the 'cancelled' requests?
"Request cancellation" is nothing more than closing the connection from the client side
The easiest way of implementing it in JMeter is setting the response timeout, the setting lives under "Advanced" tab of the HTTP Request sampler or even better HTTP Request Defaults)
Just set this timeout to be lower than the threshold configured in your frontend and JMeter will close the connection making the backend "think" that the autocomplete request has been aborted because the user is still typing.
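For illustration, here is a minimal Java sketch (not a JMeter script) of the same idea; the host, query parameter and the 500 ms threshold are made-up values. An aggressive client-side timeout closes the connection, which is exactly what the backend perceives as a cancelled autocomplete request.

// Illustrative only: a plain-Java analogue of JMeter's "Response Timeout".
// The URL and the 500 ms threshold are placeholders, not values from the question.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class AutocompleteCancellationSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String[] keystrokes = {"j", "jm", "jme", "jmet", "jmete", "jmeter"};

        for (int i = 0; i < keystrokes.length; i++) {
            boolean last = (i == keystrokes.length - 1);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/autocomplete?q=" + keystrokes[i]))
                    // Short timeout for the "abandoned" keystrokes, a generous one for the last.
                    .timeout(last ? Duration.ofSeconds(30) : Duration.ofMillis(500))
                    .GET()
                    .build();
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(keystrokes[i] + " -> HTTP " + response.statusCode());
            } catch (HttpTimeoutException e) {
                // Timing out closes the connection: to the backend this looks
                // like the user cancelled the request by typing another letter.
                System.out.println(keystrokes[i] + " -> cancelled (connection closed by client)");
            }
        }
    }
}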

REST API status when external APIs are down - Best Practices

I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API, so I can take it in any direction.
In my case, the client invokes my API, which internally invokes some external APIs. In the success case there is no problem, but when the far end (the external cloud APIs) returns an error response, I am not sure what the industry standard is for such services. I am currently thinking of returning 200 OK with a JSON payload that details the external API errors.
So what are the industry recommendations? What are good practices (please explain why!), and also, from a client's point of view, what kind of error handling in the REST API makes life easier for the client code?
The failure you're asking about is one that has occurred within the internals of the service itself, even though it stems from external dependencies, so a status code in the 5XX range is the correct choice. 503 Service Unavailable looks perfect for the situation you've described.
5XX codes are used to tell the client that even though the request was fine, the server has had some kind of problem fulfilling it. On the other hand,
4XX codes are used to tell the client that it has done something wrong in the request (and that the server is just fine, thanks).
Sections 10.4 and 10.5 of the HTTP 1.1 spec explain the different purposes of 4XX and 5XX codes.
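As a rough illustration of that advice, here is a hedged sketch of mapping an upstream failure to a 503, assuming a Spring Boot service; the /weather endpoint, the ExternalWeatherClient and the exception type are invented for the example.

// Hypothetical Spring Boot controller: the endpoint, the upstream client
// and the exception type are made up for illustration.
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class WeatherController {

    private final ExternalWeatherClient upstream = new ExternalWeatherClient();

    @GetMapping("/weather")
    public ResponseEntity<String> weather() {
        try {
            // Call the external cloud API the question refers to.
            return ResponseEntity.ok(upstream.currentConditions());
        } catch (UpstreamUnavailableException e) {
            // The client's request was fine; the problem is on the server's side
            // of the conversation, so a 5XX (here 503) is the honest answer.
            return ResponseEntity
                    .status(HttpStatus.SERVICE_UNAVAILABLE)
                    .body("{\"error\":\"upstream dependency unavailable, try again later\"}");
        }
    }

    // Placeholder types so the sketch compiles on its own.
    static class UpstreamUnavailableException extends RuntimeException {}
    static class ExternalWeatherClient {
        String currentConditions() { throw new UpstreamUnavailableException(); }
    }
}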
Our colleagues have already provided links and explanations about the HTTP status codes, so you should learn them and find the most appropriate one for your case.
I'll concentrate more on what can influence your decision, assuming you've learnt the status codes.
Basically, you should understand the business implications of the flow triggered when the client calls your API. The client doesn't know anything about the external cloud API you're working with and doesn't really care whether it works or not; the client works with your application.
So when the remote system returns some kind of error (and yes, different error statuses should give you a clue about what's wrong with the remote system), it's your business decision how to handle this error, and depending on this decision you might want to behave differently in the interaction with the client.
Here are some examples:
You know that the remote system breaks extremely rarely, but once it's unavailable, your system doesn't work either.
In this case you might consider retrying the call to the remote system if it fails. And if you're still out of luck, return some error status, probably something in the 5XX range (see the sketch below).
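A minimal sketch of that retry-then-give-up strategy; the functional interface, the attempt count and the fixed 200 ms back-off are illustrative choices, not values from the answer.

// Sketch of "retry the remote call, then let the web layer answer with a 5XX".
// The retry count and back-off are arbitrary example values.
public class RetryingClient {

    /** Minimal stand-in for whatever remote call your service makes. */
    interface RemoteCall<T> {
        T invoke() throws Exception;
    }

    static <T> T callWithRetry(RemoteCall<T> call, int maxAttempts) throws Exception {
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.invoke();
            } catch (Exception e) {
                lastFailure = e;
                Thread.sleep(200);   // simple fixed back-off between attempts
            }
        }
        // Every attempt failed: rethrow so the caller can translate this into a 5XX.
        throw lastFailure;
    }
}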
You know that the data provided by the remote service is not really important; on the other hand, when the client calls your API it's better to provide "something", even if it's not really up to date, than nothing. Think about a remote system that provides "recommended movies" for some client id, while you're building a portal (Netflix style). If this recommended-movies service is down for some reason, it doesn't make sense to fail the whole portal page (think about the awful user experience). In this case you might want to pre-cache some generic list of movies and use it as a fallback in case that remote service fails. In this case you should obviously return a 2XX status either way (a sketch follows below).
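A sketch of that fallback idea; the RemoteRecommender and the cached titles are invented for illustration, and in a real system the fallback list would come from a cache rather than a constant.

// Sketch of "serve a pre-cached generic list when the recommendation service fails".
import java.util.List;

public class RecommendationsService {

    // Stand-in for a pre-cached generic list of movies.
    private static final List<String> GENERIC_FALLBACK =
            List.of("Popular movie A", "Popular movie B", "Popular movie C");

    private final RemoteRecommender remote = new RemoteRecommender();

    public List<String> recommendedMovies(String customerId) {
        try {
            return remote.recommendationsFor(customerId);
        } catch (Exception e) {
            // The portal page still renders and the API still returns 2XX;
            // the user simply sees a generic list instead of a personalised one.
            return GENERIC_FALLBACK;
        }
    }

    // Placeholder so the sketch compiles on its own.
    static class RemoteRecommender {
        List<String> recommendationsFor(String customerId) {
            throw new IllegalStateException("recommendation service is down");
        }
    }
}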
A more advanced architecture: you know that the remote service fails often, and you can continue to work while it's down. In this case you may want to choose an "asynchronous" style of interaction with the client. For example: the client calls your REST endpoint and you respond immediately with an "Accepted" status code (202) and a ticket id. You can save this id along with a status in some database, so that when the user asks for the status of the ticket by ticket id you'll be able to query the DB. The point is that you return immediately. Then you might send a message with the task to some messaging system, and once a consumer picks up the message it gets processed and the DB is updated. As long as the remote service keeps failing, the message goes back to the queue still "unprocessed" (messaging systems can usually implement this behaviour). At some point the remote system starts responding again, all the messages get processed, and their status in the DB becomes "done".
So it's up to the client to ask "what happened", or you can implement some push model with WebSockets or the like (it's not REST-style communication anymore in that case). But the point is that at some point in time the client will receive "OK, we're done with that ticket id" (status 200). The client can then call a special endpoint and consume the stored results, which you keep in the DB as well (again status 200). A sketch of this flow follows below.
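A very rough sketch of the ticket flow, assuming Spring Boot; the /orders endpoints, the in-memory map standing in for the database and the omitted message queue are all invented for the example.

// Hypothetical Spring Boot sketch of the asynchronous "ticket" flow.
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TicketController {

    // Stand-in for the database that keeps ticket statuses.
    private final Map<String, String> ticketStatus = new ConcurrentHashMap<>();

    @PostMapping("/orders")
    public ResponseEntity<String> submit() {
        String ticketId = UUID.randomUUID().toString();
        ticketStatus.put(ticketId, "PENDING");
        // In a real system the task would be published to a message queue here
        // and processed once the remote service is reachable again.
        return ResponseEntity
                .status(HttpStatus.ACCEPTED)   // 202: "we've got it, come back later"
                .body("{\"ticketId\":\"" + ticketId + "\"}");
    }

    @GetMapping("/orders/{ticketId}")
    public ResponseEntity<String> status(@PathVariable String ticketId) {
        String status = ticketStatus.getOrDefault(ticketId, "UNKNOWN");
        return ResponseEntity.ok(
                "{\"ticketId\":\"" + ticketId + "\",\"status\":\"" + status + "\"}");
    }
}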
Bottom line: as you can see, HTTP return codes are just an indicator. It's up to you how to organize the interaction with the client, and the relevant HTTP statuses will follow from those decisions.
I would use 503 Service Unavailable as the error. Reason:
This is considering the case where the API operation cannot be completed without a response from the external API. It is similar to my DB not responding: my API is unavailable for service until the external service is back online.
As an API client, I am not concerned with whether the API server internally invokes other APIs or not. I am only concerned with the result from the API server. So it does not matter to the client whether I am a proxy or not; hence, I would avoid 502 (Bad Gateway) and 504 (Gateway Timeout). These errors can lead the client to the wrong assumption that a gateway between the client and our service is causing the trouble.
As suggested by @developerjack, I would also recommend: "Include a Retry-After header so that your HTTP client knows not to spam you with retries until after X time. This means less error traffic for you, and better request planning for the client."
HTTP calls are between a client and a server, and so the error codes should reflect which side of that relationship the error or fault lies on. Just because the fault is downstream of you doesn't mean the HTTP client needs to care about that.
Given this, you should be returning a 5xx error because the fault is not with the client, it's with the server (or its downstream services). Returning a 2xx (see below for a caveat) would be incorrect because the HTTP call did not succeed, and a 4xx would be incorrect because it's not the client's fault.
Digging into specific 5xx's you can return:
A 504 or 502 might be appropriate if you specifically want to signal that your service is acting as a gateway/proxy.
A 523 is unofficial, but it is used by Cloudflare to specifically signal that an upstream/origin server is unreachable.
A 500 (with a human and machine readable error body) is a safe default that simply indicates "there is something not right with the server and its services right now".
Now, in terms of best practice, there are some techniques you can use to either reduce the 500 errors, or make it easier on the clients to respond/react to this 5xx response.
Put retries in place within your service. If your service is working, the fault is downstream, and you can successfully store the client's request to retry later when downstream services are available, then you can still respond with a 2xx and the client knows that their request will be processed. A great example of this might be a user sign-up workflow: you can process the signup on your side, and then queue the welcome email to be retried later if your email provider is unavailable (see the sketch below).
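A small sketch of that sign-up idea; the SignupService, the in-memory queue and the WelcomeEmail type are invented stand-ins for a real durable queue and email provider.

// Sketch: accept the sign-up, queue the welcome email for later delivery,
// and let the web layer answer 2xx immediately. The types are illustrative.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SignupService {

    record WelcomeEmail(String address) {}

    // Stand-in for a durable message queue drained by a background worker
    // that retries against the email provider until it succeeds.
    private final BlockingQueue<WelcomeEmail> emailQueue = new LinkedBlockingQueue<>();

    /** Returns true so the web layer can respond with 200/201 straight away. */
    public boolean signUp(String emailAddress) {
        // 1. Persist the account (omitted).
        // 2. Queue the welcome email instead of calling the provider inline.
        emailQueue.offer(new WelcomeEmail(emailAddress));
        return true;
    }
}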
Have human descriptions, machine error codes and links in your API responses. Human descriptions are useful when debugging and developing against your service. Machine codes mean clients can index/track and code up specific code paths for a given scenario, and links to your docs mean you can always provide more information. Even better is including a specific ID for you to trace instances of this error in case the HTTP client needs to reach out for support (though this will be heavily dependent on your logging & telemetry). Here's an example:
{
  "error_code": 1234,
  "description": "X happened with Y because of Z.",
  "learn_more": "https://dev.my.app/errors/1234",
  "id": "90daa63b-f5ac-4c33-97d5-0801ade75a5d"
}
Include a Retry-After header so that your HTTP client knows not to spam you with retries until after X time. This means less error traffic for you, and better request planning for the client.
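For concreteness, a hedged sketch of what that looks like in code, again assuming Spring Web; the 120-second value is an arbitrary example.

// Sketch: a 503 response carrying a Retry-After header (value in seconds).
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

public class RetryAfterExample {

    static ResponseEntity<String> upstreamDown() {
        return ResponseEntity
                .status(HttpStatus.SERVICE_UNAVAILABLE)   // 503
                .header("Retry-After", "120")             // ask clients to wait ~2 minutes
                .body("{\"error_code\":1234,\"description\":\"Upstream dependency unavailable.\"}");
    }
}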

Forbidden message while executing a REST request through JMeter

We have come across a similar problem and need your help to resolve it.
Could you please either let us know your contact number so that we can reach out to you, or share your script if possible so that we can refer to it?
Here is the problem we are stuck with:
I am trying to test a REST service through an HTTP sampler in JMeter. I am not sure how to capture the token generated by one sampler and use it for authorization in the HTTP Header Manager of another request.
LoadRunner is not displaying the web address when we try to enter it in the TruClient browser. The problem is that this web address automatically redirects to another web address, which is the authentication server.
Can you please suggest another solution for the issue below?
Here is the exact scenario we are trying to achieve:
We want to load test the portal; however, due to the redirect and the different authentication method being used, we are unable to do it using the TruClient protocol in LoadRunner. We also tried the multiple-protocol option, selecting LDAP, SMTP, HTTP/HTML, etc., but with no luck.
Thank You,
Sonny
Architecturally, JMeter is going to be the HTTP-protocol-layer equivalent of LoadRunner, with the exception of the number of threads per browser emulation.
Rather than the request for code, I want to visualize the problem architecturally. You mention a redirect: is this an HTTP 301/302 redirect, or one that is handled with information passed back to the client, processed on the client, and then redirected to another host? You mention dynamic authentication via a header token: have you examined web_add_header() and web_add_auto_header() in LoadRunner web virtual users for passing extra header messages, including ones correlated from previous requests, such as the token being passed back as you note?
What is this authentication mechanism based upon? LDAP? Kerberos? Windows Integrated Authentication? Simple authentication based upon a username/password in a header? Can you be architecturally more specific about when this comes into play, such as on the first request to gain access to the test environment through the firewall, or on an nth request within a business process?
You mention RESTful services. These can be transport independent, such as being passed over SMTP using a mailbox to broker the data between client and server, or over HTTP similar to SOAP messages. Do you have architectural clarity on this? Could it be that you need to provide mailbox authentication across SMTP and POP3 to send and receive?

Encode request payload in GWT RPC call

I am using GWT to create my web app.
When making an RPC call from the client side (browser), Inspect Element shows my request payload as below:
7|0|8|https://xxxx.xxxx.in/TestProject/in.TestProject.Main/|87545F2996A876761A0C13CD750EA654|in.TestProject.client.CustomerClassService|check_User_Login|java.lang.String/2004016611|in.TestProject.Beans.CustomerBean/3980370781|UserId|Password|1|2|3|4|3|5|5|6|7|8|6|0|0|0|0|0|CustId|0|0|0|0|0|0|0|0|0|
In this request, all the details such as the username, password and CustId are visible in the request payload.
My question is: is it possible to encode or hide those details in the request payload?
You are looking at the wrong level of abstraction. What's the point of encoding/"hiding" these values in the payload? Everything you exchange between the server and client can be intercepted anyway... unless you use HTTPS, which ensures safe/encrypted communication between the server and client. Don't try to be "clever" and only encrypt part of the communication/payload; just use HTTPS.
But my concern is that the client itself should not be able to see which method call we are making, the parameter types in the request, the parameter values, etc. It should be hidden from the client.
But those parameter values were input by the user himself or are hardcoded somewhere in the application (which the user will always be able to see/decipher, because his browser has to). So what you are trying to achieve is security through obscurity, which is never a good idea. I'd focus my attention and efforts on securing the endpoints (the GWT-RPC services), validating the input sent there, etc.
You have to remember one thing - that the user has access to the source code (compiled and minified, but still) of the client-side part of your application. So:
He'll always be able to figure out how to communicate with your server, because your application has to.
He can modify the application to send malicious requests, even if you created some hypothetical way of encoding parameters/addresses: just find the place right before the encoding is done, and voila. Firebug and other developer tools will help immensely with this.
So "securing" the client side in this way is meaningless (of course, CSRF, XSS, etc. should still be your concern); a malicious user will always bypass it, because you have to give him all the tools to do it. Otherwise a "normal" user (or rather his browser) wouldn't be able to use your application.