I have used Fiddler to post manually to a method of my service for testing. This is a long-running method (it takes several minutes).
But after my method returned, Fiddler didn't show the response (just a hyphen in the Result column).
Please give me advice on how to troubleshoot this problem.
Thanks.
MillDol.
I'm building a JMeter script in which I'm sending TCP messages (both fixed length and variable length) to a server. The problem is that the server doesn't send anything at the end of the response message to indicate that the response has ended, so the test run keeps on running, and if I stop it manually it gives a 500 (Read Exception). I've worked around this by adding a response timeout and a response assertion, but when I load test my script all the requests fail. I've tried appending \n and \r, setting the end-of-line byte to 10, etc., but all in vain.
I've already gotten some opinions that this is due to server-side settings, but my question is what exactly those settings are, because I have to explain this blocker to non-technical people. So is there any way this issue can be overcome, or can anyone tell me exactly which server-side settings need to be configured?
If your requests fail, then your Response Assertion doesn't really act as a "workaround": either you haven't configured it properly or you're receiving something the assertion doesn't expect.
We cannot help you efficiently without seeing:
Your Response Assertion configuration
TCP Sampler (or whatever Sampler you're using) response from the View Results Tree listener
At least a few lines from the .jtl results file containing the results of your "load test"
If you want to mark all the Samplers as successful regardless of the real outcome, you can use a JSR223 Assertion with the following one-liner:
prev.setSuccessful(true)
Make sure to put it "high enough" in the test plan so that all the Samplers are within the Assertion's scope.
With regards to the "server-side settings" we're supposed to "tell" you about: we don't know what "server" you're trying to test, so if it's something free and open source, we need to know the name of that piece of software.
If it's something your colleagues developed in house, I'm afraid we're not able to help at all; you'd better ask them.
I'm using Charles' Rewrite tool to change 200 responses to 400 in order to test failing API calls. However, the rewrite is triggering on the OPTIONS requests. I'd like to have it trigger only on the GET or POST requests and allow the OPTIONS requests through. Is this possible using Charles?
We were able to work around the issue by assuming that OPTIONS would always return an empty body.
The regex values below will match for GET (because it has a response body) and not match for OPTIONS (because it doesn't have a response body); a quick sanity check of this is sketched after the patterns.
\{[\S\s]*\}
or
\[[\S\s]*\]
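As that sanity check (plain JavaScript, with made-up example bodies; this is just an illustration, not part of the Charles configuration), the object pattern matches a typical JSON response body and fails on an empty OPTIONS body:

// Made-up bodies: a JSON object (typical GET/POST response) matches,
// an empty OPTIONS body does not, so only the former gets rewritten.
const bodyPattern = /\{[\S\s]*\}/;
console.log(bodyPattern.test('{"id": 1, "status": "ok"}')); // true  -> rewrite fires
console.log(bodyPattern.test(''));                          // false -> OPTIONS passes through untouched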
I think Charles does not have this option, which is a real pity, because it seems easy to implement and it would open the door to the API world.
I would suggest that you ask Karl (the author and main developer) for this new feature via the contact section of the site.
We have this exact same need to mock API responses. Since the Rewrite tool doesn't support this feature, we have set up Breakpoints on the responses we want to mock; once a breakpoint is hit, we change the response to whatever we want. It works, but it is less than ideal.
Unfortunately, Charles doesn't have a feature to filter requests by HTTP method.
It's not a direct answer, but you can achieve this with the Scripting tool from Proxyman.
function onResponse(context, url, request, response) {
  // Update status code
  response.statusCode = 500;
  // Done
  return response;
}
Here is a snippet of what you can do with JS code.
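If you only want to change GET or POST responses and leave OPTIONS alone, a variation along these lines should work; this assumes the request object exposes the HTTP method (written here as request.method, an assumption worth checking against the Proxyman scripting docs), and uses 400 to match the original question:

function onResponse(context, url, request, response) {
  // Only force a failure for GET/POST; OPTIONS (CORS preflight) and anything else passes through.
  if (request.method === "GET" || request.method === "POST") {
    response.statusCode = 400;
  }
  return response;
}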
Disclaimer: I'm the creator of Proxyman. Since many people struggle with this problem, hopefully the Scripting tool can help you.
In Charles, you can use the Breakpoints tool. FYR: https://tanaschita.com/20220307-manipulating-network-requests-and-responses-with-charles/.
As far as I understand it, in a CQRS-oriented API exposed through a RESTful HTTP API, commands and queries are expressed through the HTTP verbs, with commands being asynchronous and usually returning 202 Accepted, while queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you could simply return what they want in the HTTP response to the command, in a single request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was about "one-way commands", and this is what I think you are assuming. You can find out that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; you should give some meaningful result back. Some people return 200 OK along with an object that contains a URL for retrieving the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous, since it can give consumers the wrong idea about the system state.
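For illustration, here is a minimal Express-style sketch of that approach; the route, the command bus, and the field names are all hypothetical. The POST acknowledges the command, hands back a URL where the resulting state can be queried, and maps a handler failure to 500:

// Minimal sketch (hypothetical names throughout) of a command endpoint
// that returns something meaningful instead of a bare, empty 202.
const express = require('express');
const app = express();
app.use(express.json());

// Placeholder command bus: dispatches the command and resolves with the new aggregate id.
const commandBus = {
  send: async (command) => 'widget-123',
};

app.post('/widgets', async (req, res) => {
  try {
    const id = await commandBus.send({ type: 'CreateWidget', payload: req.body });
    res.status(202)
      .location(`/widgets/${id}`)                 // where the client can query the result
      .json({ id, href: `/widgets/${id}` });      // meaningful body instead of an empty ack
  } catch (err) {
    res.status(500).json({ error: err.message }); // the command failed: tell the caller
  }
});

app.listen(3000);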
My team also recently had a very heated discussion about this very thing, so thanks for posting the question. I have usually been the defender of "fire and forget" style commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything; doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start considering whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in Node.js. Instead of requiring a POST command to return only a blank 202, we return details of the newly created resource, which helps the front end move on. The front end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front end is happy. The middleware can easily be disabled, making the API synchronous again. (A rough sketch of this flow is below.)
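Roughly, the shape of the idea looks like this (a simplified sketch using Express and socket.io, with made-up handler and header names; it is not the actual middleware described above):

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.json());
const server = http.createServer(app);
const io = new Server(server);

// Placeholder for the real command handler sitting behind the service bus.
async function handleCreateWidget(payload) {
  return { id: 'w-123', ...payload };
}

app.post('/widgets', async (req, res) => {
  const commandId = req.header('X-Command-Id'); // the client listens on a socket channel with this name
  res.status(202).json({ commandId });          // acknowledge right away

  const widget = await handleCreateWidget(req.body);
  io.emit(commandId, widget);                   // "return" the result over the web socket
});

server.listen(3000);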
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to just make a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean you cannot do a read in the same request as the command. Sure, you can send a command to the server and then, with SignalR (or something similar), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.
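As a sketch of the fully synchronous variant (Express again, with hypothetical command, projection, and query functions), the command, the projection update, and the query all happen within the single request:

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical write-side and read-side functions.
async function executeCreateWidget(payload) { return 'widget-123'; }       // command handler
async function updateWidgetProjection(id) { /* refresh the read model */ } // projection
async function queryWidgetById(id) { return { id, status: 'created' }; }   // read model query

app.post('/widgets', async (req, res) => {
  const id = await executeCreateWidget(req.body); // command executed synchronously
  await updateWidgetProjection(id);               // projection built synchronously
  const widget = await queryWidgetById(id);       // query the read model straight away
  res.status(200).json(widget);                   // single round trip for the client
});

app.listen(3000);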
I'm not really sure what's going on, but today I've noticed that the Facebook API is working extremely slowly for me.
At first I thought it was a bug in my code, but I tried the Graph API Explorer, and even that is causing timeout errors half the time (just using /me):
Failed to load resource: the server responded with a status of 504 (Server timeout)
I don't think it's my internet connection, since everything else seems to be working quickly, and http://speedtest.net is giving me good results.
Is my problem somehow fixable, or is this just some sort of freak occurrence?
Has this happened for anyone else?
Do I need to consider the case where it takes exceedingly long for my application to receive a response?
I currently have a registration page that waits for an FB.api response (with a spinner GIF) before displaying the form. I could use a timeout to wait a few seconds and show the form if the API doesn't respond, but I'd really rather not have to repeat this same sort of logic in every API call that my application depends on...
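For what it's worth, one way to keep that logic in a single place is to wrap FB.api in a promise raced against a timer; a sketch is below, where the 5-second cutoff and the showFormWithProfile/showFormWithoutProfile helpers are placeholders of my own:

// Wrap FB.api in a promise and race it against a timeout so a slow Graph API
// can't hold the registration form hostage indefinitely.
function fbApiWithTimeout(path, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var timer = setTimeout(function () {
      reject(new Error('Graph API did not respond within ' + timeoutMs + ' ms'));
    }, timeoutMs);
    FB.api(path, function (response) {
      clearTimeout(timer);
      resolve(response);
    });
  });
}

// Usage: show the registration form either way after 5 seconds.
fbApiWithTimeout('/me', 5000)
  .then(function (profile) { showFormWithProfile(profile); })  // API answered in time
  .catch(function () { showFormWithoutProfile(); });           // fall back and show the form anyway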
EDIT: it has spontaneously fixed itself now. Still no clue what happened.
You can check the Facebook API live status at this URL:
https://developers.facebook.com/live_status
Today at 11:13pm: API issues. We're currently experiencing a problem that may result in high API latency and timeouts. We are working on a fix now.
I have a SOAP web service which I access through PHP's SoapClient (wrapped with Zend Framework's SOAP client). The web service runs over HTTPS, and the calls take quite some time (a few minutes each).
I am making 4 calls, one after another, through the same instance of SoapClient. However, after some time running, and at a random point (not always on the same method call), I see the following error:
Warning: SoapClient::__doRequest(): SSL: Broken pipe in pathtomyfile
I still have no idea why this happened, but I've got some extra insight and a workaround.
The issue arises when, after a SOAP call that took a really long time to run, I try to use the same connection for another request. The first one will succeed, but on the new call, the error is raised.
This means that, as long as you don't NEED the connection to be the same (which is usually the case with SOAP web services), you can just reset the connection between calls, e.g. by creating a new SoapClient instance for each call. Not the most efficient use of resources, but it works flawlessly.
I found that adding the
'keep_alive' => false
option to the $options array passed to
new SoapClient($url, $options)
solved the issue for me.
There is a related bug report here but very little documentation about it apart from that: https://bugs.php.net/bug.php?id=60329