Strange timeline on ClientConnected and ClientDoneRequest - Fiddler

I'm helping debug a friend's site that users have complained has a long connection time.
When I inspected it with Fiddler, I saw that the ClientDoneRequest and ClientConnected values looked quite strange:
URI requested : /
ACTUAL PERFORMANCE
--------------
ClientConnected: 11:40:07.859
ClientBeginRequest: 11:40:33.687
ClientDoneRequest: 11:40:33.687
Gateway Determination: 0ms
DNS Lookup: 0ms
TCP/IP Connect: 65ms
HTTPS Handshake: 0ms
ServerConnected: 11:40:33.750
FiddlerBeginRequest: 11:40:33.750
ServerGotRequest: 11:40:33.750
ServerBeginResponse: 11:40:33.687
ServerDoneResponse: 11:40:44.031
ClientBeginResponse: 11:40:44.031
ClientDoneResponse: 11:40:44.031
Overall Elapsed: 00:00:10.3437500
As you can see, ClientDoneRequest - ClientConnected is roughly 26 seconds ...
I have looked around but have no idea what leads to this problem.
Could somebody point me in the right direction please? :S
Thanks
P.S.: Fiddler version 2.3.0.0

http://groups.google.com/group/httpfiddler/browse_thread/thread/cd325dea517acc1d
That's entirely expected in cases where the client's request was sent
on a reused client socket. ClientConnected refers to the connection
time of the socket connection from the browser to Fiddler. Because
those socket connections may be reused, you can often see cases where
ClientConnected is even minutes earlier than ClientBeginRequest,
because the socket was originally connected for, say, request #1, and
then later reused for, say, request #12 a few seconds later, then
request #20 about 20 seconds later, and later request #35 nearly a
minute later, etc.
By default, a client socket is kept alive and may be reused if the next request
arrives within 30 seconds of the previous one (the preference is named
"fiddler.network.timeouts.clientpipe.receive.reuse").
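The numbers in the session above are consistent with that. A quick sketch in plain Java (used here only for the arithmetic; the timestamps are copied from the trace above, and the 30-second window is the default mentioned for that preference) shows the gap between ClientConnected and ClientBeginRequest is about 25.8 seconds, i.e. inside the reuse window:

import java.time.Duration;
import java.time.LocalTime;

public class ReuseWindowCheck {
    public static void main(String[] args) {
        // Timestamps copied from the Fiddler session above.
        LocalTime clientConnected = LocalTime.parse("11:40:07.859");
        LocalTime clientBeginRequest = LocalTime.parse("11:40:33.687");

        Duration gap = Duration.between(clientConnected, clientBeginRequest);
        System.out.println("Gap: " + gap.toMillis() + " ms");   // ~25828 ms

        // Default reuse window for a kept-alive client socket (see the pref above).
        Duration reuseWindow = Duration.ofSeconds(30);
        System.out.println("Within reuse window: " + (gap.compareTo(reuseWindow) < 0));
    }
}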

Just stumbled upon this question, and then this related web page that describes what all the timing entries mean:
http://fiddler.wikidot.com/timers

Related

How to send some TCP hex packets in JMeter

I'm working on a test scenario that tests a socket server over TCP with JMeter.
My test tree and TCP Sampler look like this:
I used BinaryTCPClientImpl for 'TCPClient classname'. It worked correctly and sent the hex packet (24240011093583349040005000F6C80D0A) to the server, and I received the packet on the server side too. After receiving the packet, the server answered and JMeter received the response packet correctly as well.
As you can see in the following test result, the TCP Sampler (Login Packet) was sent 4 times correctly and the responses are valid (404000120935833490400040000105490d0a).
The problem is that JMeter waits until the end of the timeout (in my case 2000 ms) for each request, and only then moves on to the next request. I don't want to wait for the timeout; I need the scenario to move forward without the wait.
I found the solution in the following question and it helped me:
Answer link
I just set the End of line (EOL) byte value to 10, which is the newline character in the ASCII table.
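For what it's worth, the reason this removes the wait: once an EOL byte is configured, the sampler can treat the response as complete as soon as that byte arrives instead of reading until the response timeout expires. A minimal sketch of that kind of read loop in plain Java (the helper name and the value 10 are illustrative; this is not JMeter's actual implementation):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class EolReader {
    // Read from the socket's input stream until the EOL byte (10 = '\n') is seen,
    // so the caller does not have to wait for a read timeout to know the response ended.
    public static byte[] readUntilEol(InputStream in, int eolByte) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
            if (b == eolByte) {
                break;          // end of response reached, return immediately
            }
        }
        return out.toByteArray();
    }
}

Since the login response shown above ends in 0d0a, the final 0a (decimal 10) terminates the read immediately.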

How to implement Socket.PollAsync in C#

Is it possible to implement the equivalent of Socket.Poll in the async/await paradigm (or the BeginXXX/EndXXX async pattern)?
A method which would act like NetworkStream.ReadAsync or Socket.BeginReceive but:
leave the data in the socket buffer
complete after the specified interval of time if no data arrived (leaving the socket in connected state so that the polling operation can be retried)
I need to implement IMAP IDLE, so the client connects to the mail server and then goes into a waiting state where it receives data from the server. If the server does not send anything within 10 minutes, the code sends a ping to the server (without reconnecting; the connection is never closed) and starts waiting for data again.
In my tests, leaving the data in the buffer seems to be possible if I tell the Socket.BeginReceive method to read no more than 0 bytes, e.g.:
sock.BeginReceive(b, 0, 0, SocketFlags.None, null, null)
However, I'm not sure whether it will indeed work in all cases; maybe I'm missing something. For instance, if the remote server closes the connection, it may send a zero-byte packet, and I'm not sure whether Socket.BeginReceive will act identically to Socket.Poll in this case.
And the main problem is how to stop Socket.BeginReceive without closing the socket.
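The question is about .NET, but for comparison, the same idea (wait for readability with a timeout, consume nothing, and keep the connection open so the wait can simply be repeated after sending the IDLE ping) can be sketched with a java.nio Selector. The method name and structure are illustrative only:

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class ReadPoller {
    // Wait up to timeoutMillis for the channel to become readable.
    // Returns true if data (or end-of-stream) is pending; nothing is consumed from
    // the buffer and the connection stays open, so the poll can simply be repeated.
    public static boolean pollRead(SocketChannel channel, long timeoutMillis) throws Exception {
        channel.configureBlocking(false);
        try (Selector selector = Selector.open()) {
            channel.register(selector, SelectionKey.OP_READ);
            return selector.select(timeoutMillis) > 0;   // 0 means the timeout elapsed
        }
    }
}

As with Socket.Poll, a peer close also makes the channel report readable; the next real read then returns -1 (end of stream), which is how the two cases are told apart.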

Python socket.getdefaulttimeout() is None, but getting "timeout: timed out"

How can I determine the numeric timeout value that is causing the stack trace below?
...
File "/usr/lib/python2.7/httplib.py", line 548, in read
s = self._safe_read(self.length)
File "/usr/lib/python2.7/httplib.py", line 647, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
timeout: timed out
After importing my modules, the result of socket.getdefaulttimeout() is None (note that this isn't the same situation as the one that produced the trace above, since reproducing that requires an 8-hour stress run on the system).
My code is not setting any timeout values (default or otherwise) AFAICT. I have not yet been able to find any hint that 3rd party libraries are doing so either.
Obviously there's some timeout somewhere in the system. I want to know the numeric value, so that I can have the system back off as it is approached.
This is Python 2.7 under Ubuntu 12.04.
Edit:
The connection is to localhost (talking to CouchDB listening on 127.0.0.1), so NAT shouldn't be an issue. The timeout only occurs when the DB is under heavy load, which implies to me that it happens only when the DB is backed up and cannot respond to requests quickly enough. That is why I would like to know what the timeout is, so I can track response times and throttle incoming requests once the response time exceeds something like 50% of the timeout.
Not knowing anything more, my guess would be that NAT tracking expires due to long inactivity, and unfortunately in most cases you won't be able to discover the exact timeout value. A workaround would be to introduce some sort of keep-alive packets into your protocol, if that is possible.

lwip - what's the reason a TCP socket blocks in send()?

I am making an application based on lwip; the application just sends data to the server.
After my app has been working for some time (about 5 hours), I found that the send thread hangs in the send() function; after about 30 minutes send() returns 0 and my thread runs again.
On the server side there is a keepalive with a 5-minute interval. When my app hangs, 5 minutes later the server closes the socket, but my app does not notice this and still hangs in send() until it gets the 0 return about 30 minutes later. Why does this happen?
1: If the upload speed is not enough to send the data, will it hang in send()?
2: Maybe the server has not read the data in time, making the send buffer fill up and hang?
How can I avoid these problems in my code? I have tried setting TCP_NODELAY and SO_SNDTIMEO and using select() before send(), but I still have this problem.
send() blocks when the receiver is too far behind the sender, i.e. when the socket's send buffer has filled up because the peer is not reading fast enough. recv() returns zero when the peer has closed the connection, which means you must close the socket and stop reading.
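To make the "send buffer full" case concrete, here is a small self-contained Java sketch (an illustration of the general TCP behaviour, not of the lwip API): the server side accepts the connection but never reads, and once the send and receive buffers fill up, a non-blocking write starts returning 0. A blocking send() would simply hang at that same point, which is what the application above is seeing.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SendBufferDemo {
    public static void main(String[] args) throws Exception {
        // Local server that accepts the connection but never reads from it.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));

        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();   // never read from, so data backs up

        client.configureBlocking(false);        // non-blocking, so write() returns 0 instead of hanging
        ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
        long total = 0;
        while (true) {
            chunk.clear();
            int written = client.write(chunk);
            if (written == 0) {
                break;                          // send and receive buffers are full
            }
            total += written;
        }
        System.out.println("Buffered " + total + " bytes before the socket stopped accepting data");

        peer.close();
        client.close();
        server.close();
    }
}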

too many open sockets with HttpClient?

I have the following code in a client using http-client 4.2.1:
PoolingClientConnectionManager mgr = new PoolingClientConnectionManager();
mgr.setMaxTotal(20);
HttpClient httpclient = new DefaultHttpClient(mgr);
I then have a try...finally and call httpPost.reset() after every post.
For some reason, I see the program taking up 110 ESTABLISHED HTTP connections to my server and 235 connections in CLOSE_WAIT (not TIME_WAIT).
What am I doing wrong? Is there a bug here? The maximum number of connections should be 20, or am I mistaken?
thanks,
Dean
Okay, never mind... someone was creating quite a few DefaultHttpClients in the code and I had missed that. It seems to be working now, except it keeps creating new sockets over and over for the same host (different URLs on the same host), resulting in a performance nightmare of very slow throughput :(... grrrrr.
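For anyone landing here with the same symptoms: the usual pattern is one shared client built on a single PoolingClientConnectionManager, a per-route limit raised above the default of 2, and every response entity consumed so the connection goes back to the pool instead of lingering (sockets stuck in CLOSE_WAIT are typically connections the server has closed that were never released or evicted). A rough sketch against http-client 4.2; the per-route value and the helper method are only examples:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class SharedClient {
    // One pooled client for the whole application, reused for every request.
    private static final PoolingClientConnectionManager MGR = new PoolingClientConnectionManager();
    static {
        MGR.setMaxTotal(20);
        MGR.setDefaultMaxPerRoute(20);   // default is only 2 per host
    }
    private static final HttpClient CLIENT = new DefaultHttpClient(MGR);

    public static String post(String url) throws Exception {
        HttpPost post = new HttpPost(url);
        try {
            HttpResponse response = CLIENT.execute(post);
            // Fully consuming the entity returns the connection to the pool so it
            // can be reused for the next request to the same host.
            return EntityUtils.toString(response.getEntity());
        } finally {
            post.releaseConnection();
        }
    }
}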