I have the following code in a client using HttpClient 4.2.1:
PoolingClientConnectionManager mgr = new PoolingClientConnectionManager();
mgr.setMaxTotal(20);
HttpClient httpclient = new DefaultHttpClient(mgr);
I then have a try...finally block and call httpPost.reset() after every post.
For some reason, I see the program holding 110 ESTABLISHED HTTP connections to my server and 235 connections in CLOSE_WAIT (not TIME_WAIT).
What am I doing wrong? Is there a bug around this? The maximum number of connections should be 20, or am I mistaken?
thanks,
Dean
Okay, never mind... someone was creating quite a few DefaultHttpClients elsewhere in the code and I had missed that. It seems to be working now, except that it keeps creating new sockets over and over for the same host (different URLs on the same host), resulting in a performance nightmare of very slow throughput.
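Follow-up note for anyone who lands here with the same symptom: with a single shared PoolingClientConnectionManager, the usual reasons for fresh sockets on every request to the same host are the per-route limit, which defaults to 2, and response entities that are never fully consumed, so connections cannot be returned to the pool for reuse. Below is a rough sketch of that pattern against the HttpClient 4.2.x API; the SharedHttp class and post() helper are illustrative names, not from the original code.

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class SharedHttp {
    // one connection manager and one client, shared by the whole application
    private static final PoolingClientConnectionManager MGR = new PoolingClientConnectionManager();
    static {
        MGR.setMaxTotal(20);
        MGR.setDefaultMaxPerRoute(20); // the default is only 2 connections per route
    }
    public static final HttpClient CLIENT = new DefaultHttpClient(MGR);

    public static String post(String url) throws Exception {
        HttpPost post = new HttpPost(url);
        try {
            HttpResponse response = CLIENT.execute(post);
            // fully consuming the entity lets the pooled connection be reused
            return EntityUtils.toString(response.getEntity());
        } finally {
            post.releaseConnection();
        }
    }
}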
I'm trying to close a RESTEasy client after a certain delay (e.g. 5 seconds), and it seems the current configuration I'm using is not working at all.
HttpClient httpClient = HttpClientBuilder.create()
        .setConnectionTimeToLive(5, TimeUnit.SECONDS)
        .setDefaultRequestConfig(RequestConfig.custom()
                .setConnectionRequestTimeout(5 * 1000)
                .setConnectTimeout(5 * 1000)
                .setSocketTimeout(5 * 1000)
                .build())
        .build();
ApacheHttpClient43Engine engine = new ApacheHttpClient43Engine(httpClient, localContext);
ResteasyClient client = new ResteasyClientBuilder().httpEngine(engine).build();
According to the documentation, ConnectionTimeToLive should close the connection whether there is payload in flight or not.
Here is the link:
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html-single/developing_web_services_applications/index#jax_rs_client
In my specific case there is sometimes some latency and the payload is sent in chunks (each arriving within the socketTimeout interval), so the connection is kept alive and the client can end up active for hours.
My main goal is to kill the client and release the connection, but I feel there is something I'm missing in the configuration.
I'm using WireMock to replicate this specific scenario by sending the payload in chunks:
.withChunkedDribbleDelay
Any clue about the configuration?
You may try using .withFixedDelay(60000) instead of .withChunkedDribbleDelay().
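For what it's worth, the two stubs exercise very different things against the client's timeouts; here is a rough WireMock sketch for comparison (the /slow and /fixed-delay paths, port, and body are made up): withChunkedDribbleDelay keeps trickling bytes, so each chunk lands inside the 5-second socketTimeout and the connection stays alive, while withFixedDelay withholds the whole response and lets the socket timeout actually fire.

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class SlowStubExample {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089);
        server.start();
        configureFor("localhost", 8089);

        // Dribbles the body out in 20 chunks spread over 60 seconds: every chunk
        // arrives well inside a 5 s socketTimeout, so the client stays connected.
        stubFor(get(urlEqualTo("/slow"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("some fairly large payload ...")
                        .withChunkedDribbleDelay(20, 60000)));

        // Withholds the entire response for 60 seconds: now the 5 s socketTimeout
        // on the client side actually gets a chance to fire.
        stubFor(get(urlEqualTo("/fixed-delay"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("some fairly large payload ...")
                        .withFixedDelay(60000)));
    }
}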
How can a PSGI application be served with many concurrent connections? I have tried event-based and preforking web servers, but the number of concurrent connections seems to be limited by the number of worker processes. I've heard that Node.js, for instance, scales to several thousand parallel connections; can you achieve something similar in Perl?
Here is a sample application that keeps connections open indefinitely. The point is not to have infinite connections, but to keep connections open long enough to hit connection limits:
my $app = sub {
    my $env = shift;
    return sub {
        my $responder = shift;
        my $writer = $responder->([ '200', [ 'Content-Type' => 'text/plain' ] ]);
        my $counter = 0;
        while (1) {
            $writer->write(++$counter . "\n");
            sleep 1;    # or a non-blocking sleep such as Coro::AnyEvent::sleep
        }
        $writer->close;    # never reached, but kept for completeness
    };
};
I don't think you're supposed to have infinite loops inside apps; I think you're supposed to set up a recurring timer instead, and in that timer notify/message/write. See Plack::App::WebSocket - WebSocket server as a PSGI application and Re^4: real-time output from Mojolicious WebSockets?
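To make the timer idea concrete, here is a minimal sketch assuming an event-driven PSGI server such as Twiggy (plackup -s Twiggy app.psgi); the %timers bookkeeping and the one-second interval are my own illustrative choices, not taken from the linked examples.

use strict;
use warnings;
use AnyEvent;

my %timers;    # keep the timer guards alive, one per open connection

my $app = sub {
    my $env = shift;
    return sub {
        my $responder = shift;
        my $writer    = $responder->([ '200', [ 'Content-Type' => 'text/plain' ] ]);
        my $counter   = 0;
        my $id        = "$writer";

        # A recurring timer writes once a second without blocking the event
        # loop, so one process can hold thousands of connections open.
        $timers{$id} = AnyEvent->timer(
            after    => 1,
            interval => 1,
            cb       => sub {
                eval { $writer->write(++$counter . "\n"); 1 } or do {
                    # the client went away: drop the timer and the writer
                    delete $timers{$id};
                    eval { $writer->close };
                };
            },
        );
    };
};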
While I have not yet tried it, I came across this question whilst searching for a solution to a problem I faced when using a socket server to report on the progress of long-running jobs. Initially I was thinking of an approach along the lines of ParallelUserAgent, except as a server and not a client.
I returned to the problem a few days later, after realising that Net::WebSocket::Server blocks new connection requests if a long-running block of code sits inside the new-connection handler callback.
My next approach will be to split the long-running functionality out into a newly spawned shell process and use a DB to track its progress, which can then be read as required within the server without lengthy blocking.
Thought I'd throw up my approach in case it helps anyone walking a similar path.
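A rough sketch of that split, in case it is useful to anyone else (the script path and function name are illustrative): the connection handler records the job, forks a detached worker, and returns immediately, so the accept loop never blocks.

use strict;
use warnings;
use POSIX qw(setsid);

# Called from the connection handler: returns almost immediately.
sub start_background_job {
    my ($job_id) = @_;
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: detach from the server and run the long job; it reports
        # progress into the DB, which the server reads on demand
        setsid();
        exec('/usr/local/bin/long_job.pl', $job_id)
            or die "exec failed: $!";
    }
    return $pid;    # parent carries on serving connections
}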
I've been having some problems with the code below, which I've pieced together. All the events work as advertised; however, when a client drops offline without first disconnecting, the close event doesn't get called right away. If you give it a minute or so, it will eventually get called. Also, I find that if I continue to send data to the client, it picks up the close event faster, but never right away. Lastly, if the client gracefully disconnects, the end event is called just fine.
I understand this is related to the other listener events like upgrade and ondata.
I should also state that the client is an embedded device.
client http request:
GET /demo HTTP/1.1\r\n
Host: example.com\r\n
Upgrade: Websocket\r\n
Connection: Upgrade\r\n\r\n
//nodejs server (I'm using version 6.6)
var http = require('http');
var net = require('net');
var sys = require("util");
var srv = http.createServer(function (req, res) {
});

srv.on('upgrade', function (req, socket, upgradeHead) {
    socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
                 'Upgrade: WebSocket\r\n' +
                 'Connection: Upgrade\r\n' +
                 '\r\n\r\n');
    sys.puts('upgraded');

    socket.ondata = function (data, start, end) {
        socket.write(data.toString('utf8', start, end), 'utf8'); // echo back
    };

    socket.addListener('end', function () {
        sys.puts('end'); // works fine
    });

    socket.addListener('close', function () {
        sys.puts('close'); // eventually gets here
    });
});

srv.listen(3400);
Can anyone suggest a solution to pick up an immediate close event? I am trying to keep this simple, without the use of modules. Thanks in advance.
The close event will be called once the TCP socket connection is closed by one end or the other, with the occasional complication of rare cases where the system does not realise that the socket has already been closed. Since a WebSocket starts out as an HTTP request, the server might simply keep the connection alive until the socket times out, and that is where the delay comes from.
In your case you are trying to perform the handshake and then send data back and forth, but WebSockets are a more involved process than that.
The handshake process requires a security procedure to validate both ends (server and client), and it is expressed as HTTP-compatible headers. Different draft versions supported by different platforms and browsers implement it in different ways, so your implementation needs to take this into account as well and follow the official WebSocket specification for the versions you need to support.
Sending and receiving data via WebSockets is also not just plain strings. Data sent over the WebSocket protocol goes through a data-framing layer, which adds a header to each message you send. This header carries details about the message you are sending: masking (from client to server), length, and many other things. The framing again depends on the WebSocket version, so implementations will vary slightly.
I would encourage you to use an existing library, as they already implement everything you need in a clean manner and have been used extensively in commercial projects.
As your client is an embedded platform, and the server I assume is Node.js as well, it is easy to use the same library on both ends.
The best fit here would be ws: actual, pure WebSockets.
Socket.IO is not a good fit for your case, as it is a much more complex and heavier library that supports a list of protocols with fallbacks and adds abstractions that might not be what you are looking for.
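To show the kind of thing ws gives you for the close-detection problem specifically, here is a minimal echo server with a ping/pong heartbeat (the port and the 10-second interval are arbitrary choices): a client that stops answering pings is terminated, which fires the close event right away instead of waiting for TCP to notice the dead peer.

// assumes `npm install ws`
var WebSocket = require('ws');

var wss = new WebSocket.Server({ port: 3400 });

wss.on('connection', function (ws) {
    ws.isAlive = true;
    ws.on('pong', function () { ws.isAlive = true; });     // client answered our ping
    ws.on('message', function (data) { ws.send(data); });  // echo back
    ws.on('close', function () { console.log('close'); });
});

// Every 10 seconds, drop any client that failed to answer the previous ping.
var interval = setInterval(function () {
    wss.clients.forEach(function (ws) {
        if (!ws.isAlive) { return ws.terminate(); }         // 'close' fires immediately
        ws.isAlive = false;
        ws.ping();
    });
}, 10000);

wss.on('close', function () { clearInterval(interval); });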
I'm helping debug a friend's site, which has been reported to have a long connection time.
When inspecting it with Fiddler, I saw that the ClientDoneRequest and ClientConnected timings are quite strange:
URI requested : /
ACTUAL PERFORMANCE
--------------
ClientConnected: 11:40:07.859
ClientBeginRequest: 11:40:33.687
ClientDoneRequest: 11:40:33.687
Gateway Determination: 0ms
DNS Lookup: 0ms
TCP/IP Connect: 65ms
HTTPS Handshake: 0ms
ServerConnected: 11:40:33.750
FiddlerBeginRequest: 11:40:33.750
ServerGotRequest: 11:40:33.750
ServerBeginResponse: 11:40:33.687
ServerDoneResponse: 11:40:44.031
ClientBeginResponse: 11:40:44.031
ClientDoneResponse: 11:40:44.031
Overall Elapsed: 00:00:10.3437500
As you can see, ClientDoneRequest - ClientConnected is roughly 26 seconds...
I have checked around but have no idea what leads to this problem.
Can somebody point me in the right direction, please?
Thanks.
P.S.: Fiddler version 2.3.0.0
http://groups.google.com/group/httpfiddler/browse_thread/thread/cd325dea517acc1d
That's entirely expected in cases where the client's request was sent on a reused client socket. ClientConnected refers to the connection time of the socket connection from the browser to Fiddler. Because those socket connections may be reused, you can often see cases where ClientConnected is even minutes earlier than ClientBeginRequest, because the socket was originally connected for, say, request #1, and then later reused for, say, request #12 a few seconds later, then request #20 about 20 seconds later, and later request #35 nearly a minute later, etc.
By default, a client socket is kept alive if it is reused within 30 seconds (pref named "fiddler.network.timeouts.clientpipe.receive.reuse") of the previous request.
Just stumbled upon this question, and then this related web page that described what all the timing entries mean:
http://fiddler.wikidot.com/timers
I'm writing an internal service that needs to touch a mod_perl2 instance for a long-running process. The job is fired from an HTTP POST, and the mod_perl handler picks it up and does the work. It could take a long time, and is ready to be handled asynchronously, so I was hoping I could terminate the HTTP connection while it is running.
PHP has a function, ignore_user_abort(), that when combined with the right headers can close the HTTP connection early while leaving the process running (this technique is mentioned here on SO a few times).
Does Perl have an equivalent? I haven't been able to find one yet.
Ok, I figured it out.
Mod_perl has the 'opposite' problem to PHP here. By default, mod_perl processes are left running even if the connection is aborted, whereas PHP by default terminates the script.
The Practical mod_perl book says how to deal with aborted connections.
(BTW, for the purposes of this specific problem, a job queue was lower on the list than a 'disconnecting' http process)
use Apache2::Const -compile => qw(CONN_CLOSE);

# set up headers
$r->content_type('text/html');
my $s = some_sub_returns_string();

# tell Apache to close the connection after this response
$r->connection->keepalive(Apache2::Const::CONN_CLOSE);
$r->headers_out()->{'Content-Length'} = length($s);
$r->print($s);
$r->rflush();

#
# !!! at this point, the connection will close to the client
#

# do long-running stuff
do_long_running_sub();
You may want to look at using a job queue for this. Here is one provided by Zend that will let you start background processing jobs. There should be a number of these to choose from for PHP and Perl.
Here's another thread that talks about this problem, and an article on some PHP options. I'm not a Perl monk, so I'll leave suggestions on those tools to others.
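Not the Zend product, but to give a feel for the shape of a job queue in Perl, here is a minimal sketch using only DBI with SQLite; the jobs table, column names, and do_long_running_sub() call are illustrative. The request handler just enqueues a row and returns, while a separate worker process polls the table and does the slow work.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '', { RaiseError => 1 });
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS jobs (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT,
        status  TEXT DEFAULT 'pending'
    )
});

# In the HTTP handler: enqueue and respond right away.
sub enqueue_job {
    my ($payload) = @_;
    $dbh->do('INSERT INTO jobs (payload) VALUES (?)', undef, $payload);
    return $dbh->last_insert_id(undef, undef, 'jobs', 'id');
}

# In a separate worker process: poll for pending jobs and run them one by one.
sub run_worker {
    while (1) {
        my $job = $dbh->selectrow_hashref(
            q{SELECT id, payload FROM jobs WHERE status = 'pending' LIMIT 1});
        if ($job) {
            $dbh->do(q{UPDATE jobs SET status = 'running' WHERE id = ?}, undef, $job->{id});
            do_long_running_sub($job->{payload});
            $dbh->do(q{UPDATE jobs SET status = 'done' WHERE id = ?}, undef, $job->{id});
        }
        else {
            sleep 1;    # nothing queued, back off briefly
        }
    }
}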