I have written Netty client code to send some processed data to multiple clients. After running for 3-4 hours it exhausts all sockets and no more connections are possible. When I check the socket states in the OS, a large number of sockets are in the TIME_WAIT state.
public class NettyClient {
private static LogHelper logger = new LogHelper(NettyClient.class);
private static EventLoopGroup workerGroup = new NioEventLoopGroup();
private static Bootstrap nettyClient = new Bootstrap()
.group(workerGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5000);
private URL url;
private RequestVo Req;
private ChannelFuture chFuture;
private Object ReportData;
private JAXBContext jbContext;
private static final int CHANNEL_READ_TIMEOUT = 5;
public NettyClient() {
// TODO Auto-generated constructor stub
}
public NettyClient(RequestVo Req, JAXBContext jbCtx,Object data) {
this.Req = Req;
this.ReportData = data;
this.jbContext = jbCtx;
}
public void sendRequest() {
logger.debug("In sendRequest()");
//ChannelFuture chFuture = null;
try {
this.url = new URL(Req.getPushAddress());
//add handlers
nettyClient.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) {
ch.pipeline()
.addLast("timeout",
new ReadTimeoutHandler(CHANNEL_READ_TIMEOUT, TimeUnit.SECONDS));
ch.pipeline()
.addLast("codec", new HttpClientCodec());
ch.pipeline()
.addLast("inbound",
new NettyClientInBoundHandler(Req, jbContext, ReportData));
}
});
//make a connection to the Client
int port = url.getPort() == -1? url.getDefaultPort():url.getPort();
chFuture = nettyClient.connect(url.getHost(), port);
chFuture.addListener(new NettyClientConnectionListener(this.Req.getRequestId()));
} catch (Exception e) {
logger.error("Exception: Failed to connect to Client ", e);
} finally {
}
}
}
Here are the relevant methods from the ChannelInboundHandler class:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception
{
Map<String, String> props = new HashMap<String, String>();
if(msg instanceof HttpResponse) {
logger.debug("channelRead()");
HttpResponse httpRes = (HttpResponse) msg;
HttpResponseStatus httpStatus = httpRes.status();
props.put(REQUEST_ID, this.Request.getRequestId());
props.put(CLIENT_RESPONSE_CODE, String.valueOf(httpStatus.code()));
JmsService.getInstance(DESTINATION).sendTextMessage(props, "");
logger.debug("channelRead() HttpResponse Code: " + httpStatus.code());
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception
{
Map<String, String> props = new HashMap<String, String>();
logger.error("exceptionCaught()", cause);
if(cause instanceof ReadTimeoutException) {
//If read-timeout, send back the response
props.put(REQUEST_ID, this.Request.getRequestId());
props.put(CLIENT_RESPONSE_CODE,
String.valueOf(HttpResponseStatus.REQUEST_TIMEOUT.code()));
JmsService.getInstance(DESTINATION).sendTextMessage(props, "");
ctx.close();
}
else {
logger.error("Exception: ", cause);
}
}
Any idea what is wrong in the code would greatly help me.
Thanks
I'm not familiar with netty, but I think I can explain part of your problem, and hopefully help you along the way:
When you make use of a port and then close it, the port will not automatically be available for use by other processes at once. Instead, it will go into the TIME_WAIT state for a certain period of time. For Windows, I believe this will be 240 seconds (four minutes).
I'd guess that your code is slowly using up all the available ports on your system, because ports are being released from the TIME_WAIT state more slowly than you are consuming new ones.
It's not entirely clear to me where the actual port numbers are coming from (are they auto-generated by url.getDefaultPort() perhaps?), but perhaps you can find some way to reuse them? If you can keep one or more open connections and somehow reuse these, then you might be able to decrease the frequency of requests for new ports enough for the closed ports to go out of their TIME_WAIT state.
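If connection reuse is an option, Netty itself ships a small channel-pool abstraction (io.netty.channel.pool, available from 4.0.28 onwards) that could replace the connect-per-request pattern. The following is only a minimal sketch of that idea, not the asker's code; the endpoint, the handler name and the listener body are placeholder assumptions:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.ChannelPool;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.http.HttpClientCodec;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

public class PooledNettyClient {
    public static void main(String[] args) {
        Bootstrap bootstrap = new Bootstrap()
                .group(new NioEventLoopGroup())
                .channel(NioSocketChannel.class)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .remoteAddress("push.example.com", 8080); // placeholder endpoint

        // one pool per remote endpoint; channelCreated runs once per physical
        // connection instead of once per request
        final ChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                ch.pipeline().addLast("codec", new HttpClientCodec());
            }
        });

        Future<Channel> acquire = pool.acquire();
        acquire.addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) {
                if (future.isSuccess()) {
                    Channel ch = future.getNow();
                    // write the request on 'ch' here, then hand the channel back
                    // to the pool instead of closing it, so the local port is reused
                    pool.release(ch);
                }
            }
        });
    }
}
If every push address is a different host, pooling won't help much, and the remaining lever is at the OS level (a shorter TIME_WAIT interval or a larger ephemeral port range).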
With UWP sockets on Windows 10, I'm seeing an annoying behavior with pending asynchronous write operations not being interrupted when CancelIOAsync or Dispose is called on the socket. Below is some sample code that demonstrates the issue.
Basically, the while loop runs for 2 iterations until the send socket buffer is full (the server-side connection doesn't read the data, on purpose, to demonstrate the problem). The next awaited StoreAsync() operation ends up waiting indefinitely; it is not interrupted by the closing of the socket.
Any ideas on how to interrupt the pending StoreAsync?
Thanks
Benoit.
public sealed partial class MainPage : Page
{
private StreamSocket socket;
private StreamSocket serverSocket;
public MainPage()
{
this.InitializeComponent();
StreamSocketListener listener = new StreamSocketListener();
listener.ConnectionReceived += OnConnection;
listener.Control.KeepAlive = false;
listener.BindServiceNameAsync("12345").GetResults();
}
async private void Connect_Click(object sender, RoutedEventArgs e)
{
socket = new StreamSocket();
socket.Control.KeepAlive = false;
await socket.ConnectAsync(new HostName("localhost"), "12345");
DataWriter writer = new DataWriter(socket.OutputStream);
try
{
while(true)
{
writer.WriteBytes(new byte[1000000]);
await writer.StoreAsync();
Debug.WriteLine("sent bytes");
}
}
catch(Exception ex)
{
Debug.WriteLine("sent failed : " + ex.ToString());
}
}
async private void Close_Click(object sender, RoutedEventArgs e)
{
//
// Closing the client connection with a pending write doesn't cancel the pending write.
//
Debug.WriteLine("Closing Client Connection");
await socket.CancelIOAsync();
socket.Dispose();
}
private void OnConnection(StreamSocketListener sender, StreamSocketListenerConnectionReceivedEventArgs args)
{
System.Diagnostics.Debug.WriteLine("Received connection");
serverSocket = args.Socket;
}
}
I've just started looking at netty for some projects and have been able to get some simple client and server examples running that use INET and unix domain sockets to send messages back and forth. I've also been able to send datagram packets over INET sockets. But I have a need to send datagram packets over UNIX domain sockets. Is this supported in netty? If so, could someone point me at documentation or an example? I suspect this is not supported given that the DatagramPacket explicitly takes InetSocketAddress. If not supported, would it be feasible to add this to netty?
Is this supported in netty?
Yes. Below is a simple example I wrote.
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.*;
import io.netty.channel.epoll.EpollDomainSocketChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerDomainSocketChannel;
import io.netty.channel.unix.DomainSocketAddress;
/**
* @author louyl
*/
public class App {
public static void main(String[] args) throws Exception {
String sockPath = "/tmp/echo.sock";
final ServerBootstrap bootstrap = new ServerBootstrap();
EventLoopGroup serverBossEventLoopGroup = new EpollEventLoopGroup();
EventLoopGroup serverWorkerEventLoopGroup = new EpollEventLoopGroup();
bootstrap.group(serverBossEventLoopGroup, serverWorkerEventLoopGroup)
.localAddress(new DomainSocketAddress(sockPath))
.channel(EpollServerDomainSocketChannel.class)
.childHandler(
new ChannelInitializer<Channel>() {
@Override
protected void initChannel(final Channel channel) throws Exception {
channel.pipeline().addLast(
new ChannelInboundHandlerAdapter() {
@Override
public void channelActive(final ChannelHandlerContext ctx) throws Exception {
final ByteBuf buff = ctx.alloc().buffer();
buff.writeBytes("This is a test".getBytes());
ctx.writeAndFlush(buff).addListeners(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
future.channel().close();
future.channel().parent().close();
}
});
}
}
);
}
}
);
final ChannelFuture serverFuture = bootstrap.bind().sync();
final Bootstrap bootstrapClient = new Bootstrap();
EventLoopGroup clientEventLoop = new EpollEventLoopGroup();
bootstrapClient.group(clientEventLoop)
.channel(EpollDomainSocketChannel.class)
.handler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(final Channel channel) throws Exception {
channel.pipeline().addLast(
new ChannelInboundHandlerAdapter() {
@Override
public void channelRead(final ChannelHandlerContext ctx, final Object msg) throws Exception {
final ByteBuf buff = (ByteBuf) msg;
try {
byte[] bytes = new byte[buff.readableBytes()];
buff.getBytes(0, bytes);
System.out.println(new String(bytes));
} finally {
buff.release();
}
ctx.close();
}
@Override
public void exceptionCaught(final ChannelHandlerContext ctx, final Throwable cause) throws Exception {
System.out.println("Error occur when reading from Unix domain socket: " + cause.getMessage());
ctx.close();
}
}
);
}
}
);
final ChannelFuture clientFuture = bootstrapClient.connect(new DomainSocketAddress(sockPath)).sync();
clientFuture.channel().closeFuture().sync();
serverFuture.channel().closeFuture().sync();
serverBossEventLoopGroup.shutdownGracefully();
serverWorkerEventLoopGroup.shutdownGracefully();
clientEventLoop.shutdownGracefully();
}
}
I've got an issue, for which I am unable to post full code (sorry), due to security reasons. The gist of my issue is that I have a ServerBootstrap, created as follows:
bossGroup = new NioEventLoopGroup();
workerGroup = new NioEventLoopGroup();
final ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, 3000));
//Adds the MQTT encoder and decoder
ch.pipeline().addLast("decoder", new MyMessageDecoder());
ch.pipeline().addLast("encoder", new MyMessageEncoder());
ch.pipeline().addLast(createMyHandler());
}
}).option(ChannelOption.SO_BACKLOG, 128).option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_KEEPALIVE, true);
// Bind and start to accept incoming connections.
channelFuture = b.bind(listenAddress, listenPort);
Here, createMyHandler() basically returns an extended implementation of ChannelInboundHandlerAdapter.
I also have a "client" listener, that listens for incoming connection requests, and is loaded as follows:
final String host = getHost();
final int port = getPort();
nioEventLoopGroup = new NioEventLoopGroup();
bootStrap = new Bootstrap();
bootStrap.group(nioEventLoopGroup);
bootStrap.channel(NioSocketChannel.class);
bootStrap.option(ChannelOption.SO_KEEPALIVE, true);
bootStrap.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addFirst("idleStateHandler", new IdleStateHandler(0, 0, getKeepAliveInterval()));
ch.pipeline().addAfter("idleStateHandler", "idleEventHandler", new MoquetteIdleTimeoutHandler());
ch.pipeline().addLast("decoder", new MyMessageDecoder());
ch.pipeline().addLast("encoder", new MyMessageEncoder());
ch.pipeline().addLast(MyClientHandler.this);
}
})
.option(ChannelOption.SO_REUSEADDR, true)
.option(ChannelOption.TCP_NODELAY, true);
// Start the client.
try {
channelFuture = bootStrap.connect(host, port).sync();
} catch (InterruptedException e) {
throw new MyException("Exception", e);
}
Where MyClientHandler is again a subclassed instance of ChannelInboundHandlerAdapter. Everything works fine: I get messages coming in to the "server" adapter, I process them, and I send them back on the same context, and vice versa for the "client" handler.
The problem happens when I have to (for some messages) proxy them from the server or client handler to the other connection. Again, I am very sorry for not being able to post much code, but the gist of it is that I'm calling from:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessage);
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
Now here's the problem: the bolded (client) writeAndFlush never actually writes the message bytes, and it doesn't throw any errors. The ChannelFuture reports false for all of success, cancelled, and done, and if I sync on it, it eventually times out for other reasons (a connection timeout set within my code).
I know I haven't posted all of my code, but I'm hoping that someone has some tips and/or pointers for how to isolate WHY it is not writing to the client context. I'm not a Netty expert by any stretch, and most of this code was written by someone else. Both handlers are subclasses of ChannelInboundHandlerAdapter.
Feel free to ask any questions if you have any.
***** EDIT *****
I tried to proxy the request back to a DIFFERENT context/channel (i.e., the client channel) using the following test code:
public void proxyPubRec(int messageId) throws MQTTException {
logger.log(logLevel, "proxying PUBREC to context: " + debugContext());
PubRecMessage pubRecMessage = new PubRecMessage();
pubRecMessage.setMessageID(messageId);
pubRecMessage.setRemainingLength(2);
logger.log(logLevel, "pipeline writable flag: " + ctx.pipeline().channel().isWritable());
MyMQTTEncoder encoder = new MyMQTTEncoder();
ByteBuf buff = null;
try {
buff = encoder.encode(pubRecMessage);
ctx.channel().writeAndFlush(buff);
} catch (Throwable t) {
logger.log(Level.SEVERE, "unable to encode PUBREC");
} finally {
if (buff != null) {
buff.release();
}
}
}
public class MyMQTTEncoder extends MQTTEncoder {
public ByteBuf encode(AbstractMessage msg) {
PooledByteBufAllocator allocator = new PooledByteBufAllocator();
ByteBuf buf = allocator.buffer();
try {
super.encode(ctx, msg, buf);
} catch (Throwable t) {
logger.log(Level.SEVERE, "unable to encode PUBREC, " + t.getMessage());
}
return buf;
}
}
But the line ctx.channel().writeAndFlush(buff) above is NOT writing to the other channel; any tips/tricks on debugging this sort of issue?
someOtherMessage has to be a ByteBuf.
So, take this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessage);
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
... and replace it with this:
serverHandler.channelRead(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof myProxyingMessage) {
if (ctx.channel().isActive()) {
ctx.channel().writeAndFlush(someOtherMessageAsByteBuf); // the same payload, converted to a ByteBuf
**getClientHandler().writeAndFlush(myProxyingMessage);**
}
}
}
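In other words, the payload handed to writeAndFlush() has to be something the pipeline can actually serialize: a ByteBuf, or a type handled by an outbound encoder in that channel's pipeline. Below is a minimal sketch of that idea, with a listener attached so a failed write reports its cause instead of silently leaving the future unfulfilled; clientChannel and payloadBytes are placeholders for the asker's own objects:
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public final class ProxyWriteSketch {
    private ProxyWriteSketch() {}

    public static void proxyToClient(Channel clientChannel, byte[] payloadBytes) {
        // wrap the already-encoded bytes so the tail of the pipeline can write them
        ByteBuf buf = Unpooled.wrappedBuffer(payloadBytes);
        clientChannel.writeAndFlush(buf).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (!future.isSuccess()) {
                    // surfaces the real reason the write never reached the wire
                    future.cause().printStackTrace();
                }
            }
        });
    }
}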
Actually, this turned out to be a threading issue. One of my threads was blocked/waiting while other threads were writing to the context and because of this, the writes were buffered and not sent, even with a flush. Problem solved!
Essentially, I put the first message code in a Runnable/Executor thread, which allowed it to run separately so that the second write/response was able to write to the context. There are still potentially some issues with this (in terms of message ordering), but this is not on topic for the original question. Thanks for all your help!
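A minimal sketch of that workaround, with the asker's own types hand-waved away: the proxied write is handed to a separate single-threaded executor so the handler thread is never blocked while the response on the original connection goes out (the class and field names below are illustrative):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ProxyingServerHandler extends ChannelInboundHandlerAdapter {

    private final Channel clientChannel;            // the other ("client") connection
    private final ExecutorService proxyExecutor =
            Executors.newSingleThreadExecutor();    // keeps proxied writes ordered

    public ProxyingServerHandler(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, final Object msg) {
        if (ctx.channel().isActive()) {
            // hand the proxy write off to its own thread so this handler thread
            // never blocks while the response is written back on ctx's channel
            proxyExecutor.execute(new Runnable() {
                @Override
                public void run() {
                    clientChannel.writeAndFlush(msg);
                }
            });
        }
        // ...the normal response for this connection is written here, as before...
    }
}
Note that writeAndFlush() itself is safe to call from any thread (Netty schedules it onto the channel's event loop); the executor's only job here is to keep the handler thread from blocking, as described above.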
Is there a way (and how) to know the status of a connection pool? Like, how many connections are being used, how many are available, ...
We are currently facing issues where the application cannot get a connection from the pool (ConnectionPoolTimeoutException: Timeout waiting for connection from pool) so to track down the cause we would like to log some pool stats each time a new connection is requested.
I have been browsing the Apache HttpClient API but did not find a way to get this information.
We use PoolingClientConnectionManager.
You can use the methods of the ConnPoolControl interface to control parameters of the internal pool.
You can get detailed information, both in total and per route, with the code below:
public static void main(String[] args) {
PoolingHttpClientConnectionManager connectionManager = HttpClientUtils.getConnectionManager();
System.out.println(createHttpInfo(connectionManager));
}
private static String createHttpInfo(PoolingHttpClientConnectionManager connectionManager) {
StringBuilder sb = new StringBuilder();
sb.append("=========================").append("\n");
sb.append("General Info:").append("\n");
sb.append("-------------------------").append("\n");
sb.append("MaxTotal: ").append(connectionManager.getMaxTotal()).append("\n");
sb.append("DefaultMaxPerRoute: ").append(connectionManager.getDefaultMaxPerRoute()).append("\n");
sb.append("ValidateAfterInactivity: ").append(connectionManager.getValidateAfterInactivity()).append("\n");
sb.append("=========================").append("\n");
PoolStats totalStats = connectionManager.getTotalStats();
sb.append(createPoolStatsInfo("Total Stats", totalStats));
Set<HttpRoute> routes = connectionManager.getRoutes();
if (routes != null) {
for (HttpRoute route : routes) {
sb.append(createRouteInfo(connectionManager, route));
}
}
return sb.toString();
}
private static String createRouteInfo(PoolingHttpClientConnectionManager connectionManager, HttpRoute route) {
PoolStats routeStats = connectionManager.getStats(route);
String info = createPoolStatsInfo(route.getTargetHost().toURI(), routeStats);
return info;
}
private static String createPoolStatsInfo(String title, PoolStats poolStats) {
StringBuilder sb = new StringBuilder();
sb.append(title + ":").append("\n");
sb.append("-------------------------").append("\n");
if (poolStats != null) {
sb.append("Available: ").append(poolStats.getAvailable()).append("\n");
sb.append("Leased: ").append(poolStats.getLeased()).append("\n");
sb.append("Max: ").append(poolStats.getMax()).append("\n");
sb.append("Pending: ").append(poolStats.getPending()).append("\n");
}
sb.append("=========================").append("\n");
return sb.toString();
}
Update (2019-01-07)
The connection manager is retrieved from a utility class I've created (you can create it differently):
public class HttpClientUtils {
private static final PoolingHttpClientConnectionManager connectionManager = createConnectionManager();
private static PoolingHttpClientConnectionManager createConnectionManager() {
try {
SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(
SSLContext.getDefault(),
new String[] {"TLSv1", "TLSv1.1", "TLSv1.2"},
null,
SSLConnectionSocketFactory.getDefaultHostnameVerifier());
Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
.register("http", PlainConnectionSocketFactory.INSTANCE)
.register("https", socketFactory)
.build();
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager(registry);
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
return cm;
} catch (NoSuchAlgorithmException | RuntimeException ex) {
Logger.getLogger(HttpClientUtils.class.getName()).log(Level.SEVERE, null, ex);
return null;
}
}
public static PoolingHttpClientConnectionManager getConnectionManager() {
return connectionManager;
}
}
I'm trying to create an application which is able to work even when the network is down.
The idea is to store data returned from RequestFactory on the localStorage, and to use localStorage when network isn't available.
My problem: I'm not sure exactly how to differentiate between server errors (5XX, 4XX, ...) and network errors.
(I assume that in both cases my Receiver.onFailure() would be called, but I still don't know how to identify this situation.)
Any help would be appreciated,
Thanks,
Gilad.
The response code when there is no internet connection is 0.
With RequestFactory, to identify that the request was unsuccessful because of the network, the response code has to be accessed. The RequestTransport seems like the best place to do this.
Here is a rough implementation of an OfflineAwareRequestTransport.
public class OfflineAwareRequestTransport extends DefaultRequestTransport {
private final EventBus eventBus;
private boolean online = true;
public OfflineAwareRequestTransport(EventBus eventBus) {
this.eventBus = eventBus;
}
@Override
public void send(final String payload, final TransportReceiver receiver) {
// super.send(payload, proxy);
RequestBuilder builder = createRequestBuilder();
configureRequestBuilder(builder);
builder.setRequestData(payload);
builder.setCallback(createRequestCallback(receiver, payload));
try {
builder.send();
} catch (RequestException e) {
// ignored in this rough implementation
}
}
protected static final int SC_OFFLINE = 0;
protected RequestCallback createRequestCallback(final TransportReceiver receiver,
final String payload) {
return new RequestCallback() {
public void onError(Request request, Throwable exception) {
receiver.onTransportFailure(new ServerFailure(exception.getMessage()));
}
public void onResponseReceived(Request request, Response response) {
if (Response.SC_OK == response.getStatusCode()) {
String text = response.getText();
setOnline(true);
receiver.onTransportSuccess(text);
} else if (response.getStatusCode() == SC_OFFLINE) {
setOnline(false);
boolean processedOk = processPayload(payload);
receiver.onTransportFailure(new ServerFailure("You are offline!", OfflineReceiver.name,
"", !processedOk));
} else {
setOnline(true);
String message = "Server Error " + response.getStatusCode() + " " + response.getText();
receiver.onTransportFailure(new ServerFailure(message));
}
}
};
}