Vert.x - How to know which connection was closed - httpclient

I have a Vert.x app that receives HTTP requests as a server and later down the road sends the data (as HTTP requests as well) as a client to several other servers (more than one client exists).
I see in the logs that sometimes I get io.vertx.core.VertxException: Connection was closed
but with no other info.
How can I know which connection was the one that was actually closed? I have more than one connection active.
I tried adding an exceptionHandler to the HttpServer and to the HttpClientRequest, but neither was ever called.

The io.vertx.core.VertxException: Connection was closed can be triggered for both client connections and server connections.
You'd get these errors on your HTTP client connection if the remote server closed the connection before completing the response. You'd be able to capture them by setting the appropriate handlers on the client request (with Vert.x 4, something like ...send().onFailure(err -> /* handle the failure */)), which I believe you already do.
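For illustration, here is a minimal sketch of that pattern with the Vert.x 4 API. The class name, host, port, path and payload (backend-a.example.com:8443, /ingest) are placeholders for one of your downstream servers, not taken from the question; logging the target inside onFailure is one way to tell which client connection the "Connection was closed" came from.
import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpMethod;

public class ClientFailureLogging {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpClient client = vertx.createHttpClient();

        // "backend-a.example.com:8443" and "/ingest" stand in for one of the downstream servers.
        client.request(HttpMethod.POST, 8443, "backend-a.example.com", "/ingest")
              .compose(req -> req.send(Buffer.buffer("{\"k\":\"v\"}")))
              .onSuccess(resp -> System.out.println("backend-a answered " + resp.statusCode()))
              // onFailure also fires for "Connection was closed"; naming the target here
              // tells you which client connection failed.
              .onFailure(err -> System.err.println(
                      "request to backend-a.example.com:8443 failed: " + err));
    }
}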
You'd get these errors for the HTTP server connection if the remote client disconnects, either before your server completed the response, or - if the connection has keep-alive enabled (which is the default for HTTP/1.1 connections) - even after the response was sent.
In case the client closed the server connection before the response was fully sent, you should be able to capture and handle these errors in HttpServer.exceptionHandler(), as I'm sure you already do.
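A minimal sketch of that server-side wiring, assuming Vert.x 4 (the port, class name and log messages are illustrative):
import io.vertx.core.Vertx;

public class ServerFailureLogging {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        vertx.createHttpServer()
             // The HttpServer.exceptionHandler mentioned above.
             .exceptionHandler(err -> System.err.println("server connection error: " + err))
             .requestHandler(req -> {
                 // Per-request handler: the remote address tells you which connection broke.
                 req.exceptionHandler(err -> System.err.println(
                         "request from " + req.remoteAddress() + " failed: " + err));
                 req.response().end("ok");
             })
             .listen(8080);
    }
}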
In case the client closed the server connection after the response was fully sent, while it is in a keep-alive state, there is no HttpServerRequest (or RoutingContext, if you are using vertx-web, as you should) in which the exception happens, so Vert.x just disregards the error (see here for the code).
So why do you still see those errors in the log? It could be various things, because that exception is also used for EventBus connections and all kinds of internal network streams managed by Vert.x servers - and all without a stack trace (the actual exception instance being thrown is created statically here), so Vert.x kind of sucks in that way.
My recommendation: make sure you attach error handlers to everything (pay attention to WebSocket connections and HTTP response streams, if you use them), and if you still get those errors in the logs - maybe you can just ignore them, as the commenter suggested.
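As a sketch of that recommendation (again Vert.x 4; everything here - class name, port, log messages - is illustrative), handlers on a WebSocket and on the response stream might look like:
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpServerResponse;

public class StreamFailureLogging {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        vertx.createHttpServer()
             // Each accepted WebSocket gets its own exception handler.
             .webSocketHandler(ws -> ws.exceptionHandler(err -> System.err.println(
                     "websocket from " + ws.remoteAddress() + " failed: " + err)))
             .requestHandler(req -> {
                 HttpServerResponse resp = req.response();
                 // The response is a WriteStream; streamed bodies get their own handler too.
                 resp.exceptionHandler(err -> System.err.println(
                         "response to " + req.remoteAddress() + " failed: " + err));
                 resp.end("ok");
             })
             .listen(8080);
    }
}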

Related

HTTP request: End connection to proxy

I'm currently connected to a local proxy 127.0.0.1:5034 using a socket, and through it I send a connect request to another external proxy server by using:
CONNECT 192.168.1.2:5043 HTTP/1.1
Host:192.168.1.2
After that I receive the following message:
HTTP/1.1 200 OK
But the problem is that after that, when I try to end my connection with the remote proxy by sending this:
Connection: close
it seems like even the connection to the local proxy 127.0.0.1:5034 is closed, causing a socket error. I've searched for some time for a way to end just the CONNECT request, but can't seem to find one.
Is there a way to close the connection just for the remote proxy and keep the local proxy connection alive?
No, this is impossible. By design, CONNECT transforms the HTTP/1.1 connection into a tunnel, and requests inside that tunnel are opaque to 127.0.0.1:5034: it merely forwards bytes back and forth until the tunnel is closed. RFC 7231 § 4.3.6 says (emphasis mine):
A tunnel is closed when a tunnel intermediary detects that either
side has closed its connection: the intermediary MUST attempt to send
any outstanding data that came from the closed side to the other
side, close both connections, and then discard any remaining data
left undelivered.
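To make that concrete, here is a minimal raw-socket sketch (plain java.net.Socket; the addresses are the ones from the question, the class name is illustrative) showing that once CONNECT succeeds, everything you write is opaque payload, and the only way to end the tunnel is to close the socket, which takes down both legs:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ConnectTunnel {
    public static void main(String[] args) throws Exception {
        try (Socket proxy = new Socket("127.0.0.1", 5034)) {
            OutputStream out = proxy.getOutputStream();
            out.write(("CONNECT 192.168.1.2:5043 HTTP/1.1\r\n"
                     + "Host: 192.168.1.2:5043\r\n"
                     + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Read the proxy's reply; expect an "HTTP/1.1 200 ..." status line.
            InputStream in = proxy.getInputStream();
            byte[] buf = new byte[4096];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, StandardCharsets.US_ASCII));

            // From this point every byte written is forwarded verbatim to 192.168.1.2:5043.
            // A "Connection: close" line is just payload inside the tunnel; there is no
            // in-band way to ask the proxy to end only the remote leg.
        } // Closing the socket is the only way out, and per the RFC it closes both legs.
    }
}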

Camel Netty 2.11.2 component stale connection issue, not serving any requests

I am using Camel Netty for full duplex communication over TCP socket.
My application is using the following parameters in the route.
<inOut uri="netty:tcp://{{IP-Port}}?
textline=true&sync=true&decoderMaxLineLength=1000000&autoAppendDelimiter=false&disconnect=false&producerPoolMaxActive=-1&producerPoolMinEvictableIdle=120000&keepAlive=false&noReplyLogLevel=INFO&serverExceptionCaughtLogLevel=INFO&requestTimeout=2500" />
The netty component above receives requests from a preceding wiretap in the flow.
During the day, after about 8-10 hours, some of the connections show as ESTABLISHED but are not serving any requests. Even at the server end, these connections show as ESTABLISHED, but there is no activity for hours.
When we looked at one connection closely, we found that the last attempted request (never received by the server) was writing its body to the endpoint and got an exception: org.apache.camel.processor.DefaultErrorHandler - Failed delivery for (MessageId: xxxxx on ExchangeId: ID-xxxx). On delivery attempt: 0
Since Netty is being called from a wiretap, after this last request, subsequent requests are not even entertained; they are blocked in the wiretap itself.
I am collecting tcpdump later tonight for more details though.
Questions:
1. Why is producerPoolMinEvictableIdle NOT kicking in to clear such stale connections?
2. How do we clear these stale connections automatically without having to bounce the application?
3. Is there a problem using wiretap?
Appreciate suggestions to resolve this issue. Please ask for any more details needed to answer and I shall be happy to share.
Note: camel-netty 2.11.2

Gatling check for java.io.IOException: Remotely closed

While doing a test in gatling I get the following error
java.io.IOException: Remotely closed
which is expected (the server cuts the connection). How do I mark the test as successful, or check for that exception?
This exception means that the server closed the connection when the client (Gatling) was trying to write on it.
This might be an indication that you have to tune your keep-alive timeout so it does NOT match typical user think time, but such events will always happen.
But then, web browsers retry sending the request in case of such a failure.
Gatling can do that too, but it's disabled for now (will be enabled by default in 2.1.6). Until then, you can change the maxRetry value in gatling.conf.

Mule ESB CE 3.5.0 TCP Reconnection Strategies

I am working with Mule ESB CE 3.5.0 and am seeing what I believe is a resource leak on the TCP connections. I am hooking up VisualVM and checking the memory. I see that it increases over time without ever decreasing.
My scenario is that I have messages being sent to Mule, Mule does its thing, and then dispatches to a remote TCP endpoint (on the same box, usually). What I did was not start up the program that would receive a message from Mule's TCP outbound endpoint. So there is nothing listening for Mule's dispatched message.
I configure my TCP connectors as following:
<tcp:connector name="TcpConnector" keepAlive="true" keepSendSocketOpen="true" sendTcpNoDelay="true" reuseAddress="true">
<reconnect-forever frequency="2000" />
<tcp:xml-protocol />
</tcp:connector>
<tcp:endpoint name="TcpEndpoint1" responseTimeout="3000" connector-ref="TcpConnector" host="${myHost}" port="${myPort}" exchange-pattern="one-way" />
My questions are:
When a flow fails to send to the TCP outbound endpoint, what happens to the message? Is the message kept in memory somewhere and once the TCP connector establishes connections to the remote endpoint, do all the accumulated messages burst through and get dispatched?
When the reconnection strategy is blocking, I assume it is a dispatcher thread that tries to establish the connection. If we have more message to dispatch, then are there more dispatcher threads that are tied up to attempting the reconnection? What happens if it is non-blocking?
Thanks!
Edit:
If I also understand the threading documentation correctly, does that mean that if I have the default threading profile set to poolExhaustedAction="RUN" and all the dispatcher threads block waiting for a connection, then eventually the flow threads, and then the receiver threads, will also block trying to establish the connection? When the remote application begins listening again, all the backlogged messages from the blocked threads will burst through.
So if the flow receives transient data, it should be configured with non-blocking reconnection, and since it is acceptable to throw away the messages (in my use case), we can make do with the exception that will be thrown.
I would point you to the documentation:
Non-Blocking Reconnection
By default, a reconnection strategy will block Mule application
message processing until it is able to connect/reconnect. When you
enable non-blocking reconnection, the application does not need to
wait for all endpoints to re-connect before it restarts. Furthermore,
if a connection is lost, the reconnection takes place on a thread
separate from the application thread. Note that such behavior may or
may not be desirable, depending on your application needs.
With blocking reconnection strategies, the dispatcher will get blocked waiting for an available connection. The messages are not technically kept anywhere; their flow is just stopped.
Regarding the second question, it varies from transport to transport. In this particular case, given that TCP is a connection-per-request transport, different dispatchers will try to get different sockets from the pool of connections.
With non-blocking strategies you will get an exception. You can probably test this easily.

Is TCP Reset (RST) two way?

I have a client-server (Java) application using persistent TCP connections, but sometimes the server gets a java.io.IOException: Connection reset by peer when trying to write on the socket; however, I don't see any error in the client log.
This RST is probably caused by an intermediate proxy/router, but if that's the case, should this be seen on the client as well?
If the RST is sent by the client, it can be seen on the client using a packet sniffer such as Wireshark. However, it won't show up in any user-level socket API, since it's sent by the OS in response to various erroneous inputs (such as connection attempts to a closed port).
If the RST is forged by something in the network (e.g. an intermediate proxy or router), then it is pretending to be the client in order to sever the connection. It can do so in one direction, or in both. In that case, the client might not see anything, except for an RST sent by the actual server when the client continues to send data on a connection it perceives as open while the server sees it as closed.
Try capturing the traffic on both the server and the client, see where the resets are coming from.
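If you want to reproduce the asymmetry locally before going to the packet captures, here is a small self-contained sketch (the class name is illustrative, the port is picked automatically, the buffer sizes are arbitrary, and the exact exception message varies by OS/JDK): the client aborts with an RST by enabling SO_LINGER with a zero timeout before close(), the server's next writes fail with a reset exception, and the client side logs nothing.
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class RstDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);   // any free local port

        Thread client = new Thread(() -> {
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
                s.setSoLinger(true, 0);              // close() now aborts with RST instead of FIN
            } catch (Exception e) {
                e.printStackTrace();                 // not reached: the aborting side sees no error
            }
        });
        client.start();

        try (Socket accepted = server.accept()) {
            client.join();
            Thread.sleep(200);                       // give the RST time to arrive
            OutputStream out = accepted.getOutputStream();
            out.write(new byte[1024]);               // the first write may only mark the socket dead
            out.flush();
            out.write(new byte[1024]);               // typically fails with a "Connection reset" error
            out.flush();
        } catch (java.io.IOException e) {
            System.out.println("server side: " + e); // e.g. java.net.SocketException: Connection reset by peer
        } finally {
            server.close();
        }
    }
}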