How are read and write socket errors defined in the wrk HTTP benchmarking tool?

I am using the wrk HTTP benchmarking tool to test a server, and I am getting READ and WRITE errors as well as CONNECTION and TIMEOUT errors.
What I understand is:
CONNECTION errors are caused by the refusal of a TCP connection, which could involve every element in the connection chain (Client,
ISP and Server).
TIMEOUT errors are caused by the host failing to respond to a
request within a certain time.
But what about READ and WRITE errors?
I would really appreciate it if someone could point me in the direction of a good resource.

So what I understood from this code in the wrk repository is that:
WRITE errors happen when wrk attempts to write to a connection but the write fails, e.g. because the socket was closed on the server side.
READ errors happen when wrk attempts to read from a connection but the read fails, e.g. because the socket was closed on the server side.
I'd be happy if anybody can confirm or refute that.
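For what it's worth, the situation can be reproduced outside wrk. The sketch below is not wrk's code (wrk is written in C and, per the referenced source, bumps its read or write error counter whenever the socket read or write call fails); it is a standalone Java illustration, with a made-up request payload, of a write failing because the peer has already closed the socket:

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PeerCloseDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {            // OS-assigned port
            // "Server" thread: accept one connection and close it immediately,
            // simulating a server that drops the socket mid-benchmark.
            Thread acceptor = new Thread(() -> {
                try {
                    server.accept().close();
                } catch (IOException ignored) {
                }
            });
            acceptor.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                acceptor.join();                                     // peer has closed by now
                OutputStream out = client.getOutputStream();
                byte[] request = "GET / HTTP/1.1\r\nHost: x\r\n\r\n".getBytes();
                // The first write may still "succeed" (buffered locally); once the
                // peer's close/RST is seen, further writes fail - the situation a
                // benchmarking client would count as a write error.
                for (int i = 0; i < 100; i++) {
                    out.write(request);
                    out.flush();
                }
            } catch (IOException e) {
                System.out.println("write failed after peer closed the socket: " + e);
            }
        }
    }
}

A read on a connection that the peer has reset fails in much the same way, which is essentially the READ error case.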

Related

Vert.x - How to know which connection was closed

I have a Vert.x app that gets HTTP requests as a Server and later down the road sends the data (as an HTTP request as well) as a Client to several other servers (more than one Client exists).
I see in the logs that sometimes I get io.vertx.core.VertxException: Connection was closed
but with no other info.
How can I know which connection was the one that was actually closed? I have more than one connection active.
I tried to add an exceptionHandler to the HttpServer and to the HttpClientRequest, but neither of them was ever called.
The io.vertx.core.VertxException: Connection was closed can be triggered for both client connections and server connections.
You'd get these errors on your HTTP client connection if the remote server closed the connection before completing the response, and you'd be able to capture them by setting the appropriate handlers on the client request (with Vert.x 4, you'd do something like ...send().onFailure(err -> /* handle the failure */)), which I believe you already do.
You'd get these errors for the HTTP server connection if the remote client disconnects, either before your server completed the response, or - if the connection has keep-alive enabled (which is the default for HTTP/1.1 connections) - even after the response was sent.
In case the client closed the server connection before the response was fully sent, you should be able to capture and handle these errors in the HttpServer.exceptionHandler(), as I'm sure you already do.
In case the client closed the server connection after the response was fully sent, while the connection is in a keep-alive state, then there is no HttpServerRequest (or RoutingContext, if you are using vertx-web, as you should) in which the exception happens, so Vert.x just disregards the error (see here for the code).
So why do you still see those errors in the log? It could be various things, because that exception is also used to handle EventBus connections and all kinds of internal network streams managed by Vert.x servers - and all without a stack trace (the actual exception instance being thrown is created statically here), so Vert.x kind of sucks in that way.
My recommendation: make sure you attach error handlers to everything (pay attention to websocket connections or HTTP response streams, if you use them), and if you still get those errors in the logs - maybe you can just ignore them, as the commenter suggested.
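As a rough Vert.x 4 sketch of the above (the ports, the upstream host and the use of WebClient are placeholders, not details from the question), failure handlers can be attached on both sides like this:

import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;

public class ConnectionHandlers {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Server side: exceptionHandler reports errors on connections that are still
        // tied to an in-flight request; keep-alive connections closed between
        // requests will not show up here, as explained above.
        vertx.createHttpServer()
                .exceptionHandler(err -> System.err.println("server connection error: " + err))
                .requestHandler(req -> req.response().end("ok"))
                .listen(8080);

        // Client side: onFailure fires if the remote server closes the connection
        // before the response has completed.
        WebClient client = WebClient.create(vertx);
        client.get(8081, "upstream.example", "/data")   // hypothetical upstream
                .send()
                .onSuccess(resp -> System.out.println("got " + resp.statusCode()))
                .onFailure(err -> System.err.println("client request failed: " + err));
    }
}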

Camel Netty 2.11.2 component stale connection issue, not serving any requests

I am using Camel Netty for full duplex communication over TCP socket.
My application is using the following parameters in the route.
<inOut uri="netty:tcp://{{IP-Port}}?textline=true&sync=true&decoderMaxLineLength=1000000&autoAppendDelimiter=false&disconnect=false&producerPoolMaxActive=-1&producerPoolMinEvictableIdle=120000&keepAlive=false&noReplyLogLevel=INFO&serverExceptionCaughtLogLevel=INFO&requestTimeout=2500" />
The netty component above receives requests from a preceding wiretap in the flow.
During the day, after about 8-10 hours, some of the connections show as ESTABLISHED but are not serving any requests. Even at the server end these connections show as ESTABLISHED, but there is no activity for hours.
When we looked at one connection closely, we found that the last request attempted (which was not received by the server) was writing the body to the endpoint and got an exception: org.apache.camel.processor.DefaultErrorHandler - Failed delivery for (MessageId: xxxxx on ExchangeId: ID-xxxx). On delivery attempt: 0
Since netty is being called from the wiretap, after this last request subsequent requests are not even entertained; they are blocked in the wiretap itself.
I am collecting tcpdump later tonight for more details though.
Questions:
1. Why is producerPoolMinEvictableIdle NOT kicking in to clear such stale connections?
2. How do we clear these stale connections automatically without having to bounce the application?
3. Is there a problem using wiretap?
Appreciate suggestions to resolve this issue. Please ask for any more details needed to answer and I shall be happy to share.
Note: camel-netty 2.11.2
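For reference, a rough equivalent of the flow described above in Camel's Java DSL - a wiretap feeding the netty producer with the same endpoint options - could look like the sketch below (the direct: route names are made up for illustration):

import org.apache.camel.builder.RouteBuilder;

public class TcpRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Incoming exchanges are wire-tapped (fire-and-forget copy), as described above.
        from("direct:incoming")
            .wireTap("direct:tcpCall");

        // The tapped copy is sent over the netty TCP endpoint with the same options
        // as the XML route shown above.
        from("direct:tcpCall")
            .to("netty:tcp://{{IP-Port}}"
                + "?textline=true&sync=true&decoderMaxLineLength=1000000"
                + "&autoAppendDelimiter=false&disconnect=false"
                + "&producerPoolMaxActive=-1&producerPoolMinEvictableIdle=120000"
                + "&keepAlive=false&noReplyLogLevel=INFO"
                + "&serverExceptionCaughtLogLevel=INFO&requestTimeout=2500");
    }
}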

Best practice for connecting to a TCP service with Play

I want to connect to a locally running TCP service from a web application I'm building using the Play framework and scala.
I'm not sure how to open this connection and send commands to it; should I be writing raw socket code? Also, how should I be managing the connection? Can I just open the connection once and send commands to it from each web request? What if the connection is closed or falls over?
Not sure Play will be of much help here; most of its modules focus on HTTP communication. You should have a look at Akka-IO though.
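As a rough sketch of what the Akka-IO approach looks like (shown with the classic Java actor API; the address and the "PING" payload are placeholders, and the same pattern applies from Play/Scala), a single actor can own the connection and handle failures and disconnects as messages:

import akka.actor.AbstractActor;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.io.Tcp;
import akka.io.TcpMessage;
import akka.util.ByteString;

import java.net.InetSocketAddress;

public class TcpServiceClient extends AbstractActor {

    private final InetSocketAddress remote =
            new InetSocketAddress("127.0.0.1", 9000); // assumed local TCP service

    @Override
    public void preStart() {
        // Ask the TCP manager to open the connection; the result comes back as a message.
        Tcp.get(getContext().getSystem()).manager()
                .tell(TcpMessage.connect(remote), getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Tcp.Connected.class, connected -> {
                    // Register ourselves to receive inbound data, then send a command.
                    getSender().tell(TcpMessage.register(getSelf()), getSelf());
                    getSender().tell(TcpMessage.write(ByteString.fromString("PING\n")), getSelf());
                })
                .match(Tcp.Received.class, received ->
                        System.out.println("reply: " + received.data().utf8String()))
                .match(Tcp.CommandFailed.class, failed ->
                        // Connection refused or write failed: decide whether to retry or stop.
                        getContext().stop(getSelf()))
                .match(Tcp.ConnectionClosed.class, closed ->
                        // The service closed or the connection dropped; a supervisor could
                        // restart this actor to reconnect.
                        getContext().stop(getSelf()))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("tcp-demo");
        system.actorOf(Props.create(TcpServiceClient.class, TcpServiceClient::new), "tcp-client");
    }
}

Keeping the connection inside one actor answers the management questions: web requests just send messages to this actor, and a supervisor can restart it if the connection is closed or falls over.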

My netty TCP/IP server stops listening a few hours after starting

I have written a TCP/IP server using Netty 4.0, running on a Linux machine and listening to small GPS tracking devices. I have been facing a weird problem: the server suddenly stops listening to them several hours after I start it. There is no error log I can see, and the server is still running. It looks like only the channel is not working. When I run a client to do a health check, the client socket is still alive and keeps sending packets to the server, but the server does not get them.
If you have any idea how to solve it, please tell me about it. It would be appreciated.
It is impossible to tell without more information. I would check different things, for example whether there was an OOM exception, or use telnet to see whether the server really refuses connections, etc. Also, jstack may show you whether there is some deadlock.
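As a first diagnostic step (not a fix), a handler like the Netty 4 sketch below, added at the front of the server pipeline, at least makes channel lifecycle events and otherwise-swallowed exceptions visible; Netty's built-in io.netty.handler.logging.LoggingHandler can serve the same purpose:

import io.netty.channel.ChannelHandler.Sharable;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

@Sharable
public class ConnectionLoggingHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        System.out.println("channel active: " + ctx.channel().remoteAddress());
        ctx.fireChannelActive();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        System.out.println("channel inactive: " + ctx.channel().remoteAddress());
        ctx.fireChannelInactive();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // An otherwise-unlogged OutOfMemoryError or decoder exception would show up here.
        cause.printStackTrace();
        ctx.fireExceptionCaught(cause);
    }
}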

Dealing with intermittent Winsock errors

My client app gets intermittent Winsock errors (10060, 10053) against one particular server we interface with. I have it retry the request that failed, but sometimes it fails repeatedly, and I give up after 5 retries. Would it be likely to help at all if I closed the socket and created a new one? (I know nothing about the server side.)
Ok, so the errors that you're getting are:
10060 - WSAETIMEDOUT
10053 - WSAECONNABORTED
When do you get them? What are you doing at the time?
You get a WSAECONNABORTED when the remote end of the connection, or possibly an intermediary router, resets the connection and sends an RST. This could simply be the remote end issuing a non-lingering close, or it could be the remote end aborting or crashing.
You can't continue doing anything with a connection that has had a WSAECONNABORTED on it as the connection has been aborted and is no more; it is a dead connection, it has passed on...
Context matters immensely as to why you might get a WSAETIMEDOUT error, and the context will dictate whether retrying is sensible or not.
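The Winsock calls themselves are C, but the retry pattern is language-agnostic. A sketch of it in Java terms (host, port, timeouts and backoff values are made up): after an aborted connection the old socket is always discarded and a fresh one is opened for each retry, which is the "close the socket and create a new one" idea from the question:

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class RetryingSender {
    private static final int MAX_ATTEMPTS = 5;

    public static void send(byte[] request) throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            // A fresh socket per attempt: the previous one is dead after an abort.
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("server.example", 9000), 10_000); // connect timeout
                socket.setSoTimeout(10_000); // read timeout, analogous to WSAETIMEDOUT
                OutputStream out = socket.getOutputStream();
                out.write(request);
                out.flush();
                return; // success
            } catch (IOException e) {
                last = e;
                Thread.sleep(1_000L * attempt); // simple backoff before retrying
            }
        }
        throw last; // give up after MAX_ATTEMPTS failures
    }
}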
One thing I would try is to do a tracert to your server.
Often when someone is connected through a VPN, you may see this error because your local and remote IP addresses overlap.
E.g. if your local IP address range is 192.168.1.xxx and the VPN remote range is also 192.168.1.xxx, you will see this error.