I'm working with a Camel flow that uses a Netty TCP socket consumer to receive messages from a client program (which is outside of my control). The client should be opening a socket, sending us one message, then closing the socket, but we've been seeing cases where instead of one message Camel is "splitting" the text stream into two parts and trying to process them separately.
So I'm trying to figure out: given that the same socket can be re-used for multiple Camel messages, but TCP sockets don't have a built-in concept of "frames" or a standard for message delimiters, how does Camel decide that a complete message has been received and is ready to process? I haven't been able to find a documented answer to this in the Netty component docs (https://camel.apache.org/components/3.15.x/netty-component.html), although maybe I'm missing something.
From playing around with a test script, it seems like one answer is "Camel assumes a message is complete and should be processed if it goes more than 1ms without receiving any input on the socket". Is this a correct statement, and if so, is this behavior documented anywhere? Is there any way to change or configure it? What I would really prefer is for Camel to wait for an ETX character (or a much longer timeout) before processing a message; is it possible to set that up?
Here's my test setup:
Camel flow:
from("netty:tcp://localhost:3003")
.log("Received: ${body}");
Python snippet:
import math
import socket
import time

# args comes from argparse elsewhere in the script and provides hostname, port and msg
DELAY_MS = 3

def send_msg(sock, msg):
    print("Sending message: <{}>".format(msg))
    # sendall() returns None on success and raises on failure
    if sock.sendall(msg.encode()) is not None:
        print("Message failed to send")
    time.sleep(DELAY_MS / 1000.0)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    print("Using DELAY_MS: {}".format(DELAY_MS))
    s.connect((args.hostname, args.port))
    # split the message roughly in half and send the two parts with a small delay in between
    cutoff = int(math.floor(len(args.msg) / 2))
    msg1 = args.msg[:cutoff]
    send_msg(s, msg1)
    msg2 = args.msg[cutoff:]
    send_msg(s, msg2)
    response = s.recv(1024)
except Exception as e:
    print(e)
finally:
    s.close()
I can see that with DELAY_MS=1 Camel logs one single message:
2022-02-21 16:54:40.689 INFO 19429 --- [erExecutorGroup] route1 : Received: a long string sent over the socket
But with DELAY_MS=2 it logs two separate messages:
2022-02-21 16:56:12.899 INFO 19429 --- [erExecutorGroup] route1 : Received: a long string sen
2022-02-21 16:56:12.899 INFO 19429 --- [erExecutorGroup] route1 : Received: t over the socket
After doing some more research, it seems like what I need to do is add a delimiter-based FrameDecoder to the decoders list.
Setting it up like this:
from("netty:tcp://localhost:3003?sync=true"
+ "&decoders=#frameDecoder,#stringDecoder"
+ "&encoders=#stringEncoder")
where frameDecoder is provided by
@Bean
ChannelHandlerFactory frameDecoder() {
    // frame incoming bytes on the ETX control character (0x03)
    ByteBuf[] ETX_DELIM = new ByteBuf[] { Unpooled.wrappedBuffer(new byte[] { (byte) 3 }) };
    return ChannelHandlerFactories.newDelimiterBasedFrameDecoder(1024, ETX_DELIM,
            false, "tcp");
}
seems to do the trick.
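In case it helps anyone reproducing this, here is a minimal sketch of the string codec beans the URI refers to. I'm just using Netty's StringDecoder/StringEncoder; the UTF-8 charset is an assumption for the example, not something required by the setup above:

@Bean
ChannelHandler stringDecoder() {
    // io.netty.handler.codec.string.StringDecoder is @Sharable, so a singleton bean is fine
    return new StringDecoder(CharsetUtil.UTF_8);
}

@Bean
ChannelHandler stringEncoder() {
    // io.netty.handler.codec.string.StringEncoder is also @Sharable
    return new StringEncoder(CharsetUtil.UTF_8);
}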
On the flip side, though, it seems like this will hang indefinitely (or until lower-level TCP timeouts kick in?) if an ETX is never received, and I can't find any way to set a timeout on the decoder, so I'd still welcome input if anyone knows how to do that.
I think the default "timeout" behavior I was seeing might have just been an artifact of Netty's read loop speed -- see: How does netty determine when a read is complete?
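One idea for the missing-ETX case that I have not verified: since the decoders list accepts handler factories, it might be possible to slot Netty's ReadTimeoutHandler in ahead of the frame decoder so the channel is simply closed after an idle period. This is a rough sketch only; the factory wiring and the 30-second value are my assumptions, and I don't know whether Camel accepts a non-decoder handler in that list:

@Bean
ChannelHandlerFactory readTimeoutHandler() {
    return new ChannelHandlerFactory() {
        @Override
        public ChannelHandler newChannelHandler() {
            // io.netty.handler.timeout.ReadTimeoutHandler is not sharable, so create a
            // fresh instance per channel; it raises a ReadTimeoutException and closes
            // the channel after 30 seconds without a read
            return new ReadTimeoutHandler(30);
        }

        @Override
        public void handlerAdded(ChannelHandlerContext ctx) {
        }

        @Override
        public void handlerRemoved(ChannelHandlerContext ctx) {
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            ctx.fireExceptionCaught(cause);
        }
    };
}

It would then be referenced first in the endpoint URI, e.g. decoders=#readTimeoutHandler,#frameDecoder,#stringDecoder. Treat this as a starting point rather than a known-good configuration.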
I'm new to Netty and want to create a TCP socket server that reads each client's message and replies back to the client immediately, before processing the request, i.e. a sort of acknowledgement sent to the client as soon as the message enters the overridden channelRead method of the ChannelInboundHandlerAdapter class.
Please guide me toward this objective.
I'm currently working from the basic Netty 4.1.4 echo server example, but I wanted the server to send an acknowledgement back to the client, so I updated the channelRead method as follows:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.write(msg);
    ChannelFuture cf = ctx.channel().write("FROM SERVER");
    System.out.println("Channelfuture is " + cf);
}
and the output obtained was as follows:
Channelfuture is DefaultChannelPromise@3f4ee9dd(failure: java.lang.UnsupportedOperationException: unsupported message type: String (expected: ByteBuf, FileRegion))
I understand from the error that it is expecting a ByteBuf, but how do I achieve that? Also, would this approach be able to send the acknowledgement to the client?
You can use String.getBytes(Charset) and Unpooled.wrappedBuffer(byte[]) to convert to ByteBuf.
ChannelFuture cf = ctx.channel()
.write(Unpooled.wrappedBuffer("FROM SERVER".getBytes(CharsetUtil.UTF_8)));
Also note that ctx.channel().write(...) may not be what you want; consider ctx.write(...) instead. The difference is that if your handler is a ChannelDuplexHandler, it would receive a write event when you call channel().write(). Using ctx instead of channel sends the write out from your handler's point in the pipeline rather than from the end of the pipeline, which is usually what you want.
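Putting it together, a minimal sketch of the handler with those two changes (this assumes the standard echo-server skeleton; remember that write() only queues the data, nothing goes on the wire until a flush, which the echo example does in channelReadComplete):

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // echo the client's bytes back (msg is a ByteBuf here)
    ctx.write(msg);
    // queue the acknowledgement as a ByteBuf; it is sent from this handler's
    // point in the pipeline, as described above
    ChannelFuture cf = ctx.write(
            Unpooled.wrappedBuffer("FROM SERVER".getBytes(CharsetUtil.UTF_8)));
    System.out.println("Channelfuture is " + cf);
}

@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    ctx.flush(); // actually send everything queued in channelRead
}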
Edit (2015-11-25 02:10):
My ejabberd version is 14.12 and my Erlang is 17, so this check is not useful: erlang:system_info(otp_release) on OTP 17 returns "17", which sorts before "R13B" in a plain string comparison, so the option is never added.
ejabberd_listener.erl
SockOpts2 =
    %% on OTP 17+ otp_release is "17", which sorts before "R13B",
    %% so this clause never adds {send_timeout_close, true}
    try erlang:system_info(otp_release) >= "R13B" of
        true -> [{send_timeout_close, true} | SockOpts];
        false -> SockOpts
    catch
        _:_ -> []
    end,
I added {send_timeout_close, true} manually to the listen options, and my problem seems to be solved: the socket is closed at the same time the send timeout fires, so trying to send the follow-up messages in the queue returns an {error,enotconn} response.
When the {'$gen_event', closed} message arrives, the c2s process terminates normally.
Edit (2015-11-24 03:40):
I may have found a way to reproduce this problem:
1. build a normal c2s connection with an XMPP client
2. cut the client's network with a tool such as clumsy (drop all TCP packets from the server)
3. keep sending large packets to the c2s process
At first, gen_tcp:send returns ok, until the send buffer fills up. Then gen_tcp:send returns {error,timeout} because the send buffer is full, and the process calls ejabberd_socket:close(Socket) to close the connection:
send_text(StateData, Text) when StateData#state.mgmt_state == active ->
    catch ?INFO_MSG("Send XML on stream = ~ts", [Text]),
    case catch (StateData#state.sockmod):send(StateData#state.socket, Text) of
        {'EXIT', _} ->
            (StateData#state.sockmod):close(StateData#state.socket),
            error;
        _ ->
            ok
    end;
But ejabberd_socket:close/1 seems to be an asynchronous call, so the c2s process goes on to handle the next message in its message queue, keeps calling gen_tcp:send/2, and waits for a send timeout.
But by this time ejabberd_receiver has already called gen_tcp:close(Socket), the socket is closed, and the pending gen_tcp:send/2 never returns. I have tried this several times and it reproduces 100% of the time.
Briefly: if I send packets to a client socket that cannot receive them and the send buffer is full, I get {error, timeout} after the send timeout. But if another process asynchronously closes the socket while I am waiting in gen_tcp:send/2 for that timeout, I never get a response.
I reproduced this with plain erl as well, and gen_tcp:send/2 never responded (cut the network, keep sending packets as in step 3, close asynchronously).
I want to know: is this a bug, or is it something I am doing wrong?
Original post below:
Generally in ejabberd, I route a message to the client's process, which sends it to the TCP socket via this function, and it works well most of the time.
Module ejabberd_c2s.erl
send_text(StateData, Text) when StateData#state.mgmt_state == active ->
    catch ?INFO_MSG("Send XML on stream = ~ts", [Text]),
    case catch (StateData#state.sockmod):send(StateData#state.socket, Text) of
        {'EXIT', _} ->
            (StateData#state.sockmod):close(StateData#state.socket),
            error;
        _ ->
            ok
    end;
But in some cases the c2s pid is blocked in gen_tcp:send, like this:
erlang:process_info(pid(0,8353,11)).
[{current_function,{prim_inet,send,3}},
{initial_call,{proc_lib,init_p,5}},
{status,waiting},
{message_queue_len,96},
{messages ...}
...
Most cases happen when the user's network is in a poor state; the receiver process then sends two messages to the c2s pid, and c2s should terminate the session or wait for stream resumption:
{'$gen_event',closed}
{'DOWN',#Ref<0.0.1201.250595>,process,<0.19617.245>,normal}
I printed the message queue of the c2s process, and the two messages are in the queue waiting to be handled. Unfortunately, the queue no longer moves, because the process blocked before handling them, as described above: it is stuck at prim_inet:send/3 while trying to do gen_tcp:send/2.
The queue grows very large after a few days, and ejabberd crashes when the process asks for more memory.
prim_inet:send/3 source:
send(S, Data, OptList) when is_port(S), is_list(OptList) ->
    ?DBG_FORMAT("prim_inet:send(~p, ~p)~n", [S,Data]),
    try erlang:port_command(S, Data, OptList) of
        false -> % Port busy and nosuspend option passed
            ?DBG_FORMAT("prim_inet:send() -> {error,busy}~n", []),
            {error,busy};
        true ->
            receive
                {inet_reply,S,Status} ->
                    ?DBG_FORMAT("prim_inet:send() -> ~p~n", [Status]),
                    Status
            end
    catch
        error:_Error ->
            ?DBG_FORMAT("prim_inet:send() -> {error,einval}~n", []),
            {error,einval}
    end.
It seems the port driver never replies with {inet_reply,S,Status} after erlang:port_command(S, Data, OptList), so the gen_tcp:send call blocks forever. Can anyone explain this?
It depends on the version of Erlang you are using. The option to time out on gen_tcp send is not used in old ejabberd versions because it was not available in Erlang at the time. Moreover, you have to use a very recent version of Erlang, as some bugs regarding those options were fixed in Erlang itself.
I've made a TCP server and a client, but I'm stuck on what is probably a very simple thing.
There are two functions, recv() and send(). recv() can return different values, such as SOCKET_ERROR (and others), that signal that the connection was lost or that something else happened.
In the server (which is threaded), a "Connecting..." message is sent when a client connects, followed by either "Connection successful" or "Connection failed" plus the error. In short, it can be either:
send(...) //Connecting...
...
send(...) //Connection successful
or:
send(...) //Connecting...
...
send(...) //Connection failed
...
send(...) //The error
How can I check if there is a message waiting to be received?
You can use the select(...) API on the socket to wait for up to N milliseconds and check whether a message is ready to be received, or whether a send can be executed without blocking.
I have a problem: when I implement a WebSocket server, the server can't send data to the client (Chrome 16). For example, to send the text "Hello", the server sends the frame "0x81 0x05 0x48 0x65 0x6c 0x6c 0x6f" to the client, but the browser never receives the data. Is this code wrong?
sub getSendDataNoMask {
    my $dataStr = "Hello";
    my @ret;
    push(@ret, pack("H*", "81"));  # 0x81: FIN bit set, opcode 0x1 (text frame)
    push(@ret, pack("H*", "05"));  # 0x05: mask bit clear, payload length 5
    push(@ret, $dataStr);
    return join("", @ret);
}
What error do you get from the Chrome JavaScript console?
You also didn't post your handshake code (which is the more likely place for a problem). Are you certain that the handshake completed successfully? In other words, did you get an onopen event in the browser?
var ws = new WebSocket("ws://myhost:6080/websocket");
ws.onopen = function (e) {
    console.log("connection opened");
};
ws.onmessage = function (e) {
    console.log("Got data: " + e.data);
};
If you didn't get an onopen event, then the handshake never finished successfully. If you are getting an onopen event, then I would try sending data in the opposite direction first, and make sure you can receive and decode frames in your Perl server before trying to send.