Erlang socket and receive timeout - sockets

How do I set a receive timeout for a socket? I could not find it in the socket options man page.
My first solution to the problem was to add an after clause to the receive statement:
{ok, Listen} = gen_tcp:listen(Port, [.., {active, once}, ...]),
{ok, Socket} = gen_tcp:accept(Listen),
loop(Socket).

loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            inet:setopts(Socket, [{active, once}]),
            loop(Socket);
        {tcp_closed, Socket} ->
            closed;
        Other ->
            process_data(Other)
    after 1000 ->
        time_out
    end.
But the receive may never time out, because messages keep arriving from other processes. How can I set a timeout on the socket without spawning another process?

You can't specify a receive timeout if you are using active mode. If you need to control receive timeout behavior, switch the socket to passive mode, i.e. set {active, false} in the socket options, and then use gen_tcp:recv, which takes a receive timeout argument.
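For illustration, here is a minimal passive-mode sketch (handle_data/1 is a hypothetical placeholder for your own processing, and 1000 is the receive timeout in milliseconds):
loop(Socket) ->
    %% block for at most 1000 ms waiting for the next piece of data
    case gen_tcp:recv(Socket, 0, 1000) of
        {ok, Data} ->
            handle_data(Data),   %% placeholder for your own processing
            loop(Socket);
        {error, timeout} ->
            time_out;
        {error, closed} ->
            closed
    end.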
In addition, a lot of Erlang socket server designs use an Erlang process per client connection. You can see http://www.trapexit.org/Building_a_Non-blocking_TCP_server_using_OTP_principles and http://20bits.com/article/erlang-a-generalized-tcp-server for examples. The OTP provides a lot of great ways to build robust servers with Erlang; take advantage of it!

You can also use prim_inet:async_recv/3, which lets you receive TCP data with a timeout while still receiving other messages from different processes:
read(Socket) ->
    prim_inet:async_recv(Socket, 0, 1000),
    receive
        {inet_async, _, _, {ok, Msg}} ->
            io:format("message received ~p~n", [Msg]),
            read(Socket);
        {inet_async, _, _, {error, timeout}} ->
            io:format("timeout!"),
            catch gen_tcp:close(Socket);
        {fake, Msg} ->
            io:format("Message = ~p~n", [Msg]),
            read(Socket)
    end.

Related

ZMQ: Message gets lost in Dealer Router Dealer pattern implementation

I have a working setup where multiple clients send messages to multiple servers. Each message targets only one server. The client knows the IDs of all possible servers and only sends a message if such a server is actually connected. Each server connects to the socket on startup. There are multiple server workers which bind to an inproc router socket. The communication is always initiated by the client. The messages are sent asynchronously to each server.
This is achieved using the DEALER->ROUTER->DEALER pattern. My problem is that when the number of client & server workers increases, the "ack" sent by the server to the client (step #7 below) is never delivered to the client. Thus, the client is stuck waiting for the acknowledgement while the server is waiting for more messages from the client. Both systems hang and never come out of this condition unless restarted. Details of the configuration and communication flow are given below.
I've checked the system logs and nothing evident comes out of it. Any help or guidance to triage this further would be helpful.
At startup, the client connects as a dealer to the socket at the broker's IP:Port.
"requester, _ := zmq.NewSocket(zmq.DEALER)".
The dealers connect to the broker. The broker connects the frontend (client workers) to the backend (server workers). The frontend is bound to a TCP socket while the backend is bound as inproc.
// Frontend dealer workers
frontend, _ := zmq.NewSocket(zmq.DEALER)
defer frontend.Close()
// For workers local to the broker
backend, _ := zmq.NewSocket(zmq.DEALER)
defer backend.Close()
// Frontend should always use TCP
frontend.Bind("tcp://*:5559")
// Backend should always use inproc
backend.Bind("inproc://backend")
// Initialize Broker to transfer messages
poller := zmq.NewPoller()
poller.Add(frontend, zmq.POLLIN)
poller.Add(backend, zmq.POLLIN)
// Switching messages between sockets
for {
    sockets, _ := poller.Poll(-1)
    for _, socket := range sockets {
        switch s := socket.Socket; s {
        case frontend:
            for {
                msg, _ := s.RecvMessage(0)
                workerID := findWorker(msg[0]) // Get server workerID from message for which it is intended
                log.Println("Forwarding Message:", msg[1], "From Client:", msg[0], "To Worker:", workerID)
                if more, _ := s.GetRcvmore(); more {
                    backend.SendMessage(workerID, msg, zmq.SNDMORE)
                } else {
                    backend.SendMessage(workerID, msg)
                    break
                }
            }
        case backend:
            for {
                msg, _ := s.RecvMessage(0)
                // Register new workers as they come and go
                fmt.Println("Message from backend worker: ", msg)
                clientID := findClient(msg[0]) // Get client workerID from message for which it is intended
                log.Println("Returning Message:", msg[1], "From Worker:", msg[0], "To Client:", clientID)
                frontend.SendMessage(clientID, msg, zmq.SNDMORE)
            }
        }
    }
}
Once the connection is established,
The client sends a set of messages on the frontend socket. These messages contain metadata about all the messages to follow:
requester.SendMessage(msg)
Once these messages are sent, the client waits for an acknowledgement from the server:
reply, _ := requester.RecvMessage(0)
The router transfers these messages from the frontend to the backend workers based on the logic defined above.
The backend dealers process these messages and respond over the backend socket, asking for more messages.
The broker then transfers the message from the backend inproc socket to the frontend socket.
The client processes this message and sends the required messages to the server. The messages are sent as a group (batch) asynchronously.
Server receives and processes all of the messages sent by client
After processing all the messages, the server sends an "ack" back to the client to confirm all the messages are received
Once all the messages are sent by the client and processed by the server, the server sends a final message indicating that the transfer is complete.
The communication ends here
This works great when there is a limited set of workers and messages transferred. The implementation has multiple dealers (clients) sending messages to a router. The router in turn sends these messages to another set of dealers (servers) which process the respective messages. Each message contains the Client & Server Worker IDs for identification.
We have configured the following limits for the send & receive queues.
Broker HWM: 10000
Dealer HWM: 1000
Broker Linger Limit: 0
Some more findings:
This issue is prominent when the server processing (step 7 above) takes more than 10 minutes.
The client and server are running on different machines; both are Ubuntu 20 LTS with ZMQ version 4.3.2.
Environment
libzmq version (commit hash if unreleased): 4.3.2
OS: Ubuntu 20LTS
Eventually, it turned out to be a matter of configuring heartbeats for the ZMQ sockets. See the documentation here: http://api.zeromq.org/4-2:zmq-setsockopt
I configured the following parameters:
ZMQ_HANDSHAKE_IVL: Set maximum handshake interval
ZMQ_HEARTBEAT_IVL: Set interval between sending ZMTP heartbeats
ZMQ_HEARTBEAT_TIMEOUT: Set timeout for ZMTP heartbeats
Configure the above parameters appropriately to ensure that there is a constant liveness check between the client and server dealers. That way, even if one side is delayed in its processing, the other one doesn't time out abruptly.

erlang who close the tcp socket

http://erlangcentral.org/wiki/index.php/Building_a_Non-blocking_TCP_server_using_OTP_principles describes how to build a non-blocking TCP server, and I have a question about the inet_async message.
handle_info({inet_async, ListSock, Ref, Error}, #state{listener=ListSock, acceptor=Ref} = State) ->
    error_logger:error_msg("Error in socket acceptor: ~p.\n", [Error]),
    {stop, Error, State};
If Error = {error, close}, who closed the socket, the client or the server?
It depends. If you get that error, the socket may not have been opened in the first place. So if you try gen_tcp:send(Socket, "Message"), you will get an error saying that the connection is closed.
Other reasons that the connection closed could be that the listening socket timed out waiting on a connection, or that gen_tcp:close(Socket) was called before the attempt to send a message.
Also, you need to make sure you are connecting to the same port on which the server initially opened the listening socket. So to answer your question: either side could have closed the connection.
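As a small illustration (the port number and message here are arbitrary), sending on a socket that has already been closed locally reports the connection as closed in the same way:
{ok, Socket} = gen_tcp:connect("localhost", 12345, [binary]).
ok = gen_tcp:close(Socket).
%% the connection is gone, so the send reports it as closed
{error, closed} = gen_tcp:send(Socket, "Message").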

How to connect to socket by TCP

I have a simple Erlang server on OpenShift, which creates a socket and waits for new connections. The local IP for the server is 127.10.206.129 and the server listens on port 16000.
Code of my server:
-module(chat).
-export([start/0, wait_connect/2, handle/2]).

start() ->
    {ok, ListenSocket} = gen_tcp:listen(16000, [binary, {ip, {127,10,206,129}}]),
    wait_connect(ListenSocket, 0).

wait_connect(ListenSocket, Count) ->
    io:format("~s~n", ["Wait..."]),
    gen_tcp:accept(ListenSocket),
    receive
        {tcp, _Socket, Packet} ->
            handle(Packet, Count),
            spawn(?MODULE, wait_connect, [ListenSocket, Count+1]);
        {tcp_error, _Socket, Reason} ->
            {error, Reason}
    end.
Code of my client:
-module(client).
-export([client/2]).

client(Host, Data) ->
    {ok, Socket} = gen_tcp:connect(Host, 16000, [binary]),
    send(Socket, Data),
    ok = gen_tcp:close(Socket).

send(Socket, <<Data/binary>>) ->
    gen_tcp:send(Socket, Data).
The server starts without trouble. I tried running the client on my localhost and got an error (it kept trying to connect for much longer than the timeout):
2> client:client("chat-bild.rhcloud.com", <<"Hello world!">>).
** exception error: no match of right hand side value {error,etimedout}
in function client:client/2 (client.erl, line 4)
I also tried this naive way (although 127.10.206.129 is an incorrect IP, because it's the server's local IP):
3> client:client({127,10,206,129}, <<"Hello world!">>).
** exception error: no match of right hand side value {error,econnrefused}
in function client:client/2 (client.erl, line 16)
How do I do gen_tcp:connect with a URL?
Only ports 80, 443, 8000, and 8443 are open to the outside world on OpenShift. You cannot connect to other ports on OpenShift from your local workstation unless you use the rhc port-forward command, and that will only work with published ports that are used by cartridges.
Your listen call is limiting the listen interface to 127.10.206.129, but judging from the second client example, your client can't connect to that address. Try eliminating the {ip,{127,10,206,129}} option from your listen call so that your server listens on all available interfaces, or figure out what interface the server name "chat-bild.rhcloud.com" corresponds to and listen only on that interface.
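As a minimal sketch of that suggestion applied to the server above (everything else unchanged; the port still has to be one you can actually reach on OpenShift):
start() ->
    %% no {ip, ...} option: listen on all available interfaces
    {ok, ListenSocket} = gen_tcp:listen(16000, [binary]),
    wait_connect(ListenSocket, 0).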

erlang: to controlling_process(), or not to controlling_process()

Consider the following Erlang code for a simple echo server:
Echo listener:
-module(echo_listener).
-export([start_link/1]).

-define(TCP_OPTIONS, [binary, {packet, 0}, {reuseaddr, true},
                      {keepalive, true}, {backlog, 30}, {active, false}]).

start_link(ListenPort) ->
    {ok, ListenSocket} = gen_tcp:listen(ListenPort, ?TCP_OPTIONS),
    accept_loop(ListenSocket).

accept_loop(ListenSocket) ->
    {ok, ClientSocket} = gen_tcp:accept(ListenSocket),
    Pid = spawn(echo_worker, echo, [ClientSocket]),
    gen_tcp:controlling_process(ClientSocket, Pid),
    accept_loop(ListenSocket).
Echo worker:
-module(echo_worker).
-export([echo/1]).

echo(ClientSocket) ->
    case gen_tcp:recv(ClientSocket, 0) of
        {ok, Data} ->
            gen_tcp:send(ClientSocket, Data),
            echo(ClientSocket);
        {error, closed} ->
            ok
    end.
Whenever a client socket is accepted, the echo server spawns an echo worker and passes the client socket directly as a function parameter. The code calls controlling_process(), but I have tried the code without calling controlling_process() and it also works.
What is the real purpose of controlling_process()? When is it needed?
Thanks in advance.
The Erlang documentation says about gen_tcp:controlling_process/2:
Assigns a new controlling process Pid to Socket. The controlling process is the process which receives messages from the socket. If called by any other process than the current controlling process, {error, not_owner} is returned.
You created the listen socket with the option {active, false} and you read the socket synchronously with gen_tcp:recv/2, so your code will work even without calling gen_tcp:controlling_process/2. However, if you want to receive data asynchronously, you must create the listen socket with the option {active, true}. In that case the owner of the accepted connection will receive messages about incoming data, so if you don't call gen_tcp:controlling_process/2, these messages will be sent to the listener process instead of the worker.
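As a rough sketch of that asynchronous case (assuming {active, true} is set in the listen options and the worker handles the {tcp, ...} messages), the handoff would look like this:
accept_loop(ListenSocket) ->
    {ok, ClientSocket} = gen_tcp:accept(ListenSocket),
    Pid = spawn(echo_worker, echo, [ClientSocket]),
    %% without this call, {tcp, Socket, Data} and {tcp_closed, Socket}
    %% messages would be delivered to this acceptor process instead of Pid
    ok = gen_tcp:controlling_process(ClientSocket, Pid),
    accept_loop(ListenSocket).

echo(ClientSocket) ->
    %% active mode: data arrives as messages instead of via gen_tcp:recv
    receive
        {tcp, ClientSocket, Data} ->
            gen_tcp:send(ClientSocket, Data),
            echo(ClientSocket);
        {tcp_closed, ClientSocket} ->
            ok
    end.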

FIN,ACK after PSH,ACK

I'm trying to implement a communication between a legacy system and a Linux system but I constantly get one of the following scenarios:
(The legacy system is server, the Linux is client)
Function recv(2) returns 0 (the peer has performed an orderly shutdown.)
> SYN
< SYN, ACK
> ACK
< PSH, ACK (the data)
> FIN, ACK
< ACK
> RST
< FIN, ACK
> RST
> RST
Function connect(2) returns -1 (error)
> SYN
< RST, ACK
When the server has sent its data, the client should answer with data, but instead I get a "FIN, ACK".
Why is it like this? How should I interpret it? I'm not that familiar with TCP at this level.
It could be that once the server has sent the data (line 4) the client closes the socket or terminates prematurely and the operating system closes its socket and sends FIN (line 5). The server replies to FIN with ACK but the client has ceased to exist already and its operating system responds with RST. (I would expect the client OS to silently ignore and discard any TCP segments arriving for a closed connection during the notorious TIME-WAIT state, but that doesn't happen for some reason.)
http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination:
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If such a host actively closes a connection but still has not read all the incoming data the stack already received from the link, this host sends a RST instead of a FIN (Section 4.2.2.13 in RFC 1122). This allows a TCP application to be sure the remote application has read all the data the former sent—waiting the FIN from the remote side, when it actively closes the connection. However, the remote TCP stack cannot distinguish between a Connection Aborting RST and this Data Loss RST. Both cause the remote stack to throw away all the data it received, but that the application still didn't read
After FIN, PSH, ACK --> one transaction is completed.
A second request is received, but [RST] seq=140 win=0 len=0 is sent.