Can non-blocking sockets be used with epoll's level-triggered mode?

Currently I have a server application which supports multiple client sessions. The server runs epoll in edge-triggered mode, and all the sockets used inside the server are non-blocking.
The main epoll loop looks something like this:
n = epoll_wait()
iterate over the n ready events
    if the event is EPOLLIN (assume the client has written some data)
        while (1)
            drain the socket until read() returns EAGAIN
        break
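In real C, the edge-triggered drain loop sketched above looks roughly like this (a minimal sketch, assuming the fd is already non-blocking; error handling trimmed):

#include <errno.h>
#include <unistd.h>

/* ET mode: must read until EAGAIN, because epoll_wait() will not
 * report this fd again until *new* data arrives. */
static void handle_readable_et(int fd)
{
    char buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) {
            /* process n bytes ... */
        } else if (n == 0) {
            close(fd);   /* peer closed the connection */
            return;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            return;      /* drained; wait for the next event */
        } else {
            close(fd);   /* real error */
            return;
        }
    }
}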
The problem arises when data flows in continuously and the buffer never drains: the other sessions never get a chance to be served.
Because of this possible starvation of the other clients, I am thinking of switching to level-triggered mode, which would let the server serve all active sessions in a round-robin way.
Can I just use level-triggered mode by removing EPOLLET from the subscribed events and reading the buffered data once per wakeup (as LT mode allows)?
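Concretely, the change I have in mind looks something like this (a sketch; epfd and fd are assumed to be set up elsewhere, and fd stays non-blocking):

#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Re-arm an already-registered fd without EPOLLET, i.e. switch it to
 * level-triggered mode (the default). */
static int make_level_triggered(int epfd, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = fd;
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}

/* LT mode: one bounded read per wakeup. If data is still queued, the
 * next epoll_wait() reports the fd again, so the other sessions get
 * served in between. */
static void handle_readable_lt(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof buf);   /* non-blocking read */
    if (n > 0) {
        /* process n bytes ... */
    } else if (n == 0 || (errno != EAGAIN && errno != EWOULDBLOCK)) {
        close(fd);   /* peer closed, or a real error */
    }
}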
Any comments/references are appreciated.
Thanks!

Related

Interrupting gen_tcp:recv in Erlang

We have a gen_server process that manages a pool of passive sockets on the client side, creating them and lending them out to other processes. Any other process can borrow a socket, send a request to the server over it, get the reply through gen_tcp:recv, and then release the socket back to the gen_server pool process.
The socket pool process monitors all processes that borrow sockets. If a borrowing process dies, the pool gets a 'DOWN' message from it:
handle_info({'DOWN', Ref, process, _Pid, _Reason}, State) ->
In this case we would like to drain the borrowed socket and put it back into the pool for reuse. The problem is that while trying to drain the socket using gen_tcp:recv(Socket, 0, 0), we get an {error, ealready} result, meaning that a recv operation is already in progress.
So the question is how to interrupt the previous recv, successfully drain the socket, and reuse it for other processes.
Thanks.
One more level of indirection will greatly simplify the situation.
Instead of passing sockets to processes that need to use them, have each socket controlled by a separate process that owns it and represents the socket within the system. Route Erlang-side messages to and from sockets as necessary to implement the "borrowing" of sockets (even more flexibly, pass the socket controller a callback module that speaks a given protocol, so as soon as data comes over the network it is interpreted as Erlang messages internally).
If this is done you will not lose control of sockets or have them in indeterminate states: they will instead be held by a single owning process the entire time. Instead of having the route-manager/pool-manager process receive the 'DOWN' messages, have each socket controller monitor its current user process. When a 'DOWN' is received, the controller can change state however is necessary.
You can get yourself into some weird situations passing open file descriptors, sockets, and other types of ports around among processes that aren't designated as their owners. Passing ports and sockets around also becomes a problem if you need to scale a program across several nodes (suddenly you have to care about where things are passed and what node they are on, etc.).

In which case should we use the hybrid socket approach in Erlang?

Programming Erlang says in chapter 17.2:
Erlang sockets can be opened in one of three modes: active, active once, or passive
...
You might think that using passive mode for all servers is the correct approach. Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.
I just cannot understand the sentence "This is useless for writing servers that must wait for data from multiple sockets".
In my opinion, if I cannot trust the clients, I should not use active mode.
But I can build a parallel server in passive mode, with one Erlang process per client.
Maybe the book means one Erlang process waiting on multiple sockets, but I cannot imagine an example of that case.
Could you give me more information about it?
Thank you!
Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.
I'd say that's not a very compelling argument against passive sockets. In almost all cases, you'll have one Erlang process per socket, and this problem doesn't arise.
A better argument against passive sockets is that while waiting for data (using gen_tcp:recv), the process cannot receive messages from other Erlang processes. Those messages could be the result of a computation, a request to shut down, etc.
That is, when using active or active-once mode, your receive would look something like this:
receive
    {tcp, Socket, Data} ->
        %% do something with Data
        %% then reactivate the socket
        ok = inet:setopts(Socket, [{active, once}]),
        loop(Socket);
    {result, Result} ->
        %% send Result back to the socket
        ok = gen_tcp:send(Socket, Result),
        loop(Socket);
    stop ->
        %% stop this process
        exit(normal)
end
Using this code, whichever event arrives first will be handled first, regardless of whether it's incoming data on the socket or a message from another Erlang process.
If on the other hand you were using gen_tcp:recv to receive the data, you would block on that call, unable to react to {result, Result} and stop in a timely manner.

Socket data read wait time

I have an application in which I listen on multiple sockets using select. If I start processing a request that came in from socket A, and in the meanwhile another request arrives on socket B, I want to know how long the socket B request had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread and go back to select to monitor again and instantly start processing the request from socket B.
Is there a C API available to get me this metric, or is this just not possible to get?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is written together with the data. Moreover, the situation is even more complex because a stream-oriented socket may receive several data segments before the data is finally read, and then it is not clear which interval should be measured.
If the application's data processing takes longer than packet processing in the kernel, you can do a reasonable measurement in the following way:
1. Print the current time and some unique data id (based on the application protocol) when select wakes up due to data availability on socket B; see the sketch after this list.
2. Log every packet received for socket B. You can use a network traffic capture tool like Wireshark or tcpdump, or you can configure an iptables firewall rule (if the application runs on Linux) with target -j LOG.
3. Write a simple script/program that correlates the captured packets with the application log and subtracts the packet-arrival time from the start-of-processing time.
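A minimal sketch of the logging in step 1 (request_id is a placeholder for whatever unique field your protocol carries; CLOCK_REALTIME is used so the timestamps line up with the capture tool's wall-clock times):

#include <stdio.h>
#include <time.h>

/* Call immediately after select() reports socket B readable. */
static void log_wakeup(long request_id)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* wall clock, matches tcpdump */
    fprintf(stderr, "wakeup req=%ld t=%ld.%09ld\n",
            request_id, (long)ts.tv_sec, ts.tv_nsec);
}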
Of course, the idea above ignores kernel processing time. If you really need exact times, you will have to introduce a new thread into your application.

Continuous stream of data via socket gets progressively more delayed

I am working on an application which, through a Java program, links two different robot simulation environments. One simulation environment (let's call it A) sends the current state of the robot to the Java application, which does some calculations and then sends data about this current state, as well as some other information, on to the other simulation environment (let's call it B). Simulation B then updates the state of the robot to match Simulation A's version.
The problem is that as the program continues to run, simulation B begins to lag behind what simulation A is doing. This lag increases continuously, so that after a minute or so simulation B is several seconds behind.
I am using TCP sockets to send data between these environments and the Java program. From background reading on socket programming, I learned that it is bad practice to open and close sockets rapidly, so what I am doing currently is keeping both sockets open. I have a loop which grabs data from Sim A, does some calculations, and sends the position data to Sim B; the thread then waits 100 ms and the loop repeats. To be clear, the position data sent to B is unaltered from what is received from A.
Upon researching the lag issue, someone suggested to me that for streams of data it is actually a good idea to open and close sockets, because if you keep the socket open and one simulation takes longer to process than the other, the position data stacks up in the buffer and is read sequentially, instead of only the most recent data being read. Is this true? Would rewriting my code to open and close sockets every 100 ms potentially get rid of the delay? Or is this not how sockets actually work?
Edit for clarification: It is more critical that the simulations stay in sync than that every position data point is delivered; in other words, it is acceptable to drop data points for the sake of staying in sync.
Besides the possibility that keeping the socket open is causing problems, does anyone have ideas about what else might be causing the lag?
Thanks in advance for any insight/suggestions/hints!
You are correct about using a single connection. Data can indeed back up, but using multiple connections doesn't change that.
The basic question here is whether the Java program can calculate as fast as the robot can send data. If it can't, it will get behind. You can do various things to the networking to speed it up, but if the computations can't keep up, those changes are futile. So you need to investigate your timings.

Do I have to add a lock to a socket when two or more threads want to access it?

I get a socket from the accept function in the main process, and two or more threads can send data on it. Access to the socket must therefore be mutually exclusive when two or more threads want to send data on it in parallel. My question is whether the OS adds a lock to the connected socket at the bottom of the system.
Since you mention accept(), I take it we are talking about stream sockets.
You can send simultaneously from multiple threads or processes on the same socket, but there is no guarantee that the data from multiple senders will not be interleaved together. So you probably don't want to do it.
If you are sending small amounts of data at a time that don't cause the socket to block, you can probably expect the data blocks submitted to each simultaneous send()/write() call to arrive contiguously at the other end. PROBABLY. You can't count on it.
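If you do need several threads to write to the same socket, the usual fix is to serialize whole messages yourself. A minimal sketch with a pthread mutex (send_message is a hypothetical helper; EINTR/EAGAIN handling omitted):

#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>

static pthread_mutex_t send_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hold the lock for the whole message so bytes from two threads can
 * never interleave; loop because send() may accept only part of buf. */
static int send_message(int fd, const char *buf, size_t len)
{
    int rc = 0;
    pthread_mutex_lock(&send_lock);
    while (len > 0) {
        ssize_t n = send(fd, buf, len, 0);
        if (n < 0) { rc = -1; break; }
        buf += n;
        len -= (size_t)n;
    }
    pthread_mutex_unlock(&send_lock);
    return rc;
}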