There was this question posted in class today about API design in socket programming.
Why are listen() and accept() provided as different functions and not merged into one function?
Now, as far as I know, listen() marks a socket as willing to accept connections and sets an upper bound on the queue of pending connections. If accept() and listen() were merged, could such a queue not still be maintained?
Or is there some other explanation?
Thanks in advance.
listen() means "start listening for clients"
accept() means "accept a client, blocking until one connects if necessary"
It makes sense to separate these two, because if they were merged, then the single merged function would block. This would cause problems for non-blocking I/O programs.
For example, let's take a typical server that wants to listen for new client connections but also monitor existing client connections for new messages. A server like this typically uses a non-blocking I/O model so that it is not blocked on any one particular socket. So it needs a way to "start listening" on the server socket without blocking on it. Once listening on the server socket has been initiated, the server socket is added to the set of sockets being monitored via select() (or a relative such as poll() or epoll()). The select() call indicates when a client connection is pending on the server socket, and the program can then call accept() without fear of blocking on that socket.
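A minimal sketch of one pass through such a loop, assuming POSIX sockets, that listen_fd is already bound and listening, and that client_fds holds the currently connected clients (error handling mostly omitted):

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <vector>

void poll_once(int listen_fd, std::vector<int>& client_fds) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listen_fd, &readfds);          // the listening socket is just another fd to watch
    int maxfd = listen_fd;
    for (int fd : client_fds) {
        FD_SET(fd, &readfds);
        if (fd > maxfd) maxfd = fd;
    }
    if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) <= 0)
        return;

    for (int fd : client_fds) {
        if (FD_ISSET(fd, &readfds)) {
            char buf[4096];
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n <= 0) { /* client closed or errored: close and remove fd */ }
            // otherwise handle the message in buf ...
        }
    }
    if (FD_ISSET(listen_fd, &readfds)) {
        // A connection is already pending, so this accept() will not block.
        int client = accept(listen_fd, nullptr, nullptr);
        if (client >= 0) client_fds.push_back(client);
    }
}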
listen(2) makes the given TCP socket a server socket, i.e. it creates a queue for accepting connection requests from clients. Only the listening side's port, and possibly IP address, are bound (which is why you call bind(2) before listen(2)). accept(2) then actually takes a connection request from that queue and turns it into a connected socket (the four parts required for two-way communication, namely source IP address, source port number, destination IP address, and destination port number, are all assigned). listen(2) is called only once, while accept(2) is usually called multiple times.
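A minimal sketch of that call sequence (error handling omitted, and the port number is arbitrary):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof addr);   // once: reserve the local port
    listen(srv, 16);                                              // once: create the pending-connection queue

    for (;;) {                                                    // accept() many times, once per client
        int conn = accept(srv, nullptr, nullptr);
        if (conn < 0) continue;
        // conn is a new connected socket identified by the full 4-tuple;
        // read from / write to it here, typically in another thread or process
        close(conn);
    }
}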
Under the hood, bind assigns an address and a port to a socket descriptor. It means the port is now reserved for that socket, and therefore the system won't be able to assign the same port to another application (an exception exists, but I won't go into details here). It's also a one-time-per-socket operation.
Then listen is responsible for establishing the number of connections that can be queued for a given socket descriptor, and for indicating that you're now willing to receive connections.
On the other hand, accept is used to dequeue the first connection from the queue of pending connections, and create a new socket to handle further communication through it. It may be called multiple times, and generally is. By default, this operation is blocking if there are no connections in the queue.
Now suppose you want to use an async I/O mechanism (like epoll, poll, kqueue, select, etc.). If listen and accept were a single API, how would you indicate that a given socket is willing to receive connections? The async mechanism needs to know that you wish to handle this type of event as well.
Since the two calls have quite different semantics, it makes sense to keep them apart.
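To make the last point concrete, here is a small sketch with epoll, assuming listen_fd is already bound and listening and epfd came from epoll_create1(); registering the listening socket is precisely how you tell the readiness mechanism that you want "new connection" events:

#include <sys/epoll.h>

void register_listener(int epfd, int listen_fd) {
    epoll_event ev{};
    ev.events = EPOLLIN;        // "readable" on a listening socket means a connection is pending
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);
}

// In the event loop: when the ready descriptor is listen_fd, call accept();
// for any other descriptor, read or write application data instead.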
I have a client application that uses IOCP for socket communication. I'm using ConnectEx to make the TCP connection to the remote endpoint (binding the socket to ADDR_ANY and port 0 before calling ConnectEx).
It will be valid to have two connections to the same remote endpoint (same IP address and port number). When I test that condition with my current code, I have two overlapped I/O read operations outstanding (one on each connected socket) from calls to WSARecv(). Each WSARecv() is called with the correct socket and overlapped structure, for example: WSARecv(socket1, ... overlapped1) and WSARecv(socket2, ... overlapped2). The problem I've run into is that when I get a response back from either remote, it triggers the completion event for both of the outstanding overlapped operations. My code only produces this result when the two remotes have the same IP address and port number, not when there are two unique remote addresses. Is this the expected behavior (hopefully not)? If so, is there another way to accomplish this?
I'm posting an answer, even though it is really just an explanation of why the problem happened.
My test involved connecting to and communicating with a remote device that provides data. It turns out that the device is on the other side of a Digi terminal server. So the connection path was:
my test computer (via TCP) -> Digi terminal server (via Serial) -> remote device.
The Digi terminal server basically converts TCP/IP to serial communications, and back. Since the serial side doesn't have a concept of 'connectedness', the Digi doesn't know which TCP/IP connection should receive the serial data in response to a TCP/IP request, so it forwards the serial data to all active connections on the TCP/IP side. That's what was producing the IOCP trigger on both of my pending overlapped operations. Every time a request was sent to the Digi, it sent the request out of its serial port. When the end device responded, the Digi forwarded the response data to each of my TCP/IP connections.
Thanks to everyone who commented on my question, but sorry for taking up your time.
We have a gen_server process that manages a pool of passive sockets on the client side by creating them and lending them to other processes. Any other process can borrow a socket, send a request to the server using it, get a reply through gen_tcp:recv, and then release the socket back to the gen_server pool process.
The socket pool process monitors all processes that borrow the sockets. If any of the borrowing processes dies, the pool process gets a down signal from it:
handle_info({'DOWN', Ref, process, _Pid, _Reason}, State) ->
In this case we would like to drain the borrowed socket and reuse it by putting it back into the pool. The problem is that while trying to drain a socket using gen_tcp:recv(Socket, 0, 0), we get an ealready error from inet, meaning that a recv operation is already in progress.
So the question is how to interrupt the previous recv, successfully drain the socket, and reuse it for other processes.
Thanks.
One more level of indirection will greatly simplify the situation.
Instead of passing sockets to processes that need to use them, have each socket controlled by a separate process that owns it and represents the socket within the system. Route Erlang-side messages to and from sockets as necessary to implement the "borrowing" of sockets (even more flexibly, pass the socket controller a callback module that speaks a given protocol, so as soon as data comes over the network it is interpreted as Erlang messages internally).
If this is done you will not lose control of sockets or have them in indeterminate states; they will instead be held by a single, owning process the entire time. Instead of having the route-manager/pool-manager process receive the 'DOWN' messages, have each socket controller monitor its current borrowing process. When a 'DOWN' is received, the controller can change state according to whatever is necessary.
You can get yourself into some weird situations by passing open file descriptors, sockets, and other types of ports around among processes that aren't designated as their owners. Passing ports and sockets around also becomes a problem if you need to scale a program across several nodes (suddenly you have to care where things are passed and what node they are on, etc.).
As far as I know, only one process can be bound to a given port for the same protocol, and in order to read information arriving on a port, a socket must be bound to that port.
Is there a way of sharing a socket with another process, or something like that?
Is there a way of sharing a socket with another process, or something like that?
Sharing a socket, and thus the port, between two processes is possible (like after fork), but this is probably not what you want for data analysis, since if one process reads the data the other does not get it anymore.
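For illustration, a small POSIX sketch of the fork() case; the function name is made up and listen_fd is assumed to be already bound and listening. Whichever process wins the accept() race owns that connection, so only one of them ever sees its data:

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstring>

void serve_with_two_processes(int listen_fd) {
    pid_t pid = fork();                  // both processes now share listen_fd
    if (pid < 0) return;                 // fork failed
    for (;;) {
        int conn = accept(listen_fd, nullptr, nullptr);
        if (conn < 0) continue;
        // Only the process that got this accept() can read/write this connection.
        const char* who = (pid == 0) ? "child\n" : "parent\n";
        write(conn, who, strlen(who));
        close(conn);
    }
}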
How can a firewall/iptables check incoming TCP traffic on already-bound ports?
Packet filters like iptables work inside the kernel and get the data before they are sent to the socket. It does not even matter whether a socket is bound to that specific port at all. Unless the packet filter denies the data, they are forwarded unchanged to the socket (if there is one).
A passive IDS like snort, or tools like tcpdump, gets the raw packets, and here it also does not matter whether there is a socket at all. They can only read the packets, i.e. they cannot modify or block them.
Application-level firewalls or (reverse) proxies have their own socket and receive the data there (directly, or redirected by the packet filter). They can then analyse the data and will explicitly forward it (maybe after modification) to the original application.
I am trying to understand how concurrency works at a system level.
Backstory
I have an application and a datastore. The datastore can have several processes running and so can handle multiple requests concurrently. The datastore accepts communication over a single TCP port using a protocol in the format <msg length> <operation code> <operation data>
The existing application code blocks on datastore I/O. I could spin up several threads to achieve concurrency, but each thread would still block on I/O. I have some single-threaded non-blocking I/O libraries, but using them would require me to do some socket programming.
Question
How would a single-process connection pool to a single non-blocking port work? From what I understand, the port maintains a sort of mapping so it can send the response to the correct place when a response is ready. But I read that it uses the requester's IP as the key. If multiple requests to the same port occur from the same process, wouldn't the messages get mixed up / intermingled?
Or, does each connection get assigned a unique key, so to make a connection pool I need only store a list of connection objects and they are guaranteed never to interact with each other?
Edit: I don't know why I said TCP, and half the content of this question is unnecessary ... I am embarrassed. Probably ought to delete it, actually. I voted.
The datastore accepts communication over a single TCP port
The result of the accept() is a new full-duplex socket which can be read and written to concurrently and independently of all other sockets in the process. The fact that its local port is shared is irrelevant. TCP ports aren't physical objects, only numbers.
Non-blocking mode and data stores have nothing to do with it.
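To illustrate, a rough sketch of such a pool; the helper name and addressing are placeholders, not from the question. Each connect() produces its own socket with its own local ephemeral port, so the connections can never interfere with one another:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

std::vector<int> make_pool(const char* ip, unsigned short port, int count) {
    std::vector<int> pool;
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(port);
    inet_pton(AF_INET, ip, &server.sin_addr);

    for (int i = 0; i < count; ++i) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof server) == 0)
            pool.push_back(fd);     // a request sent on this fd gets its reply on this fd only
        else
            close(fd);
    }
    return pool;
}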
I am writing a gateway service which listens on a network socket and routes the packets received to separate daemons. I am planning to use boost asio but I am stuck on a few questions. Here is the design of the server I am planning to implement:
The gateway will be listening for TCP connections using boost asio.
The gateway will also listen for streamed Unix domain connections from daemons using boost asio.
Whenever there is a packet on the TCP connection, the gateway looks at the protocol tag in the packet and puts the packet on the Unix domain connection on which the corresponding service is listening.
Whenever there is a packet on a service connection, the gateway looks at the client tag and puts the packet on the respective client connection.
Every descriptor in the gateway will be non-blocking.
I am stuck on one particular problem: when the gateway is writing to the service connection, there is a chance of getting an EAGAIN or EWOULDBLOCK error if the service socket's buffer is full. I plan to tackle this by queuing the buffers and "waiting for the service connection to become ready for write".
If I were to use the select system call, "waiting for the service connection to become ready for write" would translate to adding the fd to the write fd set and passing it to select. Once the service connection is ready for write, I would write the enqueued buffers to the connection and remove the service connection from select's write fd set.
How do I do the same thing with boost asio? Is such a thing possible?
If you want to go with that approach, then use boost::asio::null_buffers to enable Reactor-Style operations. Additionally, set the Boost.Asio socket to non-blocking through the socket::non_blocking() member function. This option will set the synchronous socket operations to be non-blocking. This is different from setting the native socket as non-blocking, as Boost.Asio sets the native socket as non-blocking, and emulates blocking for synchronous operations.
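A rough sketch of that reactor-style approach, using the older null_buffers form (newer Boost versions provide socket::async_wait() for the same purpose); the Writer type and its members are placeholder names, not part of Asio:

#include <boost/asio.hpp>
#include <string>

using boost::asio::ip::tcp;

struct Writer {
    tcp::socket& socket;        // must be in non-blocking mode: socket.non_blocking(true)
    std::string pending;        // bytes queued up while the service socket was full

    void arm() {
        // Ask Asio only for a writability notification; no data is transferred here.
        socket.async_write_some(boost::asio::null_buffers(),
            [this](const boost::system::error_code& ec, std::size_t) { on_writable(ec); });
    }

    void on_writable(const boost::system::error_code& ec) {
        if (ec) return;
        boost::system::error_code write_ec;
        std::size_t n = socket.write_some(boost::asio::buffer(pending), write_ec);
        if (!write_ec) pending.erase(0, n);
        if (write_ec == boost::asio::error::would_block || !pending.empty())
            arm();              // the socket filled up again: wait for the next notification
    }
};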
However, if Proactor-Style operations are an option, then consider using them, as it allows the application to ignore some of the lower level details. When using proactor style operations, Boost.Asio will perform the I/O on the application's behalf, properly handling EWOULDBLOCK, EAGAIN, and ERROR_RETRY logic. For example, when Boost.Asio incurs one of the previously mentioned errors, it pushes the I/O operation back into its internal queue, deferring its reattempt, allowing other operations to be attempted.
Often, there are two constraints which require the use of Reactor-Style operations instead of Proactor-Style operations:
Another library expects to perform the I/O operations itself.
Memory limitations. With a Proactor, the lifespan of a buffer must exceed the duration of a read or write operation, and concurrent operations may require their own buffer. A Reactor allows for the lifetime of a buffer to begin when data is ready to be read, and end when data is no longer being used.
Using boost::asio you don't need to mess with non-blocking mode and/or with return codes such as EAGAIN, EWOULDBLOCK, etc. Also, you are not "adding a socket to a polling loop" or anything like that; this is hidden from you, since it is a higher-level framework.
The typical pattern is:
You create an io_service object.
You create a socket bound to the io_service.
You start some async operation (async_connect, async_read, async_write, and so on) on the socket.
You run dispatching with io_service::run or similar methods.
asio will invoke your handler when the corresponding event occurs.
Check out the examples on the boost::asio page. I think the async echo server can illustrate the technique for your task.
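A rough sketch of that pattern with the older io_service-style API; the address, port, and handler names here are placeholders rather than anything from the question:

#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <iostream>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_service io;                        // 1. the io_service
    tcp::socket socket(io);                            // 2. a socket bound to it

    std::array<char, 1024> buf;
    std::function<void(const boost::system::error_code&, std::size_t)> on_read;
    on_read = [&](const boost::system::error_code& ec, std::size_t n) {
        if (ec) return;                                // peer closed or error: stop
        std::cout.write(buf.data(), n);
        socket.async_read_some(boost::asio::buffer(buf), on_read);   // keep reading
    };

    tcp::endpoint server(boost::asio::ip::address::from_string("127.0.0.1"), 5000);
    socket.async_connect(server,                       // 3. queue an async operation
        [&](const boost::system::error_code& ec) {
            if (!ec)
                socket.async_read_some(boost::asio::buffer(buf), on_read);
        });

    io.run();                                          // 4. dispatch; asio invokes the handlers
}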
If multiple threads will be writing to the same socket object used for a connection, then you need to use a mutex (or critical section if using Windows) to single-thread that code.
As for - "when the gateway is writing to the service connection, there are chances of getting an EAGAIN or EWOULDBLOCK error if the service socket is full", I believe that ASIO handles that for you internally so you don't have to worry about it.