Do sockets automatically read into an internal buffer?

Let's say a server sends a small amount of data, but a client never calls recv(). Will the data sent by the server be present in an internal receive buffer on the client side even though read(), recv(), etc. were never called?
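In short, yes: the kernel queues incoming TCP data in the socket's receive buffer whether or not the application ever calls recv() (until that buffer fills, at which point TCP flow control throttles the sender). A minimal sketch, assuming a Linux/POSIX TCP socket, that reports how many bytes are already queued without reading them:

/* Minimal sketch (assumes Linux/POSIX): incoming TCP data is queued in the
 * socket's receive buffer even if the application never calls recv().
 * FIONREAD reports how many bytes are already waiting. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static void show_pending(int sockfd)
{
    int pending = 0;
    if (ioctl(sockfd, FIONREAD, &pending) == 0)
        printf("%d bytes buffered in the kernel, no recv() called yet\n", pending);
}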

Related

Does a recvmsg() call that returns ENOBUFS also return the available messages?

I'm using NETLINK socket to receive NETLINK_ROUTE notifications in a user-space application.
I understand that the ENOBUFS error is returned from recvmsg() when:
The user-space application is too slow to handle all the NETLINK messages that the kernel subsystem sends at a given rate.
The queue that is used to store messages going from the kernel to user space is too small.
Now, I am sure that the second point does not apply in my case, since I am able to receive some notifications initially without any error.
After a period of time, I get the ENOBUFS error.
My question is: when recvmsg() returns ENOBUFS, will it also fill and return the available messages that are still in the socket buffer, or will it just return ENOBUFS?
Because, according to my understanding, if the socket buffer is full and NETLINK cannot write any more notifications into it, then there must still be messages waiting to be read from the socket.
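For illustration, here is a sketch of how a receive loop commonly handles ENOBUFS (assuming Linux and a NETLINK_ROUTE socket created elsewhere; the buffer size and the re-sync note are assumptions). The failing recvmsg() call itself returns only the error, not data, but messages already queued in the socket buffer remain readable on subsequent calls, so the loop treats ENOBUFS as "some notifications were dropped" and keeps reading:

/* Sketch of a NETLINK_ROUTE receive loop that tolerates ENOBUFS.
 * Assumptions: Linux, a socket created elsewhere with
 * socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE); error handling trimmed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>

static void receive_loop(int nl_fd)
{
    char buf[8192];                      /* room for a burst of messages */
    struct sockaddr_nl sa;
    struct iovec iov = { buf, sizeof(buf) };
    struct msghdr msg = { &sa, sizeof(sa), &iov, 1, NULL, 0, 0 };

    memset(&sa, 0, sizeof(sa));

    for (;;) {
        ssize_t len = recvmsg(nl_fd, &msg, 0);
        if (len < 0) {
            if (errno == ENOBUFS) {
                /* The kernel dropped notifications because the socket buffer
                 * overflowed.  Messages already queued are still readable on
                 * the next call, but some events were lost, so re-query
                 * (dump) the current state if you need a consistent view. */
                fprintf(stderr, "netlink overrun: some events lost\n");
                continue;
            }
            if (errno == EINTR)
                continue;
            break;                       /* real error */
        }
        for (struct nlmsghdr *nh = (struct nlmsghdr *)buf;
             NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            /* handle one notification */
        }
    }
}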

TCP connection for real time

I want to use a real-time TCP connection. I have a stream of data from a server, and I receive it with a client, but this client is too slow to receive the data as fast as the sender produces it, so the server buffers the data until it reaches the destination. For example, if I "produce" data at time t, and suppose the client is 10 times slower, then the data produced at time t will arrive at time 10t.
I want to make the server "drop" the data that can't reach the client in time, and send the newer data that can arrive on time instead.
P.S.: I know that the UDP protocol does this, but I want to do it with TCP.
I've done this sort of thing in the past, and got reasonably good results. Here's how I did it:
1) On the sending side, use setsockopt(SOL_SOCKET, SO_SNDBUF) to make the server's TCP socket's send buffer as small as you can get away with (since you can't drop data once it's already in the socket's send buffer, you want to keep as little data there as possible).
2) On the sending side, never proactively send() any outgoing data into the socket at all. Instead, write a function (we'll call it DumpCurrentStateToBuffer()) that writes the "current state" bytes (that you want to send to the client) into an in-memory buffer.
3) When the client's socket signals ready-for-write via select() (or poll(), or whatever mechanism you use), call DumpCurrentStateToBuffer() to create a memory buffer of bytes that are to be sent to the client. Now send that data to the client (if you're using blocking I/O you can do it synchronously, at the cost of potentially stalling your server until the data can be sent; OTOH if you're using non-blocking I/O, you may need to keep the memory buffer and your current sent-bytes index into the buffer around as state variables, so you can keep sending more sub-chunks of the memory buffer over time, whenever the socket indicates that it can receive more bytes).
4) Once the memory-buffer's contents have been fully sent, you can free the memory buffer, and then wait for the socket to select as ready-for-write again; when it does, goto (3).
This technique doesn't solve all of TCP's non-real-time issues; for example, a dropped TCP packet will still have to be resent to the client. What it does do is guarantee that the server-to-client data backlog will never be more than one or two "states" long, because you never generate any new data unless/until there is at least some room in the socket's output buffer.
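A rough sketch of steps 1) through 4) in C might look like this (the dump_current_state_to_buffer() callback, the buffer sizes, and the surrounding event loop are assumptions made for illustration, not part of the answer above):

/* Sketch of the "send only the latest state" technique described above.
 * Assumptions: POSIX sockets, non-blocking I/O, and a hypothetical
 * dump_current_state_to_buffer() supplied by the application. */
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

#define STATE_MAX 4096

static char   state_buf[STATE_MAX];   /* snapshot being sent */
static size_t state_len  = 0;         /* bytes in the snapshot */
static size_t state_sent = 0;         /* bytes already written to the socket */

/* Hypothetical application callback: serialize the current state. */
extern size_t dump_current_state_to_buffer(char *buf, size_t cap);

/* Step 1: shrink the send buffer so little data can pile up in the kernel. */
static void setup_socket(int fd)
{
    int sndbuf = 2048;                 /* as small as you can get away with */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

/* Steps 2-4: called whenever select()/poll() reports fd ready-for-write. */
static void on_ready_for_write(int fd)
{
    if (state_sent == state_len) {
        /* Previous snapshot fully sent: take a fresh one now, never earlier. */
        state_len  = dump_current_state_to_buffer(state_buf, sizeof(state_buf));
        state_sent = 0;
    }
    ssize_t n = send(fd, state_buf + state_sent, state_len - state_sent, 0);
    if (n > 0)
        state_sent += (size_t)n;       /* keep index; resume on next writable event */
}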

Client sending data very fast and server receiving it slowly without any data loss

I am working on an application based on a client and a server using sockets. I am sending data from the client to the server and have added a sleep() call of 10 seconds on the server side. Now, when I send data from the client 1000000 times, the server receives it very slowly, and the client prints the values but also takes some time doing so. I need to clear up the following points:
- In both client and server, when the values are displayed there is no loss of data on either side. Does this mean that the recv call on the server side is blocking?
- Secondly, is there any good documentation that could help me better understand the blocking and non-blocking behaviour of the send and recv calls used in socket programming?
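As background to the first point: with TCP, data is not lost when the receiver is slow, because TCP flow control makes a blocking send() stall once the receiver's buffer and the sender's send buffer are full. A small sketch of the sending side, assuming POSIX sockets in the default blocking mode and an arbitrary 256-byte payload:

/* Sketch: why a slow receiver does not cause data loss over TCP.
 * Assumptions: POSIX sockets, blocking mode (the default), an arbitrary
 * payload; the receiver sleeps between recv() calls. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_many(int fd)
{
    char payload[256];
    memset(payload, 'x', sizeof(payload));

    for (long i = 0; i < 1000000; i++) {
        /* Once the receiver's and sender's socket buffers are full, this
         * blocking send() does not return until the slow receiver drains
         * some data - TCP flow control, not data loss. */
        ssize_t n = send(fd, payload, sizeof(payload), 0);
        if (n < 0) {
            perror("send");
            break;
        }
    }
}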

Chat server: what happens to the read and write buffers when ...?

A TCP chat server uses polling for concurrent service. Client A is constantly sending huge amounts of data. The chat server tries to forward the data from client A to clients B and C, but clients B and C are not reading from their read buffers. What happens to the read and write buffers of the chat server, client A, client B and client C?
There are 2 cases:
1. The chat server has blocking sockets.
2. The chat server has non-blocking sockets.
If you're talking about TCP, the receiver's socket receive buffer fills up, so the sender's socket send buffer fills up, so the sender is either blocked (in blocking mode) or gets -1 with errno == EAGAIN/EWOULDBLOCK (in non-blocking mode).
If you're talking about UDP the datagrams are dropped.
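For the non-blocking TCP case, a minimal sketch of what the chat server has to do once that backpressure reaches it (the queue_for_later() helper is a hypothetical name, not part of the answer above):

/* Sketch: relaying data to a slow client on a non-blocking TCP socket.
 * Once the client's receive buffer and our send buffer are full, send()
 * returns -1 with EAGAIN/EWOULDBLOCK and we must queue (or drop) the data
 * ourselves.  Assumes a non-blocking socket set up elsewhere. */
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* Hypothetical helper supplied by the application. */
extern void queue_for_later(int client_fd, const char *data, size_t len);

static void relay(int client_fd, const char *data, size_t len)
{
    ssize_t n = send(client_fd, data, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Client B or C is not reading: backpressure reached the server. */
        queue_for_later(client_fd, data, len);
        return;
    }
    if (n >= 0 && (size_t)n < len) {
        /* Partial write: buffer the remainder until the socket is writable. */
        queue_for_later(client_fd, data + n, len - n);
    }
}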

Receiving FD_CLOSE when there should be FD_READ

I have a strange problem on one of my client's workstations. I have a simple application that exchanges some data over the network between two endpoints.
Basically the transaction goes like this:
Client A listens for an incoming connection
Client B connects to A and sends some data
Client A reads this data for further processing
Now the strange part is that client A does not receive the whole data (sometimes it is part of the buffer, sometimes it is empty).
Client A uses the WSAEventSelect function and waits for FD_READ to read data sent by B and for FD_CLOSE to detect disconnection.
Usually (every time except with this one particular client) FD_READ is signaled, the data is processed and after that FD_CLOSE is signaled and all is fine, but here instead of FD_READ I receive FD_CLOSE.
Can someone tell me how this is possible? Another thing is that the program was working fine for about a year and suddenly it crashed.
Now the strange part is that client A does not receive the whole data (sometimes it is part of the buffer, sometimes it is empty).
There's nothing strange about that, that's how TCP works, except that you will never receive zero bytes in blocking mode.
Usually (every time except with this one particular client) FD_READ is signaled, the data is processed and after that FD_CLOSE is signaled and all is fine, but here instead of FD_READ I receive FD_CLOSE.
Note that FD_READ can be signalled any number of times, not just once. You're not guaranteed to receive an entire message in a single read.
Can someone tell me how this is possible?
The client has closed the connection.
Quoting http://msdn.microsoft.com/en-us/library/windows/desktop/ms741576%28v=vs.85%29.aspx
"An application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data."
So if the error code associated with the FD_CLOSE notification is 0, you should check to see if you still have data to read, that might be where your missing data is.
If the error code is NOT 0, then there was an error and the missing data is probably lost.
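As a rough illustration of that last point (the event-handling skeleton and buffer size here are assumptions): when the FD_CLOSE error code is 0, keep calling recv() until it returns 0 so that any bytes still queued in the receive buffer are picked up.

/* Sketch: draining leftover data when FD_CLOSE is signaled (Winsock).
 * Assumes a socket registered with WSAEventSelect(..., FD_READ | FD_CLOSE)
 * and that WSAEnumNetworkEvents() was called to fill in 'events'. */
#include <winsock2.h>

static void handle_close(SOCKET s, const WSANETWORKEVENTS *events)
{
    if (events->iErrorCode[FD_CLOSE_BIT] != 0) {
        /* Abortive close: any unread data is probably lost. */
        return;
    }
    char buf[4096];
    int  n;
    /* Graceful close: read until recv() returns 0 so no queued data is lost. */
    while ((n = recv(s, buf, sizeof(buf), 0)) > 0) {
        /* process the remaining n bytes here */
    }
}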