Does data loss happen with a fast sender and a very slow receiver? - sockets

I have a client/server application that communicates using sockets.
On the server side, in the thread that receives messages from the client, I have added a 10-second sleep call. Now when the client sends a message to the server 1,000,000 times, the server receives the messages very slowly. My questions are as follows:
- Does this mean that the receive call on the server side is a blocking call?
- Secondly, is there a good document that explains the blocking and non-blocking behavior of the socket send and receive calls?

It depends on whether you're using TCP or UDP sockets. TCP guarantees delivery, UDP doesn't. So in a UDP application packets can be dropped for any number of reasons, including when one side sends faster than the other can receive. With TCP, a slow receiver does not cause data loss: the receiver's advertised window throttles the sender (flow control), which is why your sender slows down instead of losing messages.
By default, calls on sockets are blocking calls. You have to set non-blocking mode explicitly.
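As a minimal sketch of the two modes in Python (the host and port are just for illustration): recv() blocks by default, and after setblocking(False) the same call fails fast when nothing has arrived yet.

```python
import socket

# Default mode: every call blocks until it can complete, so a recv()
# here would wait for data (or for the peer to close the connection).
sock = socket.create_connection(("example.com", 80))  # illustrative peer

# Non-blocking mode: recv() returns immediately, raising BlockingIOError
# when no data is queued in the kernel's receive buffer.
sock.setblocking(False)
try:
    data = sock.recv(4096)
except BlockingIOError:
    data = b""  # nothing available yet; poll/select and retry later
sock.close()
```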

Related

Do multiple websocket connections share one TCP connection?

Question
Suppose I open websocket connections /A, /B and /C from one client to the same server, and messages are constantly being sent by the client on each ws connection.
If a packet carrying a message for ws connection /A times out and ends up needing to be retransmitted in the underlying TCP layer, is there any chance this will delay the messages the client is transmitting on ws connections /B and /C, the way later packets sent over a single TCP connection are delayed when an earlier one times out?
Or is there a chance that each ws connection receives its own TCP connection and so any congestion on one does not impact the packets carrying messages on the others? Could it be implementation-specific?
Overview
After an initial handshake over HTTP, Websockets send their data over a TCP/IP connection.
My assumptions regarding TCP Connections:
For a specific TCP connection on a given port, the connection guarantees that all packets sent by the sender through it will be received by the receiver, and in order. It is a reliable, ordered connection.
This means if two packets are sent over a TCP connection by the sender and the first one does not arrive within a given timeout, the second one is delayed until the first one has been successfully retransmitted by the sender. As far as the receiver is concerned, they always see packet one, then two.
If you create two separate TCP connections on two separate ports, then packets lost on one connection naturally have no impact on packets on the other connection. The reliable ordered guarantee only applies to the packets within one TCP connection, not all of them.
Since an active websocket connection runs on top of a TCP connection, are there any assumptions we can make when we open multiple parallel websocket connections between the same client and server?
If I have a JavaScript WebSocket client opening two or more websocket connections to the same server, does the underlying implementation use only a single TCP connection?
Is there a chance that this may be implementation-specific, or is it simply guaranteed that for a given websocket server all connections will occur on the same underlying TCP connection?
Context
The context here is a networked multiplayer game in the browser, where the desired behavior would be to have multiple parallel data streams where any timeout or latency on one stream has no impact on the packets sent on the others.
Of course, when low latency is desirable for multiplayer games you generally want to use UDP instead of TCP, but there is currently no well-supported cross-browser option for that, as far as I am aware. In an environment where UDP sockets are an option, you would implement your own data streams with varying reliability/ordering guarantees on top of UDP.
However, UDP-like low latency, in the sense of one packet not blocking another, can be approximated by guaranteeing that a TCP connection only ever has one packet in flight, and using more connections to allow packets in parallel (source). Naturally that loses out on some optimizations, but may be desirable if latency is the primary variable being optimized for.
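As a rough sketch of that multiple-connections approach, here is what the setup could look like with the Python websockets client library (the server URI is hypothetical; each connect() call opens its own TCP connection, so a retransmission stall on one stream does not delay the others):

```python
import asyncio
import websockets  # third-party: pip install websockets

async def main():
    uri = "ws://localhost:8765"  # hypothetical game server
    # Open several independent WebSocket (and hence TCP) connections.
    streams = [await websockets.connect(uri) for _ in range(3)]
    # Spread messages across the connections so a stalled one
    # only delays the messages assigned to it.
    for i in range(9):
        await streams[i % 3].send(f"msg {i}")
    for ws in streams:
        await ws.close()

asyncio.run(main())
```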

ZeroMQ and TCP Retransmits on Linux

I'm having trouble understanding which socket types are negatively impacted in the event that TCP must retransmit messages.
We have a distributed system that uses a combination of inproc and TCP connections for internal processes and external devices and applications. My concern is that, in the event of significant traffic causing latency and dropped packets, TCP retransmits will introduce delay into the system.
What I'd like to avoid is an application having messages pile up in a queue waiting to be sent (via a single ZeroMQ TCP socket) because TCP is forcing the socket to repeatedly retransmit messages that were never acknowledged.
Is this an issue that can happen using ZeroMQ? Currently I am using PUSH/PULL on a Linux OS.
Or is this not a concern, and if not, why?
It is crucial that messages from the external devices/applications do not feed stale data.
First, the only transport where retransmits are possible is TCP over an actual physical network, and even then likely not on a LAN, as it's unlikely that Ethernet packets will go missing on a LAN.
TCP internal to a computer, and especially the IPC and INPROC transports, will all deliver data first time, every time. There is no retransmit mechanism needed.
If one of the transports being used by a socket does experience delays due to transmission errors, that will slow things down. ZMQ cannot consider a message "sent" until it has been propagated via all the transports used by the socket. The only external visibility of "sent" is that the outbound message queue has moved away from the high-water mark by 1.
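If stale data is the worry, one mitigation is to bound the outbound queue and fail fast when it fills. A minimal pyzmq sketch, with an illustrative endpoint and high-water mark (by default a PUSH socket blocks on send at the HWM; DONTWAIT turns that into an immediate error you can treat as back-pressure):

```python
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 1000)     # cap the outbound queue (messages)
push.connect("tcp://127.0.0.1:5555")  # hypothetical PULL peer

try:
    # DONTWAIT makes send fail fast instead of blocking when the queue
    # sits at the high-water mark (e.g. a slow or retransmitting peer).
    push.send(b"sensor-reading", flags=zmq.DONTWAIT)
except zmq.Again:
    pass  # back-pressure: drop the stale sample, log, or retry later
```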
It's possible that any one single message will arrive sooner over IPC than TCP, and possible that message 2 will arrive over IPC before message 1 has arrived via TCP. But if you're relying on message timing / relative order, you shouldn't be using ZMQ in the first place; it's Actor model, not CSP.
EDIT For Frank
The difference between Actor and CSP is that the former is asynchronous and the latter is synchronous. Thus, in the Actor model, the sender has zero information as to when the receiver actually gets a message. In CSP, sending / receiving is an execution rendezvous: the send completes only when the receive is complete.
This can be remarkably useful. If in your system it makes no sense for A to instruct C to do something before (in time, not just in A's code flow) instructing B, then you can do that with CSP (but not Actor model). That's because when A sends to B, B receives the message before A's send completes, freeing A to then send to C.
Unsurprisingly it's real time systems that benefit from CSP.
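As a rough emulation of that send-completes-on-receive behavior, using only Python's standard library (ZeroMQ, being Actor-model, does not offer this; the names here are illustrative):

```python
import threading, queue

b_inbox = queue.Queue()

def b_worker():
    msg = b_inbox.get()   # B receives the message
    print("B got:", msg)
    b_inbox.task_done()   # completes the rendezvous with A

threading.Thread(target=b_worker, daemon=True).start()

# A's CSP-style send: enqueue, then wait until B has actually taken it.
b_inbox.put("instruction for B")
b_inbox.join()            # returns only after B calls task_done()
# Only now does A message C, so B provably received first (in time).
print("A: B has it; now messaging C")
```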
So consider ZMQ's Actor model with a mix of TCP, IPC and INPROC transports. There's a good chance that messages sent via TCP will arrive a good deal later than messages sent through INPROC, even if they were sent first.

Why do we need SIP "100 Trying" response over TCP?

SIP over UDP: the "100 Trying" response is necessary for SIP over UDP to shut off Timer A, which the caller starts, and hence stop retransmission of the SIP message. This is really important because the other responses (provisional and final) to the initial INVITE might take a while: consider forking, UE-B being unreachable, fallback, etc.
SIP over TCP: Timer A is not started by the caller, so there is no retransmission of the message; TCP being reliable, no retransmission is required. Even then, why do most implementations send 100 Trying over TCP?
There are a few reasons why 100 Trying is still needed for SIP over TCP.
Having a TCP connection does not guarantee that the SIP application is working, or that it is a SIP-aware application at all. The 100 Trying gives you feedback that your request is being processed by a SIP application.
The lack of a 100 Trying can also be the right trigger not just for retransmissions but for re-attempting against a different server in the configuration. You may not want 32 seconds to elapse for every server in the configuration, even when the connection is TCP.
In deployment scenarios with elements like an SBC or load balancer, the TCP connection is established with them. The application behind them can be a different entity, and usually these edge elements pass on all messaging, or generate messaging, to indicate that the call is in progress.
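For reference, this is roughly what the response looks like on the wire; the header values below are illustrative, echoed from the INVITE as RFC 3261 requires:

```
SIP/2.0 100 Trying
Via: SIP/2.0/TCP client.example.com:5060;branch=z9hG4bK776asdhds
From: Alice <sip:alice@example.com>;tag=1928301774
To: Bob <sip:bob@example.com>
Call-ID: a84b4c76e66710
CSeq: 314159 INVITE
Content-Length: 0
```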
Probably because it makes the SIP stack implementation easier. Life is simpler if the SIP transaction layer is the same irrespective of the SIP transport being used. If the transaction layer has different rules for different transports, that's extra code for no real benefit; the bandwidth saved by not sending the 100 Trying response is negligible in the scheme of things.

Receiving TCP packets as messages instead of using gen_tcp:recv/2

I'm writing a distributed chat application in Erlang for my own learning/benefit. I have a client and a server which maintain a persistent TCP connection. The client initiates the connection using gen_tcp:connect/3. The server is actually distributed over several nodes.
The gen_tcp documentation says:
Packets can be sent to the returned socket Socket using send/2. Packets sent from the peer are delivered as messages:
{tcp, Socket, Data}
Because of this, my client is able to receive any data the server sends as a normal Erlang message. This is desirable for my application.
The problem is that I can't see any way to make the connection on the server act the same way. I would love it if my server could receive sent data as an Erlang message. This way, the server can send data (i.e. when another person in the chat room sends a message) while waiting for the client to send a message.
Is there any way to implement this behavior?
EDIT: I'm aware of prim_inet:async_accept/2, but I'd prefer a documented approach if possible.
Look at inet:setopts/2 with the option {active, once} or {active, true}, e.g. inet:setopts(Socket, [{active, once}]). In active mode, data arriving on the socket is delivered to the controlling process as {tcp, Socket, Data} messages, exactly as on your client side; {active, once} re-arms the socket one message at a time, which preserves flow control. The socket returned by gen_tcp:accept/1 accepts the same options.

Differences between TCP sockets and web sockets, one more time [duplicate]

This question already has answers here:
What is the fundamental difference between WebSockets and pure TCP?
Trying to understand the differences between TCP sockets and WebSockets as best I can, I've already found a lot of useful information in these questions:
fundamental difference between websockets and pure TCP
How to establish a TCP Socket connection from a web browser (client side)?
and so on...
In my investigations, I came across this sentence on Wikipedia:
Websocket differs from TCP in that it enables a stream of messages instead of a stream of bytes
I'm not totally sure about what it means exactly. What are your interpretations?
When you send bytes from a buffer with a normal TCP socket, the send function returns the number of bytes of the buffer that were sent. If it is a non-blocking socket or a non-blocking send then the number of bytes sent may be less than the size of the buffer. If it is a blocking socket or blocking send, then the number returned will match the size of the buffer but the call may block. With WebSockets, the data that is passed to the send method is always either sent as a whole "message" or not at all. Also, browser WebSocket implementations do not block on the send call.
But there are more important differences on the receiving side of things. When the receiver does a recv (or read) on a TCP socket, there is no guarantee that the number of bytes returned corresponds to a single send (or write) on the sender side. It might be the same, it may be less (or zero) and it might even be more (in which case bytes from multiple send/writes are received). With WebSockets, the recipient of a message is event-driven (you generally register a message handler routine), and the data in the event is always the entire message that the other side sent.
Note that you can do message-based communication using TCP sockets, but you need some extra layer/encapsulation that adds framing/message-boundary data to the messages so that the original messages can be re-assembled from the pieces. In fact, WebSockets are built on normal TCP sockets and use frame headers that contain the size of each frame and indicate which frames are part of a message. The WebSocket API re-assembles the TCP chunks of data into frames, which are assembled into messages, before invoking the message event handler once per message.
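A minimal sketch of such a framing layer in Python, using a 4-byte length prefix (WebSocket's real frame format is richer; see RFC 6455):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix each message with its length so the receiver can find
    # message boundaries inside the TCP byte stream.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv() may return fewer bytes than requested; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```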
WebSocket is basically an application-level protocol (with reference to the ISO/OSI network stack), message-oriented, which makes use of TCP as its transport layer.
The idea behind the WebSocket protocol is to reuse the established TCP connection between a Client and a Server. After the HTTP handshake, the Client and Server switch to speaking the WebSocket protocol by exchanging WebSocket envelopes over the same TCP connection (which is never released). HTTP handshaking is used to get through barriers (e.g. firewalls) between a Client and a Server offering some services (usually port 80 is accessible from anywhere, by anyone).
Behind the scenes, WebSocket reassembles the TCP segments into consistent envelopes/messages. The full-duplex channel is used by the Server to push updates to the Client asynchronously: the channel stays open, and the Client can use futures/callbacks/promises to handle each asynchronously received WebSocket message.
To put it simply, WebSocket is a high-level protocol (like HTTP itself) built on TCP (a reliable transport layer) that makes it possible to build effective real-time applications with JS clients (Comet and long-polling techniques were used to pull updates from the Server before WebSockets were implemented; see the Stack Overflow post: Differences between websockets and long polling for turn based game server).