TCP retry in PHP sockets

I have a socket server that listens for connections on port 5001. When a connection is accepted and data is received, I query my database to create a packet of data in a particular format and write it back to the client.
To make the data transmission more reliable I have to implement a TCP retry in PHP. How do I go about this? My current implementation uses a thread class that fires a thread to check whether an ACK has been received for that packet before a timeout; otherwise it retries 3 times until timeout. But I haven't had any success with this.
Is there a better way to implement this?

To make the data transmission more reliable I have to implement a TCP retry in PHP
No you don't. TCP is already reliable, and it already implements retry. And you don't have any way of knowing whether a TCP ACK has been received, so you can't implement what you described anyway. Unless you are talking about application-level ACKs, in which case you need to clarify your question.
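If application-level ACKs are what you mean, the usual shape is a bounded retry loop. A minimal sketch (in Go rather than PHP, purely for illustration; the "ACK\n" reply format, the 5-second deadline, and the 3-attempt limit are assumptions mirroring the question):

package main

import (
    "bufio"
    "fmt"
    "net"
    "time"
)

// sendWithAck writes one newline-terminated packet and waits for an
// application-level "ACK\n" reply, retrying up to 3 times on timeout.
func sendWithAck(conn net.Conn, r *bufio.Reader, packet string) error {
    for attempt := 1; attempt <= 3; attempt++ {
        if _, err := fmt.Fprintf(conn, "%s\n", packet); err != nil {
            return err
        }
        // Wait a bounded time for the peer's application-level ACK.
        conn.SetReadDeadline(time.Now().Add(5 * time.Second))
        reply, err := r.ReadString('\n')
        if err == nil && reply == "ACK\n" {
            return nil // the peer application confirmed receipt
        }
        // Timed out or got an unexpected reply: fall through and retry.
    }
    return fmt.Errorf("no ACK after 3 attempts")
}

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:5001") // port from the question
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    if err := sendWithAck(conn, bufio.NewReader(conn), "packet-1"); err != nil {
        fmt.Println(err)
    }
}

Note that this only tells you whether the peer application answered; it says nothing more about TCP's own delivery, which is already reliable.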

Related

How to find out that the client is reading from the TCP buffer in Go

I've been using Go for quite some time now on a project. In this project I have to implement a TCP server which responds to TCP clients. The server has to send a number of messages to a client.
The problem is that when the server writes a message to a client connection, it has to wait until the client has read that message from the buffer before sending another message (the server has to wait until the client calls the reader.ReadString('\n') method).
In my server code I wrote:
for {
    data := <-client.outgoing
    client.writer.WriteString(data + "\n") // buffer the message
    client.writer.Flush()                  // hand the bytes to the kernel's send buffer
}
but the server sends all the messages to the client without waiting for ReadString on the client side.
How do I make the server wait until the client has read a message before sending the next one?
I think that either the assignment is ambiguous or you're misinterpreting it and solving the XY problem.
The short answer is that you can never know whether the client has read a message just by looking at the TCP conversation. You have to implement this "protocol" in your application.
Here are a few problems:
From your application you don't really have access to what TCP is doing. You get a stream on which you can perform I/O.
The fact that a write to your stream "succeeds" only means that TCP has agreed to try to transport your stuff and has an independent copy. It doesn't say anything about whether the data has been received, and it doesn't even mean the data has been sent yet
You may find certain mechanisms to peer into TCP's inner workings (such as ioctls, SIOCINQ, SIOCOUTQ or various setsockopts): these won't help
Even if you find out what your TCP is doing, that at most tells you what the remote TCP is doing. So even if you have full control over your TCP and can see the acknowledgments from the peer, you still don't know what the remote application is doing. It's very possible the application hasn't read the data yet (it might not have requested the data, the TCP might be withholding it in a buffer for some reason, the scheduler might not have scheduled the remote process, etc.)
Going back to your question, the way to really know whether the remote application has received your message is to have the remote application tell you. This means you have to restructure your protocol, as sketched below, to:
Send stuff from the server
Wait for a message from the application telling you it received your stuff
Send more stuff (because you know from point 2 it's safe to do so)
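A minimal sketch of that restructured loop on the server side, assuming the client replies with a single "OK\n" line after consuming each message (the "OK" convention and the port are assumptions, not part of the question):

package main

import (
    "bufio"
    "net"
)

// serve implements the three steps above: send one message, wait for the
// client's explicit "OK\n" confirmation, then send the next one.
func serve(conn net.Conn, outgoing <-chan string) {
    reader := bufio.NewReader(conn)
    writer := bufio.NewWriter(conn)
    for data := range outgoing {
        writer.WriteString(data + "\n")
        writer.Flush() // step 1: hand the message to TCP

        ack, err := reader.ReadString('\n') // step 2: wait for the app-level ACK
        if err != nil || ack != "OK\n" {
            return // connection broken or protocol violated
        }
        // step 3: it is now safe to loop around and send more
    }
}

func main() {
    // Try it with: nc localhost 5000, then type OK after each received line.
    ln, err := net.Listen("tcp", ":5000")
    if err != nil {
        panic(err)
    }
    outgoing := make(chan string)
    go func() { outgoing <- "first message"; close(outgoing) }()
    conn, err := ln.Accept()
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    serve(conn, outgoing)
}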

Does the accept event happen after the three-way handshake?

I am writing an application on Linux (client and server) with socket programming. I came across a scenario where my server application never responds to the initial SYN packet from the other end.
I am still debugging the issue.
My server is listening on a port, but it never generates the accept event. Is the accept event generated after the TCP handshake is done, or is it generated when the initial SYN packet is received?
Some useful links would be helpful.
Best
Is the accept event generated after the TCP handshake is done
Yes.
or is it generated when the initial SYN packet is received?
No. The handshake has already happened. accept() just delivers you a socket from a queue of already accepted connections. While the queue is empty, it blocks.
This means that a client can connect even if the server has never called accept().
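You can see this with a short Go experiment (a sketch; the loopback address is arbitrary): Dial succeeds even though Accept is never called, because the kernel completes the handshake on its own.

package main

import (
    "fmt"
    "net"
)

func main() {
    // Listen, but deliberately never call Accept.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    defer ln.Close()

    // The kernel completes the three-way handshake on its own,
    // so Dial succeeds even though nobody ever accepts the connection.
    conn, err := net.Dial("tcp", ln.Addr().String())
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    fmt.Println("connected:", conn.RemoteAddr())
}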
accept() is not exactly an event, but a function that hands the application a connection for which the kernel has already completed the TCP handshake. The function is called beforehand (waiting for a client connection) and it returns after the handshake is over (the final ACK from the client has been received).
Some detailed explanations here:
http://lwn.net/Articles/508865/
http://www.ibm.com/developerworks/aix/library/au-tcpsystemcalls/
What kind of error do you get? Make sure your server is reachable from the client.
The TCP handshake is handled by the kernel; the server process is not involved. The kernel maintains two queues, one for incomplete connections (initial SYN received) and one for complete connections (3-way handshake complete).
The accept call retrieves the first entry from the completed queue. If the queue is empty and the socket is blocking, the call blocks until a connection is made. If the socket is nonblocking, the call fails with EAGAIN or EWOULDBLOCK.
refs:
https://books.google.com/books?id=ptSC4LpwGA0C&lpg=PP1&pg=PA104#v=onepage&q&f=false
https://man7.org/linux/man-pages/man2/accept.2.html

Differences between TCP sockets and web sockets, one more time [duplicate]

This question already has answers here:
What is the fundamental difference between WebSockets and pure TCP?
Trying to understand as best I can the differences between TCP sockets and WebSockets, I've already found a lot of useful information in these questions:
fundamental difference between websockets and pure TCP
How to establish a TCP Socket connection from a web browser (client side)?
and so on...
In my investigations, I came across this sentence on Wikipedia:
Websocket differs from TCP in that it enables a stream of messages instead of a stream of bytes
I'm not totally sure what it means exactly. What are your interpretations?
When you send bytes from a buffer with a normal TCP socket, the send function returns the number of bytes of the buffer that were sent. If it is a non-blocking socket or a non-blocking send then the number of bytes sent may be less than the size of the buffer. If it is a blocking socket or blocking send, then the number returned will match the size of the buffer but the call may block. With WebSockets, the data that is passed to the send method is always either sent as a whole "message" or not at all. Also, browser WebSocket implementations do not block on the send call.
But there are more important differences on the receiving side of things. When the receiver does a recv (or read) on a TCP socket, there is no guarantee that the number of bytes returned corresponds to a single send (or write) on the sender side. It might be the same, it may be less (or zero) and it might even be more (in which case bytes from multiple send/writes are received). With WebSockets, the recipient of a message is event-driven (you generally register a message handler routine), and the data in the event is always the entire message that the other side sent.
Note that you can do message-based communication using TCP sockets, but you need some extra layer/encapsulation that adds framing/message boundary data to the messages, so that the original messages can be re-assembled from the pieces. In fact, WebSocket is built on normal TCP sockets and uses frame headers that contain the size of each frame and indicate which frames are part of a message. The WebSocket API re-assembles the TCP chunks of data into frames, which are assembled into messages before invoking the message event handler once per message.
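As an illustration of such a framing layer, here is a sketch in Go using a 4-byte big-endian length prefix (the prefix width and byte order are arbitrary choices, not anything WebSocket mandates):

package main

import (
    "encoding/binary"
    "fmt"
    "io"
    "net"
)

// writeMessage prefixes the payload with its length so the receiver can
// recover the original message boundary from the byte stream.
func writeMessage(conn net.Conn, msg []byte) error {
    var length [4]byte
    binary.BigEndian.PutUint32(length[:], uint32(len(msg)))
    if _, err := conn.Write(length[:]); err != nil {
        return err
    }
    _, err := conn.Write(msg)
    return err
}

// readMessage reads back exactly one framed message, no matter how TCP
// chunked the bytes on the wire.
func readMessage(conn net.Conn) ([]byte, error) {
    var length [4]byte
    if _, err := io.ReadFull(conn, length[:]); err != nil {
        return nil, err
    }
    msg := make([]byte, binary.BigEndian.Uint32(length[:]))
    _, err := io.ReadFull(conn, msg)
    return msg, err
}

func main() {
    a, b := net.Pipe() // in-memory connection, just for the demo
    go writeMessage(a, []byte("one whole message"))
    msg, err := readMessage(b)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(msg))
}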
WebSocket is basically a message-oriented application protocol (with reference to the ISO/OSI network stack) which uses TCP as its transport layer.
The idea behind the WebSocket protocol is to reuse the established TCP connection between a Client and Server. After the HTTP handshake, the Client and Server start speaking the WebSocket protocol by exchanging WebSocket envelopes over the same TCP connection (which is never released). The HTTP handshake is used to overcome any barrier (e.g. firewalls) between a Client and a Server offering some services (usually port 80 is accessible from anywhere, by anyone).
Behind the scenes, WebSocket reassembles the TCP byte stream into consistent envelopes/messages. The full-duplex channel is used by the Server to push updates towards the Client asynchronously: the channel stays open, and the Client can use callbacks/promises to handle each WebSocket message as it arrives.
To put it simply, WebSocket is a high-level protocol (like HTTP itself) built on TCP (a reliable transport layer) that makes it possible to build effective real-time applications with JS Clients (before WebSockets, Comet and long-polling techniques were used to pull updates from the Server; see the Stack Overflow post: Differences between websockets and long polling for turn based game server).
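To make the message-per-event model concrete, here is a minimal echo-server sketch in Go using the third-party github.com/gorilla/websocket package (a library choice assumed here; Go's standard library does not ship a WebSocket server). Each ReadMessage call yields exactly one whole message:

package main

import (
    "net/http"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // performs the HTTP -> WebSocket handshake

func echo(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        return
    }
    defer conn.Close()
    for {
        // ReadMessage always returns one complete message, reassembled
        // from however many TCP segments/frames actually arrived.
        msgType, msg, err := conn.ReadMessage()
        if err != nil {
            return
        }
        conn.WriteMessage(msgType, msg) // echo it back, again as a whole message
    }
}

func main() {
    http.HandleFunc("/ws", echo)
    http.ListenAndServe(":8080", nil)
}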

Will a TCP RST cause a host to drop the receive buffer?

Upon receiving a TCP RST packet, will the host drop all the remaining data in its receive buffer that has already been ACKed but not yet read by the application process using the socket?
I'm wondering if it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources); e.g. whether that could cause the other party to lose any data I've already sent but that it has not yet read.
Should RSTs generally be avoided, since they indicate a complete, bidirectional failure of communication, or are they a relatively safe way to unilaterally force a connection teardown, as in the example above?
I've found some nice explanations of the topic; they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
An application-level close(2) on a socket does not produce an RST but a FIN packet sent to the other side, which results in a normal four-way connection teardown. RSTs are generated by the network stack in response to packets targeting a non-existent TCP connection.
On the other hand, if you close the socket but the other side still has some data to write, its next send(2) will result in EPIPE.
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
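For reference, a Go sketch of the shutdown-then-drain pattern the SO_LINGER article above describes (the pattern comes from that article; the code and the placeholder target address are mine): half-close the write side, drain the peer's remaining data, then close, so no RST destroys data in flight.

package main

import (
    "io"
    "net"
)

// gracefulClose signals "no more data from me" with a FIN (CloseWrite),
// drains whatever the peer still has to send, and only then closes, so
// no unread data is left behind to trigger an RST.
func gracefulClose(conn *net.TCPConn) error {
    if err := conn.CloseWrite(); err != nil { // send FIN, keep the read side open
        return err
    }
    if _, err := io.Copy(io.Discard, conn); err != nil { // read until the peer closes
        return err
    }
    return conn.Close()
}

func main() {
    conn, err := net.Dial("tcp", "example.com:80") // placeholder target
    if err != nil {
        panic(err)
    }
    tcp := conn.(*net.TCPConn)
    tcp.Write([]byte("HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n"))
    gracefulClose(tcp)
}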

In TCP/IP sockets, how would the server know that a client is busy and not receiving data ?

In TCP/IP sockets, how would the server know that a client is busy and not receiving data ?
My solution:
Use connect()? I am not sure.
Thanks
In TCP/IP sockets, how would the server know that a client is busy and
not receiving data
If a TCP is constantly pushing data that the peer doesn't acknowledge, eventually the send window will fill up. At that point the TCP is going to buffer the data to "send later". Eventually the buffer will fill up as well, and send(2) will block (something it doesn't usually do).
If send(2) starts blocking, it means the peer TCP isn't acknowledging data.
Obviously, even if the peer TCP accepts data it doesn't mean the peer application actually uses it. You could implement your own ACK mechanism on top of TCP, and it's not as unreasonable as it sounds. It would involve having the client send a "send me more" message once in a while.
A client will almost always receive your data, by which I mean the OS will accept the packets and queue them up for reading. If that queue fills up, the sender will block (with TCP, anyway). You can't actually know the activity of the client code. Pretty much your only option is to use timeouts.
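As a sketch of that timeout approach in Go (the 5-second value and the address are assumptions): set a write deadline so a send that would otherwise block forever, because the client stopped reading, fails instead.

package main

import (
    "fmt"
    "net"
    "time"
)

// sendOrGiveUp fails if the kernel cannot accept the bytes within the
// deadline, which is a strong hint that the peer has stopped reading and
// both its receive buffer and our send buffer are full.
func sendOrGiveUp(conn net.Conn, data []byte, timeout time.Duration) error {
    conn.SetWriteDeadline(time.Now().Add(timeout))
    defer conn.SetWriteDeadline(time.Time{}) // clear the deadline again
    _, err := conn.Write(data)
    return err // a net.Error with Timeout() == true means the peer is stalled
}

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:5001") // address is an assumption
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    if err := sendOrGiveUp(conn, []byte("ping\n"), 5*time.Second); err != nil {
        fmt.Println("client busy or gone:", err)
    }
}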