So I've got an issue here: I'm using an openfl.net.Socket to connect to my server and receive messages from it.
The problem is that the server sends two messages almost at the same time and my socket appears to read only one. I tried putting a breakpoint on the second message and releasing it just after the first one stops (like a sleep of 0.5 seconds), and then my client receives both messages, but when both are sent at almost the same time I get only one... Tips?
socket.addEventListener(ProgressEvent.SOCKET_DATA, onResponse);

function onResponse(e:ProgressEvent):Void
{
    trace("response");
    if (socket.bytesAvailable > 0)
    {
        var size:Int = socket.readInt();
        var domainId:Int = socket.readInt();
        var messageId:Int = socket.readInt();
        var count:Int = socket.readInt();
        var socketData:String = socket.readUTFBytes(socket.bytesAvailable);
        trace("RECEIVE: " + socketData);
        var message:Message = Message.JSONToMessage(socketData);
        Domain.processMessage(message);
    }
}
I hope I made myself clear.
So, on the receiving side, a single recv can return all of the data that was sent from the other end.
One thing you should know about TCP is that it does not maintain message boundaries. It in fact does not know what a "message" is; it's a byte stream protocol. Three sends can result in three recvs at the other end, or even in a single recv that carries the full exchange of data.
Applications using TCP should construct "messages" out of what TCP hands over to them. TCP just ensures that the data is delivered in the order it was sent, and tries its best to deliver the packets to the receiver. It is up to the application protocol to define what should be done with the data.
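To make that concrete, here is a minimal sketch of the idea (in Python purely for illustration, since the question's code is Haxe): it buffers whatever recv returns and only hands out complete messages, assuming a hypothetical framing where each message starts with a 4-byte big-endian length.

import socket
import struct

def recv_messages(sock):
    # Buffer raw bytes and yield complete length-prefixed messages.
    # Assumes each message starts with a 4-byte big-endian length that
    # covers the rest of the message (a made-up framing for illustration).
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:              # peer closed the connection
            return
        buf += data
        # One recv may deliver several messages, or only part of one.
        while len(buf) >= 4:
            (size,) = struct.unpack("!I", buf[:4])
            if len(buf) < 4 + size:
                break             # wait for the rest of this message
            payload, buf = buf[4:4 + size], buf[4 + size:]
            yield payload

The same idea applies to the OpenFL code above: read the size field, then consume exactly size bytes instead of socket.bytesAvailable, and keep looping while the buffer still holds a complete message.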
Related
My client side cannot recv the two messages if the sender sends too quickly.
sender.py
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('', int(port)))
sock.listen(1)
conn, addr = sock.accept()
#conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# sends message 1 and message 2
conn.send(pickle.dumps(message1))
#time.sleep(1)
conn.send(pickle.dumps(message2))
Where both message1 and message2 are pickled objects.
client.py
import pickle
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip,int(port)))
message1 = pickle.loads(sock.recv(1024))
print(message1)
message2 = pickle.loads(sock.recv(1024))
When I run this code as it is, I am able to print out message1, but I am unable to receive message2 from the sender. The socket blocks at message2.
Also, if I uncomment time.sleep(1) in my sender-side code, I am able to receive both messages just fine. Not sure what the problem is. I tried to flush my TCP buffer every time by setting TCP_NODELAY, but that didn't work. Not sure what is actually happening. How would I ensure that I receive the two messages?
Your code assumes that each send on the server side will match a recv on the client side. But TCP is a byte stream, not a message-based protocol. This means it is likely that your first recv already contains data from the second send, which might simply be discarded by pickle.loads as junk after the pickled data. The second recv will then only receive the remaining data (or just block, since all the data was already received), so pickle.loads will fail.
The common way to deal with this situation is to construct a message protocol on top of the TCP byte stream. This can, for example, be done by prefixing each message with a fixed-length size (for example, as a 4-byte unsigned int using struct.pack('!I', ...)) when sending; when reading, first read the fixed-length size value and then read the message of the given size.
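A rough sketch of that approach for the pickled messages above might look like this; the helper names send_msg/recv_msg/recv_exact and the '!I' format are illustrative choices, not part of any library:

import pickle
import struct

def send_msg(conn, obj):
    # Prefix each pickled message with its length as a 4-byte unsigned int.
    payload = pickle.dumps(obj)
    conn.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(conn, n):
    # recv() may return fewer bytes than asked for, so loop until n bytes arrive.
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed while reading")
        data += chunk
    return data

def recv_msg(conn):
    (size,) = struct.unpack("!I", recv_exact(conn, 4))
    return pickle.loads(recv_exact(conn, size))

With this, the sender calls send_msg(conn, message1) and send_msg(conn, message2), and the client calls recv_msg(sock) twice; no sleep or TCP_NODELAY tricks are needed.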
As I couldn't find any way to peek for data (read data without consuming the buffer), as asked at How to peek StreamSocket for data in UWP apps, I'm now trying to make my own "peek", but still no luck.
I don't see how I can read data from a StreamSocket in a manner that lets me use timeouts and leaves the connection usable if the timeout elapses.
In the end, the problem is as follows. In my, let's say, IMAP client, I get a response from the server, and if this response is negative, I need to wait a bit to see if the server immediately sends yet another response (sometimes the server can do this, with extra details on the error, or even a zero packet to close the connection). If the server didn't send another response, I'm fine; I just leave the method and return to the caller. The caller can then send more data to the stream, receive more responses, etc.
So, after sending a request and getting the initial response, I need in some cases to read the socket once again with a very small timeout interval, and if no data arrives, just do nothing.
You can use a CancellationTokenSource to generate a timeout and stop an async operation.
The DataReader consumes the data from the input stream of the StreamSocket. Its LoadAsync() method returns when there is at least one byte of data. Here, we add a cancellation source that cancels the asynchronous task after 1 second, stopping DataReader.LoadAsync() if no data has been consumed.
var stream = new StreamSocket();
var inputStream = stream.InputStream;
var reader = new DataReader(inputStream);
reader.InputStreamOptions = InputStreamOptions.Partial;

while (true)
{
    try
    {
        var timeoutSource = new CancellationTokenSource(TimeSpan.FromSeconds(1));
        var data = await reader.LoadAsync(1).AsTask(timeoutSource.Token);
        while (reader.UnconsumedBufferLength > 0)
        {
            var read = reader.ReadUInt32();
        }
    }
    catch (TaskCanceledException)
    {
        // timeout
    }
}
Do not forget that disposing the DataReader will close the stream and the connection.
I apologize beforehand if some of these questions are obvious to expert network programmers. I have researched and read about network programming, and it is still not clear to me how to do this.
Assume that I want to write a TCP proxy (in Go) sitting between some TCP client and some TCP server, something like this: client ↔ proxy ↔ server.
First, assume that these connections are semi-permanent (they will be closed after a long, long while) and that I need the data to arrive in order.
The idea that I want to implement is the following: whenever I get a request from the client, I want to forward that request to the backend server and wait (and do nothing) until the backend server responds to me (the proxy), and then forward that response to the client (assume that both TCP connections are maintained in the common case).
There is one main problem that I am not sure how to solve. When I forward the request from the proxy to the server and get the response, how do I know when the server has sent me all the information I need, if I do not know beforehand the format of the data being sent from the server to the proxy (i.e. I don't know if the response from the server uses a type-length-value scheme, nor do I know if `\r\n` indicates the end of the message from the server)? I was told that I should assume I have got all the data from the server connection whenever my read from the TCP connection returns zero bytes, or fewer bytes than I expected. However, this does not seem correct to me. The reason it might not be correct in general is the following:
Assume that the server, for some reason, is only writing to its socket one byte at a time, but the total length of the response destined for the "real" client is much, much longer. Isn't it then possible that when the proxy reads the TCP socket connected to the server, it only reads one byte, and if it loops fast enough (doing another read before more data arrives), it reads zero and incorrectly concludes that it got the whole message the client was meant to receive?
One way to fix this might be to wait after each read from the socket, so that the proxy doesn't loop faster than it receives bytes. The reason I am worried is this: assume there is a network partition and I can't talk to the server anymore, but it is not disconnected from me long enough to time out the TCP connection. Isn't it then possible that I try to read from the TCP socket to the server again (faster than I get data), read zero, incorrectly conclude that this is all the data, and then send it back to the client? (Remember, the promise I want to keep is that I only send whole messages to the client when I write to the client connection. Thus, it is not acceptable for the proxy to read the connection again later, after it has already written to the client, and send the missing chunk at a later time, maybe during the response to a different request.)
The code that I have written is in go-playground.
The analogy that I like to use to explain why I think this method doesn't work is the following:
Say we have a cup, and the proxy drinks half the cup every time it does a read from the server, but the server only puts in one teaspoon at a time. Thus, if the proxy drinks faster than it gets teaspoons, it might reach zero too soon and conclude that its socket is empty and that it's OK to move on! That is wrong if we want to guarantee we are sending full messages every time. Either this analogy is wrong and some "magic" from TCP makes it work, or the algorithm that reads until the socket is empty is just plain wrong.
A question that deals with a similar problem here suggests reading until EOF. However, I am unsure why that would be correct. Does reading EOF mean that I got the intended message? Is an EOF sent each time someone writes a chunk of bytes to a TCP socket (i.e. I am worried that if the server writes one byte at a time, it sends one EOF per byte)? Or is EOF part of the "magic" of how a TCP connection really works? Does sending an EOF close the connection? If it does, it's not a method I want to use. Also, I have no control over what the server might be doing (i.e. I do not know how often it wants to write to the socket to send data to the proxy; however, it's reasonable to assume it writes to the socket in some standard/normal way). I am just not convinced that reading till EOF from the server socket is correct. Why would it be? When can I even read to EOF? Are EOFs part of the data, or are they in the TCP header?
Also, would the idea I wrote about, putting a wait just epsilon below the timeout, work in the worst case or only on average? I also realized that if the Wait() call is longer than the timeout, then if you return to the TCP connection and it doesn't have anything, it's safe to move on. However, if it doesn't have anything and we don't know what happened to the server, then we would time out anyway. So it's safe to close the connection (because the timeout would have done that anyway). Thus, I think that if the Wait call is at least as long as the timeout, this procedure does work! What do people think?
I am also interested in an answer that can justify why this algorithm might work in some cases. For example, I was thinking: even if the server only writes one byte at a time, if the deployment scenario is a tight data centre, then on average, because delays are really small and the wait call is almost certainly enough, wouldn't this algorithm be fine?
Also, are there any risks of the code I wrote getting into a "deadlock"?
package main

import (
	"fmt"
	"net"
)

type Proxy struct {
	ServerConnection *net.TCPConn
	ClientConnection *net.TCPConn
}

func (p *Proxy) Proxy() {
	fmt.Println("Running proxy...")
	for {
		request := p.receiveRequestClient()
		p.sendClientRequestToServer(request)
		response := p.receiveResponseFromServer() // <-- worried about this one.
		p.sendServerResponseToClient(response)
	}
}

func (p *Proxy) receiveRequestClient() (request []byte) {
	// Assume this function is a black box and that it works.
	// Maybe we know that the messages from the client always end in \r\n or
	// they are length-prefixed.
	return
}

func (p *Proxy) sendClientRequestToServer(request []byte) {
	bytesSent := 0
	bytesToSend := len(request)
	for bytesSent < bytesToSend {
		n, _ := p.ServerConnection.Write(request[bytesSent:])
		bytesSent += n
	}
}

// Intended behaviour: waits until ALL of the response from the backend server is obtained.
// What it actually does: assumes that if it reads zero, the server has not yet
// written to the proxy, and therefore waits. However, once the first byte has been read,
// it keeps reading until it has extracted all the data from the server and the socket is
// "empty" (signaled by reading zero in the second loop).
func (p *Proxy) receiveResponseFromServer() (response []byte) {
	buf := make([]byte, 4096)
	bytesRead, _ := p.ServerConnection.Read(buf)
	for bytesRead == 0 {
		bytesRead, _ = p.ServerConnection.Read(buf)
	}
	response = append(response, buf[:bytesRead]...)
	for bytesRead != 0 {
		n, _ := p.ServerConnection.Read(buf)
		response = append(response, buf[:n]...)
		bytesRead = n
		// Wait(n) could solve it here?
	}
	return
}

func (p *Proxy) sendServerResponseToClient(response []byte) {
	bytesSent := 0
	bytesToSend := len(response)
	for bytesSent < bytesToSend {
		n, _ := p.ClientConnection.Write(response[bytesSent:])
		bytesSent += n
	}
}

func main() {
	proxy := &Proxy{}
	proxy.Proxy()
}
Unless you're working with a specific higher-level protocol, there is no "message" to read from the client to relay to the server. TCP is a stream protocol, and all you can do is shuttle bytes back and forth.
The good news is that this is amazingly easy in Go, and the core part of this proxy will be:
go io.Copy(server, client)
io.Copy(client, server)
This is obviously missing error handling, and doesn't shut down cleanly, but clearly shows how the core data transfer is handled.
I am working on client-server programming. I am referring to this link, and my server is running successfully.
I need to send data continuously to the server.
I don't want to connect() before sending each packet, so the first time I just create a socket and send the first packet; for the rest of the data I just use the write() function to write to the socket.
But my problem is that while sending data continuously, if the server is not there or my Ethernet is disabled, write() still successfully writes data to the socket.
Is there any method by which I can create the socket only once and send data continuously while detecting server failure?
The main reason for doing it like this is that on the server side I am using a GPRS modem, and each time the connect() function is called for a packet, the modem hangs.
For creating the socket I am using the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}

server = gethostbyname((const char*)ip_address);
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}

bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);

if (connect(Gprs_sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel network stack. If I remember correctly, the default is three retransmission attempts (somebody correct me if I'm wrong) with exponential back-off timeouts. That quickly adds up to dozens of seconds, if not minutes.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack in, say, 3 seconds, the client knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
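As a rough illustration of that idea (in Python, using select), the client below waits up to 3 seconds for an acknowledgment byte after each send and treats silence as a dead connection; the 3-second deadline and the 1-byte ack are assumptions, not part of any standard.

import select
import socket

ACK_TIMEOUT = 3.0  # assumed deadline in seconds

def send_with_ack(sock, data):
    # Send the payload, then wait a bounded time for the application-level ack.
    sock.sendall(data)
    ready, _, _ = select.select([sock], [], [], ACK_TIMEOUT)
    if not ready:
        return False              # no ack in time: treat the link as dead
    ack = sock.recv(1)
    return ack != b""             # empty read means the peer closed the socket

If send_with_ack returns False, the caller closes the socket and reconnects instead of continuing to write into the void.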
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here; the behaviour you are experiencing is desirable, as it means you can continue transferring data after a network outage without any extra logic in your application. If your application needs to detect outages longer than a specified amount of time, you need to add heartbeating to your protocol. This is the standard way of solving the problem. It can also let you detect the situation where the network is all right and the receiver is alive, but it has deadlocked (due to a software bug).
Heartbeating can be as simple as Nikolai mentioned: send a small packet every X seconds; if the server doesn't see a packet for N*X seconds, the connection is dropped.
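For illustration, a minimal heartbeat sender might look like the sketch below (in Python; the interval and the 1-byte marker are made-up values):

import socket
import threading
import time

HEARTBEAT_INTERVAL = 5            # X seconds, an assumed value
HEARTBEAT = b"\x00"               # assumed 1-byte marker the peer recognizes

def start_heartbeat(sock):
    # Send a tiny packet every HEARTBEAT_INTERVAL seconds in a background thread.
    def beat():
        while True:
            try:
                sock.sendall(HEARTBEAT)
            except OSError:
                return            # connection already gone; stop beating
            time.sleep(HEARTBEAT_INTERVAL)
    t = threading.Thread(target=beat, daemon=True)
    t.start()
    return t

On the receiving side, the server would call sock.settimeout(N * HEARTBEAT_INTERVAL) and drop the connection when recv() raises socket.timeout.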
I started learning the TCP protocol from the internet and have been doing some experiments. I read this in an article at http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I did my experiment: I wrote a block of code with a TCP socket:
while( ! EOF(file) )
{
    data = read_from(file, 5KB); // read 5KB from file
    write(data, socket);         // write data to socket to send
}
I thought it would be fine because "TCP is reliable" and it "retransmits lost parts"... But it's not fine at all. A small file is OK, but when it comes to about 2MB, sometimes it works and sometimes it doesn't...
Now, I try another one:
while( ! EOF(file) )
{
    wait_for_ACK(); // or sleep 5 seconds
    data = read_from(file, 5KB); // read 5KB from file
    write(data, socket);         // write data to socket to send
}
It's good now...
All I can think of is that the first one fails because of:
1. Buffer overflow on the sender, because the program writes faster than TCP can send (the sending rate is controlled by TCP).
2. Maybe the sending rate is greater than the writing rate, but some packets are lost (and after some retransmissions it still fails, so TCP gives up...).
Any ideas?
Thanks.
TCP will ensure that you don't lose data, but you should check how many bytes actually got accepted for transmission... the typical loop is:
while (size > 0)
{
    ssize_t sz = send(socket, bufptr, size, 0);
    if (sz == -1)
        break;          /* whoops, error: inspect errno and bail out or retry */
    size -= sz;
    bufptr += sz;
}
When the send call accepts some data from your program, it's the job of the OS to get that data to the destination (including retransmission), but the buffer for sending may be smaller than the size you need to send, and that's why the resulting sz (the number of bytes accepted for transmission) may be less than size.
It's also important to consider that sending is asynchronous, i.e. after the send function returns, the data is not already at the destination; it has only been handed to the TCP transport system to be delivered. If you want to know when it has been received, you'll have to use other mechanisms (e.g. a reply message from your counterpart).
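To make both points concrete, here is a hedged Python sketch: it loops until every byte has been accepted by send(), then waits for a short application-level reply before assuming the receiver actually got the data (the b"OK" reply is an assumed protocol detail, not something TCP provides).

import socket

def send_and_confirm(sock, data):
    # send() may accept fewer bytes than offered, so keep looping until done.
    sent = 0
    while sent < len(data):
        sent += sock.send(data[sent:])
    # send() returning only means the OS has buffered the bytes; to know the
    # receiver actually got them, wait for its own reply (assumed to be b"OK").
    return sock.recv(2) == b"OK"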
You have to check the return value of write(socket) to make sure it wrote what you asked.
Loop until you've sent everything or until a timeout you've chosen expires.
Do not use indefinite timeouts on socket read/write. You're asking for trouble if you do, especially on Windows.
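In Python terms, that just means giving the socket a finite timeout and handling the resulting exception, for example (the host, port, and 10-second limit below are placeholders):

import socket

sock = socket.create_connection(("example.com", 7), timeout=10)
sock.settimeout(10)               # applies to every subsequent send/recv
try:
    sock.sendall(b"ping")
    reply = sock.recv(1024)
except socket.timeout:
    # The operation did not finish in time; retry, reconnect, or give up
    # instead of blocking forever.
    sock.close()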