Matlab sockets wait for response - matlab

I'm trying to run the following client and server socket example code in matlab:
http://www.mathworks.com/help/instrument/using-tcpip-server-sockets.html
This is my code.
Server:
t=tcpip('0.0.0.0', 9994, 'NetworkRole', 'server');
fopen(t);
data=fread(t, t.BytesAvailable, 'double');
plot(data);
Client:
data=sin(1:64);
t=tcpip('localhost', 9994, 'NetworkRole', 'client');
fopen(t);
fwrite(t, data, 'double');
This is what happens: I run the server code -> the program waits for the connection from the client -> I run the client code -> in the server console I get:
Error using icinterface/fread (line 163)
SIZE must be greater than 0.
Error in socketTentativaMatlab (line 3)
data=fread(t, t.BytesAvailable, 'double');
What am I doing wrong? It looks like the server doesn't wait for the client to send anything before trying to read the data, so there's no data to read (it does wait for the client connection, though).
Edit1:
Ok, I'm sending chars now, so we know for sure that t.BytesAvailable equals the number of elements.
I have been able to successfully receive synchronously in the following way (this is the server code; the client code is the same as before, except that I now send chars and pause 1 second after establishing the connection with the server):
t=tcpip('0.0.0.0', 30000, 'NetworkRole', 'server');
fopen(t);
data=strcat(fread(t, 1, 'uint8')');
if get(t,'BytesAvailable') > 1
data=strcat(data,fread(t, t.BytesAvailable, 'uint8')');
end
data
This is because I suspected that BytesAvailable is the number of bytes left to read after at least one read has been attempted... this doesn't seem very logical, but it is apparently what happens. Since I have to read at least once to know how many bytes the message has, I chose to read only 1 byte the first time. I then read whatever is left, if there is anything left...
I can make this work between matlab processes, but I can't do it between C++ and matlab. The C++ client successfully connects to the matlab server, and can send the data without problems or errors. However, on the matlab server side, I can't read it.
Something seems very wrong with all this matlab tcpip implementation!
Edit2:
If I properly close all the sockets in both client and server (basically, don't let the program exit with open sockets), the above code seems to work consistently. I ran "netstat" in a console to see all the connections... it turns out that because I was leaving sockets open, some connections were stuck in the FIN_WAIT_2 state, which apparently rendered the ports of those connections unusable. The connection does eventually time out, but that takes a minute or more, so it's really best practice to make sure the sockets are always properly closed.
I still don't understand, though, what the logic behind t.BytesAvailable is... it doesn't seem to make much sense the way it is. If I loop and wait for it to become greater than 0, that eventually happens, but this is not how things are supposed to work with synchronous sockets. My code lets one do things synchronously, even though I don't understand why t.BytesAvailable isn't properly set the first time.
Final server code:
t=tcpip('0.0.0.0', 30000, 'NetworkRole', 'server');
fopen(t);
data=strcat(fread(t, 1, 'uint8'));
if get(t,'BytesAvailable') > 1
data=strcat(data,fread(t, t.BytesAvailable, 'uint8')');
end
fclose(t);
Final client code:
Your typical socket client, implemented in any language; you will just have to make sure that at least 100 ms elapse between successive calls to the send() method/function (or between calling connect() and send()). Lower values seem to be risky.

You are right, the server doesn't appear to be waiting for the client, even though the default mode of communication is synchronous. You can implement the waiting yourself, for example by inserting
while t.BytesAvailable == 0
pause(1)
end
before the read.
However, I've found that there are more problems (it's weird that the code from the MathWorks site is so bad): namely, t.BytesAvailable gives a number of bytes, while fread expects a number of values, and since one double value needs 8 bytes it has to say
data=fread(t, floor(t.BytesAvailable / 8), 'double');
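The same bytes-to-values arithmetic applies in any language. As an illustration (a Python sketch, not part of the MATLAB answer), 64 doubles occupy 64 * 8 bytes, and the reader must divide the byte count by 8 to get a value count:

```python
import struct

# 64 doubles serialized as raw bytes, 8 bytes per value
# (big-endian byte order is an assumption for this sketch)
values = [float(i) for i in range(64)]
payload = struct.pack('!64d', *values)

assert len(payload) == 64 * 8        # a BytesAvailable-style byte count: 512
count = len(payload) // 8            # number of double *values* to read
decoded = struct.unpack('!%dd' % count, payload)
assert list(decoded) == values
```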
Moreover, if the client writes the data immediately after opening the connection, I've found that the server simply overlooks them. I was able to fix this by inserting a pause(1) in the client code, like this
data=sin(1:64);
t=tcpip('localhost', 9994, 'NetworkRole', 'client');
fopen(t);
pause(1)
fwrite(t, data, 'double');
My impression is that Matlab's implementation of TCP/IP server client communication is quite fragile and needs a lot of workarounds...
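The root problem (the data may not all have arrived at the moment you look) is not specific to MATLAB. A minimal Python sketch of the robust pattern, looping until the expected number of bytes has arrived rather than trusting a single availability check, with a hypothetical helper name:

```python
import socket
import struct

def recv_exactly(sock, n):
    """Read exactly n bytes, looping because recv() may return fewer."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed before %d bytes arrived' % n)
        buf += chunk
    return buf

# Demo over a loopback socket pair: 64 doubles, 8 bytes each.
a, b = socket.socketpair()
doubles = [0.5 * i for i in range(64)]
a.sendall(struct.pack('!64d', *doubles))
data = recv_exactly(b, 64 * 8)
assert list(struct.unpack('!64d', data)) == doubles
a.close(); b.close()
```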

Related

Is it OK to shutdown socket after write all data to it?

I'm writing simple http server.
I want to shutdown socket after server send all data.
I considered comparing the return value of write() on the socket with the actual content length, but I have read that the return value only means the data was moved to the socket's send buffer. (I'm not sure about this, and I don't know how to check it.)
If so, can I shutdown the socket as soon as I've checked that the byte counts match? What if the data already sent needs to be retransmitted at the TCP level after the server sends the FIN flag?
The OS does not discard data you have written when you call shutdown(SHUT_WR). If the other end already shut down its end (you can tell because you received 0 bytes) then you should be able to close the socket, and the OS will keep it open until it has finished sending everything.
The FIN is treated like part of the data. It has to be retransmitted if the other end doesn't receive it, and it doesn't get processed until everything before it has been received. This is called "graceful shutdown" or "graceful close". This is unlike RST, which signals that the connection should be aborted immediately.
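A small Python sketch of graceful shutdown over a loopback pair: the data queued before shutdown(SHUT_WR) is still delivered, and the receiver sees end-of-stream (a zero-byte read) only after everything before the FIN has arrived:

```python
import socket

a, b = socket.socketpair()
a.sendall(b'response body')
a.shutdown(socket.SHUT_WR)    # queues a FIN *after* the data already written

# The receiver still gets everything sent before the FIN...
received = b''
while True:
    chunk = b.recv(4096)
    if not chunk:             # ...and recv() returns b'' once the FIN is processed
        break
    received += chunk
assert received == b'response body'
a.close(); b.close()
```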

Bidirectional communication of Unix sockets

I'm trying to create a server that sets up a Unix socket and listens for clients which send/receive data. I've made a small repository to recreate the problem.
The server runs and it can receive data from the clients that connect, but I can't get the server response to be read from the client without an error on the server.
I have commented out the offending code on the client and server. Uncomment both to recreate the problem.
When the code to respond to the client is uncommented, I get this error on the server:
thread '' panicked at 'called Result::unwrap() on an Err value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/main.rs:77:42
MRE Link
Your code calls set_read_timeout to set the timeout on the socket. Its documentation states that on Unix it results in a WouldBlock error in case of timeout, which is precisely what happens to you.
As to why your client times out, the likely reason is that the server calls stream.read_to_string(&mut response), which reads the stream until end-of-file. On the other hand, your client calls write_all() followed by flush(), and (after uncommenting the offending code) attempts to read the response. But the attempt to read the response means that the stream is not closed, so the server will wait for EOF, and you have a deadlock on your hands. Note that none of this is specific to Rust; you would have the exact same issue in C++ or Python.
To fix the issue, you need to use a protocol in your communication. A very simple protocol could consist of first sending the message size (in a fixed format, perhaps 4 bytes in length) and only then the actual message. The code that reads from the stream would do the same: first read the message size and then the message itself. Even better than inventing your own protocol would be to use an existing one, e.g. to exchange messages using serde.
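A minimal length-prefixed protocol of the kind described above can be sketched in Python (4-byte big-endian length header; the helper names are illustrative, not from the question):

```python
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes; recv() may deliver the message in pieces."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-message')
        buf += chunk
    return buf

def send_msg(sock, payload):
    # 4-byte big-endian length, then the message body
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)

client, server = socket.socketpair()
send_msg(client, b'hello')
send_msg(client, b'world')
first = recv_msg(server)
second = recv_msg(server)
assert first == b'hello' and second == b'world'
client.close(); server.close()
```

Because each message carries its own size, the reader never has to wait for EOF, which removes the deadlock described above.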

How to implement Socket.PollAsync in C#

Is it possible to implement the equivalent of Socket.Poll in async/await paradigm (or BeginXXX/EndXXX async pattern)?
A method which would act like NetworkStream.ReadAsync or Socket.BeginReceive but:
leave the data in the socket buffer
complete after the specified interval of time if no data arrived (leaving the socket in connected state so that the polling operation can be retried)
I need to implement IMAP IDLE, where the client connects to the mail server and then goes into a waiting state in which it receives data from the server. If the server does not send anything within 10 minutes, the code sends a ping to the server (without reconnecting; the connection is never closed) and starts waiting for data again.
In my tests, leaving the data in the buffer seems to be possible if I tell Socket.BeginReceive method to read no more than 0 bytes, e.g.:
sock.BeginReceive(b, 0, 0, SocketFlags.None, null, null)
However, I'm not sure it will work in all cases; maybe I'm missing something. For instance, if the remote server closes the connection, it may send a zero-byte packet, and I'm not sure whether Socket.BeginReceive will act identically to Socket.Poll in this case.
And the main problem is how to stop socket.BeginReceive without closing the socket.
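For comparison, other socket APIs expose the same "wait for readability with a timeout, without consuming data" operation directly. A Python sketch using select() and MSG_PEEK (an analogue of Socket.Poll, not C# code):

```python
import select
import socket

a, b = socket.socketpair()

# No data yet: select() times out and the socket stays connected.
readable, _, _ = select.select([b], [], [], 0.1)
assert readable == []

# After the peer writes, select() reports readability without consuming anything.
a.sendall(b'ping')
readable, _, _ = select.select([b], [], [], 5.0)
assert readable == [b]

# MSG_PEEK inspects the receive buffer without draining it...
peeked = b.recv(4, socket.MSG_PEEK)
# ...so a normal recv() still sees the same bytes.
data = b.recv(4)
assert peeked == data == b'ping'
a.close(); b.close()
```

A closed connection shows up here as the socket becoming readable with a subsequent zero-byte recv(), which is the conventional way to distinguish "data waiting" from "peer closed".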

erlang sockets and gen_server - no data received on server side

In a nutshell:
I am trying to make a socket server to which clients connect and send/receive messages (based on the sockserv code in Learn you some erlang tutorial http://learnyousomeerlang.com/buckets-of-sockets)
Server side components:
supervisor - unique, started at the very beginning, spawns processes with gen_server behaviour
gen_server behaviour processes - each one deals with a connection.
Client side:
client which connects to the socket and sends a few bytes of data and then disconnects.
Code details
My code is pretty much the same as in the presented tutorial. The supervisor is identical. The gen_server component is simplified so that it has only one handle_info case which is supposed to catch everything and just print it.
Problem
The connection succeeds, but when the client sends data, the server behaves as though no data is received (I am expecting that handle_info is called when that happens).
handle_info does get called but only when the client disconnects and this event is reported with a message.
My attempts
I have played around with different clients written in Erlang and Java, and I have tried setting the active/passive state of the socket. The author of the tutorial sets {active, once} after sending a message; I ended up just setting {active, true} after the AcceptSocket is created, as follows (the gen_server process is initialized with a state that contains the original ListenSocket created by the supervisor):
handle_cast(accept, S = #state{socket=ListenSocket}) ->
{ok, AcceptSocket} = gen_tcp:accept(ListenSocket),
io:format("accepted connection ~n", []),
sockserv_sup:start_socket(), % a new acceptor is born, praise the lord
inet:setopts(AcceptSocket, [{active, true}]),
send(AcceptSocket, "Yellow", []),
{noreply, S#state{socket=AcceptSocket, next=name}}.
send(Socket, Str, Args) ->
ok = gen_tcp:send(Socket, io_lib:format(Str++"~n", Args)),
ok.
handle_info(E, S) ->
io:format("mothereffing unexpected: ~p~n", [E]),
{noreply, S}.
It has absolutely no effect. handle_info only gets called when the connection is lost because the client disconnects; whenever the client sends data, nothing happens.
What could be the problem? I have spend quite some time on this, I really have no idea.
Many thanks.
Have you tried setting the other options in http://www.erlang.org/doc/man/inet.html#setopts-2
inet:setopts(AcceptSocket, [{active, true}])
for example:
{packet, line} to read in a line at a time
and
binary to read in data as a binary.
I also was working through a similar exercise based on that tutorial recently and my options used were:
inet:setopts(LSocket, [{active,true}, {packet, line}, binary, {reuseaddr, true}]),
To conclude: watch out for the options. I hadn't been paying attention to their implications. I tried a more narrowed-down setup and worked it out: my problem was the {packet, line} option, which implies that \n is treated as the message delimiter.
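The effect of {packet, line} (newline-delimited framing, where no message is delivered until a \n arrives) is easy to reproduce in other languages. A Python sketch of line-at-a-time reads over a socket:

```python
import socket

a, b = socket.socketpair()
a.sendall(b'first line\nsecond line\n')

# makefile() gives a buffered reader, so readline() is the analogue
# of Erlang's {packet, line}: each read yields one \n-terminated message.
reader = b.makefile('rb')
line1 = reader.readline()
line2 = reader.readline()
assert line1 == b'first line\n'
assert line2 == b'second line\n'
reader.close(); a.close(); b.close()
```

If the sender never transmits the delimiter, readline() blocks, which mirrors the "no data received on server side" symptom described in the question.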

Reading the socket buffer

I am attempting to write an FTP Client and I need to print out the server response to my commands. One of these commands is STAT. The server returns the response and as I understand it the response is in the socket buffer which I can read using the read() command. The problem is I only need the response for STAT so I know it will end with END OF STATUS. This is the code I wrote to read the response:
in = read(connFd, &timebuffer, sizeof(timebuffer));
while (in > 0) {
    printf("%s", timebuffer);
    memset(&timebuffer, 0, sizeof timebuffer);
    in = read(connFd, &timebuffer, sizeof(timebuffer));
}
memset(&timebuffer, 0, sizeof timebuffer);
The problem I am getting is that once read() has gone through the buffer and finished reading, the while loop does not terminate and continues infinitely; my program just sits there. I assume this is because read() is waiting for more data, so I was wondering whether there is a way to tell read() to stop once the end of the buffer is reached. I thought this would happen automatically, since read() would return something less than 1, but if it is waiting then I understand what the problem is. So how would I fix it? Is there a way to set up a timeout so it only reads data that is already there? Also, I know there are "flags" that I set to 0, but I can't find much info on them. I appreciate any help. Would the only way be to check for the "END OF STATUS" string in the buffer? Would I use strstr(buffer)?
read is a blocking call (unless you've set the socket to be non-blocking): it waits until at least some data is available, returns up to the number of bytes you've requested, and returns 0 only once the other side has closed the connection.
If the socket is set to be non-blocking, then read returns -1 with errno set to EAGAIN/EWOULDBLOCK when no data is waiting, and you may hit that even when you haven't reached the end of your response, because your program will certainly be faster than the network.
As an additional note: you can't use strstr() unless you concatenate all your reads. You could get half of the terminating message in one read and the remaining half in the next.
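Concatenating reads and scanning the accumulated buffer for the terminator can be sketched as follows (Python rather than C, with hypothetical names; deliberately small recv sizes force the terminator to straddle reads):

```python
import socket

TERMINATOR = b'END OF STATUS'

def read_until(sock, terminator):
    # Accumulate chunks and scan the whole buffer each time:
    # the terminator may be split across two recv() calls.
    buf = b''
    while terminator not in buf:
        chunk = sock.recv(8)    # tiny reads to simulate network fragmentation
        if not chunk:
            raise ConnectionError('peer closed before terminator arrived')
        buf += chunk
    return buf

a, b = socket.socketpair()
a.sendall(b'211-Status follows\nEND OF STATUS\n')
reply = read_until(b, TERMINATOR)
assert TERMINATOR in reply
a.close(); b.close()
```

Searching the accumulated buffer (not just the latest chunk) is what makes this safe, for the same reason the answer gives about strstr() on partial reads.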