Reading the socket buffer - sockets

I am attempting to write an FTP client and I need to print the server's response to my commands. One of these commands is STAT. As I understand it, the server's response sits in the socket buffer, which I can read with the read() call. The problem is that I only need the response to STAT, so I know it will end with END OF STATUS. This is the code I wrote to read the response:
in = read(connFd, &timebuffer, sizeof(timebuffer));
while (in > 0) {
    printf("%s", timebuffer);
    memset(&timebuffer, 0, sizeof timebuffer);
    in = read(connFd, &timebuffer, sizeof(timebuffer));
}
memset(&timebuffer, 0, sizeof timebuffer);
The problem I am getting is that once read() has consumed everything in the buffer, the while loop does not terminate and runs forever; my program just sits there. I assume this is because read() is waiting for more data, so I was wondering if there is a way to tell read() to stop once the end of the buffer is reached. I thought this would happen automagically, since read() would return something less than 1, but if it blocks while waiting then I understand what the problem is. So how would I fix it? Is there a way to set a timeout of 0 so it only reads data that is already there? Also, I know there are "flags" that I set to 0, but I can't find much info on them. Would the only way be to check for the "END OF STATUS" string in the buffer, e.g. with strstr()? I appreciate any help.

read is a blocking call (unless you've set the socket to be non-blocking), so it will not return until at least some data has arrived or the socket is closed; note that it may return fewer bytes than you asked for.
If the socket is set to be non-blocking, then read returns -1 with errno set to EAGAIN/EWOULDBLOCK when no data is available, and you may hit that even when you haven't reached the end of your response, because your program will certainly be faster than the network. A return of 0 means the peer closed the connection.
As an additional note: you can't use strstr() unless you concatenate all your reads. You could get half of the terminating message in one read and the remainder in the next read.
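Since the terminator can straddle two reads, one approach (a sketch; the helper name and buffer size are illustrative, not part of the question's code) is to accumulate everything into one buffer and scan it with strstr() after every read:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Accumulate reads until the terminator string appears in the buffer.
 * Returns total bytes read, 0 on EOF, or -1 on error. */
ssize_t read_until_marker(int fd, char *buf, size_t cap, const char *marker)
{
    size_t total = 0;
    while (total < cap - 1) {
        ssize_t n = read(fd, buf + total, cap - 1 - total);
        if (n <= 0)                  /* error, or connection closed early */
            return n;
        total += (size_t)n;
        buf[total] = '\0';           /* keep the buffer a valid C string */
        if (strstr(buf, marker) != NULL)
            return (ssize_t)total;   /* full response is now in buf */
    }
    return (ssize_t)total;           /* buffer filled without seeing marker */
}
```

On a blocking socket this never spins: each read() sleeps until more data arrives, and the loop stops as soon as END OF STATUS has been seen.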

Related

Is it OK to shutdown socket after write all data to it?

I'm writing a simple HTTP server.
I want to shut down the socket after the server has sent all the data.
I considered comparing the return value of write() on the socket with the actual content length, but I have read that the return value only means that the data was moved to the socket's send buffer. (I'm not sure, and I don't know how to check it.)
If so, can I shut down the socket right after checking that the byte counts match? What if the data sent needs to be retransmitted at the TCP level after the server sends the FIN flag?
The OS does not discard data you have written when you call shutdown(SHUT_WR). If the other end already shut down its end (you can tell because you received 0 bytes) then you should be able to close the socket, and the OS will keep it open until it has finished sending everything.
The FIN is treated like part of the data. It has to be retransmitted if the other end doesn't receive it, and it doesn't get processed until everything before it has been received. This is called "graceful shutdown" or "graceful close". This is unlike RST, which signals that the connection should be aborted immediately.
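A minimal sketch of that graceful close on the sending side (the helper name is illustrative): shut down the write half, drain the socket until the peer closes its end, then close the descriptor:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Graceful close: send FIN after the queued data, wait for the peer to
 * finish, then close. The kernel keeps flushing anything still buffered. */
int graceful_close(int fd)
{
    char tmp[256];
    if (shutdown(fd, SHUT_WR) < 0)       /* our FIN goes out after the data */
        return -1;
    while (read(fd, tmp, sizeof tmp) > 0)
        ;                                /* discard input until peer's FIN */
    return close(fd);
}
```

The drain loop matters: it keeps the connection around until the read of 0 bytes confirms the other side has also shut down, which is exactly the "graceful shutdown" sequence described above.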

OpenSSL Nonblocking Socket Accept And Connect Failed

Here is my question:
Is it bad to set the socket to nonblocking before I call accept or connect? Or should I use blocking accept and connect, and then change the socket to nonblocking?
I'm new to OpenSSL and not very experienced with network programming. My problem is that I'm trying to use OpenSSL on top of nonblocking sockets to add security. When I call SSL_accept on the server side and SSL_connect on the client side, I check the error code using
SSL_get_error(m_ssl, n);
char error[65535];
ERR_error_string_n(ERR_get_error(), error, 65535);
the return code from SSL_get_error indicates SSL_ERROR_WANT_READ, while ERR_error_string_n prints out "error:00000000:lib(0):func(0):reason(0)", which I think means no error. SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Then I use a loop to retry those functions, but this just leads to an infinite loop :(
I believe I have initialized SSL properly, here is the code
//CRYPTO_malloc_init();
SSL_library_init();
const SSL_METHOD *method;
// load & register all cryptos, etc.
OpenSSL_add_all_algorithms();
// load all error messages
SSL_load_error_strings();
if (server) {
    // create new server-method instance
    method = SSLv23_server_method();
}
else {
    // create new client-method instance
    method = SSLv23_client_method();
}
// create new context from method
m_ctx = SSL_CTX_new(method);
if (m_ctx == NULL) {
    throwError(-1);
}
If there is any part I haven't mentioned but you think it could be the problem, please let me know.
SSL_ERROR_WANT_READ means I need to retry both SSL_accept and SSL_connect.
Yes, but this is not the full story.
You should retry the call only after the socket becomes readable, i.e. you need to use select or poll or a similar function to wait until the socket is readable. The same applies to SSL_ERROR_WANT_WRITE, except there you have to wait for the socket to become writable.
If you just retry without waiting it will probably succeed eventually, but only after hundreds of failed calls. Waiting on select does not guarantee success on the very next call, but it means only a few calls of SSL_connect/SSL_accept are needed until the handshake completes, and it will not busy-loop and eat CPU in the meantime.

Matlab sockets wait for response

I'm trying to run the following client and server socket example code in matlab:
http://www.mathworks.com/help/instrument/using-tcpip-server-sockets.html
This is my code.
Server:
t=tcpip('0.0.0.0', 9994, 'NetworkRole', 'server');
fopen(t);
data=fread(t, t.BytesAvailable, 'double');
plot(data);
Client:
data=sin(1:64);
t=tcpip('localhost', 9994, 'NetworkRole', 'client');
fopen(t);
fwrite(t, data, 'double');
This is what happens: I run the server code-> The program waits for the connection from the client-> I run the client code ->In the server console I get:
Error using icinterface/fread (line 163)
SIZE must be greater than 0.
Error in socketTentativaMatlab (line 3)
data=fread(t, t.BytesAvailable, 'double');
What am I doing wrong? It looks like the server doesn't wait for the client to send anything before trying to read the data, so there's no data to read. (It does wait for the client connection, though.)
Edit1:
Ok, I'm sending chars now, so we know for sure that t.BytesAvailable = number of elements.
I have been able to successfully receive synchronously in the following way (this is server code, client code is the same but I send chars now and pause 1 second after establishing the connection with the server):
t=tcpip('0.0.0.0', 30000, 'NetworkRole', 'server');
fopen(t);
data=strcat(fread(t, 1, 'uint8')');
if get(t,'BytesAvailable') > 1
data=strcat(data,fread(t, t.BytesAvailable, 'uint8')');
end
data
This is because I suspected that BytesAvailable is the number of bytes left to read after attempting to read at least once... this doesn't seem very logical, but it is apparently what happens. Since I have to read at least once to know how many bytes the message has... I choose to read only 1 byte the first time. I then read what's left, if there is anything left...
I can make this work between matlab processes, but I can't do it between C++ and matlab. The C++ client successfully connects to the matlab server, and can send the data without problems or errors. However, on the matlab server side, I can't read it.
Something seems very wrong with all this matlab tcpip implementation!
Edit2:
If I properly close all the sockets in both client and server (basically don't let the program exit with open sockets), the above code seems to work consistently. I went to the console and typed "netstat" to see all the connections... It turns out that since I was leaving sockets open, some connections were in the FIN_WAIT_2 state, which apparently rendered the ports of those connections unusable. Eventually the connection times out for good, but that takes a minute or more, so it's really best practice to just make sure the sockets are always properly closed.
I still don't understand the logic behind t.BytesAvailable... it doesn't seem to make much sense the way it is. If I loop and wait for it to become greater than 0, it eventually happens, but this is not the way things are supposed to work with synchronous sockets. My code lets one do things synchronously, even though I don't understand why t.BytesAvailable isn't properly set the first time.
Final server code:
t=tcpip('0.0.0.0', 30000, 'NetworkRole', 'server');
fopen(t);
data=strcat(fread(t, 1, 'uint8')');
if get(t,'BytesAvailable') > 1
data=strcat(data,fread(t, t.BytesAvailable, 'uint8')');
end
fclose(t);
Final client code:
Your typical socket client, implemented in any language, but you will have to make sure that between successive calls of the send() method/function (or between calling connect() and send()), at least 100 ms elapse (lower numbers seem to be risky).
You are right, the server doesn't appear to be waiting for the client, even though the default mode of communication is synchronous. You can implement the waiting yourself, for example by inserting
while t.BytesAvailable == 0
pause(1)
end
before the read.
However, I've found that there are more problems – it's weird that the code from the MathWorks site is so bad – namely, t.BytesAvailable gives a number of bytes, while fread expects a number of values, and since one double value needs 8 bytes it has to say
data=fread(t, floor(t.BytesAvailable / 8), 'double');
Moreover, if the client writes the data immediately after opening the connection, I've found that the server simply overlooks them. I was able to fix this by inserting a pause(1) in the client code, like this
data=sin(1:64);
t=tcpip('localhost', 9994, 'NetworkRole', 'client');
fopen(t);
pause(1)
fwrite(t, data, 'double');
My impression is that Matlab's implementation of TCP/IP server client communication is quite fragile and needs a lot of workarounds...

GCDAsyncSocket write timeout does not work

I am trying to set a timeout on write operations when using GCDAsyncSocket. The code is pretty simple and is the following.
[iAsyncSocket writeData:bytesToSend withTimeout:3.0 tag:0];
Then I disable the Internet connection on my Mac and wait for write timeout to occur, but nothing happens. I don't get a disconnection with a GCDAsyncSocketWriteTimeoutError error as I should.
I have also validated that my server stops, as expected, receiving the messages after I turn off the Internet connection.
I have looked inside the source code and I have found out that the writeTimer, that is responsible for firing a write timeout event, is always cancelled (function endCurrentWrite is called). Tracing back to where the timer is cancelled, I ended up at the following line of code.
ssize_t result = write(socketFD, buffer, (size_t)bytesToWrite);
The write system call always returns the total number of bytes I am sending, as if the socket managed to send the data even though there is no Internet connection. Is this logical?
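It is in fact expected: a successful write() only means the kernel copied the bytes into the socket's send buffer, not that they reached the server, so the timer gets cancelled before any network-level failure is visible. A small sketch with a local socket pair (a stand-in for the TCP connection; the function name is illustrative) makes this visible:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: show that a successful write() only means the kernel buffered the
 * bytes, not that the peer has received them. Returns the byte count that
 * write() reports even though the peer has read nothing. */
ssize_t buffered_write_demo(void)
{
    int s[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, s) != 0)
        return -1;
    const char msg[] = "hello";
    /* s[1] never reads; write() still reports complete success because the
     * data merely entered the send buffer. */
    ssize_t n = write(s[0], msg, sizeof msg);
    close(s[0]);
    close(s[1]);
    return n;
}
```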
Has anyone come up with the same problem or seen similar behaviour? Or has anyone managed to set a write timeout for a GCDAsyncSocket?
Thanks a lot.

TCP socket question

I started learning the TCP protocol from the internet and have been doing some experiments. I read the following in an article at http://www.diffen.com/difference/TCP_vs_UDP:
"TCP is more reliable since it manages message acknowledgment and retransmissions in case of lost parts. Thus there is absolutely no missing data."
Then I do my experiment, I write a block of code with TCP socket:
while (!EOF(file))
{
    data = read_from(file, 5KB); // read 5KB from file
    write(data, socket);         // write data to socket to send
}
I think this is fine because "TCP is reliable" and it retransmits lost parts... But it's not fine at all. A small file is OK, but when it comes to about 2 MB, sometimes it works and sometimes it doesn't...
Now, I try another one:
while (!EOF(file))
{
    wait_for_ACK();              // or sleep 5 seconds
    data = read_from(file, 5KB); // read 5KB from file
    write(data, socket);         // write data to socket to send
}
It's good now...
All I can think of is that the 1st one fails because of:
1. Buffer overflow on the sender, because the sending rate (which is controlled by TCP) is slower than the rate at which the program writes.
2. Maybe the sending rate is greater than the writing rate, but some packets are lost (after some retransmissions it still fails and then TCP gives up...).
Any ideas?
Thanks.
TCP will ensure that you don't lose data but you should check how many bytes actually got accepted for transmission... the typical loop is
while (size > 0)
{
    ssize_t sz = send(socket, bufptr, size, 0);
    if (sz == -1) {
        /* whoops, error: inspect errno and bail out or retry */
        break;
    }
    size -= sz;
    bufptr += sz;
}
When the send call accepts some data from your program, it becomes the OS's job to get it to the destination (including retransmission), but the send buffer may be smaller than the amount you need to send, and that's why the resulting sz (number of bytes accepted for transmission) may be less than size.
It's also important to consider that sending is asynchronous: after the send function returns, the data has not necessarily arrived at the destination; it has only been handed to the TCP transport system to be delivered. If you want to know when it has been received, you'll have to use another mechanism (e.g. a reply message from your counterpart).
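The loop above can be packaged as a small helper (a sketch; the name send_all is illustrative, and a blocking socket is assumed):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send the whole buffer, looping over short writes and retrying on EINTR.
 * Returns len on success, -1 on error (errno set by send). */
ssize_t send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = send(fd, p, left, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;             /* real error */
        }
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```

Note that even when send_all returns len, the bytes have only been accepted by the kernel; confirming delivery still requires an application-level acknowledgment.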
You have to check the return value of write(socket) to make sure it wrote as much as you asked.
Loop until you've sent everything or you've hit a timeout you computed.
Do not use indefinite timeouts on socket read/write. You're asking for trouble if you do, especially on Windows.