TCP server only taking one command. Need to flush recv buffer? - sockets

I am able to send the command "insert data" to the TCP server and it will do what it is supposed to. I would like the server to take multiple commands one after the other. At the moment, if I send "insert data", hit enter, and then send "bob" (which should not do anything), the server responds as if I had sent "insert data" again. If you think I should post the full source code, let me know in the comments. Screenshot: http://imgur.com/UNRFb5n
#define buf 2000

void *connection_handler(void *socket_desc)
{
    //Get the socket descriptor
    int sock = *(int*)socket_desc;
    ssize_t read_size;
    char *message , client_message[buf];
    //char *contents;
    //contents = "hello";
    //strcpy(mess,contents);

    //Send some messages to the client
    message = "Greetings! I am your connection handler\n";
    write(sock , message , strlen(message));

    message = "Now type something and i shall repeat what you type \n";
    write(sock , message , strlen(message));

    //Receive a message from client
    while( (read_size = recv(sock , client_message , buf , 0 )) > 0 )
    {
        //write(sock , client_message , strlen(client_message));
        char start_char[] = "start";
        char insert_demo_char[] = "insert_demo";
        char *inserting = "Inserting Data\n";
        char *complete = "Task Complete\n";

        if(strcmp(message, start_char))
        {
            printf("Starting...\n");
            //start();
            //printf("it works");
            //fflush( stdout );
        }

        if(strcmp(message, insert_demo_char))
        {
            write(sock , inserting , strlen(inserting));
            printf("Inserting data\n");
            insert_demo();
            write(sock, complete, strlen(complete));
            printf("Finished Inserting Data\n");
        }
    }

    if(read_size == 0)
    {
        puts("Client disconnected");
        fflush(stdout);
    }
    else if(read_size == -1)
    {
        perror("recv failed");
    }

    //Free the socket pointer
    free(socket_desc);
    return 0;
}

while( (read_size = recv(sock , client_message , buf , 0 )) > 0 )
{
[...]
if(strcmp(message, start_char))
After you receive data into client_message, you are checking the buffer named message instead. Since you didn't recv() into that buffer, of course it has not changed.
Also note that strcmp() returns 0 if the two strings are equal, and non-zero if the two strings are different; you may have that backwards in your if(strcmp()) tests (I'm not sure what behavior you intended).
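For illustration only, assuming the intent was to run each branch when the received text matches exactly (and ignoring, for the moment, the termination and framing issues discussed below), the checks would look more like this:

    if (strcmp(client_message, start_char) == 0)        /* 0 means the strings match */
    {
        printf("Starting...\n");
    }
    else if (strcmp(client_message, insert_demo_char) == 0)
    {
        write(sock, inserting, strlen(inserting));
        insert_demo();
        write(sock, complete, strlen(complete));
    }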

Since TCP is an octet (byte) streaming service and has no concept of application-level messages longer than one byte, sending 'insert data' from the client may result in the recv() call loading the buffer with any of:
'i'
'in'
'ins'
'inse'
'inser'
'insert'
'insert '
'insert d'
'insert da'
'insert dat'
'insert data'
In the cases of incomplete application-level messages, more calls to recv() will be required to receive the remaining bytes of the message.
Note that sending 'insert data\0' will only result in a null-terminated char array in the buffer if recv() happens to return all of that data in one call. That is why the 'read_size' returned by recv() is the ONLY way to determine how many bytes were loaded when transferring binary data: calling strXXX() functions on a buffer that may lack a terminator is UB. When transferring text, you can use 'read_size' to null-terminate the buffer yourself, preventing UB when the terminator is not in the buffer:
while( (read_size = recv(sock , client_message , buf-1 , 0 )) > 0 )
{
    client_message[read_size] = '\0';
    /* ... handle the text in client_message ... */
}
...will at least give you a 'client_message' that is guaranteed to be null-terminated, though it will not help with strcmp() failing to identify partial application-level messages in the buffer.
To transfer application messages larger than one byte, you need a protocol on top of TCP that can parse the messages out from the byte stream.
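As an illustrative sketch (none of this is in the question's code, and handle_command() is a hypothetical dispatch function), one simple protocol is newline-delimited commands: keep appending whatever recv() returns to a line buffer, and only act once a full line has arrived:

    /* Hypothetical sketch: newline-delimited framing on top of recv(). */
    char line[2048];          /* accumulates one command line */
    size_t line_len = 0;
    char chunk[512];
    ssize_t n;

    while ((n = recv(sock, chunk, sizeof(chunk), 0)) > 0)
    {
        for (ssize_t i = 0; i < n; i++)
        {
            if (chunk[i] == '\n')
            {
                line[line_len] = '\0';          /* complete command received  */
                handle_command(sock, line);     /* hypothetical dispatch call */
                line_len = 0;
            }
            else if (line_len < sizeof(line) - 1)
            {
                line[line_len++] = chunk[i];    /* still collecting the line  */
            }
        }
    }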

Related

C select is overwriting timeout value [duplicate]

This question already has answers here:
Is timeout changed after a call to select in c?
In a very simple C program that uses select() to check for new data to read on a socket, when I pass the optional timeout parameter it gets overwritten by select(). It looks like select() resets it to the amount of time it actually waited, so when data arrives sooner than the timeout, the struct is left with much smaller values, leading to smaller and smaller timeouts when select() is called in a loop, unless the timeout is reset.
I could not find any information on this behavior in the select() description. I am using Linux Ubuntu 18.04 in my testing. Does this mean I have to reset the timeout value every time before calling select() to keep the same timeout?
The code snippet is this:
void *main_udp_loop(void *arg)
{
    struct UDP_CTX *ctx = (UDP_CTX*)arg;
    fd_set readfds = {};
    struct sockaddr peer_addr = { 0 };
    int peer_addr_len = sizeof(peer_addr);

    while (1)
    {
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 850000; // wait 0.85 second.

        FD_ZERO(&readfds);
        FD_SET(ctx->udp_socketfd, &readfds);

        int activity = select( ctx->udp_socketfd + 1 , &readfds , NULL , NULL , &timeout);

        if ((activity < 0) && (errno != EINTR))
        {
            printf("Select error: Exiting main thread\n");
            return NULL;
        }
        if (timeout.tv_usec != 850000)
        {
            printf ("Timeout changed: %ld %ld\n", (long)timeout.tv_sec, (long)timeout.tv_usec);
        }
        if (activity == 0)
        {
            printf ("No activity from select: %ld \n", (long)time(0));
            continue;
        }
        ...
    }
This is documented behavior in the Linux select() man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
So, yes, you have to reset the timeout value every time you call select().
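As a minimal sketch of that pattern (udp_fd is a hypothetical, already-open descriptor standing in for ctx->udp_socketfd), both the fd_set and the struct timeval are rebuilt before every call, so the Linux behaviour of overwriting the timeout does no harm:

    #include <stdio.h>
    #include <sys/select.h>

    /* Sketch only: udp_fd is assumed to be an open UDP socket. */
    void poll_loop(int udp_fd)
    {
        fd_set readfds;
        struct timeval timeout;

        while (1)
        {
            /* Reinitialize BOTH the set and the timeout on every iteration:
             * on Linux, select() may overwrite timeout with the time left. */
            FD_ZERO(&readfds);
            FD_SET(udp_fd, &readfds);
            timeout.tv_sec = 0;
            timeout.tv_usec = 850000;      /* 0.85 s, as in the question */

            int activity = select(udp_fd + 1, &readfds, NULL, NULL, &timeout);
            if (activity == 0)
                printf("timed out\n");     /* timeout struct may now hold leftovers */
            else if (activity > 0)
                break;                     /* data is ready to read */
            else
                break;                     /* error (check errno)   */
        }
    }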

How to read all the data of unknown length from a StreamSocket in WinRT using DataReader

I have configured my socket to read partial data too like this:
#socket = new Windows.Networking.Sockets.StreamSocket()
hostName = new Windows.Networking.HostName(#ip)
#ensureConnection = #socket.connectAsync(hostName, #port.toString())
    .then () =>
        #writer = new DataWriter(#socket.outputStream)
        #reader = new DataReader(#socket.inputStream)
        #reader.inputStreamOptions = InputStreamOptions.partial
Then my function to read from the socket looks like this:
readLineAsync = (reader, buffer = "") ->
    while reader.unconsumedBufferLength
        byte = reader.readByte()
        if byte is 0
            return WinJS.Promise.as(buffer)
        buffer += String.fromCharCode(byte)
    reader.loadAsync(1024).then (readBytes) ->
        if readBytes is 0
            WinJS.Promise.as(buffer)
        else
            while reader.unconsumedBufferLength
                byte = reader.readByte()
                if byte is 0
                    return WinJS.Promise.as(buffer)
                buffer += String.fromCharCode(byte)
            readLineAsync(reader, buffer)
There are 2 problems with this function:
With very large responses, the stack builds up with recursive readLineAsync calls. How can I prevent that? Should I use the WinJS Scheduler API or something similar to queue the next call to readLineAsync?
Sometimes reader.loadAsync does not finish when no data is left on the socket. Sometimes it does, and readByte is 0 then. Why is that?
Why do I loop over the reader's unconsumedBufferLength in two places in that function? I initially had this code only in the loadAsync continuation handler, but since a response can contain a terminating \0 char, I need to check for unread data in the reader's buffer upon function entry too.
That's the pseudo loop to send/receive to/from the socket:
readResponseAsync = (reader) ->
    return readLineAsync(#reader).then (line) ->
        result = parseLine(line)
        if result.unknown then return readResponseAsync(reader)
        return result

#ensureConnection.then () =>
    sendCommand(...)
    readResponseAsync(#reader).then (response) ->
        # handle response
All the WinRT samples from MS deal with a known amount of data on the socket, so they don't really fit my scenario.

Checking sockets using select() - Winsock

I have been exploring the select() function to check whether some sockets are ready to read, and I must admit that I'm a bit confused. MSDN says: "The select function returns the total number of socket handles that are ready and contained in the fd_set structures".
Suppose I have 3 sockets and 2 of them are ready: select() returns 2, but this gives me no information about which 2 of these 3 sockets are ready to read, so how can I check that?
On Stack Overflow I came across this: "When select returns, it has updated the sets to show which file descriptors have become ready for read/write/exception."
So I put breakpoints in my program to track my fd_set structure. What I have realized (with just one socket in the fd_set) is the following. If the socket is ready to read, select():
returns 1
leaves fd_count (the number of sockets in the set) untouched
leaves fd_array (an array of the sockets that are in the set) untouched
If the client did not send any data addressed to that socket, select():
returns 0
decreases fd_count to 0
leaves fd_array untouched
If I call select() again and the client again sends no data, select():
returns -1 (I think this is because fd_count is now 0)
I guess I am missing some crucial rules about how select() works and what this function does, but I can't figure it out.
Here is a code snippet showing what I do to call select():
CServer::CServer(char *ipAddress, short int portNumber)
{
    // Creating socket
    ServerSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (ServerSocket == INVALID_SOCKET)
        std::cout << "I was not able to create ServerSocket\n";
    else
        std::cout << "ServerSocket created successfully\n";

    // Initialization of ServerSocket Address
    ServerSocketAddress.sin_family = AF_INET;
    ServerSocketAddress.sin_addr.S_un.S_addr = inet_addr(ipAddress);
    ServerSocketAddress.sin_port = htons(portNumber);

    // Binding ServerSocket to ServerSocket Address
    if (bind(ServerSocket, (SOCKADDR*)&ServerSocketAddress, sizeof(ServerSocketAddress)) == 0)
        std::cout << "Binding ServersSocket and ServerSocketAddress ended with success\n";
    else
        std::cout << "There were problems with binding ServerSocket and ServerSocket Address\n";

    // Initialization of the set of sockets
    ServerSet.fd_count = 1;
    ServerSet.fd_array[0] = ServerSocket;
}
In main:
CServer Server(IP_LOOPBACK_ADDRESS, 500);
tmp = select(0, &Server.ServerSet, NULL, NULL, &TimeOut);
Shouldn't the fd_array be filled with 0 values after the select() call, when there is no socket that can be read?
You're supposed to use the FD_SET macro and friends. You're not doing that.
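A minimal sketch of what that might look like with the question's names (assuming ServerSocket is accessible from main the way ServerSet is): rebuild the set with FD_ZERO()/FD_SET() before every select() call, and test membership afterwards with FD_ISSET() instead of reading fd_count and fd_array directly.

    // Sketch only: build the set fresh before each select() call.
    FD_ZERO(&Server.ServerSet);
    FD_SET(Server.ServerSocket, &Server.ServerSet);

    // The first argument is ignored by Winsock; TimeOut as in the question.
    int ready = select(0, &Server.ServerSet, NULL, NULL, &TimeOut);

    if (ready > 0 && FD_ISSET(Server.ServerSocket, &Server.ServerSet))
    {
        // ServerSocket has data to read, e.g. recvfrom() will not block.
    }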

Broadcast sendto failed

I am trying to broadcast data but the output is "udp send failed". I chose a random port, 33333. What's wrong with my code?
int main()
{
    struct sockaddr_in udpaddr = { sin_family : AF_INET };
    int xudpsock_fd, sock, len = 0, ret = 0, optVal = 0;
    char buffer[255];
    char szSocket[64];

    memset(buffer, 0x00, sizeof(buffer));
    memset(&udpaddr, 0, sizeof(udpaddr));
    udpaddr.sin_addr.s_addr = INADDR_BROADCAST;
    udpaddr.sin_port = htons(33333);

    xudpsock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);

    optVal = 1;
    ret = setsockopt(xudpsock_fd, SOL_SOCKET, SO_BROADCAST, (char*)&optVal, sizeof(optVal));

    strcpy(buffer, "this is a test msg");
    len = sizeof(buffer);

    ret = sendto(xudpsock_fd, buffer, len, 0, (struct sockaddr*)&udpaddr, sizeof(udpaddr));
    if (ret == -1)
        printf("udp send failed\n");
    else
        printf("udp send succeed\n");

    return (0);
}
One problem is that the address family you are trying to send to is zero (AF_UNSPEC). Although you initialize the family to AF_INET at the top of the function, you later zero it out with memset.
On the system I tested with, the send actually works anyway for some strange reason despite the invalid address family, but you should definitely try fixing that first.
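A minimal sketch of that fix, keeping everything else from the question as-is: zero the structure first, then fill in the fields.

    struct sockaddr_in udpaddr;

    memset(&udpaddr, 0, sizeof(udpaddr));          /* zero it first...        */
    udpaddr.sin_family = AF_INET;                  /* ...then set the family, */
    udpaddr.sin_addr.s_addr = INADDR_BROADCAST;    /* the destination address */
    udpaddr.sin_port = htons(33333);               /* and the port            */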
You probably had a problem with your default route (e.g., you didn't have one). sendto() needs to pick an interface to send the packet on, but the destination address was probably outside the Destination/Genmask of every defined interface (see the 'route' command-line tool).
A default route catches this type of packet and sends it through an interface despite that mismatch.
Setting the destination to 127.255.255.255 will usually cause the packet to be sent through the loopback interface (127.0.0.1), meaning it will be readable by applications that (in this case) run on the local machine.
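For a quick local test along those lines (sketch only), that is just a different destination address:

    /* Send the datagram to the loopback broadcast address for local testing. */
    udpaddr.sin_addr.s_addr = inet_addr("127.255.255.255");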

Why code shows "Error 354 (net::ERR_CONTENT_LENGTH_MISMATCH): The server unexpectedly closed the connection."

I am building my HTTP web server in Java.
If the client requests a file and that file exists on the server, the server sends that file to the client. I wrote this code, and it works fine.
The part of the code that implements this functionality:
File targ = [CONTAINS ONE FILE]
PrintStream ps;
InputStream is = new FileInputStream(targ.getAbsolutePath());
while ((n = is.read(buf)) > 0) {
    System.out.println(n);
    ps.write(buf, 0, n);
}
But then, to optimize my code, I replaced it with the code below:
InputStream is = null;
BufferedReader reader = null;
String output = null;

is = new FileInputStream(targ.getAbsolutePath());
reader = new BufferedReader(new InputStreamReader(is));

while( (output = reader.readLine()) != null) {
    System.out.println("new line");
    //System.out.println(output);
    ps.print(output);
}
But it sometimes produces the error "Error 354 (net::ERR_CONTENT_LENGTH_MISMATCH): The server unexpectedly closed the connection." I don't understand why it shows this error. It is very weird, because the server returns a 200 status code, which means the file is there.
Help me please.
Edit no. 1
char[] buffer = new char[1024*16];
int k = reader.read(buffer);
System.out.println("size : " + k);
do {
    System.out.println("\tsize is : " + k);
    //System.out.println(output);
    ps.println(buffer);
} while( (k = reader.read(buffer)) != -1 );
This prints the whole file, but for bigger files it shows unreadable characters.
It shows the output below (snapshot of the client browser):
You do output = reader.readLine() to get the data, which omits the newline characters. Then you ps.print(output), so the newline characters are not sent to the client.
Say you read this
Hello\r\n
World\r\n
Then you send this:
Content-length: 14
HelloWorld
And then close the connection, confusing the browser as it still was waiting for the other 4 bytes.
I guess you'll have to use ps.println(output).
You would have seen this if you were monitoring the network traffic, which can prove quite useful when writing or debugging a server that is supposed to communicate using the network.
Anyway, this will cause trouble if the newlines of the file and of the system don't match (\n vs \r\n). Say you have this file:
Hello\r\n
World\r\n
Its length is 14 bytes. However, when your system writes a newline as \n, your code with println() will print this:
Hello\n
World\n
Which is 12 bytes, not 14. You had better just send exactly what you read.