WSAEventSelect() makes a socket descriptor no longer a socket - winsock

I am writing a cross-platform socket handling library (which also handles serial and a whole bunch of other protocols in a protocol-agnostic way; I am not re-inventing the wheel).
I need to emulate the Linux poll() function. The code I started with used select() and worked fine, but there was no way to interrupt it from another thread, so I was forced to switch to event objects. My initial attempt called:
WSACreateEvent() to create one event object per socket.
WSAEventSelect() to associate each socket with its event object.
WaitForMultipleObjectsEx() to wait on all sockets plus my interrupt event object.
select() to work out which events actually occurred on each socket.
accept()/send()/recv() to process the sockets (later and elsewhere).
This failed. accept() was claiming that the file descriptor was not a socket. If I commented out the call to WSAEventSelect(), essentially reverting to my earlier code, it all works fine (except that I cannot interrupt).
I then realised that I did something wrong (according to the Microsoft dictatorship). Instead of using select() to work out what events have happened on each socket, I should be using WSAEnumNetworkEvents(). So I rewrote my code to do it the proper way, remembering to call WSAEventSelect() afterwards to disassociate the event object from the file descriptor so that (fingers crossed) accept() would now work.
Now WSAEnumNetworkEvents() is returning an error and WSAGetLastError() tells me that the error is WSAENOTSOCK.
This IS a socket. I am doing things the way MSDN tells me I should (allowing for the general poor quality of the documentation). It appears however that WSAEventSelect() is causing the file descriptor to be marked as a file rather than a socket.
I hate Microsoft so much right now.
Here is a cut down version of my code:
bool do_poll(std::vector<struct pollfd> &poll_data, int timeout)
{
    ...
    for (const auto &fd_data : poll_data) {
        event_mask = 0;
        if (0 != (fd_data.events & POLLIN)) {
            // select() will mark a socket as readable when it closes (read size = 0) or (for
            // a listen socket) when there is an incoming connection. This is the *nix paradigm.
            // WSAEventSelect() has separate events.
            event_mask |= FD_READ;
            event_mask |= FD_ACCEPT;
            event_mask |= FD_CLOSE;
        }
        if (0 != (fd_data.events & POLLOUT)) {
            event_mask |= FD_WRITE;
        }
        event_obj = WSACreateEvent();
        events.push_back(event_obj);
        if (WSA_INVALID_EVENT != event_obj) {
            (void)WSAEventSelect((SOCKET)fd_data.fd, event_obj, event_mask);
        }
    }
    lock.lock();
    if (WSA_INVALID_EVENT == interrupt_obj) {
        interrupt_obj = WSACreateEvent();
    }
    if (WSA_INVALID_EVENT != interrupt_obj) {
        events.push_back(interrupt_obj);
    }
    lock.unlock();
    ...
    (void)WaitForMultipleObjectsEx((DWORD)events.size(), &(events[0]), FALSE, dw_timeout, TRUE);
    for (i = 0u; i < poll_data.size(); i++) {
        if (WSA_INVALID_EVENT == events[i]) {
            poll_data[i].revents |= POLLERR;
        } else {
            if (0 != WSAEnumNetworkEvents((SOCKET)(poll_data[i].fd), events[i], &revents)) {
                poll_data[i].revents |= POLLERR;
            } else {
                if (0u != (revents.lNetworkEvents & (FD_READ | FD_ACCEPT | FD_CLOSE))) {
                    poll_data[i].revents |= POLLIN;
                }
                if (0u != (revents.lNetworkEvents & FD_WRITE)) {
                    poll_data[i].revents |= POLLOUT;
                }
            }
            (void)WSAEventSelect((SOCKET)(poll_data[i].fd), NULL, 0);
            (void)WSACloseEvent(events[i]); // close this socket's event (not event_obj, which is only the last one created)
        }
    }
    ...
}
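Side note on teardown, since the symptoms suggest leftover socket state: MSDN documents that WSAEventSelect() automatically sets the socket to non-blocking mode, and that cancelling the association with WSAEventSelect(s, NULL, 0) does not by itself restore blocking mode. A minimal sketch of a full teardown for one socket, using the names from the loop above:

(void)WSAEventSelect((SOCKET)(poll_data[i].fd), NULL, 0);         // cancel the event association
u_long blocking = 0;                                              // 0 = back to blocking mode
(void)ioctlsocket((SOCKET)(poll_data[i].fd), FIONBIO, &blocking); // WSAEventSelect() left the socket non-blocking
(void)WSACloseEvent(events[i]);                                   // release the event object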

Related

C socket server: What's the right way to read all of an unknown length XMLHttpRequest?

I have a simple XMLHttpRequest handler written in C. It reads and processes requests coming from a JavaScript XMLHttpRequest send() running in a browser.
The parent process accepts incoming connections and forks a child process for each incoming connection to read and process the data.
It works perfectly for most requests, but fails in some cases (apparently related to the network infrastructure between the client and the server) if the request is over about 2 kB in length. I'm assuming that the request is being broken into multiple packets somewhere between the browser and my socket server.
I can't change the request format, but I can see the request being sent and verify the content. The data is a 'GET' with an encoded URI that contains a 'type' field. If the type is 'file', the request could be as long as 3 kB, otherwise it's a couple of hundred bytes at most. 'File' requests are rare - the user is providing configuration data to be written to a file on the server. All other requests work fine, and any 'file' requests shorter than about 2 kB work fine.
What's the preferred technique for ensuring that I have all of the data in this situation?
Here's the portion of the parent that accepts the connection and forks the child (non-blocking version):
for (hit = 1;; hit++) {
    length = sizeof(cli_addr);
    if ((socketfd = accept4(listensd, (struct sockaddr *) &cli_addr, &length, SOCK_NONBLOCK)) < 0) {
    //if ((socketfd = accept(listensd, (struct sockaddr *) &cli_addr, &length)) < 0) {
        exit(3);
    }
    if ((pid = fork()) < 0) {
        exit(3);
    } else {
        if (pid == 0) { /* child */
            //(void) close(listensd);
            childProcess(socketfd, hit); /* never returns. Close listensd when done */
        } else { /* parent */
            (void) close(socketfd);
        }
    }
}
Here's the portion of the child process that performs the initial recv(). In the case of long 'file' requests, the child's first socket recv() gets about 1700 bytes of payload followed by the browser-supplied connection data.
ret = recv(socketfd, recv_data, BUFSIZE - 1, 0); // read request (leave room for the terminator)
if (ret == 0 || ret == -1) { // read failure, stop now
    sprintf(sbuff, "failed to read request: %d", ret);
    logger(&shm, FATAL, sbuff, socketfd);
}
recv_data[ret] = 0;
len = ret;
If the type is 'file', there could be more data. The child process never gets the rest of the data. If the socket is blocking, a second read attempt simply hangs. If the socket is non-blocking (as in the snippet below) all subsequent reads return -1 with error 'Resource temporarily unavailable' until it times out:
// It's a file. Could be broken into multiple blocks. Try a second read.
sleep(1);
ret = recv(socketfd, &recv_data[len], BUFSIZE - len - 1, 0); // read request
while (ret != 0) {
    if (ret > 0) {
        recv_data[len + ret] = 0;
        len += ret;
    } else {
        sleep(1);
    }
    ret = recv(socketfd, &recv_data[len], BUFSIZE - len - 1, 0); // read request
}
I expected that read() would return 0 when the client closes the connection, but that doesn't happen.
A GET request has only a head and no body (well, almost always), so you have everything the client has sent as soon as you have the request head, and you know you have the whole head when you read a blank line, i.e. two consecutive line breaks (and no sooner or later).
If the client has sent only part of the head, without the blank line, you are supposed to wait for the rest. I would put a time-out on that and reject the whole request if it takes too long.
BTW there are still browsers out there, and maybe some proxies as well, with a URL length limit of about 2000 characters.
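For what it's worth, a minimal sketch of that approach, reading until the blank line. The function name and buffer handling are illustrative, not taken from the question:

/* Accumulate until the "\r\n\r\n" that ends the request head.
   Assumes a blocking socket, ideally with SO_RCVTIMEO set as the time-out;
   needs <string.h> and <sys/socket.h>. */
static int read_request_head(int fd, char *buf, size_t bufsize)
{
    size_t len = 0;
    while (len < bufsize - 1) {
        ssize_t n = recv(fd, buf + len, bufsize - 1 - len, 0);
        if (n <= 0)
            return -1;                       /* error, time-out, or early close */
        len += (size_t)n;
        buf[len] = 0;
        if (strstr(buf, "\r\n\r\n") != NULL) /* blank line: head is complete */
            return (int)len;
    }
    return -1;                               /* head too large for the buffer */
}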

STM32 FreeRTOS - UART Deferred Interrupt Problem

I am trying to read data of unknown size using the UART receive interrupt. In the callback function, I re-enable the Rx interrupt so that characters are read one at a time until '\n' is received. When '\n' arrives, the higher-priority task that acts as the deferred interrupt handler should be woken. The problem is that I tried to read byte by byte in the callback and put each character into a buffer, but the buffer never receives any characters. Moreover, the deferred interrupt handler is never woken.
My STM32 board is STM32F767ZI, and my IDE is KEIL.
Some Important notes before sharing the code:
1. rxIndex and gpsBuffer are declared as global.
2. Periodic function works without any problem.
Here is my code:
Periodic Function, Priority = 1
void vPeriodicTask(void *pvParameters)
{
    const TickType_t xDelay500ms = pdMS_TO_TICKS(500UL);
    while (1) {
        vTaskDelay(xDelay500ms);
        HAL_UART_Transmit(&huart3, (uint8_t*)"Imu\r\n", sizeof("Imu\r\n"), 1000);
        HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_7);
    }
}
Deferred Interrupt, Priority = 3
void vHandlerTask(void *pvParameters)
{
    const TickType_t xMaxExpectedBlockTime = pdMS_TO_TICKS(1000);
    while (1) {
        if (xSemaphoreTake(xBinarySemaphore, xMaxExpectedBlockTime) == pdPASS) {
            HAL_UART_Transmit(&huart3, (uint8_t*)"Semaphore Acquired\r\n",
                              sizeof("Semaphore Acquired\r\n"), 1000);
            // Some important processes will be added here
            rxIndex = 0;
            HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_14);
        }
    }
}
Callback function (note: the HAL weak callback must be named exactly HAL_UART_RxCpltCallback, or the override is never called):
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A) {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE; // must be initialised before use
        xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken);
        portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
}
Main function
HAL_UART_Receive_IT(&huart3, &rData, 1);
xBinarySemaphore = xSemaphoreCreateBinary();
if (xBinarySemaphore != NULL) {
    // success
    xTaskCreate(vHandlerTask, "Handler", 128, NULL, 1, &vHandlerTaskHandler);    // note: priorities 1 and 3 here are
    xTaskCreate(vPeriodicTask, "Periodic", 128, NULL, 3, &vPeriodicTaskHandler); // the reverse of the ones stated above
    vTaskStartScheduler();
}
Using HAL here is a good way to get into trouble. It uses HAL_Delay(), which is SysTick-dependent, and you should rewrite that function to read the RTOS tick instead.
I use queues to pass the data (references to the data), but your approach should work. There is always a big question mark over using the HAL functions.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    gpsBuffer[rxIndex++] = rData;
    if (rData == 0x0A) {
        if (xSemaphoreGiveFromISR(xBinarySemaphore, &xHigherPriorityTaskWoken) == pdFALSE) {
            /* some error handling */
        }
    }
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
In conclusion: whenever I combine HAL with an RTOS, I modify the way HAL handles timeouts.
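For reference, a minimal sketch of the queue approach mentioned above; xUartQueue and the task body are illustrative, not taken from the question:

static QueueHandle_t xUartQueue; // create with xQueueCreate(128, sizeof(uint8_t)) before vTaskStartScheduler()

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xQueueSendFromISR(xUartQueue, &rData, &xHigherPriorityTaskWoken); // copy the byte into the queue
    HAL_UART_Receive_IT(huart, (uint8_t*)&rData, 1);                  // re-arm the receive interrupt
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}

void vHandlerTask(void *pvParameters)
{
    uint8_t ch;
    for (;;) {
        if (xQueueReceive(xUartQueue, &ch, portMAX_DELAY) == pdPASS) {
            // accumulate ch until 0x0A, then process the completed line
        }
    }
}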

Read all available bytes from TCP Socket (unknown byte count)

I am having problems using the Indy TIdTCPClient.
I want to call a function every time there is data available on the socket. For this I have a thread calling IdTCPClient->Socket->Readable(100).
The function itself looks like this:
TMemoryStream *mStream = new TMemoryStream;
int len = 0;
try
{
    if (!Form1->IdTCPClient2->Connected())
        Form1->IdTCPClient2->Connect();
    mStream->Position = 0;
    do
    {
        Form1->IdTCPClient2->Socket->ReadStream(mStream, 1);
    }
    while (Form1->IdTCPClient2->Socket->Readable(100));
    len = mStream->Position;
    mStream->Position = 0;
    mStream->Read(Buffer, len);
}
catch (Exception &Ex) {
    Form1->DisplaySSH->Lines->Add(Ex.Message);
    Form1->DisplaySSH->GoToTextEnd();
}
delete mStream;
It will not be called directly within the thread; the thread triggers an event, which calls this function. That means I am using Readable(100) twice without reading data in between.
Since I don't know how many bytes I have to read, I thought I could read one byte, check whether there is more available, and then read another byte.
The problem here is that the do/while loop doesn't loop; it just runs once.
I am guessing that Readable() does not quite work the way I need it to.
Is there any other way to receive all the bytes available on the socket?
You should not be using Readable() directly in this situation. That call reports whether the underlying socket has pending unread data in its internal kernel buffer. That does not take into account that the TIdIOHandler may already have unread data in its InputBuffer that is left over from a previous read operation.
Use the TIdIOHandler::CheckForDataOnSource() method instead of TIdIOHandler::Readable():
TMemoryStream *mStream = new TMemoryStream;
try
{
    if (!Form1->IdTCPClient2->Connected())
        Form1->IdTCPClient2->Connect();
    mStream->Position = 0;
    do
    {
        if (Form1->IdTCPClient2->IOHandler->InputBufferIsEmpty())
        {
            if (!Form1->IdTCPClient2->IOHandler->CheckForDataOnSource(100))
                break;
        }
        Form1->IdTCPClient2->IOHandler->ReadStream(mStream, Form1->IdTCPClient2->IOHandler->InputBuffer->Size, false);
        /* alternatively:
        Form1->IdTCPClient2->IOHandler->InputBuffer->ExtractToStream(mStream);
        */
    }
    while (true);
    // use mStream as needed...
}
catch (const Exception &Ex) {
    Form1->DisplaySSH->Lines->Add(Ex.Message);
    Form1->DisplaySSH->GoToTextEnd();
}
delete mStream;
Or, you can alternatively use TIdIOHandler::ReadBytes() instead of TIdIOHandler::ReadStream(). If you set its AByteCount parameter to -1, it will return only the bytes that are currently available (if the InputBuffer is empty, ReadBytes() will wait up to the ReadTimeout interval for the socket to receive any new bytes)[1]:
try
{
    if (!Form1->IdTCPClient2->Connected())
        Form1->IdTCPClient2->Connect();
    TIdBytes data;
    do
    {
        if (Form1->IdTCPClient2->IOHandler->InputBufferIsEmpty())
        {
            if (!Form1->IdTCPClient2->IOHandler->CheckForDataOnSource(100))
                break;
        }
        Form1->IdTCPClient2->IOHandler->ReadBytes(data, -1, true);
        /* alternatively:
        Form1->IdTCPClient2->IOHandler->InputBuffer->ExtractToBytes(data, -1, true);
        */
    }
    while (true);
    // use data as needed...
}
catch (const Exception &Ex) {
    Form1->DisplaySSH->Lines->Add(Ex.Message);
    Form1->DisplaySSH->GoToTextEnd();
}
[1]: Make sure you are using an up-to-date snapshot of Indy 10. Prior to Oct 6, 2016, there was a logic bug in ReadBytes() when AByteCount=-1 that didn't take the InputBuffer into account before checking the socket for new bytes.

Asynchronous sending data using kqueue

I have a server written in plain-old C accepting TCP connections using kqueue on FreeBSD.
Incoming connections are accepted and added to a simple connection pool to keep track of the file handle.
When data is received (on EVFILT_READ), I call recv() and then I put the payload in a message queue for a different thread to process it.
Receiving and processing data this way works perfectly.
When the processing thread is done, it may need to send something back to the client. Since the processing thread has access to the connection pool and can easily get the file handle, I'm simply calling send() from the processing thread.
This works 99% of the time, but every now and then kqueue gives me an EV_EOF flag, and the connection is dropped.
There is a clear correlation between the frequency of the calls to send() and the number of EV_EOF errors, so I have a feeling the EV_EOF is due to some race condition between my kqueue thread and the processing thread.
The calls to send() always return the expected byte count, so I'm not filling up the tx buffer.
So my question: is it acceptable to call send() from a separate thread as described here? If not, what would be the right way to send data back to the clients asynchronously?
All the examples I find call send() in the same context as the kqueue loop, but my processing threads may need to send data back at any time, even minutes after the last data received from the client, so obviously I can't block the kqueue loop for that long.
Relevant code snippets:
void *tcp_srvthread(void *arg)
{
    [[...Bunch of declarations...]]
    tcp_serversocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    ...
    setsockopt(tcp_serversocket, SOL_SOCKET, SO_REUSEADDR, &i, sizeof(int));
    ...
    err = bind(tcp_serversocket, (const struct sockaddr*)&sa, sizeof(sa));
    ...
    err = listen(tcp_serversocket, 10);
    ...
    kq = kqueue();
    EV_SET(&evSet, tcp_serversocket, EVFILT_READ, EV_ADD | EV_CLEAR, 0, 0, NULL); // EV_CLEAR is a flag, not part of the filter
    ...
    while (!fTerminated) {
        timeout.tv_sec = 2; timeout.tv_nsec = 0;
        nev = kevent(kq, &evSet, 0, evList, NLIST, &timeout);
        for (i = 0; i < nev; i++) {
            if (evList[i].ident == tcp_serversocket) { // new connection?
                socklen = sizeof(addr);
                fd = accept(evList[i].ident, &addr, &socklen); // accept it
                if (fd > 0) { // accept ok?
                    uidx = conn_add(fd, (struct sockaddr_in *)&addr); // Add it to connected controllers
                    if (uidx >= 0) { // add ok?
                        EV_SET(&evSet, fd, EVFILT_READ, EV_ADD | EV_CLEAR, 0, 0, (void*)(uint64_t)(0x00E20000 | uidx)); // monitor events from it
                        if (kevent(kq, &evSet, 1, NULL, 0, NULL) == -1) { // monitor ok?
                            conn_delete(uidx); // ..no, so delete it from my list also
                        }
                    } else { // no room on server?
                        close(fd);
                    }
                }
                else Log(0, "ERR: accept fd=%d", fd);
            }
            else if (evList[i].flags & EV_EOF) {
                [[ ** THIS IS CALLED SOMETIMES AFTER CALLING SEND - WHY?? ** ]]
                uidx = (uint32_t)(uintptr_t)evList[i].udata;
                conn_delete(uidx);
            }
            else if (evList[i].filter == EVFILT_READ) {
                if ((nr = recv(evList[i].ident, buf, sizeof(buf)-2, 0)) > 0) {
                    uidx = (uint32_t)(uintptr_t)evList[i].udata;
                    recv_data(uidx, buf, nr); // This will queue the message for the processing thread
                }
            }
            else {
                // should not get here.
            }
        }
    }
}
The processing thread looks something like this (obviously there's a lot of data manipulation going on in addition to what's shown) :
void *parsethread(void *arg)
{
    int len;
    tmsg_Queue mq;
    char is_ok;
    while (!fTerminated) {
        if ((len = msgrcv(msgRxQ, &mq, sizeof(tmsg_Queue), 0, 0)) > 0) {
            if (process_message(mq)) {
                [[ processing will find the uidx of the client and build the return data ]]
                send(ctl[uidx].fd, replydata, replydataLen, 0);
            }
        }
    }
}
Appreciate any ideas or nudges in the right direction. Thanks.
EV_EOF
If you write to a socket after the peer has closed its reading side, you will receive an RST, which triggers EVFILT_READ with EV_EOF set.
Async
You should try aio_read() and aio_write().
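A minimal sketch of queuing a write with POSIX AIO; queue_reply() and the completion handling are illustrative, not from your code, and on FreeBSD you would normally collect the completion via EVFILT_AIO in your existing kqueue loop rather than polling:

/* Queue an asynchronous write (error paths elided). The aiocb and the buffer
   must both stay valid until the operation completes. */
#include <aio.h>
#include <errno.h>
#include <string.h>

static struct aiocb cb;                 /* one per in-flight write */

int queue_reply(int fd, const void *replydata, size_t replydataLen)
{
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = (void *)replydata;
    cb.aio_nbytes = replydataLen;
    return aio_write(&cb);              /* 0 if queued, -1 on error */
}

int reply_done(void)                    /* call when completion is signalled */
{
    if (aio_error(&cb) == EINPROGRESS)
        return 0;                       /* still being written */
    return (int)aio_return(&cb);        /* bytes written, or -1 */
}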

WSAConnect returns WSAEINVAL on WindowsXP

I use sockets in non-blocking mode, and sometimes the WSAConnect function returns the WSAEINVAL error.
I investigated the problem and found that it occurs if there is no pause (or only a very small one) between WSAConnect calls.
Does anyone know how to avoid this situation?
Below is source code that reproduces the problem. If I increase the value of the parameter to Sleep to 50 or greater, the problem disappears.
P.S. This problem reproduces only on Windows XP; on Win7 it works well.
#undef UNICODE
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <iostream>
#include <windows.h>
#pragma comment(lib, "Ws2_32.lib")

static int getError(SOCKET sock)
{
    DWORD error = WSAGetLastError();
    return error;
}

void main()
{
    SOCKET sock;
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
        fprintf(stderr, "Socket Initialization Error. Program aborted\n");
        return;
    }
    for (int i = 0; i < 1000; ++i) {
        struct addrinfo hints;
        struct addrinfo *res = NULL;
        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_PASSIVE;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_family = AF_INET;
        hints.ai_protocol = IPPROTO_TCP;
        if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
            fprintf(stderr, "GetAddrInfo Error. Program aborted\n");
            closesocket(sock);
            WSACleanup();
            return;
        }
        struct addrinfo *ptr = 0;
        for (ptr = res; ptr != NULL; ptr = ptr->ai_next) {
            sock = WSASocket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol, NULL, 0, NULL);
            if (sock == INVALID_SOCKET)
                int err = getError(sock);
            else {
                u_long noblock = 1;
                if (ioctlsocket(sock, FIONBIO, &noblock) == SOCKET_ERROR) {
                    int err = getError(sock);
                    closesocket(sock);
                    sock = INVALID_SOCKET;
                }
                break;
            }
        }
        int ret;
        do {
            ret = WSAConnect(sock, ptr->ai_addr, (int)ptr->ai_addrlen, NULL, NULL, NULL, NULL);
            if (ret == SOCKET_ERROR) {
                int error = getError(sock);
                if (error == WSAEWOULDBLOCK) {
                    Sleep(5);
                    continue;
                }
                else if (error == WSAEISCONN) {
                    fprintf(stderr, "+");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else if (error == 10037) {
                    fprintf(stderr, "-");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else {
                    fprintf(stderr, "Connect Error. [%d]\n", error);
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
            }
            else {
                int one = 1;
                setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&one, sizeof(one));
                fprintf(stderr, "OK\n");
                break;
            }
        }
        while (1);
    }
    std::cout << "end";
    char ch;
    std::cin >> ch;
}
You've got a whole basketful of errors and questionable design and coding decisions here. I'm going to have to break them up into two groups:
Outright Errors
I expect if you fix all of the items in this section, your symptom will disappear, but I wouldn't want to speculate about which one is the critical fix:
Calling connect() in a loop on a single socket is simply wrong.
If you mean to establish a connection, drop it, and reestablish it 1000 times, you need to call closesocket() at the end of each loop, then call socket() again to get a fresh socket. You can't keep re-connecting the same socket. Think of it like a power plug: if you want to plug it in twice, you have to unplug (closesocket()) between times.
If instead you mean to establish 1000 simultaneous connections, you need to allocate a new socket with socket() on each iteration, connect() it, then go back around again to get another socket. It's basically the same loop as for the previous case, except without the closesocket() call.
Beware that since XP is a client version of Windows, it's not optimized for handling thousands of simultaneous sockets.
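A minimal sketch of the first variant (error checks elided; res as returned by getaddrinfo()):

// Connect/drop/reconnect: a fresh socket on every iteration.
for (int i = 0; i < 1000; ++i) {
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); // new "plug" each time
    connect(s, res->ai_addr, (int)res->ai_addrlen);       // plug it in
    // ... use the connection ...
    closesocket(s);                                       // unplug before going around again
}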
Calling connect() again is not the correct response to WSAEWOULDBLOCK:
if (error == WSAEWOULDBLOCK) {
    Sleep(5);
    continue; /// WRONG!
}
That continue effectively commits the same error as above. Worse, if you fix the previous error but leave this one, this usage will make your code start leaking sockets.
WSAEWOULDBLOCK is not an error. All it means after a connect() on a nonblocking socket is that the connection didn't get established immediately. The stack will notify your program when it does.
You get that notification by calling one of select(), WSAEventSelect(), or WSAAsyncSelect(). If you use select(), the socket will be marked writable when the connection gets established. With the other two, you will get an FD_CONNECT event when the connection gets established.
Which of these three APIs to call depends on why you want nonblocking sockets in the first place, and what the rest of the program will look like. What I see so far doesn't need nonblocking sockets at all, but I suppose you have some future plan that will inform your decision. I've written an article, Which I/O Strategy Should I Use (part of the Winsock Programmers' FAQ) which will help you decide which of these options to use; it may instead guide you to another option entirely.
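To illustrate the select() option, a minimal sketch (the 10-second timeout is arbitrary): on Windows, a successful nonblocking connect is reported via the write set and a failed one via the except set.

fd_set wfds, efds;
FD_ZERO(&wfds); FD_SET(sock, &wfds);
FD_ZERO(&efds); FD_SET(sock, &efds);
timeval tv = { 10, 0 };                       // arbitrary timeout
int n = select(0, NULL, &wfds, &efds, &tv);   // first argument is ignored by Winsock
if (n > 0 && FD_ISSET(sock, &wfds)) {
    // connection established
} else if (n > 0 && FD_ISSET(sock, &efds)) {
    int err = 0, len = sizeof(err);
    getsockopt(sock, SOL_SOCKET, SO_ERROR, (char*)&err, &len); // find out why it failed
} else {
    // timeout (n == 0) or select() error (n == SOCKET_ERROR)
}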
You shouldn't use AI_PASSIVE and connect() on the same socket. Your use of AI_PASSIVE with getaddrinfo() tells the stack you intend to use this socket to accept incoming connections. Then you go and use that socket to make an outgoing connection.
You've basically lied to the stack here. Computers find ways to get revenge when you lie to them.
Sleep() is never the right way to fix problems with Winsock. There are built-in delays within the stack that your program can see, such as TIME_WAIT and the Nagle algorithm, but Sleep() is not the right way to cope with these, either.
Questionable Coding/Design Decisions
This section is for things I don't expect to make your symptom go away, but you should consider fixing them anyway:
The main reason to use getaddrinfo() — as opposed to older, simpler functions like inet_addr() — is if you have to support IPv6. That kind of conflicts with your wish to support XP, since XP's IPv6 stack wasn't nearly as heavily tested during the time XP was the current version of Windows as its IPv4 stack. I would expect XP's IPv6 stack to still have bugs as a result, even if you've got all the patches installed.
If you don't really need IPv6 support, doing it the old way might make your symptoms disappear. You might end up needing an IPv4-only build for XP.
This code:
for (int i = 0; i < 1000; ++i) {
    // ...
    if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
...is inefficient. There is no reason you need to keep reinitializing res on each loop.
Even if there is some reason I'm not seeing, you're leaking memory by not calling freeaddrinfo() on res.
You should initialize this data structure once before you enter the loop, then reuse it on each iteration.
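A minimal sketch of that restructuring (error handling elided):

struct addrinfo hints = {}, *res = NULL;
hints.ai_socktype = SOCK_STREAM;
hints.ai_family = AF_INET;
hints.ai_protocol = IPPROTO_TCP;
getaddrinfo("172.20.1.59", "8091", &hints, &res); // resolve once, before the loop
for (int i = 0; i < 1000; ++i) {
    // ... create, connect, and close a socket using res ...
}
freeaddrinfo(res); // release it once, after the loop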
else if (error == 10037) {
Why aren't you using WSAEALREADY here?
You don't need to use WSAConnect() here. You're using the 3-argument subset that Winsock shares with BSD sockets. You might as well use connect() here instead.
There's no sense making your code any more complex than it has to be.
Why aren't you using a switch statement for this?
if (error == WSAEWOULDBLOCK) {
    // ...
}
else if (error == WSAEISCONN) {
    // ...
}
// etc.
You shouldn't disable the Nagle algorithm:
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, ...);