NB: the OP confirms in the comment thread that the problem was due to a typo, not shown in the posted code.
I was expecting to get a notification using GetQueuedCompletionStatus after scheduling an overlapped disconnect with DisconnectEx. I never get one - is this by design? If I specify a manual reset event in the OVERLAPPED structure this is signalled to indicate that the disconnect is complete, but GetQueuedCompletionStatus never returns.
My call to DisconnectEx looks a bit like this (note that context has an operator LPOVERLAPPED and ol is the first element in the structure):
context.ol.hEvent = hEvent;    // manual-reset event, signalled when the disconnect completes

BOOL result = DisconnectEx(context.socket, context, TF_REUSE_SOCKET, 0);

if (result)
{
    // we completed synchronously:
    ProcessCompletion(0, context, 0);
}
else
{
    int error = WSAGetLastError();

    if (error != ERROR_IO_PENDING)
    {
        throw ServerSocketException("DisconnectEx failed");
    }

    // added while debugging: this wait does complete
    WaitForSingleObject(hEvent, INFINITE);

    std::cout << "disconnected - event signalled\n";
}
I added the WaitForSingleObject when I found that GetQueuedCompletionStatus didn't return. What is the correct way to detect DisconnectEx completing? I want to use the socket again in a call to AcceptEx.
It appears that this was because of a typo on the OP's part.
(Posting an answer so other people don't have to read the comment thread...)
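For reference, the notification does arrive once the socket is associated with the completion port. A minimal sketch (assuming an existing port hIOCP and key completionKey, neither shown in the question, with context and ProcessCompletion as posted):

// associate the socket with the completion port once, up front; after this,
// DisconnectEx completions are posted to the port like any other overlapped I/O
CreateIoCompletionPort(reinterpret_cast<HANDLE>(context.socket), hIOCP, completionKey, 0);

ZeroMemory(&context.ol, sizeof(context.ol));    // no hEvent needed with a port

if (!DisconnectEx(context.socket, context, TF_REUSE_SOCKET, 0) &&
    WSAGetLastError() != ERROR_IO_PENDING)
{
    throw ServerSocketException("DisconnectEx failed");
}

DWORD bytes = 0;
ULONG_PTR key = 0;
LPOVERLAPPED pOverlapped = nullptr;

if (GetQueuedCompletionStatus(hIOCP, &bytes, &key, &pOverlapped, INFINITE))
{
    // disconnect complete; the socket can now be handed to AcceptEx again
}

One subtle point worth knowing: if the low-order bit of the OVERLAPPED's hEvent handle is set, Windows deliberately skips posting the completion to the port, so a stray event handle can also produce the "event signalled but GetQueuedCompletionStatus never returns" symptom.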
Can anyone please help me with this code? What is the use of the boost::asio::async_write function here?
Does it send an acknowledgment back to the client?
void handle_read(const boost::system::error_code& error,
                 size_t bytes_transferred)
{
    if (!error)
    {
        boost::asio::async_write(socket_,
            boost::asio::buffer(data_, bytes_transferred),
            boost::bind(&session::handle_write, this,
                boost::asio::placeholders::error));
    }
    else
    {
        delete this;
    }
}
It looks like this is from an "echo server" example. async_write writes the contents of boost::asio::buffer(data_, bytes_transferred) to the socket.
Since we're inside handle_read, we can guess that this function is the completion handler of an earlier async_read call that filled the data_ buffer. Because the write uses the exact number of bytes that read reported (bytes_transferred), and nothing visibly modifies data_ in between, this simply sends the message (or data in general) that was just received back to socket_. If socket_ is also the endpoint the data was read from, that is the definition of an echo server. So no, it is not an acknowledgment as such; it is the received payload echoed back.
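For context, here is the read side this handler most likely pairs with, as in the stock Boost.Asio echo example (a sketch: session, socket_, data_ and max_length are assumed to match the OP's class, and do_read is an illustrative name):

void session::do_read()
{
    // wait for the next chunk from the client; handle_read above is the
    // completion handler and receives the error code and bytes_transferred
    socket_.async_read_some(boost::asio::buffer(data_, max_length),
        boost::bind(&session::handle_read, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void session::handle_write(const boost::system::error_code& error)
{
    if (!error)
        do_read();    // echo sent, go back to reading
    else
        delete this;
}

Each read schedules exactly one write of the same bytes, and each completed write schedules the next read, which is what makes the session echo indefinitely.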
I have a case to handle. There is one thread calling WSAPoll() to receive data from a TCP connection. The code looks like this:
int result = WSAPoll(fdSocket, 1, timeout);
if (result == 0)
{
    // time out
}
else if (result == -1)
{
    // socket error
}
If I set the timeout to a negative number, the thread will wait indefinitely. However, I want to make this function return a value, such as 0, directly to this thread when I call a function StopWait() from another thread.
So what can I do to make this work? Queue an asynchronous procedure call to this blocking thread from the other thread via StopWait()? If so, what should I add to make it return the value I want?
Thanks!
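An APC by itself won't do it: WSAPoll is not an alertable wait, so an APC queued to the blocked thread is not delivered until the poll returns anyway. A common alternative (a sketch under assumed names; g_wakeSock, g_wakeAddr, InitWake and PollWithStop are illustrative, not from the question) is to poll a loopback "wake-up" socket alongside the data socket and have StopWait() send one byte to it:

SOCKET g_wakeSock;          // loopback UDP socket used only for waking the poll
sockaddr_in g_wakeAddr;     // the address it ended up bound to

void InitWake()
{
    g_wakeSock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in a = {};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;                                         // any free port
    bind(g_wakeSock, (sockaddr*)&a, sizeof(a));
    int len = sizeof(g_wakeAddr);
    getsockname(g_wakeSock, (sockaddr*)&g_wakeAddr, &len);  // learn the port
}

void StopWait()                                             // called from another thread
{
    char b = 0;
    sendto(g_wakeSock, &b, 1, 0, (sockaddr*)&g_wakeAddr, sizeof(g_wakeAddr));
}

int PollWithStop(WSAPOLLFD& dataFd, int timeout)
{
    WSAPOLLFD fds[2] = {};
    fds[0] = dataFd;
    fds[1].fd = g_wakeSock;
    fds[1].events = POLLRDNORM;

    int result = WSAPoll(fds, 2, timeout);
    if (result > 0 && (fds[1].revents & POLLRDNORM))
    {
        char b;
        recv(g_wakeSock, &b, 1, 0);     // drain the wake-up byte
        return 0;                       // report it like a timeout
    }
    dataFd.revents = fds[0].revents;
    return result;
}

Returning 0 makes a stop look exactly like a timeout to the existing code; return a distinct sentinel instead if the caller needs to tell the two cases apart.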
Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure if I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D it seems to be at a much lower level. I've researched a lot and looked up a lot of code, documentation etc. for both D and C/C++ to get an understanding, but I'm still not sure I understand the concept correctly, and I'd appreciate any examples you might have. I tried looking at splat, but it's very outdated, and vibe seems too complex just for a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of that socket, right?
So basically how I'd go about handling the sockets is polling to get the read status of the socket; on success (value > 0) I can call receive(), which returns 0 on disconnection and otherwise the number of bytes received, and I'd have to keep doing this until the expected number of bytes has been received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've made up so far.
void HANDLE_READ()
{
    while (true)
    {
        synchronized
        {
            auto events = cast(AsyncObject[int])ASYNC_EVENTS_READ;
            foreach (asyncObject; events)
            {
                int poll = pollRecv(asyncObject.socket.m_socket);
                switch (poll)
                {
                    case 0:
                        throw new SocketException("The socket had a time out!");
                    default:
                        if (poll <= -1)
                        {
                            throw new SocketException("The socket was interrupted!");
                        }

                        // read at most the number of bytes still missing
                        auto recvGetSize = asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize;
                        ubyte[] recvBuffer = new ubyte[recvGetSize];

                        auto recv = asyncObject.socket.m_socket.receive(recvBuffer);
                        if (recv == 0)
                        {
                            // receive() returns 0 on disconnection
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.socket.disconnect();
                            continue;
                        }

                        // append only the bytes actually received
                        asyncObject.socket.m_readBuffer ~= recvBuffer[0 .. recv];
                        asyncObject.socket.readSize += recv;

                        if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.event(asyncObject.socket);
                        }
                        break;
                }
            }
        }
    }
}
So basically how I'd go about handling the sockets is polling to get the read status of the socket
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events that need to be handled. With polling, you'd have to check for new events continuously or on a timer, which leads to wasted CPU cycles, and events getting handled a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
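In outline, the loop looks like this (a C++ sketch, since the pattern maps one-to-one onto std.socket's SocketSet and select; Connection and its hooks are hypothetical bookkeeping, not part of any library):

#include <sys/select.h>
#include <algorithm>
#include <vector>

struct Connection
{
    int fd;
    bool hasQueuedOutput();   // is there data waiting to be sent?
    void onReadable();        // recv and dispatch incoming data
    void flushQueue();        // send as much queued data as possible
    void cleanup();           // tear down an errored connection
};

void eventLoop(std::vector<Connection*>& connections)
{
    for (;;)
    {
        fd_set readSet, writeSet, errorSet;
        FD_ZERO(&readSet);
        FD_ZERO(&writeSet);
        FD_ZERO(&errorSet);

        int maxFd = 0;
        for (Connection* c : connections)
        {
            FD_SET(c->fd, &readSet);           // always interested in data
            if (c->hasQueuedOutput())
                FD_SET(c->fd, &writeSet);      // only while output is pending
            FD_SET(c->fd, &errorSet);
            maxFd = std::max(maxFd, c->fd);
        }

        // sleeps until at least one socket has an actionable event
        if (select(maxFd + 1, &readSet, &writeSet, &errorSet, nullptr) <= 0)
            break;

        for (Connection* c : connections)
        {
            if (FD_ISSET(c->fd, &errorSet))
                c->cleanup();
            else
            {
                if (FD_ISSET(c->fd, &readSet))
                    c->onReadable();
                if (FD_ISSET(c->fd, &writeSet))
                    c->flushQueue();
            }
        }
    }
}

Note the writable set only contains sockets that actually have queued output; a socket is writable almost all the time, so registering it unconditionally would make select return constantly and turn the loop back into busy polling.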
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.
Is it ok to invoke WSAAsyncSelect in the WM_CREATE handler of a window procedure (WndProc), and then perform all recv actions inside the same WndProc under WM_SOCKET (e.g. to recv and populate a control with the received byte data)?
For example, I know that performing long tasks inside the WndProc can make the window unresponsive (since it cannot handle other messages until the current one is processed), but I've seen no examples that hand this recv I/O off to a thread or event object. Is that completely unnecessary?
Here's the example case in the WndProc that I've seen on the net; Petzold also handles the recv in a similar fashion:
case WM_SOCKET:
{
    if (WSAGETSELECTERROR(lParam))
    {
        MessageBox(hWnd,
                   "Connection to server failed",
                   "Error",
                   MB_OK | MB_ICONERROR);
        SendMessage(hWnd, WM_DESTROY, NULL, NULL);
        break;
    }

    switch (WSAGETSELECTEVENT(lParam))
    {
        case FD_READ:
        {
            char szIncoming[1024];
            ZeroMemory(szIncoming, sizeof(szIncoming));

            int inDataLength = recv(Socket,
                                    (char*)szIncoming,
                                    sizeof(szIncoming) / sizeof(szIncoming[0]),
                                    0);

            strncat(szHistory, szIncoming, inDataLength);
            strcat(szHistory, "\r\n");

            SendMessage(hEditIn,
                        WM_SETTEXT,
                        sizeof(szIncoming) - 1,
                        reinterpret_cast<LPARAM>(&szHistory));
        }
        break;

        case FD_CLOSE:
        {
            MessageBox(hWnd,
                       "Server closed connection",
                       "Connection closed!",
                       MB_ICONINFORMATION | MB_OK);
            closesocket(Socket);
            SendMessage(hWnd, WM_DESTROY, NULL, NULL);
        }
        break;
    }
}
Yes, this is perfectly acceptable. Though typically you would wait until CreateWindow/Ex() returns before calling WSAAsyncSelect(), either way works fine. Just be sure to handle the case where recv() fails, or returns fewer bytes than you asked for.
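To make that last point concrete, the FD_READ branch could look something like this (a sketch meant as a drop-in for the question's case; Socket, szHistory and hEditIn are the question's variables, and szHistory is assumed to be a char array in scope):

case FD_READ:
{
    char szIncoming[1024];
    int inDataLength = recv(Socket, szIncoming, sizeof(szIncoming) - 1, 0);

    if (inDataLength == SOCKET_ERROR)
    {
        if (WSAGetLastError() != WSAEWOULDBLOCK)
        {
            // real failure: report it, close the socket, clean up
        }
        break;    // WSAEWOULDBLOCK just means no data after all
    }
    if (inDataLength == 0)
        break;    // graceful close; an FD_CLOSE notification follows

    szIncoming[inDataLength] = '\0';    // recv() does not null-terminate
    strncat(szHistory, szIncoming, sizeof(szHistory) - strlen(szHistory) - 1);
    strncat(szHistory, "\r\n", sizeof(szHistory) - strlen(szHistory) - 1);
    SetWindowText(hEditIn, szHistory);  // equivalent to a correct WM_SETTEXT
}
break;

Note that recv() returning fewer bytes than requested is normal for a stream socket, not an error; the handler simply uses whatever length came back.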
Here's the code:
ALint cProcessedBuffers = 0;
ALenum alError = AL_NO_ERROR;

alGetSourcei(m_OpenALSourceId, AL_BUFFERS_PROCESSED, &cProcessedBuffers);
if ((alError = alGetError()) != AL_NO_ERROR)
{
    throw "AudioClip::ProcessPlayedBuffers - error returned from alGetSourcei()";
}

alError = AL_NO_ERROR;
if (cProcessedBuffers > 0)
{
    alSourceUnqueueBuffers(m_OpenALSourceId, cProcessedBuffers, arrBuffers);
    if ((alError = alGetError()) != AL_NO_ERROR)
    {
        throw "AudioClip::ProcessPlayedBuffers - error returned from alSourceUnqueueBuffers()";
    }
}
The call to alGetSourcei returns with cProcessedBuffers > 0, but the following call to alSourceUnqueueBuffers fails with AL_INVALID_OPERATION. This is an erratic error that does not always occur. The program containing this sample code is a single-threaded app running in a tight loop (typically it would be synced with a display loop, but in this case I'm not using a timed callback of any sort).
Try alSourceStop(m_OpenALSourceId) first, then alSourceUnqueueBuffers(), and after that restart playback with alSourcePlay(m_OpenALSourceId).
I solved the same problem this way, though I don't know why it has to be done like that.
As mentioned in this SO thread, if you have AL_LOOPING enabled on a streaming source, the unqueue operation will fail.
The looping flag holds some sort of lock on the buffers while it is enabled. The answer by @MyMiracle hints at this as well: stopping the sound releases that hold, but it isn't necessary.
AL_LOOPING is not meant to be set on a streaming source, since you manage the source's data through the queue yourself. Keep queuing and it will keep playing; queue from the beginning of the data and it will loop.
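For illustration, manual looping of a streamed source might look like this (a sketch; streamNextChunk and rewindStream are hypothetical helpers that refill a buffer from your audio data and seek back to its start):

alSourcei(m_OpenALSourceId, AL_LOOPING, AL_FALSE);   // never on a streaming source

ALint processed = 0;
alGetSourcei(m_OpenALSourceId, AL_BUFFERS_PROCESSED, &processed);

while (processed-- > 0)
{
    ALuint buf;
    alSourceUnqueueBuffers(m_OpenALSourceId, 1, &buf);   // legal without AL_LOOPING

    if (!streamNextChunk(buf))      // ran out of source data...
    {
        rewindStream();             // ...so wrap around to the beginning
        streamNextChunk(buf);
    }
    alSourceQueueBuffers(m_OpenALSourceId, 1, &buf);     // keep the queue fed
}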