Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking it properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D it seems to be at a much lower level. I've researched a lot and looked up plenty of code and documentation for both D and C/C++ to get an understanding, but I'm still not sure I have the concept right, and I'd appreciate any examples you might have. I tried looking at splat, but it's very outdated, and vibe seems too complex for just a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of the socket, right?
So basically how I'd go about handling the sockets is polling to get the read status of the socket, and if it succeeds (value > 0) I can call receive(), which returns 0 on disconnection and otherwise the number of bytes received, and I'd have to keep doing this until the expected number of bytes has been received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've come up with so far.
void HANDLE_READ()
{
while (true)
{
synchronized
{
auto events = cast(AsyncObject[int])ASYNC_EVENTS_READ;
foreach (asyncObject; events)
{
int poll = pollRecv(asyncObject.socket.m_socket);
switch (poll)
{
case 0:
{
throw new SocketException("The socket had a time out!");
continue;
}
default:
{
if (poll <= -1)
{
throw new SocketException("The socket was interrupted!");
continue;
}
int recvGetSize = (asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize);
ubyte[] recvBuffer = new ubyte[recvGetSize];
int recv = asyncObject.socket.m_socket.receive(recvBuffer);
if (recv == 0)
{
removeAsyncObject(asyncObject.event_id, true);
asyncObject.socket.disconnect();
continue;
}
asyncObject.socket.m_readBuffer ~= recvBuffer;
asyncObject.socket.readSize += recv;
if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
{
removeAsyncObject(asyncObject.event_id, true);
asyncObject.event(asyncObject.socket);
}
break;
}
}
}
}
}
}
So basically how I'd go about handling the sockets is polling to get the read status of the socket
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events that need to be handled. With polling, you'd have to check for new events continuously or on a timer, which leads to wasted CPU cycles, and events getting handled a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
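To make that concrete, here is a minimal sketch of such a loop, written against the plain BSD sockets API that D's SocketSet and Socket.select wrap; the listener handling and per-connection buffers are illustrative assumptions, not code from the question:
// Sketch of a select-based event loop (read/error sides only; a writable set
// would be added the same way whenever there is queued data to send).
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>
void event_loop(int listener, std::vector<int> &clients)
{
    for (;;)
    {
        fd_set readSet, errorSet;
        FD_ZERO(&readSet);
        FD_ZERO(&errorSet);
        FD_SET(listener, &readSet);
        int maxFd = listener;
        for (int fd : clients)
        {
            FD_SET(fd, &readSet);      // interested in incoming data
            FD_SET(fd, &errorSet);     // and in error conditions
            if (fd > maxFd) maxFd = fd;
        }
        // Sleep until at least one socket has an actionable event.
        if (select(maxFd + 1, &readSet, nullptr, &errorSet, nullptr) < 0)
            break;                     // real code would inspect errno
        if (FD_ISSET(listener, &readSet))
        {
            int c = accept(listener, nullptr, nullptr);
            if (c >= 0) clients.push_back(c);
        }
        for (int fd : clients)
        {
            if (FD_ISSET(fd, &readSet))
            {
                char buf[4096];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) { /* 0 = peer closed, <0 = error: close(fd) and drop it */ }
                else        { /* append buf[0 .. n) to this connection's read buffer */ }
            }
        }
    }
}
The D version looks essentially the same, with SocketSet.add in place of FD_SET and Socket.select in place of the raw call.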
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.
Related
I am observing high CPU usage in my UDP server implementation, which runs an infinite loop expecting fifteen 1.5 KB packets every millisecond. It looks like below:
struct RecvContext
{
enum { BufferSize = 1600 };
RecvContext()
{
senderSockAddrLen = sizeof(sockaddr_storage);
memset(&overlapped, 0, sizeof(OVERLAPPED));
overlapped.hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
memset(&sendersSockAddr, 0, sizeof(sockaddr_storage));
buffer.clear();
buffer.resize(BufferSize);
wsabuf.buf = (char*)buffer.data();
wsabuf.len = ULONG(buffer.size());
}
void CloseEventHandle()
{
if (overlapped.hEvent != INVALID_HANDLE_VALUE)
{
CloseHandle(overlapped.hEvent);
overlapped.hEvent = INVALID_HANDLE_VALUE;
}
}
OVERLAPPED overlapped;
int senderSockAddrLen;
sockaddr_storage sendersSockAddr;
std::vector<uint8_t> buffer;
WSABUF wsabuf;
};
void Receive()
{
DWORD flags = 0, bytesRecv = 0;
SOCKET sockHandle =...;
while (//stopping condition//)
{
std::shared_ptr<RecvContext> _recvContext = std::make_shared<IO::RecvContext>();
if (SOCKET_ERROR == WSARecvFrom(sockHandle, &_recvContext->wsabuf, 1, nullptr, &flags, (sockaddr*)&_recvContext->sendersSockAddr,
(LPINT)&_recvContext->senderSockAddrLen, &_recvContext->overlapped, nullptr))
{
if (WSAGetLastError() != WSA_IO_PENDING)
{
//error
}
else
{
if (WSA_WAIT_FAILED == WSAWaitForMultipleEvents(1, &_recvContext->overlapped.hEvent, FALSE, INFINITE, FALSE))
{
//error
}
if (!WSAGetOverlappedResult(sockHandle, &_recvContext->overlapped, &bytesRecv, FALSE, &flags))
{
//error
}
}
}
_recvContext->CloseEventHandle();
// async task to process _recvContext->buffer
}
}
The CPU consumption for this UDP server is very high even when the packets are not being processed after receipt. How can the CPU consumption be improved here?
You've chosen about the most inefficient combination of mechanisms imaginable.
Why use overlapped I/O if you're only going to pend one operation and then wait for it complete?
Why use an event, which is about the slowest notification scheme that Windows has?
Why do you only pend one operation at a time? You're forcing the implementation to stash datagrams in its own buffers and then copy them into yours.
Why do you post the receive operation right before you're going to wait for it to complete rather than right after the previous one completes?
Why do you create a new receive context each time instead of re-using the existing buffer, event, and so on?
Use IOCP. Windows events are very slow and heavy.
Post lots of operations. You want the operating system to be able to put the datagram right in your buffer rather than having to allocate another buffer that it copies data into and out of.
Re-use your buffers and allocate all your receive buffers from a contiguous pool rather than fragmenting them throughout process memory. The memory used for your buffers has to be pinned and you want to minimize the amount of pinning needed.
Re-post operations as soon as they complete. Don't process them and then re-post. There's no reason to delay starting the operation. You can probably ignore this if you followed all the other suggestions because you wouldn't have a "spare" buffer to post anyway.
Alternatively, you can probably get away with having a thread that spins on a blocking receive operation. Just make sure the loop is as tight as possible: as soon as the receive returns, dispatch another thread to process the buffer it just filled and immediately post a different (already-allocated) buffer.
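Putting the IOCP suggestions above together, here is a rough sketch (it assumes an already bound UDP SOCKET, WSAStartup done, and Ws2_32.lib linked; the struct and function names are made up for illustration, and error handling is trimmed):
#include <winsock2.h>
#include <windows.h>
#include <vector>
struct RecvOp {
    OVERLAPPED       overlapped;   // kept first so we can recover the struct from the OVERLAPPED*
    WSABUF           wsabuf;
    sockaddr_storage from;
    INT              fromLen;
    char             buffer[1600];
};
static void PostRecv(SOCKET s, RecvOp* op) {
    ZeroMemory(&op->overlapped, sizeof(op->overlapped));
    op->fromLen    = sizeof(op->from);
    op->wsabuf.buf = op->buffer;
    op->wsabuf.len = sizeof(op->buffer);
    DWORD flags = 0;
    // Completion (success or failure) is reported through the completion port.
    if (WSARecvFrom(s, &op->wsabuf, 1, nullptr, &flags, (sockaddr*)&op->from,
                    &op->fromLen, &op->overlapped, nullptr) == SOCKET_ERROR
        && WSAGetLastError() != WSA_IO_PENDING) {
        // real code: handle the immediate failure
    }
}
void RunUdpReceiver(SOCKET s) {
    // Create a completion port and associate the socket with it.
    HANDLE iocp = CreateIoCompletionPort((HANDLE)s, nullptr, 0, 0);
    // Keep many receives outstanding so the kernel can place datagrams
    // directly into our reused, contiguously allocated buffers.
    std::vector<RecvOp> ops(64);
    for (auto& op : ops) PostRecv(s, &op);
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ov) break;                      // port closed or fatal error
        RecvOp* op = CONTAINING_RECORD(ov, RecvOp, overlapped);
        if (ok) {
            // ... process (or copy out) op->buffer, which holds 'bytes' bytes from op->from ...
        }
        PostRecv(s, op);                     // re-post the buffer immediately
    }
}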
I am trying to test WiFi data transfer between a cell phone and an ESP32 (Arduino). When the ESP32 reads file data via WiFi, client.read() often returns -1 even though there is still data to come, so I have to add extra conditions to check whether the read has finished or not.
My question is why there are so many failed reads; any ideas are highly appreciated.
void setup()
{
i=0;
Serial.begin(115200);
Serial.println("begin...");
// You can remove the password parameter if you want the AP to be open.
WiFi.softAP(ssid, password);
IPAddress myIP = WiFi.softAPIP();
Serial.print("AP IP address: ");
Serial.println(myIP);
server.begin();
Serial.println("Server started");
}
// the loop function runs over and over again until power down or reset
void loop()
{
WiFiClient client = server.available(); // listen for incoming clients
if(client) // if you get a client,
{
Serial.println("New Client."); // print a message out the serial port
Serial.println(client.remoteIP().toString());
while(client.connected()) // loop while the client's connected
{
while(client.available()>0) // if there's bytes to read from the client,
{
char c = client.read(); // read a byte, then
if(DOWNLOADFILE ==c){
pretime=millis();
uint8_t filename[32]={0};
uint8_t bFilesize[8];
long filesize;
int segment=0;
int remainder=0;
uint8_t data[512];
int len=0;
int totallen=0;
delay(50);
len=client.read(filename,32);
delay(50);
len=client.read(bFilesize,8);
filesize=BytesToLong(bFilesize);
segment=(int)filesize/512;
delay(50);
i=0; //succeed times
j=0; //fail times
////////////////////////////////////////////////////////////////////
//problem occurs here, too many "-1" return values
// total read 24941639 bytes, succeeded 49725 times, failed 278348 times
// if there were no read problems, it should only need to read 48,715 times and finish.
//But in total it read 328,073 times, including 278,348 failed times, wasting too much time
while(((len=client.read(data,512))!=-1) || (totallen<filesize))
{
if(len>-1) {
totallen+=len;
i++;
}
else{
j++;
}
}
///loop read end , too many times read fail//////////////////////////////////////////////////////////////////
sprintf(toClient, "\nfile name %s,size %d, total read %d, segment %d, succeed %d times, failed %d times\n",filename,filesize,totallen,segment,i,j);
Serial.write(toClient);
curtime=millis();
sprintf(toClient, "time splashed %d ms, speed %d Bps\n", curtime-pretime, filesize*1000/(curtime-pretime));
Serial.write(toClient);
client.write(RETSUCCESS);
}
else
{
Serial.write("Unknow command\n");
}
}
}
// close the connection:
client.stop();
Serial.println("Client Disconnected.");
}
When you call available() and check for > 0, you are checking to see if there is one or more characters available to read. It will be true if just one character has arrived. You read one character, which is fine, but then you start reading more without stopping to see if there are more available.
TCP doesn't guarantee that if you write 100 characters to a socket that they all arrive at once. They can arrive in arbitrary "chunks" with arbitrary delays. All that's guaranteed is that they will eventually arrive in order (or if that's not possible because of networking issues, the connection will fail.)
In the absence of a blocking read function (I don't know if those exist) you have to do something like what you are doing. You have to read one character at a time and append it to a buffer, gracefully handling the possibility of getting a -1 (the next character isn't here yet, or the connection broke). In general you never want to try to read multiple characters in a single read(buf, len) unless you've just used available() to make sure len characters are actually available. And even that can fail if your buffers are really large. Stick to one character at a time.
It's a reasonable idea to call delay(1) when available() returns 0. In the places where you try to guess at something like delay(20) before reading a buffer you are rolling the dice - there's no promise that any amount of delay will guarantee bytes get delivered. Example: Maybe a drop of water fell on the chip's antenna and it won't work until the drop evaporates. Data could be delayed for minutes.
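Putting the last two paragraphs together, a helper along these lines is one way to structure it. This is only a sketch against the WiFiClient API used in the question; the function name and timeout are made up:
// Hypothetical helper: read exactly 'want' bytes, one at a time, tolerating -1.
#include <WiFi.h>
bool readExactly(WiFiClient &client, uint8_t *dest, size_t want, unsigned long timeoutMs)
{
    size_t got = 0;
    unsigned long start = millis();
    while (got < want)
    {
        if (!client.connected())
            return false;                 // connection broke
        if (client.available() > 0)
        {
            int c = client.read();        // one byte; -1 means "nothing yet"
            if (c >= 0)
                dest[got++] = (uint8_t)c;
        }
        else
        {
            if (millis() - start > timeoutMs)
                return false;             // give up eventually
            delay(1);                     // yield instead of hammering read()
        }
    }
    return true;
}
The caller would then use something like this for the filename, the size field, and each data chunk, instead of relying on fixed delay(50) calls.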
I don't know how available() behaves if the connection fails. You might have to do a read() and get back a -1 to diagnose a failed connection. The Arduino documentation is absolutely horrible, so you'll have to experiment.
TCP is much simpler to handle on platforms that have threads, blocking read, select() and other tools to manage data. Having only non-blocking read makes things harder, but there it is.
In some situations UDP is actually a lot simpler - there are more guarantees about getting messages of certain sizes in a single chunk. But of course whole messages can go missing or show up out of order. It's a trade-off.
I seem to be struggling with the std::io::TcpStream. I'm actually trying to open a TCP connection with another system but the below code emulates the problem exactly.
I have a Tcp server that simply writes "Hello World" to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
let listener = io::TcpListener::bind("127.0.0.1", 8080);
let mut acceptor = listener.listen();
for stream in acceptor.incoming() {
match stream {
Err(_) => { /* connection failed */ }
Ok(stream) => spawn(proc() {
handle(stream);
})
}
}
drop(acceptor);
}
fn handle(mut stream: io::TcpStream) {
stream.write(b"Hello Connection");
loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
loop {
match socket.read_byte() {
Ok(i) => print!("{}", i),
Err(e) => {
println!("Error: {}", e);
break
}
}
}
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want; I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with - I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O right now. There are some attempts to rectify the situation, but they are far from complete. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there's still a lot to do, and as far as I'm aware there's nothing really usable now.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];
loop {
socket.set_read_timeout(Some(5000));
match socket.recv_from(buf) {
Ok((amt, src)) => { /* handle successful read */ }
Err(ref e) if e.kind == TimedOut => {} // continue
Err(e) => fail!("error receiving data: {}", e) // bail out
}
// do other work, check exit flags, for example
}
Here recv_from will return an IoError with kind set to TimedOut if no data arrives on the socket within 5 seconds of the recv_from call. You need to reset the timeout inside each loop iteration, since it behaves more like a "deadline" than a timeout: once it expires, all calls will start to fail with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it does its work.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It probably can be a good temporary (or even permanent, who knows) solution for asynchronous I/O.
I'm working on a Windows Forms application in C#. I'm using a socket client which connects to a server in an asynchronous way. I would like the socket to try to reconnect to the server immediately if the connection is broken for any reason. What is the best design to approach this problem? Should I build a thread which continuously checks whether the connection is lost and tries to reconnect to the server?
Here is the code of my XcomClient class which is handling the socket communication:
public void StartConnecting()
{
socketClient.BeginConnect(this.remoteEP, new AsyncCallback(ConnectCallback), this.socketClient);
}
private void ConnectCallback(IAsyncResult ar)
{
try
{
// Retrieve the socket from the state object.
Socket client = (Socket)ar.AsyncState;
// Complete the connection.
client.EndConnect(ar);
// Signal that the connection has been made.
connectDone.Set();
StartReceiving();
NotifyClientStatusSubscribers(true);
}
catch(Exception e)
{
if (!this.socketClient.Connected)
StartConnecting();
else
{
}
}
}
public void StartReceiving()
{
StateObject state = new StateObject();
state.workSocket = this.socketClient;
socketClient.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0, new AsyncCallback(OnDataReceived), state);
}
private void OnDataReceived(IAsyncResult ar)
{
try
{
StateObject state = (StateObject)ar.AsyncState;
Socket client = state.workSocket;
// Read data from the remote device.
int iReadBytes = client.EndReceive(ar);
if (iReadBytes > 0)
{
byte[] bytesReceived = new byte[iReadBytes];
Buffer.BlockCopy(state.buffer, 0, bytesReceived, 0, iReadBytes);
this.responseList.Enqueue(bytesReceived);
StartReceiving();
receiveDone.Set();
}
else
{
NotifyClientStatusSubscribers(false);
}
}
catch (SocketException e)
{
NotifyClientStatusSubscribers(false);
}
}
At the moment I try to detect a disconnection by checking the number of bytes received or by catching a socket exception.
If your application only receives data on a socket, then in most cases, you will never detect a broken connection. If you don't receive any data for a long time, you don't know if it's because the connection is broken or if the other end simply hasn't sent any data. You will, of course, detect (as EOF on the socket) connections closed by the other end in the normal fashion despite this.
In order to detect a broken connection, you need a keepalive. You need to either:
make the other end guarantee that it will send data on a set schedule, and you time out and close the connection if you don't get it, or,
send a probe to the other end once in a while. In this case the OS will take care of noticing a broken connection and you will get an error reading the socket if it's broken, either promptly (connection reset by peer) or eventually (connection timed out).
Either way, you need a timer. Whether you implement the timer as an event in an event loop or as a thread that sleeps is up to you and the best solution probably depends on how the rest of your application is structured. If you have a main thread that runs an event loop then it's probably best to hook in to that.
You can also enable the TCP keepalives option on the socket, but an application-layer keepalive is generally considered more robust.
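For reference, enabling the TCP-level option is a one-liner at the socket layer. The sketch below uses plain BSD-socket setsockopt terms (in .NET the same option is exposed via Socket.SetSocketOption with SocketOptionName.KeepAlive); keep in mind it only detects a broken connection after the OS keepalive timers fire, which can take a long time with default settings:
// Sketch: turning on OS-level TCP keepalive probes for a connected socket "fd".
#include <sys/socket.h>
int enable_keepalive(int fd)
{
    int on = 1;
    // After the connection has been idle for the system's keepalive interval,
    // the OS sends probes; if they go unanswered, reads/writes start failing.
    return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
}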
I am implementing a server that sends XML to clients using Boost. The problem I am facing is that the buffer doesn't get sent immediately; it accumulates up to a point and then the whole thing is sent. This causes a problem on my client side: when it parses the XML, it may see an incomplete XML tag (an incomplete message). Is there a way in Boost to flush out the socket whenever it needs to send out a message? Below is the server's write code.
void
ClientConnection::handle_write(const boost::system::error_code& error)
{
if (!error)
{
m_push_message_queue.pop_front ();
if (!m_push_message_queue.empty () && !m_disconnected)
{
boost::asio::async_write(m_socket,
boost::asio::buffer(m_push_message_queue.front().data(),
m_push_message_queue.front().length()),
boost::bind(&ClientConnection::handle_write, this,
boost::asio::placeholders::error));
}
}
else
{
std::cerr << "Error writing out message...\n";
m_disconnected = true;
m_server->DisconnectedClient (this);
}
}
Typically when creating applications using TCP byte streams the sender sends a fixed length header so the receiver knows how many bytes to expect. Then the receiver reads that many bytes and parses the resulting buffer into an XML object.
I assume you are using a TCP connection. TCP is a stream protocol, so you can't assume your data will arrive in one big packet. You need to fix your communication design, either by sending the size first, as in San Miller's answer, or by sending a flag or delimiter after all the XML data has been sent.
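As an illustration of the length-prefix approach, here is a sketch using synchronous Boost.Asio calls and made-up function names (the question's async_write queue would frame its messages the same way):
// Sketch: length-prefixed framing over a connected Boost.Asio TCP socket.
#include <boost/asio.hpp>
#include <array>
#include <cstdint>
#include <string>
#include <vector>
using boost::asio::ip::tcp;
void send_framed(tcp::socket &socket, const std::string &xml)
{
    std::uint32_t len = static_cast<std::uint32_t>(xml.size());
    // 4-byte big-endian length header, written by hand to stay portable.
    std::array<unsigned char, 4> header = {
        static_cast<unsigned char>(len >> 24), static_cast<unsigned char>(len >> 16),
        static_cast<unsigned char>(len >> 8),  static_cast<unsigned char>(len)};
    std::vector<boost::asio::const_buffer> buffers;
    buffers.push_back(boost::asio::buffer(header));
    buffers.push_back(boost::asio::buffer(xml));
    boost::asio::write(socket, buffers);   // header + body as one logical message
}
std::string receive_framed(tcp::socket &socket)
{
    std::array<unsigned char, 4> header{};
    boost::asio::read(socket, boost::asio::buffer(header));
    std::uint32_t len = (std::uint32_t(header[0]) << 24) | (std::uint32_t(header[1]) << 16) |
                        (std::uint32_t(header[2]) << 8)  |  std::uint32_t(header[3]);
    std::string body(len, '\0');
    boost::asio::read(socket, boost::asio::buffer(&body[0], body.size()));
    return body;                            // exactly one complete XML message
}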
Assuming you are definitely going to have some data on the socket you want to clear, you could do something like this:
void flush_socket()
{
boost::asio::streambuf b;
boost::asio::streambuf::mutable_buffers_type bufs = b.prepare(BUFFER_SIZE);
std::size_t bytes = socket_.receive(bufs); // !!! This will block until some data becomes available
b.commit(bytes);
boost::asio::socket_base::bytes_readable command(true);
socket_.io_control(command);
while(command.get())
{
bufs = b.prepare(BUFFER_SIZE);
bytes = socket_.receive(bufs);
b.commit(bytes);
socket_.io_control(command); // reset for bytes pending
}
return;
}
where socket_ is a member variable.
HTH