Boost ASIO asynchronous socket with timeout - sockets

I am trying to find the proper / canonical way to implement the code below, which provides a synchronous wrapper around async asio methods in order to have a timeout. The code appears to work, but none of the examples I have looked at use the boolean in the lambda to terminate the do/while loop that runs the I/O service, so I'm not sure if this is the proper form or if it will have unintended consequences down the road. Some do things like
while (IOService.run_one());
but that never terminates.
Edit:
I'm trying to follow this example:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/example/timeouts/blocking_tcp_client.cpp
But in this code they avoid needing the number of bytes read by using a \n terminator. I need the number of bytes read, hence the callback.
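For reference, the pattern in that example drives run_one() until an error-code sentinel changes; adapted to also capture the byte count, it would look roughly like this (paraphrased and untested, using the member names from my code below):
boost::system::error_code ec = boost::asio::error::would_block;
std::size_t bytes_received = 0;

SessionSocket->async_receive(
    boost::asio::buffer(receive_buffer_, 1024),
    [&](const boost::system::error_code &error, std::size_t n)
    {
        ec = error;
        bytes_received = n;
    });

// run_one() keeps dispatching handlers (including the deadline's) until the
// sentinel error code is overwritten by the completion handler above.
do
    IOService.run_one();
while (ec == boost::asio::error::would_block);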
I have seen many other solutions that use boost async futures as well as other methods, but they do not seem to compile with the versions of gcc / boost that are standard on Ubuntu 16.04, and I would like to stay with those versions.
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout)
{
    Deadline.expires_from_now(timeout);
    auto bytes_received = 0lu;
    auto got_callback = false;
    SessionSocket->async_receive(boost::asio::buffer(receive_buffer_, 1024),
        [&bytes_received, &got_callback](const boost::system::error_code &error,
                                         std::size_t bytes_transferred) {
            bytes_received = bytes_transferred;
            got_callback = true;
        });
    do
    {
        IOService.run_one();
    } while (!got_callback);
    auto bytes = ByteArray(receive_buffer_, receive_buffer_ + bytes_received);
    return bytes;
}

This is how I'd do it. Whichever event fires first cancels the other, and io_service::run() returns once both handlers have completed.
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout)
{
    Deadline.expires_from_now(timeout); // I assume this is a member of SessionInfo
    auto got_callback{false};
    auto result = ByteArray();
    SessionSocket->async_receive( // idem for SessionSocket
        boost::asio::buffer(receive_buffer_, 1024),
        [&](const boost::system::error_code ec,
            std::size_t bytes_received)
        {
            if (!ec)
            {
                result = ByteArray(receive_buffer_, receive_buffer_ + bytes_received);
                got_callback = true;
            }
            Deadline.cancel();
        });
    Deadline.async_wait([&](const boost::system::error_code ec)
        {
            if (!ec)
            {
                SessionSocket->cancel();
            }
        });
    IOService.run();
    return result;
}

Reading the conversation below M. Roy's answer, your goal is to make sure that IOService.run(); returns. All points are valid: an instance of boost::asio::io_service should only be run once at a time per thread of execution (not simultaneously, though it can be run multiple times in series), so it is imperative to know how it is used elsewhere. That said, to make the IOService stop I would amend M. Roy's solution like so:
ByteArray SessionInfo::Read(const boost::posix_time::time_duration &timeout) {
    Deadline.expires_from_now(timeout);
    auto got_callback{false};
    auto result = ByteArray();
    SessionSocket->async_receive(
        boost::asio::buffer(receive_buffer_, 1024),
        [&](const boost::system::error_code ec,
            std::size_t bytes_received) {
            if (!ec) {
                result = ByteArray(receive_buffer_, receive_buffer_ + bytes_received);
                got_callback = true;
            }
            Deadline.cancel();
        });
    Deadline.async_wait(
        [&](const boost::system::error_code ec) {
            if (!ec) {
                SessionSocket->cancel();
                IOService.stop();
            }
        });
    IOService.run();
    return result;
}
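One follow-up on this variant: once stop() has been called, later calls to run() return immediately until the io_service is reset. So if Read() is called repeatedly on the same IOService, it needs one extra step before each run. A minimal sketch, assuming the Boost version that ships with Ubuntu 16.04 (where the member is still named reset() rather than restart()):
IOService.reset(); // clear the "stopped" state left behind by a previous stop()
IOService.run();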

Related

How to avoid freezing the UI on heavy computation

Trying to decrypt JSON from server with Interceptor (from dio). But UI freezes during decryption.
class DecryptInterceptor extends Interceptor {
  @override
  Future onResponse(Response response) async {
    response.data = decrypt(response.data); // freezes here
    return super.onResponse(response);
  }
}

Object decrypt(Object object) {
  // computations
}
The asynchronous programming paradigm is based on a single-threaded model. Async optimizes CPU usage by not waiting for I/O tasks to complete. Instead, it attaches a callback to the task and tells it "call this when you are done". The thread can then handle other work while the task completes and calls the callback. This makes sense when the tasks are HTTP requests or file operations, since these are handled by other devices, not the CPU. But if the task is CPU intensive, then using async will not help.
You can have a look at Isolate, the equivalent of a thread in Dart. You can create a separate isolate and run your heavy tasks there.
There is also the compute() method. It takes a function and an argument, evaluates that function with the supplied argument on a separate isolate, and returns the result as a Future. This is much easier and gets the job done.
A dummy method that is CPU intensive:
int heavyTask(int n) {
  int z = n;
  for (var i = 0; i < n; i++) {
    i % 2 == 0 ? z-- : z += 3;
  }
  return z + n;
}
Using the compute() method to run it on a separate isolate:
compute(heavyTask, 455553000)
.then((res) => print("result is $res"));
You can use the compute() function that Flutter provides to perform tasks in another isolate. It exists exactly for such tasks.
class DecryptInterceptor extends Interceptor {
  @override
  Future onResponse(Response response) async {
    response.data = await compute(decrypt, response.data); // decrypt now runs on a separate isolate
    return super.onResponse(response);
  }
}

Object decrypt(Object object) {
  // computations
}
It has some restrictions, though, on the type of data you can pass as the argument and retrieve as the result. You can learn more here.

Vert.x: How to wait for a future to complete

Is there a way to wait for a future to complete without blocking the event loop?
An example of a use case with querying Mongo:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
// Here I need the result of the DB query
if (dbFut.succeeded()) {
    doSomethingWith(dbFut.result());
}
else {
    error();
}
I know the doSomethingWith(dbFut.result()); can be moved into the handler, yet if it's long, the code will get unreadable (callback hell?). Is that the right solution? Is that the only solution without additional libraries?
I'm aware that rxJava simplifies the code, but as I don't know it, learning Vert.x and rxJava is just too much.
I also wanted to give a try to vertx-sync. I put the dependency in the pom.xml; everything got downloaded fine but when I started my app, I got the following error
maurice#mickey> java \
-javaagent:~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar \
-jar target/app-dev-0.1-fat.jar \
-conf conf/config.json
Error opening zip file or JAR manifest missing : ~/.m2/repository/co/paralleluniverse/quasar-core/0.7.5/quasar-core-0.7.5-jdk8.jar
Error occurred during initialization of VM
agent library failed to init: instrument
I know what the error means in general, but I don't know in that context... I tried to google for it but didn't find any clear explanation about which manifest to put where. And as previously, unless mandatory, I prefer to learn one thing at a time.
So, back to the question : is there a way with "basic" Vert.x to wait for a future without perturbation on the event loop ?
You can set a handler for the future to be executed upon completion or failure:
Future<Result> dbFut = Future.future();
mongo.findOne("myusers", myQuery, new JsonObject(), res -> {
    if (res.succeeded()) {
        ...
        dbFut.complete(res.result());
    }
    else {
        ...
        dbFut.fail(res.cause());
    }
});
dbFut.setHandler(asyncResult -> {
    if (asyncResult.succeeded()) {
        // your logic here
    }
});
This is a pure Vert.x way that doesn't block the event loop.
I agree that you should not block in the Vertx processing pipeline, but I make one exception to that rule: Start-up. By design, I want to block while my HTTP server is initialising.
This code might help you:
/**
 * @return null when waiting on {@code Future<Void>}
 */
@Nullable
public static <T>
T awaitComplete(Future<T> f)
throws Throwable
{
final Object lock = new Object();
final AtomicReference<AsyncResult<T>> resultRef = new AtomicReference<>(null);
synchronized (lock)
{
// We *must* be locked before registering a callback.
// If result is ready, the callback is called immediately!
f.onComplete(
(AsyncResult<T> result) ->
{
resultRef.set(result);
synchronized (lock) {
lock.notify();
}
});
do {
// Nested sync on lock is fine. If we get a spurious wake-up before resultRef is set, we need to
// reacquire the lock, then wait again.
// Ref: https://stackoverflow.com/a/249907/257299
synchronized (lock)
{
// @Blocking
lock.wait();
}
}
while (null == resultRef.get());
}
final AsyncResult<T> result = resultRef.get();
@Nullable
final Throwable t = result.cause();
if (null != t) {
throw t;
}
@Nullable
final T x = result.result();
return x;
}

Read all available bytes from TCP Socket (unknown byte count)

I am having problems using the Indy TIdTCPClient.
I want to call a function every time there is data available on the socket. For this I have a thread calling IdTCPClient->Socket->Readable(100).
The function itself looks like this:
TMemoryStream *mStream = new TMemoryStream;
int len = 0;
try
{
if(!Form1->IdTCPClient2->Connected())
Form1->IdTCPClient2->Connect();
mStream->Position = 0;
do
{
Form1->IdTCPClient2->Socket->ReadStream(mStream, 1);
}
while(Form1->IdTCPClient2->Socket->Readable(100));
len = mStream->Position;
mStream->Position = 0;
mStream->Read(Buffer, len);
}catch(Exception &Ex) {
Form1->DisplaySSH->Lines->Add(Ex.Message);
Form1->DisplaySSH->GoToTextEnd();
}
delete mStream;
It will not be called directly within the thread; instead, the thread triggers an event, which calls this function. This means I am using Readable(100) twice without reading data in between.
Since I don't know how many bytes I have to read, I thought I could read one byte, check if there is more available, and then read another byte.
The problem here is that the do/while loop doesn't loop, it just runs once.
I am guessing that Readable() does not quite work the way I need it to.
Is there any other way to receive all the bytes available on the socket?
You should not be using Readable() directly in this situation. That call reports whether the underlying socket has pending unread data in its internal kernel buffer. That does not take into account that the TIdIOHandler may already have unread data in its InputBuffer that is left over from a previous read operation.
Use the TIdIOHandler::CheckForDataOnSource() method instead of TIdIOHandler::Readable():
TMemoryStream *mStream = new TMemoryStream;
try
{
if (!Form1->IdTCPClient2->Connected())
Form1->IdTCPClient2->Connect();
mStream->Position = 0;
do
{
if (Form1->IdTCPClient2->IOHandler->InputBufferIsEmpty())
{
if (!Form1->IdTCPClient2->IOHandler->CheckForDataOnSource(100))
break;
}
Form1->IdTCPClient2->IOHandler->ReadStream(mStream, Form1->IdTCPClient2->IOHandler->InputBuffer->Size, false);
/* alternatively:
Form1->IdTCPClient2->IOHandler->InputBuffer->ExtractToStream(mStream);
*/
}
while (true);
// use mStream as needed...
}
catch (const Exception &Ex) {
Form1->DisplaySSH->Lines->Add(Ex.Message);
Form1->DisplaySSH->GoToTextEnd();
}
delete mStream;
Or, you can alternatively use TIdIOHandler::ReadBytes() instead of TIdIOHandler::ReadStream(). If you set its AByteCount parameter to -1, it will return only the bytes that are currently available (if the InputBuffer is empty, ReadBytes() will wait up to the ReadTimeout interval for the socket to receive any new bytes) 1:
try
{
if (!Form1->IdTCPClient2->Connected())
Form1->IdTCPClient2->Connect();
TIdBytes data;
do
{
if (Form1->IdTCPClient2->IOHandler->InputBufferIsEmpty())
{
if (!Form1->IdTCPClient2->IOHandler->CheckForDataOnSource(100))
break;
}
Form1->IdTCPClient2->IOHandler->ReadBytes(data, -1, true);
/* alternatively:
Form1->IdTCPClient2->IOHandler->InputBuffer->ExtractToBytes(data, -1, true);
*/
}
while (true);
// use data as needed...
}
catch (const Exception &Ex) {
Form1->DisplaySSH->Lines->Add(Ex.Message);
Form1->DisplaySSH->GoToTextEnd();
}
1: make sure you are using an up-to-date snapshot of Indy 10. Prior to Oct 6 2016, there was a logic bug in ReadBytes() when AByteCount=-1 that didn't take the InputBuffer into account before checking the socket for new bytes.

Data is getting discarded in TCP/IP with boost::asio::read_some?

I have implemented a TCP server using boost::asio. This server uses basic_stream_socket::read_some function to read data. I know that read_some does not guarantee that supplied buffer will be full before it returns.
In my project I am sending strings separated by a delimiter (if that matters). On the client side I am using the WinSock send() function to send data. Now my problem is that on the server side I am not able to get all the strings which were sent from the client side. My suspicion is that read_some is receiving some data and discarding the leftover data for some reason. Then again, in the next call it receives another string.
Is that really possible in TCP/IP?
I tried to use async_receive but that is eating up all my CPU; also, since the buffer has to be cleaned up by the callback function, it's causing a serious memory leak in my program. (I am using IoService::poll() to call the handler. That handler is getting called at a very slow rate compared to the calling rate of async_read().)
Again, I tried to use the free function read, but that will not solve my purpose as it blocks for too long with the buffer size I am supplying.
My previous implementation of the server was with WinSock API where I was able to receive all data using WinSock::recv().
Please give me some leads so that I can receive complete data using boost::asio.
here is my server side thread loop
void
TCPObject::receive()
{
if (!_asyncModeEnabled)
{
std::string recvString;
if ( !_tcpSocket->receiveData( _maxBufferSize, recvString ) )
{
LOG_ERROR("Error Occurred while receiving data on socket.");
}
else
_parseAndPopulateQueue ( recvString );
}
else
{
if ( !_tcpSocket->receiveDataAsync( _maxBufferSize ) )
{
LOG_ERROR("Error Occurred while receiving data on socket.");
}
}
}
receiveData() in TCPSocket
bool
TCPSocket::receiveData( unsigned int bufferSize, std::string& dataString )
{
boost::system::error_code error;
char *buf = new char[bufferSize + 1];
size_t len = _tcpSocket->read_some( boost::asio::buffer((void*)buf, bufferSize), error);
if(error)
{
LOG_ERROR("Error in receiving data.");
LOG_ERROR( error.message() );
_tcpSocket->close();
delete [] buf;
return false;
}
buf[len] ='\0';
dataString.insert( 0, buf );
delete [] buf;
return true;
}
receiveDataAsync in TCP Socket
bool
TCPSocket::receiveDataAsync( unsigned int bufferSize )
{
char *buf = new char[bufferSize + 1];
try
{
_tcpSocket->async_read_some( boost::asio::buffer( (void*)buf, bufferSize ),
boost::bind(&TCPSocket::_handleAsyncReceive,
this,
buf,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred) );
//! Asks io_service to execute callback
_ioService->poll();
}
catch (std::exception& e)
{
LOG_ERROR("Error Receiving Data Asynchronously");
LOG_ERROR( e.what() );
delete [] buf;
return false;
}
//we dont delete buf here as it will be deleted by callback _handleAsyncReceive
return true;
}
Asynch Receive handler
void
TCPSocket::_handleAsyncReceive(char *buf, const boost::system::error_code& ec, size_t size)
{
if(ec)
{
LOG_ERROR ("Error occurred while sending data Asynchronously.");
LOG_ERROR ( ec.message() );
}
else if ( size > 0 )
{
buf[size] = '\0';
emit _asyncDataReceivedSignal( QString::fromLocal8Bit( buf ) );
}
delete [] buf;
}
Client Side sendData function.
sendData(std::string data)
{
if(!_connected)
{
return;
}
const char *pBuffer = data.c_str();
int bytes = data.length() + 1;
int i = 0,j;
while (i < bytes)
{
j = send(_connectSocket, pBuffer+i, bytes-i, 0);
if(j == SOCKET_ERROR)
{
_connected = false;
if(!_bNetworkErrNotified)
{
_bNetworkErrNotified=true;
emit networkErrorSignal(j);
}
LOG_ERROR( "Unable to send Network Packet" );
break;
}
i += j;
}
}
Boost.Asio's TCP capabilities are pretty well used, so I would be hesitant to suspect it is the source of the problem. In most cases of data loss, the problem is the result of application code.
In this case, there is a problem in the receiver code. The sender is delimiting strings with \0. However, the receiver fails to properly handle the delimiter in cases where multiple strings are read in a single read operation, as string::insert() will truncate the char* when it reaches the first delimiter.
For example, the sender writes two strings "Test string\0" and "Another test string\0". In TCPSocket::receiveData(), the receiver reads "Test string\0Another test string\0" into buf. dataString is then populated with dataString.insert(0, buf). This particular overload will copy up to the delimiter, so dataString will contain "Test string". To resolve this, consider using the string::insert() overload that takes the number of characters to insert: dataString.insert(0, buf, len).
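To make the difference concrete, here is a minimal standalone sketch (the buffer contents are invented for illustration) showing how the two string::insert() overloads behave on a buffer that holds two NUL-delimited strings:
#include <iostream>
#include <string>

int main()
{
    // Two strings delimited by '\0' received in a single read, as described above.
    const char buf[] = "Test string\0Another test string\0";
    const std::size_t len = sizeof(buf) - 1; // 32 bytes actually "received"

    std::string truncated;
    truncated.insert(0, buf);      // stops at the first '\0' -> "Test string"
    std::string complete;
    complete.insert(0, buf, len);  // copies all 32 received bytes

    std::cout << truncated.size() << '\n'; // prints 11
    std::cout << complete.size()  << '\n'; // prints 32
}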
I have not used the poll function before. What I did is create a worker thread dedicated to processing ASIO handlers with the run function, which blocks. The Boost documentation says that each thread that is to be made available to process async event handlers must first call the io_service::run or io_service::poll method. I'm not sure what else you are doing with the thread that calls poll.
So, I would suggest dedicating at least one worker thread to the async ASIO event handlers and using run instead of poll. If you want that worker thread to continue processing all async messages without returning and exiting, then add a work object to the io_service object. See this link for an example.
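A minimal sketch of that setup (the names are illustrative and not taken from your code; adapt as needed):
#include <boost/asio.hpp>
#include <thread>

int main()
{
    boost::asio::io_service io_service;

    // The work object keeps run() from returning while no handlers are pending.
    boost::asio::io_service::work work(io_service);

    // Dedicated worker thread that processes all asynchronous handlers.
    std::thread worker([&io_service]() { io_service.run(); });

    // ... initiate async_read_some / async_receive operations against
    //     sockets bound to io_service from the rest of the program ...

    io_service.stop(); // let run() return when shutting down
    worker.join();
}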

Android InputStream

I am learning Android but I can't get past InputStream.read().
This is just a socket test - the server sends back two bytes when it receives a connection, and I know that this is working fine. All I want to do is read these values. The b = data.read() call reads both values in turn but then hangs; it never returns -1, which is what I expect it to do. Also, it does not throw an exception.
Any ideas?
Thanks.
protected void startLongRunningOperation() {
    // Fire off a thread to do some work that we shouldn't do directly in the UI thread
    Thread t = new Thread() {
        public void run() {
            try {
                Log.d("Socket", "try connect ");
                Socket sock = new Socket("192.168.0.12", 5001);
                Log.d("socket", "connected");
                InputStream data = sock.getInputStream();
                int b = 0;
                while (b != -1) {
                    b = data.read();
                }
                data.close();
            } catch (Exception e) {
                Log.d("Socket", e.toString());
            }
        }
    };
    t.start();
}
Reaching the end of the stream is a special state. It doesn't happen just because there is nothing left to read. If the stream is still open, but there's nothing to be read, it will "hang" (or block) as you've noticed until a byte comes across.
To do what you want, the server either needs to close/end the stream, or you need to use:
while (data.available() > 0) {
..
When the number of available bytes is zero, there's nothing sitting in the stream buffer to be read.
On the other hand, if you know that there should only ever be two bytes to read, and that's the end of your data, then just read the two bytes and move on (i.e. don't use a while loop). The reason to use a while loop here would only be if you weren't sure how many total bytes to expect.