Will Boost.Asio's async_read_some read the whole message? - sockets

I am trying to receive a packet via HTTP in my C++ program. I am using Boost.Asio for that purpose.
In my handleAccept method I have this code to read the packet:
// m_buffer is declared as: std::array<char, 8192> m_buffer;
auto buf = boost::asio::buffer(m_buffer);
newConnection->getSocket().async_read_some(
    buf,
    boost::bind(&TcpServer::handleRead, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
Is this enough to receive any packet? How do I make sure that it also receives packets bigger than 8192 bytes?

async_read_some reads whatever has arrived, up to the size of the buffer that you give it; it may well return less than a complete message. The number of bytes actually read is passed to your callback in the bytes_transferred placeholder, see basic_stream_socket::async_read_some.
If you know the maximum size of the messages that you expect to receive, you can simply use a buffer large enough for such a message. Otherwise, you'll have to copy the contents of the async_read_some buffer elsewhere and assemble the complete message there.
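A minimal sketch of that accumulation pattern (this is not the asker's exact TcpServer; it assumes a small reader class that owns the socket, and it treats end-of-stream as end-of-message, though any framing rule could be substituted):

#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

class Reader : public std::enable_shared_from_this<Reader> {
public:
    explicit Reader(tcp::socket socket) : m_socket(std::move(socket)) {}

    void start() { readChunk(); }

private:
    void readChunk() {
        auto self = shared_from_this();
        m_socket.async_read_some(
            boost::asio::buffer(m_chunk),
            [this, self](const boost::system::error_code& ec,
                         std::size_t bytesTransferred) {
                if (!ec) {
                    // Append this chunk and keep reading; one logical
                    // message may arrive split across many reads.
                    m_message.append(m_chunk.data(), bytesTransferred);
                    readChunk();
                } else if (ec == boost::asio::error::eof) {
                    // Peer closed the connection: the message is complete.
                    std::cout << m_message << std::endl;
                }
            });
    }

    tcp::socket m_socket;
    std::array<char, 8192> m_chunk;
    std::string m_message;
};

Here start() would be called from the accept handler with the freshly accepted socket; the 8192-byte array is only a per-read chunk, so messages of any size accumulate in m_message.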


What is the correct behavior of C_Decrypt in PKCS#11?

I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext, which is 272 bytes long, should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that, according to the standard, invoking C_Decrypt with a NULL output buffer may return an output length somewhat longer than the actual required length. With padding this is understandable: the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario above, does it make sense that I still get a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To be clear: I am indicating this length in the appropriate output buffer length parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a SafeNet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL as the output buffer, or is this a bug on the HSM/PKCS#11 library side?
One more thing I should mention: when I provide a 272-byte (256+16) output buffer, the call succeeds and I get back my expected plaintext, but also the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is correctly updated to 256, not 272. This also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)
I have used the CKM_AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plaintext, the second to do the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plaintext.
Below is the step-by-step code to show the behavior of decryption:
// Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
// Variable to hold the size of the plain text (initialized to zero)
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate a buffer to hold the plain text
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
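For reference, the same two-call pattern against the raw PKCS#11 C API might look roughly like this (a sketch only: the header name varies by vendor, session/hKey/iv/cipher/cipherLen are assumed to come from earlier setup, and the return-value checks are trimmed):

#include <vector>
#include "pkcs11.h"  // vendor-supplied Cryptoki header

std::vector<CK_BYTE> decrypt(CK_SESSION_HANDLE session, CK_OBJECT_HANDLE hKey,
                             CK_BYTE* iv,  // 16-byte IV
                             CK_BYTE* cipher, CK_ULONG cipherLen) {
    // AES-CBC with PKCS#7 padding; the IV is the mechanism parameter.
    CK_MECHANISM mech = { CKM_AES_CBC_PAD, iv, 16 };
    C_DecryptInit(session, &mech, hKey);

    // 1st call: a NULL_PTR output buffer asks for the (worst-case,
    // possibly padded) plaintext length.
    CK_ULONG plainLen = 0;
    C_Decrypt(session, cipher, cipherLen, NULL_PTR, &plainLen);

    // 2nd call: actual decryption; on return plainLen holds the exact
    // unpadded length, so shrink the buffer down to it.
    std::vector<CK_BYTE> plain(plainLen);
    C_Decrypt(session, cipher, cipherLen, plain.data(), &plainLen);
    plain.resize(plainLen);
    return plain;
}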
Sometimes decryption fails because the input to the encryption was misleading for the decryption algorithm (the encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme preserves it from being misinterpreted as network control bytes.

Reading from Socket Stream Blocking After Retrieval

I'm currently attempting to read an incoming message from a client socket that, prior to the procedure below, has already been connected to the server socket. The procedure below outputs the message one character at a time as it retrieves it from the stream.
The problem is that, when the stream is out of information, the call to Ada.Streams.Read blocks and stops the application flow completely. According to some examples, Offset should be set to 0 automatically at the end of the stream, but that never happens; instead, the application stops at the call to Read.
procedure Read_From (Channel : Sockets.Stream_Access) is
   use Ada.Text_IO;
   use Ada.Streams;
   Data   : Stream_Element_Array (1 .. 1);
   Offset : Stream_Element_Offset;
begin
   loop
      Read (Channel.all, Data, Offset);
      exit when Offset = 0;
      Put (Character'Val (Data (1)));
   end loop;
   --  The application never reaches this point.
   New_Line;
   Put_Line ("Finished reading from client!");
end Read_From;
--  @param Channel  GNAT.Sockets.Stream (Client_Socket)
I've also attempted the same process with GNAT.Sockets.Receive_Socket, but the same issue remains: the application flow stops completely, presumably awaiting further information from the stream, even though there is nothing more to retrieve.
Any pointers in the right direction would be highly appreciated!
Normally, you’d read a (binary) message from a stream knowing how much data needed to be read, so you could read until you’d got that much.
But, if you’re reading a text message from an externally-defined source, as it might be an HTTP request, there needs to be some terminator sequence so you can read character-by-character until you’ve read the terminator. In the case of an HTTP request, that’s a CR/LF/CR/LF sequence. Or it could be a null-terminated C string, in which case you’d be looking for the ASCII.NUL.
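To illustrate that terminator-based framing (sketched in C++ since the idea is language-independent; readByte is a hypothetical stand-in for whatever blocking single-byte read the transport provides, e.g. Ada.Streams.Read with a one-element array):

#include <functional>
#include <string>

// Read single bytes until the accumulated text ends with the HTTP
// header terminator CR/LF/CR/LF, then return everything read so far.
std::string read_until_blank_line(const std::function<char()>& readByte) {
    std::string request;
    while (request.size() < 4 ||
           request.compare(request.size() - 4, 4, "\r\n\r\n") != 0) {
        request.push_back(readByte());
    }
    return request;
}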
The Ada way to transfer variable-length text is to use String’Output/String’Input (see ARM 13.13.2(18)ff). What happens for a String (an array of Character) is that first the bounds are sent, then the content; on reception, the bounds are read, a String with those bounds is created, and the required number of bytes are read into the new String, which is then returned.
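The same bounds-then-content idea, sketched outside the Ada streams machinery for illustration (hedged: this sends a single length prefix rather than Ada's first/last bounds pair, and writeBytes/readBytes are hypothetical primitives that move exactly n raw bytes):

#include <cstdint>
#include <functional>
#include <string>

void send_string(const std::function<void(const void*, size_t)>& writeBytes,
                 const std::string& s) {
    uint32_t len = static_cast<uint32_t>(s.size());
    writeBytes(&len, sizeof(len));  // the bounds go first...
    writeBytes(s.data(), len);      // ...then the content
}

std::string recv_string(const std::function<void(void*, size_t)>& readBytes) {
    uint32_t len = 0;
    readBytes(&len, sizeof(len));   // read the bounds
    std::string s(len, '\0');
    readBytes(&s[0], len);          // read exactly that many bytes
    return s;
}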
Basically, that's how Ada streams work: the end of the stream only comes when the connection itself is closed, not when you merely run out of currently buffered data.
If you want to be able to interrupt reading, you have to use another representation of the connection than GNAT.Sockets.Stream_Access.

Using GSocketClient, how do I read incoming data without knowing how many incoming bytes there will be?

I am still struggling to be able to read incoming response messages from a piece of hardware my program is communicating with.
I am using a GSocketClient and am able to connect and successfully send messages using g_output_stream_write(). I then want to read the response sent back from the device, but I have no way of knowing how many bytes the reply will be in order to use g_input_stream_read(). I have also tried using g_input_stream_read_all(), but this seems to block the application and never return. I don't know how g_input_stream_read_all() determines that it has reached the end of a stream, but I assume the problem is somewhere there?
I know that there is incoming data because I can use g_input_stream_read() with a made-up byte size like 5 and I then see the first 5 incoming bytes, but the response size will always be different.
So my question is: is there a way to determine how much data is waiting to be read, so that I can pass that to g_input_stream_read() as the size to read? And if not, what is the correct usage of g_input_stream_read_all() so that it does not block the way I am seeing?
Does something like the following work?
#include <gio/gio.h>

#define BUF_SIZE 1024

/* istream is the GInputStream taken from the GSocketConnection,
 * e.g. via g_io_stream_get_input_stream(). */
guint8 buffer[BUF_SIZE];
GByteArray *array = g_byte_array_new ();
gsize bytes_read;
GError *error = NULL;

while (g_input_stream_read_all (istream, buffer, BUF_SIZE, &bytes_read, NULL, &error))
  {
    g_byte_array_append (array, buffer, bytes_read);
    if (bytes_read < BUF_SIZE)
      break;  /* we've reached the end of the stream */
  }

if (error != NULL)
  {
    /* error handling code */
  }
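If read_all still blocks because the device keeps the connection open rather than closing it after the reply, a hedged alternative (continuing with the buffer, array and error variables from the snippet above) is plain g_input_stream_read(), which returns as soon as at least one byte is available instead of waiting for the whole buffer to fill:

gssize n;
/* g_input_stream_read() returns as soon as some data has arrived:
 * n > 0 is the byte count, 0 means end of stream, -1 means error. */
while ((n = g_input_stream_read (istream, buffer, BUF_SIZE, NULL, &error)) > 0)
  {
    g_byte_array_append (array, buffer, (guint) n);
    /* Decide from the protocol itself whether the reply is complete;
     * without a length header or terminator there is no general way
     * to know when to stop. */
  }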

Weird Winsock recv() slowdown

I'm writing a little VoIP app similar to Skype, which works quite well right now, but I've run into a very strange problem.
In one thread, I call the Winsock recv() function twice per iteration of a while(true) loop to get data from a socket.
The first call gets 2 bytes, which are cast to a short, while the second call gets the rest of the message, which looks like:
Complete message: [2-byte header | message, length determined by the 2-byte header]
These packets arrive at roughly 49 per second, which comes to roughly 3000 bytes/sec.
The content of these packets is audio data that gets converted into wave.
With ioctlsocket() I determine whether there is any data left on the socket after each "message" I receive (2 bytes + data). If there is something on the socket right after I received a message within the thread's while(true) loop, the next message is received but thrown away, to work against latency stacking up.
This concept works very well, but here's the problem:
While my VoIP program is running and I download a file in parallel (e.g. via a browser), too much data piles up on the socket, because the recv() loop actually seems to slow down while downloading. This happens in every download/upload situation apart from the actual VoIP up/download.
I don't know where this behaviour comes from, but when I cancel every up/download apart from my application's VoIP traffic, my app works perfectly again.
When the program runs correctly, the ioctlsocket() call writes 0 into the bytesLeft variable, which is defined in the class that the receive function belongs to.
Does somebody know where this comes from? I'll attach my receive function below:
std::string D_SOCKETS::receive_message() {
    // Read the 2-byte length header into val.
    recv(ClientSocket, (char*)&val, sizeof(val), MSG_WAITALL);

    // Read exactly val bytes of payload.
    receivedBytes = recv(ClientSocket, buffer, val, MSG_WAITALL);
    if (receivedBytes != val) {
        printf("SHORT: %d PAKET: %d ERROR: %d", val, receivedBytes, WSAGetLastError());
        exit(128);
    }

    // How many bytes are still queued on the socket?
    ioctlsocket(ClientSocket, FIONREAD, &bytesLeft);
    cout << "Bytes left on the socket: " << bytesLeft << endl;

    if (bytesLeft > 20) {
        // Message was received, but is thrown away to fight latency.
        return std::string();
    }
    return std::string(buffer, receivedBytes);
}
There is no need to use ioctlsocket() to discard data; needing to would indicate a bug in your protocol design. Assuming you are using TCP (you did not say), there should not be any leftover data if your 2-byte header is always accurate. After reading the 2-byte header and then reading the specified number of bytes, the next bytes you receive constitute your next message, and should not be discarded simply because they exist.
The fact that ioctlsocket() reports more bytes available means that you are receiving messages faster than you are reading them from the socket. Make your reading code run faster; don't throw away good data because of your slowness.
Your reading model is not efficient. Instead of reading 2 bytes, then X bytes, then 2 bytes, and so on, use a larger buffer to read more raw data from the socket at one time: use ioctlsocket() to learn how many bytes are available, read at least that many bytes in one call, and append them to the end of your buffer. Then parse as many complete messages as the buffer contains before reading more raw data from the socket. The more data you can read at a time, the faster you receive data. A sketch of this follows below.
To speed things up even more, don't process the messages directly inside the loop, either. Do the processing in another thread instead: have the reading loop put complete messages in a queue and go back to reading, and have a processing thread pull from the queue whenever messages are available.
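A hedged sketch of that buffered approach for a blocking Winsock socket (the names pending and messages are illustrative, not from the original code; like the original, it assumes the 2-byte length is in the receiver's native byte order):

#include <winsock2.h>
#include <cstring>
#include <string>
#include <vector>

// Pull whatever has arrived into one growing buffer, then peel off every
// complete [2-byte length | payload] message the buffer contains.
bool read_messages(SOCKET s, std::string& pending,
                   std::vector<std::string>& messages) {
    char chunk[4096];
    int n = recv(s, chunk, sizeof(chunk), 0);  // reads whatever is available
    if (n <= 0)
        return false;                          // connection closed or error
    pending.append(chunk, n);

    while (pending.size() >= 2) {
        unsigned short len;
        std::memcpy(&len, pending.data(), 2);
        if (pending.size() < 2u + len)
            break;                             // payload not complete yet
        messages.emplace_back(pending, 2, len);
        pending.erase(0, 2u + len);
    }
    return true;
}

A reading thread would call read_messages() in a loop and hand each entry of messages to a processing queue, as described above.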

socket receive loop never returns

I have a loop that reads from a socket in Lua:
socket = nmap.new_socket()
socket:connect(host, port)
socket:set_timeout(15000)
socket:send(command)

repeat
    response, data = socket:receive_buf("\n", true)
    output = output .. data
until data == nil
Basically, the last line of the data does not contain a "\n" character, so it is never read from the socket, and this loop just hangs and never completes. I basically need it to return whenever the "\n" delimiter is not recognised. Does anyone know a way to do this?
Cheers
Updated to include socket code.
Update2
OK, I have got around the initial problem of waiting for a "\n" character by using the receive_bytes method.
New code:
-- socket set up as above
repeat
    data = nil
    response, data = socket:receive_bytes(5000)
    output = output .. data
until data == nil
return output
This works and I get the complete block of data back. But I need to reduce the buffer size from 5000 bytes, as this is used in a recursive function and memory usage could get very high. I'm still having problems with my "until" condition, however, and if I reduce the buffer to a size that forces the method to loop, it just hangs after one iteration.
Update3
I have gotten around this problem using string.match with receive_bytes. I take in at least 80 bytes at a time, and string.match then checks whether the data variable contains a certain pattern; if so, the loop exits. It's not the cleanest solution, but it works for what I need it to do. Here is the code:
repeat
    response, data = socket:receive_bytes(80)
    output = output .. data
until string.match(data, "pattern")
return output
I believe the only way to deal with this situation on a socket is to set a timeout.
The following link has a little bit of info, though it is about HTTP sockets: lua http socket timeout
There is also this one (9.4 - Non-Preemptive Multithreading): http://www.lua.org/pil/9.4.html
And this question: http://lua-list.2524044.n2.nabble.com/luasocket-howto-read-write-Non-blocking-TPC-socket-td5792021.html
A good discussion of sockets can be found at this link:
http://nitoprograms.blogspot.com/2009/04/tcpip-net-sockets-faq.html
It's .NET but the concepts are general.
See update 3. Because the last part of the data always matches the same pattern, I can read in a block of bytes and check each time whether that block contains the pattern. If it does, that means it is the end of the data, so I append it to the output variable and exit.