What is the correct behavior of C_Decrypt in PKCS#11?

I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that according to the standard, when invoking C_Decrypt with a NULL output buffer, the function may return an output length that is somewhat longer than the actual required length. This is understandable in particular when padding is used, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario described above, does it make sense that I still get a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To be clear: I am indicating that this is the length of the output buffer in the appropriate output buffer length parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a Safenet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL as the output buffer, or is this a bug on the HSM/PKCS#11 library side?
One more thing I should perhaps mention: when I provide a 272-byte (256+16) output buffer, the call succeeds and I get back my expected plaintext, but also the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is correctly updated to 256, not 272 - this also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)

I have used the CKM.AES_CBC_PAD padding mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (1st: to get the size of the plain text, 2nd: the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plain text.
Below is the step-by-step code to show the behavior of decryption:
//Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
//Initialize to zero -> variable to hold size of plain text
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate space to the buffer to store plain text.
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
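For reference, the same two-call pattern against the raw PKCS#11 C API looks roughly like this (a sketch only; hSession, hKey, iv, cipher and cipherLen are assumed to exist, and every CK_RV should of course be checked):
#include <stdlib.h>
#include "pkcs11.h"   /* Cryptoki header shipped with the vendor library */

CK_MECHANISM mech = { CKM_AES_CBC_PAD, iv, sizeof(iv) };
CK_ULONG plainLen = 0;
CK_BYTE_PTR plain;

C_DecryptInit(hSession, &mech, hKey);
/* 1st call: NULL output buffer. The token reports an upper bound on the
 * plaintext length; with CKM_AES_CBC_PAD this may include a full pad block. */
C_Decrypt(hSession, cipher, cipherLen, NULL_PTR, &plainLen);

plain = malloc(plainLen);
/* 2nd call: actual decryption. plainLen is updated to the exact plaintext
 * length (256 in the question's example) once the padding is stripped. */
C_Decrypt(hSession, cipher, cipherLen, plain, &plainLen);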
Sometimes decryption fails because the bytes fed to encryption were not what you thought they were (the encryption itself succeeds, but the corresponding decryption fails or yields garbage). So it is important to be deliberate about how you turn your input into bytes: if you are encrypting text, convert it with an explicit encoding such as UTF-8/16 so the data round-trips without being misinterpreted on the other side.


How to get string input from a socket in D?

I'm using this code to listen to a port:
import std.socket;
import std.stdio;

TcpSocket listener;

int start() {
    ushort port = 61888;
    listener = new TcpSocket();
    assert(listener.isAlive);
    listener.blocking = false;
    listener.bind(new InternetAddress(port));
    listener.listen(10);
    writefln("Listening on port %d.", port);
    enum MAX_CONNECTIONS = 60;
    auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
    Socket[] reads;
    while (true)
    {
        socketSet.add(listener);
        foreach (sock; reads)
            socketSet.add(sock);
        Socket.select(socketSet, null, null);
    }
    return 0;
}
As far as I know, sockets work with bytes as they are. I want to find a way to convert these bytes (which are essentially SQL requests) to strings. How can I do so, given that the input is in UTF-8, which is a variable-length encoding?
You seem to have a few questions here.
How do I get chars from bytes?
Cast them with cast(char[]) st. This aliases the bytes, giving you a slice of the exact same data, and doesn't require a new allocation. You are not yet assuming that the bytes are valid UTF-8, but autodecoding or other parts of your program might complain if they aren't. You can run it through std.utf.validate if you want.
Or do basically the same thing with std.string.assumeUTF(st), which at least asserts on invalid UTF in debug builds.
How do I get a string from char[]?
You can unsafely alias the char[] with std.exception.assumeUnique(st), or you can allocate an immutable copy with st.idup or std.utf.toUTF8(st).
What if my fixed buffer of bytes contains invalid UTF-8 -- because it got cut off?
If that's a risk you can use low level std.utf tools (decodeFront and catching UTFException is one way) to peel off the valid UTF-8 and then check if you have remaining bytes, or to check that the end of the input is valid UTF-8.
How do I know if I've gotten a complete SQL statement with my fixed buffer socket I/O?
Instead of just passing the raw SQL statement over the line, you can define a network protocol that includes information like statement size, or that has 'end of statement' markers that you can read for.
I've a cheat sheet for string type conversions, which links to a file of more elaborate unittests.
Try this:
import std.exception: assumeUnique;
string s = assumeUnique(cast(char[]) ubyteArray);

STM32 - I2C - Write Sequential Data

I'm using an AT24C512 EEPROM, which is 512 Kbit (64 KB), along with my STM32.
I'm able to write 128 bytes of data at once using
HAL_I2C_Mem_Write(&_EEPROM24XX_I2C,0xa0,Address,I2C_MEMADD_SIZE_16BIT,(uint8_t*)data,size_of_data,100)
but the issue is that I want to write more data after the data that was just written, and the EEPROM will overwrite that data because the Address is the same.
So how can I skip past the already-written address?
This answer is not about using HAL with I2C, but I hope it will point you in the right direction.
Just check the datasheet (I looked at the STM32F0) and you can see that the limit is 255 bytes per transfer (register CR2:NBYTES). I'm not sure if there is another limitation in HAL, but using direct register access you can send 255 bytes at once, or fragment the transfer and send as much as you want.
For fragmenting there is the bit CR2:RELOAD: if you set it, the transfer is not stopped at the end and you can load the next NBYTES value; when you program the last block of bytes (which fits into NBYTES), clear CR2:RELOAD.
This has one disadvantage: you will be interrupted every 255 bytes.
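A rough register-level sketch of that RELOAD flow (CMSIS register names from the STM32F0 device header; addr is the left-aligned 8-bit device address such as 0xA0, and all error handling is omitted, so treat this as a starting point rather than a drop-in implementation):
#include "stm32f0xx.h"   /* CMSIS device header; adjust to your MCU family */

static void i2c_write_fragmented(I2C_TypeDef *i2c, uint8_t addr,
                                 const uint8_t *data, uint32_t len)
{
    uint32_t chunk = (len > 255) ? 255 : len;

    /* Program the slave address and the first NBYTES block; set RELOAD if
     * more than 255 bytes remain, otherwise AUTOEND, then generate START. */
    i2c->CR2 = (uint32_t)addr | (chunk << I2C_CR2_NBYTES_Pos) |
               ((len > 255) ? I2C_CR2_RELOAD : I2C_CR2_AUTOEND) | I2C_CR2_START;

    while (len > 0) {
        for (uint32_t i = 0; i < chunk; i++) {
            while (!(i2c->ISR & I2C_ISR_TXIS)) { }   /* wait for TXDR empty */
            i2c->TXDR = *data++;
        }
        len -= chunk;
        if (len == 0)
            break;

        /* With RELOAD set, TCR signals that NBYTES bytes were sent and the
         * next block size can be written; clear RELOAD on the last block. */
        while (!(i2c->ISR & I2C_ISR_TCR)) { }
        chunk = (len > 255) ? 255 : len;
        i2c->CR2 = (i2c->CR2 & ~(I2C_CR2_NBYTES | I2C_CR2_RELOAD)) |
                   (chunk << I2C_CR2_NBYTES_Pos) |
                   ((len > 255) ? I2C_CR2_RELOAD : I2C_CR2_AUTOEND);
    }

    /* AUTOEND generates the STOP condition; wait for it and clear the flag. */
    while (!(i2c->ISR & I2C_ISR_STOPF)) { }
    i2c->ICR = I2C_ICR_STOPCF;
}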
I think you should check the AT24C512 datasheet, page 7:
If more than 128 data words are transmitted to the EEPROM, the data word address will "roll over" and previous data will be overwritten. The address "roll over" during write is from the last byte of the current page to the first byte of the same page.
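To answer the "how can I skip the written address" part directly: you advance the memory address yourself by the number of bytes you just wrote, and keep each HAL_I2C_Mem_Write call within one 128-byte page so nothing rolls over. A hedged sketch (eeprom_write is a made-up helper; the device address, timeouts and page size are assumptions to check against your wiring and the datasheet):
/* include the STM32 HAL header for your MCU family, e.g. "stm32f1xx_hal.h" */

#define EEPROM_I2C_ADDR   0xA0   /* assumed A2/A1/A0 pins tied low */
#define EEPROM_PAGE_SIZE  128    /* AT24C512 page size in bytes */

/* Writes len bytes starting at EEPROM word address addr, one page at a time,
 * advancing addr so previously written data is never rolled over. */
HAL_StatusTypeDef eeprom_write(I2C_HandleTypeDef *hi2c, uint16_t addr,
                               const uint8_t *data, uint16_t len)
{
    while (len > 0) {
        /* Never cross a page boundary within a single write cycle. */
        uint16_t chunk = EEPROM_PAGE_SIZE - (addr % EEPROM_PAGE_SIZE);
        if (chunk > len)
            chunk = len;

        HAL_StatusTypeDef st = HAL_I2C_Mem_Write(hi2c, EEPROM_I2C_ADDR, addr,
                                                 I2C_MEMADD_SIZE_16BIT,
                                                 (uint8_t *)data, chunk, 100);
        if (st != HAL_OK)
            return st;

        /* Wait for the EEPROM's internal write cycle (a few ms) to finish. */
        while (HAL_I2C_IsDeviceReady(hi2c, EEPROM_I2C_ADDR, 1, 10) != HAL_OK) { }

        addr += chunk;   /* skip past what was just written */
        data += chunk;
        len  -= chunk;
    }
    return HAL_OK;
}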

Why do my ath9k-generated RadioTap headers seem malformed?

I'm collecting 802.11 packets using scapy on Ubuntu 16.04 (4.4 kernel). The RadioTap headers for my packets have the following present flags:
present=TSFT+Flags+Rate+Channel+dBm_AntSignal+b14+b29+Ext
Given the description of RadioTap, I would expect Channel to start on the 10th byte following the header and preceding fields (8 for TSFT + 1 each for Flags and Rate). Channel has an alignment of 2, so there is no need for padding. Yet this is what is in the undecoded portion of the packet:
notdecoded=' \x08\x00\x00\x00\x00\x00\x00f\xc0 \x02\x00\x00\x00\x00\x10\x02l\t\xa0\x00\xa9\x00\x00\x00\xa9\x00'
In this case the channel number actually appears at bytes 18-19 ('l\t' = 2412), and I'm not sure exactly which byte contains the dBm signal strength.
Anyone have an idea as to what I'm missing?
Found the answer after digging into the spec a bit deeper:
Scapy doesn't parse the extended presence bitmaps signalled by the Ext bit (bit 31 of the presence word), though it did tell me about them by stating +Ext above. Those extra bitmaps are stuffed at the front of the 'notdecoded' section of the packet. I think scapy should, at minimum, remove those extended bitmaps from notdecoded to avoid future confusion.
In this particular case there are two extra 32-bit presence bitmaps, accounting for the extra 8 bytes.
If someone wants to write an answer up with more detail, I'll accept it; otherwise I will clean this answer up and accept it for perpetuity.
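In case it helps the next reader, skipping those extra bitmaps can be done generically. A small C sketch (not scapy itself, just the header-walking logic; it assumes the buffer really starts with a radiotap header and is long enough):
#include <stddef.h>
#include <stdint.h>

#define RADIOTAP_EXT_BIT 31   /* "Ext": another it_present word follows */

/* Returns the offset, from the start of the radiotap header, at which the
 * data fields (TSFT, Flags, Rate, Channel, ...) begin: i.e. right after the
 * chain of little-endian 32-bit presence bitmaps. */
static size_t radiotap_fields_offset(const uint8_t *hdr)
{
    size_t off = 4;               /* it_version, it_pad, it_len */
    uint32_t present;

    do {
        present = (uint32_t)hdr[off] |
                  ((uint32_t)hdr[off + 1] << 8) |
                  ((uint32_t)hdr[off + 2] << 16) |
                  ((uint32_t)hdr[off + 3] << 24);
        off += 4;                 /* consume this presence word */
    } while (present & (1u << RADIOTAP_EXT_BIT));

    return off;   /* with two extra bitmaps this is 8 bytes further along */
}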

How would one transfer files larger than 2,147,483,646 bytes (~2 GiB) with Win32 TransmitFile()?

Quoted from MSDN entry for TransmitFile:
The maximum number of bytes that can be transmitted using a single call to the TransmitFile function is 2,147,483,646, the maximum value for a 32-bit integer minus 1. The maximum number of bytes to send in a single call includes any data sent before or after the file data pointed to by the lpTransmitBuffers parameter plus the value specified in the nNumberOfBytesToWrite parameter for the length of file data to send. If an application needs to transmit a file larger than 2,147,483,646 bytes, then multiple calls to the TransmitFile function can be used with each call transferring no more than 2,147,483,646 bytes. Setting the nNumberOfBytesToWrite parameter to zero for a file larger than 2,147,483,646 bytes will also fail since in this case the TransmitFile function will use the size of the file as the value for the number of bytes to transmit.
Alright. Sending a file of size 2*2,147,483,646 bytes (~ 4 GiB) with TransmitFile would then have to be divided into two parts at minimum (e.g. 2 GiB + 2 GiB in two calls to TransmitFile). But how exactly would one go about doing that, while preferably also keeping the underlying TCP connection alive in between?
When the file is indeed <=2,147,483,646 bytes in size, one could just write:
HANDLE fh = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL,
OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
TransmitFile(SOCK_STREAM_socket, fh, 0, 0, NULL, NULL, TF_DISCONNECT);
to let Windows handle all the lower-level stuff (caching, chunking the data up into pieces for efficient transmission, etc.). However, unlike the comparable Linux sendfile() syscall, there is no immediately obvious offset argument in the call (although the fifth argument, LPOVERLAPPED lpOverlapped, is probably exactly what I'm looking for). I suppose I could hack something together, but I'm also looking for a graceful, good-practice Win32 solution from someone who actually knows about this stuff.
You can use the lpOverlapped parameter to specify a 64-bit offset within the file at which to start the file data transfer by setting the Offset and OffsetHigh member of the OVERLAPPED structure. If lpOverlapped is a NULL pointer, the transmission of data always starts at the current byte offset in the file.
So, for lack of a minimal example readily available on the net, which calls are necessary to accomplish such a task?
Managed to figure it out based on the comments.
So, if LPOVERLAPPED lpOverlapped is a null pointer, the call starts transmission at the current file offset of the file (much like the Linux sendfile() syscall and its off_t *offset parameter). This offset (pointer) can be manipulated with SetFilePointerEx, so one could write:
#define TRANSMITFILE_MAX 2147483646   /* documented per-call limit */
LARGE_INTEGER total_bytes;
total_bytes.QuadPart = 0;
while (total_bytes.QuadPart < filesize) {
    LONGLONG remaining = filesize - total_bytes.QuadPart;
    DWORD bytes = (DWORD)(remaining < TRANSMITFILE_MAX ? remaining : TRANSMITFILE_MAX);
    if (!TransmitFile(SOCK_STREAM_socket, fh, bytes, 0, NULL, NULL, 0))
    { /* error handling */ }
    total_bytes.QuadPart += bytes;   /* advance by what was just sent */
    SetFilePointerEx(fh, total_bytes, NULL, FILE_BEGIN);
}
closesocket(SOCK_STREAM_socket);
to accomplish the task.
Not very elegant imo, but it works.
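For completeness, the lpOverlapped route quoted above also works and avoids touching the file pointer: each call specifies its own 64-bit start offset. A rough sketch (sock, fh and filesize are assumed to exist; sockets created with socket() have the overlapped attribute by default, and error handling is elided):
#include <winsock2.h>
#include <mswsock.h>
#include <string.h>

LONGLONG offset = 0;
while (offset < filesize) {
    LONGLONG remaining = filesize - offset;
    DWORD chunk = (DWORD)(remaining < 2147483646LL ? remaining : 2147483646LL);

    OVERLAPPED ov;
    memset(&ov, 0, sizeof(ov));
    ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);   /* low 32 bits of offset */
    ov.OffsetHigh = (DWORD)(offset >> 32);          /* high 32 bits */
    ov.hEvent     = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (!TransmitFile(sock, fh, chunk, 0, &ov, NULL, 0)) {
        if (WSAGetLastError() != WSA_IO_PENDING) { /* error handling */ }
        /* Wait for this chunk's overlapped transmission to complete. */
        WaitForSingleObject(ov.hEvent, INFINITE);
    }
    CloseHandle(ov.hEvent);
    offset += chunk;
}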

Encrypting 16 bytes of UTF8 with SecKeyWrapper breaks (ccStatus == -4304)

I'm using Apple's SecKeyWrapper class from the CryptoExercise sample code in the Apple docs to do some symmetric encryption with AES128. For some reason, when I encrypt 1-15 characters or 17 characters, it encrypts and decrypts correctly. With 16 characters, I can encrypt, but on decrypt it throws an exception after the CCCryptorFinal call with ccStatus == -4304, which indicates a decode error. (Go figure.)
I understand that AES128 uses 16 bytes per encrypted block, so I get the impression that the error has something to do with the plaintext length falling on the block boundary. Has anyone run into this issue using CommonCryptor or SecKeyWrapper?
The following lines...
// We don't want to toss padding on if we don't need to
if (*pkcs7 != kCCOptionECBMode) {
    if ((plainTextBufferSize % kChosenCipherBlockSize) == 0) {
        *pkcs7 = 0x0000;
    } else {
        *pkcs7 = kCCOptionPKCS7Padding;
    }
}
... are the culprits of my issue. To solve it, I simply had to comment them out.
As far as I can tell, with those lines in place no padding was applied on the encryption side, but padding was still expected on the decryption side, causing the decryption to fail (which is exactly what I was experiencing).
Always using kCCOptionPKCS7Padding to encrypt/decrypt is working for me so far, for strings that satisfy length % 16 == 0 and those that don't. And, again, this is a modification to the SecKeyWrapper class of the CryptoExercise example code. Not sure how this impacts those of you using CommonCrypto with home-rolled wrappers.
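To illustrate why the padding option has to match on both sides, here is a plain CommonCrypto sketch, independent of SecKeyWrapper (the all-zero key and IV are placeholders only):
#include <CommonCrypto/CommonCryptor.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t key[kCCKeySizeAES128] = {0};     /* placeholder key */
    uint8_t iv[kCCBlockSizeAES128] = {0};    /* placeholder IV */
    const char *plain = "exactly 16 bytes";  /* length % 16 == 0 */
    uint8_t cipher[64], decrypted[64];
    size_t cipherLen = 0, plainLen = 0;

    /* With PKCS#7 padding, a 16-byte input encrypts to 32 bytes: one data
     * block plus one full block of padding (sixteen 0x10 bytes). */
    CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
            key, sizeof(key), iv, plain, strlen(plain),
            cipher, sizeof(cipher), &cipherLen);

    /* Decrypting with the same option strips the padding, so the reported
     * plaintext length is 16 again. Applying the option on only one side is
     * exactly the mismatch that produces the decode error described above. */
    CCCrypt(kCCDecrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
            key, sizeof(key), iv, cipher, cipherLen,
            decrypted, sizeof(decrypted), &plainLen);

    printf("cipher: %zu bytes, plaintext: %zu bytes\n", cipherLen, plainLen);
    return 0;
}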
I too have encountered this issue using the CommonCrypto class but for ANY string with a length that was a multiple of 16.
My solution is a total hack since I have not yet found a real solution to the problem.
I pad my string with a space at the end if it is a multiple of 16. It works for my particular scenario since the extra space on the data does not affect the receipt of the data on the other side but I doubt it would work for anyone else's scenario.
Hopefully somebody smarter can point us in the right direction to a real solution.