I am trying to use AES encryption from the Crypto++ library:
CBC_Mode<AES>::Encryption e;
I have a binary data block that I need to encrypt. The class seems to provide a method called ProcessData for this purpose:
virtual void ProcessData(byte *outString, const byte *inString, size_t length);
Looks like the last parameter is the size of the input data. What is not clear is why the method does not return me the size of the encrypted data. Is it assumed that the size of output data block is exactly the same as the length of input data block? Is this valid even if the size of input data is just one byte? Regards.
virtual void ProcessData(byte *outString, const byte *inString, size_t length);
Looks like the last parameter is the size of the input data. What is not clear is why the method does not return me the size of the encrypted data...
ProcessData is the workhorse of all block ciphers (but not stream ciphers or other types of objects). Also see the Crypto++ manual and cryptlib.h File Reference. cryptlib.h is described as "Abstract base classes that provide a uniform interface to this library".
ProcessData operates on block-sized lengths of data. So INSIZE is equal to OUTSIZE is equal to BLOCKSIZE. Note that there is no INSIZE or OUTSIZE - I used them for discussion. Each block cipher will provide a constant for BLOCKSIZE. There will be an AES::BLOCKSIZE, DES_EDE::BLOCKSIZE, etc.
Typically you do not use ProcessData directly. You can use it directly, but you will be responsible for all the associated details (more on the details below).
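For illustration, here is a minimal sketch (mine, not from the original answer or the Crypto++ wiki) of calling ProcessData directly on exactly one block; key and IV generation and all the block-size bookkeeping are the caller's responsibility:
#include "cryptlib.h"
#include "secblock.h"
#include "osrng.h"
#include "modes.h"
#include "aes.h"

using namespace CryptoPP;

int main()
{
    AutoSeededRandomPool prng;
    SecByteBlock key(AES::DEFAULT_KEYLENGTH), iv(AES::BLOCKSIZE);
    prng.GenerateBlock(key, key.size());
    prng.GenerateBlock(iv, iv.size());

    // Input must be a multiple of AES::BLOCKSIZE; output is the same length as input.
    byte plain[AES::BLOCKSIZE] = { 'x' };
    byte cipher[AES::BLOCKSIZE];

    CBC_Mode<AES>::Encryption e;
    e.SetKeyWithIV(key, key.size(), iv);
    e.ProcessData(cipher, plain, sizeof(plain));  // no padding, no buffering

    return 0;
}
These are the same SetKeyWithIV and ProcessData calls that CBC_Mode and StreamTransformationFilter make for you in the wiki example below.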
Typically you use a StreamTransformationFilter, but it's not readily apparent why. StreamTransformationFilter provides input buffering, output buffering, and padding as required. It's briefly discussed at Init-Update-Final on the wiki.
Here's how it looks in practice from the CBC mode example on the Crypto++ wiki:
try
{
    cout << "plain text: " << plain << endl;

    CBC_Mode< AES >::Encryption e;
    e.SetKeyWithIV( key, key.size(), iv );

    // The StreamTransformationFilter adds padding
    //  as required. ECB and CBC Mode must be padded
    //  to the block size of the cipher.
    StringSource ss( plain, true,
        new StreamTransformationFilter( e,
            new StringSink( cipher )
        ) // StreamTransformationFilter
    ); // StringSource
}
catch( const CryptoPP::Exception& e )
{
    cerr << e.what() << endl;
    exit(1);
}
In the above, CBC_Mode and StreamTransformationFilter work together to give you the desired result. CBC_Mode calls ProcessData and handles the cipher chaining details. StreamTransformationFilter feeds the plain text to the cipher in block-sized chunks. If there is not enough plain text for a full block, then StreamTransformationFilter buffers it on input. If there's no output buffer available, then StreamTransformationFilter buffers cipher text, too.
StreamTransformationFilter will also apply padding as required. There's a default padding scheme, but you can override it. StreamTransformationFilter knows what padding to apply because it asks the mode (CBC_Mode) whether padding is required and what the padding should be.
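For example (a sketch of mine, not part of the wiki excerpt above), the padding scheme is the optional third constructor argument of StreamTransformationFilter, so the StringSource statement from the example above could specify it explicitly:
StringSource ss( plain, true,
    new StreamTransformationFilter( e,
        new StringSink( cipher ),
        StreamTransformationFilter::PKCS_PADDING  // or NO_PADDING, ZEROS_PADDING, ...
    ) // StreamTransformationFilter
); // StringSource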
... Is it assumed that the size of output data block is exactly the same as the length of input data block? Is this valid even if the size of input data is just one byte?
This is where the StreamTransformationFilter fits into the equation.
Be sure to check out Init-Update-Final on the wiki. If you are used to Java or OpenSSL programming, it should provide the glue and help "make it click" for you.
I'm using this code to listen to a port:
int start()
{
    ushort port = 61888;
    listener = new TcpSocket();
    assert(listener.isAlive);
    listener.blocking = false;
    listener.bind(new InternetAddress(port));
    listener.listen(10);
    writefln("Listening on port %d.", port);

    enum MAX_CONNECTIONS = 60;
    auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
    Socket[] reads;

    while (true)
    {
        socketSet.add(listener);

        foreach (sock; reads)
            socketSet.add(sock);

        Socket.select(socketSet, null, null);
    }

    return 0;
}
As far as I know, sockets deal with bytes as they are. I want to find a way to convert these bytes (which are essentially SQL requests) to strings. How can I do so, given that the input is UTF-8, a variable-width encoding?
You seem to have a few questions here.
How do I get chars from bytes?
Cast them with cast(char[]) st. This aliases the bytes, giving you a slice of the exact same data with no new allocation. It does not yet assume the bytes are valid UTF-8, but autodecoding or other parts of your program might complain if they aren't; you can run the result through std.utf.validate if you want.
Or do basically the same thing with std.string.assumeUTF(st), which at least asserts on invalid UTF, though only in debug builds.
How do I get a string from char[]?
You can unsafely alias the char[] with std.exception.assumeUnique(st), or you can allocate an immutable copy with st.idup or std.utf.toUTF8(st).
What if my fixed buffer of bytes contains invalid UTF-8 -- because it got cut off?
If that's a risk you can use low level std.utf tools (decodeFront and catching UTFException is one way) to peel off the valid UTF-8 and then check if you have remaining bytes, or to check that the end of the input is valid UTF-8.
How do I know if I've gotten a complete SQL statement with my fixed buffer socket I/O?
Instead of just passing the raw SQL statement over the line, you can define a network protocol that includes information like statement size, or that has 'end of statement' markers that you can read for.
I have a cheat sheet for string type conversions, which links to a file of more elaborate unittests.
Try this:
import std.exception: assumeUnique;
string s = assumeUnique (cast(char[])ubyteArray);
I'm trying to write a driver for the MPU-6050 and I'm stuck on how to proceed with reading the raw accelerometer/gyroscope/temperature values. For instance, the MPU-6050 has the accelerometer X readings in 2 registers: ACCEL_XOUT[15:8] at address 0x3B and ACCEL_XOUT[7:0] at address 0x3C. Of course to read the raw value I need to read both registers and put them together.
BUT
In the description of the registers (in the register map and description sheet, https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Register-Map1.pdf) it says that to guarantee readings from the same sampling instant I must use burst reads, because as soon as an idle I2C bus is detected, the sensor registers are refreshed with new data from a new sampling instant. The datasheet snippet shows the simple I2C burst read sequence, in which a single register address is followed by several DATA bytes.
However, this approach (to the best of my understanding) would only work for reading the ACCEL_X registers from the same sampling instant if auto-increment were supported (such that the first DATA in that sequence would be from ACCEL_XOUT[15:8] at address 0x3B and the second DATA would be from ACCEL_XOUT[7:0] at address 0x3C). But the datasheet (https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Datasheet1.pdf) only mentions that I2C burst writes support the auto-increment feature. Without auto-increment on the I2C read side, how would I go about reading two different registers while maintaining the same sampling instant?
I also recognize that I could use the sensor's FIFO feature or the interrupt to accomplish what I'm after, but (for my own curiosity) I would like a solution that didn't rely on either.
I also had the same problem; it looks like the documentation on this topic is incomplete.
Reading single sample
I think you can burst read the ACCEL_*OUT_*, TEMP_OUT_* and GYRO_*OUT_* registers. In fact I tried reading the data one register at a time, but I got frequent data corruption.
Then, just to try, I requested 6 bytes from ACCEL_XOUT_H, 6 bytes from GYRO_XOUT_H and 2 bytes from TEMP_OUT_H and... it worked! No more data corruption!
I think they simply forgot to mention this in the register map.
How to
Here is some example code that can work in the Arduino environment.
These are the functions I use; they are not very safe, but they work for my project:
////////////////////////////////////////////////////////////////
inline void requestBytes(byte SUB, byte nVals)
{
    Wire.beginTransmission(SAD);
    Wire.write(SUB);
    Wire.endTransmission(false);

    Wire.requestFrom(SAD, nVals);
    while (Wire.available() == 0);
}

////////////////////////////////////////////////////////////////
inline byte getByte(void)
{
    return Wire.read();
}

////////////////////////////////////////////////////////////////
inline void stopRead(void)
{
    Wire.endTransmission(true);
}

////////////////////////////////////////////////////////////////
byte readByte(byte SUB)
{
    requestBytes(SUB, 1);
    byte result = getByte();
    stopRead();

    return result;
}

////////////////////////////////////////////////////////////////
void readBytes(byte SUB, byte* buff, byte count)
{
    requestBytes(SUB, count);

    for (int i = 0; i < count; i++)
        buff[i] = getByte();

    stopRead();
}
At this point, you can simply read the values in this way:
// ACCEL_XOUT_H
// burst read the registers using auto-increment:
byte data[6];
readBytes(ACCEL_XOUT_H, data, 6);
// convert the data:
acc_x = (data[0] << 8) | data[1];
// ...
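For completeness, here is a sketch of mine (not from the original answer) that pulls all 14 data registers, 0x3B through 0x48, in one burst so the accelerometer, temperature and gyroscope values come from the same sampling instant; readBytes() is the helper defined above and ACCEL_XOUT_H is assumed to be 0x3B:
int16_t acc_x, acc_y, acc_z, temp_raw, gyro_x, gyro_y, gyro_z;

void readAllRaw()
{
    byte data[14];
    readBytes(ACCEL_XOUT_H, data, 14);   // ACCEL_XOUT_H (0x3B) .. GYRO_ZOUT_L (0x48)

    acc_x    = (int16_t)((data[0]  << 8) | data[1]);
    acc_y    = (int16_t)((data[2]  << 8) | data[3]);
    acc_z    = (int16_t)((data[4]  << 8) | data[5]);
    temp_raw = (int16_t)((data[6]  << 8) | data[7]);
    gyro_x   = (int16_t)((data[8]  << 8) | data[9]);
    gyro_y   = (int16_t)((data[10] << 8) | data[11]);
    gyro_z   = (int16_t)((data[12] << 8) | data[13]);
}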
Warning!
Looks like this cannot be done for other registers. For example, to read the FIFO_COUNT_* I have to do this (otherwise I get incorrect results):
uint16_t FIFO_size(void)
{
    byte bytes[2];

    // this does not work:
    //readBytes(FIFO_COUNT_H, bytes, 2);

    bytes[0] = readByte(FIFO_COUNT_H);
    bytes[1] = readByte(FIFO_COUNT_L);

    return unisci_bytes(bytes[0], bytes[1]);
}
Reading the FIFO
Looks like the FIFO works differently: you can burst read by simply requesting multiple bytes from the FIFO_R_W register and the MPU6050 will give you the bytes in the FIFO without incrementing the register.
I found this example where they use I2Cdev::readByte(SAD, FIFO_R_W, buffer) to read a given number of bytes from the FIFO and if you look at I2Cdev::readByte() (here) it simply requests N bytes from the FIFO register:
// ... send FIFO_R_W and request N bytes ...
for (...; ...; count++)
    data[count] = Wire.receive();
// ...
How to
This is simple since the FIFO_R_W does not auto-increment:
byte data[12];

void loop()
{
    // ...
    readBytes(FIFO_R_W, data, 12);  // <- replace 12 with your burst size
    // ...
}
Warning!
Using FIFO_size() is very slow!
Also, my advice is to use a 400 kHz I2C clock, which is the MPU-6050's maximum speed.
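A minimal sketch (mine, not from the original answer) of selecting that speed with the standard Arduino Wire library:
void setup()
{
    Wire.begin();
    Wire.setClock(400000);   // 400 kHz I2C fast mode, the MPU-6050's maximum
    // ... MPU-6050 initialization ...
}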
Hope it helps ;)
As Luca says, the burst-read semantics seem to differ depending on the register the read operation starts at.
Reading consistent samples
To read a consistent set of raw data values, you can use the method I2C.readRegister(int, ByteBuffer, int) with register number 59 (ACCEL_XOUT[15:8]) and a length of 14 to read all the sensor data ACCEL, TEMP, and GYRO in one operation and get consistent data.
Burst read of FIFO data
However, if you use the FIFO buffer of the chip, you can start the burst read with the same method signature on register 116 (FIFO_R_W) to read the given amount of data from the chip-internal fifo buffer. Doing so you must keep in mind that there is a limit on the number of bytes that can be read in one burst operation. If I'm interpreting https://github.com/joan2937/pigpio/blob/c33738a320a3e28824af7807edafda440952c05d/pigpio.c#L3914 right, a maximum of 31 bytes can be read in a single burst operation.
I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that according to the standard when invoking C_Decrypt with a NULL output buffer the function may return an output length which is somewhat longer than the actual required length, in particular when padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario I explained above, does it make sense that I am still getting a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To make it clear: I am indicating this as the length of the output buffer in the appropriate output-buffer-length parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a Safenet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL in the output buffer, or is this a bug on the HSM/PKCS11 library side?
One more thing I should perhaps mention is that when I provide a 272 (256+16) bytes output buffer, the call succeeds and I am noticing that I am getting back my expected plaintext, but also the padding block which means 16 final bytes with the value 0x10. However, the output length is updated correctly to 256, not 272 - this also proves that I am not using CKM_AES_CBC instead of CKM_AES_CBC_PAD accidentally, which I suspected for a moment as well :)
I have used the CKM.AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plain text, the second to do the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plain text.
Below is the step-by-step code to show the behavior of decryption:
//Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
//Initialize to zero -> variable to hold size of plain text
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate space to the buffer to store plain text.
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails because the input to the encryption was misleading to the decryption algorithm (the encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme keeps it from being misinterpreted as network control bytes.
I'm currently integrating libFuzzer in a project which parses files on the hard drive. I have some prior experience with AFL, where a command line like this one was used:
afl-fuzz -m500 -i input/ -o output/ -t100 -- program_to_fuzz ##
...where ## was a path to the generated input.
Looking at libFuzzer however, I see that the fuzz targets look like this:
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
DoSomethingInterestingWithMyAPI(Data, Size);
return 0; // Non-zero return values are reserved for future use.
}
I understand that the input isn't provided in the form of a file, but as a buffer in-memory instead. The problem is that the program I'm trying to fuzz works with files and obtains its data through fread() calls. At no point in time is the whole input supposed to be loaded in memory (where, in the general case, it might not even fit); so there's not much I can do with a const uint8_t*.
Writing the buffer back to the hard drive to get back a file seems extremely inefficient. Is there a way around this?
You can do as in this example from the Google security team.
The buf_to_file function defined there takes your buffer and returns a char* pathname you can then pass to your target:
(from https://github.com/google/security-research-pocs/blob/master/autofuzz/fuzz_utils.h#L27 )
// Write the data provided in buf to a new temporary file. This function is
// meant to be called by LLVMFuzzerTestOneInput() for fuzz targets that only
// take file names (and not data) as input.
//
// Return the path of the newly created file or NULL on error. The caller should
// eventually free the returned buffer (see delete_file).
extern "C" char *buf_to_file(const uint8_t *buf, size_t size);
Be sure to free the resource with the delete_file function.
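A hedged sketch of how the fuzz target could look with those helpers (my example, not from the linked repository; ParseFile stands in for whatever file-based entry point your code under test exposes):
#include <stdint.h>
#include <stddef.h>
#include "fuzz_utils.h"   // buf_to_file(), delete_file() from google/security-research-pocs

// Hypothetical file-based API of the code under test.
extern void ParseFile(const char *path);

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  char *path = buf_to_file(data, size);   // writes the buffer to a temporary file
  if (path == NULL)
    return 0;

  ParseFile(path);                        // the target reads the file with fread()

  delete_file(path);                      // removes the temp file and frees the path
  return 0;
}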
You could use LD_PRELOAD and override fread.
I am trying to provide DMA via PCI. For that purpose I have an example sysfs driver. I successfully store data to RAM, but unfortunately I can't read it back. I have the functions store_dmaread and show_dmaread, and I access them from user space with C code like the following. The write function works fine; the show function, which I trigger via read(), works (it reads the DMA data and prints it), but the user-space buffer is not visible in that function.
char buf[2] = {3, 3};

int fw = open("/sys/bus/pci/devices/0000:01:00.0/dmaread", O_RDWR);
read(fw, buf, 2);
write(fw, buf, 2);
close(fw);
the function in the driver looks like this:
static ssize_t show_dmaread(struct device *dev, struct device_attribute *attr, char *buf)
{
    printk("User space buffer value %d\n", buf[0]); // PRINTS 0
    // MORE CODE WHICH WORKS
}

static ssize_t store_dmaread(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
{
    // WORKS FINE, THE ATTRIBUTE CHANGES ITS VALUE
}
Thanks a lot for any help.
From your question, it appears you are expecting that the char *buf passed to your show_dmaread function points directly to the userspace buffer passed to read() (or at the very least has been populated with the data in the user-side buffer).
However, looking in Documentation/filesystems/sysfs.txt, it says:
sysfs allocates a buffer of size (PAGE_SIZE) and passes it to the method. Sysfs will call the method exactly once for each read or write. This forces the following behavior on the method implementations:

On read(2), the show() method should fill the entire buffer. Recall that an attribute should only be exporting one value, or an array of similar values, so this shouldn't be that expensive.

This allows userspace to do partial reads and forward seeks arbitrarily over the entire file at will. If userspace seeks back to zero or does a pread(2) with an offset of '0' the show() method will be called again, rearmed, to fill the buffer.
Which leads me to believe you are getting a newly allocated buffer and that some other kernel code manages copying your buffer back over to userspace.
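As a hedged sketch (mine, not from the original question; dma_value is a hypothetical driver variable), this is how the two callbacks relate to the buffers: show() fills the kernel buffer that sysfs allocated and sysfs copies it out to userspace, while store() is the callback that actually receives the data the user wrote:
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/errno.h>

static int dma_value;   /* hypothetical value exposed through the attribute */

static ssize_t show_dmaread(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
    /* buf is a PAGE_SIZE kernel buffer allocated by sysfs, not the user's buffer */
    return scnprintf(buf, PAGE_SIZE, "%d\n", dma_value);
}

static ssize_t store_dmaread(struct device *dev,
                             struct device_attribute *attr,
                             const char *buf, size_t count)
{
    /* here buf does contain (a kernel copy of) the data written from userspace */
    if (sscanf(buf, "%d", &dma_value) != 1)
        return -EINVAL;
    return count;
}

static DEVICE_ATTR(dmaread, 0644, show_dmaread, store_dmaread);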