AES GCM mechanism parameters in C

I am having a problem setting the parameters for the AES GCM mechanism.
I am receiving the following error:
#define CKR_MECHANISM_PARAM_INVALID 0x00000071UL
What am I doing wrong?
CK_BYTE iv[12] = { 0 };
CK_MECHANISM mechanismAES = { CKM_AES_GCM, NULL_PTR, 0 };
CK_GCM_PARAMS params = {
    .pIv = iv,
    .ulIvLen = 12,
    .ulIvBits = 96,
    .pAAD = NULL,
    .ulAADLen = 0,
    .ulTagBits = 0
};
mechanismAES.pParameter = &params;
mechanismAES.ulParameterLen = sizeof(params);
C_EncryptInit(hSession, &mechanismAES, hKey);

.ulTagBits = 0 is very likely the issue. The tag size is the size of the authentication tag; you would not have an authenticated mode of encryption if you left it out.
Valid tag sizes for GCM are 128, 120, 112, 104, or 96 bits. Smaller tag sizes such as 64 bits may be accepted by some APIs. You are, however, strongly encouraged to stick to the 128-bit tag size, as the security of GCM strongly depends on it.
If the error doesn't go away, you may also want to try specifying only the IV length or only the IV bits, rather than both.
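For example, keeping everything else from the question but requesting the full 128-bit tag (a sketch against the PKCS#11 v2.40 CK_GCM_PARAMS layout; error handling mostly omitted):

CK_BYTE iv[12] = { 0 };  /* all-zero IV for illustration only; use a fresh IV per encryption */
CK_GCM_PARAMS params = {
    .pIv = iv,
    .ulIvLen = 12,
    .ulIvBits = 96,
    .pAAD = NULL_PTR,
    .ulAADLen = 0,
    .ulTagBits = 128  /* full 128-bit authentication tag */
};
CK_MECHANISM mechanismAES = { CKM_AES_GCM, &params, sizeof(params) };
CK_RV rv = C_EncryptInit(hSession, &mechanismAES, hKey);
if (rv != CKR_OK) { /* handle error */ }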


CRC CMD18 SD Card

I'm trying to read an SDHC card over SPI with a DSP.
I have managed to read a lot of information (capacity and other details) using the CMD17 command.
Now I want to use the CMD18 command (READ_MULTIPLE_BLOCK) to read 2 sectors (2 * 512 bytes). I put all the values in a buffer.
When I read the buffer back, there are 4 extra bytes between the 2 sectors with a 4GB Class 4 card (10 extra bytes with a 4GB Class 10 card) which are not on the card (I verified the 2 sectors with the HxD software). What are these values?
This is an example with a 4GB Class 4:
Buffer values:
buffer[511] = 68  // Good value
buffer[512] = 143 // Bad value
buffer[513] = 178 // Bad value
buffer[514] = 255 // Bad value
buffer[515] = 254 // Bad value
buffer[516] = 48  // Good value
Real values read with HxD:
buffer[511] = 68  // Good value
buffer[512] = 48  // Good value
buffer[513] = 54  // Good value
buffer[514] = 48  // Good value
buffer[515] = 52  // Good value
buffer[516] = 69  // Good value
I don't send a CRC (I just send 0xFF); could the problem come from that?
Thank you for your help.
buffer[512] = 143 // Bad value
buffer[513] = 178 // Bad value
These two bytes are the CRC for the preceding 512-byte block. You may ignore the CRC, but you cannot drop it from the received stream; the card always sends it.
buffer[514] = 255 // Bad value
While the card pre-charges the next block, you will receive a varying number (possibly zero) of 0xFF (255) filler bytes each time, depending on the SPI speed and the card's speed.
buffer[515] = 254 // Bad value
254 (0xFE) is the start token: the card is ready to stream the next data block. You should wait for 254; only then do you know the data that follows is valid. In fact, even before the first block of data you should also expect this 255...255, 254 sequence.
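As a rough C sketch of that framing (spi_rx_byte() is a hypothetical helper that clocks out 0xFF and returns the received byte; timeouts and CRC checking are omitted), reading one block of a CMD18 transfer looks like this:

#include <stdint.h>

#define SD_START_TOKEN 0xFE  /* start-of-data token preceding every block */

extern uint8_t spi_rx_byte(void);  /* hypothetical: clock out 0xFF, return received byte */

/* Read one 512-byte block from an ongoing CMD18 (READ_MULTIPLE_BLOCK) transfer. */
static void sd_read_block(uint8_t *dst)
{
    /* Skip the 0xFF filler bytes the card sends while it prepares the block. */
    while (spi_rx_byte() != SD_START_TOKEN)
        ;
    for (int i = 0; i < 512; i++)
        dst[i] = spi_rx_byte();
    /* Read and discard the 2 CRC bytes that terminate every block. */
    (void)spi_rx_byte();
    (void)spi_rx_byte();
}

Calling sd_read_block() once per sector, and then sending CMD12 (STOP_TRANSMISSION) when done, avoids the stray 143/178/255/254 bytes ending up in the data buffer.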

How do I get the per-spec output of a LV-MaxSonar LV-EZ0 rangefinder with Android Things?

I'm using this sensor with a Raspberry Pi 3 Model B and Android Things 1.0.
I have it connected as per these instructions
The spec for its output suggests I should receive "an ASCII capital “R”, followed by three ASCII character digits representing the range in inches up to a maximum of 255, followed by a carriage return (ASCII 13)"
I have connected to the device and configured it as follows (the connection parameters map to "Serial, 0 to Vcc, 9600 Baud, 81N" in that spec, I think):
PeripheralManager manager = PeripheralManager.getInstance();
mDevice = manager.openUartDevice(name);
mDevice.setBaudrate(9600);
mDevice.setDataSize(8);
mDevice.setParity(UartDevice.PARITY_NONE);
mDevice.setStopBits(1);
mDevice.registerUartDeviceCallback(mUartCallback);
I am reading from its buffer in that callback as follows:
public void readUartBuffer(UartDevice uart) throws IOException {
    // "The output is an ASCII capital “R”, followed by three ASCII character digits
    // representing the range in inches up to a maximum of 255, followed by a carriage return
    // (ASCII 13)"
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    final int maxCount = 1024;
    byte[] buffer = new byte[maxCount];
    int total = 0;
    int cycles = 0;
    int count;
    bout.write(23);
    while ((count = uart.read(buffer, buffer.length)) > 0) {
        bout.write(buffer, 0, count);
        total += count;
        cycles++;
    }
    bout.write(0);
    byte[] buf = bout.toByteArray();
    String bufStr = Arrays.toString(buf);
    Log.d(TAG, "Got " + total + " in " + cycles + ":" + buf.length + "=>" + bufStr);
}
private UartDeviceCallback mUartCallback = new UartDeviceCallback() {
    @Override
    public boolean onUartDeviceDataAvailable(UartDevice uart) {
        // Read available data from the UART device
        try {
            readUartBuffer(uart);
        } catch (IOException e) {
            Log.w(TAG, "Unable to access UART device", e);
        }
        // Continue listening for more interrupts
        return true;
    }
};
When I connect the sensor and use this code, I get readings back of the form:
05-10 03:59:59.198 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, -77, -84, 15, 0, 0]
05-10 03:59:59.248 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 101, 121, 0, 0]
05-10 03:59:59.298 1572-1572/org.tomhume.blah D/LVEZ0: Got 7 in 1:9=>[23, 43, 0, 6, 102, 99, 121, 0, 0]
The initial 23 and the final 0 on each line are values I have added. Instead of the expected R\d\d\d\13 between them, I'm getting 7 signed bytes. The variance in some of these byte values appears when I move my hand towards and away from the sensor - i.e. the values I get back vary in a way I might expect, even though the output is completely wrong in form and size.
Any ideas what I'm doing wrong here? I suspect it's something extremely obvious, but I'm stumped. Examining the binary values themselves, it doesn't look like the bits are shifted around by e.g. a mistake in the protocol configuration.
I can't test it on hardware right now, but it seems the algorithm for reading serial data from an LV-MaxSonar LV-EZ0 rangefinder should be slightly different. You should first find the ASCII capital “R” in the UART buffer, because the LV-EZ0 starts sending data at power-on and is not synchronized with the Raspberry Pi board: the first received byte may not be “R”, the number of received bytes can be less or more than the 5 bytes of one LV-MaxSonar response, and one buffer can contain several (or partial) LV-MaxSonar responses. Only then should you try to parse the 3 character digits representing the range in inches and the terminating CR symbol. If the 3 digit bytes and the CR symbol are not all present, the response is broken (which can also happen), so start searching for a capital “R” again. For an example, take a look at the GPS contrib driver, especially processBuffer() of NmeaGpsModule.java and processMessageFrame() of NmeaParser.java; the difference is that you have only one kind of "sentence", and its start marker is not "GPRMC" but just the "R" symbol. A minimal sketch of this framing logic is shown below.
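Since the framing logic is the same in any language, here it is sketched in C (parse_ez0_frame() is a hypothetical helper; a Java version would follow the same scan-and-resynchronize state machine):

#include <stddef.h>
#include <stdint.h>

/* Scan a raw UART buffer for one complete "R<d><d><d>\r" frame and
   return the range in inches, or -1 if no complete frame is found. */
static int parse_ez0_frame(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 4 < len; i++) {
        if (buf[i] != 'R')
            continue;                   /* resynchronize on the start marker */
        const uint8_t *d = &buf[i + 1];
        if (d[0] >= '0' && d[0] <= '9' &&
            d[1] >= '0' && d[1] <= '9' &&
            d[2] >= '0' && d[2] <= '9' &&
            d[3] == 13)                 /* carriage return ends the frame */
            return (d[0] - '0') * 100 + (d[1] - '0') * 10 + (d[2] - '0');
        /* Incomplete or broken frame: keep scanning for the next 'R'. */
    }
    return -1;
}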
Another important thing: there seems to be no capital “R” (ASCII 82) anywhere in your response buffers, so there may be an issue with the LV-MaxSonar LV-EZ0 connection. Your schematic should match the top-right picture on page 6 of the datasheet exactly: "Independent Sensor Operation: Serial Output Sensor Operation". Also try, for example, connecting the LV-MaxSonar LV-EZ0 to a USB<->UART cable with a power output (e.g. like that) and testing it in a terminal on your PC.

CryptoSwift AES-256

I don't know why I cannot make AES encryption work using the code below.
let key = "12345678900987654321564738290123".bytes // 32 bytes
let iv = "passwordpasswordwordpasswordpass".bytes // 32 bytes
let message = "Hello text fcrypt Hello text for enfor r encryptg "
print(message.count) // 50
let data = Padding.pkcs7.add(to: message.bytes, blockSize: AES.blockSize)
do {
    let aes = try AES(key: key, blockMode: .CBC(iv: iv), padding: .pkcs7)
    let ciphertext = try aes.encrypt(data)
    print(ciphertext)
} catch {
    print(error) // dataPaddingRequired
}
Your IV should be 16 bytes, not 32. AES always has a block size of 128 bits, which divided by 8 is of course 16 bytes. CBC mode uses an IV of the same size as the block size, as most (but not all) modes of encryption do.
The authenticated GCM mode is a bit of an odd one out: it uses a nonce / IV of 12 bytes by default and is only available for ciphers with a block size of 128 bits.
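The same size relationship holds outside CryptoSwift. As an illustration only, here is a minimal C sketch using OpenSSL's EVP API (a different library than the one in the question) showing that AES-256-CBC takes a 32-byte key but a 16-byte IV:

#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char key[32] = "12345678900987654321564738290123"; /* AES-256 => 32-byte key */
    unsigned char iv[16]  = "passwordpassword";                 /* CBC IV = block size = 16 bytes */
    unsigned char plain[] = "Hello text fcrypt Hello text for enfor r encryptg ";
    unsigned char out[sizeof(plain) + 16];                      /* room for PKCS#7 padding */
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);  /* PKCS#7 padding is on by default */
    EVP_EncryptUpdate(ctx, out, &len, plain, (int)strlen((char *)plain));
    total = len;
    EVP_EncryptFinal_ex(ctx, out + total, &len);                /* flushes the padded final block */
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext: %d bytes\n", total);
    return 0;
}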

Why is the SSE4.2 CRC32 hash value different from the software CRC32 hash value?

In my project, CRC32 is calculated many times.
I have used a software CRC32 calculation until now.
But I noticed there is CPU support in SSE4.2, and Linux also provides a hardware CRC32 calculation function that uses the CPU instruction.
I use an Intel Xeon E5-2650 CPU, so I tried to calculate CRC32 using the Linux function.
But the result is different from that of the software CRC32 function I used.
I used an initial value of 127 in both. The software CRC32 function I used is below:
static uint32_t crc32_tab[] = {
0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f,
0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2,
0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,
0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c,
0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423,
0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x01db7106,
0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d,
0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,
0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7,
0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa,
0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81,
0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84,
0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,
0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e,
0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55,
0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28,
0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f,
0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242,
0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69,
0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc,
0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693,
0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
};
uint32_t crc32(uint32_t crc, const void *buf, size_t size)
{
    const uint8_t *p;

    p = buf;
    crc = 127;
    while (size--)
        crc = crc32_tab[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
    return crc ^ ~0U;
}
That table is for the "standard" CRC-32 used in Ethernet, zip, gzip, V.42, and many other places.
I need to point out that the code in your question is incorrect because of the crc = 127; statement. crc should not be overwritten at all, since it is an input to the function that is then lost, and it certainly should not be set to 127. What should be there is crc = ~crc;, and the same approach should be used at the end: return ~crc; instead of return crc ^ ~0U;, which may not be portable if unsigned is not the same size as uint32_t. The use of ~ is fine with uint32_t, but if a different type that is not 32 bits is used, crc ^ 0xffffffff is more portable still.
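Putting those corrections together, a sketch of the fixed routine (same crc32_tab as above) would be:

uint32_t crc32(uint32_t crc, const void *buf, size_t size)
{
    const uint8_t *p = buf;

    crc = ~crc;  /* invert the incoming CRC instead of discarding it */
    while (size--)
        crc = crc32_tab[(crc ^ *p++) & 0xFF] ^ (crc >> 8);
    return ~crc; /* final inversion; use crc ^ 0xffffffff if the type might not be 32 bits */
}

Call it as crc32(0, buf, len) for a one-shot CRC, or feed the previous return value back in to continue a running CRC across several buffers.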
Anyway, to answer your question: Intel chose a different 32-bit CRC to implement in their hardware instruction. That CRC is usually referred to as CRC-32C, using a polynomial discovered by Castagnoli (which is what the "C" refers to) with better properties than the polynomial used in the standard CRC-32. iSCSI and SCTP use CRC-32C instead of CRC-32, and I presume that had something to do with Intel's choice.
See this answer for code that computes the CRC-32C in software, as well as in hardware if available.
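For completeness, here is a minimal byte-at-a-time sketch of the hardware path using the SSE4.2 intrinsic (compile with -msse4.2; production code would step 8 bytes at a time with _mm_crc32_u64):

#include <nmmintrin.h>  /* SSE4.2 CRC32 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* CRC-32C (Castagnoli polynomial) - deliberately NOT the same CRC as the table above. */
uint32_t crc32c_hw(uint32_t crc, const void *buf, size_t size)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (size--)
        crc = _mm_crc32_u8(crc, *p++);
    return ~crc;
}

By design its output will never match the table-driven CRC-32 above: the two functions implement different polynomials, which is exactly the difference the question observed.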

iPhone to Max with OSC: certain int messages get zeroed

I'm using the Berkeley OSC library.
I'm using OSC_writeAddress() followed by OSC_writeIntArg() to send OSC buffers from an iPhone to Max/MSP, one packet at a time. My range of values goes from 0 to 320. This works for the integers 0 - 53, 74 - 127, and 197 - 309 inclusive, but all other values that I try to send arrive at Max as 0.000000. I'm using the udpreceive object and the opensoundcontrol Max external within Max.
I can't seem to get float args decoded properly at all on the Max side, and using writeAddressAndTypes doesn't work either.
Does anybody know where these 0s are coming from?
Thanks.
Here's the general idea in code:
int addressResult = OSC_writeAddress(&myOSCBuff, (char *)oscAddress);
int writeResult = OSC_writeIntArg(&myOSCBuff, sendVal);
CFSocketError sendError = CFSocketSendData(udpSocket, connectAddr, OSCPacketWithAddressTest, 30);