Snappy-java uncompress fails for valid data - scala

I am trying to uncompress the following ByteString using snappy-java:
ByteString(0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59, 1, 14, 0, 0, 38, -104, 43, -49, 0, 0, 0, 6, 0, 0, 0, 0, 79, 75)
It contains two frames: the first with chunk type 0xff (stream identifier) and length 6, and the second with chunk type 0x01 (uncompressed) and length 14. This is valid per the framing format spec found [here](http://code.google.com/p/snappy/source/browse/trunk/framing_format.txt).
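The two frames can be hand-decoded with a short Python sketch (an illustration of the framing layout, with Scala's signed byte values written as unsigned):

```python
# Hand-decode the framed snappy byte string from the question.
# Signed Scala bytes (38, -104, 43, -49) appear here as unsigned.
data = bytes([0xff, 0x06, 0x00, 0x00,              # stream identifier chunk header
              0x73, 0x4e, 0x61, 0x50, 0x70, 0x59,  # "sNaPpY"
              0x01, 0x0e, 0x00, 0x00,              # uncompressed chunk, length 14
              0x26, 0x98, 0x2b, 0xcf,              # masked CRC-32C of the data
              0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x4f, 0x4b])

frames = []
i = 0
while i < len(data):
    chunk_type = data[i]
    length = int.from_bytes(data[i + 1:i + 4], "little")  # 3-byte little-endian length
    frames.append((chunk_type, data[i + 4:i + 4 + length]))
    i += 4 + length

print(frames[0])                         # (255, b'sNaPpY') -> stream identifier, length 6
print(frames[1][0], len(frames[1][1]))   # 1 14 -> uncompressed chunk
```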
The code used to uncompress it:
val c = ByteString(0xff, 0x06, 0x00, 0x00, 0x73, 0x4e, 0x61, 0x50, 0x70, 0x59, 1, 14, 0, 0, 38, -104, 43, -49, 0, 0, 0, 6, 0, 0, 0, 0, 79, 75)
Snappy.uncompress(c.toArray)
The code throws a FAILED_TO_UNCOMPRESS error, raised from the JNI native bindings. I am using Scala v2.11.3 and snappy-java v1.0.5.4:
Exception in thread "main" java.io.IOException: FAILED_TO_UNCOMPRESS(5)
at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:395)
at org.xerial.snappy.Snappy.uncompress(Snappy.java:431)
at org.xerial.snappy.Snappy.uncompress(Snappy.java:407)

The FAILED_TO_UNCOMPRESS error occurs because Snappy.uncompress does not support framed input. The framing format was only finalized recently, and an implementation was added in SnappyFramedInputStream. The source is located here.
The following is the code to decompress snappy frames:
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import org.xerial.snappy.SnappyFramedInputStream

def decompress(contents: Array[Byte]): Array[Byte] = {
  val is = new SnappyFramedInputStream(new ByteArrayInputStream(contents))
  // Note: don't size the buffer with Snappy.uncompressedLength here;
  // it expects raw (unframed) snappy data and fails on framed input.
  val os = new ByteArrayOutputStream()
  is.transferTo(os)
  os.close()
  os.toByteArray
}

Related

Repartition with a fixed minimum number of elements in each partition of the RDD using Spark

I have an RDD with the following number of elements in each partition (the total number of partitions is val numPart = 32):
1351, 962, 537, 250, 80, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 15, 88, 270, 635, 1028, 1388, 1509
To see the previous output I use this:
def countByPartition[A](anRdd: RDD[A]): RDD[Int] = anRdd.mapPartitions(iter => Iterator(iter.length))
println(countByPartition(anRdd).collect.mkString(", "))
I would like to have on each partition at least a minimum number of elements given by val min = 5.
I've tried to perform anRdd.repartition(numPart) and I get the following:
257, 256, 256, 256, 255, 255, 254, 253, 252, 252, 252, 252, 252, 252,
252, 252, 251, 250, 249, 248, 248, 248, 248, 248, 261, 261, 260, 260,
259, 258, 258, 257
In this case, it was perfect because each partition has more than min elements. But I don't always get the same result, and sometimes I end up with partitions holding fewer than min elements.
Is there a way to do what I want?
It is not possible, and in general you need to choose partitioning so that the sizes are roughly even. Partitioners in Spark basically implement two methods: numPartitions and getPartition. The latter is a function from a single key to a partition number, so the other elements, and thus the potential size of each partition, are not known at that point.
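The constraint described above can be illustrated with a minimal Python sketch of a hash-style partitioner (an analogy, not Spark's actual code):

```python
from collections import Counter

def get_partition(key, num_partitions):
    # A hash-style partitioner maps each key independently to a
    # partition number; it never sees the other keys, so it cannot
    # enforce a minimum number of elements per partition.
    return hash(key) % num_partitions

keys = range(1000)
sizes = Counter(get_partition(k, 32) for k in keys)
print(len(sizes), sum(sizes.values()))  # 32 partitions, 1000 elements total
```

With uniformly distributed keys the sizes come out roughly even, but nothing in the interface enforces a lower bound on any single partition.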

CryptoSwift with AES128 CTR Mode - Buggy counter increment?

I've encountered a problem with the CryptoSwift API (krzyzanowskim) while using AES-128 in CTR mode. My test function (nullArrayBugTest()) produces, for specific counter values (between 0 and 25, namely at 13 and 24), a wrong array count that should always be 16!
The bug shows up even if I use the manually incremented "iv_13" instead of the default "iv_0" with counter 13...
Try it out to get an idea what I mean.
func nullArrayBugTest() {
    var ctr: CTR
    let nilArrayToEncrypt = Data(hex: "00000000000000000000000000000000")
    let key_ = Data(hex: "000a0b0c0d0e0f010203040506070809")
    let iv_0: Array<UInt8> = [0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f]
    //let iv_13: Array<UInt8> = [0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x1c]
    var decryptedNilArray = [UInt8]()
    for i in 0...25 {
        ctr = CTR(iv: iv_0, counter: i)
        do {
            let aes = try AES(key: key_.bytes, blockMode: ctr)
            decryptedNilArray = try aes.decrypt([UInt8](nilArrayToEncrypt))
            print("AES_testcase_\(i) for ctr: \(ctr) withArrayCount: \(decryptedNilArray.count)")
        } catch {
            print("De-/En-CryptData failed with: \(error)")
        }
    }
}
Output with buggy values
Why I always need the decrypted array to have 16 values is not important :D.
Does anybody know why the aes.decrypt() function behaves the way I observed?
Thanks for your time.
Michael S.
CryptoSwift defaults to PKCS#7 padding. Your resulting plaintexts have invalid padding. CryptoSwift ignores padding errors, which IMO is a bug, but that's how it's implemented. (All the counters that you're considering "correct" should really have failed to decrypt at all.) (I talked this over with Marcin and he reminded me that even at this low level, it's normal to ignore padding errors to avoid padding oracle attacks. I'd forgotten that I do it this way too....)
That said, sometimes the padding will be "close enough" that CryptoSwift will try to remove padding bytes. It usually won't be valid padding, but it'll be close enough for CryptoSwift's test.
As an example, your first counter creates the following padded plaintext:
[233, 222, 112, 79, 186, 18, 139, 53, 208, 61, 91, 0, 120, 247, 187, 254]
254 > 16, so CryptoSwift doesn't try to remove padding.
For a counter of 13, the following padded plaintext is returned:
[160, 140, 187, 255, 90, 209, 124, 158, 19, 169, 164, 110, 157, 245, 108, 12]
12 < 16, so CryptoSwift removes 12 bytes, leaving 4. (This is not how PKCS#7 padding works, but it's how CryptoSwift works.)
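The removal heuristic described above can be sketched in Python (an illustration of the described behavior, not CryptoSwift's actual code):

```python
def crude_unpad(block, block_size=16):
    # Heuristic as described: if the last byte is <= block_size,
    # strip that many bytes. Real PKCS#7 validation would also check
    # that every removed byte equals the pad value.
    n = block[-1]
    return block[:-n] if 1 <= n <= block_size else block

p_counter_0 = [233, 222, 112, 79, 186, 18, 139, 53, 208, 61, 91, 0, 120, 247, 187, 254]
p_counter_13 = [160, 140, 187, 255, 90, 209, 124, 158, 19, 169, 164, 110, 157, 245, 108, 12]
print(len(crude_unpad(p_counter_0)))   # 16: 254 > 16, nothing removed
print(len(crude_unpad(p_counter_13)))  # 4: 12 bytes stripped
```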
The underlying problem is you're not decrypting something you encrypted. You're just running a static block through the decryption scheme.
If you don't want padding, you can request that:
let aes = try AES(key: key_.bytes, blockMode: ctr, padding: .noPadding)
This will return you what you're expecting.
Just in case there's any confusion by other readers: this use of CTR is wildly insecure and no part of it should be copied. I'm assuming that the actual encryption code doesn't work anything like this.
I guess the encryption happened without padding applied, but then you use padding to decrypt. To fix that, use the same setting on both sides. That said, this is the solution (Rob Napier's answer is more detailed):
try AES(key: key_.bytes, blockMode: ctr, padding: .noPadding)

Loading RSA private key

I'm trying to load a private key (generated with RSA in an external application) onto a JavaCard. I've written some plain Java code to generate a keypair and to print the exponent and modulus of the private key:
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPrivateKey;
import java.util.Arrays;

public class Main {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(512);
        KeyPair kp = keyGen.generateKeyPair();
        RSAPrivateKey privateKey = (RSAPrivateKey) kp.getPrivate();
        BigInteger modulus = privateKey.getModulus();
        BigInteger exponent = privateKey.getPrivateExponent();
        System.out.println(Arrays.toString(modulus.toByteArray()));
        System.out.println(Arrays.toString(exponent.toByteArray()));
    }
}
I then copied the byte arrays into the JavaCard code:
try {
    RSAPrivateKey rsaPrivate = (RSAPrivateKey) KeyBuilder.buildKey(KeyBuilder.TYPE_RSA_PRIVATE, KeyBuilder.LENGTH_RSA_512, false);
    byte[] exponent = new byte[]{113, 63, 80, -115, 103, 13, -90, 75, 85, -31, 83, 84, -15, -8, -73, -68, -67, -27, -114, 48, -103, -10, 27, -77, -27, 70, 61, 102, 17, 36, 0, -112, -10, 111, 40, -117, 116, -120, 76, 35, 54, -109, 115, 70, -11, 118, 92, -43, -15, -38, -67, 112, -13, -115, 7, 65, -41, 89, 127, 62, -48, -66, 8, 17};
    byte[] modulus = new byte[]{0, -92, -30, 28, -59, 41, -57, 95, -61, 2, -50, -67, 0, 6, 67, -13, 22, 61, -96, -15, -95, 20, -86, 113, -31, -91, -92, 77, 124, 26, -67, -24, 40, -42, -41, 115, -66, 109, -115, -111, -6, 33, -51, 63, -72, 113, -36, 22, 99, 116, 18, 108, 106, 97, 95, -69, -118, 49, 9, 83, 67, -43, 50, -36, -55};
    rsaPrivate.setExponent(exponent, (short) 0, (short) exponent.length);
    rsaPrivate.setModulus(modulus, (short) 0, (short) modulus.length);
} catch (Exception e) {
    short reason = 0x88;
    if (e instanceof CryptoException)
        reason = ((CryptoException) e).getReason();
    ISOException.throwIt(reason);
}
Now, for some reason, a CryptoException with reason 1 is thrown when setting the modulus. According to the API, this means CryptoException.ILLEGAL_VALUE: the input modulus data length is inconsistent with the implementation, or input data decryption is required and fails.
I really have no clue why this is failing. Generating the keys on the card is not an option in this project.
And I know 512 bits is not safe anymore; it's just for testing purposes. It will be replaced by 2048 bits in the end.
I figured out that the RSAPrivateKey API expects unsigned values, while BigInteger.toByteArray() returns the signed representation. This post (BigInteger to byte[]) helped me realize I could simply remove the leading zero byte in the modulus byte array. It's working OK now.
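The sign-byte issue above can be sketched in Python (an illustration of big-endian two's-complement encoding, assuming a 512-bit modulus with its top bit set):

```python
def to_signed_bytes(n):
    # Analogue of BigInteger.toByteArray(): big-endian two's complement,
    # which prepends a 0x00 sign byte when the top bit of n is set.
    return n.to_bytes((n.bit_length() // 8) + 1, "big")

def to_unsigned_bytes(n):
    # What the card expects: the magnitude only, no sign byte.
    b = to_signed_bytes(n)
    return b[1:] if b[0] == 0 else b

modulus = (1 << 511) | 12345           # 512-bit number, top bit set
print(len(to_signed_bytes(modulus)))    # 65 bytes (extra 0x00 sign byte)
print(len(to_unsigned_bytes(modulus)))  # 64 bytes, matching LENGTH_RSA_512
```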

opengl glut stopped working

I'm trying to launch a hello-world program using OpenGL and GLUT (Eclipse). I always get the message "Program.exe has stopped working". I'm using Windows, with MinGW installed.
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
void displayCall() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -2.0, 2.0, -2.0, 500.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(2, 2, 2, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    glScalef(.005, .005, .005);
    glRotatef(20, 0, 1, 0);
    glRotatef(30, 0, 0, 1);
    glRotatef(5, 1, 0, 0);
    glTranslatef(-300, 0, 0);
    glColor3f(1, 1, 1);
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'H');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'e');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'l');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'l');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'o');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'W');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'o');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'r');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'l');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, 'd');
    glutStrokeCharacter(GLUT_STROKE_ROMAN, '!');
    glutSwapBuffers();
}

int main(int argc, char *argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(300, 200);
    glutCreateWindow("Hello World!");
    glutDisplayFunc(displayCall);
    glutMainLoop();
    return 0;
}
I copied glut32.dll into C:\Windows\System32, glut.h into C:\MinGW\include\GL, and libglut32.a into C:\MinGW\lib.
In Eclipse I set Project -> Properties -> C/C++ Build -> Settings -> MinGW C++ Linker -> Libraries (add): glut32, glu32, opengl32. I don't know why it crashes.
Your code compiles just fine. It also executes OK, so it's not a coding issue, just a library setup issue. On Linux, with the necessary libraries installed, I compiled your code using
gcc -o hello_world_glut hello_world_glut.c -lGL -lglut -lGLU
Make sure you are looking at the console window in Eclipse for errors.

How can I read a hex number with dlmread?

I'm trying to read a .csv file with Octave (I suppose it's equivalent in Matlab). One of the columns contains hexadecimal values identifying MAC addresses, but I'd like to have it parsed anyway; I don't mind if it's converted to decimal.
Is it possible to do this automatically with functions such as dlmread? Or do I have to create a custom function?
This is what the file looks like:
Timestamp, MAC, LastBsn, PRR, RSSI, ED, SQI, RxGain, PtxCoord, Channel: 26
759, 0x35c8cc, 127, 99, -307, 29, 237, 200, -32
834, 0x32d710, 183, 100, -300, 55, 248, 200, -32
901, 0x35c8cc, 227, 100, -300, 29, 238, 200, -32
979, 0x32d6a0, 22, 95, -336, 10, 171, 200, -32
987, 0x32d710, 27, 96, -328, 54, 249, 200, -32
1054, 0x35c8cc, 71, 92, -357, 30, 239, 200, -32
1133, 0x32d6a0, 122, 95, -336, 11, 188, 200, -32
I can accept any output value for the (truncated) MAC addresses, from sequence numbers (1-6) to decimal conversion of the value (e.g. 0x35c8cc -> 3524812).
My current workaround is to use a text editor to manually replace the MAC addresses with decimal numbers, but an automated solution would be handy.
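For reference, the hex-to-decimal mapping the question mentions can be checked with a quick Python sketch (the same conversion any base-16 integer parser performs):

```python
# Truncated MAC addresses from the sample file, parsed as base-16 integers.
macs = ["0x35c8cc", "0x32d710", "0x32d6a0"]
print([int(m, 16) for m in macs])  # [3524812, 3331856, 3331744]
```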
The functions dlmread and csvread only handle purely numeric files. You could use textscan (which is also present in Matlab), but since you're using Octave, you're better off using csv2cell (part of Octave's io package). It reads a csv file and returns a cell array of strings and doubles:
octave-3.8.1> type test.csv
1,2,3,"some",1c:6f:65:90:6b:13
4,5,6,"text",0d:5a:89:46:5c:70
octave-3.8.1> pkg load io; # csv2cell is part of the io package
octave-3.8.1> data = csv2cell ("test.csv")
data =
{
[1,1] = 1
[2,1] = 4
[1,2] = 2
[2,2] = 5
[1,3] = 3
[2,3] = 6
[1,4] = some
[2,4] = text
[1,5] = 1c:6f:65:90:6b:13
[2,5] = 0d:5a:89:46:5c:70
}
octave-3.8.1> class (data{1})
ans = double
octave-3.8.1> class (data{9})
ans = char
>> type mycsv.csv
Timestamp, MAC, LastBsn, PRR, RSSI, ED, SQI, RxGain, PtxCoord, Channel: 26
759, 0x35c8cc, 127, 99, -307, 29, 237, 200, -32
834, 0x32d710, 183, 100, -300, 55, 248, 200, -32
901, 0x35c8cc, 227, 100, -300, 29, 238, 200, -32
979, 0x32d6a0, 22, 95, -336, 10, 171, 200, -32
987, 0x32d710, 27, 96, -328, 54, 249, 200, -32
1054, 0x35c8cc, 71, 92, -357, 30, 239, 200, -32
1133, 0x32d6a0, 122, 95, -336, 11, 188, 200, -32
You can read the file with csv2cell. The values starting with "0x" will be automatically converted from hex to decimal values. See:
>> pkg load io % load io package for csv2cell
>> data = csv2cell ("mycsv.csv");
>> data(2,1)
ans =
{
[1,1] = 759
}
To access the cell values use:
>> data{2,1}
ans = 759
>> data{2,2}
ans = 3524812
>> data{2,5}
ans = -307