I have seen this thread, and the encryption techniques mentioned there work well, but not in all cases.
Requirement:
Simple: take one image, encrypt it, and store the encrypted data. Later, retrieve the encrypted data, decrypt it, recreate the original image, and display it.
What I have done
From the above-mentioned thread, I found an NSData category with additions for AES-256 encryption. I tried to use it, but with only partial success. This is the code:
//encryption
NSData *srcData = UIImageJPEGRepresentation(srcImage, 1.0);
NSLog(#"srcData length : %d",[srcData length]);
NSData *encryptedData = [srcData AES256EncryptWithKey:KEY];
NSLog(#"encrypted data length : %d",[encryptedData length]);
........
//later..
//decryption
decryptedImage = [UIImage imageWithData:[encryptedData AES256DecryptWithKey:KEY]];
imageView.image = decryptedImage;
What is happening
For a small image, say one with a resolution of 48*48, this code works successfully. But when I run it on an image with a higher resolution, say 256*256, the AES256EncryptWithKey: method fails with the error kCCBufferTooSmall (-4301).
Questions
Does AES-256 impose any limit on the size (in bytes) of the payload to be encrypted?
If the answer to the first question is yes, what kind of encryption algorithm should I use on the iPhone to encrypt images (probably big ones)?
If the answer to the first question is no, then why this error?
No, not really. Some hash functions do have a maximum, but that's on the order of 2^64, so generally you don't have to worry.
N/A
It probably has something to do with dataWithBytesNoCopy in combination with the malloc call, but it is hard to tell without actually running the code.
Note that that wrapper is pretty braindead: it requires encrypting everything at once instead of using CCCryptorUpdate, it does not use an IV (which jeopardizes security), it handles keys as strings, and it allocates too large a buffer for decryption. You are better off writing your own based on a more reliable source.
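For reference, a minimal sketch of a safer one-shot replacement, assuming an AES-256 key passed as raw NSData; the function name and the convention of prepending the IV to the ciphertext are my own, not part of the original wrapper:

#import <CommonCrypto/CommonCryptor.h>
#import <Security/SecRandom.h>

NSData *AES256Encrypt(NSData *plaintext, NSData *key) {
    NSCParameterAssert([key length] == kCCKeySizeAES256);
    // Generate a random IV and prepend it so the decryptor can recover it.
    uint8_t iv[kCCBlockSizeAES128];
    if (SecRandomCopyBytes(kSecRandomDefault, sizeof(iv), iv) != errSecSuccess) {
        return nil;
    }
    // The output can exceed the input by up to one block because of PKCS#7
    // padding; an undersized buffer here is exactly what produces kCCBufferTooSmall.
    size_t bufferSize = [plaintext length] + kCCBlockSizeAES128;
    NSMutableData *result = [NSMutableData dataWithLength:sizeof(iv) + bufferSize];
    memcpy([result mutableBytes], iv, sizeof(iv));
    size_t moved = 0;
    CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                     kCCOptionPKCS7Padding,
                                     [key bytes], kCCKeySizeAES256, iv,
                                     [plaintext bytes], [plaintext length],
                                     (uint8_t *)[result mutableBytes] + sizeof(iv),
                                     bufferSize, &moved);
    if (status != kCCSuccess) {
        return nil;
    }
    [result setLength:sizeof(iv) + moved];
    return result;
}

For very large images you would still want the streaming CCCryptorUpdate API, but even this one-shot version sizes the output buffer correctly and fixes the missing IV.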
Related
I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext, which is 272 bytes long, should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that, according to the standard, when invoking C_Decrypt with a NULL output buffer, the function may return an output length somewhat longer than the actual required length; in particular, when padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario I explained above, does it make sense that I am still getting a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To be clear: I am passing 256 as the output buffer length in the appropriate parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a SafeNet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL as the output buffer, or is this a bug on the HSM/PKCS#11 library side?
One more thing I should perhaps mention: when I provide a 272-byte (256+16) output buffer, the call succeeds and I get back my expected plaintext, but also the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is updated correctly to 256, not 272 - this also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)
I have used the CKM.AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plain text, the second for the actual decryption); see the documentation here, which talks about determining the length of the buffer needed to hold the plain text.
Below is the step-by-step code to show the behavior of decryption:
//Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
//Initialize to zero -> variable to hold size of plain text
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate space to the buffer to store plain text.
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails because the input data misled the decryption algorithm (encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme first keeps it from being misinterpreted as network control bytes.
I'm building an app with ActionScript 3.0 in Flash Builder. This is a follow-up to this question.
I need to upload the ByteArray to my server, but the function I use to convert the BitmapData to a ByteArray is so slow that it freezes my mobile device. My code is as follows:
var jpgenc:JPEGEncoder = new JPEGEncoder(50);
trace('encode');
//encode the bitmapdata object and keep the encoded ByteArray
var imgByteArray:ByteArray = jpgenc.encode(bitmap);
temp2 = File.applicationStorageDirectory.resolvePath("snapshot.jpg");
var fs:FileStream = new FileStream();
trace('fs');
try{
//open file in write mode
fs.open(temp2,FileMode.WRITE);
//write bytes from the byte array
fs.writeBytes(imgByteArray);
//close the file
fs.close();
}catch(e:Error){
    trace('error: ' + e.message);
}
Is there a different way to convert it to a ByteArray? Is there a better way?
Try the blooddy library: http://www.blooddy.by . I didn't test it on mobile devices, though; comment if you have success.
Use BitmapData.encode(); it's faster by orders of magnitude on mobile: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#encode%28%29
You should try to find a JPEG encoder that is capable of encoding asynchronously. That way the app can still be used while the image is being compressed. I haven't tried any of the libraries, but this one looks promising:
http://segfaultlabs.com/devlogs/alchemy-asynchronous-jpeg-encoding-2
It uses Alchemy, which should make it faster than the JPEGEncoder from as3corelib (which I guess is the one you're using at the moment).
A native JPEG encoder is ideal, and asynchronous would be good, but possibly still slow (just not blocking). Another option:
var pixels:ByteArray = bitmapData.getPixels(bitmapData.rect);
pixels.compress();
I'm not sure of native performance, and performance definitely depends on what kind of images you have.
The answer from Ilya was what did it for me. I downloaded the library, and there is an example of how to use it inside. I have been working on getting the CameraUI in Flash Builder to take a picture, encode/compress it, then send it via a web service to my server (the data was sent as a compressed byte array). I did this:
by.blooddy.crypto.image.JPEGEncoder.encode( bmp, 30 );
Where bmp is my BitmapData. The encode took under 3 seconds and fit easily into my synchronous flow of control. I tried async methods, but they ultimately took a really long time and were difficult to track for things like a user moving from cell service to Wi-Fi, or from tower to tower, while an upload was going on.
Comment here if you need more details.
I'm new to iPhone application development.
I converted a UIImage to a Base64 string and sent it to the server side. It was about 58,000 characters long.
They stored it as a blob; now it is 5 to 6 characters long, and they are giving me this 5-character blob data.
So how can I reconstruct the string I sent from this blob?
Please reply.
Have you tried this:
NSData *cont = [[NSData alloc] initWithBytes:[blob_data bytes] length:[blob_data length]];
and then convert the data into an NSString using NSString's initWithData:encoding: initializer.
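For illustration, a minimal sketch of the full reconstruction, assuming the blob your server returns really does contain the original Base64 text (initWithBase64EncodedString:options: requires iOS 7+; older projects typically used an NSData+Base64 category instead):

NSString *base64String = [[NSString alloc] initWithData:blob_data
                                               encoding:NSUTF8StringEncoding];
// Decode the Base64 text back into the original image bytes.
NSData *imageData = [[NSData alloc] initWithBase64EncodedString:base64String
                                                        options:0];
UIImage *restoredImage = [UIImage imageWithData:imageData];

If the blob really is only 5 or 6 bytes long, though, the server has not stored the full payload, and no client-side decoding will recover the image.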
I would suggest doing AES-128 encryption and, after that, gzip compression in order to reduce bandwidth waste; it also makes your data secure. At the server side, do this in reverse order and you will get the original plain data.
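For what it's worth, a minimal sketch of such a pipeline, with one caveat: compression has to happen before encryption to actually save bandwidth, since ciphertext looks random and will not compress. The sketch uses zlib's deflate format rather than true gzip framing for brevity, and the function name, zero IV, and key handling are illustrative only:

#import <zlib.h>
#import <CommonCrypto/CommonCryptor.h>

NSData *CompressAndEncrypt(NSData *payload, NSData *key) {
    NSCParameterAssert([key length] == kCCKeySizeAES128);
    // 1. Deflate the payload first, while it is still compressible.
    uLongf compressedLen = compressBound((uLong)[payload length]);
    NSMutableData *compressed = [NSMutableData dataWithLength:compressedLen];
    if (compress2([compressed mutableBytes], &compressedLen,
                  [payload bytes], (uLong)[payload length],
                  Z_BEST_COMPRESSION) != Z_OK) {
        return nil;
    }
    [compressed setLength:compressedLen];
    // 2. Encrypt the compressed bytes with AES-128 (zero IV for brevity;
    //    a real implementation should use a random IV, as discussed earlier).
    size_t bufferSize = [compressed length] + kCCBlockSizeAES128;
    NSMutableData *ciphertext = [NSMutableData dataWithLength:bufferSize];
    size_t moved = 0;
    if (CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding,
                [key bytes], kCCKeySizeAES128, NULL,
                [compressed bytes], [compressed length],
                [ciphertext mutableBytes], bufferSize, &moved) != kCCSuccess) {
        return nil;
    }
    [ciphertext setLength:moved];
    return ciphertext;
}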
I'm using Apple's SecKeyWrapper class from the CryptoExercise sample code in the Apple docs to do some symmetric encryption with AES-128. For some reason, when I encrypt 1-15 characters or 17 characters, it encrypts and decrypts correctly. With 16 characters, encryption works, but decryption throws an exception after the CCCryptorFinal call with ccStatus == -4304, which indicates a decode error. (Go figure.)
I understand that AES-128 encrypts in 16-byte blocks, so I get the impression that the error has something to do with the plaintext length falling on a block boundary. Has anyone run into this issue using CommonCryptor or SecKeyWrapper?
The following lines...
// We don't want to toss padding on if we don't need to
if (*pkcs7 != kCCOptionECBMode) {
if ((plainTextBufferSize % kChosenCipherBlockSize) == 0) {
*pkcs7 = 0x0000;
} else {
*pkcs7 = kCCOptionPKCS7Padding;
}
}
... are the culprits of my issue. To solve it, I simply had to comment them out.
As far as I can tell, the encryption process was not padding on the encryption side, but was then still expecting padding on the decryption side, causing the decryption process to fail (which is generally what I was experiencing).
Always using kCCOptionPKCS7Padding to encrypt/decrypt has worked for me so far, both for strings that satisfy length % 16 == 0 and for those that don't. And, again, this is a modification to the SecKeyWrapper class of the CryptoExercise example code. I'm not sure how this impacts those of you using CommonCrypto with home-rolled wrappers.
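To see why the block boundary matters, here is a minimal standalone sketch (demo key, zero IV, and plaintext are placeholders): with kCCOptionPKCS7Padding, a 16-byte plaintext encrypts to 32 bytes because a full extra padding block is appended, so the decryptor must expect padding too:

#import <CommonCrypto/CommonCryptor.h>

uint8_t key[kCCKeySizeAES128] = {0};          // demo key only
const char *plaintext = "exactly16bytes!!";   // 16 bytes, exactly one AES block
uint8_t ciphertext[32];                       // one data block + one padding block
size_t moved = 0;
CCCryptorStatus status = CCCrypt(kCCEncrypt, kCCAlgorithmAES128,
                                 kCCOptionPKCS7Padding,
                                 key, sizeof(key), NULL,
                                 plaintext, 16,
                                 ciphertext, sizeof(ciphertext), &moved);
// status == kCCSuccess and moved == 32: decrypting without the padding
// option would hand back those 16 padding bytes as data, or fail outright.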
I too have encountered this issue using the CommonCrypto class, but for ANY string with a length that is a multiple of 16.
My solution is a total hack since I have not yet found a real solution to the problem.
I pad my string with a space at the end if it is a multiple of 16. It works for my particular scenario since the extra space on the data does not affect the receipt of the data on the other side but I doubt it would work for anyone else's scenario.
Hopefully somebody smarter can point us in the right direction to a real solution.
I have created a Java server which takes screenshots, resizes them, and sends them over TCP/IP to my iPhone application. The application then uses NSInputStream to collect the incoming image data, creates an NSMutableData instance from the byte buffer, and then creates a UIImage object to display on the iPhone. Screen sharing, essentially. My iPhone code to collect the image data is currently as follows:
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent{
if(streamEvent == NSStreamEventHasBytesAvailable && [iStream hasBytesAvailable]){
uint8_t buffer[1024];
while([iStream hasBytesAvailable]){
NSLog(#"New Data");
int len = [iStream read:buffer maxLength:sizeof(buffer)];
[imgdata appendBytes:buffer length:len];
fullen=fullen+len;
/*Here is where the problem lies. What should be in this
if statement in order to make it test the last byte of
the incoming buffer, to tell if it is the End of Image marker
for the end of incoming JPEG file?
*/
if(buffer[len]=='FFD9'){
UIImage *img = [[UIImage alloc] initWithData:imgdata];
NSLog(#"NUMBER OF BYTES: %u", len);
image.image = img;
}
}
}
}
My problem, as indicated by the in-code comment, is figuring out when to stop collecting data in the NSMutableData object and use the data to create a UIImage. It seems to make sense to look for the JPEG End of Image (EOI) marker (0xFFD9) in the incoming bytes, as the image will be ready for display once it has arrived. How can I test for this? I'm either missing something about how the data is stored or about the marker within the JPEG file, but any help in testing for this would be greatly appreciated!
James
You obviously don't want to close the stream because that would kill performance.
Since you control the client/server connection, send down the number of bytes in the image before sending the image data. Better yet, send the number of bytes in the image, the image data, and an easily identified serial number at the end so you can quickly verify that the data has actually arrived.
Much easier and more efficient than actually checking for the end-of-file marker. Though, of course, you could also just check for that after the expected number of bytes has been received. Easy enough.
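For what it's worth, a minimal sketch of the receiving side under that scheme, assuming the server writes a 4-byte big-endian length before each JPEG frame; the imgdata and expectedLength ivars here are illustrative, not from the original code:

#import <arpa/inet.h>

- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent {
    if (streamEvent != NSStreamEventHasBytesAvailable) return;
    uint8_t buffer[1024];
    while ([iStream hasBytesAvailable]) {
        NSInteger len = [iStream read:buffer maxLength:sizeof(buffer)];
        if (len <= 0) break;
        [imgdata appendBytes:buffer length:(NSUInteger)len];
        // Once at least 4 bytes have arrived, decode the frame-length header.
        if (expectedLength == 0 && [imgdata length] >= 4) {
            uint32_t beLength;
            [imgdata getBytes:&beLength length:sizeof(beLength)];
            expectedLength = ntohl(beLength);
        }
        // A complete frame is buffered: decode it and drop it from the buffer.
        if (expectedLength > 0 && [imgdata length] >= 4 + expectedLength) {
            NSData *jpeg = [imgdata subdataWithRange:NSMakeRange(4, expectedLength)];
            image.image = [[UIImage alloc] initWithData:jpeg];
            [imgdata replaceBytesInRange:NSMakeRange(0, 4 + expectedLength)
                               withBytes:NULL length:0];
            expectedLength = 0;
        }
    }
}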
Of course, all of this is going to be grossly inefficient for screensharing style purposes in all but the unusual cases. In most cases, only a small part of the screen to be mirrored actually changes with each frame. If you try to send the whole screen with every frame, you'll quickly saturate your connection and the client side will be horribly laggy and unresponsive.
Given that this is an extremely mature market, there are tons of solutions and quite a few open source bits from which you can derive a solution to fit your needs (see VNC, for example).