HmacSHA256 Objective-C encryption - iPhone

I want to encrypt a string with a key using HmacSHA256. The code everyone uses is the one below, but there is one thing that doesn't make sense.
Why would we use base64 at the end if all we want is the HmacSHA256 hash?
I tried to see the hash generated after CCHmac is called, with
NSString *str = [[NSString alloc] initWithData:HMAC encoding:NSASCIIStringEncoding];
NSLog(@"%@", str);
But I don't get the generated hash; I get null or garbage, like this:
2011-10-11 09:38:05.082 Hash_HmacSHA256[368:207] (null)
2011-10-11 09:38:05.085 Hash_HmacSHA256[368:207] Rwªb7iså{yyþ§Ù(&oá÷ÛËÚ¥M`f
#import <CommonCrypto/CommonHMAC.h>
NSString *key;
NSString *data;
const char *cKey = [key cStringUsingEncoding:NSASCIIStringEncoding];
const char *cData = [data cStringUsingEncoding:NSASCIIStringEncoding];
unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC
length:sizeof(cHMAC)];
NSString *hash = [HMAC base64Encoding]; // This line doesn't make sense
[key release];
[data release];

First of all, for those wondering, this is in reference to my answer to this question: Objective-C sample code for HMAC-SHA1
The HMAC you generate is a 256-bit binary value that may or may not start with a 0 byte.
To be able to print it, you need a string representation (binary, hex, decimal, base64, etc.). Base64 is one of the most efficient among these, that's why I used a Base64 encoding there.
The reason you get garbage is that most (if not all) of the octets in the HMAC value are outside the range of printable ASCII characters. If the first octet is 0 (0x00), you get nil. This is why you need an encoding that supports arbitrary values. ASCII doesn't.
Of course, if you don't want to print the HMAC value, then you may not need such an encoding and can keep the HMAC as is (binary NSData).
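If you do want to print it, here is a minimal sketch (assuming iOS 7 / OS X 10.9 or later, where NSData has a built-in Base64 method; on earlier SDKs you would use a category such as the one in the question):
#import <CommonCrypto/CommonHMAC.h>

// Minimal sketch: compute the HMAC and print a Base64 representation of it.
// The key and data values here are only placeholders.
NSString *key = @"my secret key";
NSString *data = @"message to authenticate";
const char *cKey = [key cStringUsingEncoding:NSASCIIStringEncoding];
const char *cData = [data cStringUsingEncoding:NSASCIIStringEncoding];
unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
NSData *hmac = [NSData dataWithBytes:cHMAC length:sizeof(cHMAC)];
// base64EncodedStringWithOptions: is available since iOS 7 / OS X 10.9
NSLog(@"%@", [hmac base64EncodedStringWithOptions:0]);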

I spent a whole day trying to convert the generated hash (bytes) into readable data. I used the Base64 encoding you mentioned and it didn't work at all for me.
So what I did was this:
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
// Now convert to NSData structure to make it usable again
NSData *out = [NSData dataWithBytes:cHMAC length:CC_SHA256_DIGEST_LENGTH];
// description converts to hex but puts <> around it and spaces every 4 bytes
NSString *hash = [out description];
hash = [hash stringByReplacingOccurrencesOfString:#" " withString:#""];
hash = [hash stringByReplacingOccurrencesOfString:#"<" withString:#""];
hash = [hash stringByReplacingOccurrencesOfString:#">" withString:#""];
// hash is now a string with just the 40char hash value in it
NSLog(#"%#",hash);

Don't do "[out description]" to get the hash as a string.
Do [out base64Encoding] to get the base64 encoding of it. Use http://cybersam.com/ios-dev/http-basic-access-authentication-with-objective-c-and-ios/attachment/nsdataadditions to get the base64Encoding function. The additions class is a category that adds the base64Encoding method to NSData's implementation.
Or you can do [[NSString alloc] initWithData:out encoding:NSUTF8StringEncoding], though note that arbitrary digest bytes are usually not valid UTF-8, so this may return nil.
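If what you actually want is the hex string from the earlier workaround, a safer route than parsing -description (whose format Apple does not guarantee) is to format the bytes yourself; a minimal sketch using the out data from that code:
// Minimal sketch: hex-encode the digest bytes directly.
const unsigned char *outBytes = [out bytes];
NSMutableString *hexHash = [NSMutableString stringWithCapacity:[out length] * 2];
for (NSUInteger i = 0; i < [out length]; i++) {
    [hexHash appendFormat:@"%02x", outBytes[i]];
}
NSLog(@"%@", hexHash); // 64 hex characters for a 32-byte SHA-256 HMAC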

Related

Objective-C character encoding - Change char to int, and back

Simple task: I need to convert two characters to two numbers, add them together and change that back to a character.
What I have got (works perfectly in Java, where encoding is handled for you, I guess):
int myChar1 = (int)([myText1 characterAtIndex:i]);
int myChar2 = (int)([myText2 characterAtIndex:keyCurrent]);
int newChar = (myChar1 + myChar2);
//NSLog(#"Int's %d, %d, %d", textChar, keyChar, newChar);
char newC = ((char) newChar);
NSString *tmp1 = [NSString stringWithFormat:#"%c", newC];
NSString *tmp2 = [NSString stringWithFormat:#"%#", newString];
newString = [NSString stringWithFormat:#"%#%#", tmp2, tmp1]; //Adding these char's in a string
The algorithm is perfect, but now I can't figure out how to handle the encoding properly. I would like to do everything in UTF-8, but I have no idea how to get a char's UTF-8 value, for instance. And if I've got it, how to change that value back to a char.
The NSLog in the code outputs the correct values. But when I try to do the opposite with the algorithm (i.e. subtract the values), it goes wrong. It gets the wrong character value for weird/odd characters.
NSString works with unichar characters that are 2 bytes long (16 bits). A char is one byte long, so it can only store code points from U+0000 to U+00FF (i.e. Basic Latin and Latin-1 Supplement).
You should do your math on unichar values and then use +[NSString stringWithCharacters:length:] to create the string representation.
But there is still an issue with that solution: your code may generate code points between U+D800 and U+DFFF, which aren't valid Unicode characters. The standard reserves them for encoding code points from U+10000 to U+10FFFF in UTF-16 as pairs of 16-bit code units. In such a case, your string would be ill-formed and could neither be displayed nor converted to UTF-8.
Also, the temporary variable tmp2 is useless, and you should not create a new string every time you concatenate; use an NSMutableString instead.
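A minimal sketch of that suggestion (variable names match the question; the surrogate-range caveat above still applies):
// Do the arithmetic on unichar values and append to a mutable string.
NSMutableString *result = [NSMutableString string];
unichar c1 = [myText1 characterAtIndex:i];
unichar c2 = [myText2 characterAtIndex:keyCurrent];
unichar sum = c1 + c2; // caller must keep this out of the 0xD800-0xDFFF surrogate range
[result appendString:[NSString stringWithCharacters:&sum length:1]];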
I am assuming that your strings are NSStrings consisting of numerals which represent a number. If that is the case, you could try the following:
Include the following headers:
#include <inttypes.h>
#include <stdlib.h>
#include <stdio.h>
Then use the following code:
// convert NSString to UTF8 string
const char * utf8String1 = [myText1 UTF8String];
const char * utf8String2 = [myText2 UTF8String];
// convert UTF8 string into long integers
long num1 = strtol(utf8String1, NULL, 0);
long num2 = strtol(utf8String2, NULL, 0);
// perform calculations
long calc = num1 - num2;
// convert calculated value back into NSString
NSString * calcText = [[NSString alloc] initWithFormat:@"%li", calc];
// convert calculated value back into UTF8 string
char calcUTF8[64];
snprintf(calcUTF8, 64, "%li", calc);
// log results
NSLog(#"calcText: %#", calcText);
NSLog(#"calcUTF8: %s", calcUTF8);
Not sure if this is what you meant, but from what I understood, you want to create an NSString with the UTF-8 string encoding from a char?
If that's what you want, maybe you can use the initWithCString:encoding: method in NSString.
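For example (a rough sketch; the char value is only illustrative):
// Wrap a single char in a NUL-terminated buffer and build an NSString from it.
char newC = 'A';
char buf[2] = { newC, '\0' };
NSString *s = [[NSString alloc] initWithCString:buf encoding:NSUTF8StringEncoding];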

Converting an NSString to and from UTF32

I'm working with a database that includes hex codes for UTF32 characters. I would like to take these characters and store them in an NSString. I need to have routines to convert in both ways.
To convert the first character of an NSString to a unicode value, this routine seems to work:
const unsigned char *cs = (const unsigned char *)
[s cStringUsingEncoding:NSUTF32StringEncoding];
uint32_t code = 0;
for ( int i = 3 ; i >= 0 ; i-- ) {
code <<= 8;
code += cs[i];
}
return code;
However, I am unable to do the reverse (i.e. take a single code and convert it into an NSString). I thought I could just do the reverse of what I do above by simply creating a c-string with the UTF32 character in it with the bytes in the correct order, and then create an NSString from that using the correct encoding.
However, converting to / from cstrings does not seem to be reversible for me.
For example, I've tried this code, and the "tmp" string is not equal to the original string "s".
const char *cs = [s cStringUsingEncoding:NSUTF32StringEncoding];
NSString *tmp = [NSString stringWithCString:cs encoding:NSUTF32StringEncoding];
Does anyone know what I am doing wrong? Should I be using "wchar_t" for the cstring instead of char *?
Any help is greatly appreciated!
Thanks,
Ron
You have a couple of reasonable options.
1. Conversion
The first is to convert your UTF32 to UTF16 and use those with NSString, as UTF16 is the "native" encoding of NSString. It's not actually all that hard. If the UTF32 character is in the BMP (i.e. its high two bytes are 0), you can just cast it to unichar directly. If it's in any other plane, you can convert it to a surrogate pair of UTF16 characters. You can find the rules on the Wikipedia page. But a quick (untested) conversion would look like
UTF32Char inputChar = // my UTF-32 character
inputChar -= 0x10000;
unichar highSurrogate = inputChar >> 10; // leave the top 10 bits
highSurrogate += 0xD800;
unichar lowSurrogate = inputChar & 0x3FF; // leave the low 10 bits
lowSurrogate += 0xDC00;
Now you can create an NSString using both characters at the same time:
NSString *str = [NSString stringWithCharacters:(unichar[]){highSurrogate, lowSurrogate} length:2];
To go backwards, you can use -[NSString getCharacters:range:] to get the unichars back and then reverse the surrogate pair algorithm to get your UTF32 character back (any characters which aren't in the range 0xD800-0xDFFF should just be cast to UTF32 directly).
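A rough, untested sketch of that reverse step (mirroring the constants used above):
// Read the first one or two UTF-16 code units and reassemble the UTF-32 value.
unichar units[2] = {0, 0};
[str getCharacters:units range:NSMakeRange(0, MIN((NSUInteger)2, [str length]))];
UTF32Char outputChar;
if (units[0] >= 0xD800 && units[0] <= 0xDBFF) {
    // surrogate pair: undo the high/low split
    outputChar = (((UTF32Char)(units[0] - 0xD800)) << 10) + (units[1] - 0xDC00) + 0x10000;
} else {
    outputChar = units[0]; // BMP character, cast directly
}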
2. Byte buffers
Your other option is to let NSString do the conversion directly without using cStrings. To convert a UTF32 value into an NSString you can use something like the following:
UTF32Char inputChar = // input UTF32 value
inputChar = NSSwapHostIntToLittle(inputChar); // swap to little-endian if necessary
NSString *str = [[[NSString alloc] initWithBytes:&inputChar length:4 encoding:NSUTF32LittleEndianStringEncoding] autorelease];
To get it back out again, you can use
UTF32Char outputChar;
if ([str getBytes:&outputChar maxLength:4 usedLength:NULL encoding:NSUTF32LittleEndianStringEncoding options:0 range:NSMakeRange(0, 1) remainingRange:NULL]) {
outputChar = NSSwapLittleIntToHost(outputChar); // swap back to host endian
// outputChar now has the first UTF32 character
}
There are two problems here:
1:
The first one is that both [NSString cStringUsingEncoding:] and [NSString getCString:maxLength:encoding:] return the C-string in native-endianness (little) without adding a BOM to it when using NSUTF32StringEncoding and NSUTF16StringEncoding.
The Unicode standard states that: (see, "How I should deal with BOMs")
"If there is no BOM, the text should be interpreted as big-endian."
This is also stated in NSString's documentation: (see, "Interpreting UTF-16-Encoded Data")
"... if the byte order is not otherwise specified, NSString assumes that the UTF-16 characters are big-endian, unless there is a BOM (byte-order mark), in which case the BOM dictates the byte order."
Although they're referring to UTF-16, the same applies to UTF-32.
2:
The second one is that [NSString stringWithCString:encoding:] internally uses CFStringCreateWithCString to create the C-string. The problem with this is that CFStringCreateWithCString only accepts strings using 8-bit encodings. From the documentation: (see, "Parameters" section)
The string must use an 8-bit encoding.
To solve this issue:
Explicitly state the encoding endianness you want to use both ways (NSString -> C-string and C-string -> NSString)
Use [NSString initWithBytes:length:encoding:] when trying to create an NSString from a C-string encoded in UTF-32 or UTF-16.
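For example, a rough round-trip sketch with the endianness stated explicitly on both sides (using dataUsingEncoding: rather than cStringUsingEncoding:, since the encoded bytes may contain embedded NULs):
NSString *s = @"𝄞"; // illustrative input, U+1D11E (a non-BMP character)
NSData *utf32 = [s dataUsingEncoding:NSUTF32LittleEndianStringEncoding];
NSString *tmp = [[NSString alloc] initWithBytes:[utf32 bytes]
                                         length:[utf32 length]
                                       encoding:NSUTF32LittleEndianStringEncoding];
// tmp is now equal to s, unlike the cStringUsingEncoding:/stringWithCString: round trip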
Hope this helps!

Problem with hash256 in Objective C

When I use this code to generate a SHA-256 hash in my iPhone app:
unsigned char hashedChars[32];
NSString *inputString;
inputString = [NSString stringWithFormat:#"hello"];
CC_SHA256([inputString UTF8String],
[inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding ],
hashedChars);
NSData * hashedData = [NSData dataWithBytes:hashedChars length:32];
The SHA-256 hash of inputString is created correctly, but if I use a string like @"\x00\x25\x53\xB4", the hash is different from the one for the real string with the "\x" characters.
I think that the problem is in the UTF-8 encoding instead of ASCII.
Thanks!
I would be suspicious of the first character, "\x00" - that's going to terminate anything that thinks it's dealing with "regular C strings".
Not sure whether lengthOfBytesUsingEncoding: takes that stuff into account, but it's something I'd experiment with.
You're getting the bytes with [inputString UTF8String] but the length with [inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding]. This is obviously wrong. Moreover (assuming you mean "\xB4" and that it turns into something not in ASCII), "\xB4" is not likely to be in ASCII. The docs for NSString say
Returns 0 if the specified encoding cannot be used to convert the receiver
So you're calculating the hash of the empty string. Of course it's wrong.
You're less likely to have problems if you only generate the data once:
NSData * inputData = [inputString dataUsingEncoding:NSUTF8StringEncoding];
CC_SHA256(inputData.bytes, inputData.length, hashedChars);
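A slightly fuller, untested sketch of the same idea, including the cast that keeps the compiler quiet on 64-bit (CC_LONG is 32-bit):
#import <CommonCrypto/CommonDigest.h>

NSString *inputString = @"hello";
NSData *inputData = [inputString dataUsingEncoding:NSUTF8StringEncoding];
unsigned char hashedChars[CC_SHA256_DIGEST_LENGTH];
// Bytes and length now come from the same UTF-8 representation.
CC_SHA256([inputData bytes], (CC_LONG)[inputData length], hashedChars);
NSData *hashedData = [NSData dataWithBytes:hashedChars length:CC_SHA256_DIGEST_LENGTH];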

How to pass a byte array as a parameter in a URL on iPhone?

I am using the following code to get a byte array, thanks to this post.
NSData *data = [NSData dataWithContentsOfFile:filePath];
NSUInteger len = [data length];
Byte *byteData = (Byte*)malloc(len);
memcpy(byteData, [data bytes], len);
Is this the right method to do so?
Now, how can I pass the byte array in a URL?
Thank you for the help.
You're using the right method to extract the raw bytes from the data. To get those into a URL, you'll need to convert them to a string. Exactly what string depends on the format you're submitting (i.e. it could be just a list of 1s and 0s, or YES and NO, or any other character(s) required by the server you're talking to).
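One common approach (an assumption on my part, since the required server format isn't specified) is to hex-encode the bytes and put the result in a query parameter; the host and parameter name below are hypothetical:
// Hex-encode the raw bytes and append them to a URL as a query parameter.
NSData *data = [NSData dataWithContentsOfFile:filePath];
const unsigned char *bytes = [data bytes];
NSMutableString *hex = [NSMutableString stringWithCapacity:[data length] * 2];
for (NSUInteger i = 0; i < [data length]; i++) {
    [hex appendFormat:@"%02x", bytes[i]];
}
NSString *urlString = [NSString stringWithFormat:@"http://example.com/upload?payload=%@", hex];
NSURL *url = [NSURL URLWithString:urlString];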

iPhone --- 3DES Encryption returns "wrong" results?

I have some serious trouble with a CommonCrypto function. There are two existing applications, for BlackBerry and Windows Mobile; both use Triple-DES encryption with ECB mode for data exchange. On both, the encrypted results are the same.
Now I want to implement the 3DES encryption in our iPhone application, so I went straight for CommonCrypto:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-32207/CommonCrypto/CommonCryptor.h
I get some results if I use CBC mode, but they do not correspond with the results of Java or C#. Anyway, I want to use ECB mode, but I don't get this working at all - there is a parameter error showing up...
This is my call for the ECB mode... I stripped it a little bit:
const void *vplainText;
plainTextBufferSize = [#"Hello World!" length];
bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
plainText = (const void *) [#"Hello World!" UTF8String];
NSString *key = #"abcdeabcdeabcdeabcdeabcd";
ccStatus = CCCrypt(kCCEncrypt,
kCCAlgorithm3DES,
kCCOptionECBMode,
key,
kCCKeySize3DES,
nil, // iv, not used with ECB
plainText,
plainTextBufferSize,
(void *)bufferPtr, // output
bufferPtrSize,
&movedBytes);
It is more or less the code from here: http://discussions.apple.com/thread.jspa?messageID=9017515
But as already mentioned, I get a parameter error each time...
When I use kCCOptionPKCS7Padding instead of kCCOptionECBMode and set the same initialization vector in C# and my iPhone code, the iPhone gives me different results. Is there a mistake by getting my output from the bufferPtr? Currently I get the encrypted stuff this way:
NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];
It seems I've tried almost every setting twice, different encodings and so on... Where is my error?
Can you post the error message?
One of the best ways to troubleshoot this stuff, I've found, is to take known input, known key and known output ("test vectors") and compare the bytes of the expected output with the observed output.
What you're doing here is probably not a good way to test the output:
NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];
How do you know the encrypted binary data can be interpreted with the NSISOLatin1StringEncoding encoding?
Instead, compare the bytes directly (via [myData description] or the like) or translate the output with hexadecimal or base64 encoding.
I believe the problem is that kCCOptionECBMode alone is not enough. You also need padding (since it is a block cipher). If you pass both (i.e. kCCOptionPKCS7Padding | kCCOptionECBMode) it will work.
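An untested sketch of such a call, assuming the 24 ASCII characters of the key string are meant to be the raw 24-byte key (which may or may not match what the BlackBerry and Windows Mobile implementations do):
NSData *keyData   = [@"abcdeabcdeabcdeabcdeabcd" dataUsingEncoding:NSUTF8StringEncoding];
NSData *plainData = [@"Hello World!" dataUsingEncoding:NSUTF8StringEncoding];
size_t bufferSize = [plainData length] + kCCBlockSize3DES; // room for padding
void *buffer = malloc(bufferSize);
size_t movedBytes = 0;
CCCryptorStatus status = CCCrypt(kCCEncrypt,
                                 kCCAlgorithm3DES,
                                 kCCOptionECBMode | kCCOptionPKCS7Padding,
                                 [keyData bytes], kCCKeySize3DES,
                                 NULL,                       // no IV in ECB mode
                                 [plainData bytes], [plainData length],
                                 buffer, bufferSize,
                                 &movedBytes);
if (status == kCCSuccess) {
    NSData *cipher = [NSData dataWithBytes:buffer length:movedBytes];
    NSLog(@"%@", cipher); // compare the raw bytes, not a Latin-1 string
}
free(buffer);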
I realise this is an old question, but for reference, I think that your key should not be passed in as an NSString. The key should instead be converted from hexadecimal to a byte array. This hexToBytes NSString extension should provide what you need by doing the following:
[[key hexToBytes] bytes]
The key should also be twice as long as the one given (48 characters of hex, i.e. 24 bytes).
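Since the linked category isn't reproduced here, a hypothetical sketch of the hex-to-bytes conversion it would have to perform (the key value and names are illustrative):
// Convert a 48-character hex key into 24 raw bytes suitable for CCCrypt().
NSString *hexKey = @"0123456789abcdef0123456789abcdef0123456789abcdef"; // placeholder key
NSMutableData *keyBytes = [NSMutableData dataWithCapacity:[hexKey length] / 2];
for (NSUInteger i = 0; i + 1 < [hexKey length]; i += 2) {
    unsigned int value = 0;
    NSScanner *scanner = [NSScanner scannerWithString:[hexKey substringWithRange:NSMakeRange(i, 2)]];
    [scanner scanHexInt:&value];
    unsigned char byte = (unsigned char)value;
    [keyBytes appendBytes:&byte length:1];
}
// [keyBytes bytes] can then be passed as the key parameter to CCCrypt().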