stringWithCString:encoding: on iPhone

I am compiling a code segment using something like this:
char* filename = (char*) malloc( fileInfo.size_filename +1 );
unzGetCurrentFileInfo(_unzFile, &fileInfo, filename, fileInfo.size_filename + 1, NULL, 0, NULL, 0);
filename[fileInfo.size_filename] = '\0';
NSString * strPath = [NSString stringWithCString:filename];
but stringWithCString is deprecated. I am supposed to change that to
NSString * strPath = [NSString stringWithCString:filename encoding:????];
This filename, as the name says, represents entries on the file system: files and directories. How do I know which encoding the filename uses? I could put UTF-8, but who knows which encoding users around the world will be using; if I choose any single encoding I will be limiting that.
How do I solve this so the correct encoding is used for each user?
thanks

Actually, for C paths, you want something a bit uglier:
NSString *strPath = [[NSFileManager defaultManager] stringWithFileSystemRepresentation:filename length:strlen(filename)];
And to go the other way:
const char *cPath = [nsFilename fileSystemRepresentation];

Related

replace line feed char(10) in objective-c

I have some data coming in from my DB (SQL Server 2008) and it has been formatted with char(10) as the line feeds and I want to replace them on my iOS device with \n.
How would I search for the char(10)? I know I would use stringByReplacingOccurrencesOfString:withString:, but I need to nail this character down.
Make an NSString to search for using stringWithCString:encoding:, something like
// Make the string to find
char str[2] = { 10, 0 };
NSString *toFind = [NSString stringWithCString:str encoding:NSUTF8StringEncoding];
// Now do your replace :)
NSString *out = [input stringByReplacingOccurrencesOfString:toFind withString:@"\\n"];

HmacSHA256 Objective-C encryption

I want to encrypt a string with a key using HmacSHA256. The code everyone uses is the one below, but there is one thing that doesn't make sense.
Why would we use base64 at the end if all we want is the HmacSHA256 hash?
I tried seeing the hash generated after the method CCHmac is called with
NSString *str = [[NSString alloc] initWithData:HMAC encoding:NSASCIIStringEncoding];
NSLog(@"%@", str);
But I don't get the generated hash; I get null or garbage, like this:
2011-10-11 09:38:05.082 Hash_HmacSHA256[368:207] (null)
2011-10-11 09:38:05.085 Hash_HmacSHA256[368:207] Rwªb7iså{yyþ§Ù(&oá÷ÛËÚ¥M`f
#import <CommonCrypto/CommonHMAC.h>
NSString *key;
NSString *data;
const char *cKey = [key cStringUsingEncoding:NSASCIIStringEncoding];
const char *cData = [data cStringUsingEncoding:NSASCIIStringEncoding];
unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
NSString *hash = [HMAC base64Encoding]; // This line doesn't make sense
[key release];
[data release];
First of all, for those wondering, this is in reference to my answer to this question: Objective-C sample code for HMAC-SHA1
The HMAC you generate is a 256-bit binary value that may or may not start with a 0 byte.
To be able to print it, you need a string representation (binary, hex, decimal, base64, etc.). Base64 is one of the most efficient among these, which is why I used a Base64 encoding there.
The reason you get garbage is that most (if not all) of the octets in the HMAC value are outside the range of printable ASCII characters. If the first octet is 0 (0x00), you get nil. This is why you need an encoding that supports arbitrary values. ASCII doesn't.
Of course, if you don't want to print the HMAC value, then may not need such an encoding, and can keep the HMAC as is (binary NSData).
I spent a whole day trying to convert the generated hash (bytes) into readable data. I used the Base64 encoding you mentioned and it didn't work at all for me.
So what I did was this:
CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
// Now convert to NSData structure to make it usable again
NSData *out = [NSData dataWithBytes:cHMAC length:CC_SHA256_DIGEST_LENGTH];
// description converts to hex but puts <> around it and spaces every 4 bytes
NSString *hash = [out description];
hash = [hash stringByReplacingOccurrencesOfString:@" " withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@"<" withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@">" withString:@""];
// hash is now a string with just the 64-char hex hash value in it
NSLog(@"%@", hash);
Don't do "[out description]" to get the hash as a string.
Do [hash base64Encoding] to get the base64 encoding of it. Use http://cybersam.com/ios-dev/http-basic-access-authentication-with-objective-c-and-ios/attachment/nsdataadditions to get the base64Encoding function. The additions class is a category that will add the function base64Encoding to NSData's implementation.
Or you can do [[NSString alloc]initWithData:out encoding:NSUTF8StringEncoding].

What is wrong with this zlib string decompression?

I'm trying to create a simple string decompression routine for my app.
/*
Decompresses the source buffer into the destination buffer. sourceLen is
the byte length of the source buffer. Upon entry, destLen is the total size
of the destination buffer, which must be large enough to hold the entire
uncompressed data. (The size of the uncompressed data must have been saved
previously by the compressor and transmitted to the decompressor by some
mechanism outside the scope of this compression library.) Upon exit, destLen
is the actual size of the uncompressed buffer.
uncompress returns Z_OK if success, Z_MEM_ERROR if there was not
enough memory, Z_BUF_ERROR if there was not enough room in the output
buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete.
*/
[Base64 initialize];
NSData * data = [Base64 decode:@"MDAwMDAwNTB42vPMVkhKzVNIBeLsnNTMPB0IpVCWWZyVqpAJkalKTVUoS8xTSMpJLC0HALWrEYi="];
NSString * deBase64 = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
int lengteOP = [[deBase64 substringWithRange:NSMakeRange(0,8)] intValue];
NSUInteger lengteIP = [deBase64 length];
const unsigned char *input = (const unsigned char *) [[deBase64 substringFromIndex:8] cStringUsingEncoding:NSASCIIStringEncoding];
unsigned char * dest;
uncompress(dest, lengteOP, input, lengteIP);
I get an EXC_BAD_ACCESS error when I try this.
The string is built with Delphi code using ZLib, the same library as in the iPhone SDK.
It's a base64-encoded string whose first 8 characters represent the length of the string, followed by the zlib-ed string.
I did not run the code but your issue is likely to be dest. Here is a snippet from the documentation.
Upon entry, destLen is the total size
of the destination buffer, which must
be large enough to hold the entire
uncompressed data.
The destination needs to have the memory allocated before calling the function, otherwise it will attempt to write data to invalid memory causing EXC_BAD_ACCESS.
Try the following:
unsigned char * dest = malloc(sizeof(unsigned char) * lengteOP);
uLongf destLen = lengteOP;                 // destLen is in/out, so pass a pointer
uncompress(dest, &destLen, input, lengteIP);
//Use dest (create NSString with proper encoding for example)
free(dest);
Finished code:
/*
Decompresses the source buffer into the destination buffer. sourceLen is
the byte length of the source buffer. Upon entry, destLen is the total size
of the destination buffer, which must be large enough to hold the entire
uncompressed data. (The size of the uncompressed data must have been saved
previously by the compressor and transmitted to the decompressor by some
mechanism outside the scope of this compression library.) Upon exit, destLen
is the actual size of the uncompressed buffer.
uncompress returns Z_OK if success, Z_MEM_ERROR if there was not
enough memory, Z_BUF_ERROR if there was not enough room in the output
buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete.
ZEXTERN int ZEXPORT uncompress OF((Bytef *dest, uLongf *destLen,
const Bytef *source, uLong sourceLen));
*/
[Base64 initialize];
NSData * data = [Base64 decode:@"MDAwMDAwNTB42vPMVkhKzVNIBeLsnNTMPB0IpVCWWZyVqpAJkalKTVUoS8xTSMpJLC0HALWrEYi="];
NSString * deBase64 = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
uLongf lengthOriginal = [[deBase64 substringWithRange:NSMakeRange(0,8)] intValue];
NSUInteger lengteIP = [deBase64 length];
NSString * codedString = [deBase64 substringFromIndex:8];
const unsigned char *input = (const unsigned char *) [codedString cStringUsingEncoding:NSISOLatin1StringEncoding];
unsigned char * dest = malloc(lengthOriginal + 1);
uncompress(dest, &lengthOriginal, input, lengteIP);
dest[lengthOriginal] = '\0'; // null-terminate before treating as a C string
NSLog(@"Decoded string: %@", [NSString stringWithCString:(const char *)dest encoding:NSASCIIStringEncoding]);
free(dest);

Problem with hash256 in Objective C

When I use this code to generate a SHA-256 hash in my iPhone app:
unsigned char hashedChars[32];
NSString *inputString;
inputString = [NSString stringWithFormat:@"hello"];
CC_SHA256([inputString UTF8String],
[inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding ],
hashedChars);
NSData * hashedData = [NSData dataWithBytes:hashedChars length:32];
The SHA-256 of inputString is created correctly, but if I use a string like @"\x00\x25\x53\b4", the hash is different from the one for the real string with the "\x" characters.
I think the problem is the UTF-8 encoding instead of ASCII.
Thanks!
I would be suspicious of the first character, "\x00" - that's going to terminate anything that thinks it's dealing with "regular C strings".
Not sure whether lengthOfBytesUsingEncoding: takes that stuff into account, but it's something I'd experiment with.
You're getting the bytes with [inputString UTF8String] but the length with [inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding]. This is obviously wrong. Moreover (assuming you mean "\xB4" and that it turns into something not in ASCII), "\xB4" is not likely to be in ASCII. The docs for NSString say
Returns 0 if the specified encoding cannot be used to convert the receiver
So you're calculating the hash of the empty string. Of course it's wrong.
You're less likely to have problems if you only generate the data once:
NSData * inputData = [inputString dataUsingEncoding:NSUTF8StringEncoding];
CC_SHA256(inputData.bytes, inputData.length, hashedChars);

Detect Unicode characters in NSString on iPhone

I am working on an SMS application for the iPhone. I need to detect if the user has entered any unicode characters inside the NSString they wish to send.
I need to do this is because unicode characters take up more space in the message, and also because I need to convert them into their hexadecimal equivalents.
So my question is how do I detect the presence of a unicode character in an NSString (which I read from a UITextView). Also, how do I then convert those characters into their UCS‐2 hexadecimal equivalents?
E.g. 繁 = 7E41, 体 = 4F53, 中 = 4E2D, 文 = 6587
To check for only ascii characters (or another encoding of your choice) use:
[myString canBeConvertedToEncoding:NSASCIIStringEncoding];
It will return NO if the string contains non-ascii characters. You can then convert the string to UCS-2 data with:
[myString dataUsingEncoding:NSUTF16BigEndianStringEncoding];
or NSUTF16LittleEndianStringEncoding depending on your platform. There are slight differences between UCS-2 and UTF-16. UTF-16 has superseded UCS-2. You can read about the differences here:
http://en.wikipedia.org/wiki/UTF-16/UCS-2
I couldn't get this to work.
I had an HTML string with a NON-BREAKING SPACE:
</div>Great Guildford St/SouthwarkSt &nbsp;Stop:&nbsp; BM<br>Walk to SE1 0HL<br>
"Great Guildford St/SouthwarkSt \U00a0Stop:\U00a0 BM",
I tried 3 types of encode/decode
// NSData *asciiData = [instruction dataUsingEncoding:NSUTF16BigEndianStringEncoding];
// NSString *asciiString = [[NSString alloc] initWithData:asciiData
// encoding:NSUTF16BigEndianStringEncoding];
// NSData *asciiData = [instruction dataUsingEncoding:NSASCIIStringEncoding];
// NSString *asciiString = [[NSString alloc] initWithData:asciiData
// encoding:NSASCIIStringEncoding];
//little endian
NSData *asciiData = [instruction dataUsingEncoding:NSUTF16LittleEndianStringEncoding];
NSString *asciiString = [[NSString alloc] initWithData:asciiData
encoding:NSUTF16LittleEndianStringEncoding];
none of these worked.
They seemed to work, in that if I NSLog the string it looks OK:
NSLog(@"HAS UNICODE :%@", instruction);
..do encode/decode
NSLog(@"UNICODE AFTER:%@", asciiString);
Which output
HAS UNICODE: St/SouthwarkSt  Stop:  BM
UNICODE AFTER: St/SouthwarkSt  Stop:  BM
but I happened to store these in an NSArray, and when I called [stringArray description] all the Unicode was still in there:
instructionsArrayString: (
"Great Guildford St/SouthwarkSt \U00a0Stop:\U00a0 BM",
"Walk to SE1 0HL"
)
So NSLog hides something that still shows up in the NSArray description, meaning you may think you've removed the Unicode when you haven't.
I will try another method that replaces the characters.