My problem is the following: I would like to encapsulate an NSString in an NSMutableData object, but I would like to do it together with other items and without first wrapping it in an NSData. It's only bytes after all, isn't it?
My final NSMutableData object would look something like
[header | stringLength | NSString]
where header is a char and stringLength is an unsigned short.
I build my packet like this
unsigned short stringLength = myString.length;
NSMutableData* nData = [NSMutableData dataWithBytes:(const void*)&header length:sizeof(char)];
[nData appendBytes:(const void*)&stringLength length:sizeof(unsigned short)];
[nData appendBytes:(const void*)myString length:stringLength];
I would then send this over a GKSession, and at the other end I would extract the string length and then the string itself:
NSString* temp = [NSString alloc];
[data getBytes:(void*)&temp range:NSMakeRange(sizeof(char)+sizeof(unsigned short), stringLength)];
For some reason this is giving me bad memory access. I suspect that myString.length is not doing exactly what I would expect. Do you have any hints? Thanks in advance.
This line is incorrect:
[nData appendBytes:(const void*)myString length:stringLength];
This appends the first stringLength bytes of the underlying NSString structure itself (the object, not the character data you want).
What you mean is:
[nData appendBytes:[myString UTF8String]
length:[myString lengthOfBytesUsingEncoding:NSUTF8StringEncoding]];
length is the number of characters, which can be substantially smaller than the number of bytes once multibyte characters are encoded.
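Putting it together, a rough (untested) sketch of both the sending and receiving side might look like this, assuming the UTF-8 byte length fits in an unsigned short:

unsigned short stringLength = (unsigned short)[myString lengthOfBytesUsingEncoding:NSUTF8StringEncoding];
NSMutableData *nData = [NSMutableData dataWithBytes:&header length:sizeof(char)];
[nData appendBytes:&stringLength length:sizeof(unsigned short)];
[nData appendBytes:[myString UTF8String] length:stringLength];

// Receiving side: read the length first, then decode the string bytes.
// (If the two devices could differ in endianness, convert stringLength with htons/ntohs.)
unsigned short receivedLength = 0;
[data getBytes:&receivedLength range:NSMakeRange(sizeof(char), sizeof(unsigned short))];
NSString *temp = [[NSString alloc] initWithBytes:(const char *)[data bytes] + sizeof(char) + sizeof(unsigned short)
                                          length:receivedLength
                                        encoding:NSUTF8StringEncoding];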
Just as a side note: if you can shorten your length to 1 byte (0-255), then that encoding is called a Pascal String, and CFString can handle that encoding natively (see CFStringGetPascalString()). Not something you generally want to do, but interesting. It's particularly nice with CFStringCreateWithPascalStringNoCopy(), since you can avoid the memory copy operation entirely. This is mostly for legacy support, and I wouldn't jump through any hoops to use it, but sometimes it's handy.
Related
Hey, just a couple of quick noob questions about writing my first iOS app. I've been searching through the questions here, but they all seem to address problems more advanced than mine, so I'm getting confused.
(1) All I want to do is turn a string into an array of integers representing the ASCII code. In other words, I want to convert:
"This is some string. It has spaces, punctuation, AND capitals."
into an array with 62 integers.
(2) How do I get back from the NSArray to a string?
(3) Also, are these expensive operations in terms of memory or computation time? It seems like it might be if we have to create a new variable at every iteration or something.
I know how to declare all the variables, and I'm assuming I run a loop over the length of the string and at each iteration somehow get the character and convert it into a number with some call to a built-in command.
Thanks for any help you can offer or links to posts that might help!
If you want to store the ASCII values in an NSArray, it is going to be expensive. NSArray can only hold objects, so you're going to have to create an NSNumber for each ASCII value:
unsigned len = [string length];
NSMutableArray *arr = [NSMutableArray arrayWithCapacity:len];
for (unsigned i = 0; i < len; ++i) {
[arr addObject:[NSNumber numberWithUnsignedShort:[string characterAtIndex:i]]];
}
2) To go back to an NSString you'll need to use an NSMutableString and append each character to it, as sketched below.
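A rough sketch of that loop (assuming the NSArray was built as above with NSNumbers) could be:

NSMutableString *result = [NSMutableString stringWithCapacity:[arr count]];
for (NSNumber *num in arr) {
    unichar c = [num unsignedShortValue];    // recover the character code
    [result appendFormat:@"%C", c];          // %C is the format specifier for unichar
}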
Having said that, I'd suggest you don't use this method if you can avoid it.
A better approach would be to use @EmilioPelaez's answer. Going back from a memory buffer to an NSString is simple and inexpensive compared to iterating and concatenating strings.
NSString *stringFromMemory = [[NSString alloc] initWithBytes:buffer length:len encoding:NSASCIIStringEncoding];
I ended up using the syntax I found here. Thanks for the help
How to convert ASCII value to a character in Objective-C?
NSString has a method to get the characters in an array:
NSString *string = "This is some string. It has spaces, punctuation, AND capitals.";
unichar *buffer = malloc(sizeof(unichar) * [string length]);
[string getCharacters:buffer range:NSMakeRange(0, [string length])];
If you check the definition of unichar, it's an unsigned short.
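To go the other way (the question in the title), a single value can be turned back into a one-character string; a minimal sketch:

unichar c = 65;    // ASCII/Unicode value for 'A'
NSString *s = [NSString stringWithCharacters:&c length:1];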
I'm working with a database that includes hex codes for UTF32 characters. I would like to take these characters and store them in an NSString. I need routines to convert in both directions.
To convert the first character of an NSString to a unicode value, this routine seems to work:
const unsigned char *cs = (const unsigned char *)
[s cStringUsingEncoding:NSUTF32StringEncoding];
uint32_t code = 0;
for ( int i = 3 ; i >= 0 ; i-- ) {
code <<= 8;
code += cs[i];
}
return code;
However, I am unable to do the reverse (i.e. take a single code and convert it into an NSString). I thought I could just do the reverse of what I do above by simply creating a c-string with the UTF32 character in it with the bytes in the correct order, and then create an NSString from that using the correct encoding.
However, converting to/from C strings does not seem to be reversible for me.
For example, I've tried this code, and the "tmp" string is not equal to the original string "s".
char *cs = [s cStringUsingEncoding:NSUTF32StringEncoding];
NSString *tmp = [NSString stringWithCString:cs encoding:NSUTF32StringEncoding];
Does anyone know what I am doing wrong? Should I be using "wchar_t" for the cstring instead of char *?
Any help is greatly appreciated!
Thanks,
Ron
You have a couple of reasonable options.
1. Conversion
The first is to convert your UTF32 to UTF16 and use those with NSString, as UTF16 is the "native" encoding of NSString. It's not actually all that hard. If the UTF32 character is in the BMP (i.e. its high two bytes are 0), you can just cast it to unichar directly. If it's in any other plane, you can convert it to a surrogate pair of UTF16 characters. You can find the rules on the Wikipedia page. But a quick (untested) conversion would look like
UTF32Char inputChar = // my UTF-32 character
inputChar -= 0x10000;
unichar highSurrogate = inputChar >> 10; // leave the top 10 bits
highSurrogate += 0xD800;
unichar lowSurrogate = inputChar & 0x3FF; // leave the low 10 bits
lowSurrogate += 0xDC00;
Now you can create an NSString using both characters at the same time:
NSString *str = [NSString stringWithCharacters:(unichar[]){highSurrogate, lowSurrogate} length:2];
To go backwards, you can use [NSString getCharacters:range:] to get the unichars back and then reverse the surrogate pair algorithm to get your UTF32 character back (any character that isn't in the range 0xD800-0xDFFF should just be cast to UTF32 directly).
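An (untested) sketch of that reverse step, assuming str holds exactly one character (one or two UTF-16 code units):

unichar units[2] = {0, 0};
[str getCharacters:units range:NSMakeRange(0, [str length])];

UTF32Char outputChar;
if (units[0] >= 0xD800 && units[0] <= 0xDBFF) {
    // surrogate pair: undo the algorithm above
    outputChar = (((UTF32Char)(units[0] - 0xD800)) << 10) + (units[1] - 0xDC00) + 0x10000;
} else {
    // BMP character: cast the single unichar directly
    outputChar = units[0];
}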
2. Byte buffers
Your other option is to let NSString do the conversion directly without using cStrings. To convert a UTF32 value into an NSString you can use something like the following:
UTF32Char inputChar = // input UTF32 value
inputChar = NSSwapHostIntToLittle(inputChar); // swap to little-endian if necessary
NSString *str = [[[NSString alloc] initWithBytes:&inputChar length:4 encoding:NSUTF32LittleEndianStringEncoding] autorelease];
To get it back out again, you can use
UTF32Char outputChar;
if ([str getBytes:&outputChar maxLength:4 usedLength:NULL encoding:NSUTF32LittleEndianStringEncoding options:0 range:NSMakeRange(0, 1) remainingRange:NULL]) {
outputChar = NSSwapLittleIntToHost(outputChar); // swap back to host endian
// outputChar now has the first UTF32 character
}
There are two problems here:
1:
The first one is that both [NSString cStringUsingEncoding:] and [NSString getCString:maxLength:encoding:] return the C string in native endianness (little-endian here) without adding a BOM when using NSUTF32StringEncoding and NSUTF16StringEncoding.
The Unicode standard states (see the FAQ "How should I deal with BOMs?"):
"If there is no BOM, the text should be interpreted as big-endian."
This is also stated in NSString's documentation (see "Interpreting UTF-16-Encoded Data"):
"... if the byte order is not otherwise specified, NSString assumes that the UTF-16 characters are big-endian, unless there is a BOM (byte-order mark), in which case the BOM dictates the byte order."
Although they're referring to UTF-16, the same applies to UTF-32.
2:
The second one is that [NSString stringWithCString:encoding:] internally uses CFStringCreateWithCString to create the string from the C string. The problem with this is that CFStringCreateWithCString only accepts strings using 8-bit encodings. From the documentation (see the "Parameters" section):
The string must use an 8-bit encoding.
To solve this issue:
Explicitly state the encoding endianness you want to use both ways (NSString -> C-string and C-string -> NSString)
Use [NSString initWithBytes:length:encoding:] when trying to create an NSString from a C-string encoded in UTF-32 or UTF-16.
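For example, a round trip with the endianness stated explicitly on both sides (untested sketch, using big-endian here) could look like:

// NSString -> bytes, with an explicit byte order
NSData *utf32Data = [s dataUsingEncoding:NSUTF32BigEndianStringEncoding];

// bytes -> NSString, using the same explicit encoding
NSString *roundTripped = [[[NSString alloc] initWithBytes:[utf32Data bytes]
                                                   length:[utf32Data length]
                                                 encoding:NSUTF32BigEndianStringEncoding] autorelease];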
Hope this helps!
When I use this code to generate a SHA-256 hash in my iPhone app:
unsigned char hashedChars[32];
NSString *inputString;
inputString = [NSString stringWithFormat:@"hello"];
CC_SHA256([inputString UTF8String],
[inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding ],
hashedChars);
NSData * hashedData = [NSData dataWithBytes:hashedChars length:32];
The SHA-256 of inputString is created correctly, but if I use a string like @"\x00\x25\x53\b4", the hash is different from the expected hash of the real string with the "\x" characters.
I think the problem is the UTF-8 encoding being used instead of ASCII.
Thanks!
I would be suspicious of the first character, "\x00" - that's going to terminate anything that thinks it's dealing with "regular C strings".
Not sure whether lengthOfBytesUsingEncoding: takes that stuff into account, but it's something I'd experiment with.
You're getting the bytes with [inputString UTF8String] but the length with [inputString lengthOfBytesUsingEncoding:NSASCIIStringEncoding]. This is obviously wrong. Moreover (assuming you mean "\xB4"), that character is not in ASCII at all. The docs for NSString say
Returns 0 if the specified encoding cannot be used to convert the receiver
So you're calculating the hash of the empty string. Of course it's wrong.
You're less likely to have problems if you only generate the data once:
NSData *inputData = [inputString dataUsingEncoding:NSUTF8StringEncoding];
CC_SHA256(inputData.bytes, (CC_LONG)inputData.length, hashedChars);
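If you then need the digest as text (to display or compare it), one common option is to hex-encode the bytes; a small sketch:

#import <CommonCrypto/CommonDigest.h>   // for CC_SHA256 and CC_SHA256_DIGEST_LENGTH

NSMutableString *hex = [NSMutableString stringWithCapacity:CC_SHA256_DIGEST_LENGTH * 2];
for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
    [hex appendFormat:@"%02x", hashedChars[i]];   // two hex digits per byte
}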
I am using the following code to get a byte array, thanks to this post.
NSData *data = [NSData dataWithContentsOfFile:filePath];
NSUInteger len = [data length];
Byte *byteData = (Byte*)malloc(len);
memcpy(byteData, [data bytes], len);
Is this the right method to do so?
Now, how can I pass the byte array in a URL?
Thank you for the help.
You're using the right method to extract the raw bytes from the data. To get those into a URL, you'll need to convert them to a string. Exactly what string depends on the format that you're submitting (ie. it could be just a list of 1s and 0s, or YES and NO, or any other character(s) as required by the server you're talking to.)
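For example, if the server happened to expect the bytes hex-encoded in a query parameter (that format is an assumption, and example.com is just a placeholder), a sketch could be:

NSMutableString *byteString = [NSMutableString stringWithCapacity:len * 2];
for (NSUInteger i = 0; i < len; i++) {
    [byteString appendFormat:@"%02x", byteData[i]];   // two hex digits per byte
}
NSString *urlString = [NSString stringWithFormat:@"http://example.com/upload?data=%@", byteString];
NSURL *url = [NSURL URLWithString:urlString];
free(byteData);   // don't forget to free the malloc'ed buffer when you're done with it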
I have some serious trouble with a CommonCrypto function. There are two existing applications for BlackBerry and Windows Mobile, both of which use Triple-DES encryption with ECB mode for data exchange. On both, the encrypted results are the same.
Now I want to implement the 3DES encryption in our iPhone application, so I went straight for CommonCrypto:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-32207/CommonCrypto/CommonCryptor.h
I get some results if I use CBC mode, but they do not correspond with the results of Java or C#. Anyway, I want to use ECB mode, but I don't get this working at all - there is a parameter error showing up...
This is my call for the ECB mode... I stripped it a little bit:
const void *vplainText;
plainTextBufferSize = [@"Hello World!" length];
bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
plainText = (const void *) [@"Hello World!" UTF8String];
NSString *key = @"abcdeabcdeabcdeabcdeabcd";
ccStatus = CCCrypt(kCCEncrypt,
kCCAlgorithm3DES,
kCCOptionECBMode,
key,
kCCKeySize3DES,
nil, // iv, not used with ECB
plainText,
plainTextBufferSize,
(void *)bufferPtr, // output
bufferPtrSize,
&movedBytes);
It is more or less the code from here: http://discussions.apple.com/thread.jspa?messageID=9017515
But as already mentioned, I get a parameter error each time...
When I use kCCOptionPKCS7Padding instead of kCCOptionECBMode and set the same initialization vector in C# and in my iPhone code, the iPhone gives me different results. Is there a mistake in how I get my output from bufferPtr? Currently I get the encrypted data this way:
NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];
It seems I've tried almost every setting twice, different encodings and so on... where is my error?
Can you post the error message?
One of the best ways to troubleshoot this stuff, I've found, is to take known input, known key and known output ("test vectors") and compare the bytes of the expected output with the observed output.
What you're doing here is probably not a good way to test the output:
NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];
How do you know the encrypted binary data can be interpreted with the NSISOLatin1StringEncoding encoding?
Instead, compare the bytes directly (via [myData description] or the like) or translate the output with hexadecimal or base64 encoding.
I believe the problem is that kCCOptionECBMode alone is not enough. You also need padding (since it is a block cipher). If you pass both (i.e. kCCOptionPKCS7Padding | kCCOptionECBMode) it will work.
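In other words, the call from the question would change only in the options argument; roughly (untested, and using the raw UTF-8 bytes of the key rather than the NSString object itself):

ccStatus = CCCrypt(kCCEncrypt,
                   kCCAlgorithm3DES,
                   kCCOptionECBMode | kCCOptionPKCS7Padding,  // ECB plus padding
                   [key UTF8String],        // raw key bytes, not the NSString pointer
                   kCCKeySize3DES,
                   NULL,                    // iv, not used with ECB
                   plainText,
                   plainTextBufferSize,
                   (void *)bufferPtr,
                   bufferPtrSize,
                   &movedBytes);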
I realise this is an old question, but for reference, I think that your key should not be passed in as an NSString. The key should instead be converted from hexadecimal to a byte array. This hexToBytes NSString extension should provide what you need by doing the following:
[[key hexToBytes] bytes]
The key should also be twice as long as the one given (48 characters of hex, i.e. 24 bytes).
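The linked category isn't reproduced above, but a minimal sketch of what such a hexToBytes helper might look like (my own rough version, assuming an even-length hex string) is:

@interface NSString (HexToBytes)
- (NSData *)hexToBytes;
@end

@implementation NSString (HexToBytes)
- (NSData *)hexToBytes {
    NSMutableData *data = [NSMutableData dataWithCapacity:[self length] / 2];
    for (NSUInteger i = 0; i + 1 < [self length]; i += 2) {
        unsigned int byte = 0;
        NSString *pair = [self substringWithRange:NSMakeRange(i, 2)];
        [[NSScanner scannerWithString:pair] scanHexInt:&byte];   // parse two hex digits
        unsigned char c = (unsigned char)byte;
        [data appendBytes:&c length:1];
    }
    return data;
}
@end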