Base64 encoding on iPhone

I get a KERN_PROTECTION_FAILURE somewhere. The stack trace shows it happening in the main loop, but it won't give me more details because it seems the memory got corrupted in a previous iteration (I have all the settings configured to see debug output correctly).
When I remove the call to the following code (which verifies the receipt for an in-app purchase), the symptom goes away.
- (NSString *)encode:(const uint8_t *)input length:(NSInteger)length {
    static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
    NSMutableData *data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
    uint8_t *output = (uint8_t *)data.mutableBytes;
    for (NSInteger i = 0; i < length; i += 3) {
        NSInteger value = 0;
        for (NSInteger j = i; j < (i + 3); j++) {
            value <<= 8;
            if (j < length) {
                value |= (0xFF & input[j]);
            }
        }
        NSInteger index = (i / 3) * 4;
        output[index + 0] = table[(value >> 18) & 0x3F];
        output[index + 1] = table[(value >> 12) & 0x3F];
        output[index + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
        output[index + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
    }
    return [[[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding] autorelease];
}
I see there are other ways of getting Base64 encoding. How do I do Base64 encoding with the iPhone SDK?
What I find weird is that the length is computed differently:
((length + 2) / 3) * 4 above, and lentext*4/3+4 below.
Can anyone tell me what is going on?
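For what it's worth, here is a quick check I wrote of the two size formulas (my own snippet, not from either source). The first expression is the exact length of the padded Base64 output; the second just over-allocates, as its "add 4 to be sure" comment suggests:
#include <stdio.h>

int main(void) {
    for (int len = 1; len <= 6; len++) {
        int exact = ((len + 2) / 3) * 4;   /* exact padded Base64 length */
        int loose = len * 4 / 3 + 4;       /* over-allocation, never too small */
        printf("len=%d exact=%d loose=%d\n", len, exact, loose);
    }
    return 0;
}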
Besides, when I use the code below, I get a 'receipt data-malformed' error when I pass the encoded data to Apple's server.
+ (NSString *) base64StringFromData: (NSData *)data length: (int)length {
    int lentext = [data length];
    if (lentext < 1) return @"";
    char *outbuf = malloc(lentext*4/3+4); // add 4 to be sure
    if ( !outbuf ) return nil;
    const unsigned char *raw = [data bytes];
    int inp = 0;
    int outp = 0;
    int do_now = lentext - (lentext%3);
    for ( outp = 0, inp = 0; inp < do_now; inp += 3 )
    {
        outbuf[outp++] = base64EncodingTable[(raw[inp] & 0xFC) >> 2];
        outbuf[outp++] = base64EncodingTable[((raw[inp] & 0x03) << 4) | ((raw[inp+1] & 0xF0) >> 4)];
        outbuf[outp++] = base64EncodingTable[((raw[inp+1] & 0x0F) << 2) | ((raw[inp+2] & 0xC0) >> 6)];
        outbuf[outp++] = base64EncodingTable[raw[inp+2] & 0x3F];
    }
    if ( do_now < lentext )
    {
        char tmpbuf[2] = {0,0};
        int left = lentext%3;
        for ( int i=0; i < left; i++ )
        {
            tmpbuf[i] = raw[do_now+i];
        }
        raw = tmpbuf;
        outbuf[outp++] = base64EncodingTable[(raw[inp] & 0xFC) >> 2];
        outbuf[outp++] = base64EncodingTable[((raw[inp] & 0x03) << 4) | ((raw[inp+1] & 0xF0) >> 4)];
        if ( left == 2 ) outbuf[outp++] = base64EncodingTable[((raw[inp+1] & 0x0F) << 2) | ((raw[inp+2] & 0xC0) >> 6)];
    }
    NSString *ret = [[[NSString alloc] initWithBytes:outbuf length:outp encoding:NSASCIIStringEncoding] autorelease];
    free(outbuf);
    return ret;
}

I'm using this very small library for Base64 encoding/decoding: http://imthi.com/blog/programming/iphone-sdk-base64-encode-decode.php
It does its job, and I assume you are using a similar version of it. Did you try calling +initialize before the first usage?


Fatal error: Not enough bits to represent the passed value

I'm trying to use the Mikrotik API library written in Swift:
https://wiki.mikrotik.com/wiki/API_in_Swift
It works well when I'm sending small commands.
However, if I try to send a large script string, I get this error:
Fatal error: Not enough bits to represent the passed value
The code that crashes:
private func writeLen(_ command : String) -> Data {
    let data = command.data(using: String.Encoding.utf8)
    var len = data?.count ?? 0
    var dat = Data()
    if len < 0x80 {
        dat.append([UInt8(len)], count: 1)
    } else if len < 0x4000 {
        len = len | 0x8000;
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else if len < 0x20000 {
        len = len | 0xC00000;
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else if len < 0x10000000 {
        len = len | 0xE0000000;
        dat.append(Data(bytes: [UInt8(len >> 24)]))
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else {
        dat.append(Data(bytes: [0xF0]))
        dat.append(Data(bytes: [UInt8(len >> 24)]))
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    }
    return dat
}
The fatal error appears in this part:
else if len < 0x4000 {
    len = len | 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len)]))
}
at line:
dat.append(Data(bytes: [UInt8(len)]))
The data size at this moment is 1072 bytes and len equals 33840; UInt8 cannot be initialized with that value.
How can I edit the code to avoid the error?
I'm using Swift 4.2
EDIT:
Here is an example of the same logic, but written in JavaScript:
module.exports.encodeString = function encodeString(s) {
    var data = null;
    var len = Buffer.byteLength(s);
    var offset = 0;
    if (len < 0x80) {
        data = new Buffer(len + 1);
        data[offset++] = len;
    } else if (len < 0x4000) {
        data = new Buffer(len + 2);
        len |= 0x8000;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else if (len < 0x200000) {
        data = new Buffer(len + 3);
        len |= 0xC00000;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else if (len < 0x10000000) {
        data = new Buffer(len + 4);
        len |= 0xE0000000;
        data[offset++] = (len >> 24) & 0xff;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else {
        data = new Buffer(len + 5);
        data[offset++] = 0xF0;
        data[offset++] = (len >> 24) & 0xff;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    }
    data.utf8Write(s, offset);
    return data;
};
Maybe someone can see the difference.
Thanks for the JavaScript translation. It clearly shows the problem, since the Swift version does not resemble it.
Let's take this stretch of the JavaScript, as it is the part you are stumbling over in Swift:
} else if (len < 0x4000) {
    data = new Buffer(len + 2);
    len |= 0x8000;
    data[offset++] = (len >> 8) & 0xff;
    data[offset++] = len & 0xff;
}
That is "translated" in Swift like this:
} else if len < 0x4000 {
    len = len | 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len)]))
}
Well, you can see at once that they are not at all the same. In the last line, the Swift version has forgotten the & 0xff.
If you put that in, everything starts working. And we can make it look a lot more like the JavaScript original too:
} else if len < 0x4000 {
    len |= 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len & 0xff)]))
}
So I'd say, yes, use the JavaScript as a guide and you'll be fine. If that last line doesn't feel "swifty" enough to you, then write it like this:
dat.append(Data(bytes: [UInt8(truncatingIfNeeded: len)]))
It's exactly the same result.
I don't guarantee that everything will work perfectly after you make those changes (the Swift code you showed still does not look to me like it does the same thing as the JavaScript), but at least the part where we write the length bytes into the start of the Data will work correctly.
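(If it helps to see the actual byte values, here is the same two-byte case written out in plain C — my own illustration, not the Mikrotik library's code. In C the narrowing assignment silently keeps the low byte, which is exactly what & 0xff makes explicit, and what Swift's plain UInt8(len) refuses to do silently.)
#include <stdio.h>

int main(void) {
    int len = 1072;                                  /* the length from the question */
    len |= 0x8000;                                   /* 0x8430 */
    unsigned char hi = (unsigned char)(len >> 8);    /* 0x84 */
    unsigned char lo = (unsigned char)(len & 0xff);  /* 0x30 */
    printf("%02X %02X\n", hi, lo);                   /* prints: 84 30 */
    return 0;
}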

Why is the .hash section in my ELF file not valid?

In a .hash section, for some x, if chain[x] != SHN_UNDEF,
it should hold that hash(name(bucket[x])) === hash(name(bucket[chain[x]])) % nbucket
But why is that not the case for my shared object file?
For example, name(bucket[224]) == "_ZN9VADEnergyD0Ev" whose (ELF hash % nbucket) is 224,
name(bucket[8]) == "speex_bits_write_whole_bytes" whose (ELF hash % nbucket) is 8,
but chain[224] == 8.
(the file is available here)
Or is my code for reading the ELF wrong?
nbucket = ((int *)hash)[0];
nchain = ((int *)hash)[1];
memcpy(bucket, hash + 8, nbucket * 4);
memcpy(succ, hash + nbucket * 4 + 8, nchain * 4);
for (i = 0; i < nbucket; i++) {
    printf("%d %d\n", bucket[i], succ[i]);
    if (bucket[i] && succ[i])
        pred[succ[i]] = i;
}
printf("%d %d\n", nbucket, nchain);
#define sym_name(x, symtbl, strtbl) (strtbl + symtbl[x].st_name)
for (i = 0; i < nbucket; i++) {
    if (pred[i] == 0) {
        printf("=======\n");
        for (j = i; j; j = succ[j]) {
            char *sname = sym_name(bucket[j], dynsym, dynstr);
            printf("%d,succ=%d ", j, succ[j]);
            printf("%d:%s\n", _dl_elf_hash(sname) % nbucket, sname);
        }
    }
}
It's my fault. It should be
hash(name(bucket[x])) === hash(name(chain[bucket[x]])) % nbucket
and
nbucket = ((int *)hash)[0];
nchain = ((int *)hash)[1];
memcpy(bucket, hash + 8, nbucket * 4);
memcpy(succ, hash + nbucket * 4 + 8, nchain * 4);
for (i = 0; i < nbucket; i++) {
    printf("%d %d\n", bucket[i], succ[i]);
    if (bucket[i] && succ[i])
        pred[succ[i]] = i;
}
printf("%d %d\n", nbucket, nchain);
#define sym_name(x, symtbl, strtbl) (strtbl + symtbl[x].st_name)
for (i = 0; i < nbucket; i++) {
    printf("=======\n");
    for (j = bucket[i]; j; j = succ[j]) {
        char *sname = sym_name(j, dynsym, dynstr);
        printf("%d,succ=%d ", j, succ[j]);
        printf("%d:%s\n", _dl_elf_hash(sname) % nbucket, sname);
    }
}
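For reference, this is the lookup that the chain layout exists to support, as described by the System V gABI: chain[] is indexed by symbol index, and every symbol reached from bucket[h] hashes to h modulo nbucket. A minimal sketch of my own (it assumes <elf.h>'s Elf32_Sym and uses the standard ELF hash function rather than the _dl_elf_hash from the code above):
#include <string.h>
#include <elf.h>

/* Classic ELF hash from the System V gABI. */
static unsigned long elf_hash(const char *name) {
    unsigned long h = 0, g;
    while (*name) {
        h = (h << 4) + (unsigned char)*name++;
        if ((g = h & 0xf0000000))
            h ^= g >> 24;
        h &= ~g;
    }
    return h;
}

/* Walk one chain: start at bucket[hash % nbucket], then follow chain[],
   which is indexed by symbol index, until STN_UNDEF (0). */
static unsigned lookup(const char *name,
                       const unsigned *bucket, unsigned nbucket,
                       const unsigned *chain,
                       const Elf32_Sym *dynsym, const char *dynstr) {
    for (unsigned i = bucket[elf_hash(name) % nbucket]; i != 0; i = chain[i]) {
        if (strcmp(dynstr + dynsym[i].st_name, name) == 0)
            return i;    /* symbol index */
    }
    return 0;            /* STN_UNDEF: not found */
}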

How to generate a unique 36-char GUID in SQLite on iPhone

I am trying to create a table which has a unique primary key. I know that in SQLite we can use a unique AUTOINCREMENT id (SQL AUTOINCREMENT), but is it possible to generate a unique GUID which is 36 chars long? The only reason to do that is to make it more unique.
This is the bit of code I use for UUIDs (I may have even found it here on Stack Overflow)...
+ (NSString *)GetUUID
{
    CFUUIDRef theUUID = CFUUIDCreate(NULL);
    CFStringRef string = CFUUIDCreateString(NULL, theUUID);
    CFRelease(theUUID);
    return [(NSString *)string autorelease];
}
I don't know how long the generated UUIDs are, because for my uses I don't care, so perhaps check that by passing the result into an NSLog call.
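For what it's worth, a quick check along those lines (plain CoreFoundation, my own snippet): CFUUIDCreateString should give the canonical 36-character form, 32 hex digits plus 4 hyphens.
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>

int main(void) {
    CFUUIDRef theUUID = CFUUIDCreate(NULL);
    CFStringRef string = CFUUIDCreateString(NULL, theUUID);
    CFRelease(theUUID);
    printf("length = %ld\n", (long)CFStringGetLength(string));  /* 36 */
    CFShow(string);   /* e.g. 68753A44-4D6F-1226-9C60-0050E4C00067 */
    CFRelease(string);
    return 0;
}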
HTH, Pedro :)
I use this code to generate GUIDs on the iPhone - a category on NSString. I can't remember where I found it, but it works great.
#import "NSString_UniqueID.h"
static unichar x (unsigned int);
#implementation NSString (TWUUID)
+ (NSString*) stringWithUniqueId
{
CFUUIDRef uuid = CFUUIDCreate(NULL);
CFUUIDBytes b = CFUUIDGetUUIDBytes(uuid);
unichar unichars[22];
unichar* c = unichars;
*c++ = x(b.byte0 >> 2);
*c++ = x((b.byte0 & 3 << 4) + (b.byte1 >> 4));
*c++ = x((b.byte1 & 15 << 2) + (b.byte2 >> 6));
*c++ = x(b.byte2 & 63);
*c++ = x(b.byte3 >> 2);
*c++ = x((b.byte3 & 3 << 4) + (b.byte4 >> 4));
*c++ = x((b.byte4 & 15 << 2) + (b.byte5 >> 6));
*c++ = x(b.byte5 & 63);
*c++ = x(b.byte6 >> 2);
*c++ = x((b.byte6 & 3 << 4) + (b.byte7 >> 4));
*c++ = x((b.byte7 & 15 << 2) + (b.byte8 >> 6));
*c++ = x(b.byte8 & 63);
*c++ = x(b.byte9 >> 2);
*c++ = x((b.byte9 & 3 << 4) + (b.byte10 >> 4));
*c++ = x((b.byte10 & 15 << 2) + (b.byte11 >> 6));
*c++ = x(b.byte11 & 63);
*c++ = x(b.byte12 >> 2);
*c++ = x((b.byte12 & 3 << 4) + (b.byte13 >> 4));
*c++ = x((b.byte13 & 15 << 2) + (b.byte14 >> 6));
*c++ = x(b.byte14 & 63);
*c++ = x(b.byte15 >> 2);
*c = x(b.byte15 & 3);
CFRelease(uuid);
return [NSString stringWithCharacters: unichars length: 22];
}
#end
// Convert six-bit values into letters, numbers or _ or $ (64 characters in that set).
//------------------------------------------------------------------------------------
unichar x (unsigned int c)
{
    if (c < 26) return 'a' + c;
    if (c < 52) return 'A' + c - 26;
    if (c < 62) return '0' + c - 52;
    if (c == 62) return '$';
    return '_';
}

Windows C API for UTF8 to 1252

I'm familiar with WideCharToMultiByte and MultiByteToWideChar conversions and could use these to do something like:
UTF8 -> UTF16 -> 1252
I know that iconv will do what I need, but does anybody know of any MS libs that will allow this in a single call?
I should probably just pull in the iconv library, but am feeling lazy.
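For reference, the two-step version I have in mind looks roughly like this (untested sketch, error handling mostly omitted):
#include <windows.h>
#include <stdlib.h>

/* UTF-8 -> UTF-16 with MultiByteToWideChar, then UTF-16 -> 1252 with
   WideCharToMultiByte. Caller frees the returned buffer. */
char *utf8_to_1252(const char *utf8)
{
    int wlen = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
    if (wlen == 0) return NULL;
    wchar_t *wide = malloc(wlen * sizeof(wchar_t));
    if (!wide) return NULL;
    MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, wlen);

    int alen = WideCharToMultiByte(1252, 0, wide, -1, NULL, 0, NULL, NULL);
    char *ansi = (alen != 0) ? malloc(alen) : NULL;
    if (ansi)
        WideCharToMultiByte(1252, 0, wide, -1, ansi, alen, NULL, NULL);
    free(wide);
    return ansi;
}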
Thanks
Windows 1252 is mostly equivalent to latin-1, aka ISO-8859-1: Windows-1252 just has some additional characters allocated in the latin-1 reserved range 128-159. If you are ready to ignore those extra characters, and stick to latin-1, then conversion is rather easy. Try this:
#include <stddef.h>

/*
 * Convert from UTF-8 to latin-1. Invalid encodings, and encodings of
 * code points beyond 255, are replaced by question marks. No more than
 * dst_max_len bytes are stored in the destination array. Returned value
 * is the length that the latin-1 string would have had, assuming a big
 * enough destination buffer.
 */
size_t
utf8_to_latin1(char *src, size_t src_len,
               char *dst, size_t dst_max_len)
{
    unsigned char *sb;
    size_t u, v;

    u = v = 0;
    sb = (unsigned char *)src;
    while (u < src_len) {
        int c = sb[u ++];
        if (c >= 0x80) {
            if (c >= 0xC0 && c < 0xE0) {
                if (u == src_len) {
                    c = '?';
                } else {
                    int w = sb[u];
                    if (w >= 0x80 && w < 0xC0) {
                        u ++;
                        c = ((c & 0x1F) << 6)
                            + (w & 0x3F);
                    } else {
                        c = '?';
                    }
                }
            } else {
                int i;

                for (i = 6; i >= 0; i --)
                    if (!(c & (1 << i)))
                        break;
                c = '?';
                u += 6 - i;   /* skip the continuation bytes of this sequence */
            }
        }
        if (v < dst_max_len)
            dst[v] = (char)c;
        v ++;
    }
    return v;
}
/*
 * Convert from latin-1 to UTF-8. No more than dst_max_len bytes are
 * stored in the destination array. Returned value is the length that
 * the UTF-8 string would have had, assuming a big enough destination
 * buffer.
 */
size_t
latin1_to_utf8(char *src, size_t src_len,
               char *dst, size_t dst_max_len)
{
    unsigned char *sb;
    size_t u, v;

    u = v = 0;
    sb = (unsigned char *)src;
    while (u < src_len) {
        int c = sb[u ++];
        if (c < 0x80) {
            if (v < dst_max_len)
                dst[v] = (char)c;
            v ++;
        } else {
            int h = 0xC0 + (c >> 6);
            int l = 0x80 + (c & 0x3F);
            if (v < dst_max_len) {
                dst[v] = (char)h;
                if ((v + 1) < dst_max_len)
                    dst[v + 1] = (char)l;
            }
            v += 2;
        }
    }
    return v;
}
Note that I make no guarantee about this code. This is completely untested.
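A usage sketch (mine, equally untested) following the return-value convention above: call once with a zero-sized destination to learn the required length, then again with a real buffer. Nothing is written on the first call because dst_max_len is 0.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char utf8[] = "caf\xC3\xA9";   /* "café" encoded as UTF-8 */
    size_t need = utf8_to_latin1(utf8, sizeof utf8 - 1, NULL, 0);
    char *latin1 = malloc(need + 1);
    if (!latin1) return 1;

    utf8_to_latin1(utf8, sizeof utf8 - 1, latin1, need);
    latin1[need] = '\0';
    printf("latin-1 length: %lu\n", (unsigned long)need);   /* 4 */
    free(latin1);
    return 0;
}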

How to sort a string of characters in Objective-C?

I'm looking for an Objective-C way of sorting characters in a string, as per the answer to this question.
Ideally a function that takes an NSString and returns the sorted equivalent.
Additionally I'd like to run length encode sequences of 3 or more repeats. So, for example "mississippi" first becomes "iiiimppssss", and then could be shortened by encoding as "4impp4s".
I'm no expert in Objective-C (my background is more Java and C++), so I'd also like some clue as to what the best practice is for dealing with the memory management (retain counts, etc. - no GC on the iPhone) for the return value of such a function. My source string comes from an iPhone search bar control and so is an NSString *.
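To illustrate what I mean by the run-length step, here is a rough sketch in plain C (the sort of thing I'd write with my C++ background; I'm after the idiomatic Objective-C equivalent): runs of 3 or more characters in the already-sorted string become count-plus-character.
#include <stdio.h>
#include <string.h>

/* "iiiimppssss" -> "4impp4s": runs of 3 or more become "<count><char>". */
static void rle_encode(const char *sorted, char *out, size_t out_size)
{
    size_t n = strlen(sorted), o = 0, i = 0;
    while (i < n && o + 16 < out_size) {
        size_t run = 1;
        while (i + run < n && sorted[i + run] == sorted[i])
            run++;
        if (run >= 3)
            o += sprintf(out + o, "%lu%c", (unsigned long)run, sorted[i]);
        else
            for (size_t k = 0; k < run; k++)
                out[o++] = sorted[i];
        i += run;
    }
    out[o] = '\0';
}

int main(void)
{
    char out[64];
    rle_encode("iiiimppssss", out, sizeof out);
    printf("%s\n", out);   /* prints 4impp4s */
    return 0;
}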
#include <stdlib.h>   // malloc, qsort, free

// qsort passes const void * pointers; compare the chars they point to.
int char_compare(const void *a, const void *b) {
    char ca = *(const char *)a;
    char cb = *(const char *)b;
    if (ca < cb) {
        return -1;
    } else if (ca > cb) {
        return 1;
    } else {
        return 0;
    }
}

NSString *sort_str(NSString *unsorted) {
    int len = [unsorted length] + 1;
    char *cstr = malloc(len);
    [unsorted getCString:cstr maxLength:len encoding:NSISOLatin1StringEncoding];
    qsort(cstr, len - 1, sizeof(char), char_compare);
    NSString *sorted = [NSString stringWithCString:cstr encoding:NSISOLatin1StringEncoding];
    free(cstr);
    return sorted;
}
The return value is autoreleased so if you want to hold on to it in the caller you'll need to retain it. Not Unicode safe.
With a bounded code-set, radix sort is best:
NSString * sortString(NSString* word) {
    int rads[128];
    const char *cstr = [word UTF8String];
    char *buff = calloc([word length]+1, sizeof(char));
    int p = 0;
    for (int c = 'a'; c <= 'z'; c++) {
        rads[c] = 0;
    }
    for (int k = 0; k < [word length]; k++) {
        int c = cstr[k];
        rads[c]++;
    }
    for (int c = 'a'; c <= 'z'; c++) {
        int n = rads[c];
        while (n > 0) {
            buff[p++] = c;
            n--;
        }
    }
    buff[p++] = 0;
    NSString *result = [NSString stringWithUTF8String: buff];
    free(buff);   // stringWithUTF8String copies the bytes, so the buffer can be released
    return result;
}
Note that the example above only works for lowercase letters (it was copied from a specific app which needs to sort lowercase strings). To expand it to handle all 128 ASCII characters, just use for (int c = 0; c <= 127; c++) in both character loops.