Xcode 5 crash on iOS 5 device (iPad 1)

#0 0x0040b6f0 in cr_detectClasses ()
#1 0x33960ae8 in call_load_methods ()
#2 0x339608da in load_images ()
#3 0x2fe037d0 in dyld::notifySingle(dyld_image_states, ImageLoader const*) ()
#4 0x2fe0c85a in ImageLoader::recursiveInitialization(ImageLoader::LinkContext const&, unsigned int, ImageLoader::InitializerTimingList&) ()
#5 0x2fe0d82c in ImageLoader::runInitializers(ImageLoader::LinkContext const&, ImageLoader::InitializerTimingList&) ()
#6 0x2fe04a40 in dyld::initializeMainExecutable() ()
#7 0x2fe08c1c in dyld::_main(macho_header const*, unsigned long, int, char const**, char const**, char const**) ()
#8 0x2fe032ce in dyldbootstrap::start(macho_header const*, int, char const**, long, macho_header const*) ()
I have a project that I'm trying to run on an iOS 5 device; my minimum deployment target is iOS 5.0.
All the frameworks I use are iOS 5 compatible (so there should be no need to mark any of them as Optional, I guess?), but it still crashes on startup with the above stack trace.
iOS 6 and iOS 7 devices work perfectly.
Does anyone have any clue?

It looks like you're including a framework or API that doesn't exist on iOS 5. Some steps for debugging:
Remove all unnecessary frameworks.
Remove all the code you can. Do you still get the problem if your program is nothing but a blank app delegate?
If your bare-bones app still crashes on iOS 5, it's a framework problem. Verify that every framework you link really does exist on iOS 5; anything newer must be weak-linked and checked at runtime (see the sketch below).
If your bare-bones app works fine, add code back until you find the bit that causes the crash. Post the offending code so we can offer more specific suggestions.
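
On the weak-linking point: a minimal sketch of a runtime availability check via the C interface of the Objective-C runtime. UICollectionView is just an illustrative example of a class that only exists from iOS 6 onward; substitute whatever class the suspect framework provides.

#include <objc/runtime.h>
#include <stdio.h>

int main(void) {
    /* objc_getClass returns NULL when the class is absent from the
       running OS, e.g. UICollectionView on an iOS 5 device. */
    Class cls = objc_getClass("UICollectionView");
    if (cls != NULL) {
        printf("class is available, safe to take the new code path\n");
    } else {
        printf("class is missing, fall back to the iOS 5 code path\n");
    }
    return 0;
}

(This only helps once the newer framework is marked Optional in the target's link settings; a Required link against a framework that is missing on the device kills the process during dyld startup, before main ever runs, much like the trace above.)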

Related

EXC_BAD_INSTRUCTION only in iPhone 5 simulator

Running my code on the iPhone 5 simulator throws the exception shown in the image.
Running the code on any of the other simulators is just fine.
I can't spot where I made a mistake in this unspectacular line of code.
Does anyone else have this problem?
NSInteger (which is a type alias for Int in Swift) is a 32-bit integer on 32-bit platforms like the iPhone 5. The result of
NSInteger(NSDate().timeIntervalSince1970) * 1000
is 1480106653342 (at this moment) and does not fit into the range -2^31 ... 2^31-1 of 32-bit (signed) integers. Therefore Swift aborts the execution. (Swift does not "truncate" the result of integer arithmetic operations as is done in some other programming languages, unless you specifically use the "overflow" operators like &*.)
You can use Int64 for 64-bit computations on all platforms:
Int64(NSDate().timeIntervalSince1970 * 1000)
In your case, if a string is needed:
let lastLogin = String(Int64(NSDate().timeIntervalSince1970 * 1000))
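
To make the "truncate" contrast concrete, here is a small C sketch (C chosen purely for illustration) of what other languages do silently: converting the same 1480106653342 value down to 32 bits just throws the high bits away, where Swift would trap instead.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t ms = 1480106653342LL;    /* the millisecond value from above */
    int32_t truncated = (int32_t)ms; /* C keeps only the low 32 bits here
                                        (on the usual two's-complement
                                        platforms); no trap, no warning */
    printf("%lld becomes %d\n", (long long)ms, (int)truncated);
    return 0;
}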

Why are there RichInt or RichX in Scala?

This is a simple question.
Why don't the methods related to Int reside in Int itself?
Instead, Scala bothers to put the related methods into RichInt and relies on implicit conversion so that they work like methods of Int.
Why bother?
Scala doesn't exist in a vacuum. It was specifically designed to be hosted in an ecosystem / on a platform which was mostly designed for another language: the Java platform, the .NET platform, the ECMAScript platform, Cocoa, etc.
This means that in some cases compromises had to be made, in order to make Scala operate seamlessly, efficiently, and with high performance with the ecosystem, libraries and language of the host platform. That's why it has null, why it has classes (it could get by with just traits, and allow traits to have constructors), why it has packages (because they can be cleanly mapped to Java packages or .NET namespaces), why it doesn't have proper tail calls, doesn't have reified generics, etc. It's even why it has curly braces, not to make it easier to integrate with Java, but to make it easier to integrate with the brains of Java developers.
scala.Int is a fake class: it represents a native platform integer (primitive int in Java, System.Int32 in .NET, etc.). Being fake, it can't really have any methods other than the operations provided by the host environment.
The alternative would be to have all operations in the Int class and have the compiler know the difference between which methods are native and which aren't. But that's a special case, it makes more sense to concentrate efforts on making "enrich-my-library" fast in general, so that all programmers can benefit from those optimizations instead of spending time, money and resources on optimizations that only apply to twelve or so classes.
The question is: why not model Int richly and then optimize, for example by noting that it has an unboxed representation and that some operations are provided natively?
The answer must surely be that the compiler is still not very good at these optimizations.
scala> 42.isWhole
res1: Boolean = true
scala> :javap -prv -
[snip]
9: getstatic #26 // Field scala/runtime/RichInt$.MODULE$:Lscala/runtime/RichInt$;
12: getstatic #31 // Field scala/Predef$.MODULE$:Lscala/Predef$;
15: bipush 42
17: invokevirtual #35 // Method scala/Predef$.intWrapper:(I)I
20: invokevirtual #39 // Method scala/runtime/RichInt$.isWhole$extension:(I)Z
23: putfield #17 // Field res1:Z
26: return
or under -optimize
9: getstatic #26 // Field scala/runtime/RichInt$.MODULE$:Lscala/runtime/RichInt$;
12: getstatic #31 // Field scala/Predef$.MODULE$:Lscala/Predef$;
15: astore_1
16: bipush 42
18: invokevirtual #35 // Method scala/runtime/RichInt$.isWhole$extension:(I)Z
21: putfield #17 // Field res0:Z
24: return

Why does data stored in registers have memory addresses?

If I have the following code:
-(int)number {
    int i = 3;
    return i;
}
I can get the memory address of the integer i by doing &i (say, while paused at a breakpoint on the return line).
However, the corresponding assembly (ARM) will simply be:
MOV R0, #3
Nowhere is memory needed (except to store the instruction), so how can i have a memory address?
That code might not need to use memory, but that does not mean it doesn't use memory. The compiler can implement it however it wants. Without optimization, this means variables will probably all be stored in memory, whether they need to be or not. For example, consider this very basic program:
int main() {
    int i = 0;
    return i;
}
With optimization disabled (which it is by default), Apple clang 4.0 gives me the following assembly:
_main:
    sub sp, sp, #4
    movw r0, #0
    str r0, [sp]
    add sp, sp, #4
    bx lr
With optimization enabled, I get a much simpler program:
_main:
    mov r0, #0
    bx lr
As you can see, the unoptimized version stores the 0 in memory, but the optimized version doesn't. If you were to use the optimized version in the debugger, it would fail to give you the address of i. I actually got an error that i was undefined, since it had been optimized out completely.
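
The flip side also holds: as soon as the address of i escapes, the compiler must back i with a real memory location even when optimizing, which is why &i works again. A minimal sketch in plain C:

#include <stdio.h>

int main(void) {
    int i = 3;
    /* Passing &i to printf makes the address observable, so even an
       optimizing compiler has to give i an actual slot in memory
       (typically on the stack). */
    printf("i lives at %p\n", (void *)&i);
    return i;
}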

Why does running javap on a compiled Scala class show weird entries in the constant pool?

When running javap -v on the compiled class resulting from this bit of Scala (version 2.8.1 final):
class Point(x : Int, y : Int)
I get the following output for the constant pool entries, along with several terminal beeps indicating non-printable chars:
#19 = Utf8 Lscala/reflect/ScalaSignature;
#20 = Utf8 bytes
#21 = Utf8 \t2\"\t!!>Lg9A(Z7qift4A\nqCA\r!BA
aM\4
-\tAA[1wC&Q\nTWm;=R\"\t
E\tQa]2bYL!a\tMr\1PE*,7\r\t+\t)A-\t/%:$
eDu\taP5oSRtDc!CAqA!)Qca-!)!da-
#22 = Utf8 RuntimeVisibleAnnotations
#23 = Utf8 Point
#24 = Class #23 // Point
Any idea what's going on and why? I've never seen binary garbage in CONSTANT_Utf8 entries before.
I'm using an OpenJDK 7 build on Mac 10.6, if that makes a difference - I will try to replicate tomorrow when I have other OSes to play with, and will update accordingly.
The ScalaSignature element is where the extra type information that Scala needs is stored. It's being stored (encoded, obviously) in annotations now so that it can be made available to reflection tools.

Porting Issue: Pointer with offset in VC++

Ok, this compiles fine in GCC under Linux.
char * _v3_get_msg_string(void *offset, uint16_t *len) {/*{{{*/
    char *s;
    memcpy(len, offset, 2);
    *len = ntohs(*len);
    s = malloc(*len+1);
    memset(s, 0, *len+1);
    memcpy(s, offset+2, *len);
    s[*len] = '\0';
    *len += 2;
    return s;
}/*}}}*/
However, I'm having a problem porting it to Windows, due to the line...
memcpy(s, offset+2, *len);
Since offset is a void pointer, VC++ doesn't want to apply the offset to it. The usual caveat that C++ doesn't allow pointer offsets SHOULD be moot, as the whole project is being built under extern "C".
Now, this is only one function among many, and finding the answer to this will let them all be fixed. I would really prefer not to rewrite the library project from the ground up, and I don't want to build under MinGW. There has to be a way to do this that I'm missing and not finding on Google.
Well, you cannot do pointer arithmetic with void* in standard C or C++; GCC accepts it only as an extension that treats void* arithmetic like char* arithmetic, which is why it compiles there. Try memcpy(s, ((char*)offset)+2, *len);
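
In other words, cast to a pointer type with a known element size before doing the arithmetic. A sketch of the whole function with that one change applied (same behaviour as the original; the memset is dropped because s[*len] = '\0' already terminates the string, and error checking stays omitted as in the original):

#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>   /* for ntohs(); on Windows include winsock2.h instead */

char * _v3_get_msg_string(void *offset, uint16_t *len) {
    char *s;
    memcpy(len, offset, 2);               /* read the 16-bit length prefix */
    *len = ntohs(*len);                   /* network to host byte order */
    s = malloc(*len + 1);
    memcpy(s, (char *)offset + 2, *len);  /* cast first: char* arithmetic is well-defined */
    s[*len] = '\0';
    *len += 2;                            /* report total bytes consumed, as before */
    return s;
}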