EXC_BAD_INSTRUCTION only in iPhone 5 simulator - Swift

Running my code on the iPhone 5 simulator throws the exception shown in the image.
Running the code on any of the other simulators is just fine.
I can't spot where I made a mistake in this unspectacular line of code.
Does anyone else have this problem?

NSInteger (which is a type alias for Int in Swift) is a 32-bit
integer on 32-bit platforms like the iPhone 5.
The result of
NSInteger(NSDate().timeIntervalSince1970) * 1000
is 1480106653342 (at this moment) and does not fit into the
range -2^31 ... 2^31-1 of 32-bit (signed) integers.
Therefore Swift aborts the execution. (Swift does not "truncate"
the result of integer arithmetic operations as it is done in some
other programming languages, unless you specifically use the
"overflow" operators like &*.)
You can use Int64 for 64-bit computations on all platforms:
Int64(NSDate().timeIntervalSince1970 * 1000)
In your case, if a string is needed:
let lastLogin = String(Int64(NSDate().timeIntervalSince1970 * 1000))
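A minimal sketch of the difference, assuming Swift 3.1 or later (for the failable numeric initializers); Int32 stands in for Int on a 32-bit device:
import Foundation
let millis = NSDate().timeIntervalSince1970 * 1000 // a Double, currently around 1.48e12
// Int32(exactly:) returns nil instead of trapping when the value doesn't fit,
// which is what Int(millis) would do on a 32-bit device:
print(Int32(exactly: millis.rounded()) as Any) // nil - outside the 32-bit range
// The overflow operators mentioned above wrap around instead of trapping:
print(Int32.max &* 2) // -2
// Int64 is 64 bits on every platform, so this works everywhere:
let lastLogin = String(Int64(millis))
print(lastLogin)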

Related

In Swift 4, what is the correct way of typecasting a String value to a Float value?

I came across a situation where the following code does not work as expected in a project, whereas it works fine in a Playground.
strtof("0.9", nil) //expected to return 0.9
Float("0.9")! //expected to return 0.9
Here are the screenshots when I execute the same code in the project (Xcode's console) vs. the Playground.
Is this difference intended?
The problem occurs because decimal 0.9 can't be represented as a 32 bit floating point value (Float in Swift). The closest value just happens to be 0.899999976.
Both the debugger (Xcode's console) and playground do compute the exact same value.
It's just the way the output is generated which differs. Playgrounds seem to round the result while the debugger shows all significant digits.
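You can make the stored value visible yourself; a small sketch, assuming a recent Swift toolchain (String(format:) needs Foundation):
import Foundation
let f = Float("0.9")! // stored as the nearest representable 32-bit float
print(f) // 0.9 - Swift prints the shortest string that round-trips
print(String(format: "%.9f", f)) // 0.899999976 - the digits actually stored
print(Double(f)) // 0.8999999761581421 - widening makes the error visible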

Why does Swift String.Index keep its index value 4 times bigger than the real one?

I was trying to implement the Boyer-Moore algorithm in a Swift Playground, and I used Swift String.Index a lot; something that started to bother me is why indexes are kept 4 times bigger than what it seems they should be.
For example:
let why = "is s on 4th position not 1st".index(of: "s")
This code in a Swift Playground will produce a _compoundOffset of 4, not 1. I'm sure there is a reason for doing this, but I couldn't find an explanation anywhere.
It's not a duplicate of any question that explains how to get the index of a char in Swift; I know that, and I used the index(of:) function just to illustrate the question. I wanted to know why the value for the 2nd char is 4 and not 1 when using String.Index.
So I guess the way it keeps indexes is private and I don't need to know the inside implementation; it's probably connected with UTF-16 and UTF-32 encoding.
First of all, don't ever assume _compoundOffset is anything other than an implementation detail. _compoundOffset is an internal property of String.Index that uses bit masking to store two values in this one number:
The encodedOffset, which is the index's byte offset in terms of UTF-16 code units. This one is public and can be relied on. In your case encodedOffset is 1 because that's the offset for that character, as measured in UTF-16 code units. Note that the encoding of the string in memory doesn't matter! encodedOffset is always UTF-16.
The transcodedOffset, which stores the index's offset inside the current UTF-16 code unit. This is also an internal property that you can't access. The value is usually 0 for most indices, unless you have an index into the string's UTF-8 view that refers to a code unit which doesn't fall on a UTF-16 boundary. In that case, the transcodedOffset will store the offset in bytes from the encodedOffset.
Now why is _compoundOffset == 4? Because it stores the transcodedOffset in the two least significant bits and the encodedOffset in the 62 most significant bits. So the bit pattern for encodedOffset == 1, transcodedOffset == 0 is 0b100, which is 4.
You can verify all this in the source code for String.Index.
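You can also inspect the public offset yourself; a sketch assuming Swift 5 (in Swift 4 the property was called encodedOffset, and index(of:) was the earlier name of firstIndex(of:)):
let s = "is s on 4th position not 1st"
let idx = s.firstIndex(of: "s")! // index(of:) in Swift 4
print(idx.utf16Offset(in: s)) // 1 - the public UTF-16 offset
// internally, _compoundOffset == (1 << 2) | 0 == 4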

Swift casting causes the app to crash

I've been having an odd error in a recent Swift program of mine. It involves random occurrences; to simulate these I assign an event 'odds', then generate two random numbers (using those odds), and if the numbers are the same the action occurs. But the program crashes, inexplicably, in the generation part. The only explanation I can think of is the overabundance of casting required, but I'm not sure why it only crashes once in a while. I'd appreciate any insight as to why the casting crashes and any suggestions for avoiding such excessive casting.
My image ("Crash Error") shows the code and the error; the code below is a generalization of my code.
let rand = [Int(arc4random_uniform(UInt32(someInt))), Int(arc4random_uniform(UInt32(someInt)))]
if rand[0] == rand[1] {
    executeAction()
}
This occurs because your integer variable shootOdds at some point takes a negative value (or, less plausibly, a value larger than 4,294,967,295), causing a runtime error in the cast to an unsigned integer, UInt32(someInt). You can avoid this by making sure, before the let rand = ... line, that shootOdds >= 0 (or, in your code example above, someInt >= 0) and that your number does not exceed the upper limit of UInt32.
Note, then, that the error is not associated with the random-number generation but specifically with the negative integer being cast to an unsigned integer.
E.g., try the following in a playground to verify that you get the same runtime error:
let a = -1
let b = UInt32(a)
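If you'd rather recover than crash, UInt32(exactly:) returns nil for out-of-range values; a sketch where someInt stands in for your shootOdds:
import Foundation
let someInt = -3 // e.g. odds that went negative through a bug
if let odds = UInt32(exactly: someInt) {
    let rand = [Int(arc4random_uniform(odds)), Int(arc4random_uniform(odds))]
    if rand[0] == rand[1] { print("event occurs") }
} else {
    print("odds out of range for UInt32: \(someInt)") // handle the bad value instead of crashing
}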
Due to faulty code, my odds were increasing exponentially and eventually became too large to fit in a UInt32... and thus the error. Thanks for the help!

NSString to 64-bit integer conversion on iPhone

I am struggling with a very simple thing:
I receive some ids via an HTTP request as strings. I know they represent 64-bit integer id numbers.
How can I convert them to 64-bit integers (NSNumber or NSInteger)?
Methods like:
[nsstring integerValue]
[nsstring intValue]
seem to be limited to 32 bits (max value: 2147483647).
Any Hints?
A data model with 64-bit integer properties compiles fine, so the iPhone does support such numbers, for goodness' sake.
There must be some simple conversion method; this kind of conversion is so common in devices that talk over HTTP.
Have you tried longLongValue?
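In Objective-C that is [idString longLongValue]. For illustration, the same call through the Swift bridge, next to the pure-Swift equivalent (idString is a made-up example value):
import Foundation
let idString = "9223372036854775807" // a 64-bit id received as a string
let viaNSString = (idString as NSString).longLongValue // long long, i.e. Int64 in Swift
let viaSwift = Int64(idString) // Optional; nil if the string is malformed
print(viaNSString, viaSwift as Any)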
If they have a fixed length, you could split them every 9 digits, take the intValue of each part, and combine the values using factors of 10^9. Surely not the most elegant way, but it should work.

How should I declare a long in Objective-C? Is NSInteger appropriate?

I see NSInteger used quite often, and the typedef for it on the iPhone is a long, so technically I could use it where I expect int(64) values. But should I be more explicit and use something like int64_t or long directly? What would be the downside of just using long?
IIRC, long on the iPhone/ARM is 32 bits. If you want a guaranteed 64-bit integer, you should (indeed) use int64_t.
Integer data type sizes:
Type       | ILP32   | LP64
short      | 2 bytes | 2 bytes
int        | 4 bytes | 4 bytes
long       | 4 bytes | 8 bytes
long long  | 8 bytes | 8 bytes
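If you want to verify these sizes on your own platform, one quick way from Swift, which aliases the C types as CShort, CInt, CLong, and CLongLong:
print(MemoryLayout<CShort>.size) // 2
print(MemoryLayout<CInt>.size) // 4
print(MemoryLayout<CLong>.size) // 8 under LP64, 4 under ILP32
print(MemoryLayout<CLongLong>.size) // 8
print(MemoryLayout<Int>.size) // Int tracks NSInteger: 8 on 64-bit, 4 on 32-bit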
It may be useful to know that:
The compiler defines the __LP64__ macro when compiling for the 64-bit runtime.
NSInteger is a typedef of long, so it will be 32 bits wide in a 32-bit environment and 64 bits wide in a 64-bit environment.
When converting to 64-bit you can simply replace all your ints and longs with NSInteger and you should be good to go.
Important: pay attention to data alignment. LP64 uses natural alignment for all integer data types, but ILP32 uses 4-byte alignment for all integer data types with a size of 4 bytes or greater.
You can read more about 32 to 64 bit conversion in the Official 64-Bit Transition Guide for Cocoa Touch.
Answering your questions:
How should I declare a long in Objective-C? Is NSInteger appropriate?
You can use either long or NSInteger but NSInteger is more idiomatic IMHO.
But should I be more explicit and use something like int64_t or long directly?
If you expect consistent 64-bit sizes, neither long nor NSInteger will do; you'll have to use int64_t (as Wevah said).
What would be the downside of just using long?
It's not idiomatic and you may have problems if Apple rolls out a new architecture again.
If you need a type of known specific size, use the type that has that known specific size: int64_t.
If you need a generic integer type and the size is not important, go ahead and use int or NSInteger.
NSInteger's width depends on whether you are compiling for 32-bit or 64-bit: it's defined as long for 64-bit (and for the iPhone), and as int for 32-bit.
So on the iPhone the width of NSInteger is the same as the width of a long, which is compiler-dependent. Most compilers make long the same width as the native word, i.e. 32 bits on 32-bit architectures and 64 bits on 64-bit architectures.
Given the uncertainty over the width of NSInteger, I use it only for variables passed to Cocoa APIs where NSInteger is specified. If I need a fixed-width type, I go for the ones defined in stdint.h. If I don't care about the width, I use the C built-in types.
If you want to declare something as long, declare it as long. Be aware that long can be 32 or 64 bit, depending on the compiler.
If you want to declare something to be as efficient as possible, and big enough to count items, use NSInteger or NSUInteger. Note that both can be 32 or 64 bits, and can actually be different underlying types (int or long) depending on the compiler, which protects you from mixing up types in some cases.
If you want exactly 32 or 64 bits and nothing else, use int32_t, uint32_t, int64_t, or uint64_t. Be aware that either type can be unnecessarily inefficient on some compilers.