With this code:
let rand : Int = Int(arc4random())
NSLog("rand = %d %i %# \(rand)",rand,rand,String(rand))
I get:
rand = -1954814774 -1954814774 2340152522 2340152522
Why are all 4 values not the same?
arc4random generates an unsigned 32-bit integer. Int is 64 bits on your machine, so the conversion doesn't overflow and you get the same number back. But %i and %d are signed 32-bit format specifiers. That's why you get a negative number whenever arc4random returns a value greater than 2^31 - 1, a.k.a. Int32.max.
For example, when 2340152522 is generated, you get -1954814774 in the %i position because:
Int32(bitPattern: 2340152522) == -1954814774
On the other hand, converting the Int to a String doesn't change the number, because Int is a signed 64-bit integer on your platform and holds the value exactly.
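As a sketch of one way to log the value without the sign flip (not from the original answer; it assumes a 64-bit platform and uses the standard C specifiers %u and %ld):

import Foundation

let rand = Int(arc4random())        // safe on 64-bit: every UInt32 value fits in an Int
NSLog("rand = %u", UInt32(rand))    // %u is the unsigned 32-bit specifier
NSLog("rand = %ld", rand)           // %ld matches a 64-bit Int
NSLog("rand = \(rand)")             // or let string interpolation do the work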
Related
What are the differences between the data types Int and UInt8 in Swift?
It looks like UInt8 is used for binary data, and I need to convert UInt8 to Int. Is this possible?
The U in UInt stands for unsigned integer.
It is not used only for binary data; UInt holds non-negative numbers only, like the natural numbers.
I recommend getting to know how negative numbers are represented by a computer (two's complement).
Int8 is an Integer type which can store positive and negative values.
UInt8 is an unsigned integer which can store only positive values.
You can always convert a UInt8 to an Int (every UInt8 value fits), but if you want to convert an Int8 to a UInt8, make sure the value is non-negative.
UInt8 is always an 8-bit store, while the size of Int is not fixed; it is defined by the platform:
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/TheBasics.html
Int could be 32 or 64 bits
Updated for Swift:

Type    Range                                          Bytes per Element
UInt8   0 to 255                                       1
Int     -9223372036854775808 to 9223372036854775807    8 on 64-bit platforms (4 on 32-bit, where the range is -2147483648 to 2147483647)
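To check the sizes at runtime, here is a small sketch (not part of the original answer):

print(MemoryLayout<Int>.size)    // 8 on a 64-bit platform, 4 on a 32-bit one
print(MemoryLayout<UInt8>.size)  // always 1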
If you want to find the max and min range of Int or UInt8:
let maxIntValue = Int.max
let maxUInt8Value = UInt8.max
let minIntValue = Int.min
let minUInt8Value = UInt8.min
If you want to convert a UInt8 to an Int, a plain Int(value) always works, because every UInt8 value fits in an Int. For a full-width UInt that might not fit, you can use the simple code below:
func convertToInt(unsigned: UInt) -> Int {
    // Values up to Int.max convert directly; larger values are wrapped
    // around into the negative range (the same reinterpretation of the bits).
    let signed = (unsigned <= UInt(Int.max)) ?
        Int(unsigned) :
        Int(unsigned - UInt(Int.max) - 1) + Int.min
    return signed
}
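For reference, the same reinterpretation is built in as Int(bitPattern:), and a UInt8 never needs the wrap-around. A small sketch (not part of the original answer):

let byte: UInt8 = 200
print(Int(byte))                         // 200 – always safe, every UInt8 fits in an Int
print(convertToInt(unsigned: UInt.max))  // -1
print(Int(bitPattern: UInt.max))         // -1 – same result using the built-in initializer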
#include <cstdio>

class Distance
{
public:
    int a;
};

int main()
{
    Distance d1;            // declaring the object
    d1.a = 1000;            // a is given a value first (1000 is one of the values from the question)
    char *p = (char *)&d1;  // p points at the first byte of d1, which is the first byte of a
    *p = 1;                 // overwrite that byte with 1
    printf("\n %d ", d1.a); // prints 769 on a little-endian machine
    return 0;
}
This is my code.
When I pass 'a' a value like 256 or 512, I get 257 or 513 respectively, but for a value like 1000 I get 769, and for values like 16, 128, or 100 I get 1.
At first I thought it might be related to powers of 2 being incremented by 1 because of a change in their binary representation, but adding 1 to the binary representation of 1000 would not give me 769.
Please help me understand this code.
*p = 1 overwrites the lowest-addressed byte (char) of the object with 00000001; on a little-endian machine that is the least significant byte of a.
Because you cast the int's address to char *, only 8 bits are replaced.
The binary for (int)1000 is 0000001111101000;
replacing its last 8 bits with 00000001 gives 0000001100000001, which is 769.
256 and 512 appear to just gain 1 because their last 8 bits are all zeros: 256 is 100000000, and setting the low byte to 1 gives 100000001, which is 257; likewise 512 becomes 513.
You get 1 for 16, 128, and 100 because those values fit entirely within the low byte, so overwriting that byte with 1 wipes out the original value: 16 is 00010000, and after the overwrite the whole int is 00000001, i.e. 1.
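The same low-byte overwrite can be sketched in Swift (an illustration, not part of the original answers; it assumes a little-endian machine):

var a: Int32 = 1000
withUnsafeMutableBytes(of: &a) { bytes in
    bytes[0] = 1         // overwrite the byte at the lowest address, i.e. the least significant byte
}
print(a)                 // 769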
As the title states, lldb reports the value of UInt.max to be a UInt of -1, which seems highly illogical. Considering that let uint: UInt = -1 doesn't even compile, how is this even possible? I don't see any way to have a negative value of UInt at runtime because the initializer will crash if given a negative value. I want to know the actual maximum value of UInt.
The Int value of -1 and the UInt value UInt.max have the same bit representation in memory.
You can see that if you do:
let i = Int(bitPattern: UInt.max) // i == -1
and in the opposite direction:
if UInt(bitPattern: Int(-1)) == UInt.max {
print("same")
}
Output:
same
The debugger is incorrectly displaying UInt.max as a signed Int. They have the same bit representation in memory (0xffffffffffffffff on a 64-bit system such as iPhone 6 and 0xffffffff on a 32-bit system such as iPhone 5), and the debugger apparently chooses to show that value as an Int.
You can see the same issue if you do:
print(String(format: "%d", UInt.max)) // prints "-1"
It doesn't mean UInt.max is -1, just that both have the same representation in memory.
To see the maximum value of UInt, do the following in an app or on a Swift Playground:
print(UInt.max)
This will print 18446744073709551615 on a 64-bit system (such as a Macintosh or iPhone 6) and 4294967295 on a 32-bit system (such as an iPhone 5).
In lldb:
(lldb) p String(UInt.max)
(String) $R0 = "18446744073709551615"
(lldb)
This sounds like an instance of the same bit pattern being interpreted under both the two's-complement and unsigned representations.
In the unsigned world, a binary number is just a binary number: the more bits that are 1, the bigger the number, since no bit is needed to encode the sign. To cover both positive and negative values, the two's-complement scheme encodes non-negative values as normal as long as the most significant bit is not 1; if the most significant bit is 1, the bits are reinterpreted as described at https://en.wikipedia.org/wiki/Two%27s_complement.
As shown there, the two's-complement representation of -1 has all bits set to 1, which is also the maximal unsigned value.
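A quick way to see this using the 8-bit types (a small sketch, not part of the original answer):

let bits: UInt8 = 0b1111_1111        // all eight bits set: 255 when read as unsigned
let signed = Int8(bitPattern: bits)  // the same bits read as two's complement
print(bits, signed)                  // prints "255 -1"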
I'm trying to get familiar with Swift and am testing several things.
Here's a strange thing I can't understand.
var count : NSInteger = 19
var percent : CGFloat = 22.01
var random : NSInteger = NSInteger(percent)
NSLog("%d, %f, %d", count, percent, random);
println("\(count), \(percent), \(random)")
It should print 19, 22.01, 22 but the log is...
19, 0.000000, 33875549
19, 22.0100002288818, 22
What's wrong here? After I removed the type annotations, it works fine with println but not with NSLog.
Any Idea why the log is not correct?
ADDED
What about println? Is there no way to print 22.01 using \()?
My guess is you're compiling for 32-bit iOS, where CGFloat is a 32-bit float.
The closest float to 22.01 is exactly 22.0100002288818359375.
The closest 64-bit double to 22.01 is exactly 22.010000000000001563194018672220408916473388671875.
It appears that Swift string interpolation converts a double to the shortest string that would convert back to exactly the same double. It converts a float to a double (with the extra bits being zeros), then converts the double to a string as if it had been given a double in the first place.
The shortest string that converts back to Double(22.010000000000001563194018672220408916473388671875) is 22.01. But the shortest string that converts back to Double(22.0100002288818359375) is 22.0100002288818.
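To see the two stored values side by side, here is a small sketch (String(format:) is used because the digits printed by plain print can differ between Swift versions):

import Foundation

let f: Float  = 22.01
let d: Double = 22.01
print(String(format: "%.20f", Double(f)))  // 22.01000022888183593750
print(String(format: "%.20f", d))          // 22.01000000000000156319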
CGFloat is awkward because it can be 32 or 64 bits depending on the system, and NSLog seems to have an issue with it. If you declare percent explicitly as a Float or Double, it will work correctly. You can also get the underlying value with the .native property:
NSLog("%d, %f, %d", count, percent.native, random);
Use %@ for CGFloat and it works fine:
NSLog("%d, %@, %d", count, percent, random);
This works because you can call .description on a CGFloat.
func rand(max: Int?) -> Int {
var index = Int(arc4random())
return max? != nil ? (index % max!) : index
}
I get an exception on the last line: EXC_BAD_INSTRUCTION
I'm guessing it has something to do with the fact that the iPhone 5S is 64 bit while the 5 is not, but I don't see anything in the function above that deals with 64 bits?
Edit
I was able to resolve the issue with the following adjustments, but I still cannot explain why.
func rand(max: Int?) -> Int {
var index = arc4random()
return max? != nil ? Int(index % UInt32(max!)) : Int(index)
}
The Int integer type is a 32-bit integer on the iPhone 5 and a 64-bit integer on the 5S. Since arc4random() returns a UInt32, which has twice the positive range of an Int on the iPhone 5, your first version basically has a 50% chance of crashing on this line:
var index = Int(arc4random())
Your modified version waits to convert until after you take the value modulo max, so it's safe to convert to Int there. You should check out arc4random_uniform, which handles the modulo for you and avoids some bias inherent in your current implementation.
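A sketch of that suggestion (an illustration, not the answerer's code; the name randomBelow is just for the sketch, and it assumes max, when given, is positive and fits in a UInt32):

import Darwin

func randomBelow(max: Int?) -> Int {
    if let max = max, max > 0, max <= Int(Int32.max) {
        // arc4random_uniform avoids both the 32-bit overflow and the modulo bias
        return Int(arc4random_uniform(UInt32(max)))
    }
    // Fallback: a non-negative value that fits in Int on both 32- and 64-bit platforms
    return Int(arc4random_uniform(UInt32(Int32.max)))
}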
As you seem to have found out, arc4random returns an unsigned 32-bit integer, so 0 to 4,294,967,295. Also, Int is a different size depending on the system it is running on.
From the Swift reference:
On a 32-bit platform, Int is the same size as Int32.
On a 64-bit platform, Int is the same size as Int64.
On an iPhone 5, Int can only hold −2,147,483,648 to +2,147,483,647. On an iPhone 5S, Int can hold −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807. An unsigned 32-bit integer can overflow an Int32 but never an Int64.
More information on what random function to use and why.