Dot notation with p for hexadecimal numeric literals in Swift

I'm working through the first basic playground in https://github.com/nettlep/learn-swift using Xcode
What exactly is happening with this expression?
0xC.3p0 == 12.1875
I've learned about hexadecimal literals and the special "p" notation that indicates a power of 2.
0xF == 15
0xFp0 == 15 // 15 * 2^0
If I try 0xC.3 I get the error: Hexadecimal floating point literal must end with an exponent.
I found this nice overview of numeric literals and another deep explanation, but I didn't see anything that explains what .3p0 does.
I've forked the code and upgraded this lesson to Xcode 7 / Swift 2 -- here's the specific line.

This is Hexadecimal exponential notation.
By convention, the letter P (or p, for "power") represents times two
raised to the power of ... The number after the P is decimal and
represents the binary exponent.
...
Example: 1.3DEp42 represents hex(1.3DE) × dec(2^42).
For your example, we get:
0xC.3p0 represents 0xC.3 * 2^0 = 0xC.3 * 1 = hex(C.3) = 12.1875
where hex(C.3) = dec(12 + 3/16) = dec(12.1875)
As an example, you can try 0xC.3p1 (equals hex(C.3) * dec(2^1)), which yields double the value, i.e., 24.375.
You can also study how the binary exponent grows in a playground, using the hex value 1:
// ...
print(0x1p-3) // 1/8 (0.125)
print(0x1p-2) // 1/4 (0.25)
print(0x1p-1) // 1/2 (0.5)
print(0x1p1) // 2.0
print(0x1p2) // 4.0
print(0x1p3) // 8.0
// ...
Finally, this is also explained in Apple's Language Reference, Lexical Structure: Floating-Point Literals:
Hexadecimal floating-point literals consist of a 0x prefix, followed
by an optional hexadecimal fraction, followed by a hexadecimal
exponent. The hexadecimal fraction consists of a decimal point
followed by a sequence of hexadecimal digits. The exponent consists
of an upper- or lowercase p prefix followed by a sequence of decimal
digits that indicates what power of 2 the value preceding the p is
multiplied by. For example, 0xFp2 represents 15 x 2^2, which
evaluates to 60. Similarly, 0xFp-2 represents 15 x 2^-2, which
evaluates to 3.75.
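As a quick sanity check, the quoted examples can be evaluated directly in a playground (each literal infers as Double):
print(0xFp2)    // 60.0   (15 * 2^2)
print(0xFp-2)   // 3.75   (15 * 2^-2)
print(0xC.3p0)  // 12.1875
print(0xC.3p1)  // 24.375 (same digits, binary exponent increased by one)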

Related

how does hexadecimal to decimal work in swift? [duplicate]

I don't understand how floating point numbers are represented in hex notation in Swift. Apple's documentation shows that 0xC.3p0 is equal to 12.1875 in decimal. Can someone walk me through how to do that conversion? I understand that before the decimal point the hex value 0xC = 12. The 3p0 after the decimal point is where I am stumped.
From the documentation:
Floating-Point Literals
...
Hexadecimal floating-point literals consist of a 0x prefix, followed
by an optional hexadecimal fraction, followed by a hexadecimal
exponent. The hexadecimal fraction consists of a decimal point
followed by a sequence of hexadecimal digits. The exponent consists of
an upper- or lowercase p prefix followed by a sequence of decimal
digits that indicates what power of 2 the value preceding the p is
multiplied by. For example, 0xFp2 represents 15 × 2^2, which evaluates
to 60. Similarly, 0xFp-2 represents 15 × 2^-2, which evaluates to 3.75.
In your case
0xC.3p0 = (12 + 3/16) * 2^0 = 12.1875
Another example:
0xAB.CDp4 = (10*16 + 11 + 12/16 + 13/16^2) * 2^4 = 2748.8125
This format is very similar to the %a printf-format (see for example
http://pubs.opengroup.org/onlinepubs/009695399/functions/fprintf.html).
It can be used to specify a floating point number directly in its
binary IEEE 754 representation, see Why does Swift use base 2 for the exponent of hexadecimal floating point values?
for more information.
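To illustrate that connection (just a sketch; the exact %a output can vary slightly between C libraries), Foundation's String(format:) can print the %a form, and Double's failable string initializer accepts the same hexadecimal notation:
import Foundation

let x = 0xC.3p0                      // 12.1875
// %a shows the normalized binary form, typically "0x1.86p+3" here,
// i.e. 1.5234375 * 2^3 = 12.1875
print(String(format: "%a", x))
// The string initializer parses hex float notation as well
print(Double("0x1.86p+3") ?? .nan)   // 12.1875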
Interpret 0xC.3p0 using the place value system:
C (or 12) is in the 16^0 place
3 is in the 16^-1 place (and 3/16 == 0.1875)
p says the exponent follows (like the e in 6.022e23 in base 10)
0 is the exponent (in base 10) that is the power of 2 (2^0 == 1)
So putting it all together
0xC.3p0 = (12 + (3/16)) * 2^0 = 12.1875
In order to sum up what I've read, you can see those representations as follows:
0xC.3p0 = (12*16^0 + 3*16^-1) * 2^0 = 12.1875
From Martin R's example above:
0xAB.CDp4 = (10*16^1 + 11*16^0 + 12*16^-1 + 13*16^-2) * 2^4 = 2748.8125
The 0xC is 12, as you said. The fractional part is 3 * (1/16) * 2^0.
So you take each fractional hex digit, divide it by the appropriate power of 16, and then multiply the whole value by 2 raised to the number after the p.
Hexadecimal digits run 0-9, then A=10, B=11, C=12, D=13, E=14, F=15, and p0 means 2^0.
ex: 0xC = 12 (the 0x prefix marks a hexadecimal literal)
For the digits after the point, as in 0xC.3p0, each digit is divided by an increasing power of 16.
So here it is 3/16 = 0.1875,
and 0xC.3p0 = (12 + (3/16)) * 2^0
If it were 0xC.43p0, the 4 would contribute 4/16 and the 3 would contribute 3/(16^2), and so on as the fractional part grows.
ex: 0xC.231p1 = (12 + 2/16 + 3/(16^2) + 1/(16^3)) * 2^1 = 24.27392578125
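The expansions above can also be checked in a playground; the literal and the written-out sum print the same value:
print(0xAB.CDp4)                                  // 2748.8125
print((10*16 + 11 + 12.0/16 + 13.0/256) * 16)     // 2748.8125
print(0xC.231p1)                                  // 24.27392578125
print((12.0 + 2.0/16 + 3.0/256 + 1.0/4096) * 2)   // 24.27392578125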

Why does Int(Float(Int.max)) give me an error?

I observed something really strange. If you run this code in Swift:
Int(Float(Int.max))
It crashes with the error message:
fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
This is really counter-intuitive, so I expanded the expression into 3 lines and tried to see what happens in each step in a playground:
let a = Int.max
let b = Float(a)
let c = Int(b)
It crashes with the same message. This time, I see that a is 9223372036854775807 and b is 9.223372e+18. It is obvious that a is greater than b by 36854775807. I also understand that floating points are inaccurate, so I expected something less than Int.max, with the last few digits being 0.
I also tried this with Double, it crashes too.
Then I thought, maybe this is just how floating point numbers behave, so I tested the same thing in Java:
long a = Long.MAX_VALUE;
float b = (float)a;
long c = (long)b;
System.out.println(c);
It prints the expected 9223372036854775807!
What is wrong with Swift?
There aren't enough bits in the mantissa of a Double or Float to accurately represent 19 significant digits, so you are getting a rounded result.
If you print the Float using String(format:) you can see a more accurate representation of the value of the Float:
let a = Int.max
print(a) // 9223372036854775807
let b = Float(a)
print(String(format: "%.1f", b)) // 9223372036854775808.0
So the value represented by the Float is 1 larger than Int.max.
Many values will be converted to the same Float value. The question becomes: how much would you have to reduce Int.max before it results in a different Double or Float value?
Starting with Double:
var y = Int.max
while Double(y) == Double(Int.max) {
    y -= 1
}
print(Int.max - y) // 512
So with Double, the last 512 Ints all convert to the same Double.
Float has fewer bits to represent the value, so more values map to the same Float. Switching to decrements of 1000 so that it runs in a reasonable time:
var y = Int.max
while Float(y) == Float(Int.max) {
    y -= 1000
}
print(Int.max - y) // 274877907000
So, your expectation that a Float could accurately represent a specific Int was misplaced.
Follow up question from the comments:
If float does not have enough bits to represent Int.max, how is it
able to represent a number one larger than that?
Floating point numbers are represented in two parts: a mantissa and an exponent. The mantissa represents the significant digits (in binary) and the exponent represents the power of 2. As a result, a floating point number can exactly express a power of 2 by having a mantissa of 1 with an exponent that represents the power.
Numbers that are not exact powers of 2 may have a binary pattern that contains more digits than can be represented in the mantissa. This is the case for Int.max (which is 2^63 - 1), because in binary that is 63 consecutive 1's. A Float, which is 32 bits, cannot store a 63-bit mantissa, so the value has to be rounded. In the case of Int.max, rounding up by 1 results in a 1 followed by 63 zeros, which is 2^63. Starting from the left, there is only 1 significant bit to be represented by the mantissa (the trailing 0's come for free), so this number is a mantissa of 1 and an exponent of 63.
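You can confirm that decomposition in a playground; Float exposes the significand and exponent directly (using the usual 1.x * 2^n normalization):
let f = Float(Int.max)
print(f.significand)   // 1.0
print(f.exponent)      // 63
// So f is exactly 1.0 * 2^63, one more than Int.max (2^63 - 1).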
See MartinR's answer for an explanation of what Java is doing.
Swift and Java behave differently when converting a "too large" floating point
number to an integer. Java clamps any floating point value
larger than Long.MAX_VALUE = 2^63-1 to that maximum:
long c = (long)(1.0E+30f);
System.out.println(c);
// 9223372036854775807
Swift expects that the value is in the range of Int, and aborts
with a runtime exception otherwise:
/// Creates a new instance by rounding the given floating-point value toward
/// zero.
///
/// - Parameter other: A floating-point value. When `other` is rounded toward
/// zero, the result must be within the range `Int.min...Int.max`.
public init(_ value: Float)
Example:
let c = Int(Float(1.0E30))
print(c)
// fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
The same happens with your value Float(Int.max), which is the
floating point representable value closest to Int.max and happens
to be larger than Int.max.
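If you want the conversion to fail gracefully instead of trapping, one option (just a sketch, not the only approach) is the failable Int(exactly:) initializer:
// Int(exactly:) returns nil when the Float's value is outside Int's range
// (or is not an integer), instead of trapping like Int(_:).
let f = Float(Int.max)                  // 2^63, one above Int.max
if let n = Int(exactly: f) {
    print(n)
} else {
    print("not representable as Int")   // this branch runs
}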

Algorithm to convert integer (represented as an array) with base n to integer with base m

I have a very long integer. The integer is represented by an array of unsigned chars.
Example: the integer 1234 with base 10 is represented in the array as [4,3,2,1], as [2,2,3,2] in base 8, and as [2,13,4] in base 16.
Now I want to convert my integer with base n to another integer with base m. In my pursuit of an answer I came across Wallar's algorithm, originally from here.
from math import *
def baseExpansion(n,c,b):
    j = 0
    base10 = sum([pow(c,len(n)-k-1)*n[k] for k in range(0,len(n))])
    while floor(base10/pow(b,j)) != 0: j = j+1
    return [floor(base10/pow(b,j-p)) % b for p in range(1,j+1)]
At first I thought this was my answer, but unfortunately it is not. The problem I have is that the algorithm computes the sum. In my case this is a problem because the variable base10 is a 32-bit unsigned integer. Therefore, when my integer, represented as an array, has more than 10 digits, it cannot convert the number anymore. Does anyone have a solution?
Here's the school-book algorithm for doing what you're trying. You start with a representation of zero and call it the running total. Then, for each digit of the number to be converted, starting with the most significant and going to the least significant, you 1) multiply the running total by the base of the source number and 2) add the digit to the running total. Now all you need is algorithms to do the multiplication and addition on the digit array (and you can actually do both at once):
1) Set the current source digit to a variable, call it "carry".
2) For each digit in your new number, starting with the least significant and going to the most significant:
2a) set carry to the current digit of the new number times the source base, plus carry;
2b) set the current digit to carry mod the output base;
2c) set carry to carry divided by the output base.
If carry is still nonzero after the last digit, append its remaining digits as new most-significant digits. And that should do it. There is an implementation of what you are trying to do here: http://www.cis.ksu.edu/~howell/calculator/comparison.html
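Below is a minimal Swift sketch of that running-total approach. The function name, the UInt8 digit type, and the least-significant-digit-first layout are assumptions chosen to match the question's [4,3,2,1] example, and it assumes the bases are small enough that every digit fits in a UInt8.
// A minimal sketch of the school-book conversion described above.
func convertDigits(_ digits: [UInt8], from sourceBase: Int, to targetBase: Int) -> [UInt8] {
    var result: [UInt8] = []                 // running total in targetBase, LSB first
    for digit in digits.reversed() {         // most significant source digit first
        var carry = Int(digit)               // start the "carry" with the source digit
        for i in 0..<result.count {          // multiply running total by sourceBase and add carry
            let t = Int(result[i]) * sourceBase + carry
            result[i] = UInt8(t % targetBase)
            carry = t / targetBase
        }
        while carry > 0 {                    // append leftover carry as new high digits
            result.append(UInt8(carry % targetBase))
            carry /= targetBase
        }
    }
    return result.isEmpty ? [0] : result
}

// 1234 in base 10 -> base 16 should be [2, 13, 4] (0x4D2), as in the question
print(convertDigits([4, 3, 2, 1], from: 10, to: 16))   // [2, 13, 4]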

Exponent value not displaying, what should I do?

How do I display an exponent value in a text box after a calculation in the iPhone SDK? For example, say 6.4516e-10. I am not getting an answer in my text box after calculating 10 * 6.4516e-10. Please tell me the solution.
Use stringWithFormat: and an exponent format specifier:
From the man page on printf:
e, E: The argument is printed in the style [-]d.ddde±dd, where there is one digit before the decimal point and the number of digits after it is equal to the precision specification for the argument; when the precision is missing, 6 digits are produced.
So you would want a format something like: %.4e
float n = 6.4516e-10;
n = n * 10;
NSLog(#"n: %.4e", n);
2011-08-29 07:36:38.158 Test[39477:707] n: 6.4516e-09
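For comparison, the same formatting in Swift (assuming Foundation is available for String(format:)):
import Foundation

let n: Float = 6.4516e-10 * 10
print(String(format: "%.4e", n))   // 6.4516e-09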