Trying to get an SKLabelNode to show a float, but instead it only shows an integer. This is what I have at the moment:
sdrLabel.text = String(Float(NSUserDefaults.standardUserDefaults().integerForKey("TotalScore") / NSUserDefaults.standardUserDefaults().integerForKey("TotalDeath")))
I tried to type cast the result to a Float and present it as a String, but it still shows an int. For example, for 233 / 8 it shows 29.
You are losing precision because you are working with integers and only casting the result to Float. The division itself is performed in integer arithmetic, so the fractional part is discarded before the cast ever happens. You need to work with Float (or Double) when dividing those two numbers.
You can use this instead:
sdrLabel.text = String(NSUserDefaults.standardUserDefaults().doubleForKey("TotalScore") / NSUserDefaults.standardUserDefaults().doubleForKey("TotalDeath"))
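Alternatively, if you prefer to keep the values stored as integers, a minimal sketch along these lines should also work (assuming the same "TotalScore" and "TotalDeath" keys); the point is to convert each operand before dividing, and it also guards against a zero death count:
let score = NSUserDefaults.standardUserDefaults().integerForKey("TotalScore")
let deaths = NSUserDefaults.standardUserDefaults().integerForKey("TotalDeath")
if deaths > 0 {
    // Convert each operand first so the division happens in Double
    sdrLabel.text = String(Double(score) / Double(deaths))   // e.g. "29.125" for 233 / 8
}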
I've read a lot that NSDecimalNumber is the best type to use for currency. However, I'm still getting floating-point issues. For example:
let a: NSDecimalNumber = 0.07 //0.07000000000000003
let b: NSDecimalNumber = 7.dividing(by: 100) //0.06999999999999999
I know I could use Decimal and b would be what I'm expecting:
let b: Decimal = 7 / 100 //0.07
I'm using Core Data in my app, so I'm stuck with NSDecimalNumber, unless I want to convert a lot of NSDecimalNumbers to Decimals.
Can someone help me get 0.07?
The problem is that you're effectively doing floating-point math (with the problems a Double has faithfully capturing fractional decimal values) and then creating a Decimal (or NSDecimalNumber) from a Double value that has already introduced this discrepancy. Instead, you want to create your Decimal values before doing your division (or before having a fractional Double value at all, even as a literal).
So, the following is equivalent to your example: it builds a Double representation (with the limitations that entails) of 0.07, and you end up with a value that is not exactly 0.07:
let value = Decimal(7.0 / 100.0) // or NSDecimalNumber(value: 7.0 / 100.0)
Whereas this does not suffer this problem because we are dividing a decimal 7 by a decimal 100:
let value = Decimal(7) / Decimal(100) // or NSDecimalNumber(value: 7).dividing(by: 100)
Or, other ways to create 0.07 value but avoiding Double in the process include using strings:
let value = Decimal(string: "0.07") // or NSDecimalNumber(string: "0.07")
Or specifying the significand (mantissa) and exponent:
let value = Decimal(sign: .plus, exponent: -2, significand: 7) // or NSDecimalNumber(mantissa: 7, exponent: -2, isNegative: false)
Bottom line, avoid Double representations entirely when using Decimal (or NSDecimalNumber), and you won't suffer the problem you described.
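Also note that Decimal bridges to NSDecimalNumber, so using Core Data doesn't force you to abandon Decimal for the arithmetic itself. A minimal sketch, assuming a hypothetical managed object item with a price attribute exposed to Swift as NSDecimalNumber:
let price = Decimal(7) / Decimal(100)      // exactly 0.07, computed in decimal arithmetic
item.price = price as NSDecimalNumber      // bridge to NSDecimalNumber when storing
let readBack = item.price as Decimal       // and bridge back to Decimal when reading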
When I follow an example from a tutorial, I run into an issue in the constants and variables topic. I'd appreciate it if someone could explain this example to me.
When you don't specify a type, a floating-point number literal will be inferred to be of type Double.
Double, as its name suggests, has double the precision of Float. So when you do:
let a = 64.1
The actual value in memory may be something like 64.099999999999991. Since Double displays only about 16 significant digits, it shows 64.09999999999999, rounding off the last "1".
Why does let b: Float = 64.1 show the correct number?
When you specify the type as Float, the precision decreases: Float shows only about 8 significant digits. That's 64.099999, but there's a "9" straight after that, so it rounds up to 64.1.
This has nothing to do with explicitly stating the variable type. Try specifying it as a Double:
let b: Double = 64.1
It will still show 64.09999999999999.
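A quick playground sketch (the exact digits printed can vary between Swift versions) illustrates the difference:
let a = 64.1            // inferred as Double; may display as 64.09999999999999
let b: Float = 64.1     // lower precision, so the rounding lands back on 64.1
let c: Double = 64.1    // explicitly Double; same result as a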
I'm aware of some relatively similar questions on this site, but if they do apply to my problem (which I'm not certain they do) then I certainly don't understand them. Here's my problem:
var degrees = UInt32()
var radians = Double()
let degrees:UInt32 = arc4random_uniform(360)
let radians = angle * (M_PI / 180)
This returns an error, pointing at the multiplication operator, reading: "Binary operator '*' cannot be applied to operands of type 'UInt32' and 'Double'".
I'm fairly sure I need to have the degrees variable be of type UInt32 to randomise it, and also that the pi constant cannot be made to be of UInt32, or at least I don't know how, as I'm relatively new to Xcode and Swift in general.
I'd be very grateful if anyone had a solution to my problem.
Thanks in advance.
let degree = arc4random_uniform(360)
let radian = Double(degree) * .pi/180
You need to convert the degree value to a Double before the multiplication.
From the Apple Swift book:
Integer and Floating-Point Conversion
Conversions between integer and floating-point numeric types must be made explicit:
let three = 3
let pointOneFourOneFiveNine = 0.14159
let pi = Double(three) + pointOneFourOneFiveNine
// pi equals 3.14159, and is inferred to be of type Double
Here, the value of the constant three is used to create a new value of type Double, so that both sides of
the addition are of the same type. Without this conversion in place, the addition would not be allowed.
Floating-point to integer conversion must also be made explicit. An integer type can be initialized
with a Double or Float value:
let integerPi = Int(pi)
// integerPi equals 3, and is inferred to be of type Int
Floating-point values are always truncated when used to initialize a new integer value in this way.
This means that 4.75 becomes 4, and -3.9 becomes -3.
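Applied to the original question, a minimal sketch that combines the explicit conversion with the truncation behaviour described above might look like this:
import Foundation

let degrees = arc4random_uniform(360)            // UInt32 in 0..<360
let radians = Double(degrees) * Double.pi / 180  // convert before multiplying by a Double
let wholePart = Int(4.75)                        // 4: Double-to-Int conversion truncates
let negativeWholePart = Int(-3.9)                // -3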
I want to calculate a simple number, and if the number is not an integer I want to round it up.
For instance, if after a calculation I get 1.2, I want to change it to 2. If the number is 3.7, I want to change it to 4 and so on.
You can use math.ceil to round a Double up and toInt to convert the Double to an Int.
def roundUp(d: Double) = math.ceil(d).toInt
roundUp(1.2) // Int = 2
roundUp(3.7) // Int = 4
roundUp(5) // Int = 5
The ceil function is also directly accessible on the Double:
3.7.ceil.toInt // 4
Having first imported math (the final dot and underscore are crucial for what comes next):
import scala.math._
you can simply write
ceil(1.2)
floor(3.7)
plus a bunch of other useful math functions like
exp(1)
pow(2,2)
sqrt(pow(2,2))
Paste the following code into a playground:
5.0 / 100
func test(anything: Float) -> Float {
    return anything / 100
}
test(5.0)
The first line should return 0.05 as expected. The function test returns 0.0500000007450581. Why?
It has nothing to do with functions. Your first example uses the type Double, which represents floating-point numbers more precisely by using 64 bits. If you were to change your second example to:
func test(anything: Double) -> Double {
    return anything / 100
}
test(5.0)
You would get the result you expect. Float uses only 32 bits of data, so it provides a less precise representation of the number. Also, floating-point numbers are stored as binary values and are frequently only an approximation of the base-10 representation. That is why 0.05 shows up as 0.0500000007450581 when stored as a Float.
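For a side-by-side comparison, here is a minimal playground sketch (the exact trailing digits can vary by Swift version) contrasting the two widths:
func testFloat(_ anything: Float) -> Float { return anything / 100 }
func testDouble(_ anything: Double) -> Double { return anything / 100 }

testFloat(5.0)   // roughly 0.050000000745058, limited by Float's 32-bit precision
testDouble(5.0)  // 0.05 as displayed, thanks to Double's 64 bits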