How to create a Double value from a Float value in Swift

I can't believe that I can't figure this out myself, and I also can't find an answer online, but...
I'm working in Swift after a long break working on Dart and Java.
I have a situation where I have component A supplying a Float value, and component B requiring a Double value. I can't figure out how to convert/cast/re-instantiate the float to a double!
Example:
let f: Float = 0.3453
let d: Double = f
That assignment doesn't work, even though it would have if f had been an Int. That's very surprising to me, since a Float is less precise than a Double (it takes less memory).
I also tried:
let d:Double = f as! Double
Xcode warns that this cast will "always fail."
Also tried:
let d:Double = Double(from: f)
Xcode warns that f "should be a Decoder type" (Double(from:) is the Decodable initializer, not a numeric conversion).
There has to be an extremely obvious/easy solution to this.

As @workingdog said, this will work:
let f: Float = 0.3453
let d: Double = Double(f)
print(d) // prints 0.34529998898506165
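The trailing digits come from the Float itself, not from the conversion. A minimal sketch illustrating this (every Float value is exactly representable as a Double, so Double(f) is lossless):
let f: Float = 0.3453
let d = Double(f)
print(d == 0.3453)   // false: d carries the Float's 32-bit rounding of 0.3453
print(Float(d) == f) // true: the conversion itself lost nothing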

Related

Why does a horizontal line appear in Xcode Playground extension when viewing "Show result"?

I am learning about Swift extensions, and wrote a simple extension to Double in a Playground. Code is below.
extension Double {
    func round(to places: Int) -> Double {
        let precisionNumber = pow(10, Double(places))
        var n = self // self contains the value of the myDouble variable
        n = n * precisionNumber
        n.round()
        n = n / precisionNumber
        return n
    }
}
var myDouble = 3.14159
myDouble.round(to: 1)
The extension works as planned. However, when I press "show result" (the eye icon) in the right column for any line of code in the extension, I see a horizontal line.
Anyone know what this line is supposed to signify? Using Xcode 11.2.1 and Swift 5.
The trouble here is that you have not revealed all of your playground. My guess is that there is more code, where you call your extension again. Perhaps your real code looks something like this:
extension Double {
    func round(to places: Int) -> Double {
        let precisionNumber = pow(10, Double(places))
        var n = self // self contains the value of the myDouble variable
        n = n * precisionNumber
        n.round()
        n = n / precisionNumber
        return n
    }
}
var myDouble = 3.14159
myDouble.round(to: 1) // once
print(myDouble)
myDouble = myDouble.round(to: 1) // twice; in your case it's probably another value
print(myDouble)
That is a very poor way to write your playground if your goal is to write and debug your extension, for the very reason you have shown: you have called the extension twice, so what single meaningful value can be shown as the "result" of each line of the extension? The playground has to try to show you both "results" from both calls, and the only way it can think of to do that is to graph the two results.
That is a downright useful representation, though, when you are deliberately looping or repeating code, because the graph gives you at least a sense of how the value changes each time through the loop.
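For example, if the playground deliberately called the extension in a loop (hypothetical code, not from the question), the graphed result column becomes informative rather than confusing:
for places in 0...4 {
    myDouble.round(to: places) // the result sidebar graphs one value per iteration
}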

Unable to bridge NSNumber to Float when I try to fetch Data from Firebase [duplicate]

After upgrading to Xcode 9.3 (9E145) my App showed some unexpected behavior. It seems that the issue is with a cast of an NSNumber to a Float. I use the as type cast operator for this. See the following example.
let n = NSNumber(value: 1.12)
let m = NSNumber(value: 1.00)
let x = n as? Float          // nil
let y = m as? Float          // 1.0
let xd = n as? Double        // 1.12
let z = Float(truncating: n) // 1.12
Here, the first cast fails, i.e. x == nil. The second cast succeeds and the instantiation of a Float with the init:truncating constructor also succeeds, i.e. z == 1.12. The cast of n to a Double succeeds, which, to me, makes no sense at all.
Can anyone explain this behavior to me? I.e. can anyone give me a good reason why the cast of n to a Float fails? Is this a bug? If this is intended behavior, can you please reference the location in the Swift documentation that describes this?
This is a consequence of SE-0170 NSNumber bridging and Numeric types, implemented in Swift 4:
as? for NSNumber should mean "Can I safely express the value stored in this opaque box called a NSNumber as the value I want?".
1.12 is a floating point literal, and is inferred as a Double, so NSNumber(value: 1.12) is "boxing" the 64-bit floating point value closest to 1.12. Converting that to a 32-bit Float does not preserve this value:
let n = NSNumber(value: 1.12)
let x = Float(truncating: n) // Or: let x = n.floatValue
let nn = NSNumber(value: x)
print(n == nn) // false
On the other hand, 1.0 can be represented exactly as a Float:
let m = NSNumber(value: 1.0)
let y = m.floatValue
let mm = NSNumber(value: y)
print(m == mm) // true
and that is why casting m as? Float succeeds. Both n.floatValue and Float(truncating: n) can be used to "truncate" the number to the closest representable 32-bit floating point value.
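Putting the two behaviors together, a minimal sketch of the resulting pattern: use as? when you only want the value if it is exactly representable, and Float(truncating:) when you are willing to accept the rounding:
import Foundation

let n = NSNumber(value: 1.12)
if let exact = n as? Float {
    print("exactly representable:", exact)
} else {
    print("rounded:", Float(truncating: n)) // prints "rounded: 1.12"
}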

Swift-Binary operator cannot be applied to operands, when converting degrees to radians

I'm aware of some relatively similar questions on this site, but if they do apply to my problem (which I'm not certain they do), then I certainly don't understand them. Here's my problem:
var degrees = UInt32()
var radians = Double()
degrees = arc4random_uniform(360)
radians = degrees * (M_PI / 180)
This returns an error pointing at the multiplication operator: "Binary operator '*' cannot be applied to operands of type 'UInt32' and 'Double'".
I'm fairly sure I need to have the degrees variable be of type UInt32 to randomise it, and also that the pi constant cannot be made to be of UInt32, or at least I don't know how, as I'm relatively new to Xcode and Swift in general.
I'd be very grateful if anyone had a solution to my problem.
Thanks in advance.
let degree = arc4random_uniform(360)
let radian = Double(degree) * .pi/180
You need to convert the degrees value to a Double before the multiplication.
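As an aside, in Swift 4.2 and later the conversion can be avoided altogether by generating the angle as a Double in the first place (a sketch, not part of the original answer):
let degrees = Double.random(in: 0..<360)
let radians = degrees * .pi / 180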
From the Apple Swift book:
Integer and Floating-Point Conversion
Conversions between integer and floating-point numeric types must be made explicit:
let three = 3
let pointOneFourOneFiveNine = 0.14159
let pi = Double(three) + pointOneFourOneFiveNine
// pi equals 3.14159, and is inferred to be of type Double
Here, the value of the constant three is used to create a new value of type Double, so that both sides of the addition are of the same type. Without this conversion in place, the addition would not be allowed.
Floating-point to integer conversion must also be made explicit. An integer type can be initialized with a Double or Float value:
let integerPi = Int(pi)
// integerPi equals 3, and is inferred to be of type Int
Floating-point values are always truncated when used to initialize a new integer value in this way.
This means that 4.75 becomes 4, and -3.9 becomes -3.
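A quick check of that truncation rule in a playground:
print(Int(4.75)) // 4: the fractional part is discarded
print(Int(-3.9)) // -3: truncation rounds toward zero, not downward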

Elementary Q about Swift (1.2) variables

import Foundation
var x = 17.0
var y = 1.0
var z = 0.5
var isSq : Bool = true
y = ((sqrt(x)) - Int(sqrt(x)))
I am working in Xcode 6.4
Last line produces error: 'could not find an overload for '-' that accepts the supplied arguments'.
Would be nice to understand what is happening here. Also, is there a function which returns just the decimal part of a double variable, i.e. the complement of Int()?
Many thanks
sqrt(x) has the type Double, and Int(sqrt(x)) has the type Int. There is no minus operator in Swift that takes a Double as the left operand and an Int as the right operand, and Swift does not implicitly convert between types. Therefore you have to convert the Int back to a Double:
let y = sqrt(x) - Double(Int(sqrt(x)))
You can extract the fractional part also with the fmod() function:
let y = fmod(sqrt(x), 1.0)
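As a side note, in Swift 3 and later the standard library offers the same computation without Foundation's fmod, via truncatingRemainder(dividingBy:):
import Foundation

let x = 17.0
let fractional = sqrt(x).truncatingRemainder(dividingBy: 1.0) // same result as fmod(sqrt(x), 1.0)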

Should conditional compilation be used to cope with difference in CGFloat on different architectures?

In answering this earlier question about getting a use of ceil() on a CGFloat to compile for all architectures, I suggested a solution along these lines:
var x = CGFloat(0.5)
var result: CGFloat
#if arch(x86_64) || arch(arm64)
result = ceil(x)
#else
result = ceilf(x)
#endif
// use result
(Background info for those already confused: CGFloat is a "float" type for 32-bit architecture, "double" for 64-bit architecture (i.e. the compilation target), which is why just using either of ceil() or ceilf() on it won't always compile, depending on the target architecture. And note that you don't seem to be able to use CGFLOAT_IS_DOUBLE for conditional compilation, only the architecture flags...)
Now, that's attracted some debate in the comments about fixing things at compile time versus run time, and so forth. My answer was accepted too fast to attract what might be some good debate about this, I think.
So, my new question: is the above a safe, and sensible thing to do, if you want your iOS and OS X code to run on 32- and 64-bit devices? And if it is sane and sensible, is there still a better (at least as efficient, not as "icky") solution?
Matt, building on your solution: if you use it in several places, a little extension might make it more palatable:
extension CGFloat {
    var ceil: CGFloat {
        #if arch(x86_64) || arch(arm64)
            return Darwin.ceil(self) // qualified, so lookup doesn't resolve to this property
        #else
            return Darwin.ceilf(self)
        #endif
    }
}
The rest of the code will be cleaner:
var x = CGFloat(0.5)
x.ceil
var f : CGFloat = 0.5
var result : CGFloat
result = CGFloat(ceil(Double(f)))
Tell me what I'm missing, but that seems pretty simple to me.
Note that with current versions of Swift, the solution below is already implemented in the standard library and all mathematical functions are properly overloaded for Double, Float and CGFloat.
Ceil is an arithmetic operation and, just like any other arithmetic operation, it should have an overloaded version for both Double and Float.
var f1: Float = 1.0
var f2: Float = 2.0
var d1: Double = 1.0
var d2: Double = 2.0
var f = f1 + f2
var d = d1 + d2
This works because + is overloaded and works for both types.
Unfortunately, by pulling the math functions from the C library, which doesn't support function overloading, we are left with two functions instead of one: ceil and ceilf.
I think the best solution is to overload ceil for Float types:
func ceil(f: Float) -> Float {
    return ceilf(f)
}
Allowing us to do:
var f: Float = 0.5
var d: Double = 0.5
f = ceil(f) // uses the new Float overload
d = ceil(d) // resolves to the existing Double version
Once we have the same operations defined for both Float and Double, even CGFloat handling will be much simpler.
To answer the comment:
Depending on the target processor architecture, CGFloat is defined either as a Float or as a Double. That means we should use ceil or ceilf depending on the target architecture.
var cgFloat: CGFloat = 1.5
// on 64-bit, where CGFloat is a Double:
var rounded: CGFloat = ceil(cgFloat)
// on 32-bit, where CGFloat is a Float:
var rounded: CGFloat = ceilf(cgFloat)
However, we would have to use the ugly #if.
Another option is to use clever casts:
var cgFloat: CGFloat = 1.5
var rounded: CGFloat = CGFloat(ceil(Double(cgFloat)))
(casting first to Double, then casting the result to CGFloat)
However, when we are working with numbers, we want math functions to be transparent.
var cgFloat1: CGFloat = 1.5
var cgFloat2: CGFloat = 2.5
// this works on both 32 and 64bit architectures!
var sum: CGFloat = cgFloat1 + cgFloat2
If we overload ceil for Float as shown above, we are able to do
var cgFloat: CGFloat = 1.5
// this works on both 32 and 64bit architectures!
var rounded: CGFloat = ceil(cgFloat)
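For completeness, in current Swift none of this machinery is needed: the CoreGraphics overlay overloads ceil for CGFloat, and the standard library's rounding API sidesteps the free functions entirely (a sketch assuming an Apple platform):
import CoreGraphics

let x: CGFloat = 0.5
let a = ceil(x)        // CGFloat overload, compiles on all architectures
let b = x.rounded(.up) // pure standard-library equivalent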