Why am I getting 0 when subtracting 5.0 from 650.50 using the subtracting() method?
In the following code, adding, multiplying and dividing work fine, but subtracting doesn't. What am I doing wrong?
See code in IBM's sandbox:
http://swift.sandbox.bluemix.net/#/repl/59b1387696a0602d6cb19201
import Foundation
let num1:NSDecimalNumber = 650.50
let num2:NSDecimalNumber = 5.0
let result = num1.adding(num2)
let result2 = num1.subtracting(num2)
let result3 = num1.multiplying(by: num2)
let result4 = num1.dividing(by: num2)
print("Addition: \(result)") // Addition: 655.5
// Why am I getting 0 here and not 645.5?
print("Subtraction: \(result2)") //Subtraction: 0
print("Multiplication: \(result3)") //Multiplication: 3252.5
print("Division: \(result4)") //Division: 130.1
Apples Docs:
https://developer.apple.com/documentation/foundation/nsdecimalnumber
This may be due to a quirk of the IBM sandbox's handling of NSDecimalNumber (indeed, many parts of Foundation are still not entirely available on Linux).
Anyway, whatever the bug is, a solution is to use the Swift counterpart of NSDecimalNumber, which is Decimal.
Despite the fact that this is supposed to be only a wrapper around NSDecimalNumber, it gives the correct result, even on the IBM platform.
Note that this wrapper doesn't use the NSDecimalNumber methods; it uses Swift operators such as + or *.
import Foundation
let num1: Decimal = 650.50
let num2: Decimal = 5.0
let result = num1 + num2
let result2 = num1 - num2
let result3 = num1 * num2
let result4 = num1 / num2
print("Addition: \(result)")
print("Subtraction: \(result2)")
print("Multiplication: \(result3)")
print("Division: \(result4)")
Gives:
Addition: 655.5
Subtraction: 645.5
Multiplication: 3252.5
Division: 130.1
Your code isn't wrong; it works correctly in Xcode/macOS. However, IBM's Swift sandbox runs on Linux, and the Linux implementation of Foundation has issues. On the Status page of the swift-corelibs-foundation repo, NSDecimalNumber is marked as "Unimplemented", so it may well misbehave there. Use the Swift value type Decimal instead, as shown above.
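If you still need to hand an NSDecimalNumber to some API, one approach is to do the arithmetic with Decimal and bridge only at the boundary. A minimal sketch, assuming you are on a platform where NSDecimalNumber is fully implemented (e.g. macOS/iOS):
import Foundation

let price: Decimal = 650.50
let fee: Decimal = 5.0
let total = price - fee // arithmetic is done with Decimal

// Bridge to NSDecimalNumber only where a legacy API requires it.
let legacyValue = NSDecimalNumber(decimal: total)
print(legacyValue) // 645.5

// ...and back again if needed.
let roundTripped: Decimal = legacyValue.decimalValue
print(roundTripped) // 645.5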
Related
I can't believe that I can't figure this out myself, and I also can't find an answer online, but...
I'm working in Swift after a long break working on Dart and Java.
I have a situation where I have component A supplying a Float value, and component B requiring a Double value. I can't figure out how to convert/cast/re-instantiate the float to a double!
Example:
let f: Float = 0.3453
let d: Double = f // error: cannot convert value of type 'Float' to specified type 'Double'
That assignment doesn't work, even though if f had been an Int, it would have. Which is very surprising to me since a Float is less precise than Double (takes less memory).
I also tried:
let d:Double = f as! Double
Xcode warns that this will "always fail."
Also tried:
let d:Double = Double(from: f)
Xcode warns "f should be decoder type".
There has to be an extremely obvious/easy solution to this.
As @workingdog said, this will work:
let f: Float = 0.3453
let d: Double = Double(f)
print(d) // prints 0.34529998898506165
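Swift never converts between numeric types implicitly, so the general pattern is to go through an initializer of the target type. A quick sketch (the variable names are just for illustration):
let someFloat: Float = 0.3453
let someInt: Int = 42

let fromFloat = Double(someFloat)  // Float  -> Double
let fromInt = Double(someInt)      // Int    -> Double
let backDown = Float(fromFloat)    // Double -> Float (may lose precision)

print(fromFloat, fromInt, backDown)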
I know Float or Double are not good for storing decimal numbers like money and quantities. I'm trying to use NSDecimalNumber instead. Here is my code in a Swift playground.
let number:NSDecimalNumber = 1.66
let text:String = String(describing: number)
NSLog(text)
The console output is 1.6599999999999995904
How can I store the exact value of the decimal number 1.66 in a variable?
In
let number:NSDecimalNumber = 1.66
the right-hand side is a binary floating point number which cannot represent the value "1.66" exactly; the NSDecimalNumber is then created from that inexact Double. One option is to create the decimal number from a string:
let number = NSDecimalNumber(string: "1.66")
print(number) // 1.66
Another option is to use arithmetic:
let number = NSDecimalNumber(value: 166).dividing(by: 100)
print(number) // 1.66
With Swift 3 you may consider using the "overlay value type" Decimal instead, e.g.
let num = Decimal(166)/Decimal(100)
print(num) // 1.66
Yet another option:
let num = Decimal(sign: .plus, exponent: -2, significand: 166)
print(num) // 1.66
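For completeness, NSDecimalNumber also has an initializer taking a mantissa and an exponent, which likewise avoids the round trip through a binary Double. A small sketch:
import Foundation

// 166 * 10^(-2) == 1.66, built without going through a Double
let number = NSDecimalNumber(mantissa: 166, exponent: -2, isNegative: false)
print(number) // 1.66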
Addendum:
Related discussions in the Swift forum:
Exact NSDecimalNumber via literal
ExpressibleByFractionLiteral
Related bug reports:
SR-3317
Literal protocol for decimal literals should support precise decimal accuracy, closed as a duplicate of
SR-920
Re-design builtin compiler protocols for literal convertible types.
Please don't mark as duplicate until you read the whole thing. This is specific to Swift 3.
I have functions with parameters such as Ints, Floats, etc. I'd like to take the output of readLine() and have Swift accept it as these types, but unfortunately readLine() returns a String?, and when I try to convert it, the compiler tells me it's not unwrapped. I need help. I'm using Ubuntu 16.04.
For example, if I had area(width: 15, height: 15), how would I replace 15 and 15 with two constants containing readLine() or any equivalent to readLine() to accept input from a user in the terminal?
Also note that the program I am writing is specifically doing math; most people seem to be happy with strings, but this is literally a CLI-based calculator.
EDIT 1 (lol): Okay, here's a more exact explanation of the above. The following code will print the area of a trapezoid:
import Foundation
func areaTrapezoid(height: Float, baseOne: Float, baseTwo: Float) {
    let inside = baseOne + baseTwo
    let outside = 0.5 * height
    let result = outside * inside
    print("Area of Trapezoid is \(result)")
}
areaTrapezoid(height: 10, baseOne: 2, baseTwo: 3)
So, the trapezoid has a height of 10 units, and two bases that have lengths of 2 and 3 respectively. However, I want to do something like:
import Foundation
func areaTrapezoid(height: Float, baseOne: Float, baseTwo: Float) {
    let inside = baseOne + baseTwo
    let outside = 0.5 * height
    let result = outside * inside
    print("Area of Trapezoid is \(result)")
}
let h = readLine()
areaTrapezoid(height: h, baseOne: 2, baseTwo: 3)
Except, as is already obvious, readLine() will output an optional string, and not a Float. I want the user to be able to input the numbers via CLI in sort of an interactive way, if you will. I'm just learning Swift, but I did something similar in C++ when I was learning that language. Thanks for any help you can provide.
readLine() returns an Optional String.
To unwrap the String, you can use if let, and to convert the String to an integer, use Int().
Example:
import Foundation
if let typed = readLine() {
    if let num = Int(typed) {
        print(num)
    }
}
Let's say you prompted the user twice:
let prompt1 = readLine()
let prompt2 = readLine()
Then:
if let response1 = prompt1,
   let response2 = prompt2,
   let num1 = Int(response1),
   let num2 = Int(response2) {
    print("The sum of \(num1) and \(num2) is \(num1 + num2)")
}
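Applying the same pattern to your trapezoid example, parse the input with Float() instead of Int(). A minimal sketch, reusing the areaTrapezoid function from the question:
import Foundation

func areaTrapezoid(height: Float, baseOne: Float, baseTwo: Float) {
    let result = 0.5 * height * (baseOne + baseTwo)
    print("Area of Trapezoid is \(result)")
}

print("Enter the height:")
if let line = readLine(), let h = Float(line) {
    areaTrapezoid(height: h, baseOne: 2, baseTwo: 3)
} else {
    print("That was not a valid number.")
}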
Currently learning Swift: there are ways to find the max and min values for the different kinds of integers, like Int.max and Int.min.
Is there a way to find the max value for Double and Float? Moreover, which document should I refer to for this kind of question? I am currently reading Apple's The Swift Programming Language.
As of Swift 3+, you should use:
CGFloat.greatestFiniteMagnitude
Double.greatestFiniteMagnitude
Float.greatestFiniteMagnitude
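For the other end of the range there is no single "min" constant: the most negative finite value is just the negation of the largest one, and there are separate constants for the smallest positive values. A short sketch:
let largest = Double.greatestFiniteMagnitude        // ~1.797e308
let mostNegative = -Double.greatestFiniteMagnitude  // most negative finite Double
let tiniestNormal = Double.leastNormalMagnitude     // smallest positive normal value
let tiniestNonzero = Double.leastNonzeroMagnitude   // smallest positive (subnormal) value

print(largest, mostNegative, tiniestNormal, tiniestNonzero)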
While there’s no Double.max, it is defined in the C float.h header, which you can access in Swift via import Darwin.
import Darwin
let fmax = FLT_MAX
let dmax = DBL_MAX
These are roughly 3.4 * 10^38 and 1.79 * 10^308 respectively.
But bear in mind it’s not so simple with floating point numbers (it’s never simple with floating point numbers). When holding numbers this large, you lose precision in a similar way to losing precision with very small numbers, so:
let d = DBL_MAX
let e = d - 1.0
let diff = d - e
diff == 0.0 // true
let maxPlusOne = DBL_MAX + 1
maxPlusOne == d // true
let inf = DBL_MAX * 2
// perhaps infinity is the “maximum”
inf == Double.infinity // true
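The reason d - 1.0 == d above is that the gap between neighbouring representable values (the "ulp") at that magnitude is astronomically larger than 1. With Swift 3 or later you can inspect it directly:
let biggest = Double.greatestFiniteMagnitude
print(biggest.ulp) // ~1.99e292, the spacing between adjacent Doubles near the maximum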
So before you get into some calculations that might possibly brush up against these limits, you should probably read up on floating point. Here and here are probably a good start.
AV's answer is fine, but I find those macros hard to remember and a bit non-obvious, so eventually I made Double.MIN and friends work:
import Darwin

extension Double {
    static var MIN = -DBL_MAX     // most negative finite Double
    static var MAX_NEG = -DBL_MIN // negative value closest to zero (normal)
    static var MIN_POS = DBL_MIN  // smallest positive normal Double
    static var MAX = DBL_MAX      // largest finite Double
}
Don't use lowercase min and max -- those symbols are used in Swift 3.
Just write
let mxFloat = MAXFLOAT
You will get the maximum value of a float in Swift.
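If you want to check that MAXFLOAT is the same value as the modern constant, a one-liner will do (assuming a platform where MAXFLOAT is exposed, e.g. via Darwin on Apple systems):
import Foundation
print(MAXFLOAT == Float.greatestFiniteMagnitude) // true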
Also CGFloat.infinity, Double.infinity or just .infinity can be useful in such situations.
Works with Swift 5:
public extension Double {
    /// Max double value.
    static var max: Double {
        return Double(greatestFiniteMagnitude)
    }
    /// Min double value.
    static var min: Double {
        return Double(-greatestFiniteMagnitude)
    }
}
I'm very new to Xcode and am trying to make a simple app that calculates gross profit.
I am trying to use the following code, but it returns the value '0'.
Any ideas why?
// Playground - noun: a place where people can play
import UIKit
var costPrice = 10
var salePrice = 100
var grossProfit = ((salePrice - costPrice) / salePrice) * 100
println(grossProfit)
This is all explained in the first few pages of the iBook "Introduction to Swift" that is free and published by Apple.
Swift is type safe and will infer type from context.
The line var costPrice = 10 infers that the variable costPrice is an Int.
You then can't implicitly combine Ints with other numeric types (Doubles, for instance).
If you try this...
let costPrice = 10.0
let salePrice = 100.0
let grossProfit = ((salePrice - costPrice) / salePrice) * 100.0
You will find this works.
10 and 100 are integers, so costPrice and salePrice are integers. Integer division truncates as you're seeing. You wanted to use 10.0 and 100.0 here.
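If the prices genuinely need to stay as Ints (for example, because they arrive that way from elsewhere), another option is to convert explicitly before dividing. A small sketch:
let costPrice = 10
let salePrice = 100

// Convert to Double before dividing so the division isn't truncated to 0.
let grossProfit = (Double(salePrice - costPrice) / Double(salePrice)) * 100
print(grossProfit) // 90.0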