I would like to round a Decimal down to the nearest multiple of another number (the increment). For example, given a value of 2.23678301 and an increment of 0.0001, I would like to round this to 2.2367. Sometimes the increment could be something like 0.00022, in which case the value would be rounded down to 2.23674.
I tried to do this, but sometimes the result is not correct and tests aren't passing:
extension Decimal {
    func rounded(byIncrement increment: Self) -> Self {
        var multipleOfValue = self / increment
        var roundedMultipleOfValue = Decimal()
        NSDecimalRound(&roundedMultipleOfValue, &multipleOfValue, 0, .down)
        return roundedMultipleOfValue * increment
    }
}
/// Tests
class DecimalTests: XCTestCase {
    func testRoundedByIncrement() {
        // Given
        let value: Decimal = 2.2367830187654

        // Then
        XCTAssertEqual(value.rounded(byIncrement: 0.00010000), 2.2367)
        XCTAssertEqual(value.rounded(byIncrement: 0.00022), 2.23674)
        XCTAssertEqual(value.rounded(byIncrement: 0.0000001), 2.236783)
        XCTAssertEqual(value.rounded(byIncrement: 0.00000001), 2.23678301) // XCTAssertEqual failed: ("2.23678301") is not equal to ("2.236783009999999744")
        XCTAssertEqual(value.rounded(byIncrement: 3.5), 0)
        XCTAssertEqual(value.rounded(byIncrement: 0.000000000000001), 2.2367830187654) // XCTAssertEqual failed: ("2.2367830187653998323726489726140416") is not equal to ("2.236783018765400576")
    }
}
I'm not sure why the decimal calculations are making up numbers that were never there, like the last assertion. Is there a cleaner or more accurate way to do this?
Your code is fine. You're just calling it incorrectly. This line doesn't do what you think:
let value: Decimal = 2.2367830187654
This is equivalent to:
let value = Decimal(floatLiteral: 2.2367830187654)
The value is first converted to a Double, binary rounding it to 2.236783018765400576. That value is then converted to a Decimal.
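You can see the difference directly (a quick check in a playground, using the numbers from the failing assertions above):
let viaLiteral: Decimal = 2.2367830187654 // goes through Double first
let viaString = Decimal(string: "2.2367830187654")! // parsed digit by digit
print(viaLiteral) // 2.236783018765400576
print(viaString)  // 2.2367830187654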
You need to use the string initializer everywhere you want a Decimal from a digit string:
let value = Decimal(string: "2.2367830187654")!
XCTAssertEqual(value.rounded(byIncrement: Decimal(string: "0.00000001")!), Decimal(string: "2.23678301")!)
etc.
Or you can use the integer-based initializers:
let value = Decimal(sign: .plus, exponent: -13, significand: 22367830187654)
In iOS 15 there are some new initializers that don't return optionals (init(_:format:lenient:) for example), but you're still going to need to pass Strings, not floating point literals.
You could also do this, though it may be confusing to readers, and might lead to bugs if folks take the quotes away:
extension Decimal: ExpressibleByStringLiteral {
    public init(stringLiteral value: String) {
        self.init(string: value)!
    }
}
let value: Decimal = "2.2367830187654"
XCTAssertEqual(value.rounded(byIncrement: "0.00000001"), "2.23678301")
For test code, that's probably nice, but I'd be very careful about using it in production code.
I'd like to create a property wrapper to accommodate the well-known precision issues. However, when I use @propertyWrapper as I understand it to be used and demonstrated here, I get the following errors:
Extra argument in call
Missing argument for parameter 'initialValue' in property wrapper initializer; add 'wrappedValue' and 'initialValue' arguments in '@MantissaClamping(...)'
I don't see how I have an "extra argument in call": I assign the decimal as the wrapped value, and I provide the integer literal as the mantissa argument.
After saying I have an extra argument, the other error says I'm missing an argument, and it seems to suggest that I literally add them as arguments to the property wrapper... That would defeat the whole purpose of property wrappers in my eyes, because it would require redundant code like this... But even this doesn't work.
struct MantissaClampTestStruct {
    @MantissaClamping(Decimal("0.000000000001")!, 14) var small: Decimal = Decimal("0.000000000001")!
}
How can I assign a literal value to the property and have it apply through the property wrapper, while also providing the Int value that applies directly to the property wrapper?
Here is my reproducible code you can put in a playground.
extension Decimal {
    /// Rounds a value
    /// - Parameters:
    ///   - roundingMode: up, down, plain, or bankers
    ///   - scale: the number of digits the result can have after its decimal point
    /// - Returns: the rounded number
    func rounded(_ roundingMode: NSDecimalNumber.RoundingMode = .bankers, scale: Int = 0) -> Self {
        var result = Self()
        var number = self
        NSDecimalRound(&result, &number, scale, roundingMode)
        return result
    }
}

@propertyWrapper
struct MantissaClamping {
    var value: Decimal
    let mantissaCount: Int

    init(initialValue value: Decimal, _ mantissaCount: Int) {
        precondition(mantissaCount < 19 && mantissaCount >= 0)
        self.value = value
        self.mantissaCount = mantissaCount
    }

    var wrappedValue: Decimal {
        get { value }
        set { value = newValue.rounded(.down, scale: mantissaCount) }
    }
}

struct MantissaClampTestStruct {
    @MantissaClamping(14) var small: Decimal = Decimal("0.000000000001")!
}
According to the docs:
When you include property wrapper arguments, you can also specify an initial value using assignment. Swift treats the assignment like a wrappedValue argument and uses the initializer that accepts the arguments you include.
So it translates your property declaration into something like:
var small = MantissaClamping(wrappedValue: Decimal("0.000000000001")!, 14)
Obviously, this doesn't match any of your initialisers.
Just rename the parameter label to wrappedValue:
init(wrappedValue value: Decimal, _ mantissaCount: Int) {
And also add the string: label to the Decimal initialiser, which you have missed:
@MantissaClamping(14) var small: Decimal = Decimal(string: "0.000000000001")!
You might also want to round the initial value too:
init(wrappedValue value: Decimal, _ mantissaCount: Int) {
    precondition(mantissaCount < 19 && mantissaCount >= 0)
    // here
    self.value = value.rounded(.down, scale: mantissaCount)
    self.mantissaCount = mantissaCount
}
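Putting those changes together, a minimal sketch of the corrected wrapper and its use (reusing the rounded(_:scale:) extension from the question):
@propertyWrapper
struct MantissaClamping {
    var value: Decimal
    let mantissaCount: Int

    init(wrappedValue value: Decimal, _ mantissaCount: Int) {
        precondition(mantissaCount < 19 && mantissaCount >= 0)
        self.value = value.rounded(.down, scale: mantissaCount)
        self.mantissaCount = mantissaCount
    }

    var wrappedValue: Decimal {
        get { value }
        set { value = newValue.rounded(.down, scale: mantissaCount) }
    }
}

struct MantissaClampTestStruct {
    @MantissaClamping(14) var small: Decimal = Decimal(string: "0.000000000001")!
}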
I declare a generic array:
fileprivate var array: [T?]
I have a method average(), which will calculate the average if 'T' is Int or Float; otherwise it returns 0:
public func average() -> Float {
    var mean = 0
    if T is Int or Float {
        for index in (0..<array.count-1) {
            mean = mean + array[index]
        }
        mean = mean / array.count
    }
    return mean
}
Question: how do I check whether the array is holding Int or Float values (the "if T is Int or Float" part in the code above)?
This is a job for protocols. The useful protocols for your problem are FloatingPoint to handle floating point types (like Float) and Integer to handle signed integer types (like Int). These have slightly different implementations, so it's best to write each one separately. Doing this will ensure that the method is only available for appropriate types of T (rather than for all possible types, just returning 0 in the unsupported cases).
extension MyStruct where T: FloatingPoint {
    func average() -> T {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = T(array.count)
        return sum.divided(by: count)
    }
}

extension MyStruct where T: Integer {
    func average() -> Float {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = array.count
        return Float(sum.toIntMax()) / Float(count.toIntMax())
    }
}
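For illustration, here is a hypothetical container matching the question's array property (the struct name and its memberwise initializer are assumptions, and this uses the same Swift 3-era APIs as the answer); the right extension is picked automatically based on T:
// Hypothetical container matching the question's property:
struct MyStruct<T> {
    fileprivate var array: [T?]
}

let floats = MyStruct<Float>(array: [1.0, 2.0, 3.0, 4.0])
floats.average() // 2.5, via the FloatingPoint extension

let ints = MyStruct<Int>(array: [1, 2, 3, 4])
ints.average() // 2.5 as a Float, via the Integer extension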
EDIT: Following up a bit more on Caleb's comments below, you may be tempted to think it's ok to just convert integers into floats to generate their average. But this is not safe in general without careful consideration of your ranges. For example, consider the average of [Int.min, Int.max]. That's [-9223372036854775808, 9223372036854775807]. The average of that should be -0.5, and that's what's returned by my example above. However, if you convert everything to floats in order to sum it, you'll get 0, because Float cannot express Int.max precisely. I've seen this bite people in live code when they do not remember that for very large floats, x == x+1.
Float(Int.max) == Float(Int.max - 1) // true
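A standalone sketch of that precision loss, using only the standard library:
let values = [Int.min, Int.max]

// Summing as Int first (no overflow here: Int.min + Int.max == -1)
// and converting afterwards preserves the exact average:
Float(values.reduce(0, +)) / Float(values.count) // -0.5

// Converting each element to Float before summing loses precision,
// because Float cannot represent Int.max exactly:
values.map { Float($0) }.reduce(0, +) / Float(values.count) // 0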
You can use the type(of:) function introduced in Swift 3 in the following way:
let type = type(of: array)
print("type: \(type)") // if T is String, you will see Array<Optional<String>> here
You will need to iterate over your array and use "if let" to unwrap the type of the values in the array. If they are Ints, handle them one way; if they are Floats, handle them another way.
// while iterating through your array, check each element to see if it is an Int or a Float
if let intValue = array[index] as? Int {
    // the element is an Int; do something with intValue
}
else {
    // the element is not an Int (check for Float the same way)
}
Here's a high level description:
... inside a loop which allows indexing
if let ex = array[index] as? Int {
    // your code
    continue // go around the loop again, you're all done here
}
if let ex = array[index] as? Float {
    // other code
    continue // go around the loop again, you're all done here
}
// if you got here it isn't either of them
// code to handle that
... end of inside the loop
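For example, a minimal concrete version of that loop, assuming the same array: [T?] property and a running Float accumulator:
var mean: Float = 0
for index in 0..<array.count {
    if let intValue = array[index] as? Int {
        mean += Float(intValue)
        continue // done with this element
    }
    if let floatValue = array[index] as? Float {
        mean += floatValue
        continue // done with this element
    }
    // the element is neither Int nor Float (or is nil); skip it
}
mean /= Float(array.count)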
I can explain further if that isn't clear enough.
This is probably the simplest way to do it:
var average: Float {
    let total = array.reduce(0.0) { (runningTotal, item) -> Float in
        if let itemAsFloat = item as? Float {
            return runningTotal + itemAsFloat
        }
        else if let itemAsInt = item as? Int {
            return runningTotal + Float(itemAsInt)
        }
        else {
            return runningTotal
        }
    }
    return total / Float(array.count)
}
Obviously, it can be a function instead if you want, and depending on how you're planning to use it you may need to tweak it.
*Note that it's possible to have both Int and Float values in an array of T, e.g. if T is Any.
I'm just starting out with Swift 3 and I'm converting a Rails project to Swift (a side project while I learn).
Fairly simple: I have a Rails statement I'm converting, and I'm getting many red errors in Xcode:
let startingPoint: Int = 1
let firstRange: ClosedRange = (2...10)
let secondRange: ClosedRange = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return // code
    default:
        return // code
    }
}
calc will receive either an Int or a Float value, e.g. 10 or 10.50.
The errors are:
Expression pattern of type ClosedRange cannot match values of type Float
Binary operator - cannot be applied to operands of type Float and Int
I understand the errors but I don't know what to search for to correct them. Could you point me in the right direction, please?
Swift is strongly typed. Whenever you use a variable or pass something as a function argument, Swift checks that it is of the correct type. You can't pass a string to a function that expects an integer etc. Swift does this check at compile time (since it's statically typed).
To adhere to those rules, try changing your code to this:
let startingPoint: Float = 1
let firstRange: ClosedRange<Float> = (2...10)
let secondRange: ClosedRange<Float> = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return 1.0 // 1.0 is just an example, but you have to return Float since that is defined in the method
    default:
        return 0.0 // 0.0 is just an example, put whatever you need here
    }
}
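For example, some hypothetical calls against the fixed function:
calc(range: 1)  // matches startingPoint, returns (1 - 1) * 1 = 0
calc(range: 5)  // falls in firstRange (2...10), returns the placeholder 1.0
calc(range: 15) // no case for secondRange yet, so default returns 0.0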
For the first error, you might want to specify the ClosedRange to be of type Float. Something similar to:
let firstRange: ClosedRange<Float> = (2...10)
For the second error, the problem is that you are trying to subtract an Int (startingPoint) from a Float (range: Float). So I would suggest you convert the startingPoint variable to a Float as well.
Converting a String to Int returns an optional value, but converting a Double to Int does not return an optional value. Why is that? I wanted to check whether a double value is bigger than the maximum Int value, but because the converting initializer does not return an optional value, I am not able to check it using optional binding.
var stringNumber: String = "555"
var intValue = Int(stringNumber) // returns Optional(555)

var doubleNumber: Double = 555
var fromDoubleToInt = Int(doubleNumber) // returns 555
So if I try to convert a double number bigger than maximum Integer, it crashes instead of returning nil.
var doubleNumber: Double = 55555555555555555555
var fromDoubleToInt = Int(doubleNumber) // Crashes here
I know that there's another way to check whether a double number is bigger than the maximum Integer value, but I'm curious as to why it's happening this way.
If we consider that for most doubles, a conversion to Int simply means dropping the decimal part:
let pieInt = Int(3.14159) // 3
Then the only case in which an optional-returning Int(Double) initializer would ever return nil is overflow (which today traps instead).
With strings, converting to Int returns an optional, because generally, strings, such as "Hello world!" cannot be represented as an Int in a way that universally makes sense. So we return nil in the case that the string cannot be represented as an integer. This includes, by the way, values that can be perfectly represented as doubles or floats:
Consider:
let iPi = Int("3.14159")
let dPi = Double("3.14159")
In this case, iPi is nil while dPi is 3.14159. Why? Because "3.14159" doesn't have a valid Int representation.
But meanwhile, when we use the Int initializer that takes a Double and returns a non-optional, we get a value (3).
So, if that initializer were changed to return an optional, why should it return 3 for 3.14159 rather than nil? After all, 3.14159 can't be represented as an integer either.
But if you want a method that returns an optional Int, returning nil when the Double would overflow, you can just write that method.
extension Double {
    func toInt() -> Int? {
        let minInt = Double(Int.min)
        let maxInt = Double(Int.max) // rounds up to 2^63, which itself does not fit in Int

        // Use a half-open range so that exactly 2^63 is rejected instead of trapping in Int(self).
        guard case minInt ..< maxInt = self else {
            return nil
        }

        return Int(self)
    }
}

let a = 3.14159.toInt() // returns 3
let b = 555555555555555555555.5.toInt() // returns nil
Failable initializers and methods with Optional return types are designed for scenarios where you, the programmer, can't know whether a parameter value will cause failure, or where verifying that an operation will succeed is equivalent to performing the operation:
let intFromString = Int(someString)
let valueFromDict = dict[someKey]
Parsing an integer from a string requires checking the string for numeric/non-numeric characters, so the check is the same as the work. Likewise, checking a dictionary for the existence of a key is the same as looking up the value for the key.
By contrast, certain operations are things where you, the programmer, need to verify upfront that your parameters or preconditions meet expectations:
let foo = someArray[index]
let bar = UInt32(someUInt64)
let baz: UInt = someUInt - anotherUInt
You can — and in most cases should — test at runtime whether index < someArray.count, someUInt64 <= UInt32.max, and someUInt >= anotherUInt. These assumptions are fundamental to working with those kinds of types. On the one hand, you really want to design around them from the start. On the other, you don't want every bit of math you do to be peppered with Optional unwrapping — that's why we have types whose axioms are stated upfront.
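For illustration, a sketch of that "verify upfront" style (the variable names are hypothetical):
let someArray = [1, 2, 3]
let index = 5

if index < someArray.count {
    let foo = someArray[index] // safe: the precondition was checked
} else {
    // handle the out-of-range case explicitly
}

let someUInt64: UInt64 = 42
if someUInt64 <= UInt64(UInt32.max) {
    let bar = UInt32(someUInt64) // safe: the value fits in UInt32
}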
I am just starting to get to know Swift but I am having a serious problem with number formatting at an extremely basic level.
For example, I need to display an integer with at least 2 digits (e.g. 00, 01, 02, 03, 04, 05 ...). The normal syntax I'd expect would be something like:
println(" %02i %02i %02i", var1, var2, var3);
...but I don't find any clear instructions for how to achieve this in Swift. I find it really hard to believe that I need to create a custom function to do that. The same goes for formatting a Float or Double value to a fixed number of decimal places.
I've found links to a couple of similar questions (Precision String Format Specifier In Swift & How to use println in Swift to format number), but they seem to mix Objective-C and even talk about Python and using Unity libraries. Is there no Swift solution to this basic programming need? Is it really true that something so fundamental has been completely overlooked in Swift?
You can construct a string with C-like formatting using this initializer:
String(format: String, arguments:[CVarArgType])
Sample usage:
var x = 10
println(String(format: "%04d", arguments: [x])) // This will print "0010"
If you're going to use it a lot, and want a more compact form, you can implement an extension like this:
extension String {
    func format(arguments: [CVarArgType]) -> String {
        return String(format: self, arguments: arguments)
    }
}
allowing you to simplify its usage, as in this example:
"%d apples cost $%03.2f".format([4, 4 * 0.33])
Here's a POP solution to the problem:
protocol Formattable {
    func format(pattern: String) -> String
}

extension Formattable where Self: CVarArg {
    func format(pattern: String) -> String {
        return String(format: pattern, arguments: [self])
    }
}

extension Int: Formattable { }
extension Double: Formattable { }
extension Float: Formattable { }

let myInt = 10
let myDouble: Double = 0.01
let myFloat: Float = 1.11

print(myInt.format(pattern: "%04d"))      // "0010"
print(myDouble.format(pattern: "%.2f"))   // "0.01"
print(myFloat.format(pattern: "$%03.2f")) // "$1.11"
print(100.format(pattern: "%05d"))        // "00100"
You can still use good ole NSLog("%.2f",myFloatOrDouble) too :D
There is a simple solution I learned with "We <3 Swift", which is so easy you can even use it without Foundation, round(), or Strings, while keeping the numeric value.
Example:
var number = 31.726354765
var intNumber = Int(number * 1000.0)
var roundedNumber = Double(intNumber) / 1000.0
result: 31.726
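The same idea wrapped in a small helper (a sketch; the function name is just illustrative, and the loop builds the power of ten so pow() from Foundation isn't needed):
func truncated(_ value: Double, decimalPlaces: Int) -> Double {
    var factor = 1.0
    for _ in 0..<decimalPlaces { factor *= 10 } // 10^decimalPlaces without pow()
    return Double(Int(value * factor)) / factor
}

truncated(31.726354765, decimalPlaces: 3) // 31.726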