Using stride with Swift 4 sometimes produces unexpected results, as demonstrated by this piece of code:
import UIKit
import PlaygroundSupport
struct Demo {
let omegaMax = 10.0
let omegaC = 1.0
var omegaSignal:Double {get {return 0.1 * omegaC}}
var dOmega:Double {get {return 0.1 * omegaSignal}}
var omegaArray:[Double] {get {return Array(stride(from: 0.0, through: omegaMax, by: dOmega))}}
}
The variable dOmega is expected to hold the value 0.01.
The array is expected to contain 1001 elements,
where the last element should have the value 10.0.
However, these expectations are not met, as the following code shows:
let demo = Demo()
let nbrOfElements = demo.omegaArray.count
let lastElement = demo.omegaArray[nbrOfElements-1]
print("\(nbrOfElements)") // -> 1000
print("\(lastElement)") // -> 9.990000000000002
What is going on there? Inspecting dOmega gives the answer.
print("\(demo.dOmega)") // -> 0.010000000000000002
The increment value dOmega is not exactly 0.01 as expected but carries a tiny approximation error, which is normal for a Double. However, this means that the expected array element 1001 would have the value 10.000000000000002, which is greater than the given maximum of 10.0, so that element is never generated.
Depending on whether the values passed to stride carry rounding errors, you may or may not get the expected result.
My question is: What is the right way to use stride with Doubles to get in any case the expected result?
You are using the wrong overload of stride!
There are two overloads:
stride(from:to:by:)
stride(from:through:by:)
You should use the second one, because through includes the end value in the sequence, whereas to stops before reaching it.
print(Array(stride(from: 0, through: 10.0, by: 0.001)).last!) // 10.0
The little difference you see is just because of the imprecise nature of Double.
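If you want a result that is robust against such rounding, one common workaround (my addition, not part of the answer above) is to stride over integer steps and derive each Double from the step index, so no error accumulates and the end point is hit exactly. The names below mirror the question's omegaMax and dOmega but are otherwise illustrative:
// Sketch: build the values from an integer range instead of a Double stride.
let omegaMax = 10.0
let dOmega = 0.01
let steps = Int((omegaMax / dOmega).rounded())        // 1000 steps -> 1001 values
let omegaArray = (0...steps).map { Double($0) * dOmega }
print(omegaArray.count)   // 1001
print(omegaArray.last!)   // 10.0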
I want to convert a positive value to a negative value, for example:
let a: Int = 10
and turn it into -10. My current idea is just to multiply it by -1:
a * -1
I'm not sure if this is the proper way. Any ideas?
With Swift 5, depending on your needs, you can use one of the following two ways to convert an integer into its additive inverse.
#1. Using negate() method
Int has a negate() method. negate() has the following declaration:
mutating func negate()
Replaces this value with its additive inverse.
The Playground code samples below show how to use negate() in order to mutate an integer and replace its value with its additive inverse:
var a = 10
a.negate()
print(a) // prints: -10
var a = -10
a.negate()
print(a) // prints: 10
Note that negate() is also available on all types that conform to the SignedNumeric protocol.
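For example, since Double and Float also conform to SignedNumeric, you can wrap negate() in a small generic helper. This is just an illustrative sketch of mine, not part of the standard library:
// Returns the additive inverse of any SignedNumeric value.
func additiveInverse<T: SignedNumeric>(of value: T) -> T {
    var copy = value
    copy.negate()
    return copy
}
print(additiveInverse(of: 10))     // prints: -10
print(additiveInverse(of: -2.5))   // prints: 2.5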
#2. Using the unary minus operator (-)
The sign of a numeric value can be toggled using a prefixed -, known as the unary minus operator. The Playground code samples below show how to use it:
let a = 10
let b = -a
print(b) // prints: -10
let a = -10
let b = -a
print(b) // prints: 10
Just use the - operator:
let negativeA = -a
vDSP_maxv is not assigning the max value to output in the code below.
I expected the last line to print 2, but instead it prints something different each time, usually a very large or small number like 2.8026e-45.
I've read this tutorial, the documentation, and the inline documentation in the header file for vDSP_maxv, but I don't see why the code below isn't producing the expected result.
Making numbers an UnsafePointer instead of an UnsafeMutablePointer didn't work, nor did a number of other things I've tried, so maybe I'm missing something fundamental.
import Accelerate
do {
// INPUT - pointer pointing at: 0.0, 1.0, 2.0
let count = 3
let numbers = UnsafeMutablePointer<Float>
.allocate(capacity: count)
defer { numbers.deinitialize() }
for i in 0..<count {
(numbers+i).initialize(to: Float(i))
}
// OUTPUT
var output = UnsafeMutablePointer<Float>
.allocate(capacity: 1)
// FIND MAX
vDSP_maxv(
numbers,
MemoryLayout<Float>.stride,
output,
vDSP_Length(count)
)
print(output.pointee) // prints various numbers, none of which are expected
}
You are mistaken about the usage of the stride parameter of vDSP_maxv.
You need to pass the number of elements that make up a single stride, not the number of bytes.
vDSP_maxv(_:_:_:_:)
*C = -INFINITY;
for (n = 0; n < N; ++n)
if (*C < A[n*I])
*C = A[n*I];
In the pseudocode above, I represents the stride parameter, and you can see that passing 4 (MemoryLayout<Float>.stride) as I generates indices beyond the bounds of A (your numbers).
I have also adjusted some other parts to my own preference, but the most important change is the second parameter of vDSP_maxv:
import Accelerate
do {
// INPUT - pointer pointing at: 0.0, 1.0, 2.0
let numbers: [Float] = [0.0, 1.0, 2.0]
// OUTPUT
var output: Float = Float.nan
// FIND MAX
vDSP_maxv(
numbers,
1, //<- when you want to use all elements in `numbers` continuously, you need to pass `1`
&output,
vDSP_Length(numbers.count)
)
print(output) //-> 2.0
}
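To make the meaning of the stride parameter more concrete, here is a small sketch of mine (not from the answer) that passes a stride of 2, so vDSP_maxv only examines every other element:
import Accelerate
let values: [Float] = [0, 9, 2, 9, 4, 9]
var maxAtEvenIndices = Float.nan
// A stride of 2 visits indices 0, 2 and 4 (values 0, 2 and 4);
// the last parameter is the number of strided elements, not the array length.
vDSP_maxv(values, 2, &maxAtEvenIndices, vDSP_Length(values.count / 2))
print(maxAtEvenIndices) //-> 4.0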
I'm trying to build a simple Swift app to calculate VAT (Value Added Tax = 20%).
func taxesFree(number: Int) -> Double {
var textfield = self.inputTextField.text.toInt()!
let VAT = 0.2
var result = textfield * VAT
return result
}
For some reason I keep getting
Binary operator '*' cannot be applied to operands of type 'Int' and 'Double'
on the line
var result = textfield * VAT
You need to convert one type to the other so that both operands have the same type:
var result: Double = Double(textfield) * VAT
It's because you're trying to multiply an Int (textfield) with a Double (VAT). Because such an implicit conversion could silently lose precision, Swift doesn't convert one type to the other for you, so you need to explicitly convert the Int to a Double ...
var result = Double(textfield) * VAT
The problem here is that the statement given is literally true, because Swift is strongly typed and doesn't coerce implicitly. I just had a similar case myself with "binary operator '-' cannot be applied to operands of type 'Date' and 'Int'".
If you write:
var result = 10 * 0.2
...that's fine, but if you write:
var number = 10
var result = number * 0.2
...that's not fine. This is because untyped explicit values have an appropriate type selected by the compiler, so in fact the first line is taken as being var result = Double(10) * Double(0.2). After all, as a human being you might mean 10 to be floating-point or an integer - you normally wouldn't say which and would expect that to be clear from context. It might be a bit of a pain, but the idea of strong types is that after the code is parsed it can only have one valid compiled expression.
In general you would build a new value using the constructor, so var result = Double(textfield) * VAT in your case. This is different from casting (textfield as Double) because Int is not a subclass of Double; what you are doing instead is asking for a completely new Double value to be built at runtime, losing some accuracy if the value is very high or low. This is what loosely typed languages do implicitly with pretty much all immediate values, at a small but significant time cost.
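To make that distinction concrete, here is a quick Playground sketch (mine, with made-up values): Int-to-Double goes through an initializer, and an as coercion simply won't compile:
let count = 42           // Int
let vatRate = 0.2        // Double
// let wrong = count as Double   // compile error: Int cannot be coerced to Double
let converted = Double(count)    // builds a brand-new Double value at runtime
print(converted * vatRate)       // 8.4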
In your specific case, it wasn't valuable to have an Int in the first place (even if no fractional part is possible), so what you needed was:
func taxesFree(number: Int) -> Double {
var textfield = Double(self.inputTextField.text!)!
let VAT = 0.2
var result = textfield * VAT
return result
}
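For a Swift 5 variant that also copes with a nil or non-numeric text field (a sketch of mine; inputTextField and the 20% rate come from the question), you could write:
import UIKit
// Returns nil when the field is empty or doesn't contain a number.
func vatAmount(of textField: UITextField) -> Double? {
    guard let text = textField.text, let value = Double(text) else { return nil }
    let vatRate = 0.2
    return value * vatRate
}
// Usage: vatAmount(of: inputTextField) returns e.g. Optional(8.4) for "42".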
In my case it was just casting to CGFloat:
self.cnsMainFaqsViewHight.constant = CGFloat(self.mainFaqs.count) * 44.0
You can convert it like this:
var result: Double = Double(textfield)
I was misunderstanding the Closed Range Operator in Swift.
You should not wrap the range in an array: [0...10]
for i in [0...10] {
// error: binary operator '+' cannot be applied to operands of type 'CountableClosedRange<Int>' and 'Int'
let i = i + 1
}
for i in 0...10 {
// ok!
let i = i + 1
}
The range is a collection that can itself be iterated. No need to wrap it in an array, as perhaps you would have in Objective-C.
0...3 -> [0, 1, 2, 3]
[0...3] -> [[0, 1, 2, 3]]
Once you realize your object is a nested collection, rather than an array of Ints, it's easy to see why you cannot use numeric operators on the object.
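A quick way to see the difference in a Playground (my own illustration, not from the answer):
let range = 0...3
print(Array(range))       // [0, 1, 2, 3]
print(range.count)        // 4
let wrapped = [0...3]     // an Array containing a single ClosedRange<Int>
print(wrapped.count)      // 1
print(type(of: wrapped))  // Array<ClosedRange<Int>>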
This worked for me when I got the same error message in Playground:
func getMilk(howManyCartons: Int){
print("Buy \(howManyCartons) cartons of milk")
let priceToPay: Float = Float(howManyCartons) * 2.35
print("Pay $\(priceToPay)")
}
getMilk(howManyCartons: 2)
Currently learning Swift: there are ways to find the max and min values for the different kinds of integers, such as Int.max and Int.min.
Is there a way to find the max value for Double and Float? Moreover, which document should I refer to for this kind of question? I am currently reading Apple's The Swift Programming Language.
As of Swift 3+, you should use:
CGFloat.greatestFiniteMagnitude
Double.greatestFiniteMagnitude
Float.greatestFiniteMagnitude
While there’s no Double.max, it is defined in the C float.h header, which you can access in Swift via import Darwin.
import Darwin
let fmax = FLT_MAX
let dmax = DBL_MAX
These are roughly 3.4 * 10^38 and 1.79 * 10^308 respectively.
But bear in mind it’s not so simple with floating point numbers (it’s never simple with floating point numbers). When holding numbers this large, you lose precision in a similar way to losing precision with very small numbers, so:
let d = DBL_MAX
let e = d - 1.0
let diff = d - e
diff == 0.0 // true
let maxPlusOne = DBL_MAX + 1
maxPlusOne == d // true
let inf = DBL_MAX * 2
// perhaps infinity is the “maximum”
inf == Double.infinity // true
So before you get into some calculations that might possibly brush up against these limits, you should probably read up on floating point. Here and here are probably a good start.
AV's answer is fine, but I find those macros hard to remember and a bit non-obvious, so eventually I made Double.MIN and friends work:
extension Double {
static var MIN = -DBL_MAX
static var MAX_NEG = -DBL_MIN
static var MIN_POS = DBL_MIN
static var MAX = DBL_MAX
}
Don't use lowercase min and max -- those symbols are used in Swift 3.
Just write
let mxFloat = MAXFLOAT
You will get the maximum value of a float in Swift.
Also CGFloat.infinity, Double.infinity or just .infinity can be useful in such situations.
Works with Swift 5:
public extension Double {
/// Max double value.
static var max: Double {
return Double(greatestFiniteMagnitude)
}
/// Min double value.
static var min: Double {
return Double(-greatestFiniteMagnitude)
}
}
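For reference, here is a short sketch of the modern (Swift 4 and later) equivalents of the old C macros; these properties do exist on Double, Float and CGFloat:
import CoreGraphics   // only needed for CGFloat
print(Double.greatestFiniteMagnitude)   // ~1.797e308, the old DBL_MAX
print(-Double.greatestFiniteMagnitude)  // most negative finite Double
print(Double.leastNormalMagnitude)      // ~2.225e-308, the old DBL_MIN
print(Double.leastNonzeroMagnitude)     // ~5e-324, smallest subnormal value
print(Float.greatestFiniteMagnitude)    // ~3.4e38, the old FLT_MAX
print(CGFloat.greatestFiniteMagnitude)  // same magnitude as Double on 64-bit platforms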
I am just learning to code, so I decided to start with Swift. I am following the guided tour for it and I am at the section where it calculates a sum of numbers and then tells you to try to write a function that computes an average of numbers.
func averageOf(numbers: Int...) -> Int {
var sum = 0
var total = 0
var average = 0
for number in numbers {
sum += number
total++
} **Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)**
average = sum/total
return average
}
What am I doing wrong (and what do I need to learn to do it right)?
I’m guessing you’ve called your function with no arguments, that is:
averageOf()
This is allowed with variadic arguments, and numbers will be an empty array. As a result you attempt to divide an unchanged sum by an unchanged total (the loop body never runs because numbers has no elements), that is, 0 by 0, and you get a divide-by-zero error.
To prevent this from being a possibility, you could require the user to supply at least one number:
func averageOf(first: Int, rest: Int...) -> Double {
var sum = first
var total = 1.0
for number in rest {
sum += number
total += 1
}
return Double(sum)/total
}
This way, if you try to call it with no arguments, you’ll get a compiler error.
BTW, I altered your version to return a Double rather than an Int; you might want to experiment with the two versions to see why.
(This technique is similar to how the standard library's max function is declared; it requires at least two arguments:
func max<T : Comparable>(x: T, y: T) -> T
but has an overloaded version for three or more:
func max<T : Comparable>(x: T, y: T, z: T, rest: T...) -> T
The reason for keeping the two-argument version rather than going straight to a variadic one that requires at least two is that you can then pass max into things like reduce to find the maximum of a collection, e.g. reduce(a, 0, max).)
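As an aside, a modern Swift sketch of the same idea (mine, not from the answer) returns an optional instead of trapping when no numbers are supplied:
// Returns nil for an empty argument list instead of crashing on 0 / 0.
func averageOf(_ numbers: Int...) -> Double? {
    guard !numbers.isEmpty else { return nil }
    let sum = numbers.reduce(0, +)
    return Double(sum) / Double(numbers.count)
}
if let average = averageOf(1, 2, 6) {
    print(average)        // prints: 3.0
}
print(averageOf() == nil) // prints: true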
For me this error happened because an implicitly unwrapped property was not set. Setting it would fix the issue.