Error: Cannot convert value of type 'UInt64' to expected argument type 'DispatchTime' - swift

I'm trying to create a timer in Swift 4 that is controlled by a variable, but I get an error:
Cannot convert value of type 'UInt64' to expected argument type 'DispatchTime'
Here is the code:
let maxNumber = maxNumberField.intValue
let amountOfNumbers = amountOfNumbersField.intValue
var delay = 5
var x: Int32 = amountOfNumbers
while x > 0 {
    let when = (DispatchTime.now().uptimeNanoseconds + (5 * UInt64(x)))
    DispatchQueue.main.asyncAfter(deadline: when) { // error
        let number = arc4random_uniform(UInt32(maxNumber + 1))
        let synth = NSSpeechSynthesizer()
        synth.startSpeaking(String(number))
    }
    x = (x - 1)
}
From my understanding I need to convert the when variable, which is a UInt64, to a DispatchTime.
How would I do this?

You should either:
Use DispatchTime.init(uptimeNanoseconds:) with your when variable, or
use let when = DispatchTime.now() + (5 * Double(x)), since there is a + overload for (DispatchTime, Double) that interprets the right-hand operand as seconds.
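For example, a minimal sketch of the second option applied to the loop above. Note that the original 5 * UInt64(x) adds nanoseconds, so if the intent is a five-second spacing per number (an assumption on my part), seconds are what you want:
while x > 0 {
    // DispatchTime + Double is interpreted as seconds
    let when = DispatchTime.now() + (5 * Double(x))
    DispatchQueue.main.asyncAfter(deadline: when) {
        let number = arc4random_uniform(UInt32(maxNumber + 1))
        let synth = NSSpeechSynthesizer()
        synth.startSpeaking(String(number))
    }
    x -= 1
}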


Split fractional and integral parts of a Double [duplicate]

I'm trying to separate the decimal and integer parts of a double in Swift. I've tried a number of approaches, but they all run into the same issue...
let x:Double = 1234.5678
let n1:Double = x % 1.0 // n1 = 0.567800000000034
let n2:Double = x - 1234.0 // same result
let n3:Double = modf(x).1 // same result
Is there a way to get 0.5678 instead of 0.567800000000034 without converting to the number to a string?
You can use truncatingRemainder with 1 as the divisor. From the Apple documentation:
"Returns the remainder of this value divided by the given value using truncating division."
Example:
let myDouble1: Double = 12.25
let myDouble2: Double = 12.5
let myDouble3: Double = 12.75
let remainder1 = myDouble1.truncatingRemainder(dividingBy: 1)
let remainder2 = myDouble2.truncatingRemainder(dividingBy: 1)
let remainder3 = myDouble3.truncatingRemainder(dividingBy: 1)
remainder1 -> 0.25
remainder2 -> 0.5
remainder3 -> 0.75
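Note that for negative inputs the remainder keeps the sign of the dividend:
let negativeRemainder = (-12.25).truncatingRemainder(dividingBy: 1)
negativeRemainder -> -0.25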
The same approach as Alessandro Ornano's, implemented as instance properties in a FloatingPoint extension:
Xcode 11 • Swift 5.1
import Foundation
extension FloatingPoint {
    var whole: Self { modf(self).0 }
    var fraction: Self { modf(self).1 }
}
1.2.whole // 1
1.2.fraction // 0.2
If you need the fraction digits while preserving their precision, you need to use the Swift Decimal type and initialize it with a String:
extension Decimal {
    func rounded(_ roundingMode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var result = Decimal()
        var number = self
        NSDecimalRound(&result, &number, 0, roundingMode)
        return result
    }
    var whole: Decimal { rounded(sign == .minus ? .up : .down) }
    var fraction: Decimal { self - whole }
}
let decimal = Decimal(string: "1234.99999999")! // 1234.99999999
let fractional = decimal.fraction // 0.99999999
let whole = decimal.whole // 1234
let sum = whole + fractional // 1234.99999999
let negativeDecimal = Decimal(string: "-1234.99999999")! // -1234.99999999
let negativefractional = negativeDecimal.fraction // -0.99999999
let negativeWhole = negativeDecimal.whole // -1234
let negativeSum = negativeWhole + negativefractional // -1234.99999999
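For contrast, a quick check of why the String initializer matters: the same literal stored as a Double already carries binary rounding error (printed value approximate):
let asDouble = 1234.99999999 // nearest representable Double
asDouble.truncatingRemainder(dividingBy: 1) // ≈ 0.99999998999..., not exactly 0.99999999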
Swift 2:
You can use:
modf(x).1
or
x % floor(abs(x))
Without converting it to a string, you can round to a number of decimal places like this:
let x:Double = 1234.5678
let numberOfPlaces:Double = 4.0
let powerOfTen:Double = pow(10.0, numberOfPlaces)
let targetedDecimalPlaces:Double = round((x % 1.0) * powerOfTen) / powerOfTen
Your output would be
0.5678
Swift 5.1
let x:Double = 1234.5678
let decimalPart:Double = x.truncatingRemainder(dividingBy: 1) //0.5678
let integerPart:Double = x.rounded(.towardZero) //1234
Both of these methods return a Double value.
If you want the integer part as an actual integer, you can just use
Int(x)
Use Float, since it has fewer significant digits than Double, so the printed value rounds to the expected result:
let x:Double = 1234.5678
let n1:Float = Float(x % 1) // n1 = 0.5678
There's a function in C's math library that many programming languages, Swift included, give you access to. It's called modf, and in Swift it works like this:
// modf returns a 2-element tuple,
// with the whole number part in the first element,
// and the fraction part in the second element
let splitPi = modf(3.141592)
splitPi.0 // 3.0
splitPi.1 // 0.141592
You can create an extension like the one below:
extension Double {
    func getWholeNumber() -> Double {
        return modf(self).0
    }
    func getFractionNumber() -> Double {
        return modf(self).1
    }
}
You can get the integer part like this:
let d: Double = 1.23456e12
let intparttruncated = trunc(d)
let intpartroundlower = Int(d)
The trunc() function truncates the part after the decimal point, and the Int() initializer also truncates toward zero, so the two agree for negative numbers as well. If you subtract the truncated part from d, you get the fractional part.
func frac(_ v: Double) -> Double {
    return v - trunc(v)
}
You can get the mantissa and exponent of a Double value like this:
let d: Double = 1.23456e78
let exponent = trunc(log(d) / log(10.0))
let mantissa = d / pow(10, trunc(log(d) / log(10.0)))
Your result will be 78 for the exponent and 1.23456 for the mantissa.
Hope this helps you.
It's impossible to create a solution that works for all Doubles. And if the other answers ever worked (which I also doubt), they don't anymore.
let _5678 = 1234.5678.description.drop { $0 != "." } .description // ".5678"
Double(_5678) // 0.5678
let _567 = 1234.567.description.drop { $0 != "." } .description // ".567"
Double(_567) // 0.5669999999999999
extension Double {
    /// Gets the decimal (fractional) digits from a double via its string description.
    /// Note: this drops the sign for negative values.
    var decimal: Double {
        Double("0." + String(string.split(separator: ".").last ?? "0")) ?? 0.0
    }
    var string: String {
        String(self)
    }
}
This appears to solve the Double precision issues.
Usage:
print(34.46979988898988.decimal) // outputs 0.46979988898988
print(34.46.decimal) // outputs 0.46

Cannot convert value of type 'CountableClosedRange&lt;Int&gt;' to expected argument type 'Range&lt;Int&gt;'

How do I solve this error in Swift 3?
let max: Int = Int(StackMaxWidth/10)
let min: Int = Int(StackMinWidth/10)
let width: CGFloat = CGFloat(randomInRange(min...max)*10)
func randomInRange(_ range: Range<Int>) -> Int {
    let count = UInt32(range.upperBound - range.lowerBound)
    return Int(arc4random_uniform(count)) + range.lowerBound
}
This function takes a half-open Range&lt;Int&gt;, but the ... operator produces a CountableClosedRange&lt;Int&gt;. Replace self.StackGapMinWidth...maxGap with self.StackGapMinWidth..&lt;(maxGap + 1), or see the overload sketched below.
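If you can change the function instead, a minimal sketch of an overload that accepts a ClosedRange&lt;Int&gt;, so the min...max call compiles as written:
import Foundation
import CoreGraphics

func randomInRange(_ range: ClosedRange<Int>) -> Int {
    // +1 because a closed range includes its upper bound
    let count = UInt32(range.upperBound - range.lowerBound + 1)
    return Int(arc4random_uniform(count)) + range.lowerBound
}

let width = CGFloat(randomInRange(min...max) * 10)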

How can I write a function that takes generic type arguments, but returns a different type based on what the generic object's type is? [duplicate]

I'm trying to write a basic interpolation function in Swift 3, but I get a lot of errors. This is obviously not the right way to use generics; maybe I have a fundamental misunderstanding of their application?
class func interpolate<T>(from: T, to: T, progress: CGFloat) -> T
{
    // Safety
    assert(progress >= 0 && progress <= 1, "Invalid progress value: \(progress)")
    if let from = from as? CGFloat, let to = to as? CGFloat
    {
        return from + (to - from) * progress // No + candidates produce the expected contextual result type 'T'
    }
    if let from = from as? CGPoint, let to = to as? CGPoint
    {
        var returnPoint = CGPoint()
        returnPoint.x = from.x + (to.x - from.x) * progress
        returnPoint.y = from.y + (to.y - from.y) * progress
        return returnPoint // Cannot convert return expression of type 'CGPoint' to return type 'T'
    }
    if let from = from as? CGRect, let to = to as? CGRect
    {
        var returnRect = CGRect()
        returnRect.origin.x = from.origin.x + (to.origin.x - from.origin.x) * progress
        returnRect.origin.y = from.origin.y + (to.origin.y - from.origin.y) * progress
        returnRect.size.width = from.size.width + (to.size.width - from.size.width) * progress
        returnRect.size.height = from.size.height + (to.size.height - from.size.height) * progress
        return returnRect // Cannot convert return expression of type 'CGRect' to return type 'T'
    }
    return nil // Nil is incompatible with return type 'T'
}
A generic function is useful when you have the same operations to perform on several different types. That's basically what you have here. The problem is that you don't have the operations defined for two of the types that you care about, namely CGPoint and CGRect.
If you define addition, subtraction, and multiplication by CGFloat for those types, and constrain T to a protocol that requires those operations (a sketch of such a protocol follows below), the generic function simplifies to
class func interpolate<T: Interpolatable>(from: T, to: T, progress: CGFloat) -> T
{
    // Safety
    assert(0.0...1.0 ~= progress, "Invalid progress value: \(progress)")
    return from + (to - from) * progress
}
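A sketch of what those operations could look like. The Interpolatable protocol name is my own, not from the standard library; CGRect would follow the same componentwise pattern as CGPoint:
import CoreGraphics

// Hypothetical protocol naming exactly the operations interpolate needs
protocol Interpolatable {
    static func + (lhs: Self, rhs: Self) -> Self
    static func - (lhs: Self, rhs: Self) -> Self
    static func * (lhs: Self, rhs: CGFloat) -> Self
}

// CGFloat already has all three operators, so the conformance is empty
extension CGFloat: Interpolatable {}

extension CGPoint: Interpolatable {
    static func + (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        CGPoint(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
    }
    static func - (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
        CGPoint(x: lhs.x - rhs.x, y: lhs.y - rhs.y)
    }
    static func * (lhs: CGPoint, rhs: CGFloat) -> CGPoint {
        CGPoint(x: lhs.x * rhs, y: lhs.y * rhs)
    }
}

func interpolate<T: Interpolatable>(from: T, to: T, progress: CGFloat) -> T {
    assert(0.0...1.0 ~= progress, "Invalid progress value: \(progress)")
    return from + (to - from) * progress
}

interpolate(from: CGPoint.zero, to: CGPoint(x: 10, y: 20), progress: 0.5)
// (5.0, 10.0)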

Why can't Swift add two Ints?

When I try the following:
var somestring = "5"
var somenumber = 2
var newnumber:Int = Int(somestring) + somenumber
I get this error:
binary operator '+' cannot be applied to two Int operands
What am I doing wrong? Shouldn't '+' be valid for adding two Ints?
That's a really weird error message. The actual problem is that you can't simply construct Ints from Strings. The proper way to do that is with the toInt method like so:
var newnumber:Int = somestring.toInt()! + somenumber
Notice that toInt returns an optional that's unwrapped with !. If you're not sure the string represents an integer, error handling needs to be added as well.
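In current Swift, toInt() is gone and the failable Int(_:) initializer takes its place; a sketch using optional binding instead of force-unwrapping:
if let parsed = Int(somestring) {
    let newnumber = parsed + somenumber
    print(newnumber) // 7
} else {
    print("'\(somestring)' is not an integer")
}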
You should consider using the nil coalescing operator "??" to return 0 instead of nil when trying to extract the value from your string:
let someString = "5"
let someNumber = 2
let newNumber = (someString.toInt() ?? 0) + someNumber
println(newNumber) // 7
let anotherString = "a"
let anotherNumber = (anotherString.toInt() ?? 0) + someNumber
println(anotherNumber) // 2
update: Xcode 7.1.1 • Swift 2.1
let someString = "5"
let someNumber = 2
let newNumber = (Int(someString) ?? 0) + someNumber
print(newNumber) // 7
let anotherString = "a"
let anotherNumber = (Int(anotherString) ?? 0) + someNumber // 2

How do you cast a UInt64 to an Int64?

Trying to call dispatch_time in Swift is doing my head in, here's why:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 10 * NSEC_PER_SEC), dispatch_get_main_queue(), {
    doSomething()
})
Results in the error: "Could not find an overload for '*' that accepts the supplied arguments".
NSEC_PER_SEC is a UInt64, so time for some experiments:
let x:UInt64 = 1000
let m:Int64 = 10 * x
Results in the same error as above
let x:UInt64 = 1000
let m:Int64 = 10 * (Int64) x
Results in "Consecutive statements on a line must be separated by ';'"
let x:UInt64 = 1000
let m:Int64 = 10 * ((Int64) x)
Results in "Expected ',' separator"
let x:UInt64 = 1000
let m:Int64 = (Int64)10 * (Int64) x
Results in "Consecutive statements on a line must be separated by ';'"
Etc. etc.
Damn you Swift compiler, I give up. How do I cast a UInt64 to an Int64, and/or how do you use dispatch_time in Swift?
You can "cast" between different integer types by initializing a new integer with the type you want:
let uint:UInt64 = 1234
let int:Int64 = Int64(uint)
It's probably not an issue in your particular case, but it's worth noting that different integer types have different ranges, and you can end up with out-of-range crashes if you try to convert between them:
let bigUInt:UInt64 = UInt64(Int64.max) - 1 // 9,223,372,036,854,775,806
let bigInt:Int64 = Int64(bigUInt) // no problem
let biggerUInt:UInt64 = UInt64(Int64.max) + 1 // 9,223,372,036,854,775,808
let biggerInt:Int64 = Int64(biggerUInt) // crash!
Each integer type has .max and .min static properties that you can use for checking ranges:
if biggerUInt <= UInt64(Int64.max) {
    let biggerInt: Int64 = Int64(biggerUInt) // safe!
}
To construct an Int64 using the bits of a UInt64, use the init seen here: https://developer.apple.com/reference/swift/int64/1538466-init
let myInt64 = Int64(bitPattern: myUInt64)
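For example, the bit-pattern initializers round-trip without trapping:
let u = UInt64.max               // 18446744073709551615
let i = Int64(bitPattern: u)     // -1 (same bits, two's complement)
let back = UInt64(bitPattern: i) // 18446744073709551615 again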
Try this:
let x:UInt64 = 1000 // 1,000
let m:Int64 = 10 * Int64(x) // 10,000
or even:
let x:UInt64 = 1000 // 1,000
let m = 10 * Int64(x) // 10,000
let n = Int64(10 * x) // 10,000
let y = Int64(x) // 1,000, as Int64 (as per #Bill's question)
It's not so much casting as initialising with a separate type...
Casting a UInt64 to an Int64 is not safe, since a UInt64 can hold a number greater than Int64.max, which would overflow.
Here's a snippet for converting a UInt64 to Int64 and vice-versa:
// Extension for 64-bit integer signed <-> unsigned conversion
extension Int64 {
    var unsigned: UInt64 {
        let valuePointer = UnsafeMutablePointer<Int64>.allocate(capacity: 1)
        defer {
            valuePointer.deallocate() // deallocate(capacity:) is deprecated since Swift 4.1
        }
        valuePointer.pointee = self
        return valuePointer.withMemoryRebound(to: UInt64.self, capacity: 1) { $0.pointee }
    }
}

extension UInt64 {
    var signed: Int64 {
        let valuePointer = UnsafeMutablePointer<UInt64>.allocate(capacity: 1)
        defer {
            valuePointer.deallocate()
        }
        valuePointer.pointee = self
        return valuePointer.withMemoryRebound(to: Int64.self, capacity: 1) { $0.pointee }
    }
}
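Usage, assuming these extensions; the bit pattern is preserved in both directions:
UInt64.max.signed  // -1
Int64(-1).unsigned // 18446744073709551615 (UInt64.max)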
This simply interprets the binary data of the UInt64 as an Int64, i.e. numbers greater than Int64.max become negative because of the sign bit at the most significant bit of the 64-bit integer.
If you just want positive integers, just get the absolute value.
EDIT: Depending on behavior, you can either get the absolute value, or:
if currentValue < 0 {
    return Int64.max + currentValue + 1
} else {
    return currentValue
}
The latter option is similar to stripping the sign bit. Ex:
// Using an 8-bit integer for simplicity
// currentValue
0b1111_1111 // If this is interpreted as Int8, this is -1.
// Strip sign bit
0b0111_1111 // As Int8, this is 127. To get this we can add Int8.max
// Int8.max + currentValue + 1
127 + (-1) + 1 = 127
A C-style alternative that converts via the 32-bit halves:
UInt64 Int64_2_UInt64(Int64 Value)
{
    return (((UInt64)((UInt32)((UInt64)Value >> 32))) << 32)
           | (UInt64)((UInt32)((UInt64)Value & 0x0ffffffff));
}

Int64 UInt64_2_Int64(UInt64 Value)
{
    return (Int64)((((Int64)(UInt32)((UInt64)Value >> 32)) << 32)
           | (Int64)((UInt32)((UInt64)Value & 0x0ffffffff)));
}
A simple solution for Swift 3 and later is the built-in numericCast function, which handles the conversion without manual buffer management. Note that, like the plain initializer, it traps if the value doesn't fit in the destination type:
var a:UInt64 = 1234567890
var b:Int64 = numericCast(a)
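One caveat worth showing: numericCast is not a bit-pattern conversion, so an out-of-range value traps at runtime; Int64(truncatingIfNeeded:) is the non-trapping alternative (a quick sketch):
let big = UInt64.max
// let bad: Int64 = numericCast(big)          // traps: value doesn't fit in Int64
let wrapped = Int64(truncatingIfNeeded: big)  // -1, keeps the bit pattern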