Cannot match values of type - swift

I'm just starting out with Swift 3 and I'm converting a Rails project to Swift (side project while I learn).
Fairly simple: I have a Rails statement I'm converting and I'm getting many red errors in Xcode:
let startingPoint: Int = 1
let firstRange: ClosedRange = (2...10)
let secondRange: ClosedRange = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return // code
    default:
        return //code
    }
}
calc will be passed either an Int or a Float value: 10 or 10.50
Errors are:
Expression pattern of type ClosedRange cannot match values of type Float
Binary operator - cannot be applied to operands of type Float and Int
I understand the errors, but I don't know what to search for to correct them. Could you point me in the right direction, please?

Swift is strongly typed. Whenever you use a variable or pass something as a function argument, Swift checks that it is of the correct type. You can't pass a string to a function that expects an integer, and so on. Swift does this check at compile time (since it's statically typed).
To adhere to those rules, try changing your code to this:
let startingPoint: Float = 1
let firstRange: ClosedRange<Float> = (2...10)
let secondRange: ClosedRange<Float> = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return 1.0 // 1.0 is just an example, but you have to return Float since that is defined in the method
    default:
        return 0.0 // 0.0 is just an example, put whatever you need here
    }
}

For the first error, you might want to specify the ClosedRange to be over Float. Something similar to:
let firstRange: ClosedRange<Float> = (2...10)
For the second error, the problem is that you are trying to compare a Float (range: Float) with an Int (startingPoint). So I would suggest you convert the startingPoint variable to a Float as well.
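For example, a minimal sketch of both fixes together (the values returned for the range cases are placeholders, since the original Rails logic isn't shown):
let startingPoint: Float = 1                 // now the same type as `range`
let firstRange: ClosedRange<Float> = 2...10
let secondRange: ClosedRange<Float> = 11...20

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1   // Float - Float compiles fine
    case firstRange:
        return 1.0                           // placeholder
    case secondRange:
        return 2.0                           // placeholder
    default:
        return 0.0                           // placeholder
    }
}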

Round decimal to nearest increment given a number

I would like to round down a decimal to the nearest increment of another number. For example, given a value of 2.23678301 and an increment of 0.0001, I would like to round this to 2.2367. Sometimes the increment could be something like 0.00022, in which case the value would be rounded down to 2.23674.
I tried to do this, but sometimes the result is not correct and tests aren't passing:
import Foundation
import XCTest

extension Decimal {
    func rounded(byIncrement increment: Self) -> Self {
        var multipleOfValue = self / increment
        var roundedMultipleOfValue = Decimal()
        NSDecimalRound(&roundedMultipleOfValue, &multipleOfValue, 0, .down)
        return roundedMultipleOfValue * increment
    }
}

/// Tests
class DecimalTests: XCTestCase {
    func testRoundedByIncrement() {
        // Given
        let value: Decimal = 2.2367830187654

        // Then
        XCTAssertEqual(value.rounded(byIncrement: 0.00010000), 2.2367)
        XCTAssertEqual(value.rounded(byIncrement: 0.00022), 2.23674)
        XCTAssertEqual(value.rounded(byIncrement: 0.0000001), 2.236783)
        XCTAssertEqual(value.rounded(byIncrement: 0.00000001), 2.23678301) // XCTAssertEqual failed: ("2.23678301") is not equal to ("2.236783009999999744")
        XCTAssertEqual(value.rounded(byIncrement: 3.5), 0)
        XCTAssertEqual(value.rounded(byIncrement: 0.000000000000001), 2.2367830187654) // XCTAssertEqual failed: ("2.2367830187653998323726489726140416") is not equal to ("2.236783018765400576")
    }
}
I'm not sure why the decimal calculations are making up numbers that were never there, like the last assertion. Is there a cleaner or more accurate way to do this?
Your code is fine. You're just calling it incorrectly. This line doesn't do what you think:
let value: Decimal = 2.2367830187654
This is equivalent to:
let value = Decimal(Double(2.2367830187654))
The value is first converted to a Double, binary rounding it to 2.236783018765400576. That value is then converted to a Decimal.
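A quick way to see the difference (a small sketch you could run in a playground; the exact digits printed may vary slightly):
import Foundation

let fromLiteral: Decimal = 2.2367830187654             // goes through Double first
let fromString = Decimal(string: "2.2367830187654")!   // parsed exactly

print(fromLiteral)   // something like 2.236783018765400576
print(fromString)    // 2.2367830187654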
You need to use the string initializer everywhere you want a Decimal from a digit string:
let value = Decimal(string: "2.2367830187654")!
XCTAssertEqual(value.rounded(byIncrement: Decimal(string: "0.00000001")!), Decimal(string: "2.23678301")!)
etc.
Or you can use the integer-based initializers:
let value = Decimal(sign: .plus, exponent: -13, significand: 22367830187654)
In iOS 15 there are some new initializers that don't return optionals (init(_:format:lenient:) for example), but you're still going to need to pass Strings, not floating point literals.
You could also do this, though it may be confusing to readers, and might lead to bugs if folks take the quotes away:
extension Decimal: ExpressibleByStringLiteral {
    public init(stringLiteral value: String) {
        self.init(string: value)!
    }
}

let value: Decimal = "2.2367830187654"
XCTAssertEqual(value.rounded(byIncrement: "0.00000001"), "2.23678301")
For test code, that's probably nice, but I'd be very careful about using it in production code.

Cannot invoke initializer for type 'Int' with an argument list of type '(Number)'

This program can't be compiled in Xcode; it only runs in the iOS app Swift Playgrounds.
In the iOS Swift Playgrounds app, under "Learn to Code 3" (Swift 3.1) -> "Music Universe", I have the code below:
// A touch event for when your finger is moving across the scene.
// Declaration
struct Touch
// The position of this touch on the scene.
// Declaration
var position: Point
// The x coordinate for the point.
// Declaration for touch.position.x
var x: Double
The above is just for explanation.
let touch: Touch
let numberOfNotes = 16
let normalizedXPosition = (touch.position.x + 500) / 1000
let note = normalizedXPosition * (numberOfNotes - 1)
let index = Int(note)
The last line shows the error:
Cannot invoke initializer for type 'Int' with an argument list of type '(Number)'.
How can I convert note to Int type?
This is running on the iPad application Swift Playgrounds.
They apparently have created a protocol Number behind the scenes to ease the pain of Swift type conversions. In this mini program, it is possible to multiply an Int and a Double without first converting the Int to a Double:
let a = 5 // a is an Int
let b = 6.3 // b is a Double
let c = a * b // This results in c being type Number
Number has read-only properties int and double which return the Int and Double representations of the number.
var int: Int { get }
var double: Double { get }
So, if you need index to be an Int, do it like this:
let index = note.int
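For comparison, in regular Swift (outside the Swift Playgrounds app) the same arithmetic yields a Double, and the Int(_:) initializer works as expected. A rough sketch, with a made-up x coordinate standing in for touch.position.x:
let numberOfNotes = 16
let touchX = 250.0                                   // stand-in for touch.position.x
let normalizedXPosition = (touchX + 500) / 1000      // Double
let note = normalizedXPosition * Double(numberOfNotes - 1)
let index = Int(note)                                // truncates toward zero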

Swift 3: How to check the type of a generic array

I declare a generic array
fileprivate var array: [T?]
I have a method average(), which will calculate the average if T is Int or Float; otherwise it returns 0.
public func average() -> Float {
    var mean = 0
    if T is Int or Float {
        for index in (0..<array.count-1) {
            mean = mean + array[index]
        }
        mean = mean / array.count
    }
    return mean;
}
Question: how do I check whether the array is holding Int or Float values (the "if T is Int or Float" line in the code above)?
This is a job for protocols. The useful protocols for your problem are FloatingPoint to handle floating point types (like Float) and Integer to handle signed integer types (like Int). These have slightly different implementations, so it's best to write each one separately. Doing this will ensure that this method is only available for appropriate types of T (rather than for all possible types, just returning 0 in the other cases).
extension MyStruct where T: FloatingPoint {
    func average() -> T {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = T(array.count)
        return sum.divided(by: count)
    }
}

extension MyStruct where T: Integer {
    func average() -> Float {
        let sum = array.flatMap { $0 }.reduce(0, +)
        let count = array.count
        return Float(sum.toIntMax()) / Float(count.toIntMax())
    }
}
EDIT: Following up a bit more on Caleb's comments below, you may be tempted to think it's ok to just convert integers into floats to generate their average. But this is not safe in general without careful consideration of your ranges. For example, consider the average of [Int.min, Int.max]. That's [-9223372036854775808, 9223372036854775807]. The average of that should be -0.5, and that's what's returned by my example above. However, if you convert everything to floats in order to sum it, you'll get 0, because Float cannot express Int.max precisely. I've seen this bite people in live code when they do not remember that for very large floats, x == x+1.
Float(Int.max) == Float(Int.max - 1) // true
You can use the type(of:) function introduced in Swift 3 in the following way:
let type = type(of: array)
print("type: \(type)") // if T is String, you will see Array<Optional<String>> here
You will need to iterate over your array and use "if let" to unwrap the type of the values in the array. If they are Ints, handle them one way; if they are Floats, handle them another way.
// While iterating through your array, check each element to see if it is a Float or an Int.
if let intValue = element as? Int {
    // element holds an Int. Do something with intValue
}
else {
    // element does not hold an Int
}
Here's a high level description:
... inside a loop which allows indexing
if let ex = array[index] as? Int {
    // your code
    continue // go around the loop again, you're all done here
}
if let ex = array[index] as? Float {
    // other code
    continue // go around the loop again, you're all done here
}
// if you got here it isn't either of them
// code to handle that
... end of inside the loop
I can explain further if that isn't clear enough.
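For instance, a concrete version of that sketch might look like this (assuming the array lives in a generic struct called MyStruct, as in the first answer; the print statements are just placeholders):
struct MyStruct<T> {
    fileprivate var array: [T?] = []

    func describeElements() {
        for index in 0..<array.count {
            if let intValue = array[index] as? Int {
                print("Int:", intValue)       // handle the Int case
                continue                      // done with this element
            }
            if let floatValue = array[index] as? Float {
                print("Float:", floatValue)   // handle the Float case
                continue                      // done with this element
            }
            // neither Int nor Float (or the element is nil) - handle that here
        }
    }
}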
This is probably the simplest way to do it:
var average: Float {
    let total = array.reduce(0.0) { (runningTotal, item) -> Float in
        if let itemAsFloat = item as? Float {
            return runningTotal + itemAsFloat
        }
        else if let itemAsInt = item as? Int {
            return runningTotal + Float(itemAsInt)
        }
        else {
            return runningTotal
        }
    }

    return total / Float(array.count)
}
Obviously, if you want, it can be a function, and depending on how you want to use it you may need to tweak it.
*Note that it's possible to have both Int and Float values in an array of T, e.g. if T is Any.
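For example, with T being Any both element types can appear in the same array (a tiny illustration, reusing the MyStruct wrapper assumed earlier):
var mixed = MyStruct<Any>()
mixed.array = [1, Float(2.5), nil]   // an Int, a Float and a nil can all coexist when T is Any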

How to convert a computed value to a literal for enum initialization

I've run into an issue with my enums in that I want to initialize a case to the double value of PI / 180. Is there a way to take this calculated value via a constant or some funky magic and turn it into a literal so that I can initialize the enum?
I would prefer not to have to hard-code a 3.14... literal - I would rather use the actual compiler and hardware computed representation of this value.
So my 1st attempt was:
public enum ANGLE_TYPE : Double {
    case DEGREES = Double(CGFloat(M_PI / 180.0))
    case RADIANS = 1.0
}
I keep getting the error Raw value for enum case must be a literal
My second attempt was:
public enum ANGLE_TYPE : Double {
    let d : Double = Double(CGFloat(M_PI / 180.0))
    case DEGRESS = d
}
and I get the same error.
Could somebody please tell me how to go about doing this?
You can only use literals for the raw values of type-backed enums.
To get this to work, you have to compute the value of the expression yourself and paste that in as a literal approximation:
public enum ANGLE_TYPE : Double {
    case DEGREES = 0.0174532925199433
    case RADIANS = 1.0
}
The only other option is to not have a type-backed enum and manually provide the rawValue property:
public enum ANGLE_TYPE {
    case DEGREES, RADIANS

    var rawValue: Double {
        get {
            switch self {
            case .DEGREES:
                return Double(CGFloat(M_PI / 180.0))
            case .RADIANS:
                return 1.0
            }
        }
    }
}
This might actually make sense, because it means you don't get the init(rawValue: Double) initializer, which probably doesn't make a whole lot of sense in this case anyway.
As a side note, this all caps thing is really unnecessary. I'd much prefer something more like this:
public enum AngleMeasureUnit {
    case Degrees, Radians
}
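Putting the two ideas together (the manual rawValue property plus the friendlier naming) might look roughly like this; M_PI comes from Foundation:
import Foundation

public enum AngleMeasureUnit {
    case Degrees, Radians

    var rawValue: Double {
        switch self {
        case .Degrees:
            return M_PI / 180.0   // computed at runtime, no literal needed
        case .Radians:
            return 1.0
        }
    }
}

let factor = AngleMeasureUnit.Degrees.rawValue   // roughly 0.0174532925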

Xcode6 beta 4 downcast to float

I have a Dictionary with a String and AnyObject, so [String: AnyObject].
In a function I want to check the type of the dict value. So this code worked in Xcode6-Beta 3:
for (key, value: AnyObject) in contents {
    ...
} else if value is Float {
    stringValue = String(value as Float) + ","
Now I get the error: AnyObject is not convertible to Float
stringValue = String(Float(value)) + "," doesn't work either.
Any ideas?
There is no problem with casting AnyObject to Float; as you can see, the instruction below executes without errors:
var f: Float = value as Float
The issue is that Swift's String has no initializer that converts a Float. If you do
var str: String = String(f) // This will show an error, as Swift has no initializer for Float
Swift has only added an initializer to String for Int, which converts directly. There is no initializer for Float.
var i: Int = value as Int
var str: String = String(i) // this will run fine
Now, to solve your problem, you can do:
for (key, value: AnyObject) in contents {
    if value is Int {

    } else if value is Float {
        // Store this in a var and use it
        var stringValue = "\(value as Float),"
    }
}
In the future Swift may add an initializer for Float, but currently there is none.
Replace "as" with "as!" to force downcast.
Please, remember that you can use the forced form of the type cast operator (as!) only when you are sure that the downcast will always succeed, otherwise a runtime error will be triggered. In your particular case there will be no problem since there is a previous checking (if value is Float).
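A small sketch of that suggestion (the example value is made up; the forced cast is guarded by the is check):
import Foundation

let value: AnyObject = Float(10.5) as AnyObject   // example value, boxed as it would be in a [String: AnyObject]
var stringValue = ""

if value is Float {
    stringValue = String(value as! Float) + ","   // safe here because of the preceding `is Float` check
}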
You can't cast to Float because:
AnyObject can represent an instance of any class type.
From Swift Programming Guide
But Float isn't a class. You will have to use Any instead:
Any can represent an instance of any type at all, apart from function types.
From Swift Programming Guide
This should work (but I can't test on my current Mac):
for (key, value: Any) in contents {
    ...
} else if value is Float {
    stringValue = String(value as Float) + ","
As wumm said, you can't directly cast to Float because Float is not a class type. You may be able to cast it to an NSNumber and bridge to a Float though. Try
value as NSNumber as Float
This strategy also works for casting AnyObject into a Swift String.
Using your code and fixing the concatenation, it would look like:
for (key, value: AnyObject) in contents {
    if value is Float {
        stringValue = "\(stringValue) \(value as NSNumber as Float) ,"
    }
}
}