No '*' candidates produce the expected contextual result type 'FloatingPointRoundingRule' [duplicate]

I have the following simple extension to Double, which worked fine in everything up to Xcode 8 beta 3
public extension Double {
    public func roundTo(_ decimalPlaces: Int) -> Double {
        var v = self
        var divisor = 1.0
        if decimalPlaces > 0 {
            for _ in 1 ... decimalPlaces {
                v *= 10.0
                divisor *= 0.1
            }
        }
        return round(v) * divisor
    }
}
As of Beta 4, I am getting "Cannot use mutating member on immutable value: 'self' is immutable" on the round function in the return - has anyone got any clues?

This is due to a naming conflict with the new rounding functions on the FloatingPoint protocol, round() and rounded(), which have been added to Swift 3 as of Xcode 8 beta 4.
You therefore either need to disambiguate by specifying that you're referring to the global round() function in the Darwin module:
return Darwin.round(v) * divisor
Or, even better, simply make use of the new rounding functions and call rounded() on v:
return v.rounded() * divisor
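For completeness, the whole extension with the fix applied might look like this (a minimal sketch; note that the repeated multiplication by 0.1 is itself subject to binary floating-point error):
public extension Double {
    func roundTo(_ decimalPlaces: Int) -> Double {
        var v = self
        var divisor = 1.0
        if decimalPlaces > 0 {
            for _ in 1 ... decimalPlaces {
                v *= 10.0      // scale up by 10 per decimal place
                divisor *= 0.1 // remember how to scale back down
            }
        }
        return v.rounded() * divisor
    }
}
(12.34567).roundTo(2) // 12.35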


'+' is deprecated: Mixed-type addition is deprecated in Swift 3.1

When I directly add an integer literal (i.e. 1, 2, 3, etc.) to another integer variable:
let arr: Array = ["One", "Two"]
var value: Int64 = 0
value = arr.count + 1 // in this line
I get the following warning:
'+' is deprecated: Mixed-type addition is deprecated. Please use explicit type conversion.
I fixed the warning with this:
value = Int64(value + 1)
Though it is fixed, I want to know why it's called mixed-type addition, since I never knowingly added two different types. Also, is there a better way to fix the warning in Swift 3.1?
Update:
I'm using Xcode Version 8.3 (8E162). (A screenshot showed the warning; allROR is an array there.)
Edit: To reproduce the warning, the code should look like this:
let value = 5
let result: Int64 = value + 1
Now you get the warning
'+' is deprecated: Mixed-type addition is deprecated. Please use explicit type conversion.
But the warning looks misleading, since both value and 1 appear to be of type Int, so their sum should also be an Int. You simply need to convert the result to Int64, which is what you are doing, and that is perfectly OK.
let result: Int64 = Int64(value + 1)
Just to answer this part: why it's called mixed-type addition.
With the simplified example by Nirav D:
let value = 5
let result: Int64 = value + 1
You can Command-click on + and see its declaration in the generated interface:
(After Indexing has finished, of course.)
@available(swift, deprecated: 3.0, obsoleted: 4.0, message: "Mixed-type addition is deprecated. Please use explicit type conversion.")
public func +<T>(lhs: T.Stride, rhs: T) -> T where T : SignedInteger
So, in the code example above, the type of 1 is inferred as Int64, and as Int64.Stride == Int, the operation value + 1 matches the signature func +<T>(lhs: T.Stride, rhs: T) -> T where T : SignedInteger.
This deprecation is included in the revised version of SE-0104 Protocol-oriented integers, this part:
Standard Library no longer provides + and - operators for Strideable types. They were problematic, as one could have written mixed-type code like let x: Int64 = 42; x += (1 as Int), which would compile, but shouldn't. Besides, since the Stride of an unsigned type is signed, Standard Library had to implement a hack to make code like let x: UInt = 42; x += (1 as Int) ambiguous. These operators were only necessary because they made advancing collection indices convenient, which is no longer the case since the introduction of the new indexing model in Swift 3.
As you already have seen, you can avoid this warning in many ways.
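For example, any of the following keeps the operand types consistent (the variable names are just illustrative):
let value = 5                   // Int

let a: Int64 = Int64(value + 1) // add in Int, then convert the result
let b: Int64 = Int64(value) + 1 // convert first; the literal 1 is then inferred as Int64
let c = value + 1               // or stay in Int...
let d = Int64(c)                // ...and convert only where Int64 is required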
The data types are different; that is why it is showing the warning. You need to make the variable and the constant the same data type, e.g.:
let result = value + Int64(1) // in this line
OK
var arr = [String]()
var i: Int64 = 0
if arr.count == 0 {
    i = 1
} else {
    i = arr.count + 1
}
gives the warning '+' is deprecated: Mixed-type addition is deprecated. Please use explicit type conversion.
The reason is that arr.count and i have different types, and this warning is right. It has nothing to do with the integer literal 1.
This snippet gives you the warning too:
var arr = [String]()
var i: Int64 = 0
if arr.count == 0 {
    i = 1
} else {
    i = 1
    i += arr.count // here is the warning now
}
This will not compile, even though it looks very similar:
var arr = [String]()
var i: Int64 = 0
if arr.count == 0 {
    i = 1
} else {
    let tmp = arr.count + 1
    i = tmp
}
I hope we get a proper error message for all of these snippets in a future release.
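For reference, an explicit conversion, as the warning suggests, makes the first two snippets compile cleanly (a minimal sketch):
var arr = [String]()
var i: Int64 = 0
if arr.count == 0 {
    i = 1
} else {
    i = Int64(arr.count) + 1 // both operands are Int64, so no mixed-type '+'
}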

Cannot match values of type

I'm just starting out with Swift 3 and I'm converting a Rails project to Swift (side project while I learn).
Fairly simple: I have a Rails statement I'm converting, and I'm getting many red errors in Xcode:
let startingPoint: Int = 1
let firstRange: ClosedRange = (2...10)
let secondRange: ClosedRange = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return // code
    default:
        return // code
    }
}
calc will be called with either an Int or a Float value: 10 or 10.50.
Errors are:
Expression pattern of type ClosedRange cannot match values of type Float
Binary operator - cannot be applied to operands of type Float and Int
I understand the errors, but I don't know what to search for to correct them. Could you point me in the right direction, please?
Swift is strongly typed. Whenever you use a variable or pass something as a function argument, Swift checks that it is of the correct type; you can't pass a string to a function that expects an integer, etc. Swift does this check at compile time (since it's statically typed).
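For instance (a contrived illustration; takesInt is a made-up function):
func takesInt(_ x: Int) {}
takesInt(42)      // fine
takesInt("hello") // error: cannot convert value of type 'String' to expected argument type 'Int'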
To adhere to those rules, try changing your code to this:
let startingPoint: Float = 1
let firstRange: ClosedRange<Float> = (2...10)
let secondRange: ClosedRange<Float> = (11...20)

func calc(range: Float) -> Float {
    switch range {
    case startingPoint:
        return (range - startingPoint) * 1 // or 0.2
    case firstRange:
        return 1.0 // just an example; you have to return a Float, since that is what the method declares
    default:
        return 0.0 // just an example; put whatever you need here
    }
}
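With the placeholder return values above, it behaves like this:
calc(range: 1.0)  // 0.0: matches startingPoint, so (1.0 - 1.0) * 1
calc(range: 5.5)  // 1.0: falls into firstRange (2...10)
calc(range: 42.0) // 0.0: no case matches, so the default runs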
For the first error, you might want to specify the ClosedRange to be over Float. Something similar to:
let firstRange: ClosedRange<Float> = (2...10)
For the second error, the problem is that you are trying to combine a Float (range) with an Int (startingPoint) in range - startingPoint. So I would suggest you convert the startingPoint variable to a Float as well.

Swift 3 Migration - Double Extension rounding issue

I'm migrating our codebase to Swift 3 and I've come across a compilation issue that I can't explain or fix.
I have a method in a Double extension that rounds the Double to a certain number of digits:
public func roundToPlaces(places: Int) -> Double {
    let divisor = pow(10.0, Double(places))
    return round(self * divisor) / divisor
}
For example, 12.34567.roundToPlaces(places: 2) should return 12.35. However, I'm getting a compilation error for the round call used in this extension: Cannot use mutating member on immutable value: 'self' is immutable.
Any ideas on what's going on here? How do I fix this issue?
I've fixed the issue. Changing round(self * divisor) to (self * divisor).rounded() resolved the compilation issue.
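For reference, the full method after the fix might look like this (a minimal sketch; Foundation is imported for pow):
import Foundation // pow

public extension Double {
    func roundToPlaces(places: Int) -> Double {
        let divisor = pow(10.0, Double(places))
        return (self * divisor).rounded() / divisor
    }
}

12.34567.roundToPlaces(places: 2) // 12.35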

Swift: How to use sizeof?

In order to integrate with C APIs while using Swift, I need to use the sizeof function. In C, this was easy. In Swift, I am in a labyrinth of type errors.
I have this code:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt)
The second line has the error "'NSNumber' is not a subtype of 'T.Type'". Why is this and how do I fix it?
Updated for Swift 3
Be careful that MemoryLayout<T>.size means something different than sizeof in C/Obj-C. You can read this old thread: https://devforums.apple.com/message/1086617#1086617
Swift uses a generic type to make it explicit that the size is known at compile time.
To summarize, MemoryLayout<Type>.size is the space required for a single instance, while MemoryLayout<Type>.stride is the distance between successive elements in a contiguous array. MemoryLayout<Type>.stride in Swift is the same as sizeof(type) in C/Obj-C.
To give a more concrete example:
struct Foo {
    let x: Int
    let y: Bool
}

MemoryLayout<Int>.size      // returns 8 on 64-bit
MemoryLayout<Bool>.size     // returns 1
MemoryLayout<Foo>.size      // returns 9
MemoryLayout<Foo>.stride    // returns 16 because of alignment requirements
MemoryLayout<Foo>.alignment // returns 8: addresses must be multiples of 8
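As a usage sketch, stride is the figure to use when sizing a buffer for a contiguous array (the constant names here are just illustrative):
let foos = [Foo(x: 1, y: true), Foo(x: 2, y: false)]
let bufferBytes = MemoryLayout<Foo>.stride * foos.count // 32, not 9 * 2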
Use sizeof as follows:
let size = sizeof(Int)
sizeof takes the type as its parameter. (This applies to Swift 1 and 2; sizeof was removed in Swift 3, as described below.)
If you want the size of the anInt variable, you can pass its dynamicType to sizeof, like so:
var anInt: Int = 5
var anIntSize: Int = sizeof(anInt.dynamicType)
Or more simply (pointed out by user102008):
var anInt: Int = 5
var anIntSize: Int = sizeofValue(anInt)
Swift 3 now has MemoryLayout.size(ofValue:) which can look up the size dynamically.
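That is (a quick check, assuming a 64-bit platform):
let anInt: Int = 5
MemoryLayout.size(ofValue: anInt) // 8
MemoryLayout<Int>.size            // 8: the same answer, resolved from the type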
Using a generic function that in turn uses MemoryLayout<Type> will have unexpected results if you, for example, pass it a reference of protocol type. This is because, as far as I know, the compiler fills in the type parameter at compile time from the static type at the call site, which is not apparent when looking at the function call. You would then get the size of the protocol, not of the current value.
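A small illustration of that pitfall (sizes are for 64-bit platforms; P and S are made-up names):
protocol P {}
struct S: P { let x: Int }

let s: P = S(x: 1)
MemoryLayout<S>.size          // 8: the concrete value
MemoryLayout<P>.size          // 40: the existential container
MemoryLayout.size(ofValue: s) // 40: T is inferred as P, not S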
In Xcode 8 with Swift 3 beta 6 there is no sizeof() function. But if you want, you can define one for your needs. The new sizeof functions below work as expected with an array, too, which was not possible with the old builtin sizeof function.
let bb: UInt8 = 1
let dd: Double = 1.23456

func sizeof<T>(_: T.Type) -> Int {
    return MemoryLayout<T>.size
}

func sizeof<T>(_: T) -> Int {
    return MemoryLayout<T>.size
}

func sizeof<T>(_ value: [T]) -> Int {
    return MemoryLayout<T>.size * value.count
}

sizeof(UInt8.self)  // 1
sizeof(Bool.self)   // 1
sizeof(Double.self) // 8
sizeof(dd) // 8
sizeof(bb) // 1

var testArray: [Int32] = [1, 2, 3, 4]
var arrayLength = sizeof(testArray) // 16
You need all three versions of the sizeof function: to get the size of a variable, and to get the correct size of a data type and of an array. If you only define the second function, then sizeof(UInt8.self) and sizeof(Bool.self) will result in 8, because they measure the metatype value. If you only define the first two functions, then sizeof(testArray) will result in 8, the size of the array struct itself.
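To see the first pitfall in isolation (sizeofValueOnly is a made-up name for the value-only overload):
func sizeofValueOnly<T>(_: T) -> Int {
    return MemoryLayout<T>.size
}

sizeofValueOnly(UInt8.self) // 8: T is UInt8.Type, so this measures the metatype
sizeofValueOnly(1 as UInt8) // 1: T is UInt8, as intended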
Swift 4
From Xcode 9 onwards there is a property called .bitWidth; this provides another way of writing sizeof-style functions for instances and integer types:
func sizeof<T: FixedWidthInteger>(_ int: T) -> Int {
    return int.bitWidth / UInt8.bitWidth
}

func sizeof<T: FixedWidthInteger>(_ intType: T.Type) -> Int {
    return intType.bitWidth / UInt8.bitWidth
}

sizeof(UInt16.self) // 2
sizeof(20) // 8
But for consistency it would make more sense to replace sizeof with a .byteWidth property:
extension FixedWidthInteger {
    var byteWidth: Int {
        return self.bitWidth / UInt8.bitWidth
    }
    static var byteWidth: Int {
        return Self.bitWidth / UInt8.bitWidth
    }
}

1.byteWidth      // 8
UInt32.byteWidth // 4
It is easy to see why sizeof was considered ambiguous, but I'm not sure that burying it in MemoryLayout was the right thing to do. See SE-0101 for the reasoning behind moving sizeof into MemoryLayout.