Compilation error attempting to use `min()` function in Swift

I'm trying to write an extension on Swift's Int type so that I don't have to nest min()/max() calls. It looks like this:
extension Int {
    func bound(minVal: Int, maxVal: Int) -> Int {
        let highBounded = min(self, maxVal)
        return max(minVal, highBounded)
    }
}
However, I have a compilation error when assigning/computing highBounded:
IntExtensions.swift:13:25: 'Int' does not have a member named 'min'
Why aren't the functions defined by the standard library being found?

It looks like the compiler is trying to find a method named min() or max() on Int, since you are extending Int. You can get around this and use the standard library's free min and max functions by qualifying them with the Swift module name.
extension Int {
    func bound(minVal: Int, maxVal: Int) -> Int {
        let highBounded = Swift.min(self, maxVal)
        return Swift.max(minVal, highBounded)
    }
}
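For what it's worth, a quick usage sketch of the extension above (assuming a Swift 3 or later compiler, where both argument labels are required at the call site):

let clampedHigh = 7.bound(minVal: 0, maxVal: 5)    // 5: pulled down to the upper bound
let unchanged   = 3.bound(minVal: 0, maxVal: 5)    // 3: already inside the range
let clampedLow  = (-2).bound(minVal: 0, maxVal: 5) // 0: raised to the lower bound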

Related

Cannot convert value of type 'Range<Int>' to expected argument type 'Range<_>'

Using Swift 4.2, I get the error in the title from this function:
func jitter(range: Int) -> Int {
    return Int.random(in: 0..<range, using: SystemRandomNumberGenerator())
}
Questions:
What precisely does Range<_> mean?
Is there a better way to do this? I simply want a small random number inside an animation loop.
The Swift compiler is giving you a bad error message. The problem is that the second argument to Int.random(in:using:) must be passed inout (i.e. with a & prefix). This works:
func jitter(range: Int) -> Int {
    var rng = SystemRandomNumberGenerator()
    return Int.random(in: 0..<range, using: &rng)
}
Even easier, omit the using: parameter altogether (SystemRandomNumberGenerator is the default RNG anyway):
func jitter(range: Int) -> Int {
    return Int.random(in: 0..<range)
}

Swift 3.0 reduce - Argument passed to call that takes no arguments [duplicate]

After converting from Swift 2.2 to 3.0, my Array extension no longer compiles because it contains a call to the global standard library function min<T>(T,T), and the compiler reports "Extra argument in call".
Here's a simple way to reproduce the error:
extension Array {
    func smallestInt(first: Int, second: Int) -> Int {
        return min(first, second) // compiler error: "Extra argument in call"
    }
}
I get the same error when adding the same function to an extension of Dictionary, while the exact same code compiles just fine in extensions of other types (e.g. String or AudioBuffer).
Looking at the documentation of Array and Dictionary, I find that there are instance methods on Sequence named public func min() -> Element? and public func min(by areInIncreasingOrder: (Element, Element) throws -> Bool) rethrows -> Element?, while neither String nor AudioBuffer has any kind of min(...) function.
Is it possible that this is the reason why I can't call the global function? The compiler can't distinguish between the global func min<T>(T,T) and self.min(...), even though they have completely different signatures?
Is this a bug or a feature? What am I doing wrong? How can I call min(T,T) correctly inside an Array extension?
I see no reason why the compiler shouldn't be able to resolve this function call, so I would consider it a bug (it has already been filed – see SR-2450).
It seems to occur whenever attempting to call a top-level function with the same name as, but an unambiguously different signature from, a method or property that's accessible from the same scope in a given type (whether instance or static).
An even simpler example would be:
func foo(_ a: Int) {}

struct Foo {
    func foo() {} // or static func foo() {}, var foo = 0, static var foo = 0

    func bar() {
        foo(2) // error: argument passed to call that takes no arguments
    }
}
Until fixed, a simple solution would be to prefix the call with the name of the module in which it resides in order to disambiguate that you're referring to the top-level function, rather than the instance one. For the standard library, that's Swift:
extension Array {
    func smallestInt(first: Int, second: Int) -> Int {
        return Swift.min(first, second)
    }
}
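To confirm the disambiguated version behaves as expected, here is a small usage sketch (the array's contents are irrelevant, since the method only compares its two arguments):

let numbers = [1, 2, 3]
let smaller = numbers.smallestInt(first: 10, second: 4)   // 4, via Swift.min
print(smaller)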
In Swift 4, the compiler has a better diagnostic for this error (though the fact that it's still an error is a bug IMO):
extension Array {
    func smallestInt(first: Int, second: Int) -> Int {
        // Use of 'min' refers to instance method 'min(by:)'
        // rather than global function 'min' in module 'Swift'
        // - Use 'Swift.' to reference the global function in module 'Swift'
        return min(first, second)
    }
}
Although what's interesting is that the compiler will now also warn on attempting to call a standard library method with the same name as a stdlib top-level function:
extension Array where Element : Comparable {
    func smallest() -> Element? {
        // Use of 'min' treated as a reference to instance method in protocol 'Sequence'
        // - Use 'self.' to silence this warning
        // - Use 'Swift.' to reference the global function
        return min()
    }
}
In this case, as the warning says, you can silence it by using an explicit self.:
extension Array where Element : Comparable {
    func smallest() -> Element? {
        return self.min()
    }
}
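A quick check of the explicit self.min() version (assuming a Comparable element type such as Int):

let values = [3, 1, 2]
print(values.smallest() as Any)    // Optional(1): forwards to Sequence's min()
print([Int]().smallest() as Any)   // nil: an empty array has no minimum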
Although what's really curious about this warning is that it doesn't appear to extend to functions defined outside the standard library:
func foo(_ a: Int) {}

struct Foo {
    func foo() {}

    func bar() {
        foo() // no warning...
    }
}

Using overridden math function

Trying to compile this code
pow(10, 2)

struct Test {
    var i: Int
    func pow(_ p: Int) -> Int { return pow(i, p) }
}

var s = Test(i: 10)
s.pow(2)
gives me a compiler error. Obviously the standard math.h function pow is shadowed by Swift's scoping rules. This rather smells like a compiler bug, since the math.h version of pow has a different signature. Is there any way to invoke it, though?
There are two problems: The first is how pow(i,p) is resolved.
As described in Swift 3.0: compiler error when calling global func min<T>(T,T) in Array or Dictionary extension, this
can be solved by prepending the module name to the function call.
The second problem is that there is no pow function taking
two integer arguments. There is
public func powf(_: Float, _: Float) -> Float
public func pow(_: Double, _: Double) -> Double
in the standard math library, and
public func pow(_ x: Decimal, _ y: Int) -> Decimal
in the Foundation library. So you have to choose which one to use,
for example:
struct Test {
    var i: Int
    func pow(_ p: Int) -> Int {
        return lrint(Darwin.pow(Double(i), Double(p)))
    }
}
which converts the arguments to Double and rounds the result
back to Int.
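As a quick sanity check (assuming an Apple platform where the Darwin module is available, as in the snippet above):

let t = Test(i: 10)
print(t.pow(2))   // 100
print(t.pow(0))   // 1, since pow(10.0, 0.0) == 1.0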
Alternatively, use iterated multiplication:
struct Test {
    var i: Int
    func pow(_ p: Int) -> Int {
        return (0..<p).reduce(1) { result, _ in result * i }
    }
}
I observed at https://codereview.stackexchange.com/a/142850/35991 that
this is faster for small exponents. Your mileage may vary.
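A brief check of the multiplication-based version (like the Double-based variant, it is only meant for small non-negative exponents; large values will overflow):

let t = Test(i: 3)
print(t.pow(4))   // 81 = 3 * 3 * 3 * 3
print(t.pow(0))   // 1: reducing over an empty range returns the initial value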

Why does the code produce the error "Type '(Int, Int)' does not conform to protocol 'IntegerLiteralConvertible'"?

With the code below, I get "Type '(Int, Int)' does not conform to protocol 'IntegerLiteralConvertible'" instead of the 'missing argument' error one would expect. What's IntegerLiteralConvertible, and why do you think the compiler produces this error instead?
I have looked at other SO posts regarding this error but have not gotten any insight from them.
func add(x:Int, y:Int) {
}
add(3)
My best guess is that it tries to convert the (3) tuple into an (Int, Int) tuple.
In fact, this is accepted by the compiler and works as expected:
func add(x: Int, y: Int) -> Int {
    return x + y
}

let tuple = (4, 7)
add(tuple)
In a playground, this outputs 11, which is the expected sum.
Note: the code above works if the func is global, with no named parameters. If it's an instance or class/static method, then the tuple must include parameter names:
class MyClass {
    class func add(#x: Int, y: Int) -> Int {
        return x + y
    }
}

let tuple = (x: 3, y: 7)
MyClass.add(tuple) // returns 10
As for IntegerLiteralConvertible, it's used to make a class or struct that adopts it initializable from an integer literal. Say you have a struct and you want to be able to create an instance by assigning a literal integer; you achieve it this way:
struct MyDataType: IntegerLiteralConvertible {
    var value: Int

    static func convertFromIntegerLiteral(value: IntegerLiteralType) -> MyDataType {
        return MyDataType(value: value)
    }

    init(value: Int) {
        self.value = value
    }
}
and then you can create an instance like this:
let x: MyDataType = 5
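For readers on current toolchains: IntegerLiteralConvertible was later renamed to ExpressibleByIntegerLiteral, and the static convertFromIntegerLiteral requirement was replaced by an initializer, so a rough modern equivalent of the struct above would be:

struct MyDataType: ExpressibleByIntegerLiteral {
    var value: Int

    // Required by ExpressibleByIntegerLiteral (Swift 3 and later).
    init(integerLiteral value: IntegerLiteralType) {
        self.value = value
    }
}

let y: MyDataType = 5   // y.value == 5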
It looks like you are trying to use currying — Swift has built-in support for this, but it is not automatic, so you have to be explicit about it when declaring your function:
func add(x: Int)(y: Int) -> Int {
    return x + y
}

println(add(3)) // (Function)
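Note that this dedicated curried declaration syntax was removed in Swift 3; on current compilers the rough equivalent is a function that explicitly returns a closure:

func add(x: Int) -> (Int) -> Int {
    return { y in x + y }
}

let addThree = add(x: 3)   // a value of type (Int) -> Int
print(addThree(4))         // 7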

Swift: why is 'subscript(var digitIndex: Int) -> Int' a valid function signature?

This piece of code comes from Swift documentation https://developer.apple.com/library/prerelease/mac/documentation/Swift/Conceptual/Swift_Programming_Language/Extensions.html
extension Int {
    subscript(var digitIndex: Int) -> Int {
        var decimalBase = 1
        while digitIndex > 0 {
            decimalBase *= 10
            --digitIndex
        }
        return (self / decimalBase) % 10
    }
}
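For context, the Swift book exercises this subscript like so (digit positions are counted from the right; note the snippet above needs a pre-Swift 3 compiler because of the var parameter and the -- operator):

746381295[0]   // returns 5
746381295[1]   // returns 9
746381295[8]   // returns 7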
Apparently var is a reserved word, so why is it legal to declare subscript(var digitIndex: Int) -> Int?
If I change the signature to subscript(#digitIndex: Int) -> Int, I get a compiler error.
My questions are:
1) Why is the signature valid?
2) Why does my change cause an error?
Declaring a function argument with var means that it can be modified, i.e. it is not a constant. In your case, without var, you had a constant argument but attempted to decrement it; hence the error.
Your two cases are:
func foo(x: Int) { /* x is a constant, like `let x: Int` */ }
func foo(var x: Int) { /* x is not a constant */ }
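As a side note, var parameters were removed in Swift 3, so on a current compiler the subscript from the question has to copy the argument into a mutable local instead; a sketch of the same logic:

extension Int {
    subscript(digitIndex: Int) -> Int {
        // Parameters are constants in Swift 3 and later, so count down a local copy.
        var remaining = digitIndex
        var decimalBase = 1
        while remaining > 0 {
            decimalBase *= 10
            remaining -= 1
        }
        return (self / decimalBase) % 10
    }
}

746381295[1]   // returns 9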