I was trying to write an Int extension to clamp an Int to a specific range, like this:
extension Int {
    func clamp(left: Int, right: Int) -> Int {
        return min(max(self, left), right)
    }
}
I was getting a compiler error and after a while I realised that min is being interpreted here as Int.min, which is a constant for the lowest Int.
I can reimplement this avoiding min/max but I'm curious: is there a way I can reference those from an Int extension?
You can prepend the module name, in this case Swift:
extension Int {
    func clamp(left: Int, right: Int) -> Int {
        return Swift.min(Swift.max(self, left), right)
    }
}
And just for fun: you get the same result with
extension Int {
    func clamp(left: Int, right: Int) -> Int {
        return (left ... right).clamp(self ... self).start
    }
}
using the clamp() method from ClosedInterval.
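For illustration, a few hypothetical calls to the extension above (written with both argument labels, as current Swift requires at the call site):

42.clamp(left: 0, right: 10)   // 10
(-3).clamp(left: 0, right: 10) // 0
7.clamp(left: 0, right: 10)    // 7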
You could create a function myMin<T>(a: T, b: T) that calls min and use that in your extension.
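A minimal sketch of that approach (hypothetical helper names, with a Comparable constraint so min/max are available):

// Free functions at file scope: here min/max resolve to the standard
// library versions, so there is no collision with Int.min/Int.max.
func myMin<T: Comparable>(_ a: T, _ b: T) -> T { return min(a, b) }
func myMax<T: Comparable>(_ a: T, _ b: T) -> T { return max(a, b) }

extension Int {
    func clamp(left: Int, right: Int) -> Int {
        return myMin(myMax(self, left), right)
    }
}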
Trying to compile this code
pow(10, 2)

struct Test {
    var i: Int
    func pow(_ p: Int) -> Int { return pow(i, p) }
}

var s = Test(i: 10)
s.pow(2)
gives me a compiler error. Obviously the standard math.h function pow is shadowed by Swift's scoping rules. This rather smells like a compiler bug, since the math.h version of pow has a different signature. Is there any way to invoke it, though?
There are two problems: The first is how pow(i,p) is resolved.
As described in Swift 3.0: compiler error when calling global func min<T>(T,T) in Array or Dictionary extension, this
can be solved by prepending the module name to the function call.
The second problem is that there is no pow function taking
two integer arguments. There is
public func powf(_: Float, _: Float) -> Float
public func pow(_: Double, _: Double) -> Double
in the standard math library, and
public func pow(_ x: Decimal, _ y: Int) -> Decimal
in the Foundation library. So you have to choose which one to use,
for example:
import Darwin   // provides pow(Double, Double) and lrint

struct Test {
    var i: Int
    func pow(_ p: Int) -> Int {
        return lrint(Darwin.pow(Double(i), Double(p)))
    }
}
which converts the arguments to Double and rounds the result
back to Int.
Alternatively, use iterated multiplication:
struct Test {
    var i: Int
    func pow(_ p: Int) -> Int {
        // Multiply i into the accumulator p times.
        return (0..<p).reduce(1) { result, _ in result * i }
    }
}
I observed at https://codereview.stackexchange.com/a/142850/35991 that
this is faster for small exponents. Your mileage may vary.
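Either variant can be exercised the same way; a quick sketch using the Test type above:

let t = Test(i: 2)
t.pow(10)   // 1024
t.pow(0)    // 1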
I'm trying to write an extension for the Swift Int type to save nested min()/max() functions. It looks like this:
extension Int {
    func bound(minVal: Int, maxVal: Int) -> Int {
        let highBounded = min(self, maxVal)
        return max(minVal, highBounded)
    }
}
However, I have a compilation error when assigning/computing highBounded:
IntExtensions.swift:13:25: 'Int' does not have a member named 'min'
Why aren't the functions defined by the standard library properly located?
It looks like the compiler is trying to find members named min and max on Int, since you are extending Int. You can get around this and use the global min and max functions by qualifying them with the Swift module namespace.
extension Int {
    func bound(minVal: Int, maxVal: Int) -> Int {
        let highBounded = Swift.min(self, maxVal)
        return Swift.max(minVal, highBounded)
    }
}
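A few example calls (labels written out, as Swift 3 and later require):

15.bound(minVal: 0, maxVal: 10)   // 10
(-4).bound(minVal: 0, maxVal: 10) // 0
5.bound(minVal: 0, maxVal: 10)    // 5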
With the code below, I get "Type '(Int, Int)' does not conform to protocol 'IntegerLiteralConvertible'" instead of the missing-argument error one would expect. What is IntegerLiteralConvertible, and why do you think the compiler produces this error for the code below?
I have looked at other SO posts regarding this error but have not gotten any insight from them.
func add(x:Int, y:Int) {
}
add(3)
My best guess is that it tries to convert the (3) tuple into an (Int, Int) tuple.
In fact, this is accepted by the compiler and works as expected:
func add(x: Int, y: Int) -> Int {
    return x + y
}

let tuple = (4, 7)
add(tuple)
In a playground this outputs 11, which is the expected sum.
Note: the code above works if the func is global, with no named parameters. If it's an instance or class/static method, then the tuple must include parameter names:
class MyClass {
    class func add(#x: Int, y: Int) -> Int {
        return x + y
    }
}
let tuple = (x: 3, y: 7)
MyClass.add(tuple) // returns 10
As for IntegerLiteralConvertible, it's used to make a class or struct that adopts it initializable from an integer literal. Let's say you have a struct and you want to be able to instantiate it by assigning an integer literal; you achieve it this way:
struct MyDataType: IntegerLiteralConvertible {
    var value: Int

    static func convertFromIntegerLiteral(value: IntegerLiteralType) -> MyDataType {
        return MyDataType(value: value)
    }

    init(value: Int) {
        self.value = value
    }
}
and then you can create an instance like this:
let x: MyDataType = 5
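As an aside: in Swift 3 and later the protocol was renamed ExpressibleByIntegerLiteral and uses an initializer requirement instead of the static conversion function. A minimal sketch of the equivalent:

struct MyDataType: ExpressibleByIntegerLiteral {
    var value: Int

    // Called by the compiler when an integer literal is assigned.
    init(integerLiteral value: IntegerLiteralType) {
        self.value = value
    }
}

let y: MyDataType = 5   // y.value == 5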
It looks like you are trying to use currying — Swift has built-in support for this, but it is not automatic, so you have to be explicit about it when declaring your function:
func add(x: Int)(y: Int) -> Int {
    return x + y
}
println(add(3)) // (Function)
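The partially applied function can then be called with its remaining argument; in the Swift 1.x curried syntax shown, the second parameter keeps its external name (a quick sketch):

let addThree = add(3)     // a function that still expects y
println(addThree(y: 4))   // 7
println(add(3)(y: 2))     // 5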
This piece of code comes from Swift documentation https://developer.apple.com/library/prerelease/mac/documentation/Swift/Conceptual/Swift_Programming_Language/Extensions.html
extension Int {
    subscript(var digitIndex: Int) -> Int {
        var decimalBase = 1
        while digitIndex > 0 {
            decimalBase *= 10
            --digitIndex
        }
        return (self / decimalBase) % 10
    }
}
Apparently var is a reserved word, so why is it legal to declare subscript(var digitIndex: Int) -> Int?
If I change the signature to subscript(#digitIndex: Int) -> Int, I get a compiler error complaining that digitIndex cannot be decremented.
My questions are:
1) Why is the original signature valid?
2) Why does my change cause a compiler error?
Declaring a function argument with var means that it can be modified inside the function body; it is not a constant. In your case, without var, you had a constant argument but attempted to decrement it, hence the error.
Your two cases are:
func foo(x: Int) { /* x is a constant, like `let x: Int` */ }
func foo(var x: Int) { /* x is not a constant */ }
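For reference, var parameters were removed from the language in Swift 3, so the documentation example would now be written with a local mutable copy instead; a sketch:

extension Int {
    subscript(digitIndex: Int) -> Int {
        var index = digitIndex   // mutable copy of the (constant) parameter
        var decimalBase = 1
        while index > 0 {
            decimalBase *= 10
            index -= 1
        }
        return (self / decimalBase) % 10
    }
}

746381295[2]   // 2 (the hundreds digit)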