I know Swift has both reference types and value types. And I know Int is a value type. But how can I store a reference to an integer?
var x: Int = 1
var y: Int = x // I want y to reference x (not copy it)
y += 1
print(x) // prints 1, but I want 2
I tried using boxed types, and I tried using an array of Int, but neither works for holding a reference to an integer.
I guess I can write my own
class IntRef {
    var a: Int = 0
    init(value: Int) { a = value }
}

var x = IntRef(value: 3)
var y = x
y.a += 1
print(x.a) // prints 4, since x and y refer to the same instance
That seems a bit awkward.
Unfortunately, there is no reference type Integer or anything like that in Swift, so you have to make a box type yourself.
For example a generic one:
class Reference<T> {
    var value: T
    init(_ value: T) { self.value = value }
}
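A quick usage sketch showing the reference semantics:
let x = Reference(1)
let y = x // y refers to the same box as x
y.value += 1
print(x.value) // prints 2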
You can also use closures. I think these are better because they are more powerful and also more specific: they not only store a reference, they also indicate how the reference will be used. For example:
var x: Int = 1
let setx = { (a: Int) in x = a }
let getx = { x }
setx(getx() + 1)
print(x) // prints 2
I don't recommend actually defining getx/setx. Define a closure that does a specific task for your application.
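For example, a task-specific closure might look like this (score and addPoints are illustrative names, not from the question):
var score = 0
let addPoints = { (points: Int) in score += points } // captures score by reference
addPoints(10)
print(score) // prints 10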
I'm wondering if there's a way to specify a numeric type constrained to, say, a Range or ClosedRange of values.
Non-compiling examples:
typealias QuarterNumber = 1...4
typealias VolumeLevel = 0...11
typealias ObtuseAngle = 90.0<..<180.0
Now for the first example I might instead create something like:
enum QuarterNumber: Int {
    case first = 1
    case second = 2
    case third = 3
    case fourth = 4
}
However, that becomes unwieldy in the VolumeLevel case where I'd definitely want to just use the raw numbers. And in the ObtuseAngle case it's completely impractical to specify all the individual values it could take.
One way I might do this is with a wrapper type:
struct ObtuseAngle {
    var value: Double
}

extension ObtuseAngle: ExpressibleByFloatLiteral, ExpressibleByIntegerLiteral {
    init(floatLiteral v: FloatLiteralType) {
        guard 90.0 < v && v < 180.0 else {
            preconditionFailure("Invalid angle")
        }
        self.init(value: v)
    }

    init(integerLiteral v: IntegerLiteralType) {
        self.init(floatLiteral: Double(v))
    }
}
let x: ObtuseAngle = 115 // works
let y: ObtuseAngle = 45 // runtime crash
The downside is that now the actual value is buried in .value instead of being a more "primitive" type. I also lose all the mathematical operators I would have on the "base" type, but that makes some sense because e.g. two obtuse angles may not be obtuse when added, and multiplying them would not really result in an angle type, etc.
Note in this example it might also be appropriate to declare value with type Measurement<UnitAngle> and initialize it to Measurement(value: v, unit: UnitAngle.degrees).
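A minimal sketch of that variant, assuming Foundation's Measurement API (the ObtuseAngleMeasurement name is illustrative):
import Foundation

struct ObtuseAngleMeasurement {
    var value: Measurement<UnitAngle>

    init(degrees: Double) {
        precondition(90.0 < degrees && degrees < 180.0, "Invalid angle")
        self.value = Measurement(value: degrees, unit: UnitAngle.degrees)
    }
}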
I'm trying to figure out how to create a pointer to a pointer in Swift. Now, I know we don't exactly have pointers in Swift, but here is what I am trying to accomplish:
var foo = objA() // foo is a variable referencing an instance of objA
var bar = foo    // bar is a second variable referencing the instance above
foo = objA()     // foo is now a reference to a new instance of objA,
                 // but bar is still a reference to the old instance
I would like to have bar be a reference to the foo variable instead of it being a reference to the foo object. That way, if foo becomes a reference to a different object, bar goes along for the ride.
One way of having a secondary reference to a variable would be a computed variable:
class C {}
var a = C()
var b: C { return a } // b is a computed variable, returning the current value of a
b === a // true
a = C()
b === a // true, whereas "var b = a" would have made this false
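If writes through the secondary name should also flow back to a, the computed variable can be given a setter as well; a small sketch (b2 is an illustrative name):
var b2: C {
    get { return a }
    set { a = newValue }
}
b2 = C()
b2 === a // true, the assignment went through to a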
I believe this is what you want:
class O {
    let n: Int
    init(n: Int) { self.n = n }
}
var foo = O(n: 1)
var bar = withUnsafePointer(to: &foo) { $0 }
// Note: the pointer is only guaranteed to be valid inside the closure,
// so letting it escape like this is not officially supported.
print(bar.pointee.n) // prints 1
foo = O(n: 2)
print(bar.pointee.n) // prints 2
(For Swift < 3.0, replace pointee with memory and drop the to: label.)
I really don't know why you'd want that though.
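If the goal is just a second handle that follows reassignment, a safer sketch is a box class, much like the generic Reference type from the first question (Box, foo2, and bar2 are illustrative names; O is the class defined above):
final class Box<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

let foo2 = Box(O(n: 1))
let bar2 = foo2       // bar2 refers to the same box
print(bar2.value.n)   // prints 1
foo2.value = O(n: 2)  // "reassign" by replacing the boxed value
print(bar2.value.n)   // prints 2, bar2 follows along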
You can try to use UnsafeMutablePointer. See an example below:
let string = UnsafeMutablePointer<String>.allocate(capacity: 1)
string.initialize(to: "Hello Swift")
print(string.pointee)

let next = string
next.pointee = "Bye Bye Swift"
print(string.pointee) // prints "Bye Bye Swift"

string.deinitialize(count: 1)
string.deallocate()
(For Swift < 3.0, the API was alloc(1), initialize(_:), and memory.)
But it smells a little, and it is better to avoid techniques like this.
Currently I've got some Swift code like this:
class C {
    let type: Type;
    var num = 0;
    init() {
        self.type = Type({ (num: Int) -> Void in
            self.num = num;
        });
    }
}
The Swift compiler refuses to permit it, saying that I've referenced self.type before it's initialized, even though that's clearly completely untrue. Furthermore, I can't employ the workaround found in other questions/answers, because the type is not optional, and it's immutable, so it can't be initialized with nil pointlessly first.
How can I make the Swift compiler accept this perfectly valid code?
This has nothing to do with returning from the initializer early. The callback is executed asynchronously- it is stored and then used later.
I also have a few further lets that are initialized after this one. I would have to turn them all into mutable optionals, even though they're not optional and can't be mutated.
This works:
class C {
    var type: Type?;
    var num = 0;
    init() {
        self.type = Type({ (num: Int) -> Void in
            self.num = num;
        });
    }
}
I assume you knew that. But you want to know why your version isn't working.
Now for the tricky part: for the line
self.num = num;
to work, the compiler has to pass self into the closure. The closure could be, and probably is, executed inside the constructor of Type.
This is as if you had written
self.type = Type({ (self: C, num: Int) -> Void in
    self.num = num
});
which is syntactically wrong but explains what the compiler has to do to compile your code.
To pass this necessary instance of self to the constructor of Type, self has to be initialized. But self isn't initialized, because you are still in the constructor.
The compiler tells you which part of self is not initialized, when you try to pass self to the constructor of Type.
P.S.: Obviously, Type knows about num in your code.
If you want to use let in C instead of var, you could do...
class Type {
    let num: Int
    init() {
        num = 3
    }
}
class C {
    let type: Type;
    var num = 0;
    init() {
        self.type = Type();
        num = type.num
    }
}
or even
class C {
    let type: Type;
    var num: Int {
        return type.num
    }
    init() {
        self.type = Type();
    }
}
depending on whether you want num to change or not. Both examples compile without error.
First, it's important to explain why this is not perfectly valid code, and why it isn't clear at all that self.type is never used before it is initialized. Consider the following extension of your code:
struct A {
    init(_ f: (Int) -> Void) { f(1) }
}

class C {
    let type: A
    var num = 0 {
        didSet { print(type) }
    }
    init() {
        self.type = A({ (num: Int) -> Void in
            self.num = num
        })
    }
}
If you walk through the logic, you'll note that self.type is accessed via print before it has been initialized. Swift can't currently prove that this won't happen, and so doesn't allow it. (A theoretical Swift compiler might prove that it wouldn't happen for some particular cases, but for most non-trivial code it would likely bump into the halting problem. In any case, the current Swift compiler isn't powerful enough to make this proof, and it's a non-trivial proof to make.)
One solution, though somewhat unsatisfying, is to use implicitly unwrapped optionals:
private(set) var type: A! = nil
Except for the declaration, every other part of the code is the same. You don't have to treat it as optional. In practice, this just turns off the "used before initialization" checks for this variable. It also unfortunately makes it settable inside of the current file, but does make it immutable to everyone else.
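Applied to the code above, a minimal sketch (reusing the A defined earlier):
class C {
    private(set) var type: A! = nil
    var num = 0

    init() {
        // self is fully initialized here, because every stored
        // property has a default value, so capturing it is allowed
        self.type = A({ (num: Int) -> Void in
            self.num = num
        })
    }
}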
This is the technique I've most often used, though often I try to rework the system so that it doesn't require this kind of closure (not always possible, but I often rack my brain to try). It's not beautiful, but it is consistent and bounds the ugly.
Another technique that can work in some cases is laziness:
class C {
    lazy var type: A = {
        A({ (num: Int) -> Void in self.num = num })
    }()
    var num = 0
    init() {}
}
Sometimes that works, sometimes it doesn't. In your case it might. When it does work, it's pretty nice because it makes the property truly immutable, rather than just publicly immutable, and of course because it doesn't require !.
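A usage sketch, assuming the A from earlier, whose initializer immediately calls the callback with 1:
let c = C()
print(c.num) // prints 0; type has not been created yet
_ = c.type   // first access runs the lazy initializer
print(c.num) // prints 1; A's init invoked the closure with 1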
Interesting.
It looks like the error goes away if you avoid referencing self inside the closure.
If the callback is synchronous, you can change your code as follows:
class C {
    let type: Type
    var num = 0
    init() {
        var numTemp = 0 // create a temporary local var
        let initialType = Type({ (num: Int) -> () in
            numTemp = num // avoid self in the closure
        })
        self.type = initialType
        self.num = numTemp
    }
}
Important: this will NOT work if the closure is async.
Tested with Xcode (Playground) 6.4 + Swift 1.2
Hope this helps.
As appzYourLife said, a temporary variable for num will suffice:
class Type {
    var y: (Int) -> Void
    init(y2: @escaping (Int) -> Void) { // @escaping is required in Swift 3+ because y2 is stored
        self.y = y2
    }
}

class C {
    let type: Type
    var num: Int = 0
    init() {
        var num2 = 0
        self.type = Type(y2: { (num3: Int) -> () in
            num2 = num3
        })
        self.num = num2
    }
}
However, you do not need a temporary variable for type; that error message is misleading.
I want to create a function which has a number parameter that should be between 0 and 100 (a percentage).
I thought that the best way to enforce this would be by creating a wrapper type using the FloatingPointType protocol, but I am getting a compilation error:
Protocol 'FloatingPointType' can only be used as a generic constraint because it has Self or associated type requirements
struct Percent {
    init(val: FloatingPointType) {
        // enforce value is between 0..100
    }
}

func hideView(percent: Percent) {
    // percent is 0..100 at this point
    // ... do some work here
}
What would be the correct way to enforce this condition at compile time?
Update: As of Swift 5.1 this can be achieved more easily with “property wrappers”, see for example “Implementing a value clamping property wrapper” on NSHipster.
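For illustration, here is a minimal sketch of such a clamping wrapper (the Clamped and Settings names and the API are illustrative, not taken from the article):
@propertyWrapper
struct Clamped<Value: Comparable> {
    private var stored: Value
    let range: ClosedRange<Value>

    init(wrappedValue: Value, _ range: ClosedRange<Value>) {
        self.range = range
        self.stored = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Value {
        get { return stored }
        set { stored = min(max(newValue, range.lowerBound), range.upperBound) }
    }
}

struct Settings {
    @Clamped(0...100) var percent: Double = 50.0
}

var s = Settings()
s.percent = 150
print(s.percent) // 100.0, clamped on assignment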
The easiest way would be to define a type that holds a Double (or Float or Int) in the required range:
struct P {
    let val: Double
    init(val: Double) {
        // ...
    }
}
But if you want to handle different floating point types, then you have to define a generic type:
struct Percent<T: FloatingPointType> {
    let val: T
    init(val: T) {
        self.val = val
    }
}
To compare the values you need to require Comparable as well (Equatable alone would only provide ==, not < and >):
struct Percent<T: FloatingPointType where T: Comparable> {
    let val: T
    init(val: T) {
        if val < T(0) {
            self.val = T(0)
        } else if val > T(100) {
            self.val = T(100)
        } else {
            self.val = val
        }
    }
}
Example:
let p = Percent(val: 123.4)
println(p.val) // 100.0
Note that this requires that hideView() is generic as well:
func hideView<T>(percent: Percent<T>) {
    // percent.val has the type `T` and is in the range
    // T(0) ... T(100)
}
The language feature you are looking for is called Partial Functions. A partial function is a function that is not defined for all possible arguments of the specified type. For instance, they are available in Haskell and Scala, but not in Swift.
So the best you can do is to check at runtime if the provided value lies within the valid range and act accordingly (e.g. raise an exception or return an error).
It sounds like you're trying to enforce, at compile time, that you can't pass a value outside the range 0.0 to 100.0 to a function. You can't do that.
What you can do is write your function to throw an error if it is passed a value that's out of range, or to display an error to the user and return early.
Adding to Martin's answer, updated for Swift 5:
struct Percentage<T: FloatingPoint> {
    let value: T
    init(value: T) {
        self.value = min(max(value, T(0)), T(100))
    }
}
At the usage site, you can specify the generic type like this:
func updateCircleWith(percentage: Percentage<CGFloat>) {
    // percentage.value will be between 0 and 100 here,
    // that is, between CGFloat(0) and CGFloat(100)
}
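A usage sketch (CGFloat requires importing CoreGraphics, or Foundation on Linux):
import CoreGraphics

let p = Percentage<CGFloat>(value: 123.4)
print(p.value) // 100.0, clamped to the upper bound
updateCircleWith(percentage: p)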
As an exercise, I'm trying to extend Array in Swift to add a sum() member function. This should be type safe in a way that I want a call to sum() to compile only if the array holds elements that can be added up.
I tried a few variants of something like this:
extension Array {
    func sum<U: _IntegerArithmeticType where U == T>() -> Int {
        var acc = 0
        for elem in self {
            acc += elem as Int
        }
        return acc
    }
}
The idea was to say, “OK, this is a generic function, the generic type must be something like an Int, and must also be the same as T, the type of the elements of the array”. But the compiler complains: “Same-type requirement makes generic parameters U and T equivalent”. That's right, and they should be, with the additional constraint T : _IntegerArithmeticType.
Why isn't the compiler letting me do this? How can I do it?
(I know that I should later fix how things are added up and what the return type exactly is, but I'm stuck at the type constraint for now.)
As per Martin R's comment, this is not currently possible. The thing I'm tempted to use in this particular situation would be an explicit passing of a T -> Int? conversion function:
extension Array {
    func sum(toInt: T -> Int?) -> Int {
        var acc = 0
        for elem in self {
            if let i = toInt(elem) {
                acc += i
            }
        }
        return acc
    }
}
Then I can write stuff like this:
func itself<T>(t: T) -> T {
    return t
}
let ss = ["1", "2", "3", "4", "five"].sum { $0.toInt() }
let si = [1, 2, 3, 4].sum(itself)
An explicit function has to be passed, though. The (itself) part can of course be replaced by { $0 }. (Others have called the itself function identity.)
Note that an A -> B function can be passed when A -> B? is needed.