In a Swift app I have a function that only accepts integers between +2 and positive infinity. Is there any way to enforce this at compile time?
Updated with a small code sample:
To calculate a Fibonacci sequence we need at least two numbers to start with; anything else is an error. Here I use guard and a failable initializer to verify this at runtime.
struct FibonacciSeed {
    var magnitude = 2
    init() { }
    init?(magnitude: Int) {
        guard magnitude > 1 else { return nil }
        self.magnitude = magnitude
    }
    var seed: [Int] {
        // return valid seed ...
    }
}
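For illustration (not part of the original sample, and assuming the elided seed body is filled in so the struct compiles), the failable path then behaves like this:

let good = FibonacciSeed(magnitude: 5)  // Optional(FibonacciSeed)
let bad = FibonacciSeed(magnitude: 1)   // nil, the guard rejects it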
I was curious if there were some fancy way to enlist the help of the compiler to enforce this at compile time, the same way it won't let me compile:
var a: UInt = -8
There is no such feature in Swift. The compiler cannot know at compile time what kind of values the application will pass to functions at runtime. As a runtime feature it would just make the Swift runtime libraries bigger, which would make applications bigger even if they don't use the feature. I'm quite sure Apple wants to keep the libraries as small as possible.
Related
In my app, I am using Ints, Doubles, Floats, and CGFloats to represent a number of different values. According to my app's semantics, these values may become "invalid", a state which I represent using a reserved value, e.g. -1. The simplest approach to make this usable in code would be this:
anIntVariable = -1
aFloatVariable = -1
aDoubleVariable = -1.0
...
To get away from this convention driven approach and increase readability and adaptability I defined a number of extensions:
extension Int {
    static var invalid = -1
}
extension Float {
    static var invalid: Float = -1.0  // explicit type needed; a bare -1.0 literal infers Double
}
extension Double {
    static var invalid: Double = -1.0
}
...
So the above code would now read:
anIntVariable = .invalid
aFloatVariable = .invalid
aDoubleVariable = .invalid
...
It does work. However, I'm not really happy with this approach. Does any of you have an idea for a better way of expressing this?
To add some complexity, in addition to simple types like Int, Float, or Double, I also use Measurement based types like this:
let length = Measurement(value: .invalid, unit: UnitLength.baseUnit())
Extra bonus points if you find a way to include "invalid" measurements in your solution as well...
Thanks for helping!
Some Additional Thoughts
I know I could use optionals with nil meaning "invalid". In this case, however, you'd have the additional overhead of conditional unwrapping... Also, using nil for "invalid" is yet another convention.
It isn't better or worse, just different. Apple uses "invalid" values in its own APIs, e.g. the NSTableView method row(for:) returns -1 if the view is not in the table view. I agree, however, that this very method perfectly illustrates that returning an optional would make a lot of sense...
I'd use optionals for that.
If you want lack of value and invalid value to be different states in your app, I'd suggest creating a wrapper for your values:
enum Validatable<T> {
    case valid(T)
    case invalid
}
And use it like that:
let validValue: Validatable<Int> = .valid(5)
let invalidValue: Validatable<Int> = .invalid

var validOptionalDouble: Validatable<Double?> = .valid(nil)
validOptionalDouble = .valid(5.0)

let measurement: Validatable<Measurement<UnitLength>> = .invalid
etc.
You can then check for a value by switching on the enum to access the associated value, like this:
switch validatableValue {
case .valid(let value):
    // do something with value
case .invalid:
    // handle invalid state
}
or
if case .valid(let value) = validatableValue {
    // handle valid state
}
etc
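If you end up unwrapping these a lot, one possible convenience (a sketch built on the enum above, not something the standard library provides) is to bridge Validatable back to an optional, which also covers the "invalid measurement" bonus from the question:

import Foundation

extension Validatable {
    // nil when .invalid, the wrapped value when .valid;
    // lets you reuse optional chaining and ?? defaults.
    var value: T? {
        if case .valid(let v) = self { return v }
        return nil
    }
}

let length: Validatable<Measurement<UnitLength>> = .valid(Measurement(value: 4.2, unit: .meters))
let meters = length.value?.converted(to: .meters).value ?? 0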
I keep bumping into this problem repeatedly. In real life I see sets of numbers that represent a particular quality, but I have difficulty expressing them as distinct types in Swift.
For example, the percent type. Let's say I want a Percent type holding only integers. This percent could never go over 100 or below zero.
I could express that in pure C as an enum, with members ranging from 0 to 100. However, using a Swift enum with an underlying raw value type doesn't seem to me like the correct approach. Or is it?
Let's pick another one: the Bribor interbank interest rate. I know it will always be in a range between 0 and 20 percent, but the number itself will be a decimal with two decimal places.
What's the correct way to deal with this problem in Swift? Generics perhaps?
As Michael says in the comment, probably something like this:
struct IntPercent {
    let value: Int8
    init?(_ v: Int) {
        guard v >= 0 && v <= 100 else { return nil }
        value = Int8(v)
    }
}
(Note: use a struct, not a class for a base value like that)
If you do that a lot, you can improve that a little using protocols, like so:
protocol RestrictedValue {
    associatedtype T: Comparable
    static var range: (T, T) { get }
    var value: T { get set }
    init() // hack to make it work
}

extension RestrictedValue {
    init?(v: T) {
        self.init()
        guard Self.range.0 <= v && Self.range.1 >= v else { return nil }
        value = v
    }
}
struct IntPercent: RestrictedValue {
    static var range = (0, 100)
    var value: Int = 0
    init() {}
}
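Usage then looks like this (a quick sketch; the v: label comes from the failable initializer in the protocol extension above):

let half = IntPercent(v: 50)    // a valid instance
let bogus = IntPercent(v: 140)  // nil, out of range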
I don't think you can use Generics to limit base type values.
But I bet there is an even better solution; this one is definitely not awesome :-)
Constraining the value of an object is not the same as a constrained type. Constraining the values of numbers doesn't really make sense, because the things you are talking about are just numbers: there is no such thing as a percent-number or a Bribor-interbank-interest-rate-number. If you want to constrain their value, you do it wherever you get or use the numbers. It doesn't make sense to define a new type simply to constrain the values of an existing type.
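On that view, the check lives at the point of use rather than in a type, along these lines (a sketch; the function name is hypothetical):

func setOpacity(percent: Int) {
    // trap early if a caller hands us a nonsensical value
    precondition(percent >= 0 && percent <= 100, "percent out of range")
    // ... use the now-known-good value
}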
I have researched and looked into many different random functions, but the downside is that they each only work for one data type. I want one function that works for Double, Float, Int, CGFloat, etc. I know I could just convert between data types, but I would like to know if there is a way to do it using generics. Does someone know a way to make a generic random function in Swift? Thanks.
In theory you could port code such as this answer (which is not necessarily a great solution, depending on your needs, since UInt32.max is relatively small)
extension FloatingPointType {
    static func rand() -> Self {
        let num = Self(arc4random())
        let denom = Self(UInt32.max)
        // this line won’t compile:
        return num / denom
    }
}
// then you could do
let d = Double.rand()
let f = Float.rand()
let c = CGFloat.rand()
Except… FloatingPointType doesn’t conform to a protocol that guarantees operators like /, + etc (unlike IntegerType which conforms to _IntegerArithmeticType).
You could do this manually though:
protocol FloatingPointArithmeticType: FloatingPointType {
    func /(lhs: Self, rhs: Self) -> Self
    // etc
}

extension Double: FloatingPointArithmeticType { }
extension Float: FloatingPointArithmeticType { }
extension CGFloat: FloatingPointArithmeticType { }

extension FloatingPointArithmeticType {
    static func rand() -> Self {
        // same as before
    }
}
// so these now work
let d = Double.rand() // etc
but this probably isn’t quite as out-of-the-box as you were hoping (and no doubt contains some subtle invalid assumption about floating-point numbers on my part).
As for something that works across both integers and floats, the Swift team have mentioned on the mailing lists that the lack of protocols spanning both is a conscious decision, since it's rarely correct to write the same code for both; that's certainly the case for generating random numbers, which need two very different approaches.
Before you get too far into this, think about what you want the behavior of a type-agnostic random function to be, and whether that's something that you want. It sounds like you're proposing something like this:
// signature only
func random<T>() -> T

// example call sites, with specialization inferred from the declared result type
let randInt: Int = random()
let randFloat: Float = random()
let randDouble: Double = random()
let randInt64: Int64 = random()
(Note this syntax is sort of fake: without a parameter of type T, the implementation of random<T>() can't determine which type to return.)
What do you expect the possible values in each of these to be? Is randInt always a value between zero and Int.max? (Or maybe between zero and UInt32.max?) Is randFloat always between 0.0 and 1.0? Should randDouble actually have a larger count of possible values than randFloat (per the increased resolution between 0.0 and 1.0 of the Double type)? How do you account for the fact that Int is actually Int32 on 32-bit systems and Int64 on 64-bit hardware?
Are you sure it makes sense to have two (or more) calls that look identical but return values in different ranges?
Second, do you really want "arbitrarily random" random number generation everywhere in your app/game? Most use of RNGs is in game design, where typically there are a couple of important things you want in your RNG before you get your product past the prototyping stage:
Independence: I've seen games where you could learn to predict the next "random" enemy encounter based on recent "random" NPC chitchat/emotes. If you're using random elements in multiple gameplay systems, you don't want those systems drawing from the same sequence, or players will be able to predict one from the other.
Determinism: If you want to be able to reproduce a sequence of game events — either for testing/debugging or for consistent results between clients in a networked game — you don't want to be using a random function where you can't control that sequence. arc4random doesn't let you control the initial seed, and you have no control over the sequence because you don't know what other code in your process (library code, or just other code of your own that you forgot about) is also pulling numbers from the generator.
(If you're not making a game... much of this still applies, though it may be less important. You still don't want to be re-running your test case until the heat death of the universe trying to randomly find the same bug that one of your users reported.)
In iOS 9 / OS X 10.11 / tvOS, GameplayKit provides a suite of randomization tools to address these issues.
import GameplayKit

let randA = GKARC4RandomSource()
let someNumbers = (0..<1000).map { _ in randA.nextInt() }

let randB = GKARC4RandomSource()
let someMoreNumbers = (0..<1000).map { _ in randB.nextInt() }

let randC = GKARC4RandomSource(seed: randA.seed)
let evenMoreNumbers = (0..<1000).map { _ in randC.nextInt() }
Here, someMoreNumbers is independent of someNumbers, no matter what happens in the generation of someNumbers or what other randomization activity happens on the system. And evenMoreNumbers, because randC reuses randA's seed, is the exact same sequence of numbers as someNumbers.
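If you'd rather pick the seed yourself instead of capturing randA.seed, GKMersenneTwisterRandomSource accepts a UInt64 seed directly (still with GameplayKit imported); for example:

let die = GKMersenneTwisterRandomSource(seed: 42)
let roll = die.nextIntWithUpperBound(6) + 1  // a deterministic d6: same seed, same rolls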
Okay, so the GameplayKit syntax isn't quite what you want? Well, some of that is a natural consequence of having to manage RNGs as objects so that you can keep them independent and deterministic. But if you really want to have a super-simple random() call that you can slot in wherever needed, regardless of return type, and be independent and deterministic...
Here's one possible recipe for that. It implements random() as a static function on a type extension, so that you can use type-inference shorthand to write it; e.g.:
// func somethingThatTakesAnInt(a: Int, andAFloat b: Float)
somethingThatTakesAnInt(.random(), andAFloat: .random())
The first parameter automatically calls Int.random() and the second calls Float.random(). (This is the same mechanism that lets you use shorthand for referring to enum cases, or .max instead of Int.max, etc.)
And it makes the type extensions private, with the idea that different source files will want independent RNGs.
// EnemyAI.swift
import GameplayKit

private extension Int {
    static func random() -> Int {
        return EnemyAI.die.nextInt()
    }
}

class EnemyAI: NSObject {
    static let die = GKARC4RandomSource()
    // ...
}

// ProceduralWorld.swift
import GameplayKit

private extension Int {
    static func random() -> Int {
        return ProceduralWorld.die.nextInt()
    }
}

class ProceduralWorld: NSObject {
    static let die = GKARC4RandomSource()
    // ...
}
Repeat with extensions for more types as desired.
If you add some breakpoints or logging to the different random() functions you'll see that the two implementations of Int.random() are specific to the file they're in.
Anyway, that's a lot of boilerplate, but perhaps it's good food for thought...
Personally, I'd probably write individual implementations for each thing you wanted. There just aren't that many, and it's a lot safer and more flexible. But… sure, you can do this. Just create a random bit pattern and say "that's a number."
import Darwin  // for arc4random_buf

func randomValueOfType<T>(type: T.Type) -> T {
    let size = sizeof(type)
    let bits = UnsafeMutablePointer<T>.alloc(1)
    defer { bits.dealloc(1) }  // don't leak the buffer
    arc4random_buf(bits, size)
    return bits.memory
}
(This is technically "that's a something" but if you pass it something other than number-like types, you'll probably crash because most random bit patterns aren't a legal object.)
I believe every possible bit pattern will generate a legal IEEE 754 number, but "legal" may be more complex than you're thinking. A "totally random" float would rightly include NaN (not-a-number), which will show up reasonably often in your results. There are some other special cases like the infinities and negative zero, but in practice those should almost never occur: the probability of any single bit pattern showing up in a random choice of 32 bits is effectively zero. There are lots of NaN bit patterns, though, so NaN shows up a lot.
And that's the problem with this whole approach. If your random generator can accept that NaN shows up, then it's probably testing real floating point. But if you're testing real floating point, you really want to be checking the edge cases like infinity and negative zero. But if you don't want to include NaN, then you don't really mean "a random Float" you mean "a random Real number that can be expressed as a Float." There's no type for that, so you would need to write specific logic to handle it, and if you're doing that, you might as well write a specialized random generator for each type.
But this function is probably still a useful foundation for building that. You could just generate values until one is a legal number (NaN doesn't show up that often, so you'll almost certainly get a legal value in one or two tries).
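A minimal sketch of that rejection loop, built on the randomValueOfType function above (the helper name is mine):

func randomFiniteDouble() -> Double {
    var d: Double
    repeat {
        d = randomValueOfType(Double.self)  // reroll NaN and the infinities
    } while !d.isFinite
    return d
}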
This kind of emphasizes the point Airspeed Velocity made about why there's no generic "number" protocol. You usually can't just treat floating point numbers like integers. They just work differently, and you very often need to think about that fact.
I will first explain what I'm trying to do and how I got to where I got stuck before getting to the question.
As a learning exercise for myself, I took some problems that I had already solved in Objective-C to see how I can solve them differently with Swift. The specific case that I got stuck on is a small piece that captures a value before and after it changes and interpolates between the two to create keyframes for an animation.
For this I had an object Capture with properties for the object, the key path, and two id properties for the values before and after. Later, when interpolating the captured values, I made sure that they could be interpolated by wrapping each of them in a Value class that used a class cluster to return an appropriate class depending on the type of value it wrapped, or nil for types that weren't supported.
This works, and I am able to make it work in Swift as well following the same pattern, but it doesn't feel Swift-like.
What worked
Instead of wrapping the captured values as a way of enabling interpolation, I created a Mixable protocol that the types could conform to and used a protocol extension for when the type supported the necessary basic arithmetic:
protocol SimpleArithmeticType {
    func +(lhs: Self, rhs: Self) -> Self
    func *(lhs: Self, amount: Double) -> Self
}

protocol Mixable {
    func mix(with other: Self, by amount: Double) -> Self
}

extension Mixable where Self: SimpleArithmeticType {
    func mix(with other: Self, by amount: Double) -> Self {
        return self * (1.0 - amount) + other * amount
    }
}
This part worked really well and enforced homogeneous mixing (that a type could only be mixed with its own type), which wasn't enforced in the Objective-C implementation.
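For concreteness, making an existing numeric type mixable only takes the conformance declarations, since the built-in + and * operators already match the requirements; a sketch with Double:

extension Double: SimpleArithmeticType {}
extension Double: Mixable {}

let mid = 0.0.mix(with: 10.0, by: 0.5)  // 5.0, halfway between the two values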
Where I got stuck
The next logical step, and this is where I got stuck, seemed to be to make each Capture instance (now a struct) hold two variables of the same mixable type instead of two AnyObject. I also changed the initializer argument from being an object and a key path to being a closure that returns an object, () -> T:
struct Capture<T: Mixable> {
    typealias Evaluation = () -> T
    let eval: Evaluation
    let before: T
    var after: T {
        return eval()
    }
    init(eval: Evaluation) {
        self.eval = eval
        self.before = eval()
    }
}
This works when the type can be inferred, for example:
let captureInt = Capture {
    return 3.0
}
// > Capture<Double>
but not with key-value coding, which returns AnyObject:
let captureAnyObject = Capture {
    return myObject.valueForKeyPath("opacity")!
}
error: cannot invoke initializer for type 'Capture' with an argument list of type '(() -> _)'
AnyObject does not conform to the Mixable protocol, so I can understand why this doesn't work. But I can check what type the object really is, and since I'm only covering a handful of mixable types, I thought I could cover all the cases and return the correct type of Capture. To see if this could even work, I made an even simpler example.
A simpler example
struct Foo<T> {
    let x: T
    init(eval: () -> T) {
        x = eval()
    }
}
which works when type inference is guaranteed:
let fooInt = Foo {
    return 3
}
// > Foo<Int>

let fooDouble = Foo {
    return 3.0
}
// > Foo<Double>
But not when the closure can return different types:
let condition = true
let foo = Foo {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: cannot invoke initializer for type 'Foo' with an argument list of type '(() -> _)'
I'm not even able to define such a closure on its own:
let condition = true // as simple as it could be
let evaluation = {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: unable to infer closure type in the current context
My Question
Is this something that can be done at all? Can a condition be used to determine the type of a generic? Or is there another way to hold two variables of the same type, where the type was decided based on a condition?
Edit
What I really want is to:
1. capture the values before and after a change and save the pair (old + new) for later (a heterogeneous collection of homogeneous pairs);
2. go through all the collected values and get rid of the ones that can't be interpolated (unless this step could be integrated with the collection step);
3. interpolate each homogeneous pair individually (mixing old + new).
But it seems like this direction is a dead end when it comes to solving that problem. I'll have to take a couple of steps back and try a different approach (and probably ask a different question if I get stuck again).
As discussed on Twitter, the type must be known at compile time. Nevertheless, for the simple example at the end of the question you could just explicitly type
let evaluation: () -> Double = { ... }
and it would work.
So in the case of Capture and valueForKeyPath: IMHO you should cast the value (either conditionally or with a forced cast) to the Mixable type you expect it to be, and it should work fine. After all, I'm not sure valueForKeyPath: is supposed to return different types depending on a condition.
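Concretely, something like this (a sketch; it assumes the key path really holds a Double, and that Double has been made Mixable as earlier):

let captureOpacity = Capture {
    return myObject.valueForKeyPath("opacity") as! Double
}
// > Capture<Double>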
What is the exact case where you would like to return 2 totally different types (that can't be implicitly casted as in the simple case of Int and Double above) in the same evaluation closure?
In my full example I also have cases for CGPoint, CGSize, CGRect, and CATransform3D.
The limitations are just as you have stated, because of Swift's strict typing. All types must be definitely known at compile time, and each thing can be of only one type, even a generic (it is resolved by the way it is called, at compile time). Thus, the only thing you can do is turn your type into an umbrella type that is much more like Objective-C itself:
let condition = true
let evaluation = {
    () -> NSObject in // *
    if condition {
        return 3
    } else {
        return NSValue(CGPoint: CGPointMake(0, 1))
    }
}
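You then sort the cases back out at runtime; a sketch:

let result = evaluation()
if let number = result as? NSNumber {
    // the Int case arrives bridged as an NSNumber
} else if let value = result as? NSValue {
    // the CGPoint case: unpack with value.CGPointValue()
}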
I am looking for a proper way to handle an invalid argument during initialization.
I am unsure how to do it in Swift, as an init doesn't have a return type. How can I tell whoever is trying to initialize this class that they are doing something wrong?
init(timeInterval: Int) {
    if timeInterval > 0 {
        self.timeInterval = timeInterval
    } else {
        // ???? (return false?)
    }
}
Thank you!
Use a failable initializer. Such an initializer looks very similar to a regular designated initializer, but has a '?' character right after init and is allowed to return nil. A failable initializer creates an optional value.
struct Animal {
    let species: String
    init?(species: String) {
        if species.isEmpty { return nil }
        self.species = species
    }
}
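Using it, the failure is visible at the call site:

let someCreature = Animal(species: "Giraffe")  // an Optional(Animal)
let anonymousCreature = Animal(species: "")    // nil: an empty species fails the check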
See Apple's documentation on failable initializers for more detail.
In Swift, you can't really abort a task halfway through execution. There are no exceptions in Swift, and in general the philosophy is that aborting a task is dangerous and leads to bugs, so it just shouldn't be done.
So, you verify a value like this:
assert(timeInterval > 0)
This will terminate the program if an invalid value is provided.
You should also change timeInterval to be a UInt so that there will be a compiler error if anybody tries to pass a literal or a value that could be less than zero.
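A sketch of that (the type name is hypothetical; note UInt rules out negative literals at compile time, but not zero, so the assert is still doing work):

struct Poller {
    let timeInterval: UInt
    init(timeInterval: UInt) {
        assert(timeInterval > 0, "time interval must be greater than zero")
        self.timeInterval = timeInterval
    }
}

// Poller(timeInterval: -5)  // compile-time error: -5 doesn't fit in UInt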
It's probably not the answer you're looking for. But the goal is to check for bad parameters as early as possible, and that means doing it before you create any objects with those parameters. Ideally the check should be done at compile time but that doesn't always work.
I think this is the best solution; I took it from: How should I handle parameter validation in Swift
class NumberLessThanTen {
    var mySmallNumber: Int?
    class func instanceOrNil(number: Int) -> NumberLessThanTen? {
        if number < 10 {
            return NumberLessThanTen(number: number)
        } else {
            return nil
        }
    }
    required init() {
    }
    init(number: Int) {
        self.mySmallNumber = number
    }
}
let iv = NumberLessThanTen.instanceOrNil(17) // nil
let v = NumberLessThanTen.instanceOrNil(5)   // valid instance
let n = v!.mySmallNumber                     // Optional(5)
In the Swift book by Apple, at the very bottom of this section: https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/TheBasics.html#//apple_ref/doc/uid/TP40014097-CH5-XID_399
They say:
When to Use Assertions
Use an assertion whenever a condition has the potential to be false, but must definitely be true in order for your code to continue execution. Suitable scenarios for an assertion check include:
- An integer subscript index is passed to a custom subscript implementation, but the subscript index value could be too low or too high.
- A value is passed to a function, but an invalid value means that the function cannot fulfill its task.
- An optional value is currently nil, but a non-nil value is essential for subsequent code to execute successfully.
This sounds exactly like your situation!
Thus your code should look like:
init(timeInterval: Int) {
    assert(timeInterval > 0, "Time interval must be a positive integer")
    // Continue your execution normally
}