Writing graphics code in UIKit is a PITA. The "nominal" floating-point type for Swift is Double, but most of the UIKit graphics code uses CGFloat, which is either Double or Float depending on the platform. I find myself having to constantly wrap things in CGFloat() and Double() conversions.
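For example (a made-up snippet, but representative of the friction):
import UIKit

let insetRatio: Double = 0.1
let view = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 200))

// Without a conversion this fails to type-check:
// error: binary operator '*' cannot be applied to operands of type 'CGFloat' and 'Double'
// let inset = view.bounds.width * insetRatio

let inset = view.bounds.width * CGFloat(insetRatio)   // the conversion I keep having to write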
I have considered killing the problem by simply providing the operators it continually complains are lacking (I like to add lots of numeric type extensions anyway):
func * (lhs: CGFloat, rhs: Double) -> Double {
    return Double(lhs) * rhs
}
func * (lhs: CGFloat, rhs: Double) -> CGFloat {
    return lhs * CGFloat(rhs)
}
func * (lhs: Double, rhs: CGFloat) -> Double {
    return lhs * Double(rhs)
}
func * (lhs: Double, rhs: CGFloat) -> CGFloat {
    return CGFloat(lhs) * rhs
}
With this in place, I don't have to care anymore. I realize there will be lots of opinions as to whether this is a good thing or not. I get the cases where there can be subtle differences between a CGFloat that is a Float on a 32 bit platform and a Double, but I'm not sure that I'm likely to see them anyway. In other words, I can do this once and only once and get stung by those edge cases where fp math breaks down at the boundaries, or I can constantly convert things a million times over and still get stung by the same edge case. So my question is, is there ANY OTHER reason than those edge cases of fp math, not to do this?
I’ve wanted to speed up my build times, so one of the steps was adding these Other Swift Flags:
-Xfrontend -warn-long-function-bodies=100
-Xfrontend -warn-long-expression-type-checking=100
But I’m not really sure how type checking works. For example, here’s a simple func for creating a random CGFloat. The type check for it takes over 200 ms:
static func randomColorValue() -> CGFloat {
    return CGFloat(Int.random(in: 0...255)) / 255.0
}
But on changing to something like this
static func randomColorValue() -> CGFloat {
    let rnd = Int.random(in: 0...255)
    let frnd = CGFloat(rnd)
    let result = frnd / 255.0
    return result
}
or like this
static func randomColorValue() -> CGFloat {
    let rnd: Int = Int.random(in: 0...255)
    let frnd: CGFloat = CGFloat(rnd)
    let result: CGFloat = frnd / 255.0
    return result
}
the type check is still over 200 ms.
What's wrong here? Is there any set of rules and best practices for dealing with build times?
My Mac is a bit older (2012), maybe that's the problem?
EDIT:
After turning off -warn-long-function-bodies, the problematic line appeared, and it is
CGFloat(rnd)
It appears that converting an Int to Float, Double, or CGFloat shows up as a slowdown of about 150 ms.
Note that -warn-long-function-bodies is unsupported (it was added as an experimental flag). If you remove it, I find that the expression type-checking time is often reported as twice as fast, which makes me believe that using the two measurements together is causing interference. Measurement takes time, too. -warn-long-expression-type-checking is a supported option.
When generating a random CGFloat, I use the following code. Some explanation here
extension CGFloat {
    static func randomFloat(from: CGFloat, to: CGFloat) -> CGFloat {
        let randomValue: CGFloat = CGFloat(Float(arc4random()) / 0xFFFFFFFF)
        return randomValue * (to - from) + from
    }
}
It used to be OK, and it still works fine now. But after I upgraded to Swift 5, there is a warning:
'4294967295' is not exactly representable as 'Float'; it becomes '4294967296'
on this line of code:
let randomValue: CGFloat = CGFloat(Float(arc4random()) / 0xFFFFFFFF)
How do I fix it?
As you know, Float has only 24 bits of significand, so the 32-bit value 0xFFFFFFFF cannot be represented exactly and gets rounded. Swift is warning you that Float cannot represent the value 0xFFFFFFFF precisely.
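You can check this for yourself in a playground (just an illustration, not part of the fix):
let x = Float(0xFFFFFFFF as UInt32)   // 4294967295 forced through an integer type
print(x)                              // prints 4.2949673e+09, i.e. 4294967296
print(x == 4294967296)                // true: the nearest representable Float is exactly 2^32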
The short fix would be something like this:
let randomValue: CGFloat = CGFloat(Float(arc4random()) / Float(0xFFFFFFFF))
When you use Float.init explicitly, Swift does not generate such warnings.
But the preferred way would be to use the random(in:) method, as suggested in matt's answer:
return CGFloat(Float.random(in: from...to))
or simply:
return CGFloat.random(in: from...to)
Simplest solution: abandon your code and just call https://developer.apple.com/documentation/coregraphics/cgfloat/2994408-random.
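For example, keeping the original method's signature, that boils down to (a sketch, assuming from <= to):
extension CGFloat {
    static func randomFloat(from: CGFloat, to: CGFloat) -> CGFloat {
        // random(in:) does the scaling itself and avoids the Float precision issue entirely
        return CGFloat.random(in: from...to)
    }
}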
In my application I can't decide what floating-point format will be best for performance. It's not so much the number of bits that I am worried about as how it interfaces with the various math and graphics libraries I am using.
As a result I have built everything using typealias EngineDecimal = CGFloat so that at the end I can experiment with changing that to other formats such as GLFloat, Float32 etc.
My question is what does the compiler do if I write a function like this:
func foo(_ value: EngineDecimal) -> EngineDecimal {
    return EngineDecimal(foo2(CGFloat(value)))
}

// foo2 is a library-defined function that I have no control over,
// but I'm typing a sample one for this example
func foo2(_ value: CGFloat) -> CGFloat {
    return sin(value) + cos(value)
}
Will the compiler notice that EngineDecimal is the same type as CGFloat and thus get rid of the conversion calls? So in essence, would this code run faster with typealias EngineDecimal = CGFloat than with typealias EngineDecimal = GLFloat?
A typealias doesn't create a new type, it just allows a new name to be used in place of an existing type. So there is no casting being done and no optimisation needs to occur.
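A quick way to convince yourself (a tiny sketch, with EngineDecimal aliased to CGFloat as in the question):
import CoreGraphics

typealias EngineDecimal = CGFloat

let a: EngineDecimal = 2.0
let b: CGFloat = a    // no conversion at all: EngineDecimal *is* CGFloat
let c = CGFloat(a)    // converting a value to its own type; effectively a no-op after optimization
print(b == c)         // true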
I have researched and looked into many different random functions, but the downside is that they only work for one data type. I want a way to have one function work for Double, Float, Int, CGFloat, etc. I know I could just convert between data types, but I would like to know if there is a way to do it using generics. Does someone know a way to make a generic random function in Swift? Thanks.
In theory you could port code such as this answer (which is not necessarily a great solution, depending on your needs, since UInt32.max is relatively small)
extension FloatingPointType {
    static func rand() -> Self {
        let num = Self(arc4random())
        let denom = Self(UInt32.max)
        // this line won’t compile:
        return num / denom
    }
}
// then you could do
let d = Double.rand()
let f = Float.rand()
let c = CGFloat.rand()
Except… FloatingPointType doesn’t conform to a protocol that guarantees operators like /, + etc (unlike IntegerType which conforms to _IntegerArithmeticType).
You could do this manually though:
protocol FloatingPointArithmeticType: FloatingPointType {
    func /(lhs: Self, rhs: Self) -> Self
    // etc
}
extension Double: FloatingPointArithmeticType { }
extension Float: FloatingPointArithmeticType { }
extension CGFloat: FloatingPointArithmeticType { }
extension FloatingPointArithmeticType {
    static func rand() -> Self {
        // same as before
    }
}
// so these now work
let d = Double.rand() // etc
but this probably isn’t quite as out-of-the-box as you were hoping (and no doubt contains some subtle invalid assumption about floating-point numbers on my part).
As for something that works across both integers and floats, the Swift team have mentioned on their mailing lists before that the lack of protocols spanning both types is a conscious decision, since it’s rarely correct to write the same code for both, and that’s certainly the case for generating random numbers, which need two very different approaches.
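(For reference: in Swift 4.2 and later the standard library largely solves the floating-point half of this, because BinaryFloatingPoint exposes random(in:). A sketch of the same rand() idea on top of it:)
import CoreGraphics

extension BinaryFloatingPoint where RawSignificand: FixedWidthInteger {
    static func rand() -> Self {
        // random(in:) is a standard-library method on BinaryFloatingPoint (Swift 4.2+)
        return Self.random(in: 0...1)
    }
}

let d = Double.rand()
let f = Float.rand()
let c = CGFloat.rand()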
Before you get too far into this, think about what you want the behavior of a type-agnostic random function to be, and whether that's something that you want. It sounds like you're proposing something like this:
// signature only
func random<T>() -> T
// example call sites, with specialization by inference from declared result type
let randInt: Int = random()
let randFloat: Float = random()
let randDouble: Double = random()
let randInt64: Int64 = random()
(Note this syntax is sort of fake: without any constraints on T, the implementation of random<T>() has no way to actually construct a value of whichever type the caller infers.)
What do you expect the possible values in each of these to be? Is randInt always a value between zero and Int.max? (Or maybe between zero and UInt32.max?) Is randFloat always between 0.0 and 1.0? Should randDouble actually have a larger count of possible values than randFloat (per the increased resolution between 0.0 and 1.0 of the Double type)? How do you account for the fact that Int is actually Int32 on 32-bit systems and Int64 on 64-bit hardware?
Are you sure it makes sense to have two (or more) calls that look identical but return values in different ranges?
Second, do you really want "arbitrarily random" random number generation everywhere in your app/game? Most use of RNGs is in game design, where typically there are a couple of important things you want in your RNG before you get your product past the prototyping stage:
Independence: I've seen games where you could learn to predict the next "random" enemy encounter based on recent "random" NPC chitchat/emotes. If you're using random elements in multiple gameplay systems, you don't want the draws in one system to influence, or be predictable from, the draws in another.
Determinism: If you want to be able to reproduce a sequence of game events — either for testing/debugging or for consistent results between clients in a networked game — you don't want to be using a random function where you can't control that sequence. arc4random doesn't let you control the initial seed, and you have no control over the sequence because you don't know what other code in your process (library code, or just other code of your own that you forgot about) is also pulling numbers from the generator.
(If you're not making a game... much of this still applies, though it may be less important. You still don't want to be re-running your test case until the heat death of the universe trying to randomly find the same bug that one of your users reported.)
In iOS 9 / OS X 10.11 / tvOS, GameplayKit provides a suite of randomization tools to address these issues.
import GameplayKit

let randA = GKARC4RandomSource()
let someNumbers = (0..<1000).map { _ in randA.nextInt() }
let randB = GKARC4RandomSource()
let someMoreNumbers = (0..<1000).map { _ in randB.nextInt() }
let randC = GKARC4RandomSource(seed: randA.seed)
let evenMoreNumbers = (0..<1000).map { _ in randC.nextInt() }
Here, someMoreNumbers is independent of someNumbers, no matter what happens in the generation of someNumbers or what other randomization activity happens on the system. And evenMoreNumbers is the exact same sequence of numbers as someNumbers, because randC was created with randA's seed.
Okay, so the GameplayKit syntax isn't quite what you want? Well, some of that is a natural consequence of having to manage RNGs as objects so that you can keep them independent and deterministic. But if you really want to have a super-simple random() call that you can slot in wherever needed, regardless of return type, and be independent and deterministic...
Here's one possible recipe for that. It implements random() as a static function on a type extension, so that you can use type-inference shorthand to write it; e.g.:
// func somethingThatTakesAnInt(a: Int, andAFloat b: Float)
somethingThatTakesAnInt(.random(), andAFloat: .random())
The first parameter automatically calls Int.random() and the second calls Float.random(). (This is the same mechanism that lets you use shorthand for referring to enum cases, or .max instead of Int.max, etc.)
And it makes the type extensions private, with the idea that different source files will want independent RNGs.
// EnemyAI.swift
import GameplayKit

private extension Int {
    static func random() -> Int {
        return EnemyAI.die.nextInt()
    }
}

class EnemyAI: NSObject {
    static let die = GKARC4RandomSource()
    // ...
}
// ProceduralWorld.swift
import GameplayKit

private extension Int {
    static func random() -> Int {
        return ProceduralWorld.die.nextInt()
    }
}

class ProceduralWorld: NSObject {
    static let die = GKARC4RandomSource()
    // ...
}
Repeat with extensions for more types as desired.
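For example, a Float counterpart in the same file might look like this (nextUniform() is the GKRandom method that returns a Float in 0.0...1.0):
// also in EnemyAI.swift
private extension Float {
    static func random() -> Float {
        return EnemyAI.die.nextUniform()
    }
}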
If you add some breakpoints or logging to the different random() functions you'll see that the two implementations of Int.random() are specific to the file they're in.
Anyway, that's a lot of boilerplate, but perhaps it's good food for thought...
Personally, I'd probably write individual implementations for each thing you wanted. There just aren't that many, and it's a lot safer and more flexible. But… sure, you can do this. Just create a random bit pattern and say "that's a number."
func randomValueOfType<T>(type: T.Type) -> T {
    let size = sizeof(type)
    let bits = UnsafeMutablePointer<T>.alloc(1)
    defer { bits.dealloc(1) }   // free the scratch buffer once the value has been copied out
    arc4random_buf(bits, size)
    return bits.memory
}
(This is technically "that's a something" but if you pass it something other than number-like types, you'll probably crash because most random bit patterns aren't a legal object.)
I believe every possible bit pattern will generate a legal IEEE 754 number, but "legal" may be more complex than you're thinking. A "totally random" float would rightly include NaN (not-a-number) which will show up reasonably often in your results.
There are some other special cases like the infinities and negative zero, but in practice those should never occur. The probability of any single bit pattern showing up in a random choice of 32 bits is effectively zero. There are lots of NaN bit patterns, though, so NaN shows up a lot.
And that's the problem with this whole approach. If your random generator can accept that NaN shows up, then it's probably testing real floating point. But if you're testing real floating point, you really want to be checking the edge cases like infinity and negative zero. But if you don't want to include NaN, then you don't really mean "a random Float" you mean "a random Real number that can be expressed as a Float." There's no type for that, so you would need to write specific logic to handle it, and if you're doing that, you might as well write a specialized random generator for each type.
But this function is probably still a useful foundation for building that. You could just generate values until one is a legal number (NaN doesn't show up that often, so you'll almost certainly get a legal value within a try or two).
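A sketch of that retry loop, written with current Swift's Float(bitPattern:) instead of the pointer-based helper above (the function name is mine):
import Foundation

/// Draw random 32-bit patterns until one decodes to a finite Float
/// (rejecting NaN and the infinities).
func randomFiniteFloat() -> Float {
    while true {
        let candidate = Float(bitPattern: arc4random())
        if candidate.isFinite {
            return candidate
        }
    }
}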
This kind of emphasizes the point Airspeed Velocity made about why there's no generic "number" protocol. You usually can't just treat floating point numbers like integers. They just work differently, and you very often need to think about that fact.
For Friday Fun, I wanted to model Angles in interchangeable formats. I'm not sure if I've done it in the most Swift-idiomatic way, but I'm learning. So I have an Angle protocol, and then 3 different struct types (Radians, Degrees, and Rotations) which all conform to the Angle protocol. I'd like to be able to add/subtract them, but the trick is, I want the lhs argument to dictate the return type. For example:
Degrees(180) + Rotations(0.25) --> Degrees(270)
or
Rotations(0.25) + Radians(M_PI) -> Rotations(0.75)
What I was hoping was I could do something like
func + (lhs: Angle, rhs: Angle) -> Angle {
    return lhs.dynamicType(rawRadians: lhs.rawRadians + rhs.rawRadians)
}
The Angle protocol requires a var rawRadians:CGFloat { get } as well as an init(rawRadians:CGFloat)
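Spelled out, the protocol and conforming types look roughly like this (sketched; the convenience inits just do the obvious unit conversion):
import CoreGraphics

protocol Angle {
    var rawRadians: CGFloat { get }
    init(rawRadians: CGFloat)
}

struct Radians: Angle {
    let rawRadians: CGFloat
    init(rawRadians: CGFloat) { self.rawRadians = rawRadians }
    init(_ radians: CGFloat) { self.init(rawRadians: radians) }
}

struct Degrees: Angle {
    let rawRadians: CGFloat
    init(rawRadians: CGFloat) { self.rawRadians = rawRadians }
    init(_ degrees: CGFloat) { self.init(rawRadians: degrees * .pi / 180) }
}

struct Rotations: Angle {
    let rawRadians: CGFloat
    init(rawRadians: CGFloat) { self.rawRadians = rawRadians }
    init(_ rotations: CGFloat) { self.init(rawRadians: rotations * 2 * .pi) }
}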
I could do this with a Smalltalk-esque double-dispatch approach, but I think there must be a more Swift-appropriate approach (especially one that requires less code; double dispatch requires a lot of boilerplate).
You just need a generic addition:
func +<A: Angle>(lhs: A, rhs: Angle) -> A {
    return A(rawRadians: lhs.rawRadians + rhs.rawRadians)
}
This way the addition will return whatever type is on the lhs.
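With conforming types along the lines sketched in the question, that produces exactly the behaviour asked for:
let a = Degrees(180) + Rotations(0.25)   // a is Degrees, rawRadians == 1.5 * .pi (270°)
let b = Rotations(0.25) + Radians(.pi)   // b is Rotations, rawRadians == 1.5 * .pi (0.75 of a turn)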
Generally speaking, if you're using dynamicType, you're probably fighting Swift. Swift relies more on generics and protocols (i.e. static type information at compile time) rather than dynamic dispatch (i.e dynamic type information at runtime).
In this code, as you say, A is a placeholder for "some type that conforms to Angle, to be determined at compile time." So in your first example:
Degrees(180) + Rotations(0.25) --> Degrees(270)
This actually calls a specialized function +<Degrees>. And this:
Rotations(0.25) + Radians(M_PI) -> Rotations(0.75)
calls a (logically) different function called +<Rotations>. The compiler may choose to optimize these functions into a single function, but logically they are independent functions, created at compile time. It's basically a shortcut for writing addDegrees(Degrees, Angle) and addRotations(Rotations, Angle) by hand.
Now, to your question about a function that takes two Angles and returns.... well, what? If you want to return an Angle in this case, that's easy, and exactly matches your original signature:
func +(lhs: Angle, rhs: Angle) -> Angle {
    return Radians(rawRadians: lhs.rawRadians + rhs.rawRadians)
}
"But..." you're saying, "that returns Radians." No it doesn't. It returns Angle. You can do anything "angle-like" on it you want. The implementation details are opaque as they should be. If you care that the underlying data structure is a Radians, you're almost certainly doing something wrong.
OK, there is one side case where it may be useful to know this, and that's if you're printing things out based on how you got them. So if the user gave you degree information to start, then you want to print everything in degrees (using a description method that you didn't mention). And maybe that's worth doing in that particular case. If you want to, your original code was very close:
func +(lhs: Angle, rhs: Angle) -> Angle {
    return lhs.dynamicType.init(rawRadians: lhs.rawRadians + rhs.rawRadians)
}
But it's critical to understand that this doesn't match your request to have "the lhs argument to dictate the return type." This causes the lhs argument to dictate the return implementation. The return type is always Angle. If you want to change the return type, you need to use the generics.