Hello, I have a problem with my Swift code.
In my application some SKLabelNodes are supposed to have their y coordinate set to
CGRectGetMaxY(self.frame) - (nodes[i].frame.size.height / 2 + 30) * (i + 4)
Where
var i:Int = 0
is a counter.
It works perfectly fine if, instead of (i + 4), I just give it a literal value, e.g. 5, or even (i == 0 ? 4 : 5) (just to check on two consecutive integers whether the formula itself is correct).
But when I go with any variable or constant or expression containing one, it displays an error "CGFloat is not convertible to Int". It seems completely illogical, because 4 is an integer and so is i and even (i + 4), in which case changing 4 to i shouldn't change the whole expression's type.
Can anyone explain why I get this error and how I can solve it?
Thanks in advance.
You have already explained and solved the matter perfectly. It's simply a matter of you accepting your own excellent explanation:
It works perfectly fine if ... I just give it a literal value ... But when I go with any variable or constant or expression containing one, it displays an error "CGFloat is not convertible to Int".
Correct. Numeric literals can be automatically coerced to another type, such as CGFloat. But variables cannot; you must coerce them, explicitly. To do so, initialize a new number, of the correct type, based on the old number.
So, this works automagically:
let f1: CGFloat = 1.0
let f2 = f1 + 1 // fine: the literal 1 is inferred as CGFloat
But here, you have to coerce:
let f1: CGFloat = 1.0
let f2 = 1 // f2 is inferred as Int
let f3 = f1 + CGFloat(f2) // the Int variable must be coerced explicitly
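Applied to the expression from the question, the counter term just needs that same wrapping; a sketch (the assignment to position.y is my assumption, the CGFloat(i + 4) coercion is the point):
nodes[i].position.y = CGRectGetMaxY(self.frame) - (nodes[i].frame.size.height / 2 + 30) * CGFloat(i + 4)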
It's a pain, but it keeps you honest, I guess. Personally I agree with your frustration, as my discussion here will show: Swift numerics and CGFloat (CGPoint, CGRect, etc.) It is hard to believe that a modern language is expected to work with such clumsy numeric handling, especially when we are forced against our will to bang into the Core Graphics CGFloat type so frequently in real-life Cocoa programming. But that, it seems, is the deal. I keep hoping that Swift will change in this regard, but it hasn't happened so far. And there is a legitimate counterargument that implicit coercion, which is what we were doing in Objective-C, is dangerous (and indeed, things like the coming of 64-bit have already exposed the danger).
Related
I have been working to understand why I am getting the error messages shown in the attachment.
The bottom-most message that indicates a comma is needed makes no sense to me at all.
The other two messages may well be related to a problem with data types, but I cannot determine what data type rules I have violated.
Many thanks for your time and attention.
There are a few different errors cropping up here, and the message about the separator is not really indicative of the problem.
SecondPartFraction is being declared twice. If those are meant to be two different variables, they should have two different names. If you simply wish to assign a new value to SecondPartFraction, just drop the var the second time you use it (as it has already been declared, you simply need to refer to it again).
Doubles and Ints can't be mixed in a division, so that error is correct. If you want a Double result, just change the 16 to 16.0; then the compiler won't complain.
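A small sketch of the idea (the names are assumptions, since the real code is only visible in the attachment):
let stepHeight: Double = 7.75
let parts = 16
// let fraction = stepHeight / parts      // error: cannot mix Double and Int
let fraction = stepHeight / 16.0           // fine: both operands are Double
let fraction2 = stepHeight / Double(parts) // or convert the Int explicitly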
The numbers you're getting originate from text fields too, which might cause some problems. If the user enters text that isn't numeric, the app will crash, since StepFirstPart and StepSecondPart are force-unwrapped. You will probably want to use some kind of optional binding to handle the case where the entry is not numeric.
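A sketch of that kind of check (the function and outlet names are mine, not from the question):
import UIKit

// Reads a numeric value from a text field without force-unwrapping; returns nil
// when the field is empty or the text is not a number.
func numericValue(of field: UITextField) -> Double? {
    guard let text = field.text, let value = Double(text) else {
        return nil
    }
    return value
}

// Usage (outlet name assumed):
// if let stepFirstPart = numericValue(of: stepFirstPartTextField) { ... }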
In the last line, the label text is being set to an Int - in order to do this, you'll have to use string interpolation instead, since the text for a label must be a string rather than a number:
TotalNumRisers.text = "\(TotalRisers)"
Just one last quick note - in Swift, camel case is the standard naming convention, so a variable name should start with a lowercase letter, with each subsequent word capitalized. So StepFirstPart would become stepFirstPart instead.
You declare the same variable twice here, e.g.
var x = 0
var x = value + x
instead it should be
var x = 0
x = value + x // remove var from here
I have no idea why this example is ambiguous. (My apologies for not adding the code here, it's simply too long.)
I have added prefix(_ maxLength) as an overload on LazyDropWhileBidirectionalCollection, and subscript(position) is defined on LazyPrefixCollection. The following code from the example above shouldn't be ambiguous, yet it is:
print([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2)[0]) // Ambiguous use of 'lazy'
It is my understanding that an overload that's higher up in the protocol hierarchy will get used.
According to the compiler it can't choose between two types, namely LazyRandomAccessCollection and LazySequence (which doesn't make sense, since subscript(position) is not a method of LazySequence). LazyRandomAccessCollection would be the logical choice here.
If I remove the subscript, it works:
print(Array([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2))) // [0, 1]
What could be the issue?
The trail here is just too complicated and ambiguous. You can see this by removing pieces of the expression. In particular, drop the final subscript:
let z = [0, 1, 2].lazy.drop(while: {_ in false}).prefix(2)
In this configuration, the compiler wants to type z as LazyPrefixCollection<LazyDropWhileBidirectionalCollection<[Int]>>. But that isn't indexable by integers. I know it feels like it should be, but it isn't provable by the current compiler. (see below) So your [0] fails. And backtracking isn't powerful enough to get back out of this crazy maze. There are just too many overloads with different return types, and the compiler doesn't know which one you want.
But this particular case is trivially fixed:
print([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2).first!)
That said, I would absolutely avoid pushing the compiler this hard. This is all too clever for Swift today. In particular overloads that return different types are very often a bad idea in Swift. When they're simple, yes, you can get away with it. But when you start layering them on, the compiler doesn't have a strong enough proof engine to resolve it. (That said, if we studied this long enough, I'm betting it actually is ambiguous somehow, but the diagnostic is misleading. That's a very common situation when you get into overly-clever Swift.)
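As a toy illustration (hypothetical functions, not from the question's code) of why overloads that differ only in return type push extra work onto the type checker:
func value() -> Int { return 1 }
func value() -> Double { return 1.0 }

// let v = value()     // error: ambiguous use of 'value()'
let v: Int = value()   // an explicit type is needed to pick an overload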
Now that you describe it (in the comments), the reasoning is straightforward.
LazyDropWhileCollection can't have an integer index. Index subscripting is required to be O(1). That's the meaning of the Index subscript versus other subscripts. (The Index subscript must also return the Element type or crash; it can't return an Element?. That's why there's a DictionaryIndex that's separate from Key.)
Since the collection is lazy and has an arbitrary number of missing elements, looking up any particular integer "count" (first, second, etc.) is O(n). It's not possible to know what the 100th element is without walking through at least 100 elements. To be a collection, its O(1) index has to be in a form that can only be created by having previously walked the sequence. It can't be Int.
This is important because when you write code like:
for i in 1...1000 { print(xs[i]) }
you expect that to be on the order of 1000 "steps," but if this collection had an integer index, it would be on the order of 1 million steps. By wrapping the index, they prevent you from writing that code in the first place.
This is especially important in highly generic languages like Swift where layers of general-purpose algorithms can easily cascade an unexpected O(n) operation into completely unworkable performance (by "unworkable" I mean things that you expected to take milliseconds taking minutes or more).
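If you really do need the element at a given position, the collection's own opaque index makes that walk explicit; a small sketch (current Swift, without the question's custom overloads):
let xs = [0, 1, 2, 3].lazy.drop(while: { $0 < 1 })
let i = xs.index(xs.startIndex, offsetBy: 2) // the O(n) walk is now spelled out
print(xs[i]) // 3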
Change the last line to this:
let x = [0, 1, 2]
let lazyX: LazySequence = x.lazy
let lazyX2: LazyRandomAccessCollection = x.lazy
let lazyX3: LazyBidirectionalCollection = x.lazy
let lazyX4: LazyCollection = x.lazy
print(lazyX.drop(while: {_ in false}).prefix(2)[0])
You can see that the array has four different lazy representations - you will have to be explicit about which one you want.
Closed. This question is opinion-based. It is not currently accepting answers.
I just want to know from experienced iOS developers what the advantages and disadvantages of using the "yoda condition" are. I am learning Swift and don't know much about this topic, and I did not find any suitable answer. Any help would be highly appreciated. Thank you.
C, C++, Objective-C, and Java (and some other languages) have three properties that work together in an error-prone way:
Assignment is an expression. That is, x = 1 is an expression with a value (the value 1 in this example).
Conditionals in control flow statements (like if and while statements) allow any expression that can be coerced to a Boolean value. Integers, floating point numbers, and pointers can all be coerced to Boolean: zero or null means false and anything else means true.
The assignment operator = and the equality operator == are very similar.
Because of these three features, it is easy to accidentally write if (x = 1) when you meant to write if (x == 1). Your program still compiles, but probably behaves incorrectly some of the time.
The “Yoda conditional” is a style that helps prevent this error. If you are in the habit of writing if (1 == x), and you accidentally write if (1 = x), your program will not compile.
However, the if (x = 1) error cannot happen in Swift, because Swift lacks two of the three properties described above.
In Swift, an assignment like x = 1 is a statement, not an expression. It cannot be used as the conditional of a control statement.
In Swift, the conditional of a control statement must explicitly be a Bool. No other types are automatically coerced to Bool. Even if assignment were an expression, if (x = 1) would still be prohibited, because x = 1 would have the type of x, which cannot be Bool because 1 is not a Bool value.
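A minimal sketch of both points as the Swift compiler sees them (the commented-out lines are the forms it rejects):
let x = 1

// if x = 1 { }   // rejected: assignment is a statement in Swift, so there is no value to test
// if x { }       // rejected: an 'if' condition must be a Bool; Int is not coerced
if x == 1 {        // the only form that compiles
    print("x is one")
}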
So there is no reason to ever use a Yoda condition in Swift (unless you find it clearer for some other reason).
Note also that many modern compilers for other languages will print a warning if you use an assignment as a conditional, so Yoda syntax isn't as useful as it used to be in C, C++, and Objective-C.
The concept of a "Yoda condition" is nonsensical. The putative idea of "Yoda" style is that the two sides of the == operator are reversed "from what they usually would be". But that is nonsensical, because there is no sense at all of what they "usually would be". It's a fairly dumb piece of made-up lingo. Programmers write the if condition simply depending on what feels best to express the algorithm at hand. Forget about it.
At best it is of historic interest as a curiosity, irrelevant to anyone young enough to be alive today; to any new programmers learning (such as the OP here), it's irrelevant. In the sense of the OP's question, there is no right or wrong order - at best it's a matter of style in expressing the idea of the algorithm at hand.
Just to expand on what others have already said, yoda conditions are a holdover from C because of a specific bug that is easy to make in C and impossible in Swift. The bug in question goes like this:
if (x = 5) { ... }
The problem is that x = 5 is an assignment. You probably meant x == 5. In C, an assignment is an expression that evaluates to the assigned value, so x = 5 evaluates to 5, which is not 0, so the condition is true no matter what x was. This was a really common bug until compilers finally started to warn about it. C actually relies on this behavior, so it's not something C could remove; it's very common to use it in code like:
if (ch = getchar()) { ... }
The habitual fix for this bug was to invert the condition: if (5 = x) { ... } is a compile error because you can't assign to a literal.
This is all irrelevant in Swift. Swift made = a statement rather than an expression, specifically to avoid this kind of bug. So if x = 5 { ...} is a syntax error already.
Yoda style just makes things harder to read; don't use it in Swift. Even in C, the compiler will give you stern warnings about this bug, so unless you are working in an old code base where it is the style, there's no reason to use it even there.
I'm not an experienced PHP programmer, but I believe the style is also common there, where it may still make sense. But not in Swift.
var number = 3
vs
var number: Int = 3
How does using specific types vs type inference affect compile time? Has anyone done experiments or some math on this topic?
Does this affect the runtime at all in any way?
Compile time: in most cases, the difference will be trivial. In your example, 3 is an integer literal; integer literals can adapt to their use, but it is trivial for the compiler to conclude that number will have type Int.
At runtime, there is absolutely no difference. Both statements are 100 percent equivalent.
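You can confirm the equivalence yourself:
let inferred = 3 // type inferred as Int
let annotated: Int = 3 // type written explicitly
print(type(of: inferred), type(of: annotated)) // prints: Int Int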
Both examples do the same thing. Differences appear when using floating point values:
var double = 2.5 // inferred as Double
var float: Float = 2.5 // the annotation makes this a Float
In Swift it's better to write less code. This makes the code healthier, and for sure it will be faster.
I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a raw pointer to an object, as in p (void) [0x260da630 setProgress:..., the expression parser isn't looking at the object's isa to get the class and pull the argument types from it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having simplified my question for readability in a way that broke it!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…