I'm using an open source project (NSBKeyframeAnimation) for some of the animations in my project. Here is an example of the methods I'm using:
double NSBKeyframeAnimationFunctionEaseInQuad(double t,double b, double c, double d)
{
return c*(t/=d)*t + b;
}
I have updated my Xcode to 5.0, and every method from this project started showing warnings like this: "Unsequenced modification and access to 't'". Should I rewrite all these methods in Objective-C, or is there another approach to get rid of all these warnings?
The behavior of the expression c*(t/=d)*t + b is undefined: t is both modified (t/=d) and read (the trailing *t) with no sequence point in between, so the two operands may see t in either state. You should fix it, e.g. to
t /= d;
return c*t*t + b;
See for example Undefined behavior and sequence points for a detailed explanation.
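Applied to the whole function from the question, the sequenced rewrite looks like this:
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    t /= d;           // normalize elapsed time first; the ';' is a sequence point
    return c*t*t + b; // t is now only read, never modified, in this expression
}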
Alternatively, those warnings can be disabled. Put this before the code triggering the warning:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
and this after that code:
#pragma clang diagnostic pop
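For example, wrapped around the function from the question:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*(t/=d)*t + b; // warning silenced; the unsequenced expression is unchanged
}
#pragma clang diagnostic pop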
However, suppressing the warning does not remove the undefined behavior, and nothing guarantees that compilers will always handle this case gracefully.
I came to this page because I got 50 of those warnings, in exactly the same source file.
I'm grateful for those functions, but the programmer should realize that trying to write everything on one line is very "1980s", from the days when compilers weren't nearly as optimizing as today's.
And back when it actually mattered to win a few processor cycles, we only had a few million, not the billions we have now.
I would always put readability first.
The warning you are referring to appears in all versions of Xcode, since Xcode is not its source; the programmer is. Nor is it Xcode that generates the warning: it is the Clang compiler Xcode uses to build your code that identifies the potential issue.
While rewriting the expression will resolve the warning, you do not have to do that in this case (or, technically, in any other case like this); you can leave the expression as it is.
The answer is to add sequence (or order) to the modification of the variable (i.e., the assignment of the new value) and its subsequent use, and that only takes a few extra characters to achieve: a pair of braces ensconced in parentheses and a semicolon, i.e. a GNU statement expression (see Statements and Declarations in Expressions).
double NSBKeyframeAnimationFunctionEaseInQuad(double t,double b, double c, double d)
{
return c*({(t/=d);})*t + b; // the ';' inside the statement expression completes t/=d before its value is used
}
Related
I copied and pasted the widely available code for the djb2 hashing function, but it generates the error shown below (I am using the CS50.ide, which may be a factor). Since this error IS fixed by a second set of parentheses, can someone explain why those aren't in the code I find everywhere online?
dictionary.c:67:14: error: using the result of an assignment as a condition without
parentheses [-Werror,-Wparentheses]
while (c = *word++)
~~^~~~~~~~~
dictionary.c:67:14: note: place parentheses around the assignment to silence this
warning
while (c = *word++)
^
( )
dictionary.c:67:14: note: use '==' to turn this assignment into an equality comparison
while (c = *word++)
^
==
= is used for setting a variable to a value. == is the relational operator used for comparing values for equality. Perhaps you are finding the C++ version of the function, or perhaps it is the IDE's compiler rules/config.
I understand = vs. ==. My question was how come I get a compiler error with code that is correct, since it is a well-established hash function.
Turns out it is related to the CS50 makefile being more stringent than clang on its own. Needlessly frustrating.
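For reference, here is a sketch of the fix the compiler note suggests, applied to a djb2-style loop (function and variable names assumed from the error output):
unsigned long djb2_hash(const char *word)
{
    unsigned long hash = 5381;
    int c;
    // The doubled parentheses tell the compiler the assignment inside the
    // condition is intentional, which silences -Wparentheses (fatal under
    // the CS50 makefile's -Werror).
    while ((c = *word++))
        hash = ((hash << 5) + hash) + c; // hash * 33 + c
    return hash;
}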
This is kind of a weird and un-Swift-thonic question, so bear with me.
I want to do in Swift something like the same thing I'm currently doing in Objective-C/C++, so I'll start by describing that.
I have some existing C++ code that defines a macro that, when used in an expression anywhere in the code, will insert an entry into a table in the binary at compile time. In other words, the user writes something like this:
#include "magic.h"
void foo(bool b) {
if (b) {
printf("%d\n", MAGIC(xyzzy));
}
}
and thanks to the definition
#define MAGIC(Name) \
[]{ static int __attribute__((used, section("__DATA,magical"))) Name; return Name; }()
what actually happens at compile time is that a static variable named xyzzy (modulo name-mangling) is created and allocated into the special magical section of my Mach-O binary, so that running nm -m foo.o to dump the symbols shows something a lot like this:
0000000000000098 (__TEXT,__eh_frame) non-external EH_frame0
0000000000000050 (__TEXT,__cstring) non-external L_.str
0000000000000000 (__TEXT,__text) external __Z3foob
00000000000000b0 (__TEXT,__eh_frame) external __Z3foob.eh
0000000000000040 (__TEXT,__text) non-external __ZZ3foobENK3$_0clEv
00000000000000d8 (__TEXT,__eh_frame) non-external __ZZ3foobENK3$_0clEv.eh
0000000000000054 (__DATA,magical) non-external [no dead strip] __ZZZ3foobENK3$_0clEvE5xyzzy
(undefined) external _printf
Through the magic of getsectbynamefromheader(), I can then load the symbol table for the magical section, scan through it, and find out (by demangling every symbol I find) that at some point in the user's code, he calls MAGIC(xyzzy). Eureka!
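A minimal C sketch of that section lookup, assuming the table stays in the main executable image and using getsectiondata() from <mach-o/getsect.h> rather than walking the symbol table by hand, might look like this:
#include <mach-o/dyld.h>
#include <mach-o/getsect.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    // Image 0 is the main executable; cast its header for the 64-bit API.
    const struct mach_header_64 *hdr =
        (const struct mach_header_64 *)_dyld_get_image_header(0);
    unsigned long size = 0;
    // Locate the (__DATA,magical) section that the MAGIC macro emits into.
    uint8_t *data = getsectiondata(hdr, "__DATA", "magical", &size);
    if (data != NULL)
        printf("magical section at %p, %lu bytes\n", (void *)data, size);
    return 0;
}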
I can replicate the whole second half of that workflow just fine in Swift — starting with the getsectbynamefromheader() part. However, the first part has me stumped.
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
As far as I know, Swift has no way to insert symbols into a given section — no equivalent of __attribute__((section)). This is okay, though, since nothing in my plan requires a dedicated section; that part just makes the second half easier.
As far as I know, the only way to get a symbol into the symbol table in Swift is via a local struct definition. Something like this:
func foo(b: Bool) -> Void {
    struct Local { static var xyzzy = 0 }
    print(Local.xyzzy)
}
That works, but it's a bit of extra typing, and can't be done inline in an expression (not that that'll matter if we can't make a MAGIC macro in Swift anyway), and I'm worried that the Swift compiler might optimize it away.
So, there are three questions here, all about how to make Swift do things that Swift doesn't want to do: Macros, attributes, and creating symbols that are resistant to compiler optimization.
I'm aware of #asmname but I don't think it helps me since I can already deal with demangling on my own.
I'm aware that Swift has "generics", but they seem to be closer to Java generics than to C++ templates; I don't think they can be used as a substitute for macros in this particular case.
I'm aware that the code for the Swift compiler is now open-source; I've skimmed bits of it in vain; but I can't read through all of it looking for tricks that might not even be there.
Here is the answer to your question about the preprocessor (and macros).
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
The Swift project has a preprocessor (but, AFAIK, it is not distributed with Swift's binary releases).
From swift-users mailing list:
What are .swift.gyb files?
It’s a preprocessor the Swift
team wrote so that when they needed to build, say, ten nearly-identical
variants of Int, they wouldn’t have to literally copy and paste the same
code ten times. If you open one of those files, you’ll see that they’re
mainly Swift code, but with some lines of code intermixed that are written in Python.
It is not as beautiful as C macros but, IMHO, it is more powerful.
You can see the available commands by running ./swift/utils/gyb --help after cloning Swift's git repo.
$ swift/utils/gyb --help
usage, etc (TL;DR)...
Example template:
- Hello -
%{
x = 42
def succ(a):
return a+1
}%
I can assure you that ${x} < ${succ(x)}
% if int(y) > 7:
% for i in range(3):
y is greater than seven!
% end
% else:
y is less than or equal to seven
% end
- The End. -
When run with "gyb -Dy=9", the output is
- Hello -
I can assure you that 42 < 43
y is greater than seven!
y is greater than seven!
y is greater than seven!
- The End. -
My example of GYB usage is available as a GitHub Gist.
For more complex examples, look for *.swift.gyb files in apple/swift/stdlib/public/core.
I'm debugging a UIProgressView. Specifically I'm calling -setProgress: and -setProgress:animated:.
When I call it in LLDB using:
p (void) [_progressView setProgress:(float)0.5f]
the progressView ends up with a progress value of 0. Apparently, LLDB doesn't parse the float value correctly.
Any idea how I can get float arguments being parsed correctly by LLDB?
Btw, I'm experiencing the same problem in GDB.
In Xcode 4.5 and before, this was a common problem. If I remember correctly, what was really happening there was that the old C no-prototype type promotion rules were in effect. If you were passing a floating point value, it had type double. If you were passing an integral value, it was passed as int, that kind of thing. If you wrote (float)0.8f, lldb would take those 4 bytes of (float) and pass them to something that reads 8 bytes and interprets it as a double.
In Xcode 4.6, lldb will fetch the argument types from the Objective-C runtime, if it can, so it knows that the argument is really taking a float here. You shouldn't even need the (float) cast.
My guess is that when you give lldb a raw pointer to an object, as in p (void) [0x260da630 setProgress:..., the expression parser isn't looking at the object's isa to get the class and pull the argument types from it. As soon as you added a cast to the object address, it got the types.
I think when you wrote setProgress:(float)0.8f for gdb, it would take this as a special indication that this argument is a float type -- in essence, you were providing the prototype. It's something that I think lldb's expression parser should do some time in the future, but the fact that clang is used to do all the expression parsing means that it's a little harder to shoehorn these non-standard meanings into it. (there are already a few, of course, e.g. p $r0 works ;)
Found the problem. In reality my LLDB command looked slightly different:
p (void) [0x260da630 setProgress:(float)0.8f animated:NO]
where 0x260da630 is my UIProgressView. Apparently, the debugger really needs to know the exact type of the receiving object and doesn't honor the cast of the argument, so
p (void) [(UIProgressView*)0x260da630 setProgress:(float)0.8f animated:NO]
works. (Even casting to id wasn't sufficient!)
Thanks for your comments, Martin R and Martin Ullrich, and apologies for having simplified the command in my question for better readability; that simplification is what obscured the problem!
Btw, I swear, I had used the property instead of the address as well. But perhaps restarting Xcode also helped…
What is the difference between forward declaration and forward reference?
Forward declaration is, in my head, when you declare a function that isn't yet implemented, but is this incorrect? Do you have to look at the specified situation for either declaring a case "forward reference" or "forward declaration"?
A forward declaration is the declaration of a method or variable before you implement and use it. The purpose of forward declarations is to let the compiler resolve references in a single pass, which saves compilation time.
The forward declaration of a variable (an extern declaration in C) announces its name and type so you can refer to it; the storage itself is set aside where the variable is actually defined.
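In C, for example:
extern int counter; // declaration: announces name and type, allocates nothing

int counter = 0;    // definition: this is where storage is actually set aside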
The forward declaration of a function is also called a "function prototype": a declaration statement that tells the compiler what a function's return type is, what the name of the function is, and what the types of its parameters are. Compilers for languages such as C/C++ and Pascal store declared symbols (which include functions) in a lookup table and reference them as they come across them in your code. These compilers read your code sequentially, that is, top to bottom, so if you don't forward declare, the compiler encounters a symbol it can't find in the lookup table and raises an error because it doesn't know how to resolve the call.
The forward declaration is a hint to the compiler that you have defined (filled out the implementation of) the function elsewhere.
For example:
int first(int x); // forward declaration of first
...
int first(int x) {
if (x == 0) return 1;
else return 2;
}
But, you ask, why don't we just have the compiler make two passes on every source file: the first one to index all the symbols inside, and the second to parse the references and look them up? According to Dan Story:
When C was created in 1972, computing resources were much more scarce
and at a high premium -- the memory required to store a complex
program's entire symbolic table at once simply wasn't available in
most systems. Fixed storage was also expensive, and extremely slow, so
ideas like virtual memory or storing parts of the symbolic table on
disk simply wouldn't have allowed compilation in a reasonable
timeframe... When you're dealing with magnetic tape where seek times
were measured in seconds and read throughput was measured in bytes per
second (not kilobytes or megabytes), that was pretty meaningful.
C++, while created almost 17 years later, was defined as a superset
of C, and therefore had to use the same mechanism.
By the time Java rolled around in 1995, average computers had enough
memory that holding a symbolic table, even for a complex project, was
no longer a substantial burden. And Java wasn't designed to be
backwards-compatible with C, so it had no need to adopt a legacy
mechanism. C# was similarly unencumbered.
As a result, their designers chose to shift the burden of
compartmentalizing symbolic declaration back off the programmer and
put it on the computer again, since its cost in proportion to the
total effort of compilation was minimal.
In Java and C#, identifiers are recognized automatically from source files and read directly from dynamic library symbols. In these languages, header files are not needed for the same reason.
A forward reference is the opposite. It refers to the use of an entity before its declaration. For example:
int first(int x) {
if (x == 0) return 1;
return second(x-1); // forward reference to second
}
int second(int x) {
if (x == 0) return 0;
return first(x-1);
}
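Note that in modern C the example above only compiles with a forward declaration: implicit function declarations were removed in C99, so the forward reference to second must be made legal by declaring it first:
int second(int x); // forward declaration makes the forward reference legal

int first(int x) {
    if (x == 0) return 1;
    return second(x-1); // forward reference, now resolvable in one pass
}

int second(int x) { // definition
    if (x == 0) return 0;
    return first(x-1);
}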
Note that "forward reference" is used sometimes, though less often, as a synonym for "forward declaration."
From Wikipedia:
Forward Declaration
Declaration of a variable or function that is not yet defined; the definition can be seen later on.
Forward Reference
Similar to a forward declaration, except that it refers to the use of the variable or function before its declaration appears; the definition is in place later on.
Forward declarations are used to allow single-pass compilation of a language (C, Pascal).
If forward references are allowed without forward declarations (Java, C#), a two-pass compiler is required.
While looking through some code, I found two different code snippets for initialization. I don't mean the method names, but the round brackets.
This one has just two of them:
if (self = [super initWithFrame:frame]) {
That's the way I do it all the time, and it seems to work. Now in an Apple example I found this:
if ((self = [super init])) {
Do I have to put it twice into round brackets here? Or is it just fine to put it in one pair of brackets, like the first example?
One pair of brackets is just fine.
I call them "paranoia brackets" :)
EDIT: some C/C++ compilers will issue a warning when an assignment is used as a condition (something like "did you mean ==?"), and the extra pair of parentheses marks the assignment as intentional, suppressing the warning. Clang, which current Xcode versions use, does raise this warning (-Wparentheses), which is why Apple's example doubles the parentheses.
In C-family languages, an assignment is an expression whose value is the value that was assigned. So in
if( foo = bar )
the assignment is evaluated and its result is what the if tests, just as in
blah = foo = bar
the value of foo = bar feeds the outer assignment. The extra pair of parentheses doesn't change that evaluation; it only signals that the assignment is deliberate.
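A tiny runnable illustration of that (variable names made up):
#include <stdio.h>

int main(void)
{
    int foo = 0, bar = 7;

    if ((foo = bar))          // assigns 7, then tests that value: true
        printf("assigned %d\n", foo);

    int blah = (foo = bar);   // same principle: the assignment yields 7
    printf("blah = %d\n", blah);
    return 0;
}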
In ANSI C and its children C++ and Objective-C this isn't strictly necessary. However, as has been mentioned, some compilers will issue a warning, since the "=" / "==" typo can be a nasty one. That typo led to the idiom of putting the invariant or constant on the left-hand side so the compiler catches the mistake at compile time:
if( nil == foo )
If both sides are variables, though, it's still possible to make the mistake.
There is a good reason for doing this even though gcc isn't warning you or evaluating things differently.
If you are writing code in a team environment, your peers may not be sure whether you meant "=" or just mistyped "==", causing them to peer more closely at what you're doing even though there's no need to. In the style of "write once to be read 1000 times", you put in clues to prevent people from having to waste time when reading your code. For instance, use clear, spelled-out variable names (no economy on bytes these days!). Don't use obtuse and overly optimized constructs in areas that aren't drags on performance: write your code cleanly.
In this case, use (( )) to indicate you knew it was "=" not "==".
Don't forget, Apple is writing their examples not just for a dozen people to read, but potentially for every man, woman, and child on earth to read, now and in the future. Clarity is of utmost importance.
Looks like a cut and paste error to me! (Or a lover of Lisp.) There is no good reason to have the second pair of brackets but, as you note, it's not actually harmful.
If I've got this right, both forms do the same thing: the assignment self = [super initWithFrame:frame] happens, and its result is what gets tested; the extra brackets don't change the evaluation.
But this could just be a lack of tea speaking...