This is kind of a weird and un-Swift-thonic question, so bear with me.
I want to do in Swift roughly what I'm currently doing in Objective-C/C++, so I'll start by describing that.
I have some existing C++ code that defines a macro that, when used in an expression anywhere in the code, will insert an entry into a table in the binary at compile time. In other words, the user writes something like this:
#include "magic.h"
void foo(bool b) {
    if (b) {
        printf("%d\n", MAGIC(xyzzy));
    }
}
and thanks to the definition
#define MAGIC(Name) \
    []{ static int __attribute__((used, section("__DATA,magical"))) Name; return Name; }()
what actually happens at compile time is that a static variable named xyzzy (modulo name-mangling) is created and allocated into the special magical section of my Mach-O binary, so that running nm -m foo.o to dump the symbols shows something a lot like this:
0000000000000098 (__TEXT,__eh_frame) non-external EH_frame0
0000000000000050 (__TEXT,__cstring) non-external L_.str
0000000000000000 (__TEXT,__text) external __Z3foob
00000000000000b0 (__TEXT,__eh_frame) external __Z3foob.eh
0000000000000040 (__TEXT,__text) non-external __ZZ3foobENK3$_0clEv
00000000000000d8 (__TEXT,__eh_frame) non-external __ZZ3foobENK3$_0clEv.eh
0000000000000054 (__DATA,magical) non-external [no dead strip] __ZZZ3foobENK3$_0clEvE5xyzzy
(undefined) external _printf
Through the magic of getsectbynamefromheader(), I can then load the symbol table for the magical section, scan through it, and find out (by demangling every symbol I find) that at some point in the user's code, he calls MAGIC(xyzzy). Eureka!
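(For reference, here is a minimal sketch of that second half at run time. It only locates the (__DATA,magical) section in the running executable's own Mach-O header, using the real getsectbynamefromheader_64 and _mh_execute_header APIs from <mach-o/getsect.h> and <mach-o/ldsyms.h>; the symbol-table scan and demangling are omitted, and error handling is minimal.)

#include <mach-o/getsect.h>
#include <mach-o/ldsyms.h>
#include <stdio.h>

int main(void)
{
    // Look up the (__DATA,magical) section in this executable's own header.
    const struct section_64 *sect =
        getsectbynamefromheader_64(&_mh_execute_header, "__DATA", "magical");

    if (sect)
        printf("found (__DATA,magical): addr 0x%llx, size %llu bytes\n",
               (unsigned long long)sect->addr,
               (unsigned long long)sect->size);
    return 0;
}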
I can replicate the whole second half of that workflow just fine in Swift — starting with the getsectbynamefromheader() part. However, the first part has me stumped.
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
As far as I know, Swift has no way to insert symbols into a given section — no equivalent of __attribute__((section)). This is okay, though, since nothing in my plan requires a dedicated section; that part just makes the second half easier.
As far as I know, the only way to get a symbol into the symbol table in Swift is via a local struct definition. Something like this:
func foo(b: Bool) -> Void {
    struct Local { static var xyzzy = 0 }
    println(Local.xyzzy)
}
That works, but it's a bit of extra typing, and can't be done inline in an expression (not that that'll matter if we can't make a MAGIC macro in Swift anyway), and I'm worried that the Swift compiler might optimize it away.
So, there are three questions here, all about how to make Swift do things that Swift doesn't want to do: Macros, attributes, and creating symbols that are resistant to compiler optimization.
I'm aware of #asmname but I don't think it helps me since I can already deal with demangling on my own.
I'm aware that Swift has "generics", but they seem to be closer to Java generics than to C++ templates; I don't think they can be used as a substitute for macros in this particular case.
I'm aware that the code for the Swift compiler is now open-source; I've skimmed bits of it in vain; but I can't read through all of it looking for tricks that might not even be there.
Here is the answer to your question about the preprocessor (and macros).
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
The Swift project has a preprocessor (but, AFAIK, it is not distributed with the Swift binary releases).
From swift-users mailing list:
What are .swift.gyb files?

It's a preprocessor the Swift team wrote so that when they needed to build, say, ten nearly-identical variants of Int, they wouldn't have to literally copy and paste the same code ten times. If you open one of those files, you'll see that they're mainly Swift code, but with some lines of code intermixed that are written in Python.
It is not as beautiful as C macros, but, IMHO, is more powerful.
You can see the available commands with the ./swift/utils/gyb --help command after cloning Swift's git repo.
$ swift/utils/gyb --help
usage, etc (TL;DR)...
Example template:
- Hello -
%{
x = 42
def succ(a):
return a+1
}%
I can assure you that ${x} < ${succ(x)}
% if int(y) > 7:
% for i in range(3):
y is greater than seven!
% end
% else:
y is less than or equal to seven
% end
- The End. -
When run with "gyb -Dy=9", the output is
- Hello -
I can assure you that 42 < 43
y is greater than seven!
y is greater than seven!
y is greater than seven!
- The End. -
My example of GYB usage is available as a GitHub Gist.
For more complex examples, look at the *.swift.gyb files in apple/swift/stdlib/public/core.
Related
I'm already using -O3, so I don't think I can do much better in general, but the generated assembly is still a tight fit for the timing requirements. Is there a way to, for example, tell avr-gcc to keep non-speed-critical code out of a speed-critical section?
if (READY)
{
    ACKNOWLEDGE();
    SETUP_PART1();
    SETUP_PART2();
    //start speed-critical section
    SYNC_TO_TIMER();
    MINIMAL_SPEED_CRITICAL_CODE();
    //end speed-critical section
    CLEANUP();
}
A tedious reading of the -O3-optimized assembly listing (.lss file) shows that SETUP_PART2(); has been reordered to come between SYNC_TO_TIMER(); and MINIMAL_SPEED_CRITICAL_CODE();. That's a problem because it adds time spent in the speed-critical section that doesn't need to exist.
I don't want to make the critical section itself any longer than it needs to be, so I'm hesitant to relax the optimization. (The same action that causes this problem may also be what makes the final version fit inside the time available.) So how can I tell the compiler that "this behavior" needs to stay pure and uncluttered, but still optimize it fully?
I've used inline assembly before, so I'm sure I could figure out again how to communicate between that and C without completely tying up or clobbering registers, etc. But I'd much rather stick with 100% C if I can, and let the compiler figure out how not to run over itself.
Not quite the direct answer that I was looking for, but I found something that works. So it's technically an XY problem, but I'll still take a direct answer to the question as stated.
Here's what I ended up doing:
1. Separate the speed-critical section into its own function, and prevent it from inlining:
void __attribute__((noinline)) _fast_function(uint8_t arg1, uint8_t arg2)
{
    //start speed-critical section
    SYNC_TO_TIMER();
    MINIMAL_SPEED_CRITICAL_CODE(arg1, arg2);
    //end speed-critical section
}

void normal_function(void)
{
    if (READY)
    {
        uint8_t arg1, arg2;    //filled in below
        ACKNOWLEDGE();
        SETUP_PART1();
        SETUP_PART2();         //determines arg1, arg2
        _fast_function(arg1, arg2);
        CLEANUP();
    }
}
That keeps the speed-critical stuff together, and by itself. It also adds some overhead, but that happens OUTSIDE the fast part, where I can afford it.
2. Manually unroll some loops:
The original code was something like this:
uint8_t data[3];
uint8_t byte_num = 3;

while (byte_num)
{
    byte_num--;
    uint8_t bit_mask = 0b10000000;
    while (bit_mask)
    {
        if (data[byte_num] & bit_mask) WEIRD_OUTPUT_1();
        else WEIRD_OUTPUT_0();
        bit_mask >>= 1;
    }
}
The new code is like this:
#define WEIRD_OUTPUT(byte,bit)              \
do                                          \
{                                           \
    if((byte) & (bit)) WEIRD_OUTPUT_1();    \
    else WEIRD_OUTPUT_0();                  \
} while(0)
uint8_t data[3];
WEIRD_OUTPUT(data[2],0b10000000);
WEIRD_OUTPUT(data[2],0b01000000);
WEIRD_OUTPUT(data[2],0b00100000);
WEIRD_OUTPUT(data[2],0b00010000);
WEIRD_OUTPUT(data[2],0b00001000);
WEIRD_OUTPUT(data[2],0b00000100);
WEIRD_OUTPUT(data[2],0b00000010);
WEIRD_OUTPUT(data[2],0b00000001);
WEIRD_OUTPUT(data[1],0b10000000);
WEIRD_OUTPUT(data[1],0b01000000);
WEIRD_OUTPUT(data[1],0b00100000);
WEIRD_OUTPUT(data[1],0b00010000);
WEIRD_OUTPUT(data[1],0b00001000);
WEIRD_OUTPUT(data[1],0b00000100);
WEIRD_OUTPUT(data[1],0b00000010);
WEIRD_OUTPUT(data[1],0b00000001);
WEIRD_OUTPUT(data[0],0b10000000);
WEIRD_OUTPUT(data[0],0b01000000);
WEIRD_OUTPUT(data[0],0b00100000);
WEIRD_OUTPUT(data[0],0b00010000);
WEIRD_OUTPUT(data[0],0b00001000);
WEIRD_OUTPUT(data[0],0b00000100);
WEIRD_OUTPUT(data[0],0b00000010);
WEIRD_OUTPUT(data[0],0b00000001);
In addition to dropping the loop code itself, this also replaces an array index with a direct access, and a variable with a constant, each of which offers its own speedup.
And it still barely fits in the time available. But it does fit! :)
You can try to use a memory barrier to prevent the compiler from moving code across it:

__asm volatile ("" ::: "memory");

The compiler will not move volatile accesses or memory accesses across such a barrier. It might, however, move other stuff across it, like arithmetic that does not involve memory access.
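For example, here is a minimal sketch using the placeholder names from the question; the barriers keep the setup and cleanup memory accesses out of the critical window:

if (READY)
{
    ACKNOWLEDGE();
    SETUP_PART1();
    SETUP_PART2();
    __asm volatile ("" ::: "memory"); //setup's memory accesses stay above this line
    //start speed-critical section
    SYNC_TO_TIMER();
    MINIMAL_SPEED_CRITICAL_CODE();
    //end speed-critical section
    __asm volatile ("" ::: "memory"); //cleanup's memory accesses stay below this line
    CLEANUP();
}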
If the compiler is reordering code, then it's a valid compilation w.r.t. the C language standard, which means that you'll have a hard time avoiding it with pure C code hacks.
If you are after speed, then consider writing WEIRD_OUTPUT(data[2],0b10000000); etc. as inline asm, or writing the whole function in assembly. This gives you definite control over the timing (not considering IRQs), whereas the C language standard does not make any statements on program timing whatsoever.
In this header file, struct members are declared by passing a data type to a preprocessor macro rather than by writing the data type directly. Usually a data type is used directly to declare variables, so when should a data type be passed to the preprocessor to declare variables, and why are the type and name passed to the preprocessor here?
#define DECLARE_REFERENCE(type, name) \
    union { type name; int64_t name##_; }

typedef struct _STRING
{
    int32_t flags;
    int32_t length;
    DECLARE_REFERENCE(char*, identifier);
    DECLARE_REFERENCE(uint8_t*, string);
    DECLARE_REFERENCE(uint8_t*, mask);
    DECLARE_REFERENCE(MATCH*, matches_list_head);
    DECLARE_REFERENCE(MATCH*, matches_list_tail);
    REGEXP re;
} STRING;
Why is this code doing this for declarations? Because, as the body of DECLARE_REFERENCE shows, when a type and name are passed to this macro it does more than just the declaration: it builds something else out of the name as well, for some other unknown purpose. If you only wanted to declare a variable, you wouldn't do this; it does something distinct from simply declaring one variable.
What does it actually do? The unions that the macro declares provide a second name for accessing the same space as a different type. In this case you can get at the references themselves, or at an unconverted integer representation of their bit pattern (assuming that int64_t is the same size as a pointer on the target, anyway).
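As a sketch of what that second name buys you (the EXAMPLE struct and main are made up for illustration; assumes a C11 compiler with anonymous unions and 64-bit pointers):

#include <inttypes.h>
#include <stdio.h>

#define DECLARE_REFERENCE(type, name) \
    union { type name; int64_t name##_; }

typedef struct
{
    DECLARE_REFERENCE(char*, identifier);
} EXAMPLE;

int main(void)
{
    EXAMPLE e;
    e.identifier = "hello";                    //normal, typed access
    printf("0x%" PRIx64 "\n", e.identifier_);  //same bits, viewed as an integer
    return 0;
}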
Using a macro for this potentially serves several purposes I can think of off the bat:
Saves keystrokes
Makes the code more readable - but only to people who already know what the macros mean
If the secondary way of getting at reference data is only used for debugging purposes, it can be disabled easily for a release build, generating compiler errors on any surviving debug code
It enforces the secondary status of the access path, hiding it from people who just want to see what's contained in the struct and its formal interface
Should you do this? No. This does more than just declare variables; it also does something else, and that other thing is clearly specific to the gory internals of the rest of the containing program. Without seeing the rest of the program, we may never fully understand the rest of what it does.
When you need to do something specific to the internals of your program, you'll (hopefully) know when it's time to invent your own thing-like-this (most likely never); but don't copy others.
So the overall lesson here is to identify places where people aren't writing in straightforward C but are coding to their particular application, to keep those two things separate, and not to take quirks of a specific program as guidelines for the language as a whole.
Sometimes it is necessary to have a number of declarations which are guaranteed to have some relationship to each other. Some simple kinds of relationships such as constants that need to be numbered consecutively can be handled using enum declarations, but some applications require more complex relationships that the compiler can't handle directly. For example, one might wish to have a set of enum values and a set of string literals and ensure that they remain in sync with each other. If one declares something like:
#define GENERATE_STATE_ENUM_LIST \
    ENUM_LIST_ITEM(STATE_DEFAULT, "Default") \
    ENUM_LIST_ITEM(STATE_INIT, "Initializing") \
    ENUM_LIST_ITEM(STATE_READY, "Ready") \
    ENUM_LIST_ITEM(STATE_SLEEPING, "Sleeping") \
    ENUM_LIST_ITEM(STATE_REQ_SYNC, "Starting synchronization") \
    // This line should be left blank except for this comment
Then code can use the GENERATE_STATE_ENUM_LIST macro both to declare an enum type and a string array, and ensure that even if items are added or removed from the list each string will match up with its proper enum value. By contrast, if the array and enum declarations were separate, adding a new state to one but not the other could cause the values to get "out of sync".
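For instance, here is one common way to expand the list twice (a sketch; the STATE and state_names names are illustrative):

//Expand the list once to build the enum...
#define ENUM_LIST_ITEM(name, text) name,
typedef enum { GENERATE_STATE_ENUM_LIST STATE_COUNT } STATE;
#undef ENUM_LIST_ITEM

//...and once more to build the matching string table.
#define ENUM_LIST_ITEM(name, text) text,
static const char *state_names[] = { GENERATE_STATE_ENUM_LIST };
#undef ENUM_LIST_ITEM

//state_names[STATE_READY] is "Ready", and stays correct as items are added or removed.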
I'm not sure what the purpose of the macros is in your particular case, but the pattern can sometimes be a reasonable one. The biggest question is whether it's better to (ab)use the C preprocessor so that such relationships can be expressed in valid-but-ugly C code, or whether it would be better to use some other tool that takes a list of states and generates the appropriate C code from it.
I'm using an open source project (NSBKeyframeAnimation) for some of the animations in my project. Here is an example of the methods I'm using:
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*(t/=d)*t + b;
}
I have updated my Xcode to 5.0, and every method from this project started to show warnings like this: "Unsequenced modification and access to 't'". Should I rewrite all these methods, or is there another approach to get rid of all the warnings?
The behavior of the expression c*(t/=d)*t + b is undefined, and you should fix it,
e.g. to
t /= d;
return c*t*t + b;
See for example Undefined behavior and sequence points for a detailed explanation.
Those warnings can be disabled. Put this before the code triggering the warning:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
and this after that code:
#pragma clang diagnostic pop
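Put together with one of the functions from the question, that looks like this:

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunsequenced"
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*(t/=d)*t + b;   //warning suppressed, but the unsequenced access remains
}
#pragma clang diagnostic pop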
However, nothing guarantees that compilers will always handle this case gracefully.
I came to this page because I got 50 of those warnings, in exactly the same source file. I'm grateful for those functions, but the programmer should realize that trying to write everything on one line is very "1980s", from when compilers weren't nearly as optimized as today. And back when it actually mattered to win a few processor cycles, we only had a few million, not the billions we have now. I would always put readability first.
The warning you are referring to appears in all versions of Xcode, seeing as it is not Xcode that is the source of the warning, but the programmer; and it is not Xcode that generates the warning, it is the Clang compiler Xcode uses to build your code that is responsible for identifying the potential issue the warning raises.

While it may be that rewriting the expression will resolve the warning, you do not have to do that in this case (or, technically, any other case like this). You can leave it as is.
The answer is to add sequence (or order) to the modification of the variable (i.e., to the assignment of a new value) and to its expression (i.e., to the return of its new value), and that only takes a few extra characters to achieve, namely, a pair of braces ensconced in parentheses and a semicolon (see Statements and Declarations in Expressions).
double NSBKeyframeAnimationFunctionEaseInQuad(double t, double b, double c, double d)
{
    return c*({ t /= d; })*t + b;
}
I'm trying to understand a specific thing about ocaml modules and their compilation:
am I forced to redeclare types already declared in a .mli inside the specific .ml implementations?
Just to give an example:
(* foo.mli *)
type foobar = Bool of bool | Float of float | Int of int
(* foo.ml *)
type baz = foobar option
This, according to my normal way of thinking about interfaces/implementations, should be ok but it says
Error: Unbound type constructor foobar
while trying to compile with
ocamlc -c foo.mli
ocamlc -c foo.ml
Of course the error disappears if I also declare foobar inside foo.ml, but that seems cumbersome, since I have to keep the two declarations in sync on every change.
Is there a way to avoid this redundancy, or am I forced to redeclare types every time?
Thanks in advance
OCaml tries to force you to separate the interface (.mli) from the implementation (.ml). Most of the time, this is a good thing; for values, you publish the type in the interface, and keep the code in the implementation. You could say that OCaml is enforcing a certain amount of abstraction (interfaces must be published; no code in interfaces).
For types, very often, the implementation is the same as the interface: both state that the type has a particular representation (and perhaps that the type declaration is generative). Here, there can be no abstraction, because the implementer doesn't have any information about the type that he doesn't want to publish. (The exception is basically when you declare an abstract type.)
One way to look at it is that the interface already contains enough information to write the implementation. Given the interface type foobar = Bool of bool | Float of float | Int of int, there is only one possible implementation. So don't write an implementation!
A common idiom is to have a module that is dedicated to type declarations, and make it have only a .mli. Since types don't depend on values, this module typically comes in very early in the dependency chain. Most compilation tools cope well with this; for example ocamldep will do the right thing. (This is one advantage over having only a .ml.)
The limitation of this approach is when you also need a few module definitions here and there. (A typical example is defining a type foo, then an OrderedFoo : Map.OrderedType module with type t = foo, then a further type declaration involving 'a Map.Make(OrderedFoo).t.) These can't be put in interface files. Sometimes it's acceptable to break down your definitions into several chunks, first a bunch of types (types1.mli), then a module (mod1.mli and mod1.ml), then more types (types2.mli). Other times (for example if the definitions are recursive) you have to live with either a .ml without a .mli or duplication.
Yes, you are forced to redeclare types. The only ways around it that I know of are
Don't use a .mli file; just expose everything with no interface. Terrible idea.
Use a literate-programming tool or other preprocessor to avoid duplicating the interface declarations in the One True Source. For large projects, we do this in my group.
For small projects, we just duplicate type declarations. And grumble about it.
You can let ocamlc generate the mli file for you from the ml file:
ocamlc -i some.ml > some.mli
In general, yes, you are required to duplicate the types.
You can work around this, however, with Camlp4 and the pa_macro syntax extension (findlib package: camlp4.macro). It defines, among other things, an INCLUDE construct. You can use it to factor the common type definitions out into a separate file and include that file in both the .ml and .mli files. I haven't seen this done in a deployed OCaml project, however, so I don't know whether it would qualify as recommended practice, but it is possible.
The literate programming solution, however, is cleaner IMO.
No, in the .mli file, just say type foobar, leaving the type abstract. This will work (though client code will then no longer see the constructors).
What is the difference between forward declaration and forward reference?
A forward declaration is, in my head, when you declare a function that isn't yet implemented, but is this incorrect? Do you have to look at the specific situation to decide whether to call it a "forward reference" or a "forward declaration"?
A forward declaration is the declaration of a method or variable before you implement and use it. The purpose of forward declarations is to save compilation time.
The forward declaration of a variable tells the compiler the variable's type, so that you can refer to the variable before its definition sets aside storage for it.

The forward declaration of a function is also called a "function prototype," and is a declaration statement that tells the compiler what a function's return type is, what the name of the function is, and the types of its parameters. Compilers in languages such as C/C++ and Pascal store declared symbols (which include functions) in a lookup table and reference them as they come across them in your code. These compilers read your code sequentially, that is, top to bottom, so if you don't forward declare, the compiler discovers a symbol it can't find in the lookup table, and it raises an error because it doesn't know how to respond to the function.
The forward declaration is a promise to the compiler that you have defined (filled out the implementation of) the function elsewhere.
For example:
int first(int x); // forward declaration of first
...
int first(int x) {
    if (x == 0) return 1;
    else return 2;
}
But, you ask, why don't we just have the compiler make two passes on every source file: the first one to index all the symbols inside, and the second to parse the references and look them up? According to Dan Story:
When C was created in 1972, computing resources were much more scarce and at a high premium -- the memory required to store a complex program's entire symbolic table at once simply wasn't available in most systems. Fixed storage was also expensive, and extremely slow, so ideas like virtual memory or storing parts of the symbolic table on disk simply wouldn't have allowed compilation in a reasonable timeframe... When you're dealing with magnetic tape where seek times were measured in seconds and read throughput was measured in bytes per second (not kilobytes or megabytes), that was pretty meaningful.

C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism.

By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered.

As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
In Java and C#, identifiers are recognized automatically from source files and read directly from dynamic library symbols. In these languages, header files are not needed for the same reason.
A forward reference is the opposite. It refers to the use of an entity before its declaration. For example:
int first(int x) {
    if (x == 0) return 1;
    return second(x-1); // forward reference to second
}

int second(int x) {
    if (x == 0) return 0;
    return first(x-1);
}
Note that "forward reference" is used sometimes, though less often, as a synonym for "forward declaration."
From Wikipedia:
Forward Declaration
Declaration of a variable or function which is not defined yet. Its definition can be seen later on.
Forward Reference
Similar to a forward declaration, but where the variable or function is used first and its definition comes later.
Forward declarations are used to allow single-pass compilation of a language (C, Pascal).
If forward references are allowed without a forward declaration (Java, C#), a two-pass compiler is required.