I'm writing some code that interfaces with an existing library written in C. In my Rust code I'd like to be able to use values from CPP macros. If I have a C include.h that looks like this:
#define INIT_FLAG 0x00000001
I'd like to be able to use it in Rust like this:
#[link(name="mylib")]
extern {
    pub static init_flag: c_int = INIT_FLAG;
}
I've looked at other FFI code and I see a lot of people
duplicating these values in Rust instead of getting them from the FFI.
This seems a little brittle, and I'd also like to be able to handle
more complicated things that are defined via CPP macros.
Running cpp over my Rust files would only work if I'm sure my
CPP macros are only used for simple things.
It is not possible, and I don't think it will be possible in the future: C macros bring too many problems with them. If you want to run cpp over your Rust sources, you can do it manually.
If you don't want to do that, and there are a lot of constants whose values you don't want to copy from the C code into Rust by hand, you can write a C wrapper that provides global variables with these values:
#define INIT_FLAG 0x00000001
...
const int init_flag = INIT_FLAG;
You compile this file, create a static library from it and link to it as usual:
$ gcc -c init_flag.c
$ ar r libinitflag.a init_flag.o
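For reference, a complete glue file might look like this (a minimal sketch; the header name include.h is taken from the question):
/* init_flag.c: exposes the macro's value as a real, linkable symbol */
#include "include.h"   /* provides INIT_FLAG */
const int init_flag = INIT_FLAG;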
Rust source:
use std::libc;
#[link(name="initflag", kind="static")]
extern {
    pub static init_flag: libc::c_int;
}
The Rust source is nearly identical to what you tried to achieve; you will need the C glue object file, however.
That's simply impossible, because a C macro constant doesn't represent any object or entity at runtime. The cpp preprocessor performs macro expansion (and handles the other directives) before compilation even takes place. Consider the following snippet:
#define INIT_FLAG 0x00000001
/* some code */
unsigned dummy() { return INIT_FLAG; }
/* some other code */
Running cpp on the snippet yields preprocessed code (a so-called compilation unit, or translation unit) in which every occurrence of INIT_FLAG has been replaced by the literal 0x00000001:
unsigned dummy() { return 0x00000001; }
The compilation unit then gets compiled, resulting in the object file, but by now there's no trace of INIT_FLAG in it. Therefore, you cannot refer to INIT_FLAG when linking against the object file: it simply doesn't contain such a symbol.
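You can see this for yourself by inspecting the object file's symbol table (assuming the snippet above is saved as snippet.c):
$ gcc -c snippet.c
$ nm snippet.o
The output lists dummy (perhaps with a leading underscore, depending on the platform), but there is no trace of INIT_FLAG.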
This is kind of a weird and un-Swift-thonic question, so bear with me.
I want to do in Swift something like the same thing I'm currently doing in Objective-C/C++, so I'll start by describing that.
I have some existing C++ code that defines a macro that, when used in an expression anywhere in the code, will insert an entry into a table in the binary at compile time. In other words, the user writes something like this:
#include "magic.h"
void foo(bool b) {
    if (b) {
        printf("%d\n", MAGIC(xyzzy));
    }
}
and thanks to the definition
#define MAGIC(Name) \
[]{ static int __attribute__((used, section("DATA,magical"))) Name; return Name; }()
what actually happens at compile time is that a static variable named xyzzy (modulo name-mangling) is created and allocated into the special magical section of my Mach-O binary, so that running nm -m foo.o to dump the symbols shows something a lot like this:
0000000000000098 (__TEXT,__eh_frame) non-external EH_frame0
0000000000000050 (__TEXT,__cstring) non-external L_.str
0000000000000000 (__TEXT,__text) external __Z3foob
00000000000000b0 (__TEXT,__eh_frame) external __Z3foob.eh
0000000000000040 (__TEXT,__text) non-external __ZZ3foobENK3$_0clEv
00000000000000d8 (__TEXT,__eh_frame) non-external __ZZ3foobENK3$_0clEv.eh
0000000000000054 (__DATA,magical) non-external [no dead strip] __ZZZ3foobENK3$_0clEvE5xyzzy
(undefined) external _printf
Through the magic of getsectbynamefromheader(), I can then load the symbol table for the magical section, scan through it, and find out (by demangling every symbol I find) that at some point in the user's code, he calls MAGIC(xyzzy). Eureka!
I can replicate the whole second half of that workflow just fine in Swift — starting with the getsectbynamefromheader() part. However, the first part has me stumped.
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
As far as I know, Swift has no way to insert symbols into a given section — no equivalent of __attribute__((section)). This is okay, though, since nothing in my plan requires a dedicated section; that part just makes the second half easier.
As far as I know, the only way to get a symbol into the symbol table in Swift is via a local struct definition. Something like this:
func foo(b: Bool) -> Void {
    struct Local { static var xyzzy = 0; };
    println(Local.xyzzy);
}
That works, but it's a bit of extra typing, and can't be done inline in an expression (not that that'll matter if we can't make a MAGIC macro in Swift anyway), and I'm worried that the Swift compiler might optimize it away.
So, there are three questions here, all about how to make Swift do things that Swift doesn't want to do: Macros, attributes, and creating symbols that are resistant to compiler optimization.
I'm aware of #asmname but I don't think it helps me since I can already deal with demangling on my own.
I'm aware that Swift has "generics", but they seem to be closer to Java generics than to C++ templates; I don't think they can be used as a substitute for macros in this particular case.
I'm aware that the code for the Swift compiler is now open-source; I've skimmed bits of it in vain; but I can't read through all of it looking for tricks that might not even be there.
Here is the answer to your question about the preprocessor (and macros).
Swift has no preprocessor, so spelling the magic as elegantly as MAGIC(someidentifier) is impossible. I don't want it to be too ugly, though.
The Swift project has a preprocessor (but, AFAIK, it is not distributed with Swift's binary releases).
From swift-users mailing list:
What are .swift.gyb files?
It’s a preprocessor the Swift team wrote so that when they needed to build, say, ten nearly-identical variants of Int, they wouldn’t have to literally copy and paste the same code ten times. If you open one of those files, you’ll see that they’re mainly Swift code, but with some lines of code intermixed that are written in Python.
It is not as beautiful as C macros, but, IMHO, is more powerful.
You can see the available commands by running ./swift/utils/gyb --help after cloning the Swift git repo.
$ swift/utils/gyb --help
usage, etc (TL;DR)...
Example template:
- Hello -
%{
x = 42
def succ(a):
    return a+1
}%
I can assure you that ${x} < ${succ(x)}
% if int(y) > 7:
% for i in range(3):
y is greater than seven!
% end
% else:
y is less than or equal to seven
% end
- The End. -
When run with "gyb -Dy=9", the output is
- Hello -
I can assure you that 42 < 43
y is greater than seven!
y is greater than seven!
y is greater than seven!
- The End. -
My example of GYB usage is available as a GitHub Gist.
For more complex examples, look for *.swift.gyb files in apple/swift/stdlib/public/core.
I am adding an embedded Python interpreter to Exim. I have copied the embedded Perl interface and expect Python to work the same as the long-since-coded embedded Perl interpreter. The goal is to allow the sysadmin to do complex functions in a powerful scripting language (i.e. Python) instead of trying to use Exim's standard ACL commands, because it can get quite complex to do relatively simple things using the Exim ACL language.
My current code as of the time of this writing is located at http://git.exim.org/users/tlyons/exim.git/blob/9b2c5e1427d3861a2154bba04ac9b1f2420908f7:/src/src/python.c . It is working properly in that it can import the sysadmin's custom Python code, call functions in it, and handle the returned values (simple return types only: int, float, or string). However, it does not yet handle values that are passed to a Python function, which is where my question begins.
Python seems to require that any args I pass to the embedded Python function be explicitly cast to one of int, long, double, float, or string using the C API. The problem is that the sysadmin can put anything in that embedded Python code, and on the C side of things in Exim I won't know what those variable types are. I know that Python is dynamically typed, so I was hoping to maintain that behavior when passing values to the embedded code. But it's not working that way in my testing.
Using the following basic, super-simple Python code:
def dumb_add(a,b):
    return a+b
...and the calling code from my Exim ACL language is:
${python {dumb_add}{800}{100}}
In my C code below, reference counting is omitted for brevity; count is the number of args I'm passing:
pArgs = PyTuple_New(count);
for (i = 0; i < count; ++i)
{
    pValue = PyString_FromString((const char *)arg[i]);
    PyTuple_SetItem(pArgs, i, pValue);
}
pReturn = PyObject_CallObject(pFunc, pArgs);
Yes, **arg is a pointer to an array of strings (two strings in this simple case). The problem is that the two values are treated as strings in the Python code, so the result of that C code executing the embedded Python is:
${python {dumb_add}{800}{100}}
800100
If I change the Python to be:
def dumb_add(a,b):
    return int(a)+int(b)
Then the result of that C code executing the Python code is as expected:
${python {dumb_add}{800}{100}}
900
My goal is that I don't want to force a Python user to manually cast all of the numeric parameters they pass to an embedded Python function. If, instead of PyString_FromString(), there were a PyDynamicType_FromString(), I would be ecstatic. Exim's embedded Perl parses the args and does the casting automatically; I was hoping for the same from the embedded Python. Can anybody suggest whether Python can do this arg parsing to provide the dynamic typing I was expecting?
Or, if I want to maintain that dynamic typing, is my only option going to be to parse each arg and guess at the type to cast it to? I was really really REALLY hoping to avoid that approach. If it comes to that, I may just document "All parameters passed are strings, so if you are actually trying to pass numbers, you must cast all parameters with int(), float(), or long()". However, and there is always a comma after however, I feel that approach will sour strong Python coders on my implementation. I want to avoid that too.
Any and all suggestions are appreciated, aside from "make your app into a Python module".
The way I ended up solving this was by finding out how many args the function expected, and exiting with an error if the number of args passed to the function didn't match. Rather than trying to synthesize missing args or simply omit extra args, for my use case I felt it was best to enforce matching arg counts.
The args are passed to this function as an unsigned char ** arg:
int count = 0;

/* Identify and call the appropriate function */
pFunc = PyObject_GetAttrString(pModule, (const char *) name);
if (pFunc && PyCallable_Check(pFunc))
{
    PyCodeObject *pFuncCode = (PyCodeObject *)PyFunction_GET_CODE(pFunc);
    /* Should not fail if pFunc succeeded, but check to be thorough */
    if (!pFuncCode)
    {
        *errstrp = string_sprintf("Can't check function arg count for %s",
                                  name);
        return NULL;
    }

    while (arg[count])
        count++;

    /* Sanity checking: calling a Python object requires stating the number
       of args being passed; bail if it doesn't match the function declaration. */
    if (count != pFuncCode->co_argcount)
    {
        *errstrp = string_sprintf("Expected %d args to %s, was passed %d",
                                  pFuncCode->co_argcount, name, count);
        return NULL;
    }
string_sprintf is a function within the Exim source code which also handles memory allocation, making life easy for me.
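If you do later decide to attempt best-effort dynamic typing rather than enforcing string args, a minimal sketch (assuming the Python 2 C API already used above; py_from_string_guess is a hypothetical helper name) could try integer and float parsing before falling back to a string:
#include <Python.h>
#include <stdlib.h>

/* Sketch: build the "most specific" Python object from a C string.
   Tries int first, then float, and falls back to a plain string. */
static PyObject *py_from_string_guess(const char *s)
{
    char *end = NULL;
    double d;
    PyObject *obj;

    /* PyInt_FromString returns NULL (with ValueError set) unless the
       whole string is an integer literal */
    obj = PyInt_FromString((char *)s, &end, 10);
    if (obj != NULL)
        return obj;
    PyErr_Clear();

    /* next, try a float via strtod */
    d = strtod(s, &end);
    if (end != s && *end == '\0')
        return PyFloat_FromDouble(d);

    /* otherwise, keep it as a string */
    return PyString_FromString(s);
}
In the tuple-building loop from the question, pValue = py_from_string_guess((const char *)arg[i]); would then replace the PyString_FromString() call.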
What is the use/applicability of a function-like macro with an empty definition:
#ifndef __SYSCALL
#define __SYSCALL(a, b)
#endif
One can find this macro on Linux systems in the header file /usr/include/asm/msr.h.
I also noticed a macro of the following kind:
#define _M(x) x
The only reason I can think of for defining this kind of macro is to make code uniform, as in #define SOMETHING (1 << 0). Is there any other hidden (better) use for this kind of macro?
An answer with an example would be very helpful. Also, can someone provide a text/link to read about this?
One of the most common uses of a macro of this form:
#define _M(x) x
is to provide backwards compatibility for compilers that only supported the original K&R dialect of C, which predated the now-ubiquitous ANSI C dialect. In the original K&R dialect of the language, function arguments were not specified when declaring the function. In 1989, ANSI standardized the language and incorporated a number of improvements, including function prototypes that declare the number and types of arguments.
int f(int x, double y); /* ANSI C. K&R compilers would not accept this */
int f(); /* Function declared in the original K&R dialect */
While compilers that support only the original K&R dialect of C are rare (or extinct) these days, a lot of software was written when both kinds of compilers needed to be supported, and macros provided an easy way to support both. There are still a lot of headers lying around that provide this backwards compatibility.
To provide backwards compatibility for K&R compilers, many header files have the following:
#if ANSI_PROTOTYPES
# define _P(x) x
#else
# define _P(x) ()
#endif
...
int f _P((int x, double y));
If the ANSI_PROTOTYPES definition has been correctly set (either by the user or by some prior #ifdef logic), then you get the desired behavior:
If ANSI_PROTOTYPES is defined, the declaration expands to int f(int x, double y).
If ANSI_PROTOTYPES is not defined, the declaration expands to int f().
This is often used with conditional compilation to disable a macro by causing it to be preprocessed away to nothing. For example (simplified):
#ifdef DEBUG
#define ASSERT(x) if(!(x)) { abort(); }
#else
#define ASSERT(x) /* nothing */
#endif
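At a call site, the macro then either expands to the real check or disappears entirely. A quick sketch of what that means (divide is an illustrative name; abort() comes from stdlib.h):
int divide(int a, int b)
{
    ASSERT(b != 0);   /* expands to the if/abort check with DEBUG,
                         and to a bare ; without it */
    return a / b;
}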
Just a follow-up to my question.
I got good answers, but I am also adding some more helpful examples of where macros with empty definitions are useful; one may find them helpful in the future:
(1) Why do I see THROW in a C library?
It is used to share a header file between C and C++. The macro name is __THROW(x):
#ifdef __cplusplus
#define __THROW(x) throw(x)
#else
#define __THROW(x)
#endif
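A shared declaration can then carry an exception specification only when compiled as C++ (a sketch; xmalloc is an illustrative name, and size_t needs stddef.h):
#include <stddef.h>
/* C sees:   void *xmalloc(size_t n);
   C++ sees: void *xmalloc(size_t n) throw(std::bad_alloc);  (needs <new>) */
void *xmalloc(size_t n) __THROW(std::bad_alloc);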
(2) To eliminate warnings when a function parameter isn't used:
This use is for C++. In C it will cause an error, but in C++ it works with no error:
#define UNUSED(x)

int value = 0;

int foo(int UNUSED(value))
{
    return 42;
}

int main() {
    foo(value);
}
(for this I added the c++ tag to my question)
Additionally,
(3) The use of #define _M(x) x is just to make code line up uniformly:
/* Signed. */
# define INT8_C(c) c
# define INT16_C(c) c
# define INT32_C(c) c
# if __WORDSIZE == 64
# define INT64_C(c) c ## L
# else
# define INT64_C(c) c ## LL
# endif
The file is /usr/include/stdint.h.
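These macros make 64-bit constants portable. For example:
#include <stdint.h>
/* INT64_C appends the suffix (L or LL) that gives the constant a
   64-bit type on the target platform */
int64_t big = INT64_C(0x100000000);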
It means that code that uses that macro will conditionally preprocess away to nothing.
As simple examples, consider debug code, logging or assertions.
This is probably a debug macro or a platform macro. For example, let's say I have a debugger attached to INT3. I might have this while I'm debugging:
#define debug() INT3()
Then, to be safe, I'll add this to production code (to make sure I took them all out):
#define debug()
This looks like something similar. It could be that in some cases, on some systems, this code needs to make a call (for example, on a certain CPU architecture or OS), but on your system it is just no-opped.
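For __SYSCALL specifically, a common pattern (a sketch, not the actual kernel header) is that the empty default makes a list header safe to include anywhere, while an includer that wants a table redefines the macro before including:
/* syscall_list.h: the list, written once (names are illustrative) */
#ifndef __SYSCALL
#define __SYSCALL(nr, entry)   /* default: expands to nothing */
#endif
__SYSCALL(0, sys_read)
__SYSCALL(1, sys_write)
#undef __SYSCALL

/* table.c: redefine the macro to generate a name table */
#define __SYSCALL(nr, entry) [nr] = #entry,
static const char *names[] = {
#include "syscall_list.h"
};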
I'm looking to move some of my lighter-weight metaprogramming from Nemerle to Boo, and I'm trying to figure out how to define custom operators. For example, I can do the following in Nemerle:
macro #<-(func, v) {
    <[ $func($v) ]>
}
Then these two are equivalent:
foo <- 5;
foo(5);
I can't find a way of doing this in Boo -- any ideas?
While Boo supports operator overloading by defining the appropriate static operator function (op_Addition), and also supports syntactic macros, it does not support creating custom operators at this time.
I'm not sure if this is exactly what you need, but you can create syntactic macros in Boo. There's some information on the CodeHaus site, http://boo.codehaus.org/Syntactic+Macros, but the syntax has changed in one of the recent releases. I don't know of any tutorials on the new syntax, but the source release for Boo 0.8.2 has some examples (some of the language structures are implemented as macros). If you don't want to download the full source, a view of the SVN repository is available at https://svn.codehaus.org/boo/boo/trunk/src/Boo.Lang.Extensions/Macros/. The assert macro would be a good place to start.
HTH
Stoo
How does function name scoping work across multiple C files?
I'm porting a standard gnu toolchain project to iPhone OS, and using Xcode to do it.
The code builds through make, but not through Xcode. When building through Xcode, the linker complains that the same symbol (function) is defined in two objects. The code has two distinct source files that #include a common file between them. While... odd (to me at least), it seems to work with the standard toolchain. Any ideas if this is something that's somehow handled differently through a standard makefile?
All functions not marked static have global scope (they are dumped in a single namespace). Functions marked static are limited to the translation unit they are defined in.
You have a One Definition Rule violation.
One of your headers probably has a definition for a variable. E.g., in common.h you have:
int foo = 42;
change it to:
extern int foo; // a declaration
and then create a common.c file where you put a definition:
int foo = 42;
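Putting it together, a sketch of the fixed layout (the include guard is standard practice, added here for completeness):
/* common.h */
#ifndef COMMON_H
#define COMMON_H
extern int foo;   /* declaration only: no storage allocated here */
#endif

/* common.c */
#include "common.h"
int foo = 42;     /* the single definition */
Every file that includes common.h now refers to the same foo, and the linker sees exactly one definition.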
In C, if I remember correctly, static function names are local to the source file in which they are defined, but all other function names exist in a global namespace. So if you have file1.c with
void fn1() {}
static void fn2() {}
and file2.c with
void fn1() {}
static void fn2() {}
and you tried to compile them with something like
cc file1.c file2.c
then you would get a name conflict between the fn1 in file1.c and the fn1 in file2.c, but not between the two fn2 functions (because they're static). (Of course, you'd get a bunch of other errors too because this program doesn't do anything, but those aren't relevant to scoping.)
If it compiles without errors from the Makefile, but not from your XCode project, it is most likely because the compiler's options are being set differently in the two environments. Check the Makefile to see what options are passed to 'gcc'. In particular, it's possible that the duplicate definitions are conditionally-compiled with #ifdef, and you may need to add some preprocessor definitions to your XCode project's settings.
My guess is that the common header defines an inline function that is resulting in a duplicate symbol. If that is the case, prefix those functions with static inline, or define them as inline with an extern declaration in another .c file.
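For example, a header-safe version of such a function might look like this (a sketch; clamp_positive is an illustrative name):
/* common.h */
static inline int clamp_positive(int x)
{
    /* static inline gives each translation unit its own private copy,
       so no duplicate external symbol is emitted at link time */
    return x < 0 ? 0 : x;
}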