C/Objective-C question - iPhone

I am trying to use a C function from a .c file within my Objective-C class.
I have imported the header for the .c file, but I am still getting a problem and my program will not compile:
Undefined symbols:
"gluUnProject(float, float, float, float const*, float const*, int const*, float*, float*, float*)", referenced from:
-[GLView checkCollission:object:] in GLView.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
Any idea how I can resolve this issue?
Any help is certainly appreciated.
Qutaibah

This error comes from the linker, not the compiler. It is often caused by code being compiled as C but called from C++ (or Objective-C++), or the other way around, so the symbol names do not match.
You can normally fix this by ensuring that the declarations in the header file force any C++ compiler to use C linkage, by adding the following to the header:
#ifdef __cplusplus
extern "C" {
#endif
... declarations go here ...
#ifdef __cplusplus
}
#endif
This also ensures that the .c file itself treats the function definitions as C and does not accidentally get compiled as C++.
If you do not wish to alter the header, you can wrap the #include statement in the same way, as shown below. This will, however, not ensure correct compilation of the .c file itself.
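For example, in the importing Objective-C++ source the include could be wrapped like this (the header name glu_wrapper.h is only a placeholder for whatever header declares gluUnProject):
#ifdef __cplusplus
extern "C" {
#endif
#include "glu_wrapper.h"   /* placeholder for the header that declares gluUnProject */
#ifdef __cplusplus
}
#endif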
EDIT:
Just a thought: I presume that you are actually compiling the .c file?

Related

PostgreSQL: Can't match function in DLL

I created a custom DLL containing the following function:
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
#include "utils/geo_decls.h"
PG_MODULE_MAGIC;
/* by value */
PG_FUNCTION_INFO_V1(add_one);
Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);
    PG_RETURN_INT32(arg + 1);
}
But when I try to create it in pgAdmin 4 using the following command:
CREATE FUNCTION add_one(INTEGER) RETURNS INTEGER
AS '$libdir/new_func', 'add_one'
LANGUAGE C STRICT ;
It says:
ERROR: cannot match function "add_one" in file "C:/Program Files/PostgreSQL/14/lib/new_func.dll"
SQL state: 42883
The DLL was compiled according to this guide.
Is there any way to check the list of functions in a .dll using pgAdmin? Or how else can I try to solve this?
After some time I found out why my DLL could not be read: I had used the wrong compiler.
At first I used the compiler from this site, then I followed the guide in the question.
A few days ago I found another MinGW compiler, and now everything works.
The compiler I ended up using is here.

implicit declaration of function ‘bpf’

I have been studying BPF recently, but I am stuck on a very basic problem.
I included linux/bpf.h as described in man bpf(2), but GCC cannot find the bpf function. This code is just a test to make sure that GCC can find it:
#include <linux/bpf.h>
int main()
{
    bpf(0,(void *)0,0);
    return 0;
}
GCC's output is this:
$ gcc -o test bpf.c
bpf.c: In function ‘main’:
bpf.c:5:2: warning: implicit declaration of function ‘bpf’ [-Wimplicit-function-declaration]
bpf(0,(void *)0,0);
^~~
/usr/bin/ld: /tmp/cc4tjrUh.o: in function `main':
bpf.c:(.text+0x19): undefined reference to `bpf'
collect2: error: ld returned 1 exit status
I'm using Arch Linux and the kernel version is 4.20.11-arch1-1-ARCH.
How can I call the bpf function?
The manual page documents the system call bpf. While this is not intuitive, there is actually no function defined in the header <linux/bpf.h> that is simply called bpf(). Instead, you can do an indirect syscall with syscall(__NR_bpf, ...) (see also man syscall).
Projects relying on this syscall often define a wrapper that looks like this:
#include <linux/bpf.h>    /* enum bpf_cmd, union bpf_attr */
#include <sys/syscall.h>  /* __NR_bpf */
#include <unistd.h>       /* syscall() */

int bpf(enum bpf_cmd cmd, union bpf_attr *attr, unsigned int size)
{
    return syscall(__NR_bpf, cmd, attr, size);
}
Here is an example from libbpf.
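For completeness, here is a minimal, untested sketch of how such a wrapper might be used, building on the includes above. It creates a one-element array map; the map parameters are arbitrary and only illustrate the calling convention:
#include <string.h>

int create_example_map(void)
{
    union bpf_attr attr;

    memset(&attr, 0, sizeof(attr));      /* unused fields must be zero */
    attr.map_type    = BPF_MAP_TYPE_ARRAY;
    attr.key_size    = sizeof(__u32);    /* array maps require 4-byte keys */
    attr.value_size  = sizeof(__u64);
    attr.max_entries = 1;

    return bpf(BPF_MAP_CREATE, &attr, sizeof(attr));  /* fd on success, -1 on error */
}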

How do I use C preprocessor macros with Rust's FFI?

I'm writing some code that interfaces with an existing library written in C. In my Rust code I'd like to be able to use values defined by C preprocessor (CPP) macros. If I have a C header, include.h, that looks like this:
#define INIT_FLAG 0x00000001
I'd like to be able to use it in Rust like this:
#[link(name="mylib")]
extern {
    pub static init_flag: c_int = INIT_FLAG;
}
I've looked at other FFI code and I see a lot of people duplicating these values in Rust instead of getting them from the FFI. This seems a little brittle, and I'd also like to be able to handle more complicated things that are defined via CPP macros.
Running cpp over my Rust files would only work if I'm sure my CPP macros are only used for simple things.
It is impossible, and I don't think it will be possible in the future; C macros bring too many problems with them. If you want to run cpp over your Rust sources, you can do it manually.
If you don't want to do that, and there are a lot of constants whose values you don't want to copy from the C code into Rust, you can write a C wrapper that provides global variables with these values:
#define INIT_FLAG 0x00000001
...
const int init_flag = INIT_FLAG;
You compile this file, create a static library from it and link to it as usual:
$ gcc -c init_flag.c
$ ar r libinitflag.a init_flag.o
Rust source:
extern crate libc; // c_int comes from the libc crate

#[link(name="initflag", kind="static")]
extern {
    pub static init_flag: libc::c_int;
}
The Rust source is nearly identical to what you tried to achieve. You will need the C glue object file, however.
That's simply impossible, because a C macro constant doesn't represent any object or entity at runtime: the cpp preprocessor performs macro expansion (and handles the other directives) before compilation even takes place. Consider the following snippet:
#define INIT_FLAG 0x00000001
/* some code */
unsigned dummy() { return INIT_FLAG; }
/* some other code */
Running cpp on the snippet yields preprocessed code (the so-called compilation unit, or translation unit) in which every occurrence of INIT_FLAG has been replaced by the literal 0x00000001:
unsigned dummy() { return 0x00000001; }
The compilation unit then gets compiled, resulting in the object file, but now there's no trace of INIT_FLAG in it. Therefore, you cannot refer to INIT_FLAG when linking against the object file: it simply doesn't contain such a symbol.

How to get an 18-bit code address from a symbol defined in a linker command file

In Code Composer, you can define new symbols in the linker command file simply:
_Addr_start = 0x5C00;
_AppLength = 0x4C000;
before the memory map and section assignment. This is done in the bootloader example from TI.
You can then refer to the addresses (as integers) in your C code like this:
extern uint32_t _Addr_start; // note that uint32_t is fake.
extern uint32_t _AppLength; // there is no uint32_t object allocated
printf("start = %X len= %X\r\n", (uint32_t)&_Addr_start, (uint32_t)&_AppLength);
The problem is that if you use the 'small' memory model, the latter symbol (at 0x45C00) gives a linker warning, because the linker tries to squeeze its address into a 16-bit pointer.
"C:/lakata/hardware-platform/CommonSW/otap.c", line 78: warning #17003-D:
relocation from function "OtapGetExternal_CRC_Calc" to symbol "_AppLength"
overflowed; the 18-bit relocated address 0x3f7fc is too large to encode in
the 16-bit field (type = 'R_MSP_REL16' (161), file = "./otap.obj", offset =
0x00000002, section = ".text:OtapGetExternal_CRC_Calc")
I tried using explicit far pointers, but Code Composer doesn't understand the keyword far. I tried to make the dummy symbol a function pointer, to trick the compiler into thinking that dereferencing it would.... The pointer points to code space, and the code space model is "large" while the data space model is "small".
I figured it out before I finished entering the question!
Instead of declaring the symbol as
extern uint32_t _AppLength; // pretend it is dummy data
declare it as
void _AppLength(void); // pretend it is a dummy function
Then the pointer conversion works properly, because &_AppLength is now assumed to be far. (When it is declared as an integer, &_AppLength is assumed to be near and the linker fails.)
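Putting this together, a minimal sketch (assuming the linker command file defines _Addr_start and _AppLength as in the question; print_app_info is just an illustrative name):
#include <stdint.h>
#include <stdio.h>

/* Declared as functions only so that taking their address uses the
   large (code) pointer model; no such functions actually exist. */
void _Addr_start(void);
void _AppLength(void);

void print_app_info(void)
{
    uint32_t start = (uint32_t)&_Addr_start;  /* code-space address from the linker */
    uint32_t len   = (uint32_t)&_AppLength;
    printf("start = %lX len = %lX\r\n", (unsigned long)start, (unsigned long)len);
}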

Macro without definition in C

What is the use/applicability of a macro function without a definition:
#ifndef __SYSCALL
#define __SYSCALL(a, b)
#endif
One can find this macro on a Linux system in the header file /usr/include/asm/msr.h.
I also noticed a macro of the following kind:
#define _M(x) x
The only reason I can think of to define this kind of macro is to make the code uniform, like in #define SOMETHING (1 << 0). Is there any other hidden (better) use for this kind of macro?
An answer with an example would be very helpful. Also, can someone provide a text/link to read about this?
One of the most common uses of a macro of this form:
#define _M(x) x
is to provide backwards compatibility for compilers that only supported the original K&R dialect of C, which predated the now-ubiquitous ANSI C dialect. In the original K&R dialect of the language, function arguments were not specified when declaring the function. In 1989, ANSI standardized the language and incorporated a number of improvements, including function prototypes that declare the number and types of arguments.
int f(int x, double y); /* ANSI C. K&R compilers would not accept this */
int f(); /* Function declared in the original K&R dialect */
While compilers that support only the original K&R dialect of C are rare (or extinct) these days, a lot of software was written when both kinds of compilers needed to be supported, and macros provided an easy way to support both. There are still a lot of headers lying around that provide this backwards compatibility.
To provide backwards compatibility for K&R compilers, many header files have the following:
#if ANSI_PROTOTYPES
# define _P(x) x
#else
# define _P(x) ()
#endif
...
int f _P((int x, double y));
If the ANSI_PROTOTYPES definition has been set correctly (either by the user or by some prior #ifdef logic), then you get the desired behavior:
If ANSI_PROTOTYPES is defined, the declaration expands to int f(int x, double y).
If ANSI_PROTOTYPES is not defined, the declaration expands to int f().
This is often used with conditional compilation to disable a macro by causing it to be preprocessed away to nothing. For example (simplified):
#ifdef DEBUG
#define ASSERT(x) if(!(x)) { abort(); }
#else
#define ASSERT(x) /* nothing */
#endif
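A hedged usage sketch (the function set_length and its parameter are illustrative): with the macro above, a call site compiles to a real check in debug builds and to nothing otherwise:
void set_length(int len)
{
    ASSERT(len >= 0);   /* disappears entirely when DEBUG is not defined */
    /* ... */
}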
Just a follow-up to my question.
I got good answers, but I am also adding some more helpful examples of where macros with empty definitions are useful; someone may find them helpful in the future:
(1) Why do I see THROW in a C library?
It is used to share a header file between C and C++. The macro name is __THROW(x):
#ifdef __cplusplus
#define __THROW(x) throw(x)
#else
#define __THROW(x)
#endif
(2) To eliminate warnings when a function parameter isn't used:
This use is for C++. In C it will cause an error (too few arguments), but in C++ it works with no error (codepad linked):
#define UNUSED(x)

int value = 0;

int foo(int UNUSED(value))
{
    return 42;
}

int main() {
    foo(value);
}
(for this I added the c++ tag to my question)
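As a side note, in plain C a common alternative (a hedged sketch, not from the original question) is to keep the parameter name and mark it unused, for example with GCC's unused attribute:
#ifdef __GNUC__
#  define UNUSED(x) x __attribute__((unused))
#else
#  define UNUSED(x) x
#endif

int foo(int UNUSED(value))
{
    return 42;   /* no "unused parameter" warning under GCC/Clang */
}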
Additionally,
(3) The use of #define _M(x) x is as follows, just to make the code line up uniformly:
/* Signed. */
# define INT8_C(c) c
# define INT16_C(c) c
# define INT32_C(c) c
# if __WORDSIZE == 64
# define INT64_C(c) c ## L
# else
# define INT64_C(c) c ## LL
# endif
The file is /usr/include/stdint.h.
It means that code that uses the macro will conditionally be preprocessed away to nothing.
As simple examples, consider debug code, logging or assertions.
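A minimal sketch of such a logging macro (the ENABLE_LOGGING flag and LOG name are illustrative, not from any particular codebase):
#include <stdio.h>

#ifdef ENABLE_LOGGING
#  define LOG(msg) fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg))
#else
#  define LOG(msg) ((void)0)   /* expands to a harmless no-op */
#endif

int main(void)
{
    LOG("starting up");   /* only produces output when ENABLE_LOGGING is defined */
    return 0;
}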
This is probably a debug macro or a platform macro. For example, let's say I have a debugger that traps on INT3. I might have this while I'm debugging:
#define debug() INT3()
Then, to be safe, I'll add this to the production code (to make sure I took them all out):
#define debug()
This looks like something similar.
It could be that in some cases, on some systems, this code needs to make a call -- for example on a certain CPU architecture or OS. But on your system it is simply a no-op.