How to print preprocessor macros under Sun Studio? - macros

I'm working under Sun Studio 12.3 on SunOS 5.11 (Solaris 11.3). I need to see the macros that Sun Studio defines so I can fix a bug reported under the suite. This is similar to Solaris and Preprocessor Macros, but the cited question uses GCC and its preprocessor, not Sun Studio's preprocessor.
I've run CC -flags but I don't see an option similar to GCC's cpp -dM or g++ -dM -E - </dev/null. CC does have a -E, but it's fairly anemic and does not print any preprocessor definitions:
$ echo $CXX
/opt/solarisstudio12.3/bin/CC
$ $CXX -E /dev/null
#1 "/dev/null"
Using a real test file produces a similar result; the preprocessor macros are missing:
$ $CXX -E test.cxx | grep __cplusplus
$
I also found the discussion of preprocessor macros in the Sun Studio manual at 2.5.3 Predefined Names. Table A-2 is OK, but it's mostly anemic, too. It's missing basics like __cplusplus, and it's missing other defines like _RWSTD_NO_CLASS_PARTIAL_SPEC.
How do I print preprocessor macros under Sun Studio?
$ /opt/solarisstudio12.3/bin/CC -flags
______________________________________________________________________________
Items within [ ] are optional. Items within < > are variable parameters.
Bar | indicates choice of literal values.
______________________________________________________________________________
-# Verbose mode
-### Show compiler commands built by driver, no compilation
-B[static|dynamic] Specify dynamic or static binding
-D<name[=token]> Associate name with token as if by #define
-E Compile source through preprocessor only, output to stdout
-G Build a dynamic shared library
-H Print path name of each file included during compilation
-I<dir> Add <dir> to preprocessor #include file search path
-KPIC Compile position independent code with 32-bit addresses
-Kpic Compile position independent code
-L<dir> Pass to linker to add <dir> to the library search path
-M<file> Pass <file> mapfile to linker
-O Use default optimization level (-xO3)
-O<n> Same as -xO<n>
-P Compile source through preprocessor only, output to .i file
-PIC Same as -KPIC
-Qoption <prog> <o>[,<o>...] Pass options list <o> to compilation phase <prog>
-R<dir[:dir]> Build runtime search path list into executable
-S Compile and only generate assembly code (.s)
-U<name> Delete initial definition of preprocessor symbol <name>
-V Report version number of each compilation phase
-W<c>,<arg> Pass <arg> to specified component <c> (a,d,h,i,l,m,p,u,0,2)
-Xlinker <arg> Pass <arg> to linker
-Xm Support dollar character in C++ identifiers
-Y<c>,<dir> Specify <dir> for location of component <c> (a,l,m,p,0,h,i,u)
-YA,<dir> Change default directory searched for components
-YI,<dir> Change default directory searched for include files
-YP,<dir> Change default directory for finding libraries files
-YS,<dir> Change default directory for startup object files
-c Compile only - produce .o files, suppress linking
-compat=5 Standard mode; accept source code that conforms to the C++ standard (default mode)
-compat=g G++ compatibility mode; accept g++ source code and generate g++ compatible object code
+d Do not expand inline functions
-dalign Ignored
-d{n|y} Dynamic [-dy] or static [-dn] option to linker
-dryrun Show compiler commands built by driver, no compilation
-e<arg> Passed to linker
-erroff[=<tags>] Suppress warnings specified by tags; <tags>={%none, %all, <tag list>}
-errtags[={yes|no}] Display messages with tags
-errwarn[=<tags>] Treats warnings specified by tags as errors; <tags>={%none, %all, <tag list>}
-fPIC Same as -KPIC
-fast Optimize using a selection of options
-features=<a>[,<a>] Enable/disable various C++ language features
-filt[=<a>[,<a>]] Control the filtering of both linker and compiler error messages;
<a>={errors,names,returns,stdlib}
-flags Show this summary of compiler options
-flagsrc=<f> Accept command options from file <f>
-fnonstd Initialize floating-point hardware to non-standard preferences
-fns[={yes|no}] Select non-standard floating point mode
-fpic Same as -Kpic
-fprecision=<a> Set FP rounding precision mode; <a>={single|double|extended}
-fround=<r> Select the IEEE rounding mode in effect at startup
-fsimple[=<n>] Select floating-point optimization preferences <n>
-fstore Force floating pt. values to target precision on assignment
-ftrap=<t> Select floating-point trapping mode in effect at startup
-g Compile for debugging
-g0 Compile for debugging by dbx but allow inlining
-g3 Compile for debugging by dbx including macros.
-h <name> Assign <name> to generated dynamic shared library
-help Same as -xhelp=flags
-i Passed to linker to ignore any LD_LIBRARY_PATH setting
-include <file> Include the contents of <file> before other files
-inline=<v> Attempt inlining of specified user routines; <v>={%auto,<func>,no%<func>}
-instances=<a> Control the link attributes of template instantiations;
<a>={static|global|extern|explicit|semiexplicit}
-instlib=<library> Inhibit generation of instances already in <library>
-keeptmp Keep temporary files created during compilation
-l<name> Link with library lib<name>.a or lib<name>.so
-libmieee Same as -xlibmieee
-libmil Same as -xlibmil
-library=<a>[,<a>] Incorporates specified CC-provided libraries into compilation and linking
-m32 Set 32-bit addressing model
-m64 Set 64-bit addressing model
-mc Remove duplicate strings from .comment section of output files
-migration Show where to get information about migrating from C++ 4.2
-mr Remove all strings from .comment section of output files
-mr,"string" Remove all strings and append "string" to .comment section
-mt[={yes|no}] Specify options needed when compiling multi-threaded code
-native Optimize for the host system (-xtarget=native)
-noex Same as -features=no%except
-nofstore Do not force floating pt. values to target precision on assignment
-o <outputfile> Set name of output file to <outputfile>
-p Compile for profiling with prof
+p Ignore non-standard preprocessor asserts
-pg Compile for profiling with gprof
-pic Same as -Kpic
-qp Compile for profiling with prof
-s Strip symbol table from the executable file
-shared Same as -G
-staticlib=<a>[,<a>] Force linkage of specified libraries to be static
-sync_stdio[={yes|no}] Controls synchronization of the I/O libraries
-temp=<path> Use <path> as directory for temporary files
-template=<a>[,<a>] Enable/disable various template options;
<a>={wholeclass,extdef,geninlinefuncs}
-time Same as -xtime
-traceback[=<a>[,<a>]] Provide stack traceback for the abnormal termination by signals; <a>={%none|common|<signal>}
-unroll=<n> Enable unrolling loops <n> times where possible
-v Same as -verbose=diags
-verbose=<a>[,<a>] Control verbosity during compilation; <a>={template,diags,version}
-w Suppress compiler warning messages
+w Print warnings about additional questionable constructs
+w2 Emit warnings for code with additional portability problems
-xF[=<a>[,<a>]] Compile for later mapfile reordering and data fragmentation
-xM Generate makefile dependencies
-xM1 Generate makefile dependencies, but exclude /usr/include
-xMD Generate makefile dependencies and compile at once
-xMMD Generate makefile dependencies like -xMD, but excluding standard headers
-xMF <file> Specify output <file> for makefile dependencies dump
-xMerge Merge data segment into text segment
-xO<n> Generate optimized code; <n>={1|2|3|4|5}
-xaddr32[={yes|no}] Generate binaries assuming the associated process is restricted to the lower 32bit address space
-xalias_level[=<a>] Enable optimizations based on the specified alias_level;
<a>={any|simple|compatible}
-xanalyze=code Generate static analysis information for the code analyzer
-xannotate[={yes|no}] Annotate binaries for optimization and analysis
-xar Create archive library with instantiated templates
-xarch=<a> Specify target architecture instruction set
-xautopar Enable automatic loop parallelization
-xbuiltin[=<a>] Inline system functions and intrinsics when beneficial;
<a>={%none|%default|%all}
-xcache=<t> Define cache properties for use by optimizer
-xchar[=<a>] Treat type char as signed (s) or unsigned (u);
<a>={s|signed|u|unsigned}
-xcheck[=<a>[,<a>]] Generate runtime checks for error condition;
<a>={stkovf,init_local}
-xchip=<a> Specify the target processor for use by the optimizer
-xcode=<a> Generate different code for forming addresses; <a>={pic13|pic32}
-xdebugformat=<a> Selects the format of debugging information; <a>={stabs|dwarf}
-xdepend[={yes|no}] Analyze loops for data dependencies
-xdumpmacros[=<a>[,<a>]] Prints macro definitions on the standard error output;
<a>={defs,undefs,use,loc,conds,sys}
-xdryrun The same as -###
-xe Perform only syntax/semantic checking, no code generation
-xhelp=<a> Display on-line help information; <a>={flags|readme}
-xia Enable interval arithmetic
-xinline=<v> Attempt inlining of specified user routines; <v>={%auto,<func>,no%<func>}
-xinstrument=[no_]datarace Enable/disable instrumentation for race detection tool
-xipo[=<n>] Enable optimization and inlining across source files; <n>={0|1|2}
-xipo_archive=<a> Enable crossfile optimization including archive files;
<a>={none|readonly|writeback}
-xivdep[=<a>] Ignore loop-carried dependences on array references in a loop; <a>={loop|loop_any|back|back_any|none}
-xjobs=<n> Maximum number of components compiler will fork in parallel
-xkeepframe[=<v>] Do not optimize stack frame of specified user routine; <v>={%all|%none|[no%]<func>}
-xlang=<a>[,<a>] The set of languages used in the program; <a>={f90,f95,c99}
-xldscope=<a> Indicates the appropriate linker scoping within the source program;
<a>={global|symbolic|hidden}
-xlibmieee Force IEEE 754 return values for math routines in exceptional cases
-xlibmil Inline selected libm math routines for optimization
-xlibmopt Link with optimized math library
-xlic_lib=sunperf (Obsolete) Use -library=sunperf instead
-xloopinfo Show loops that parallelized
-xmaxopt=[off,1,2,3,4,5] Maximum optimization level allowed on #pragma opt
-xmodel=<a> Specify memory model for 64-bit programs;
<a>={small|kernel|medium}
-xnolib Do not link with default system libraries
-xnolibmil Cancel -xlibmil on command line
-xnolibmopt Cancel -xlibmopt on command line
-xnorunpath Do not build a runtime search path into the executable
-xopenmp[=<a>] Enable OpenMP language extension; <a>={none|noopt|parallel}
-xpagesize=<a> Controls the preferred page size for the stack and for the heap; <a>={4K|2M|4M|1G|default}
-xpagesize_heap=<a> Controls the preferred page size for the heap; <a>={4K|2M|4M|1G|default}
-xpagesize_stack=<a> Controls the preferred page size for the stack; <a>={4K|2M|4M|1G|default}
-xpch=<t> Enable precompiled headers. Collect data for, or use existing, PCH file; <t>={auto|autofirst|{collect,use}:<file>[.cpch]}
-xpchstop=<file> Specified include file marks end of initial common sequence of pre-processing directives for precompiled headers
-xpec[={yes|no}] Generate a PEC binary
-xpg Compile for profiling with gprof
-xport64[=<a>] Enable extra checking for code ported from 32-bit to 64-bit platforms;
<a>={no|implicit|full}
-xprefetch[=<p>] Specify instruction prefetch; <p>={auto,no%auto,explicit,no%explicit}
-xprefetch_auto_type=<a> Specify automatic indirect prefetch insertion for loops;
<a>={indirect_array_access}
-xprefetch_level[=<n>] Controls the aggressiveness of the -xprefetch=auto option; <n>={1|2|3}
-xprofile=<t> Collect data for a profile or use a profile to optimize; <t>={{collect,use}[:<path>],tcov}
-xprofile_ircache[=<t>] Path to intermediate file cache used with -xprofile option
-xreduction Recognize reduction operations in parallelized loops
-xregs=<a>[,<a>] Specify the usage of optional registers; <a>={frameptr}
-xrestrict[=<f>] Treat pointer valued function parameters as restricted; <f>={%none,%all,<function-name list>}
-xs Allow debugging without object (.o) files
-xspace Do not do optimizations that increase code size
-xtarget=<a> Specify target system for optimization
-xtemp=<dir> Set directory for temporary files to <dir>
-xthreadvar[=<a>] Control code generation for thread variables; <a>={dynamic}
-xtime[=<a>] Report the execution time for each compilation phase; <a>={1|2|3}
-xtrigraphs[={yes|no}] Enable|disable trigraph translation
-xunroll=<n> Enable unrolling loops <n> times where possible
-xustr=<a> Recognize sixteen-bit string literals; <a>={ascii_utf16_ushort|no}
-xvector[=<a>[,<a>]] Automatic generation of calls to the vector library functions and/or the generation of the SIMD instructions
-xvpara Verbose parallelization warnings
-xwe Convert all warnings to errors
Suffix 'a' Object library
Suffix 'il' Inline template file
Suffix 'o' Object file
Suffix 'so' Shared object
Suffix 's' Assembler source
Suffix 'S' Assembler source for cpp
Suffix 'c' C++ source
Suffix 'cc' C++ source
Suffix 'cxx' C++ source
Suffix 'cpp' C++ source
Suffix 'C' C++ source
Suffix 'c++' C++ source
Suffix 'i' C++ source after preprocessing
Suffix 'err' ld error file
Suffix 'd' Build dependencies file

Use the -xdumpmacros option.
Per the Solaris Studio 12.4 C User's Guide:
B.2.105 -xdumpmacros[=value[,value...]]
Use this option when you want to see how macros are behaving in your
program. This option provides information such as macro defines,
undefines, and instances of usage. It prints output to the standard
error (stderr), based on the order in which macros are processed.
The -xdumpmacros option is in effect through the end of the file or
until it is overridden by the dumpmacros or end_dumpmacros pragma. See
dumpmacros.
The following table lists the valid arguments for value. The prefix
no% disables the associated value.
...
cc -E -xdumpmacros /dev/null produces this output:
#define __LINE__
#define __FILE__
#define __STDC__ 0
#define __STDC_VERSION__ 199409L
#define __DATE__ "Jun 9 2016"
#define __TIME__ "09:09:48"
#define __STDC_IEC_559__ 1
#define __STDC_IEC_559_COMPLEX__ 1
#define __STDC_HOSTED__ 1
#define __SunOS_5_11 1
#define __SUNPRO_C 0x5120
#define __unix 1
#define __SVR4 1
#define __sun 1
#define __SunOS 1
#define __i386 1
#define __BUILTIN_VA_ARG_INCR 1
#define __C99FEATURES__ 1
#define __PRAGMA_REDEFINE_EXTNAME 1
#define unix 1
#define sun 1
#define i386 1
#define __RESTRICT 1
#define __FLT_EVAL_METHOD__ - 1
#define __SUN_PREFETCH 1
#define __NOVECTORSIZE__ 1
# 1 "/dev/null"
#ident "acomp: Sun C 5.12 SunOS_i386 2011/11/16"

Related

Where does dev_dbg write its log to?

In a device driver source in the Linux tree, I saw dev_dbg(...) and dev_err(...). Where do I find the logged messages?
One reference suggests adding #define DEBUG. The other reference involves dynamic debug and debugfs, and I got lost.
dev_dbg() expands to dynamic_dev_dbg(), dev_printk(), or a no-op, depending on the compilation flags.
#if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...) \
do { \
dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
} while (0)
#elif defined(DEBUG)
#define dev_dbg(dev, format, arg...) \
dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...) \
({ \
if (0) \
dev_printk(KERN_DEBUG, dev, format, ##arg); \
})
#endif
dynamic_dev_dbg() and dev_printk() call dev_printk_emit() which calls vprintk_emit().
This very same function is called in the normal case when you just do a printk(). Just note here that the rest of the functions, like dev_err(), end up in the same function.
Thus, obviously, the buffer is the same, i.e. the kernel's internal buffer.
In the end, the logged message is printed to:
1. The current console, if the kernel loglevel value (which can be changed via the kernel command line or via procfs) is high enough for the given message, here KERN_DEBUG.
2. The internal buffer, which can be read by running the dmesg command.
Note that the data in 2. is kept only as long as there is still room in the buffer. Since the buffer is limited and circular, newer data overwrites older data.
Additional information on how to enable Dynamic Debug:
First of all, be sure you have CONFIG_DYNAMIC_DEBUG=y in the kernel configuration.
Assume we would like to enable all debug prints in the built-in module with the name 8250. To achieve that we simply add 8250.dyndbg=+p to the kernel command line.
If the same driver is compiled as a loadable module, we may either add options 8250 dyndbg to the modprobe configuration or pass it on the shell command line when loading manually, like modprobe 8250 dyndbg.
More details are described in the Dynamic Debug documentation.
The "How certain debug prints are automatically enabled in linux kernel?" raises the question why some debug prints are automatically enabled and how DEBUG affects that when CONFIG_DYNAMIC_DEBUG=y. The answer is lying in the dynamic_debug.h and since it's used during compilation the _DPRINTK_FLAGS_DEFAULT defines the certain message appearence.
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
#else
#define _DPRINTK_FLAGS_DEFAULT 0
#endif
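If you go the plain DEBUG route instead of dynamic debug, a common way to define it for just your own driver is in the module's kbuild Makefile; a minimal sketch, where mydriver is a placeholder object name:
# Makefile of the module (kbuild)
ccflags-y += -DDEBUG # define DEBUG for every file built by this Makefile
CFLAGS_mydriver.o += -DDEBUG # or only for mydriver.c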
You can find dev_err(...) in kernel messages. As the name implies, dev_err(...) messages are error messages, so they will definitely be printed if execution reaches that point. dev_dbg(...) calls are debug messages, which are used more generously in kernel driver code, and they are not printed by default. So everything you have read about dynamic debugging comes into play with dev_dbg(...).
There are several preconditions for dynamic debugging to work; below, 1. and 2. are general preconditions for dynamic debugging, while 3. and later are specific to your particular driver/module/subsystem.
Dynamic debugging support has to be in your kernel config: CONFIG_DYNAMIC_DEBUG=y. You may check whether this is the case with zgrep DYNAMIC_DEBUG /proc/config.gz.
debugfs has to be mounted. You can check with sudo mount | grep debugfs and, if it is not mounted, you can mount it with sudo mount -t debugfs none /sys/kernel/debug.
Refer to dynamic_debugging and enable the particular file/function/line you are interested in; a sketch follows below.
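For example, a sketch of enabling prints at runtime through the debugfs control file (the module name 8250 is taken from the answer above; the file pattern is illustrative):
$ echo 'module 8250 +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
$ echo 'file drivers/tty/serial/8250/* +p' | sudo tee /sys/kernel/debug/dynamic_debug/control
$ dmesg | tail # the enabled dev_dbg() output lands in the kernel log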

OpenCobol & PostgreSQL on Windows with Visual Studio

I'm currently facing a problem with this team of 4 (OpenCobol, PostgreSQL, Windows, and Visual Studio).
Using binaries I downloaded from kiska's site, I'm able to compile COBOL to C and run it with cobcrun, or compile it to an executable. However, I can't get OpenCobol to find the Postgres functions.
Here is the start of my COBOL script:
identification division.
program-id. pgcob.
data division.
working-storage section.
01 pgconn usage pointer.
01 pgres usage pointer.
01 resptr usage pointer.
01 resstr pic x(80) based.
01 result usage binary-long.
01 answer pic x(80).
procedure division.
display "Before connect:" pgconn end-display
call "PQconnectdb" using
by reference "dbname = postgres" & x"00"
by reference "host = 10.37.180.146" & "00"
returning pgconn
end-call
...
The call to PQconnectdb fails with module not found: PQconnectdb.
I noticed that if I rename libpq.dll the error message changes to can't find entry point. So at least I'm sure it can find my DLL.
After digging into the code of the call method of the libcob library, I found that it was possible to preload some DLLs using the environment variable COB_PRE_LOAD, but still no results.
Here is what the script to compile the COBOL looks like:
call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat"
set COB_CONFIG_DIR=C:\OpenCobol\config
set COB_COPY_DIR=C:\OpenCobol\Copy
set COB_LIBS=%COB_LIBS% c:\OpenCobol\libpq.lib
set COB_LIBRARY_PATH=C:\OpenCobol\bin
set COB_PRE_LOAD=C:\OpenCobol\libpq.dll
#echo on
cobc -info
cobc -free -o pgcob -L:"C:\OpenCobol" -llibpq.lib test_cobol\postgres.cob
call cobcrun pgcob
I don't see anything missing. I'm using the 64-bit binaries from kiska's site and the 64-bit cl.exe from Visual Studio, and Postgres is a 64-bit version too (checked with dependencyChecker).
I even tried to compile the generated C from Visual Studio, with the same result, but I may be missing something; I'm pretty rotten at C and never really had to manage DLLs or use Visual Studio.
What am I missing ?
COB_PRE_LOAD doesn't take any path or extension; see the short documentation for the available runtime configuration options. I guess
set COB_LIBRARY_PATH=C:\OpenCobol\bin;C:\OpenCobol
set COB_PRE_LOAD=libpq
will work. You can omit C:\OpenCobol\bin if you did not place any additional executables there.
If it doesn't work (even if it does) I'd try to get the C functions resolved at compile time. Either use
CALL STATIC "PQconnectdb" using ...
or an appropriate CALL-CONVENTION or leave the program as-is and use
cobc -free -o pgcob -L"C:\OpenCobol" -llibpq -K PQconnectdb test_cobol\postgres.cob
From cobc --help:
-K generate CALL to <entry> as static
In general: the binaries from kiska.net are quite outdated. I highly suggest getting newer ones from the official download site or ideally build them on your own from source, see the documentation for building GnuCOBOL with VisualStudio.

C++ OpenGL Eclipse MinGW Linkage Error [duplicate]

What are undefined reference/unresolved external symbol errors? What are common causes and how to fix/prevent them?
Compiling a C++ program takes place in several steps, as specified by 2.2 (credits to Keith Thompson for the reference):
The precedence among the syntax rules of translation is specified by the following phases [see footnote].
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set
(introducing new-line characters for end-of-line indicators) if
necessary. [SNIP]
Each instance of a backslash character (\) immediately followed by a new-line character is deleted, splicing physical source lines to
form logical source lines. [SNIP]
The source file is decomposed into preprocessing tokens (2.5) and sequences of white-space characters (including comments). [SNIP]
Preprocessing directives are executed, macro invocations are expanded, and _Pragma unary operator expressions are executed. [SNIP]
Each source character set member in a character literal or a string literal, as well as each escape sequence and universal-character-name
in a character literal or a non-raw string literal, is converted to
the corresponding member of the execution character set; [SNIP]
Adjacent string literal tokens are concatenated.
White-space characters separating tokens are no longer significant. Each preprocessing token is converted into a token. (2.7). The
resulting tokens are syntactically and semantically analyzed and
translated as a translation unit. [SNIP]
Translated translation units and instantiation units are combined as follows: [SNIP]
All external entity references are resolved. Library components are linked to satisfy external references to entities not defined in the
current translation. All such translator output is collected into a
program image which contains information needed for execution in its
execution environment. (emphasis mine)
[footnote] Implementations must behave as if these separate phases occur, although in practice different phases might be folded together.
The specified errors occur during this last stage of compilation, most commonly referred to as linking. It basically means that you compiled a bunch of implementation files into object files or libraries and now you want to get them to work together.
Say you defined symbol a in a.cpp. Now, b.cpp declared that symbol and used it. Before linking, it simply assumes that that symbol was defined somewhere, but it doesn't yet care where. The linking phase is responsible for finding the symbol and correctly linking it to b.cpp (well, actually to the object or library that uses it).
If you're using Microsoft Visual Studio, you'll see that projects generate .lib files. These contain a table of exported symbols, and a table of imported symbols. The imported symbols are resolved against the libraries you link against, and the exported symbols are provided for the libraries that use that .lib (if any).
Similar mechanisms exist for other compilers/platforms.
Common error messages are error LNK2001, error LNK1120, error LNK2019 for Microsoft Visual Studio and undefined reference to symbolName for GCC.
The code:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
struct A
{
virtual ~A() = 0;
};
struct B: A
{
virtual ~B(){}
};
extern int x;
void foo();
int main()
{
x = 0;
foo();
Y y;
B b;
}
will generate the following errors with GCC:
/home/AbiSfw/ccvvuHoX.o: In function `main':
prog.cpp:(.text+0x10): undefined reference to `x'
prog.cpp:(.text+0x19): undefined reference to `foo()'
prog.cpp:(.text+0x2d): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD1Ev[B::~B()]+0xb): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD0Ev[B::~B()]+0x12): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1Y[typeinfo for Y]+0x8): undefined reference to `typeinfo for X'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1B[typeinfo for B]+0x8): undefined reference to `typeinfo for A'
collect2: ld returned 1 exit status
and similar errors with Microsoft Visual Studio:
1>test2.obj : error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo##YAXXZ)
1>test2.obj : error LNK2001: unresolved external symbol "int x" (?x##3HA)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual __thiscall A::~A(void)" (??1A##UAE#XZ)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall X::foo(void)" (?foo#X##UAEXXZ)
1>...\test2.exe : fatal error LNK1120: 4 unresolved externals
Common causes include:
Failure to link against appropriate libraries/object files or compile implementation files
Declared and undefined variable or function.
Common issues with class-type members
Template implementations not visible.
Symbols were defined in a C program and used in C++ code.
Incorrectly importing/exporting methods/classes across modules/dll. (MSVS specific)
Circular library dependency
undefined reference to `WinMain@16'
Interdependent library order
Multiple source files of the same name
Mistyping or not including the .lib extension when using the #pragma (Microsoft Visual Studio)
Problems with template friends
Inconsistent UNICODE definitions
Missing "extern" in const variable declarations/definitions (C++ only)
Visual Studio Code not configured for a multiple file project
Errors on Mac OS X when building a dylib, but a .so on other Unix-y systems is OK
Class members:
A pure virtual destructor needs an implementation.
Declaring a destructor pure still requires you to define it (unlike a regular function):
struct X
{
virtual ~X() = 0;
};
struct Y : X
{
~Y() {}
};
int main()
{
Y y;
}
//X::~X(){} //uncomment this line for successful definition
This happens because base class destructors are called when the object is destroyed implicitly, so a definition is required.
virtual methods must either be implemented or defined as pure.
This is similar to non-virtual methods with no definition, with the added reasoning that
the pure declaration generates a dummy vtable and you might get the linker error without using the function:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
int main()
{
Y y; //linker error although there was no call to X::foo
}
For this to work, declare X::foo() as pure:
struct X
{
virtual void foo() = 0;
};
Non-virtual class members
Some members need to be defined even if not used explicitly:
struct A
{
~A();
};
The following would yield the error:
A a; //destructor undefined
The implementation can be inline, in the class definition itself:
struct A
{
~A() {}
};
or outside:
A::~A() {}
If the implementation is outside the class definition, but in a header, the methods have to be marked as inline to prevent a multiple definition.
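For example, a minimal sketch of keeping an out-of-class definition in a header (reusing the struct A from above):
// A.h
struct A
{
    ~A();
};
inline A::~A() {} // 'inline' lets this definition appear in every translation unit that includes A.h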
All used member methods need to be defined if used.
A common mistake is forgetting to qualify the name:
struct A
{
void foo();
};
void foo() {}
int main()
{
A a;
a.foo();
}
The definition should be
void A::foo() {}
static data members must be defined outside the class in a single translation unit:
struct X
{
static int x;
};
int main()
{
int x = X::x;
}
//int X::x; //uncomment this line to define X::x
An initializer can be provided for a static const data member of integral or enumeration type within the class definition; however, odr-use of this member will still require a namespace scope definition as described above. C++11 allows initialization inside the class for all static const data members.
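A minimal sketch of the static const case (the names are illustrative):
struct X
{
    static const int n = 5; // in-class initializer is fine for const integral types
};
const int X::n; // namespace-scope definition, still needed (pre-C++17) if X::n is odr-used
int main()
{
    const int* p = &X::n; // odr-use: taking the address requires the definition above
    return *p;
}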
Failure to link against appropriate libraries/object files or compile implementation files
Commonly, each translation unit will generate an object file that contains the definitions of the symbols defined in that translation unit.
To use those symbols, you have to link against those object files.
Under gcc you would specify all object files that are to be linked together in the command line, or compile the implementation files together.
g++ -o test objectFile1.o objectFile2.o -lLibraryName
-l... must be to the right of any .o/.c/.cpp files.
The libraryName here is just the bare name of the library, without platform-specific additions. So e.g. on Linux library files are usually called libfoo.so but you'd only write -lfoo. On Windows that same file might be called foo.lib, but you'd use the same argument. You might have to add the directory where those files can be found using -L‹directory›. Make sure to not write a space after -l or -L.
For Xcode: Add the User Header Search Paths -> add the Library Search Path -> drag and drop the actual library reference into the project folder.
Under MSVS, files added to a project automatically have their object files linked together and a lib file would be generated (in common usage). To use the symbols in a separate project, you'd
need to include the lib files in the project settings. This is done in the Linker section of the project properties, in Input -> Additional Dependencies. (the path to the lib file should be
added in Linker -> General -> Additional Library Directories) When using a third-party library that is provided with a lib file, failure to do so usually results in the error.
It can also happen that you forget to add the file to the compilation, in which case the object file won't be generated. In gcc you'd add the files to the command line. In MSVS adding the file to the project will make it compile it automatically (albeit files can, manually, be individually excluded from the build).
In Windows programming, the tell-tale sign that you did not link a necessary library is that the name of the unresolved symbol begins with __imp_. Look up the name of the function in the documentation, and it should say which library you need to use. For example, MSDN puts the information in a box at the bottom of each function in a section called "Library".
Declared but did not define a variable or function.
A typical variable declaration is
extern int x;
As this is only a declaration, a single definition is needed. A corresponding definition would be:
int x;
For example, the following would generate an error:
extern int x;
int main()
{
x = 0;
}
//int x; // uncomment this line for successful definition
Similar remarks apply to functions. Declaring a function without defining it leads to the error:
void foo(); // declaration only
int main()
{
foo();
}
//void foo() {} //uncomment this line for successful definition
Be careful that the function you implement exactly matches the one you declared. For example, you may have mismatched cv-qualifiers:
void foo(int& x);
int main()
{
int x;
foo(x);
}
void foo(const int& x) {} //different function, doesn't provide a definition
//for void foo(int& x)
Other examples of mismatches include
Function/variable declared in one namespace, defined in another.
Function/variable declared as class member, defined as global (or vice versa).
Function return type, parameter number and types, and calling convention do not all exactly agree.
The error message from the compiler will often give you the full declaration of the variable or function that was declared but never defined. Compare it closely to the definition you provided. Make sure every detail matches.
The order in which interdependent linked libraries are specified is wrong.
The order in which libraries are linked DOES matter if the libraries depend on each other. In general, if library A depends on library B, then libA MUST appear before libB in the linker flags.
For example:
// B.h
#ifndef B_H
#define B_H
struct B {
B(int);
int x;
};
#endif
// B.cpp
#include "B.h"
B::B(int xx) : x(xx) {}
// A.h
#include "B.h"
struct A {
A(int x);
B b;
};
// A.cpp
#include "A.h"
A::A(int x) : b(x) {}
// main.cpp
#include "A.h"
int main() {
A a(5);
return 0;
};
Create the libraries:
$ g++ -c A.cpp
$ g++ -c B.cpp
$ ar rvs libA.a A.o
ar: creating libA.a
a - A.o
$ ar rvs libB.a B.o
ar: creating libB.a
a - B.o
Compile:
$ g++ main.cpp -L. -lB -lA
./libA.a(A.o): In function `A::A(int)':
A.cpp:(.text+0x1c): undefined reference to `B::B(int)'
collect2: error: ld returned 1 exit status
$ g++ main.cpp -L. -lA -lB
$ ./a.out
So to repeat again, the order DOES matter!
Symbols were defined in a C program and used in C++ code.
The function (or variable) void foo() was defined in a C program and you attempt to use it in a C++ program:
void foo();
int main()
{
foo();
}
The C++ linker expects names to be mangled, so you have to declare the function as:
extern "C" void foo();
int main()
{
foo();
}
Equivalently, instead of being defined in a C program, the function (or variable) void foo() was defined in C++ but with C linkage:
extern "C" void foo();
and you attempt to use it in a C++ program with C++ linkage.
If an entire library is included in a header file (and was compiled as C code); the include will need to be as follows;
extern "C" {
#include "cheader.h"
}
what is an "undefined reference/unresolved external symbol"
I'll try to explain what an "undefined reference/unresolved external symbol" is.
Note: I use g++ and Linux, and all the examples are for that setup.
For example, we have some code:
// src1.cpp
void print();
static int local_var_name; // 'static' makes variable not visible for other modules
int global_var_name = 123;
int main()
{
print();
return 0;
}
and
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
//extern int local_var_name;
void print ()
{
// printf("%d%d\n", global_var_name, local_var_name);
printf("%d\n", global_var_name);
}
Make object files
$ g++ -c src1.cpp -o src1.o
$ g++ -c src2.cpp -o src2.o
After the assembler phase we have an object file, which contains the symbols it exports.
Look at the symbols
$ readelf --symbols src1.o
Num: Value Size Type Bind Vis Ndx Name
5: 0000000000000000 4 OBJECT LOCAL DEFAULT 4 _ZL14local_var_name # [1]
9: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 global_var_name # [2]
I've omitted some lines from the output, because they do not matter.
So, we see the following exported symbols:
[1] - this is our static (local) variable (important: the Bind column has type "LOCAL")
[2] - this is our global variable
src2.cpp exports nothing, and we have not shown its symbols.
Link our object files
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123
The linker sees the exported symbols and links them. Now we try to uncomment the lines in src2.cpp, like here:
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
extern int local_var_name;
void print ()
{
printf("%d%d\n", global_var_name, local_var_name);
}
and rebuild an object file
$ g++ -c src2.cpp -o src2.o
OK (no errors), because we only built the object file; linking is not done yet.
Try to link
$ g++ src1.o src2.o -o prog
src2.o: In function `print()':
src2.cpp:(.text+0x6): undefined reference to `local_var_name'
collect2: error: ld returned 1 exit status
This happened because our local_var_name is static, i.e. it is not visible to other modules.
Now more deeply. Get the translation phase output:
$ g++ -S src1.cpp -o src1.s
// src1.s
Look at src1.s:
.file "src1.cpp"
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
.globl global_var_name
.data
.align 4
.type global_var_name, @object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, @function
main:
; assembler code, not interesting for us
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",@progbits
So, we've seen there is no global label for local_var_name, and that's why the linker hasn't found it. But we are hackers :) and we can fix it. Open src1.s in your text editor and change
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
to
.globl local_var_name
.data
.align 4
.type local_var_name, @object
.size local_var_name, 4
local_var_name:
.long 456789
i.e. you should have something like below:
.file "src1.cpp"
.globl local_var_name
.data
.align 4
.type local_var_name, @object
.size local_var_name, 4
local_var_name:
.long 456789
.globl global_var_name
.align 4
.type global_var_name, @object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, @function
main:
; ...
We have changed the visibility of local_var_name and set its value to 456789.
Try to build an object file from it:
$ g++ -c src1.s -o src1.o
OK, see the readelf output (symbols):
$ readelf --symbols src1.o
8: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 local_var_name
Now local_var_name has Bind GLOBAL (it was LOCAL).
Link:
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123456789
OK, we hacked it :)
So, as a result - an "undefined reference/unresolved external symbol error" happens when the linker cannot find global symbols in the object files.
If all else fails, recompile.
I was recently able to get rid of an unresolved external error in Visual Studio 2012 just by recompiling the offending file. When I re-built, the error went away.
This usually happens when two (or more) libraries have a cyclic dependency. Library A attempts to use symbols in B.lib and library B attempts to use symbols from A.lib. Neither exists to start off with. When you attempt to compile A, the link step will fail because it can't find B.lib. A.lib will be generated, but no dll. You then compile B, which will succeed and generate B.lib. Re-compiling A will now work because B.lib is now found.
Template implementations not visible.
Unspecialized templates must have their definitions visible to all translation units that use them. That means you can't separate the definition of a template
to an implementation file. If you must separate the implementation, the usual workaround is to have an impl file which you include at the end of the header that
declares the template. A common situation is:
template<class T>
struct X
{
void foo();
};
int main()
{
X<int> x;
x.foo();
}
//differentImplementationFile.cpp
template<class T>
void X<T>::foo()
{
}
To fix this, you must move the definition of X::foo to the header file or some place visible to the translation unit that uses it.
Specialized templates can be implemented in an implementation file and the implementation doesn't have to be visible, but the specialization must be previously declared.
For further explanation and another possible solution (explicit instantiation) see this question and answer.
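As a rough sketch of the explicit-instantiation alternative (reusing X from above; this only works if you know every T you will ever instantiate):
// differentImplementationFile.cpp
#include "X.h" // assumed header containing the definition of the class template X
template<class T>
void X<T>::foo()
{
}
template struct X<int>; // explicit instantiation: X<int>::foo is emitted into this translation unit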
This is one of the most confusing error messages that every VC++ programmer has seen time and time again. Let's clarify things first.
A. What is a symbol?
In short, a symbol is a name. It can be a variable name, a function name, a class name, a typedef name, or anything else except those names and signs that belong to the C++ language itself. It is either user defined or introduced by a dependency library (another user-defined module).
B. What is external?
In VC++, every source file (.cpp, .c, etc.) is considered a translation unit; the compiler compiles one unit at a time and generates one object file (.obj) for the current translation unit. (Note that every header file that this source file includes will be preprocessed and considered part of this translation unit.) Everything within a translation unit is considered internal; everything else is considered external. In C++, you may reference an external symbol by using keywords like extern, __declspec(dllimport) and so on.
C. What is “resolve”?
Resolve is a link-time term. At link time, the linker attempts to find the external definition for every symbol in the object files that cannot find its definition internally. The scope of this searching process includes:
All object files generated at compile time
All libraries (.lib) that are either explicitly or implicitly
specified as additional dependencies of the application being built.
This searching process is called resolve.
D. Finally, why Unresolved External Symbol?
If the linker cannot find the external definition for a symbol that has no definition internally, it reports an Unresolved External Symbol error.
E. Possible causes of LNK2019: Unresolved External Symbol error.
We already know that this error is due to the linker failing to find the definition of external symbols; the possible causes can be sorted as:
Definition exists
For example, if we have a function called foo defined in a.cpp:
int foo()
{
return 0;
}
In b.cpp we want to call function foo, so we add
void foo();
to declare function foo(), and call it in another function body, say bar():
void bar()
{
foo();
}
Now when you build this code, you will get an LNK2019 error complaining that foo is an unresolved symbol. In this case, we know that foo() has its definition in a.cpp, but it is different from the one we are calling (different return type). This is the case where a definition exists.
Definition does not exist
If we want to call some functions in a library, but the import library is not added to the additional dependency list (set from: Project | Properties | Configuration Properties | Linker | Input | Additional Dependencies) of your project settings, the linker will report an LNK2019 since the definition does not exist in the current searching scope.
Incorrectly importing/exporting methods/classes across modules/dll (compiler specific).
MSVS requires you to specify which symbols to export and import using __declspec(dllexport) and __declspec(dllimport).
This dual functionality is usually obtained through the use of a macro:
#ifdef THIS_MODULE
#define DLLIMPEXP __declspec(dllexport)
#else
#define DLLIMPEXP __declspec(dllimport)
#endif
The macro THIS_MODULE would only be defined in the module that exports the function. That way, the declaration:
DLLIMPEXP void foo();
expands to
__declspec(dllexport) void foo();
and tells the compiler to export the function, as the current module contains its definition. When including the declaration in a different module, it would expand to
__declspec(dllimport) void foo();
and tells the compiler that the definition is in one of the libraries you linked against (also see 1)).
You can similarly import/export classes:
class DLLIMPEXP X
{
};
undefined reference to WinMain@16 or similar 'unusual' main() entry point reference (especially for visual-studio).
You may have failed to choose the right project type in your IDE. The IDE may want to bind e.g. Windows Application projects to such an entry-point function (as specified in the missing reference above), instead of the commonly used int main(int argc, char** argv); signature.
If your IDE supports Plain Console Projects you might want to choose this project type, instead of a windows application project.
Here are case 1 and case 2 handled in more detail from a real-world problem.
Also, if you're using 3rd-party libraries, make sure you have the correct 32/64-bit binaries.
Microsoft offers a #pragma to reference the correct library at link time;
#pragma comment(lib, "libname.lib")
In addition to the library path including the directory of the library, this should be the full name of the library.
Visual Studio NuGet package needs to be updated for new toolset version
I just had this problem trying to link libpng with Visual Studio 2013. The problem is that the package file only had libraries for Visual Studio 2010 and 2012.
The correct solution is to hope the developer releases an updated package and then upgrade, but it worked for me by hacking in an extra setting for VS2013, pointing at the VS2012 library files.
I edited the package (in the packages folder inside the solution's directory) by finding packagename\build\native\packagename.targets and inside that file, copying all the v110 sections. I changed the v110 to v120 in the condition fields only being very careful to leave the filename paths all as v110. This simply allowed Visual Studio 2013 to link to the libraries for 2012, and in this case, it worked.
Suppose you have a big project written in C++ which has a thousand .cpp files and a thousand .h files, and let's say the project also depends on ten static libraries. Let's say we are on Windows and we build our project in Visual Studio 20xx. When you press Ctrl + F7, Visual Studio starts compiling the whole solution (suppose we have just one project in the solution).
What does compilation mean?
Visual Studio searches through the .vcxproj file and starts compiling each file which has the extension .cpp. The order of compilation is undefined, so you must not assume that the file main.cpp is compiled first.
Each .cpp file may depend on additional .h files in order to find symbols
that may or may not be defined in the .cpp file itself.
If there is a .cpp file in which the compiler cannot find a symbol, a compile-time error is raised with the message Symbol x could not be found.
For each file with the extension .cpp an object file (.obj) is generated, and Visual Studio also writes the output to a file named ProjectName.Cpp.Clean.txt which contains all object files that must be processed by the linker.
The second step of compilation is done by the linker. The linker should merge all the object files and finally build the output (which may be an executable or a library).
Steps In Linking a project
Parse all the object files and find the definitions which were only declared in headers (e.g. the code of a method of a class, as mentioned in previous answers, or even the initialization of a static variable which is a member of a class).
If a symbol cannot be found in the object files, it is also searched for in the additional libraries. To add a new library to a project, go to Configuration Properties -> VC++ Directories -> Library Directories, where you specify additional folders for searching libraries, and Configuration Properties -> Linker -> Input, where you specify the name of the library.
If the linker cannot find a symbol which you use in one .cpp, it raises a link-time error which may look like:
error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo##YAXXZ)
Observation
Once the linker finds a symbol, it doesn't search other libraries for it.
The order of linking libraries does matter.
If the linker finds an external symbol in a static library, it includes the symbol in the output of the project. However, if the library is shared (dynamic), it doesn't include the code (symbols) in the output, but run-time crashes may occur.
How To Solve this kind of error
Compile-time errors:
Make sure your C++ project is syntactically correct.
Link-time errors:
Define all the symbols which you declare in your header files.
Use #pragma once to allow the compiler not to include a header if it was already included in the current .cpp being compiled.
Make sure that your external library doesn't contain symbols that may conflict with other symbols you defined in your header files.
When you use templates, make sure you include the definition of each template function in the header file, to allow the compiler to generate appropriate code for any instantiation.
Use the linker to help diagnose the error
Most modern linkers include a verbose option that prints out, to varying degrees:
Link invocation (command line),
Data on what libraries are included in the link stage,
The location of the libraries,
Search paths used.
For gcc and clang, you would typically add -v -Wl,--verbose or -v -Wl,-v to the command line. More details can be found here:
Linux ld man page.
LLVM linker page.
"An introduction to GCC" chapter 9.
For MSVC, /VERBOSE (in particular /VERBOSE:LIB) is added to the link command line.
The MSDN page on the /VERBOSE linker option.
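For example, a hedged sketch of what such invocations might look like (file names are illustrative):
$ g++ -o app main.o -lfoo -Wl,--verbose # GCC/Clang: ask GNU ld to print its search steps
> link /VERBOSE:LIB main.obj foo.lib # MSVC: show which .lib files the linker searches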
A bug in the compiler/IDE
I recently had this problem, and it turned out it was a bug in Visual Studio Express 2013. I had to remove a source file from the project and re-add it to overcome the bug.
Steps to try if you believe it could be a bug in compiler/IDE:
Clean the project (some IDEs have an option to do this; you can also
do it manually by deleting the object files).
Try starting a new project, copying all source code from the original one.
Linked .lib file is associated to a .dll
I had the same issue. Say I have projects MyProject and TestProject, and I had effectively linked the lib file for MyProject to TestProject. However, this lib file was produced as the DLL for MyProject was built. Also, it did not contain source code for all methods in MyProject, but only access to the DLL's entry points.
To solve the issue, I built MyProject as a LIB and linked TestProject to this .lib file (I copy-pasted the generated .lib file into the TestProject folder). I can then build MyProject as a DLL again. It compiles since the lib to which TestProject is linked does contain code for all methods in classes in MyProject.
Since people seem to be directed to this question when it comes to linker errors I am going to add this here.
One possible reason for linker errors with GCC 5.2.0 is that a new libstdc++ library ABI is now chosen by default.
If you get linker errors about undefined references to symbols that involve types in the std::__cxx11 namespace or the tag [abi:cxx11] then it probably indicates that you are trying to link together object files that were compiled with different values for the _GLIBCXX_USE_CXX11_ABI macro. This commonly happens when linking to a third-party library that was compiled with an older version of GCC. If the third-party library cannot be rebuilt with the new ABI then you will need to recompile your code with the old ABI.
So if you suddenly get linker errors when switching to a GCC after 5.1.0 this would be a thing to check out.
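A minimal sketch of recompiling your own code against the old ABI to match such a third-party library (the macro is the documented libstdc++ switch; the file name is illustrative):
$ g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c mycode.cpp -o mycode.o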
Your linkage consumes libraries before the object files that refer to them
You are trying to compile and link your program with the GCC toolchain.
Your linkage specifies all of the necessary libraries and library search paths
If libfoo depends on libbar, then your linkage correctly puts libfoo before libbar.
Your linkage fails with undefined reference to something errors.
But all the undefined somethings are declared in the header files you have
#included and are in fact defined in the libraries that you are linking.
Examples are in C. They could equally well be C++
A minimal example involving a static library you built yourself
my_lib.c
#include "my_lib.h"
#include <stdio.h>
void hw(void)
{
puts("Hello World");
}
my_lib.h
#ifndef MY_LIB_H
#define MY_LIB_H
extern void hw(void);
#endif
eg1.c
#include <my_lib.h>
int main()
{
hw();
return 0;
}
You build your static library:
$ gcc -c -o my_lib.o my_lib.c
$ ar rcs libmy_lib.a my_lib.o
You compile your program:
$ gcc -I. -c -o eg1.o eg1.c
You try to link it with libmy_lib.a and fail:
$ gcc -o eg1 -L. -lmy_lib eg1.o
eg1.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
The same result if you compile and link in one step, like:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
/tmp/ccQk1tvs.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
A minimal example involving a shared system library, the compression library libz
eg2.c
#include <zlib.h>
#include <stdio.h>
int main()
{
printf("%s\n",zlibVersion());
return 0;
}
Compile your program:
$ gcc -c -o eg2.o eg2.c
Try to link your program with libz and fail:
$ gcc -o eg2 -lz eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
Same if you compile and link in one go:
$ gcc -o eg2 -I. -lz eg2.c
/tmp/ccxCiGn7.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
And a variation on example 2 involving pkg-config:
$ gcc -o eg2 $(pkg-config --libs zlib) eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
What are you doing wrong?
In the sequence of object files and libraries you want to link to make your
program, you are placing the libraries before the object files that refer to
them. You need to place the libraries after the object files that refer
to them.
Link example 1 correctly:
$ gcc -o eg1 eg1.o -L. -lmy_lib
Success:
$ ./eg1
Hello World
Link example 2 correctly:
$ gcc -o eg2 eg2.o -lz
Success:
$ ./eg2
1.2.8
Link the example 2 pkg-config variation correctly:
$ gcc -o eg2 eg2.o $(pkg-config --libs zlib)
$ ./eg2
1.2.8
The explanation
Reading is optional from here on.
By default, a linkage command generated by GCC, on your distro,
consumes the files in the linkage from left to right in
commandline sequence. When it finds that a file refers to something
and does not contain a definition for it, it will search for a definition
in files further to the right. If it eventually finds a definition, the
reference is resolved. If any references remain unresolved at the end,
the linkage fails: the linker does not search backwards.
First, example 1, with static library my_lib.a
A static library is an indexed archive of object files. When the linker
finds -lmy_lib in the linkage sequence and figures out that this refers
to the static library ./libmy_lib.a, it wants to know whether your program
needs any of the object files in libmy_lib.a.
There is only one object file in libmy_lib.a, namely my_lib.o, and there's only one thing defined
in my_lib.o, namely the function hw.
The linker will decide that your program needs my_lib.o if and only if it already knows that
your program refers to hw, in one or more of the object files it has already
added to the program, and that none of the object files it has already added
contains a definition for hw.
If that is true, then the linker will extract a copy of my_lib.o from the library and
add it to your program. Then, your program contains a definition for hw, so
its references to hw are resolved.
When you try to link the program like:
$ gcc -o eg1 -L. -lmy_lib eg1.o
the linker has not added eg1.o to the program when it sees
-lmy_lib. Because at that point, it has not seen eg1.o.
Your program does not yet make any references to hw: it
does not yet make any references at all, because all the references it makes
are in eg1.o.
So the linker does not add my_lib.o to the program and has no further
use for libmy_lib.a.
Next, it finds eg1.o, and adds it to the program. An object file in the
linkage sequence is always added to the program. Now, the program makes
a reference to hw, and does not contain a definition of hw; but
there is nothing left in the linkage sequence that could provide the missing
definition. The reference to hw ends up unresolved, and the linkage fails.
Second, example 2, with shared library libz
A shared library isn't an archive of object files or anything like it. It's
much more like a program that doesn't have a main function and
instead exposes multiple other symbols that it defines, so that other
programs can use them at runtime.
Many Linux distros today configure their GCC toolchain so that its language drivers (gcc,g++,gfortran etc)
instruct the system linker (ld) to link shared libraries on an as-needed basis.
You have got one of those distros.
This means that when the linker finds -lz in the linkage sequence, and figures out that this refers
to the shared library (say) /usr/lib/x86_64-linux-gnu/libz.so, it wants to know whether any references that it has added to your program that aren't yet defined have definitions that are exported by libz.
If that is true, then the linker will not copy any chunks out of libz and
add them to your program; instead, it will just doctor the code of your program
so that:-
At runtime, the system program loader will load a copy of libz into the
same process as your program whenever it loads a copy of your program, to run it.
At runtime, whenever your program refers to something that is defined in
libz, that reference uses the definition exported by the copy of libz in
the same process.
Your program wants to refer to just one thing that has a definition exported by libz,
namely the function zlibVersion, which is referred to just once, in eg2.c.
If the linker adds that reference to your program, and then finds the definition
exported by libz, the reference is resolved.
But when you try to link the program like:
gcc -o eg2 -lz eg2.o
the order of events is wrong in just the same way as with example 1.
At the point when the linker finds -lz, there are no references to anything
in the program: they are all in eg2.o, which has not yet been seen. So the
linker decides it has no use for libz. When it reaches eg2.o and adds it to the program, the program
then has an undefined reference to zlibVersion; but at that point the linkage sequence is finished, so
that reference is unresolved, and the linkage fails.
Lastly, the pkg-config variation of example 2 has a now obvious explanation.
After shell-expansion:
gcc -o eg2 $(pkg-config --libs zlib) eg2.o
becomes:
gcc -o eg2 -lz eg2.o
which is just example 2 again.
I can reproduce the problem in example 1, but not in example 2
The linkage:
gcc -o eg2 -lz eg2.o
works just fine for you!
(Or: That linkage worked fine for you on, say, Fedora 23, but fails on Ubuntu 16.04)
That's because the distro on which the linkage works is one of the ones that
does not configure its GCC toolchain to link shared libraries as-needed.
Back in the day, it was normal for unix-like systems to link static and shared
libraries by different rules. Static libraries in a linkage sequence were linked
on the as-needed basis explained in example 1, but shared libraries were linked unconditionally.
This behaviour is economical at linktime because the linker doesn't have to ponder
whether a shared library is needed by the program: if it's a shared library,
link it. And most libraries in most linkages are shared libraries. But there are disadvantages too:-
It is uneconomical at runtime, because it can cause shared libraries to be
loaded along with a program even if it doesn't need them.
The different linkage rules for static and shared libraries can be confusing
to inexpert programmers, who may not know whether -lfoo in their linkage
is going to resolve to /some/where/libfoo.a or to /some/where/libfoo.so,
and might not understand the difference between shared and static libraries
anyway.
This trade-off has led to the schismatic situation today. Some distros have
changed their GCC linkage rules for shared libraries so that the as-needed
principle applies for all libraries. Some distros have stuck with the old
way.
Why do I still get this problem even if I compile-and-link at the same time?
If I just do:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
surely gcc has to compile eg1.c first, and then link the resulting
object file with libmy_lib.a. So how can it not know that object file
is needed when it's doing the linking?
Because compiling and linking with a single command does not change the
order of the linkage sequence.
When you run the command above, gcc figures out that you want compilation +
linkage. So behind the scenes, it generates a compilation command, and runs
it, then generates a linkage command, and runs it, as if you had run the
two commands:
$ gcc -I. -c -o eg1.o eg1.c
$ gcc -o eg1 -L. -lmy_lib eg1.o
So the linkage fails just as it does if you do run those two commands. The
only difference you notice in the failure is that gcc has generated a
temporary object file in the compile + link case, because you're not telling it
to use eg1.o. We see:
/tmp/ccQk1tvs.o: In function `main'
instead of:
eg1.o: In function `main':
See also
The order in which interdependent linked libraries are specified is wrong
Putting interdependent libraries in the wrong order is just one way
in which you can get files that need definitions of things coming
later in the linkage than the files that provide the definitions. Putting libraries before the
object files that refer to them is another way of making the same mistake.
A wrapper around GNU ld that doesn't support linker scripts
Some .so files are actually GNU ld linker scripts, e.g. the libtbb.so file is an ASCII text file with these contents:
INPUT (libtbb.so.2)
Some more complex builds may not support this. For example, if you include -v in the compiler options, you can see that the mainwin gcc wrapper mwdip discards linker-script command files from the verbose output list of libraries to link in. A simple workaround is to replace the linker-script input command file with a copy of the file instead (or a symlink), e.g.
cp libtbb.so.2 libtbb.so
Or you could replace the -l argument with the full path of the .so, e.g. instead of -ltbb do /home/foo/tbb-4.3/linux/lib/intel64/gcc4.4/libtbb.so.2
Befriending templates...
Given the code snippet of a template type with a friend operator (or function);
template <typename T>
class Foo {
friend std::ostream& operator<< (std::ostream& os, const Foo<T>& a);
};
The operator<< is being declared as a non-template function. For every type T used with Foo, there needs to be a non-templated operator<<. For example, if there is a type Foo<int> declared, then there must be an operator implementation as follows;
std::ostream& operator<< (std::ostream& os, const Foo<int>& a) {/*...*/}
Since it is not implemented, the linker fails to find it and results in the error.
To correct this, you can declare a template operator before the Foo type and then declare, as a friend, the appropriate instantiation. The syntax is a little awkward, but it looks as follows;
// forward declare the Foo
template <typename>
class Foo;
// forward declare the operator <<
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&);
template <typename T>
class Foo {
friend std::ostream& operator<< <>(std::ostream& os, const Foo<T>& a);
// note the required <> ^^^^
// ...
};
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&)
{
// ... implement the operator
}
The above code limits the friendship of the operator to the corresponding instantiation of Foo, i.e. the operator<< <int> instantiation is limited to accessing the private members of the Foo<int> instantiation.
Alternatives include;
Allowing the friendship to extend to all instantiations of the templates, as follows;
template <typename T>
class Foo {
template <typename T1>
friend std::ostream& operator<<(std::ostream& os, const Foo<T1>& a);
// ...
};
Or, the implementation for the operator<< can be done inline inside the class definition;
template <typename T>
class Foo {
friend std::ostream& operator<<(std::ostream& os, const Foo& a)
{ /*...*/ }
// ...
};
Note that when the declaration of the operator (or function) only appears in the class, the name is not available for "normal" lookup, only for argument-dependent lookup. From cppreference;
A name first declared in a friend declaration within class or class template X becomes a member of the innermost enclosing namespace of X, but is not accessible for lookup (except argument-dependent lookup that considers X) unless a matching declaration at the namespace scope is provided...
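To illustrate that consequence, here is a minimal usage sketch (assuming the in-class friend definition variant above; the calls shown are illustrative and not part of the original code listing);
Foo<int> f;
std::cout << f;               // OK: argument-dependent lookup finds the friend via the Foo<int> argument
operator<<(std::cout, f);     // OK: an unqualified call still uses argument-dependent lookup
// ::operator<<(std::cout, f); // error: the friend is not visible to ordinary (qualified) lookup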
There is further reading on template friends at cppreference and the C++ FAQ.
Code listing showing the techniques above.
As a side note to the failing code sample, g++ warns about this as follows;
warning: friend declaration 'std::ostream& operator<<(...)' declares a non-template function [-Wnon-template-friend]
note: (if this is not what you intended, make sure the function template has already been declared and add <> after the function name here)
When your include paths are different
Linker errors can happen when a header file and its associated shared library (.lib file) go out of sync. Let me explain.
How do linkers work? The linker matches a function declaration (declared in the header) with its definition (in the shared library) by comparing their signatures. You can get a linker error if the linker doesn't find a function definition that matches perfectly.
Is it possible to still get a linker error even though the declaration and the definition seem to match? Yes! They might look the same in source code, but it really depends on what the compiler sees. Essentially you could end up with a situation like this:
// header1.h
typedef int Number;
void foo(Number);
// header2.h
typedef float Number;
void foo(Number); // this only looks the same lexically
Note how even though both function declarations look identical in source code, they are really different according to the compiler.
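As a concrete sketch of what the compiler actually sees once the typedefs are substituted (the mangled names below follow the Itanium ABI used by GCC/Clang and are shown purely for illustration);
void foo(int);    // what the library was built against (header1.h); mangles to _Z3fooi
void foo(float);  // what your program references (header2.h); mangles to _Z3foof
// The linker matches mangled names, so a reference to _Z3foof is never satisfied by a definition of _Z3fooi.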
You might ask how one ends up in a situation like that? Include paths of course! If, when compiling the shared library, the include path leads to header1.h and you end up using header2.h in your own program, you'll be left scratching your header wondering what happened (pun intended).
An example of how this can happen in the real world is explained below.
Further elaboration with an example
I have two projects: graphics.lib and main.exe. Both projects depend on common_math.h. Suppose the library exports the following function:
// graphics.lib
#include "common_math.h"
void draw(vec3 p) { ... } // vec3 comes from common_math.h
And then you go ahead and include the library in your own project.
// main.exe
#include "other/common_math.h"
#include "graphics.h"
int main() {
draw(...);
}
Boom! You get a linker error and you have no idea why it's failing. The reason is that the two projects use different versions of the same include, common_math.h (I have made it obvious here in the example by including a different path, but it might not always be so obvious. Maybe the include path is different in the compiler settings).
Note in this example, the linker would tell you it couldn't find draw(), when in reality you know it obviously is being exported by the library. You could spend hours scratching your head wondering what went wrong. The thing is, the linker sees a different signature because the parameter types are slightly different. In the example, vec3 is a different type in both projects as far as the compiler is concerned. This could happen because they come from two slightly different include files (maybe the include files come from two different versions of the library).
Debugging the linker
DUMPBIN is your friend, if you are using Visual Studio. I'm sure other compilers have other similar tools.
The process goes like this:
Note the weird mangled name given in the linker error (e.g. draw@graphics@XYZ).
Dump the exported symbols from the library into a text file.
Search for the exported symbol of interest, and notice that the mangled name is different.
Pay attention to why the mangled names ended up different. You would be able to see that the parameter types are different, even though they look the same in the source code.
Work out why they are different. In the example given above, they are different because of different include files.
[1] By project I mean a set of source files that are linked together to produce either a library or an executable.
Inconsistent UNICODE definitions
A Windows UNICODE build is built with TCHAR etc. defined as wchar_t etc. When not building with UNICODE defined, the build has TCHAR defined as char, etc. These UNICODE and _UNICODE defines affect all the "T" string types; LPTSTR, LPCTSTR and their ilk.
Building one library with UNICODE defined and attempting to link it in a project where UNICODE is not defined will result in linker errors since there will be a mismatch in the definition of TCHAR; char vs. wchar_t.
The error usually involves a function taking a value of a char or wchar_t derived type; these could include std::basic_string<> etc. as well. When browsing through the affected function in the code, there will often be a reference to TCHAR or std::basic_string<TCHAR> etc. This is a tell-tale sign that the code was originally intended for both a UNICODE and a Multi-Byte Character (or "narrow") build.
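As a minimal sketch of how such a mismatch looks to the linker (the function name here is hypothetical, not from any particular library);
// shared header, included by both the library and the consuming project
#include <windows.h>
void log_message(LPCTSTR msg); // LPCTSTR is const wchar_t* when UNICODE is defined, const char* otherwise
// A library built with UNICODE defined exports log_message(const wchar_t*),
// while a project built without it references log_message(const char*);
// the signatures differ, so the reference remains unresolved.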
To correct this, build all the required libraries and projects with a consistent definition of UNICODE (and _UNICODE).
This can be done with either;
#define UNICODE
#define _UNICODE
Or in the project settings;
Project Properties > General > Project Defaults > Character Set
Or on the command line;
/DUNICODE /D_UNICODE
The reverse is applicable as well: if UNICODE is not intended to be used, make sure the defines are not set and/or the multi-byte character setting is used in the projects, and that this is applied consistently.
Do not forget to be consistent between the "Release" and "Debug" builds as well.
Clean and rebuild
A "clean" of the build can remove the "dead wood" that may be left lying around from previous builds, failed builds, incomplete builds and other build system related build issues.
In general the IDE or build will include some form of "clean" function, but this may not be correctly configured (e.g. in a manual makefile) or may fail (e.g. the intermediate or resultant binaries are read-only).
Once the "clean" has completed, verify that the "clean" has succeeded and all the generated intermediate file (e.g. an automated makefile) have been successfully removed.
This process can be seen as a final resort, but is often a good first step; especially if the code related to the error has recently been added (either locally or from the source repository).
Missing "extern" in const variable declarations/definitions (C++ only)
For people coming from C it might be a surprise that in C++ global const variables have internal (or static) linkage. In C this was not the case, as all global variables are implicitly extern (i.e. when the static keyword is missing).
Example:
// file1.cpp
const int test = 5; // in C++ same as "static const int test = 5"
int test2 = 5;
// file2.cpp
extern const int test;
extern int test2;
void foo()
{
int x = test; // linker error in C++ , no error in C
int y = test2; // no problem
}
The correct approach would be to use a header file and include it in file2.cpp and file1.cpp:
extern const int test;
extern int test2;
Alternatively, one could declare the const variable in file1.cpp with an explicit extern.
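For instance, a minimal sketch of that alternative;
// file1.cpp
extern const int test = 5; // the explicit extern gives the const variable external linkage
int test2 = 5;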
Even though this is a pretty old question with multiple accepted answers, I'd like to share how to resolve an obscure "undefined reference to" error.
Different versions of libraries
I was using an alias to refer to std::filesystem::path: filesystem has been in the standard library since C++17, but my program also needed to compile as C++14, so I decided to use a type alias:
#if (defined _GLIBCXX_EXPERIMENTAL_FILESYSTEM) //is the included filesystem library experimental? (C++14 and newer: <experimental/filesystem>)
using path_t = std::experimental::filesystem::path;
#elif (defined _GLIBCXX_FILESYSTEM) //not experimental (C++17 and newer: <filesystem>)
using path_t = std::filesystem::path;
#endif
Let's say I have three files: main.cpp, file.h, file.cpp:
file.h #include's <experimental/filesystem> and contains the code above
file.cpp, the implementation of file.h, #include's "file.h"
main.cpp #include's <filesystem> and "file.h"
Note the different libraries used in main.cpp and file.h. Since main.cpp #include'd "file.h" after <filesystem>, the version of filesystem used there was the C++17 one. I used to compile the program with the following commands:
$ g++ -g -std=c++17 -c main.cpp -> compiles main.cpp to main.o
$ g++ -g -std=c++17 -c file.cpp -> compiles file.cpp and file.h to file.o
$ g++ -g -std=c++17 -o executable main.o file.o -lstdc++fs -> links main.o and file.o
This way any function contained in file.o and used in main.o that required path_t gave "undefined reference" errors because main.o referred to std::filesystem::path but file.o to std::experimental::filesystem::path.
Resolution
To fix this I just needed to change <experimental/filesystem> in file.h to <filesystem>.
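An alternative sketch (not part of the original fix) is to select the header itself in the shared header, so that every translation unit ends up using the same type; this relies on __has_include, which GCC 5+ and Clang support and which C++17 standardizes;
// file.h
#pragma once
#if __has_include(<filesystem>)
  #include <filesystem>
  using path_t = std::filesystem::path;
#elif __has_include(<experimental/filesystem>)
  #include <experimental/filesystem>
  using path_t = std::experimental::filesystem::path;
#else
  #error "no filesystem implementation available"
#endif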
When linking against shared libraries, make sure that the used symbols are not hidden.
The default behavior of gcc is that all symbols are visible. However, when the translation units are built with option -fvisibility=hidden, only functions/symbols marked with __attribute__ ((visibility ("default"))) are external in the resulting shared object.
You can check whether the symbols you are looking for are external by invoking:
# -D shows (global) dynamic symbols that can be used from the outside of XXX.so
nm -D XXX.so | grep MY_SYMBOL
The hidden/local symbols are shown by nm with a lowercase symbol type, for example t instead of T for the code section:
nm XXX.so
00000000000005a7 t HIDDEN_SYMBOL
00000000000005f8 T VISIBLE_SYMBOL
You can also use nm with the option -C to demangle the names (if C++ was used).
Similar to Windows DLLs, one would mark public functions with a define, for example DLL_PUBLIC, defined as:
#define DLL_PUBLIC __attribute__ ((visibility ("default")))
DLL_PUBLIC int my_public_function(){
...
}
This roughly corresponds to the Windows/MSVC version:
#ifdef BUILDING_DLL
#define DLL_PUBLIC __declspec(dllexport)
#else
#define DLL_PUBLIC __declspec(dllimport)
#endif
More information about visibility can be found on the gcc wiki.
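A combined sketch of the two variants above, in the spirit of the gcc wiki (the macro and function names are the same illustrative ones used above);
#if defined(_WIN32)
  #ifdef BUILDING_DLL
    #define DLL_PUBLIC __declspec(dllexport)
  #else
    #define DLL_PUBLIC __declspec(dllimport)
  #endif
#else
  #define DLL_PUBLIC __attribute__ ((visibility ("default")))
#endif
DLL_PUBLIC int my_public_function();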
When a translation unit is compiled with -fvisibility=hidden, the resulting symbols still have external linkage (shown with an uppercase symbol type by nm) and can be used for external linkage without problem if the object files become part of a static library. The linkage becomes local only when the object files are linked into a shared library.
To find which symbols in an object file are hidden run:
>>> objdump -t XXXX.o | grep hidden
0000000000000000 g F .text 000000000000000b .hidden HIDDEN_SYMBOL1
000000000000000b g F .text 000000000000000b .hidden HIDDEN_SYMBOL2
Functions or class-methods are defined in source files with the inline specifier.
An example:-
main.cpp
#include "gum.h"
#include "foo.h"
int main()
{
gum();
foo f;
f.bar();
return 0;
}
foo.h (1)
#pragma once
struct foo {
void bar() const;
};
gum.h (1)
#pragma once
extern void gum();
foo.cpp (1)
#include "foo.h"
#include <iostream>
inline /* <- wrong! */ void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (1)
#include "gum.h"
#include <iostream>
inline /* <- wrong! */ void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
If you specify that gum (similarly, foo::bar) is inline at its definition then
the compiler will inline gum (if it chooses to), by:-
not emitting any unique definition of gum, and therefore
not emitting any symbol by which the linker can refer to the definition of gum, and instead
replacing all calls to gum with inline copies of the compiled body of gum.
As a result, if you define gum inline in a source file gum.cpp, it is
compiled to an object file gum.o in which all calls to gum are inlined
and no symbol is defined by which the linker can refer to gum. When you
link gum.o into a program together with another object file, e.g. main.o
that make references to an external symbol gum, the linker cannot resolve
those references. So the linkage fails:
Compile:
g++ -c main.cpp foo.cpp gum.cpp
Link:
$ g++ -o prog main.o foo.o gum.o
main.o: In function `main':
main.cpp:(.text+0x18): undefined reference to `gum()'
main.cpp:(.text+0x24): undefined reference to `foo::bar() const'
collect2: error: ld returned 1 exit status
You can only define gum as inline if the compiler can see its definition in every source file in which gum may be called. That means its inline definition needs to exist in a header file that you include in every source file
you compile in which gum may be called. Do one of two things:
Either don't inline the definitions
Remove the inline specifier from the source file definition:
foo.cpp (2)
#include "foo.h"
#include <iostream>
void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (2)
#include "gum.h"
#include <iostream>
void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Rebuild with that:
$ g++ -c main.cpp foo.cpp gum.cpp
imk@imk-Inspiron-7559:~/develop/so/scrap1$ g++ -o prog main.o foo.o gum.o
imk@imk-Inspiron-7559:~/develop/so/scrap1$ ./prog
void gum()
void foo::bar() const
Success.
Or inline correctly
Inline definitions in header files:
foo.h (2)
#pragma once
#include <iostream>
struct foo {
void bar() const { // In-class definition is implicitly inline
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
};
// Alternatively...
#if 0
struct foo {
void bar() const;
};
inline void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
#endif
gum.h (2)
#pragma once
#include <iostream>
inline void gum() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Now we don't need foo.cpp or gum.cpp:
$ g++ -c main.cpp
$ g++ -o prog main.o
$ ./prog
void gum()
void foo::bar() const

Errors in linking Fortran code that imports a MAT-file [duplicate]

This question already has an answer here:
Reading data from matlab files into C
I have to import a MAT-file in a Fortran program. I followed the example file but I am facing some problems while linking. The compilation happens fine.
Minimal code:
#include "fintrf.h"
PROGRAM main
USE ssa
USE dmotifs
USE param
IMPLICIT NONE
! MAT-FILE Declarations !
INTEGER matOpen, matGetDir
INTEGER matGetVariableInfo
INTEGER mp, dir, adir(100), pa
INTEGER mxGetM, mxGetN, matClose
INTEGER ndir, i, clstat
CHARACTER*32 names(100)
!===========================!
if(all(fnames(:)%fn .NE. argfun)) then
write(*,*) "No such motif: ",argfun
write(*,*) "Input format-> main <motifname>"
stop
else
fin=fchton(argfun)
y0=nM2m*analys(p,argfun)
! ==> OPEN MAT-file <== !
mp=matOpen('./PRMS_lxr_29Apr15.mat','r')
if (mp .eq. 0) then
write(6,*) "Can't open MAT-file"
stop
end if
dir = matgetdir(mp, ndir)
if (dir .eq. 0) then
write(6,*) "Can't read MAT-file-directory."
stop
endif
call mxCopyPtrToPtrArray(dir, adir, ndir)
do 20 i=1,ndir
call mxCopyPtrToCharacter(adir(i), names(i), 32)
20 continue
write(6,*) 'Directory of Mat-file:'
do 30 i=1,ndir
write(6,*) names(i)
30 continue
write(6,*) 'Getting Header info from first array.'
pa = matGetVariableInfo(mp, names(1))
write(6,*) 'Retrieved ', names(1)
write(6,*) ' With size ', mxGetM(pa), '-by-', mxGetN(pa)
call mxDestroyArray(pa)
clstat=matClose(mp)
end if
END PROGRAM main
I am using gfortran 4.8.3 for compiling+linking using the default command:
gfortran main.f90 dmotifs.o param.o ssa.o -o main
This code compiles fine (without linking) when I do not include #include "fintrf.h"; otherwise the compiler says
Warning: main.f90:1: Illegal preprocessor directive
I tried renaming fintrf.h to fintrf.f90 but it did not make any difference. Nonetheless, during linking I am getting these errors:
main.f90:(.text+0x3ea): undefined reference to `matopen_'
main.f90:(.text+0x487): undefined reference to `matgetdir_'
main.f90:(.text+0x52b): undefined reference to `mxcopyptrtoptrarray_'
main.f90:(.text+0x583): undefined reference to `mxcopyptrtocharacter_'
main.f90:(.text+0x71b): undefined reference to `matgetvariableinfo_'
main.f90:(.text+0x804): undefined reference to `mxgetm_'
main.f90:(.text+0x855): undefined reference to `mxgetn_'
main.f90:(.text+0x89c): undefined reference to `mxdestroyarray_'
main.f90:(.text+0x8b0): undefined reference to `matclose_'
collect2: error: ld returned 1 exit status
Do I need a makefile or add additional arguments in the compile command?
EDIT:
I added the -cpp option and that eliminates the problem of Illegal preprocessor directive
Now when I am compiling with paths to the MATLAB external components (where fintrf.h is), I am still getting the same error.
gfortran main.f90 dmotifs.o param.o ssa.o -I/usr/local/matlab2008a/extern/include -L/usr/local/matlab2008a/extern/lib -cpp -o main
If I provide library path to /usr/local/matlab2008a/bin/glnxa64 that contains other matlab libraries including libmat.so, I still get the same errors.
For lower case file extensions *.f90 or *.f the pre-processor is typically deactivated. To enable that, either rename the (main) file to have a capital extension *.F90 or *.F, or provide the corresponding command-line option (-cpp for gfortran, -fpp for ifort).
Assuming the missing subroutines/functions are actually declared in fintrf.h, this should solve your problem.
You should additionally tell the compiler to link against the libraries containing the Matlab functions.
As pointed out by Alexander Vogt, the compiler requires the -cpp option for the pre-processor to recognize the header file and not treat it as illegal.
Compiling requires fintrf.h, which is usually located in <matlabroot>/extern/include, and the essential libraries are present in <matlabroot>/bin/<arch>/.
But just specifying these paths does not work; the exact MATLAB libraries must also be named, namely libmat.so and libmx.so.
These libraries are in turn dependent on other libraries so another flag is required to set the rpath.
Finally it works with following command:
gfortran main.f90 dmotifs.o param.o ssa.o -I/usr/local/matlab2008a/extern/include -L/usr/local/matlab2008a/bin/glnxa64 -cpp -o main -lmat -lmx -Wl,-rpath /usr/local/matlab2008a/bin/glnxa64/
or in general
gfortran program.f90 -I<matlabroot>/extern/include -L<matlabroot>/bin/<arch> -cpp -lmat -lmx -Wl,-rpath <matlabroot>/bin/<arch> -o program.out
Also see this post that is about the same problem in C.

Ada scheduling in EDF

For some reason this EDF example doesn't compile. I'm using GNAT; I tried it on Windows 8.1 and on Debian, with no result.
with Ada.Real_Time, ada.Task_Identification,ada.Dispatching.EDF; use Ada.Real_Time, ada.Task_Identification,ada.Dispatching.EDF;
Procedure exemple_ordon is
Task tache_Periodique;
Task body tache_Periodique is
Echeance: Time_Span := Milliseconds(30); heure: Time;
Begin
heure:= Clock; Set_Deadline(Clock + Echeance);
Loop
heure := heure + Echeance;Delay_Until_And_Set_Deadline(heure,Echeance);
end loop;
End tache_Periodique;
Begin Null; End exemple_ordon ;
The error message:
gnatmake -d -PC:\Users\Awk\default.gpr exemple_ordon.adb
gcc -c -g -O2 -I- -gnatA C:\Users\Awk\exemple_ordon.adb
Edf is not supported in this configuration
compilation abandoned
gnatmake: "C:\Users\Awk\exemple_ordon.adb" compilation error
The message Edf is not supported in this configuration tells the story!
I don’t have access to any supported version of GNAT, but the file a-disedf.ads (the spec of Ada.Dispatching.EDF) in FSF GCC 4.9.0 contains the comment
-- This unit is not implemented in typical GNAT implementations that lie on
-- top of operating systems, because it is infeasible to implement in such
-- environments.
-- If a target environment provides appropriate support for this package,
-- then the Unimplemented_Unit pragma should be removed from this spec and
-- an appropriate body provided.
so it’s possible that AdaCore may provide an implementation for some of the more real-time targets (e.g. VxWorks) for their paying customers.
Do you have access to AdaCore’s academic program (GAP)?
If you really need EDF scheduling, Concurrency in Ada by Burns and Wellings has an example (I have the paperback second edition); you can pick it up cheaply at AbeBooks.
If you ‘just’ need information about general tasking, there are several sources listed at the Ada Information Clearinghouse.
Many people developing high-integrity real time software in Ada use the Ravenscar Profile.