buildroot does not copy C++ headers to sysroot

I started to use Buildroot for my project. The project uses the CodeSourcery ARM 2013.05 toolchain.
Everything works fine: I have created a bootable kernel image and a proper rootfs. Adding C-based autotools packages is no problem. The programs created by those packages are on the target and run well.
The problems start when I add a C++ package. It fails to compile with the error "unsafe usage of /usr/include". Looking at the output of configure shows this:
checking string usability... no
checking string presence... no
checking for string... no
checking vector usability... no
checking vector presence... no
checking for vector... no
When I look for the C++ headers in Buildroot's output folder (output/host) I cannot find any of them.
So I suspect Buildroot is not installing/copying the C++ headers.
Note: when configuring and building the package manually with the external toolchain - so not using Buildroot - everything is fine, as the C++ headers are available in the external toolchain.
What am I doing wrong here?

Buildroot is definitely copying the C++ headers, and people are building C++ applications every day with Buildroot.
However, if you get "unsafe usage of /usr/include" when building your application, it means that the Makefile of your application is broken: it passes -I/usr/include in the CFLAGS, which is really bad when cross-compiling. Fix this, and your C++ header problem will go away.
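If you want to convince yourself that the toolchain itself is fine, you can check it directly from the shell. A rough sketch (the arm-none-linux-gnueabi- prefix and the output/host/usr/bin location are guesses for a CodeSourcery external toolchain; adjust to wherever Buildroot put your cross g++):
output/host/usr/bin/arm-none-linux-gnueabi-g++ -print-sysroot
# the C++ headers live inside this sysroot, not in output/host/usr/include
echo '#include <vector>' | output/host/usr/bin/arm-none-linux-gnueabi-g++ -x c++ -E - >/dev/null && echo "C++ headers found"
# if this prints "C++ headers found", the toolchain is fine and the real problem is the -I/usr/include in your package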

Related

How do I specify a compiler in a different directory than MinGW in a NetBeans Toolchain module

I am attempting to make a simple ToolChain for the Borland 4.5 compiler with the Pharlap extender based on instructions at the Apache website: https://netbeans.apache.org/kb/docs/cnd/toolchain.html
I am basing the ToolChain on MinGW so that I can use those tools for make.
I cannot get the new toolset to find the Borland compilers the way MinGW is automatically discovered. If I use g++ as the compiler name, NetBeans finds that OK. The issue seems to be with the directory. I'm assuming an installation directory of C:\BC45\BIN and attempting to find BCC32.EXE in that directory.
When I run (clean, build, then run) the test installation of the NetBeans module, I see my new toolchain in C/C++, but the field for the C++ compiler is always empty unless I specify a program in the c:\mingw\bin (base) directory.
I have tried variations on the following in my cpp xml file, making sure from time to time that it works just fine with g++ as the name:
<cpp>
<compiler name="bcc32.exe"/>
<recognizer pattern=".*[\\/]bc45.*[\\/]bin[\\/]?$"/>
</cpp>
I haven't found documents beyond the Apache website. I'm basing my guesses on what I have found in: %appdata%\NetBeans\12.4\config\CND\ToolChain\MinGW.xml
The XML above was OK as far as it goes. The example at apache.org only fills in the subclass of c++ (cpp). When I also subclassed the c, assembler, and linker, I ended up with the fields in the C++ options automatically populating as expected.
vcc4n (https://sourceforge.net/p/vcc4n/wiki/Home/) has a good example of implementing the four important classes for those build tools, but really just continuing the example to create and fill in the additional XML as specified in layer.xml is straightforward enough.

How do I find what the Eclipse Cross Settings Prefix should be?

I have installed the latest version of Eclipse on my Windows 7 64-bit machine, along with the MinGW compiler. In setting up a Hello World project, all goes well until the Cross Settings page asks me for the Prefix and the Path. The Path is obvious: it's the path to the compiler. However, I haven't the slightest idea what the Prefix is, and Googling for much of the day hasn't enlightened me, other than finding that a lot of other people have asked the same question. Unfortunately the answers I've found appear to be for specific hardware. All I want to do is produce an executable that will run on a 32-bit or 64-bit Windows machine.
So, what is the Prefix and how do I find what it should be?
What is probably happening here is that CDT is not locating your MinGW or GCC installation.
Simple (but unlikely) reasons - covering the bases
There can be many reasons, starting with the simple - but unlikely at this point - ones:
You don't have mingw installed
You don't have GCC installed
This can be tested easily by starting a shell and running gcc --version.
CDT heuristic not working
Then there are more complicated reasons, such as your installation not being detected because the heuristic in CDT did not work on your machine. To find the correct settings, CDT will:
Check $MINGW_HOME/bin for existence
Check <Eclipse install location>/mingw/bin for existence
Look for mingw32-gcc.exe or x86_64-w64-mingw32-gcc.exe on the PATH
Check C:\MinGW for existence
If CDT cannot find any of the above, you may end up in the situation you are in.
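You can mirror these checks by hand from a Windows command prompt; roughly (C:\eclipse below stands in for wherever your Eclipse is actually installed):
rem 1. Is MINGW_HOME set, and does its bin directory exist?
echo %MINGW_HOME%
dir "%MINGW_HOME%\bin"
rem 2. An Eclipse-bundled MinGW
dir "C:\eclipse\mingw\bin"
rem 3. Is a MinGW gcc anywhere on the PATH?
where mingw32-gcc.exe x86_64-w64-mingw32-gcc.exe
rem 4. The default install location
dir C:\MinGW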
So, how to fix it!
Option 1
Start Eclipse from within a MinGW-configured shell, i.e. the one from which you can successfully run gcc --version. That way Eclipse will inherit an environment that can launch GCC successfully.
Option 2
Set your environment up so that MINGW_HOME is properly defined. You can do this at the system level or within the build settings in Eclipse CDT. For example, on my machine in the build settings for the project (Right-click on the project, choose Properties, then choose C/C++ -> Environment) I have set:
MINGW_HOME to C:\MinGW
MSYS_HOME to C:\MinGW\msys\1.0
PATH to ${MINGW_HOME}\bin;${MSYS_HOME}\bin;<my normal path>
and this allows Eclipse to launch gcc as part of the build process.
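If you would rather do it at the system level, the equivalent from a Windows command prompt (launching Eclipse from that same prompt afterwards) is roughly this sketch, using the same values as above:
set MINGW_HOME=C:\MinGW
set MSYS_HOME=C:\MinGW\msys\1.0
set PATH=%MINGW_HOME%\bin;%MSYS_HOME%\bin;%PATH%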
NOTE: The above settings were made automatically on my machine because MinGW was correctly located by the heuristic.
(A screenshot of the build settings was attached here, in case it helps.)
Prefix: Under the hood
To try and answer the part of your original question about what Prefix is, I provide the information below. It is unlikely to be particularly helpful, though.
Prefix, in GCC parlance, refers to the directory under which all the related GCC files are placed. With different prefixes you can have multiple GCCs installed on your machine.
From the GCC FAQ:
It may be desirable to install multiple versions of the compiler on the same system. This can be done by using different prefix paths at configure time and a few symlinks.
The concept comes from autotools in general. Autotools is the standard GNU build system (where, simplified, you do ./configure && make). The prefix is the command-line option to the configure stage (--prefix) that specifies where to install the tool. GCC uses --prefix, as above, to allow multiple GCCs on your system.
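As a sketch of how that plays out for GCC itself (the install locations here are just illustrative):
../gcc-4.6.3/configure --prefix=/opt/gcc-4.6
make && make install
export PATH=/opt/gcc-4.6/bin:$PATH   # repeat with a different --prefix for another version, and switch by reordering PATH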
If you really want to know more about this, read the autobook. The section on configuring covers --prefix:
‘--prefix=prefix’
The --prefix option is one of the most frequently used. If generated ‘Makefile’s choose to observe the argument you pass with this option, it is possible to entirely relocate the architecture-independent portion of a package when it is installed. For example, when installing a package like Emacs, the following command line will cause the Emacs Lisp files to be installed in ‘/opt/gnu/share’:
$ ./configure --prefix=/opt/gnu
It is important to stress that this behavior is dependent on the generated files making use of this information. For developers writing these files, Automake simplifies this process a great deal. Automake is introduced in Introducing GNU Automake.
Additionally, MinGW takes advantage of all these prefix options. Read more about that on MinGW's site. But the short of it is that the main prefix for MinGW is /mingw.

How to compile distributable Fortran binaries on Mac OS X Mountain Lion?

Since Apple have stopped distributing gfortran with Xcode, how should I compile architecture-independent Fortran code? I have Mac OS X Mountain Lion (10.8) and Xcode 4.4, with the Command Line Tools package installed.
Apple's Native Compilers
As far as I can tell, the Xcode C / C++ / ObjC compilers use a fork of the GNU Compiler Collection, with LLVM as a backend; the latter, I figure, enables compiling and optimising "universal" binaries for both Intel and PPC architectures.
3rd party binary Fortran compilers
HPC
I've only found a single website that distributes a binary version of gfortran specifically for Mountain Lion: the HPC website. However, I failed to get this to compile SciPy, and later saw in SciPy's README that it is "known to generate buggy scipy binaries".
CRAN/R
SciPy's recommended (free) Fortran compiler is the one on CRAN's R server, but this has not been updated for Mountain Lion yet. They also provide instructions and a script for Building a Universal Compiler, but, again, that hasn't been updated for Mountain Lion either.
G95
The G95 project hasn't had an update since 2010, so I didn't try it. Has anyone tried this on Mountain Lion?
MacPorts
I guess this will be the easiest way to get gfortran installed, but port search gfortran comes up with nothing, and I've not had any joy with MacPorts in the past (no offence to MacPorts; it looks like a very active project, but I've been spoilt by Linux package managers, my favourite being aptitude), so on Mac OS X I've compiled software and libraries from source code in the past. Never been a problem 'til now...
Building a Fortran compiler
Having dug around on the internet a lot in the last couple of days, I've found other Fortran compilers, but I've failed to get any to cross-compile universal binaries, or to compile SciPy.
GCC - The Gnu Compiler Collection
I compiled the entire GCC collection (v4.6.3), including autotools, automake, libtool and m4 - like the GCC wiki and this blog describe - but the resulting compilers didn't compile universal binaries, probably because LLVM wasn't used as a backend.
DragonEgg
DragonEgg is a "gcc plugin that replaces GCC's optimisers and code-generators ... with LLVM". This looks interesting, but I don't know how I could use it to compile 'llvm-gfortran-4.x'. Can this be done?
Compatibility
Libraries
The compiler that comes with Xcode is (a fork of?) GCC v4.2, but GCC's current release and development branches are versions 4.6 and 4.7, respectively. Apparently a GNU license change, or something, stopped Apple from updating to more modern versions of GCC. So, if I were to build dynamic libraries with GCC's gfortran v4.6, could they then be linked with C code compiled by Xcode's native compiler? At a minimum, I figure the resulting Mach-O binaries need both x86_64 and i386 code paths. Does GCC provide backwards compatibility with Apple's (forks of?) GCC? I know gfortran has the -ff2c flag, but is this stable across versions?
Compile flags
The GCC Fortran compiler I built from source didn't support the -arch compile flag. I had been including the flags -arch x86_64 -arch i386 in both the CFLAGS and FFLAGS environment variables on earlier OS X versions (Snow Leopard to Lion). Python's distutils, and probably other OS X build tools, expect these flags to work when configured to build apps or frameworks using Xcode's universal SDK.
In case you're wondering what compile flags I use: I've uploaded the script I use to pastebin; I source it before compiling anything, with source ~/.bash_devenv.
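The relevant lines boil down to something like this (just the flags mentioned above, not the full script):
export CFLAGS="-arch x86_64 -arch i386"
export FFLAGS="-arch x86_64 -arch i386"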
The Ideal OSX Fortran Compiler
Creates PPC and Intel (32- and 64-bit) universal binaries, specified using the -arch flags.
Makes binaries compatible with Xcode's linker.
Compiles SciPy, giving no errors (compatible with numpy's distutils and f2py).
I don't use Xcode so much, but integration with it would surely benefit other users. Even Intel are still having problems integrating ifort into Xcode 4.4, so this is not something I expect to work.
If you read all the above, then thank you! You can probably tell that I'm not averse to building my own Fortran compiler from source, but is it even possible? Have I missed something? A configure flag maybe? And if such a compiler is not available yet, then why not?!
(Update:) Apple's GCC
Apple provide the source code for their patched version of GCC, at opensource.apple.com. This actually includes the source code for gfortran, but what do you know - it doesn't compile (easily). I'm in the process of writing a build script to get this to work. Unfortunately, I've had to apply a couple of patches, and learn about "the Apple way" of building GNU software. This is the way to go I think. Any reasons why it shouldn't be? I'll update with an answer if I get it to work...
I managed to compile after installing gfortran from http://r.research.att.com/tools/gcc-42-5666.3-darwin11.pkg, as explained here. I had to try to open the package a couple of times, though; the first time it said that only apps from the App Store can be installed. After installing gfortran, python setup.py build and python setup.py install worked fine. The unit tests of SciPy give a fairly high number of failures, though; I'm not sure whether that's normal.
Ran 5481 tests in 82.079s
FAILED (KNOWNFAIL=13, SKIP=42, errors=11, failures=72)
<nose.result.TextTestResult run=5481 errors=11 failures=72>
In case you didn't already notice this: In newer versions of Xcode you have to explicitly install command line tools in the following way:
Preferences -> Downloads -> Components
And then click the "install" button for command line tools. This includes gfortran:
> gfortran -v
Using built-in specs.
Target: i686-apple-darwin10
Thread model: posix
gcc version 4.2.1 (Apple Inc. build 5664)
Admittedly, this does not solve all of my Fortran needs (in some cases "./configure" scripts will complain that they cannot "compile a simple Fortran program").
You could use brew (or Homebrew) to install gfortran.
$ brew install gfortran
I know you said you don't like MacPorts, but if you install the gcc48 port, it does in fact include gfortran (although you'll also have to do sudo port select --set gcc mp-gcc48 to get it to set up the symlink named gfortran).
Also, FWIW, the MacPorts option is not necessarily a binary - MacPorts can actually build it from source, which is why it sometimes takes a while. On the other hand, it also sometimes seems to get archived binaries from somewhere, but I think it depends on what the original author of the portfile uploaded.
I ended up compiling gfortran from the source code provided at Apple's developer tools source code page. This seems to be working okay now - I've successfully compiled x86-64 and i386/i686 LAPACK, ATLAS and BLAS Fortran libraries - but there are some ranlib tests which fail when running make -k test in the build directories. (I could provide more info on that, on pastebin or somewhere, if someone wants...)
Build process
After asking the question, I downloaded Apple's llvmgcc42 source code tar archive, which includes the source code for the llvm/gcc C, C++, ObjC and Fortran compilers, and spent some time trying to compile a universal build of gfortran. The build takes about 30-60 minutes on my quad-core 2.8GHz Mac Pro and became quite an involved process, so I wrote a set of build scripts for it, which I've shared at github.com.
....
I'll keep a tar archive of my build here for the time being, if anyone would like a copy. (Updated 26-Sep-2012.) It'll only work if installed with a prefix of /usr/local/ though, unless you run install_name_tool on the executables and dylibs to change the prefix from /usr/local to wherever you want to put it. You can check the install names with otool -L filename (more info on the reasons for this is here).
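For reference, the relocation is roughly this kind of incantation (the file names below are illustrative, not the exact ones in the tarball):
otool -L /opt/gfortran/bin/gfortran   # see which /usr/local/... dylibs the executable references
install_name_tool -change /usr/local/lib/libgfortran.3.dylib /opt/gfortran/lib/libgfortran.3.dylib /opt/gfortran/bin/gfortran
install_name_tool -id /opt/gfortran/lib/libgfortran.3.dylib /opt/gfortran/lib/libgfortran.3.dylib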
The final build I'm now using also includes updates to the gcc/fortran and libgfortran directories, which I took from GNU GCC 4.2.4. I got these sources from my local GCC mirror. There were only minor changes between 4.2.1 and 4.2.4, and the build scripts include the patches needed to upgrade the code.
The build-gfortran.sh script I wrote downloads missing dependencies (mpfr and gmp), compiles and cross-compiles them, patches differing headers with architecture-dependent preprocessor macros, and runs lipo to create universal binaries and libraries, eventually supporting both i386 and x86_64 architectures. The process is similar for llvmCore, and then GCC. I mostly copied code from the build_llvm and build_gcc bash scripts provided with Apple's llvmgcc42, but some of it had to be modified, including a few lipo and install_name_tool commands.
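The lipo step itself is simple once you have the two single-architecture builds; roughly (paths illustrative):
lipo -create build-i386/gfortran build-x86_64/gfortran -output gfortran
lipo -info gfortran   # should report both i386 and x86_64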
The official way to compile Apple's GCC, using Xcode's gnumake, just didn't work for me. I thought this should work simply by adding "fortran" to the LANGUAGES variable in build_gcc.
As for compiling SciPy, I still can't get that building perfectly. I've had to use clang and clang++ as the C/C++ compilers, or else I get EXC_BAD_ACCESS malloc errors. I haven't tried the gcc/g++ compilers I built, just the system ones. This is as reported for Lion on the SciPy install page. I'm down to 11 errors and 1 failure, which are all raised from the same 3 function calls (_fitpack._bspleval, numeric.asarray, testing.utils.chk_same_position). I think that's pretty good, but I'd like every test to pass...

Cross-compiling Makefile: dealing with test programs

I'm trying to cross-compile several libraries from OSX to iOS. I've successfully cross-compiled libjpeg and libogg.
But I can't compile libvorbis because configure insists on creating and running a small test program. This obviously fails, because it creates an armv7 binary, fails to run it, and then interprets this as missing ogg libraries.
How do you usually deal with this kind of problem? I'm tempted to hack the configure script to work around these issues, but because of this kind of failure some features may be disabled. I'm also thinking of letting configure generate a native Makefile and then convert it to use the iOS toolchain, but this seems too error prone.
Any advice?
If you are cross-compiling anything that has more dependencies than libc (glibc) it becomes much more complicated. You need to have already cross-compiled all the dependencies. And the cross-compiler toolchain and all helper build programs and scripts need to know how to find those dependencies (the cross-compiled libraries and headers).
You need to have already cross-compiled libogg (and its dependencies) and installed them into the cross-compile root directory. The headers and libraries from your build system can't be used for the host (arm7) system. They must be kept separate.
Also, if you want to have shared object libraries (*.so) and not just static libraries then there is a whole new set of complications. For example, while a cross-compiler toolchain contains a cross-compiled libc as part of the toolchain, you still need a libc for the host system. The libc that is part of the toolchain can be used for this, but the way it is structured is different than on the host system. Sometimes people copy and re-arrange the files, but often people just compile and install a new glibc for the root.
Anyway, all that to say: the two errors you are seeing are because the configure script is not able to find a cross-compiled libogg library. If you haven't already, you need to cross-compile libogg (and its dependencies) and install them into your target root. Then you need to tell the configure script where your cross-compiled headers (yes, headers are architecture-specific) and libraries are in your target root, usually using CFLAGS, LDFLAGS, CXXFLAGS, etc. (NOT --prefix), though there may be other environment variables you need to set to affect things like pkg-config. After you have built each dependency, you then need to get the makefile to install it into the root. Usually this is done with make DESTDIR=[root] install, but some makefiles have their own mechanism (or no proper alternate install mechanism).
You may also need to override certain configure checks (using environment variables) that are poorly written and don't have good cross-compile defaults. These variables usually start with ac_cv_*
So the basic process is to do this for packages that you need (in dependency order):
export CFLAGS=-I[root]/usr/include LDFLAGS=-L[root]/usr/lib CXXFLAGS=-I[root]/usr/include
export ac_cv_[test1]=[yes|no] ac_cv_[test2]=[yes|no] ...
./configure --host=[arm7-blah-blah]
make
make DESTDIR=[root] install
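A filled-in version of the recipe for the libogg/libvorbis case might look like this (the root path, host triplet and cache variable are only illustrative; check config.log for the exact ac_cv_ names your configure uses):
export CFLAGS="-I/opt/ios-root/usr/include"
export CXXFLAGS="-I/opt/ios-root/usr/include"
export LDFLAGS="-L/opt/ios-root/usr/lib"
export ac_cv_func_malloc_0_nonnull=yes   # example of a check that cannot run when cross-compiling
./configure --host=armv7-apple-darwin --prefix=/usr
make
make DESTDIR=/opt/ios-root install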
Good luck. Once you feel comfortable with standard cross-compiling, then you will be ready to take on the real black art, the Canadian cross ;-)
I finally figured it out. I tricked configure by explicitly making it link with ogg (LDFLAGS="/usr/local/ios/lib/libogg-armv7.a" ./configure ...) and then removed the explicit reference to the library from the generated makefile.

netbeans c++ deployment

I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
I have seen your code; you are probably missing the XML files in the current folder where the executable is located. Paste them there and then run ./your-executable.
I recommend that you use a makefile to recompile on your target machine which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy, run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
Of course, you also have the option to statically build all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
What type of package is your NetBeans build creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type. Ubuntu - deb
Fedora/Redhat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.