Cross-compiling Makefile: dealing with test programs - iphone

I'm trying to cross-compile several libraries from OSX to iOS. I've successfully cross-compiled libjpeg and libogg.
But I can't compile libvorbis because configure insists on creating and running a small test program. This obviously fails, because it creates an armv7 binary, fails to run it, and then interprets this as missing ogg libraries.
How do you usually deal with this kind of problem? I'm tempted to hack the configure script to work around these issues, but I'm worried that when checks like this fail, configure silently disables features. I'm also considering letting configure generate a native Makefile and then converting it to use the iOS toolchain, but that seems too error-prone.
Any advice?

If you are cross-compiling anything that has more dependencies than libc (glibc), it becomes much more complicated. You need to have already cross-compiled all the dependencies, and the cross-compiler toolchain and all helper build programs and scripts need to know how to find those dependencies (the cross-compiled libraries and headers).
You need to have already cross-compiled libogg (and its dependencies) and installed them into the cross-compile root directory. The headers and libraries from your build system can't be used for the host (armv7) system; they must be kept separate.
Also, if you want to have shared object libraries (*.so) and not just static libraries then there is a whole new set of complications. For example, while a cross-compiler toolchain contains a cross-compiled libc as part of the toolchain, you still need a libc for the host system. The libc that is part of the toolchain can be used for this, but the way it is structured is different than on the host system. Sometimes people copy and re-arrange the files, but often people just compile and install a new glibc for the root.
Anyway, all that to say: the errors you are seeing are because the configure script is not able to find a cross-compiled libogg library. If you haven't already, you need to cross-compile libogg (and its dependencies) and install them into your target root. Then you need to tell the configure script where your cross-compiled headers (yes, headers are architecture-specific) and libraries are in your target root. Usually this is done with CFLAGS, LDFLAGS, CXXFLAGS, etc. (NOT --prefix), but there may be other environment variables you need to set to affect things like pkg-config. After you have built each dependency, you need to get the makefile to install it into the root. Usually this is done with make DESTDIR=[root] install, but some makefiles have their own mechanism (or no proper alternate install mechanism).
You may also need to override certain configure checks (using environment variables) that are poorly written and don't have good cross-compile defaults. These variables usually start with ac_cv_*.
So the basic process is to do this for packages that you need (in dependency order):
export CFLAGS=-I[root]/usr/include LDFLAGS=-L[root]/usr/lib CXXFLAGS=-I[root]/usr/include
export ac_cv_[test1]=[yes|no] ac_cv_[test2]=[yes|no] ...
./configure --host=[arm7-blah-blah]
make
make DESTDIR=[root] install
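As a concrete sketch (the SDK path, install root, and host triplet below are assumptions about a typical iOS setup, not taken from your machine), building libogg first and then libvorbis might look something like this:
export SDK=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk
export ROOT=$HOME/ios-root
export CC="clang -arch armv7 -isysroot $SDK"
export CFLAGS="-I$ROOT/usr/include"
export LDFLAGS="-L$ROOT/usr/lib"
# libogg has no extra dependencies, so build and install it into the root first
(cd libogg && ./configure --host=arm-apple-darwin --disable-shared && make && make DESTDIR=$ROOT install)
# libvorbis can then find the cross-compiled ogg via CFLAGS/LDFLAGS (or --with-ogg)
(cd libvorbis && ./configure --host=arm-apple-darwin --disable-shared --with-ogg=$ROOT/usr && make && make DESTDIR=$ROOT install)
If a particular check still insists on running a test binary, pre-seed it with the matching ac_cv_* variable as shown above.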
Good luck. Once you feel comfortable with standard cross-compiling, then you will be ready to take on the real black art, the Canadian cross ;-)

I finally figured it out. I tricked configure by explicitly making it link with ogg (LDFLAGS="/usr/local/ios/lib/libogg-armv7.a" ./configure ...) and then removed the explicit reference to the library from the generated makefile.
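In other words, something along these lines (the host triplet here is illustrative; the library path is the one from my setup above):
LDFLAGS="/usr/local/ios/lib/libogg-armv7.a" ./configure --host=arm-apple-darwin --disable-shared
# then edit the generated Makefile(s) and remove the hard-coded libogg-armv7.a
# reference from LIBS/LDFLAGS before running make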

Related

recipe also produces -native output that needs packaging

I have a recipe which successfully invokes a legacy build command to cross-compile a target.
As a side effect it produces some custom native tools that are used in the build.
I want to reap those tools into a -tools-native package, to allow other recipes to depend on the main package to access the artifacts, and use the -tools-native package to further process those artifacts.
I can build such a native package as simply as adding:
PROVIDES = "${PN} ${PN}-tools-native"
SYSROOT_DIRS += "/"
PACKAGES += "${PN}-tools-native"
FILES_${PN}-tools-native += "/native-bin/*"
and having the install section install the native tools to /native-bin/
but yet it somehow isn't a real native package, and when DEPENDS'd by an additional recipe the native-bin artifacts are installed in recipe-sysroot instead of recipe-sysroot-native.
I also have to install the tools with mode 0644, or bitbake tries to strip them (and fails, as they are native binaries).
Because the native tools are already generated by the legacy build commands, I don't need to actually build a -native recipe variant.
It's a long process, I don't want to run it twice, either.
Currently I work around it by having the other recipes DEPEND on recipe-native-tools and by fixing up the permissions and PATH
But what's the proper way to do this?
This is generally handled by separate recipes. There is no mechanism to share native binaries from target recipes as their task hashes have the wrong kinds of information in them (they change depending on the target architecture).
Target recipes don't install their bindir/sbindir into the sysroot since we can't run them and as you mention, they're the wrong architecture so they confuse strip and so on.
You could try having a native recipe which depends upon this target recipe and which installs the binaries saved by the target recipe somewhere into its ${D} at do_install. That may well give some warnings, since in general native recipes shouldn't depend on target recipes, but it is probably your best option if you can't build twice.
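A rough sketch of that idea (all recipe names and paths here are made up for illustration; how the target recipe saves its native binaries, e.g. via a deploy step, is up to you):
# hypothetical mytool-tools-native_1.0.bb
SUMMARY = "Native helper tools produced by the mytool target build"
LICENSE = "CLOSED"
inherit native

# depending on a target recipe from a native one normally triggers warnings
DEPENDS = "mytool"

do_install() {
    install -d ${D}${bindir}
    # assumes the target recipe saved its native tools under this (made-up) path
    install -m 0755 ${DEPLOY_DIR}/mytool-native-bin/* ${D}${bindir}/
}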

How to make a Dist::Zilla based Perl module (or app) install files into /etc/?

I maintain multiple Perl-written (unix-ish) applications whose current installation process consists of a manually written Makefile that installs configuration files into /etc/.
I'd really like to switch their development to use Dist::Zilla, but so far I haven't found any Dist::Zilla plugin or feature which allows me to put given files into /etc/ when the make install (or ./Build install in case of using Module::Build instead of ExtUtils::MakeMaker) is run by the local administrator who's installing my application.
With pure ExtUtils::MakeMaker, I could define additional make targets in MY::postamble and then let the install target depend on one of them via the depend { install => … } attribute. Doing something similar, but via dzil build, would probably suffice, but I'd appreciate a more obvious way.
One orthogonal approach would be to make the application not require the files under /etc/ to exist, but for just switching to Dist::Zilla that seems like too much change to the actual code, given that I only want to change the build system for now.
For the curious: the two applications I currently have in mind for switching to Dist::Zilla are xen-tools and unburden-home-dir.
The best thing to do is to avoid installing files into /etc from any Perl distribution. You cannot ensure that the cpan client (or the installing user) has permissions to install there, and there can be multiple Perls installed on a system, so each one of them would clobber the /etc files of another install. You can't really prevent the file from being overwritten by a subsequent install, so you shouldn't put config data there that you don't want to lose.
You could put the config file in /etc/, if the application knows to look for it there, but you should allow for that path to be customized (say on a test system, look for the file in the local directory, or in a user's home directory).
For installing read-only module-specific data, the best practice in Perl is to install into a Perl-install-specific location, and the module to do that is File::ShareDir::Install. You can use it from Dist::Zilla using the [ShareDir] plugin, Dist::Zilla::Plugin::ShareDir. It is even included in the [@Basic] plugin bundle, so if you use [@Basic] in your dist.ini, you don't need to do anything at all, other than drop your data files into the share/ directory in your distribution repository.
To access the contents of the sharedir from code, use File::ShareDir.
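For example, assuming your distribution is called Your-Dist (a placeholder), you can check where the share directory ended up after installation with a one-liner:
perl -MFile::ShareDir=dist_dir -e 'print dist_dir("Your-Dist"), "\n"'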
For porting a complex module installer to Dist::Zilla, I recommend my plugins MakeMaker::Custom or ModuleBuild::Custom, depending on which installer you prefer. These allow you to keep your existing Makefile.PL or Build.PL and just have Dist::Zilla plug in necessary bits like the dependencies.

Should aclocal.m4 go in source control?

I'm using a couple of macros from the autoconf archive in my configure.ac. When aclocal is run, the macros are placed in aclocal.m4. Since this file is automatically generated, I typically wouldn't put it in source control. However, the autogeneration won't work unless the user has the macros installed on their computer in the first place (on Ubuntu I had to do apt-get install autoconf-archive). Is it typical practice to include aclocal.m4 in source control?
Edit: summary: Do not include aclocal.m4 in source control. It is acceptable to include acinclude.m4.
No, it is not best practice to do so. However, it is probably typical practice. Best practice and common practice often diverge widely when dealing with the autotools. In my opinion, the expectation is that a developer who is running the autotools is capable of satisfying the dependencies and making the macros available (e.g., by installing autoconf-archive), so the file should not be included in the version control system. It is, however, perfectly acceptable to put the macros in acinclude.m4 and put that file in source control. When invoked, aclocal will look for definitions in acinclude.m4, so the developer running aclocal does not need to install the autoconf archive. (This is the point that seems to throw a lot of projects: there is really only a small handful of people who should ever be invoking the autotools on a project, and everyone else should be building from a release distribution. If the developers working on a project are not modifying the autotools meta-files, there is no reason for them to be running the autotools.)
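For example (the macro file name and the aclocal directory are illustrative; they depend on your distribution and on which archive macros you actually use):
# copy the needed macro out of the installed autoconf-archive into acinclude.m4
cat /usr/share/aclocal/ax_check_compile_flag.m4 >> acinclude.m4
git add acinclude.m4
aclocal && autoconf   # other developers no longer need autoconf-archive installed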

Inputs for Porting Perl

I am very new to porting.
I was trying to port Perl to a NetBSD system. Since it's a custom-made build, we won't be able to run configure or make on the target NetBSD system. So we are trying to cross-compile it on a host PC and copy the binary over to the target machine. In order to do so, we have to write a makefile from scratch, since the format of the makefiles in our build is different.
I have some basic questions regarding this:
Firstly, in order to create a Perl makefile for my custom build, what are the basic things that will go into it, such as ccflags, library paths, etc.?
Some files, like DynaLoader, uudmap.h, myConfig and Config.pm, get generated during "make". How can I generate them using a custom makefile?
How do I set the various library paths, and what are they?
@INC shows the Perl search paths; how can I create it?
Where exactly do Perl modules get installed, and when does that happen?
A perl build normally involves building a stripped down version of perl named miniperl, which is then used extensively in the remainder of the process of building perl and the bundled modules.
There are two basic approaches to cross-compiling: to build miniperl for the target machine and build the modules, etc., there, or to build miniperl for the host and use it to build perl and modules for the target.
The WinCE port uses the latter approach; the rudimentary (last I knew, anyway) support for a -Dusecrosscompile switch to Configure uses the former.
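The -Dusecrosscompile path looks roughly like the sketch below (the exact -D switches vary between perl versions, so consult the INSTALL file and the Cross/ directory in the perl source; it also historically expects to be able to ssh to the target to run small test programs), so treat this only as a starting point:
sh ./Configure -des -Dusecrosscompile \
    -Dtargethost=<target-hostname-or-ip> \
    -Dtargetuser=<user-on-target> \
    -Dcc=<your-netbsd-cross-gcc>
make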
I recommend you ask for advice and help on the perl5-porters mailing list: http://lists.perl.org/list/perl5-porters.html
And be prepared for hard work.
NetBSD's pkgsrc system has perl in it already and has the ability to generate binary packages that you can then install on a target machine.
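If you do have access to a NetBSD machine of the same architecture as the target (an assumption on my part), the pkgsrc route can be as simple as:
cd /usr/pkgsrc/lang/perl5
make package            # builds perl and creates a binary package
# copy the resulting .tgz from the packages directory to the target, then:
pkg_add perl-*.tgz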

netbeans c++ deployment

I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
Abdul Khaliq
I have seen your code; you are probably missing XML files in the current folder where the executable is located. Paste them there and then run as ./your-executable
I recommend that you use a makefile to recompile on your target machine which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy, run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
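For example (the binary name and target host are placeholders):
scp yourExecutable user@target:
ssh user@target ldd ./yourExecutable
# any library shown as "not found" still needs to be installed
# with the target's package manager (apt-get, dnf, pkg_add, ...)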
Of course, you also have the option to statically build all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
What type of package is your NetBeans build creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type. Ubuntu - deb
Fedora/Redhat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.