Building modules with the Linux kernel for a custom flavor - operating-system

I followed the instructions in this link: http://blog.avirtualhome.com/how-to-compile-a-new-ubuntu-11-04-natty-kernel/ to build a custom kernel and boot it. Everything works fine, except that when building it I used the option skipmodule=true (as given in the link), so I guess the modules were not built for this kernel. So I have two questions:
How do I build only the modules for my flavor, now that the rest of the kernel is built? 'make modules' will build them for the generic flavor only, if I'm not wrong.
Also, does it require me to rebuild the entire kernel, 'fakeroot debian/rules binary-i5' (i5 is my custom flavor), each time I make a change to one of my modules?
Thanks.

1) To build a Linux kernel module for a specific kernel from the module source directory, do:
make -C {path-to-kernel-source} M=`pwd` modules
The -C option points make at the kernel source tree, where it finds the kernel's top-level Makefile. The M=`pwd` option points it back at the module source directory, where it builds the 'modules' target.
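A minimal out-of-tree module Makefile wrapping that invocation usually looks like the sketch below; the module name mymodule and the default kernel path are assumptions, not something from the question:

# Hypothetical kbuild Makefile for an out-of-tree module.
# obj-m tells kbuild to build mymodule.ko from mymodule.c.
obj-m := mymodule.o

# Default to the running kernel's headers; override KDIR to point at a custom tree.
KDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean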
2) Nope, it's not necessary to rebuild the kernel source. Having either the kernel source tree or the kernel headers installed suffices.
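So for question 2, a typical edit-compile-reload cycle against the already-built tree looks like this (a sketch; the module name mymodule is an assumption):

make -C {path-to-kernel-source} M=$PWD modules
sudo rmmod mymodule 2>/dev/null    # unload the old copy, if any
sudo insmod ./mymodule.ko          # load the freshly built module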

Related

Yocto: deploy Debug or Release prebuilt?

I am writing a bitbake recipe to deploy a third party pre-built tool, similar to this wiki page: https://wiki.yoctoproject.org/wiki/TipsAndTricks/Packaging_Prebuilt_Libraries
However, I have Release and Debug pre-built versions of the tool available as *.so files. How do I distinguish inside the recipe which of the two build types I should deploy?
Thanks and regards,
Martin
You can have two different virtual recipes, each with its own .so file. The choice is then made in a configuration file (with PREFERRED_PROVIDER_virtual/my-recipe), i.e. in a machine or distro configuration file. This is probably preferable if you plan on having separate release and debug distros.
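A minimal sketch of the virtual-provider approach; every name here (mytool, the two .bb files, the .conf) is made up for illustration:

# In both mytool-release.bb and mytool-debug.bb, provide the same virtual target
# and install the corresponding .so in do_install():
PROVIDES = "virtual/mytool"

# In your machine or distro .conf, select which provider is actually built:
PREFERRED_PROVIDER_virtual/mytool = "mytool-release"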
A second option is to install the libraries in two different paths, in two different PACKAGES (use FILES_my-package for that), and make them RCONFLICTS_my-package with each other to be sure they can't both be in the rootfs. After that, you could write a pkg_postinst_my-package() function specific to each package that actually moves the library from the "different" path to the intended one. This is run both at build time when creating the rootfs and at runtime on first boot, so you need to make sure to exclude one or the other (that's usually done by checking whether ${D} exists, which it does at build time but not at runtime).
c.f.: http://docs.yoctoproject.org/dev-manual/dev-manual-common-tasks.html#post-installation-scripts
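The ${D} check mentioned above usually follows this pattern (a sketch; the package suffix and library names are invented):

pkg_postinst_${PN}-debug() {
    if [ -n "$D" ]; then
        # $D is set, so this runs while the rootfs is being assembled;
        # move the staged library from its package-specific path into place.
        mv $D${libdir}/mytool-debug/libmytool.so $D${libdir}/libmytool.so
    fi
    # with the inverse test (-z "$D") the move would happen on first boot instead
}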
If you can manage to have both libraries installed in your rootfs and select the one you want with the LD_LIBRARY_PATH environment variable, a simple recipe with two packages, each library in a different location, will be sufficient.
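For example (path and binary name hypothetical), selecting the debug copy at launch time:

LD_LIBRARY_PATH=/usr/lib/mytool/debug ./mytool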

Can Buildroot build the root filesystem without building the Linux kernel?

I tried:
git checkout 2018.05
make qemu_x86_64_defconfig
make BR2_JLEVEL="$(nproc)" "$(pwd)/output/images/rootfs.ext2"
but it still built the kernel at:
output/images/bzImage
I want to do that because:
I'm making a setup where you can pick between multiple different root filesystems, so I will need to build the Linux kernel manually for the other root filesystems anyway, and I would not like Buildroot to waste time building it again
I don't want to wait 5 seconds every time for Buildroot to parse 100 Makefile configs when I want to rebuild the kernel :-)
I'm using LINUX_OVERRIDE_SRCDIR with Linux in a submodule, so the Linux headers should match the source I will use for the build.
Is there a fundamental dependency between, say, glibc and the kernel build, or is it just a weird use case never catered for?
Ah, I noticed now that any loadable kernel modules need to go on the rootfs and would require a kernel build, and that this build does put some .ko files in the rootfs.
Well, just disable BR2_LINUX_KERNEL and Buildroot will no longer build the kernel.
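Starting from the commands in the question, that can be done non-interactively like so (a sketch; the same option can be unchecked under "Kernel" in make menuconfig):

make qemu_x86_64_defconfig
sed -i 's/^BR2_LINUX_KERNEL=y/# BR2_LINUX_KERNEL is not set/' .config
make olddefconfig
make BR2_JLEVEL="$(nproc)"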

Inputs for Porting Perl

I am very new to porting.
I was trying to port Perl to a NetBSD system. Since it's a custom-made build, we won't be able to run Configure or make on the target NetBSD system. So we are trying to cross-compile it on a host PC and copy the binary over to the target machine. In order to do so, we have to write a makefile from scratch, since the makefile format in our build is different.
I have some basic doubts regarding this:
Firstly, in order to create a Perl makefile for my custom build, what are the basic things that will go into it, such as ccflags, library paths, etc.?
There are some files like DynaLoader, uudmap.h, myConfig, Config.pm which get generated during "make". How can I generate them using a custom makefile?
How do I set the various library paths, and what are they?
@INC shows the Perl search paths; how can I create it?
Where exactly do Perl modules get installed, and when does that happen?
A perl build normally involves building a stripped down version of perl named miniperl, which is then used extensively in the remainder of the process of building perl and the bundled modules.
There are two basic approaches to cross-compiling: to build miniperl for the target machine and build the modules, etc., there, or to build miniperl for the host and use it to build perl and modules for the target.
The WinCE port uses the latter approach; the rudimentary (last I knew, anyway) support for a -Dusecrosscompile switch to Configure uses the former.
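For reference, a -Dusecrosscompile run starts out along these lines (a sketch only; the cross-compiler name and prefix are assumptions for a NetBSD target):

sh ./Configure -des -Dusecrosscompile \
    -Dcc=i386--netbsdelf-gcc \
    -Dprefix=/usr/pkg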
I recommend you ask for advice and help on the perl5-porters mailing list: http://lists.perl.org/list/perl5-porters.html
And be prepared for hard work.
NetBSD's pkgsrc system has perl in it already and has the ability to generate binary packages that you can then install on a target machine.
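If that route suits you, it comes down to a couple of commands on a pkgsrc host plus pkg_add on the target (assuming a standard pkgsrc checkout under /usr/pkgsrc):

cd /usr/pkgsrc/lang/perl5
make package        # builds perl and wraps it into a binary package
# copy the resulting package file to the target, then:
pkg_add perl-*.tgz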

Perl: build a module with C source from another module

I am working on a module that I would like to have two backends, Module(::PerlArray) and Module::PDL (which will depend on Module). Both need access to a functions.c/.h file for building. This file has the rather complicated logic needed for the module. Rather than distribute it separately with each module, is there some way to keep it with Module::PP on the system and then add it to the appropriate build flags in EU::MM or M::B (given the complexity here, probably the latter)?
To put it more visually
--Module--
Module.pm
Module/PerlArray.pm
Module/PerlArray.xs (#include functions.h
#include perlarray_backend.h)
Module/src/functions.c
Module/src/perlarray_backend.c
Module/inc/functions.h
Module/inc/perlarray_backend.h
--Module::PDL--
Module/PDL.pm
Module/PDL.xs (#include functions.h /*from Module*/
#include pdl_backend.h)
Module/src/pdl_backend.c
Module/inc/pdl_backend.h
and the compilation makes functions.o and links it in. I'm sure I can figure out how to set the flags appropriately, but how can I make Module keep the functions.c file when installing, and how can I then find it when installing Module::PDL? Is there some location where I can place functions.c/.h?
Have you looked at DBI? It does what you suggest: it installs some .h file(s) that the DBD drivers can #include in their XS code, as well as a library that the DBD drivers can call.
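One concrete mechanism for this pattern (not what the answer names, but built for exactly this job and used by the Glib/Gtk2 bindings) is ExtUtils::Depends; a hypothetical sketch:

# In Module's Makefile.PL: install the shared header and save build config.
use ExtUtils::Depends;
my $dep = ExtUtils::Depends->new('Module');
$dep->install('inc/functions.h');      # header becomes visible to dependents
$dep->save_config('Files.pm');

# In Module::PDL's Makefile.PL: inherit Module's include paths and flags.
use ExtUtils::MakeMaker;
use ExtUtils::Depends;
my $pkg = ExtUtils::Depends->new('Module::PDL', 'Module');
WriteMakefile(
    NAME => 'Module::PDL',
    $pkg->get_makefile_vars,
);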
Modules should be independently installable. That is, provided I have the prerequisite Perl modules installed (but not necessarily still lying around in source form), it should be possible to install all the modules in one distributed tar file without reference to the source of any other module.
You have options. One is to have a single source directory create several distributed tarballs, each with its own copy of the shared functions.[ch] in the distributed source.
The other main option is to bundle both modules into a single distributed tarball.

NetBeans C++ deployment

I developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
Abdul Khaliq
I have seen your code; you are probably missing the XML files in the folder where the executable is located. Paste them there and then run as ./your-executable.
I recommend that you use a makefile to recompile on your target machine, which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
Typically, once compiled, your executable will need several shared libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy, run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
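Illustrative output only (library names and load addresses are made up); a "not found" line is a library you still need to install:

$ ldd yourExecutable
        linux-vdso.so.1 (0x00007ffc...)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f...)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
        libfoo.so.2 => not found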
Of course, you also have the option to statically link all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
What type of package is your NetBeans build creating? deb, rpm? If you are moving the package to a different Linux install you will need to use that distribution's package type:
Ubuntu - deb
Fedora/Red Hat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.
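Once you have a package in the right format, installation on the target is a single command (package file names are hypothetical):

sudo dpkg -i mytool_1.0_amd64.deb       # Debian/Ubuntu
sudo rpm -i mytool-1.0.x86_64.rpm       # Fedora/Red Hat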