Difference between .o and .ko file - linux-device-driver

I am writing a simple Linux module, mod.c.
When I compile the mod.c file, it creates two output files, mod.o and mod.ko.
So I just want to know:
What is the difference between the mod.o and mod.ko files?

The short answer is that the .ko file is your object file linked with some automatically generated data structures that the kernel needs.
The .o file is the object file of your module - the result of compiling your C file. The kernel build system then automatically creates another C file with some data structures describing the kernel module (named your_module.mod.c), compiles this C file into another object file, and links your object file and the object file it built together to create the .ko file.
The module loader in the kernel, which is in charge of loading kernel modules, expects to find the data structures that the build system put into that generated object inside the .ko file, and it will not be able to load your kernel module without them.
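For a concrete picture, here is a minimal sketch of the usual out-of-tree build, assuming a kbuild Makefile in the module directory containing just the line obj-m += mod.o (the paths follow the common distro layout and may differ on your machine):

make -C /lib/modules/$(uname -r)/build M=$PWD modules
ls
# mod.o      - your compiled code
# mod.mod.c  - the C file kbuild generated to describe the module
# mod.mod.o  - that generated file, compiled
# mod.ko     - mod.o and mod.mod.o linked together; this is what gets loaded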

Before Linux 2.6, a user space program would interpret the ELF object
(.o) file and do all the work of linking it to the running kernel,
generating a finished binary image. The program would pass that image
to the kernel and the kernel would do little more than stick it in
memory. In Linux 2.6, the kernel does the linking. A user space
program passes the contents of the ELF object file directly to the
kernel. For this to work, the ELF object image must contain additional
information. To identify this particular kind of ELF object file, we
name the file with suffix ".ko" ("kernel object") instead of ".o". For
example, the serial device driver that in Linux 2.4 lived in the file
serial.o in Linux 2.6 lives in the file serial.ko.
From http://tldp.org/HOWTO/Module-HOWTO/linuxversions.html.
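You can see that extra information directly. A small sketch, assuming the mod.o/mod.ko pair from the question (section names can vary slightly across kernel versions):

modinfo mod.ko                                    # prints license, vermagic, parameters, ...
readelf -W -S mod.ko | grep -E 'modinfo|this_module'
readelf -W -S mod.o  | grep -E 'modinfo|this_module'
# the .ko contains a .gnu.linkonce.this_module section (the struct module the
# loader looks for); the plain .o does not.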

Related

Can Yocto create more than one module from a single recipe?

I am trying to create a set of Linux driver modules under Yocto. The drivers are fundamentally very similar but, for hardware reasons, have a separate source file for each interface, resulting in a set of N modules. All of the drivers share functionality which is contained in a set of separate files.
What I would like is for all the module source files and the set of shared files to share a directory (or for the latter to be in a subdirectory), and for the Yocto recipe to generate a module from each driver source, resulting in N separate modules from the one recipe.
Is this feasible, or can anyone suggest an alternative that does not require replicating the shared files for each module?
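On the kbuild side a single Makefile can already list several modules, so one recipe that runs it would produce N .ko files; how to wire this into a particular Yocto recipe is a separate question. A sketch with placeholder names:

cat > Makefile <<'EOF'
obj-m += drv_if0.o drv_if1.o
drv_if0-y := if0_main.o shared/common.o
drv_if1-y := if1_main.o shared/common.o
EOF
make -C /lib/modules/$(uname -r)/build M=$PWD modules
# note: newer kernels may warn when the same object (shared/common.o) is listed
# in more than one module; if so, give each module its own object list.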

MATLAB R2020b: libmwlaunchermain.so: cannot open shared object file: No such file or directory when running compiled binary

I compiled my MATLAB script using mcc and I'm trying to run the binary in a Debian GNU/Linux 9 (stretch) environment.
I only have the full MATLAB installation available and don't have (and don't want) the MATLAB Runtime, as I believe the full MATLAB installation already includes the MATLAB Runtime.
I get the following error when I try running the binary.
root@me:/home/matlab/my-project/build-binary# bash run_my_project.sh /usr/local/matlab/R2020b/bin/glnxa64
------------------------------------------
Setting up environment variables
---
LD_LIBRARY_PATH is .:/usr/local/matlab/R2020b/bin/glnxa64/runtime/glnxa64:/usr/local/matlab/R2020b/bin/glnxa64/bin/glnxa64:/usr/local/matlab/R2020b/bin/glnxa64/sys/os/glnxa64:/usr/local/matlab/R2020b/bin/glnxa64/sys/opengl/lib/glnxa64
./tfda_cli: error while loading shared libraries: libmwlaunchermain.so: cannot open shared object file: No such file or directory
I can confirm that the file it's not able to find is indeed in the directory I pointed it to.
root@me:/home/matlab/my-project/build-binary# ls /usr/local/matlab/R2020b/bin/glnxa64 | grep libmwlaunchermain.so
libmwlaunchermain.so
Other questions about this topic assume that the MATLAB Runtime is installed, which isn't the case for me since I only have the full MATLAB installation.
Thanks for any suggestions!
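One quick check, independent of MATLAB: ask the dynamic loader which libraries it still fails to resolve under the path the script sets (a diagnostic sketch; the placeholder must be replaced with the exact value printed above):

export LD_LIBRARY_PATH=<value printed by run_my_project.sh>   # placeholder
ldd ./tfda_cli | grep 'not found'
# any library still reported as "not found" is one the loader cannot resolve
# from that LD_LIBRARY_PATH, even if the file exists somewhere else on disk.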

In Debian Linux can I build a single driver to become permanently part of my kernel image and part of my vmlinux file (for debugging)

I've got a custom kernel that I've built locally with gdb and kgdb enabled and installed. I have the vmlinux file for it that I use for source level kgdb. Each time I make a change I've been rebuilding the entire kernel. But I want to become more efficient than rebuilding whole kernels for every code change.
So I made a mod to my ata driver. Then I did a
make M=drivers/ata
It succeeded. Now, how do I replace my previous ata driver with this new one and get the updated code into my vmlinux file for source-level debugging of the new driver?
I'm not considering doing an insmod. I want to permanently modify my kernel image to replace the previous ata driver with this ata driver.
I think it is not possible to replace a driver inside the Linux kernel binary with a new version. You must either use insmod or recompile the entire kernel. I do not see any problem with recompiling the whole kernel: if you have already compiled it before, it only takes a few seconds.
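If the ata driver is configured built-in (=y), an incremental in-tree rebuild gets the change into vmlinux without redoing the whole build; a sketch, run from the top of the already-configured, previously built tree:

make drivers/ata/        # recompile only the changed objects in that directory
make vmlinux             # relink vmlinux with the updated driver
sudo make install        # install the new image (assumption: standard Debian installkernel setup)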

Building modules with linux kernel for custom flavor

I followed the instructions given in the link: http://blog.avirtualhome.com/how-to-compile-a-new-ubuntu-11-04-natty-kernel/ for building a custom kernel and booting it. Everything works fine, except that when building it, I used the option skipmodule=true (as given in this link), so I guess the modules are not built for this kernel. So I have two questions:
How do I build only the modules for my flavor, now that I have the rest of the kernel built? 'make modules' will build them for the generic flavor only, if I'm not wrong.
Also, does it require me to rebuild the entire kernel source, 'fakeroot debian/rules binary-i5' (i5 is my custom flavor), each time I make a change to one of my modules?
Thanks.
1) To build a Linux kernel module for a specific kernel from the module source directory, do:
make -C {path-to-kernel-source} M=`pwd` modules
The -C option points make at the kernel source tree, where it finds the kernel's top-level Makefile. The M=`pwd` option points it back to the module source directory, where it builds the 'modules' target.
2) Nope, it's not necessary to rebuild the kernel source. Having either the kernel source tree or the kernel headers is sufficient.
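For example, to build against the headers installed for the running kernel (the paths are the common layout and are an assumption about your setup):

make -C /lib/modules/`uname -r`/build M=`pwd` modules
sudo make -C /lib/modules/`uname -r`/build M=`pwd` modules_install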

How do I detect if a program is running within a PAR archive?

I'm working on a large Perl application which gets bundled with PAR, along with a bunch of support files.
When the app is running within PAR, I can use PAR::read_file to get at these various files inside the archive. However, while I'm developing, I don't want to have to re-PAR the whole application every time I tweak some code.
Is there a way that I can tell if the script is running within PAR or not at runtime, so I can choose to load the file from the PAR archive or the regular filesystem?
PAR::Environment can probably offer some clues:
PAR uses various environment variables both during the building process of executables or PAR archives and the use of them.
...
PAR_0
If the running program is run from within a PAR archive or pp-produced executable, this variable contains the name of the extracted program (i.e. .pl file).
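A minimal sketch of that check: inside the packed application the same defined() test can decide between PAR::read_file and a plain filesystem read; shown here as a one-liner (the messages are placeholders):

perl -e 'print defined $ENV{PAR_0} ? "running from a PAR archive\n" : "running from the plain filesystem\n"'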