Replace glibc libraries with uClibc libraries - buildroot

I'm thinking of replacing the glibc library (libc.so) with the uClibc library directly in my filesystem. Will that work, or do I need to recompile all the binaries?

Extremely bad idea. The third sentence on the 'About' page of uClibc:
Porting applications from glibc to uClibc typically involves just recompiling the source code

As all other answers are already saying: this will not work.
uClibc and glibc don't even have the same dynamic library loader (ld-linux.so). Shared executables specify which dynamic library loader and which shared libraries they need. You can get this information with readelf -l /path/to/executable (for the dynamic library loader) and readelf -d /path/to/executable (for the shared libraries).
A program linked against glibc will give something like this:
INTERP 0x000154 0x00008154 0x00008154 0x00013 0x00013 R 0x1
[Requesting program interpreter: /lib/ld-linux.so.3]
0x00000001 (NEEDED) Shared library: [libc.so.6]
A program linked against uClibc will give something like this:
INTERP 0x000114 0x00008114 0x00008114 0x00014 0x00014 R 0x1
[Requesting program interpreter: /lib/ld-uClibc.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.1]

No, this will not work, no matter how you dice it.
All of the symbol ordinals of important relocations from the procedure linkage table and global offset table would no longer be correct. Any attempt to resolve the interned symbol hash sections would fail, and you'd be left with a broken slough of bytes. Even if you were to work around this stunning mess, you'd have a host of either half-copied or overflowed pages containing instructions from multiple dynamic libraries whose sizes are no longer resolved correctly.
On the plus side, statically linked binaries will be unaffected by this change.

Related

How to tell MakeMaker to add exactly the libraries I want?

I'm using XS to create a Perl Module which uses a C library.
For testing purposes, I've created a test library which has two simple functions:
void kzA() (does a simple printf)
void kzB(int i, char *str) (does a printf of the received parameters)
I've also created some XS glue in order to access, for now, the kzA() function:
(I'm only showing the function itself, but the includes are there, too, in the XS)
void
ka()
    CODE:
        printf("Before kzA()\n");
        kzA();
        printf("After kzA()\n");
So, I compiled the test library as fc.so, and it is in the same directory as my XS file and my Makefile.PL (/workspace/LirePivots).
In my Makefile.PL, I set the LIBS key to ['-L/workspace/LirePivots -l:fc.so'], but when executing it with perl (perl Makefile.PL), it says "Warning (mostly harmless): No library found for -l:fc.so"
It then writes a Makefile which does NOT mention said library. And then, after compiling (with "make") and installing (with "sudo make install"), when I run my test script which calls the ka() function from my module, I get the line "before", but the kzA() function isn't called, obviously, because it cannot find the kzA symbol, and the program stops there.
Creating a C test program which I would link with the very same arguments (-l:fc.so -L/workspace/LirePivots) does work, and, as long as I put the path in LD_LIBRARY_PATH, it finds the function and runs it correctly.
I also tried renaming the library libfc.so, and changing the -l part to "-lfc", but it didn't work either. It never manages to find the library.
Does anyone know what I'm doing wrong?
EDIT:
As requested, I created a minimum example: https://github.com/kzwix/xsTest
To run it, you'll need to have a Linux with Perl 5, along with XS (package perl-devel, on Redhat). And a C compiler, and make, obviously.
Ok, I still don't know the reason why it wouldn't find the library. But I found a workaround:
By adding the test library as "libfc.so" to /usr/lib, then running "sudo ldconfig", the library got added to the cache (when named fc.so, even in /usr/lib, ldconfig would not add it; it also wouldn't work with symbolic links, I had to actually copy it there).
After the library got added to the cache, "perl Makefile.PL" would still not find the library. I had to use "-L/usr/lib" in addition to "-lfc" for it to at long last find the library, and "agree" to add the parameters to the link step of the library.
After this happened, I could compile with "make", and executing the test program did work as originally intended (I saw both the "Before" and "After" printfs, and the output from kzA() in libfc.so).
Thanks Håkon Hægland for helping.
(I can't provide the Dockerfiles, they link to images internal to my organization, which I'm not allowed to share)

ThreadX module allocating module data section at run time

I see that when loading a ThreadX module, the data section is allocated at run time from a byte pool (in _txm_module_manager_internal_load).
Is there a way to define in advance (in the module preamble or such) the place where the module data will be located?
How does the module code know where its globals are located? The compiler can't know their addresses, because the location of the data is determined at run time. (I suppose somewhere in the assembly porting layer there is some relative address to the data section, different for each module, through which the globals are accessed, but I am not familiar enough with ARM assembly to find it by myself.)
The location of data for each module is determined at run-time. This is to allow loading the same module more than once, each instance having separate data, in different locations.
Globals are located in the data area. The modules are compiled as position independent code and position independent data. While loading the module the module startup code is provided with the load address of the data. The specifics of how this is achieved are different for each architecture and compiler.
@Andrés has answered correctly, but I wanted to add some info. You must build a module with position-independent code and data (e.g. the ROPI and RWPI options if using the ARM compilers). The GCC invocation looks like the one below; the options relevant to position independence are -fpie, -fno-plt, -mpic-data-is-text-relative and -msingle-pic-base:
arm-none-eabi-gcc -c -g -mcpu=cortex-m4 -fpie -fno-plt -mpic-data-is-text-relative -msingle-pic-base txm_module_preamble.s
Let us know if you have further questions.

dynamically shared library -- for linux

I have just one question related to Linux shared library files.
I saw a lot of links related to dynamically shared libraries for the Linux OS:
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
In the above link it is mentioned:
include file for the library: ctest.h
Now, in Linux, to use libdl we have the built-in functions dlopen, dlsym and dlclose.
Do we really need to include the prototype file (ctest.h) for the dynamic library?
Please give some suggestions related to the above post.
You don't really need to include the header or prototype file for the dynamic library; you do, however, need at least the specific type information for the value returned by dlsym.
See here and here for examples that don't contain the include file for the dynamic library.
In the example you posted, the library functions started out without header files / function prototypes, which, along with providing instructions on how to avoid C++ name mangling, is why they included the header file in this case.
If you define your own libraries without function prototypes, either in the source file or the header file, then you will need to include the header file when using dlsym; otherwise including the header file of the dynamic library is unnecessary, as its function prototypes were already included in the generated shared object.
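To illustrate the point about type information, here is a minimal sketch of calling a function through dlopen/dlsym without including the library's header. The library path and the function name/signature are only examples; the hand-written cast is what replaces the header:

#include <stdio.h>
#include <dlfcn.h>   /* may need to link with -ldl */

int main(void)
{
    void *handle = dlopen("./libctest.so", RTLD_LAZY);   /* example library */
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* The cast supplies the type information the header would normally give. */
    void (*ctest1)(int *) = (void (*)(int *))dlsym(handle, "ctest1");
    if (ctest1) {
        int x = 0;
        ctest1(&x);
        printf("x = %d\n", x);
    }

    dlclose(handle);
    return 0;
}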
The function prototypes included in the header file are there so that the implemented functions can be resolved by name by the linker, whereas the shared object file, regardless of how it is linked, contains the implementation of the library that the linker links to.
The short explanation is that header files pulled in with #include are processed by the preprocessor, which means the resulting source file(s) passed on to the compiler and linker know what each function call is, because the function prototype from the include file is now part of the modified source. Include files tell the linker what each function call is.
Object files, shared object files and other library files tell the linker what the implementation behind those prototypes actually does.
To answer your question in the comment, you will only have to add the libdl.so path to LD_LIBRARY_PATH or to /etc/ld.so.conf and run ldconfig, if that library or its relevant symlink hierarchy isn't in a standard location such as /usr/lib/ or /lib/.
See the following relevant Stack Overflow answer for more information.
Further information can be found at
Program Library Howto
C Libraries Howto
Compilers, Assemblers, Linkers, Loaders: A Short Course
Linkers and Loaders
Beginner's Guide to Linkers

Parsing Unix/iPhone/Mac OS X version of PE headers

This is a little convoluted, but let's try:
I'm integrating Lua scripting into my game engine, and I've done this in the past on win32 in an elegant way. On win32 all I did was mark all of the functions I wanted to expose to Lua as export functions. Then, to integrate them into Lua, I'd parse the PE header of the executable, unmangle the names, parse the parameters and such, then register them with my Lua runtime. This allowed me to avoid manually registering every function individually just to expose it to Lua.
Now, flash forward to today where I'm working on the iPhone. I've looked through some Unix stuff and I've gotten very close to taking a similar approach; however, I'm not sure it will actually work.
I'm not entirely familiar with Unix, but here is what I have so far on iPhone:
Step 1: Query for the executable path through Objective-C and get the path of my app
Step 2: Use dlopen to get a handle to my app using: `dlopen(path, RTLD_NOW)`
Step 3: Use `dlsym( libraryHandle, objectName )` to attempt to get the address of a known symbol.
The above steps alone won't get me all the way to where I want to be, but even this much doesn't work. Does anyone have any experience doing this type of thing on Unix? Are there any headers or functions I can google to put me on the right track?
Thanks;)
iPhone does not support dynamic linking after the initial application launch. While what you want to do does not actually require linking in any new application TEXT, it would not shock me to find out that some of the dl* functions do not behave as expected.
You may be able to write some platform-specific code, but I recommend using a technique developed by the various BSDs called linker sets. Basically you annotate the functions you want to do something with (just like you currently mark them for export). Through some preprocessor magic the annotations are stored, sometimes in an extra segment in the binary image, and then code grabs that data and enumerates it. So you simply add all the functions you want into the linker set, then walk through the linker set and register all the functions in it with Lua.
I know people have gotten this stuff up and running on Windows and Linux, and I have used it on Mac OS X and various *BSDs. I am linking to the FreeBSD linker_set implementation, but I have not personally seen the Windows implementation.
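As a rough sketch of the idea with GCC on an ELF target (on Mach-O you would use a "__DATA,..." style section attribute and getsectiondata() instead; the names below are illustrative, not taken from FreeBSD's linker_set.h):

typedef void (*lua_fn)(void);

/* Each annotated function gets a pointer dropped into its own section;
   GNU ld then provides __start_<section>/__stop_<section> bounds. */
#define FOR_LUA(fn) \
    static const lua_fn fn##_entry __attribute__((used, section("lua_set"))) = fn

static void spawn_enemy(void) { /* ... a function exposed to Lua ... */ }
FOR_LUA(spawn_enemy);

extern const lua_fn __start_lua_set[];   /* provided by the linker */
extern const lua_fn __stop_lua_set[];

void register_all_with_lua(void)
{
    for (const lua_fn *p = __start_lua_set; p < __stop_lua_set; ++p)
        (*p)();   /* real code would register each function with the Lua state */
}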
You need to pass --export-dynamic to the linker (via -Wl,--export-dynamic).
Note: This is for Linux, but could be a starting point for your search.
References:
http://sourceware.org/binutils/docs/ld/Options.html
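A sketch of how --export-dynamic combines with the dlopen/dlsym approach from the question (Linux; the function name here is invented). Building with -Wl,--export-dynamic keeps the program's own symbols in the dynamic symbol table so dlsym can find them:

#include <stdio.h>
#include <dlfcn.h>   /* build with: gcc main.c -ldl -Wl,--export-dynamic */

void spawn_enemy(void) { puts("spawn_enemy called"); }

int main(void)
{
    void *self = dlopen(NULL, RTLD_NOW);   /* handle for the main executable */
    void (*fn)(void) = (void (*)(void))dlsym(self, "spawn_enemy");
    if (fn)
        fn();
    return 0;
}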
If static linking is an option, integrate that into the linker script. Before linking, do "nm" on all object files, extract the global symbols, and generate a C file containing a (preferably sorted/hashed) mapping of all symbol names to symbol values:
struct symbol { const char *name; void *value; } symbols[] = {
    {"foo", foo},
    {"bar", bar},
    ...
    {0, 0}
};
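Looking a name up in that table could then be as simple as this (linear scan for the sketch; since the table is generated, a sorted table plus bsearch would also work):

#include <string.h>

/* Assumes the generated symbols[] table above, terminated by the {0,0} entry. */
void *find_symbol(const char *name)
{
    for (const struct symbol *s = symbols; s->name; ++s)
        if (strcmp(s->name, name) == 0)
            return s->value;
    return 0;
}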
If you want to be selective in what you expose, it might be easiest to implement a naming scheme, e.g. prefixing all functions/methods with Lua_.
Alternatively, you can create a trivial macro,
#define ForLua(X) X
and then grep the sources for ForLua, to select the symbols that you want to incorporate.
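For example (purely illustrative), the macro compiles away and only serves as a grep anchor:

#define ForLua(X) X

/* Expands to a normal definition, but "grep -rn ForLua" finds every
   function that should end up in the generated symbol table. */
ForLua(void spawn_enemy(void))
{
    /* ... */
}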
You could just generate a mapfile and use that instead, no?

Don't expose symbols from a used library in own static library

I am writing a reusable static library for the iPhone, following the directions provided here.
I want to use minizip in my library internally, but don't want to expose it to the user.
It should be possible for the user to include minizip themselves, possibly a different version, and not cause clashes with my "inner" minizip version.
Is this possible?
Edit:
I've tried adding -fvisibility=hidden to additional compiler flags for minizip files and changing functions to be __private_extern__ and __attribute__((visibility("hidden"))), but it still seems to produce defined external symbols:
00000918 T _unzOpen
0000058e T _unzOpen2
00001d06 T _unzOpenCurrentFile
00001d6b T _unzOpenCurrentFile2
...
Edit #2:
Apparently the symbols marked with these annotations are only made private by the linker, which never happens when Xcode builds the sources, since it adds the -c parameter ("Compile or assemble the source files, but do not link.")
You could rename all exported symbols from minizip with objcopy.
Something like:
objcopy --redefine-syms=minizip.syms yourstaticlibrary.a
with minizip.syms containing:
_unzOpen _yourownprefix_unzOpen
_unzOpen2 _yourownprefix_unzOpen2
... ...
There is no clash if an executable is linked with another minizip.a and yourstaticlibrary.a, and because you renamed all the symbols in yourstaticlibrary.a, the calls inside yourstaticlibrary.a to minizip will use the prefixed symbols, not the unzOpen ones.
Since a static library is nothing more than a set of .o files (which are not linked yet, as you have mentioned), the only way to completely hide the presence of minizip from the outside world is to somehow compile minizip and your library together as a single compilation unit and make the minizip functions/variables static.
You could have a look at how SQLite does its "amalgamation" process, which turns the library source code into a single .c file for further compilation: The SQLite Amalgamation.
As a bonus you'll get better optimization (very recent GCC and binutils can do link-time optimization, but this functionality is not released yet).
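A hedged sketch of what the single-compilation-unit approach looks like (file and function names are invented; real minizip has many more entry points):

/* mylib_amalgamated.c - compile this one file into your static library.
   The minizip code is pulled into this translation unit and its entry
   points are edited to be 'static', so the resulting object file exports
   none of the minizip symbols, only your own wrappers. */

static void *my_unz_open(const char *path)   /* was a public minizip function */
{
    /* ... body taken from the bundled minizip sources ... */
    (void)path;
    return 0;
}

/* The only external symbol left is your own API: */
int mylib_open_archive(const char *path)
{
    return my_unz_open(path) != 0;
}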