I'm working on a Mac launcher for a trace library - the tracing works by adding the library to DYLD_INSERT_LIBRARIES (the Mac equivalent of LD_PRELOAD). The DYLD_INSERT_LIBRARIES variable is then propagated by the trace library as further processes are spawned.
The trouble is that I need the 32-bit version of the trace library to be used for 32-bit tracee processes and the 64-bit version for 64-bit tracee processes. In the Linux launcher I have, this is achieved by using ${LIB} in LD_PRELOAD - the dynamic loader (ld.so) then replaces this with "the right thing" when loading a process.
Is there an equivalent of ld.so's ${LIB} variable for dyld on Mac? I couldn't immediately see one when I looked through the man page (https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/dyld.1.html), but I may just be reading it wrong. If not, is there another way of achieving the same effect please?
I think what you want is to compile your inserted library as a fat binary (i.e., multiple architectures in the same file). This should allow a single value of DYLD_INSERT_LIBRARIES to work for subprocesses of various architectures.
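For illustration, a minimal sketch of such a library; the build line in the comment is an assumption, so adjust the -arch flags for your SDK and toolchain:

    /* trace_shim.c - sketch of an insertable library built fat.
     * Assumed build line (adjust architectures to your toolchain):
     *   clang -arch i386 -arch x86_64 -dynamiclib trace_shim.c -o libtrace.dylib
     * dyld then picks the slice matching each tracee:
     *   DYLD_INSERT_LIBRARIES=/path/to/libtrace.dylib ./some_program
     */
    #include <stdio.h>

    __attribute__((constructor))
    static void trace_init(void)
    {
        /* runs in every process that loads the library, whatever its arch */
        fprintf(stderr, "trace: loaded, pointer size = %zu\n", sizeof(void *));
    }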
I'm facing the following problem. I've built a Windows kernel-mode driver with WDK 11, and, in order to pass Windows 11 WHQL, I need to use the new memory allocation function ExAllocatePool2. The problem is that this driver then fails to load on Windows 7.
Windows 7 does not export ExAllocatePool2, just as a user-mode EXE fails to start on an older Windows version when it imports a new-era API from a system DLL.
For a user-mode EXE, I can use the LoadLibrary/GetProcAddress trick to detect at runtime that the new-era API is not available, and fall back to calling an older one.
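For example (a sketch; "SomeNewApi" is just a placeholder for a newer export):

    #include <windows.h>

    typedef BOOL (WINAPI *PFN_SomeNewApi)(DWORD Flags);

    static BOOL CallBestApi(DWORD Flags)
    {
        HMODULE Mod = GetModuleHandleW(L"kernel32.dll");
        PFN_SomeNewApi pNew = (Mod != NULL)
            ? (PFN_SomeNewApi)GetProcAddress(Mod, "SomeNewApi")
            : NULL;
        if (pNew != NULL) {
            return pNew(Flags);   /* new OS: call the new API */
        }
        /* old OS: fall back to the older equivalent here */
        return FALSE;
    }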
I think the same idea applies in kernel mode: use RtlIsNtDdiVersionAvailable to probe the OS version, then call the best API the system provides. But how do I do real runtime dynamic linking to the desired API?
Alternatively, if there is a way to tell the system to ignore the ExAllocatePool2 resolution error when loading my .sys, that would resolve the question as well.
In short, the requirement is: I want a single binary that supports all of Windows 11/10/7 and uses the new APIs on the newer systems.
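Something like the following is what I'm after (an untested sketch; my understanding is that MmGetSystemRoutineAddress is the kernel-mode counterpart of GetProcAddress and returns NULL when the export is missing, so the driver keeps no static import of ExAllocatePool2):

    #include <ntddk.h>

    typedef PVOID (NTAPI *PFN_ExAllocatePool2)(POOL_FLAGS Flags,
                                               SIZE_T NumberOfBytes,
                                               ULONG Tag);

    static PFN_ExAllocatePool2 g_pExAllocatePool2 = NULL;

    VOID ResolvePoolApis(VOID)   /* call once, e.g. from DriverEntry */
    {
        UNICODE_STRING name = RTL_CONSTANT_STRING(L"ExAllocatePool2");
        g_pExAllocatePool2 =
            (PFN_ExAllocatePool2)MmGetSystemRoutineAddress(&name);
    }

    PVOID MyAllocNonPaged(SIZE_T NumberOfBytes, ULONG Tag)
    {
        if (g_pExAllocatePool2 != NULL) {
            /* Windows 10 2004 and later: use the new API.
             * Note it zero-initializes by default. */
            return g_pExAllocatePool2(POOL_FLAG_NON_PAGED, NumberOfBytes, Tag);
        }
        /* Windows 7 path: ExAllocatePoolWithTag exists everywhere,
         * but does not zero the allocation. */
        return ExAllocatePoolWithTag(NonPagedPool, NumberOfBytes, Tag);
    }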
I have written a plugin for vanilla Lua. I wish to protect this plugin, and I have heard of obfuscation. I tried XFuscator, but even after fixing line 5's logic, it doesn't work. Are there any newer, better obfuscators floating around out there?
Thanks!
If you are going to run your Lua script on the same machine where you build it (that is, same Lua version, same machine architecture), you could just compile it to bytecode using luac, like this:
luac -s -o example.out example.lua
Then distribute the .out file, which doesn't contain the Lua source code.
Note that Lua bytecode is platform specific (endianness, word size), and it can change between Lua versions (in fact, it already has in the past). For that reason, if you compile it on, say, an Intel x86-64 machine with Lua 5.3, you should run the generated .out only on that kind of machine or a compatible one.
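If it helps, a stripped .out loads exactly like a source file; for instance, a C host can run it unchanged (a sketch using the Lua 5.3 C API):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        /* luaL_dofile accepts a precompiled chunk as well as source */
        if (luaL_dofile(L, "example.out") != LUA_OK) {
            fprintf(stderr, "%s\n", lua_tostring(L, -1));
        }
        lua_close(L);
        return 0;
    }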
I have a memory-related problem in my application in a Solaris 9 environment, where the Tcl_DeleteInterp() function calls a lot of free() and mutex_unlock() functions. To debug the problem I followed the steps below to compile Tcl on the Solaris server (with the TCL_MEM_DEBUG flag), but I still couldn't use the 'memory' command in my interpreter.
1. Ran the configure script on the server: ./configure --prefix=<directory to install into> --enable-symbols=mem
2. make clean all
3. make install (the Tcl libraries and the tclsh executable are copied to the path specified in step 1)
4. The build generated two libraries (libtcl8.4g.so and libtclstub8.4g.a); I copied libtcl8.4g.so to my application as libtcl8.4.so
5. Copied the tcl8.4 directory as well
6. Also copied tclsh8.4 to $PROVHOME/bin and created a soft link tclsh -> tclsh8.4
7. From my application, linked the debug-enabled libraries at the place where I create the Tcl interpreter
8. Initialized the Tcl interpreter using the Tcl_InitMemory() function (so that the memory command is registered in the supplied interpreter)
When I use the interpreter executable (tclsh) on its own I can execute the memory command, but when I use the same executable from my application it doesn't work. Can someone help me with what the possible reason for this could be?
Also, how can I verify that the libraries were actually compiled with the TCL_MEM_DEBUG flag?
Does the Tcl source tarball contain a Solaris-specific directory where I have to build the libraries, or should I use the unix source directory for the Solaris platform as well?
Thanks
Are you using [mem] interactively (which does expansion of unambiguous short command names) and forgetting to use the full name ([memory]) in your scripts?
You're using Tcl embedded in your code? You need to call Tcl_InitMemory (passing in the handle to the interpreter where you want the memory command created) after creating the interpreter and before you run user scripts, i.e., straight after the Tcl_CreateInterp call gives you the handle (which should in turn come after the Tcl_FindExecutable call that initializes the shared parts of the library).
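In other words, the initialization order looks like this (a minimal embedding sketch; Tcl_InitMemory only registers [memory] when the library was built with TCL_MEM_DEBUG):

    #include <tcl.h>

    int main(int argc, char *argv[])
    {
        Tcl_FindExecutable(argv[0]);       /* init shared library state */
        Tcl_Interp *interp = Tcl_CreateInterp();
        Tcl_InitMemory(interp);            /* register [memory] here,
                                              before any user script runs */
        if (Tcl_Init(interp) != TCL_OK) {
            return 1;
        }
        /* ... run your scripts; [memory info] etc. is now available ... */
        Tcl_DeleteInterp(interp);
        Tcl_Finalize();
        return 0;
    }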
You must also make sure that everything is built with that flag set, so that the correct memory allocation APIs are used both in your code and where it integrates with Tcl, and you must make sure that you are linking against the debugging build. It's probably the linking that has gone wrong, but I've not done that level of development on Solaris for many years.
I think you'll find that “Getting a list of used libraries by a running process (unix)” is relevant to your problems.
I'm having problems debugging a JNI application. I've read several threads on Stack Overflow, like this one, this one, or this one. I've also tried to start gdb in a separate shell and attach it to the running Java process. In both cases, the problem is the same: GDB can't find the sources to debug.
Things tried:
Add "dir" line to gdbinit, pointing to C++ sources folder
Adding the C++ sources folder to the GDB debbuging configuration in Eclipse, in the "Sources" tab.
Adding set environment LD_LIBRARY_PATH=/path/to/library.so, being library.so the library file built from C++ source files
Attach ddd to the java process, but then I get an error because pthread_join.c is not found in the working directory. I don't have this file in my hard disk. I don't know what is this about.
Nothing has worked. I've spent several days on this. I know my bug is in the C++ code called by the JNI wrapper, but I can't debug it. Any hints? If it helps, I'm running Eclipse Juno on Debian 7 under a Parallels VM on Mac OS.
Many thanks in advance,
You need to have debug information in your native library, so you should pass -g to your compiler and linker to get this information into the binary. You may also want to add -O0 so that optimizations don't confuse the debugger.
As an alternative to attaching to the Java process, you can create a C++ app and debug it directly. You just need to link in the functions you want to test. In the main function, create the VM, register the functions with RegisterNatives, and kick off a Java test class that uses them.
Hopefully, the debugger has no problem finding the sources since it is just part of the normal compile/link/debug loop of a C++ app.
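A minimal sketch of such a driver, using the JNI invocation API; the class and method names here are placeholders for your own:

    #include <jni.h>
    #include <stdio.h>

    static jint JNICALL nativeAdd(JNIEnv *env, jclass cls, jint a, jint b)
    {
        (void)env; (void)cls;   /* unused in this sketch */
        return a + b;           /* the native code you want to step through */
    }

    int main(void)
    {
        JavaVMOption opt;
        JavaVMInitArgs args;
        JavaVM *jvm;
        JNIEnv *env;

        opt.optionString = "-Djava.class.path=.";
        args.version = JNI_VERSION_1_6;
        args.nOptions = 1;
        args.options = &opt;
        args.ignoreUnrecognized = JNI_FALSE;

        if (JNI_CreateJavaVM(&jvm, (void **)&env, &args) != JNI_OK) {
            fprintf(stderr, "could not create VM\n");
            return 1;
        }

        jclass cls = (*env)->FindClass(env, "TestDriver");  /* placeholder */
        if (cls != NULL) {
            JNINativeMethod m = { "nativeAdd", "(II)I", (void *)nativeAdd };
            (*env)->RegisterNatives(env, cls, &m, 1);
            /* ... invoke a TestDriver method that exercises nativeAdd ... */
        }

        (*jvm)->DestroyJavaVM(jvm);
        return 0;
    }

Since this is an ordinary C program, gdb picks up the sources as part of the usual compile/link/debug loop, with no attaching involved.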
I would suggest starting with the latest ADT bundle. You can even download the Mac version, so you won't need Parallels at all (see the detailed instructions). Then choose Debug Android Native Application in the launch menu.
This isn't my area of expertise, so I hope I'm asking the right question.
We have a server that is being replaced at the end of its lease. The old server is a 32-bit Windows server and the new one is 64-bit Windows Server 2008 R2 SP1.
One of the web applications uses Perl to run some scripts.
Can we run the same 32-bit version of Perl on the new 64-bit machine? (That is, if it's the same version but one build is 32-bit and the other is 64-bit, are they essentially the same?)
If a script works on the 32-bit version, should it still work under the 64-bit version of Perl?
If the questions need clarifying, please let me know and I'll see about asking the appropriate person on our team.
I think the answers to both your questions are yes. The 32-bit applications should run fine on your 64-bit Windows, but will not be able to utilize any of the 64-bit features (where a larger usable address space may very well be the most important if you ever want to parse big XML using XML::Twig ;-).
A script running under 32-bit Perl will work on 64-bit Perl, provided you get all the modules for the 64-bit Perl in order, since they typically run from different directories. Also, be aware that for 64-bit Perl on Windows you probably need Strawberry Perl, ActiveState Perl, or similar; Cygwin is only 32-bit as far as I know.
Yes, you can, as long as you redeploy properly, including the build steps for dependencies. Just copying the files over will only work if the whole application stack is pure Perl, which is not likely. And yes, they are essentially the same, but binary-incompatible.
Likely yes. Problems can arise with the dependencies; however, the number of modules that fail due to 32-bit/64-bit differences is minuscule.
If you're concerned about compatibility, you should be able to run the 32-bit version of perl on the 64-bit machine (assuming both are x86). But the 64-bit version should work more or less the same as the 32-bit one, with a couple of exceptions that shouldn't affect scripts. Those exceptions mostly concern C/XS code in modules, i.e., binary compatibility: such modules will have to be built for 64-bit. Fortunately, any Perl interpreter that doesn't suck will either do the building for you (on *nix) or provide a package manager with pre-built modules, as ActiveState does.