How to make GNU assembler and linker output resulting code to stdout, not to a file?

So I could write a Python script which passes assembler code to 'as' via stdin and gets the resulting binary through 'ld' stdout.
Update:
I have already tried '-o /dev/stdout', but it doesn't work:
as -o /dev/stdout test.s
test.s: Assembler messages:
test.s: Fatal error: can't write /dev/stdout: Illegal seek
test.s: Fatal error: can't close /dev/stdout: Illegal seek
Update II:
Why would you want to do such a thing?
I want to create a Python binding for gsasl, and to dynamically generate machine code to use as callbacks which call Python callables somehow (for gsasl_callback_set). Using stdout instead of temporary files is for greater security and performance. Just one of my crazy ideas, since making callbacks with SWIG is not so easy. Currently I'm trying to play with Vala and GIR; maybe that will be a more universal way. And there are reasons why I don't want to use Cython.

The output of those two tools can't be redirected. The hackish way to do it is to write a launcher which starts as or ld and, after their completion, reads the contents of the already-written file pointed to by the -o option and copies it to stdout (or passes it along in some other way).
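For illustration, a minimal sketch of such a launcher in Python (assuming as is on the PATH; error handling is reduced to the bare minimum):

import os
import subprocess
import sys
import tempfile

def assemble_to_stdout(source_path):
    # 'as' insists on a seekable output file, so give it a temp file,
    # then copy the finished object code to our (non-seekable) stdout.
    fd, obj_path = tempfile.mkstemp(suffix=".o")
    os.close(fd)
    try:
        subprocess.run(["as", "-o", obj_path, source_path], check=True)
        with open(obj_path, "rb") as f:
            sys.stdout.buffer.write(f.read())
    finally:
        os.remove(obj_path)

assemble_to_stdout("test.s")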
On the question in Update II:
To work with callbacks in scripting languages, there is libffi, which is already used by Cython and by many other languages' means of interacting with C/C++ callbacks.
The great deal here is the so-called "user data" parameter present in typical C/C++ callback signatures: it lets another language's (Python's, in this case) object pointer or descriptor (probably a function or method) be passed along with the callback, so the context is preserved. Gsasl's callback mechanism does not have such a "user data" parameter, so a good old-fashioned binding to some class instance is not possible, and tracking callback context would require creating and using a global variable in Python. It is therefore better to find a pure-Python implementation of the functionality than to use globals or singletons.
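For what it's worth, a small ctypes sketch (ctypes is built on libffi) shows how a closure can stand in for the missing "user data": the context travels inside the generated C function pointer itself. The int(int) callback signature here is made up for the example:

import ctypes

CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

def make_callback(context):
    # 'context' is captured by the closure; libffi builds a separate
    # C-callable trampoline for every wrapper, so no user-data pointer
    # is needed. Keep a reference to the returned object alive for as
    # long as C code may call it.
    def cb(prop):
        return context.get(prop, -1)
    return CALLBACK(cb)

cb1 = make_callback({1: 10})
cb2 = make_callback({1: 20})
print(cb1(1), cb2(1))  # 10 20: each pointer carries its own context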

Related

ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot find one.
What I'm trying to do is work with the "bind by reference" ability, but in ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method: I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
Following my question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name, SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/*********************************************************************/
/* *MODULE  SNILOG  INIGREDI  04/10/21 15:25:30                      */
/*********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As some of the procedures are programmed with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with dftactgrp(*no), of course!
And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec bnddir('SNI_BNDDIR'), it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it still works fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to specify the SRVPGM the program is going to call. Well, just its procedures...
But I cannot find any similar option in the CRTSQLRPGI command.
Nor in the keywords of the ctl-opt statement (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of bnddir('SNI_BNDDIR') is the way to bind by reference OR bind by copy.
The key is what does your BNDDIR look like?
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM, containing the modules (and maybe a utility *SRVPGM or two) needed to build that specific *SRVPGM.
Then one or more *BNDDIR that contain just the *SRVPGM objects used to build the programs that call those *SRVPGMs.
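A sketch in CL of what that looks like (library and source names here are illustrative): the callers' binding directory lists the *SRVPGM rather than the *MODULE, which is what makes the CRTSQLRPGI-compiled caller bind by reference:

CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)
/* List the *SRVPGM, not the *MODULE, to get bind by reference */
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))
/* The caller still just names the BNDDIR in its ctl-opt */
CRTSQLRPGI OBJ(MYLIB/SNI600V) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(SNI600V) OBJTYPE(*PGM)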

MATLAB compiler says some functions in my app use functions not licensed for compilation

I want to compile my app using the MATLAB Compiler. It does so, but with issues...
It says there are some functions that are not licensed for compilation.
The problem is that I haven't used those functions (one of them is fimath.m) in my app myself.
I think these functions are called inside some of my functions, but I don't know which ones.
My question is how to find out which of my functions are using those functions, in order to remove them or replace them with other functions.
There are more than 50 functions in my app and it's not possible to check them one by one.
For every returned "unlicensed" function you can execute the following command:
dbstop in <function name> % without the <>
and afterwards run your code normally for several typical inputs/cases. If it stops at one of these breakpoints, look at the call stack (using either dbstack or the Editor tab of the MATLAB GUI), and identify the entry point from your own code.
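For example, you can set all the breakpoints in one go over the reported names (the list below is hypothetical):

unlicensed = {'fimath', 'numerictype'};  % names reported by the compiler (example values)
for k = 1:numel(unlicensed)
    dbstop('in', unlicensed{k});  % functional form of: dbstop in <function name>
end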
If none of the breakpoints is ever hit, it could mean that these functions are referred to inside the code, but some logic is preventing their execution (turning them, practically, into "unreachable code"). In this case, you will likely need to remove these references manually. To know where from, using information from the link posted by VTodorov, you can list the dependencies of each file using
[fList,pList] = matlab.codetools.requiredFilesAndProducts('myFun.m');
which can be called on the output of dir (after some minor conversion). It could be useful to use the toponly flag.
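A rough sketch of that scan (the fimath name is from the question; everything else is a placeholder): any file whose direct dependency list mentions an unlicensed function is a candidate to fix:

files = dir('*.m');
for k = 1:numel(files)
    % 'toponly' lists only direct dependencies of each file
    fList = matlab.codetools.requiredFilesAndProducts(files(k).name, 'toponly');
    if any(contains(fList, 'fimath'))
        fprintf('%s depends on fimath\n', files(k).name);
    end
end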

Perl shallow syntax check? i.e. do not check syntax of imports

How can I perform a "shallow" syntax check on Perl files? The standard perl -c is useful, but it checks the syntax of imports. This is sometimes nice, but not great when you work in a code repository and push to a running environment, and you have a function defined in the repository but not yet pushed to the running environment. The check fails on such a function because the imports reference system paths (i.e. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
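A tiny demonstration of that effect (plain perl, no modules needed): the very same kind of token parses differently once a sub of that name is in scope:

# Without 'use strict' and with no sub in scope, a bareword is a string:
my $x = blue;
print "$x\n";    # prints "blue"

# Once a sub of that name exists, the same kind of token becomes a call:
sub green { return 42 }
my $y = green;
print "$y\n";    # prints "42"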
There are two problems with this:
1. How to not fail -c if the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production.
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem of the module NOT failing in real production runtime if the libraries are missing - DoublePlusNotGood in my book.
2. Even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to #1A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface, e.g. do-nothing subs or dummy variables.
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export in runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
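For the module from the question, such a stub might look like this (a minimal sketch; the sub names are taken from the question's import list):

package Custom::Project::Lib;
use Exporter 'import';
our @EXPORT_OK = qw(foo bar baz);
# Do-nothing bodies: just enough for imports and perl -c to succeed.
sub foo {}
sub bar {}
sub baz {}
1;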
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports; however, it could perhaps be more easily modified to guess what looks like a function name.

How to resolve bindings during execution with embedded Python?

I'm embedding Python into a C++ application. I plan to use PyEval_EvalCode to execute Python code, but instead of providing the locals and globals as dictionaries, I'm looking for a way to have my program resolve symbol references dynamically.
For example, let's say my Python code consists of the following expression:
bear + lion * bunny
Instead of placing bear, lion and bunny and their associated objects into the dictionaries that I'm passing to PyEval_EvalCode, I'd like the Python interpreter to call back my program and request these named objects.
Is there a way to accomplish this?
By providing the locals and globals dictionaries, you are providing the environment in which the evaled code is executed. That effectively provides you with an interface to map names to objects defined in the C++ app.
Can you clarify why you do not want to use the dictionaries?
Another thing you could do is process the string in C++ and do string substitution before you eval the code....
Possibly. I've never tried this, but in theory you might be able to implement a small extension class in C++ that overrides the __getattr__ method (probably via the tp_as_mapping or tp_getattro function pointers of PyTypeObject). Pass an instance of this as locals and/or globals to PyEval_EvalCode, and your C++ method should be asked to resolve your lions, tigers, and bears for you.
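A pure-Python sketch of that idea, using a dict subclass in place of the C++ extension type (CPython falls back to the mapping protocol for namespaces that are not plain dicts, so __missing__ gets asked about unknown names; the resolved value here is a made-up placeholder):

class Resolver(dict):
    def __missing__(self, name):
        # A C++ host would resolve the symbol here instead.
        print("host asked to resolve:", name)
        return 2  # placeholder object

result = eval("bear + lion * bunny", Resolver())
print(result)  # 2 + 2 * 2 = 6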

Parsing Unix/iPhone/Mac OS X version of PE headers

This is a little convoluted, but let's try:
I'm integrating LUA scripting into my game engine, and I've done this in the past on win32 in an elegant way. On win32 all I did was to mark all of the functions I wanted to expose to LUA as export functions. Then, to integrate them into LUA, I'd parse the PE header of the executable, unmangle the names, parse the parameters and such, then register them with my LUA runtime. This allowed me to avoid manually registering every function individually just to expose them to LUA.
Now, flash forward to today where I'm working on the iPhone. I've looked through some Unix stuff and I've gotten very close to taking a similar approach, however I'm not sure it will actually work.
I'm not entirely familiar with Unix, but here is what I have so far on iPhone:
Step 1: Query for the executable path through Objective-C and get the path of my app
Step 2: Use dlopen to get a handle to my app using: `dlopen(path, RTLD_NOW)`
Step 3: Use `dlsym( libraryHandle, objectName )` to attempt to get the address of a known symbol.
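In code, the three steps look roughly like this (a sketch; dlopen of your own executable and dlsym lookups behave this way on typical Unix, whatever the iPhone ends up doing with them, and the symbol needs to be exported, e.g. via the --export-dynamic option mentioned below):

#include <dlfcn.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Step 2: get a handle to our own image (path from step 1). */
    void *handle = dlopen(argv[0], RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    /* Step 3: look up a known exported symbol by name. */
    void *sym = dlsym(handle, "main");
    printf("main is at %p\n", sym);
    dlclose(handle);
    return 0;
}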
The above steps alone won't actually get me to where I want to be, but even that much doesn't work. Does anyone have any experience doing this type of thing on Unix? Are there any headers or functions I can google to put me on the right track?
Thanks;)
iPhone does not support dynamic linking after the initial application launch. While what you want to do does not actually require linking in any new application TEXT, it would not shock me to find out that some of the dl* functions do not behave as expected.
You may be able to write some platform-specific code, but I recommend using a technique developed by the various BSDs called linker sets. Basically, you annotate the functions you want to do something with (just like you currently mark them for export). Through some preprocessor magic the annotations are stored, sometimes in an extra segment in the binary image; then code grabs that data and enumerates it. So you simply add all the functions you want into the linker set, then walk through the linker set and register all the functions in it with lua.
I know people have gotten this stuff up and running on Windows and Linux; I have used it on Mac OS X and various *BSDs. I am linking to the FreeBSD linker_set implementation, but I have not personally seen the Windows implementation.
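A minimal sketch of the technique with GCC/Clang attributes (this is the ELF/GNU-ld flavor, which synthesizes __start_/__stop_ symbols for a custom section; on Mach-O you would enumerate the section with getsectiondata instead, and all names below are illustrative):

#include <stdio.h>

typedef struct {
    const char *name;
    void (*fn)(void);
} lua_export_t;

/* Drop one descriptor per exported function into a dedicated section. */
#define FOR_LUA(f)                                           \
    static const lua_export_t lua_export_##f                 \
        __attribute__((used, section("lua_fns"))) = { #f, f }

static void greet(void) { puts("hello from greet"); }
FOR_LUA(greet);

/* GNU ld defines these bounds for the custom section. */
extern const lua_export_t __start_lua_fns[], __stop_lua_fns[];

int main(void) {
    for (const lua_export_t *e = __start_lua_fns; e < __stop_lua_fns; e++) {
        printf("registering %s\n", e->name);   /* register with lua here */
        e->fn();
    }
    return 0;
}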
You need to pass --export-dynamic to the linker (via -Wl,--export-dynamic).
Note: This is for Linux, but could be a starting point for your search.
References:
http://sourceware.org/binutils/docs/ld/Options.html
If static linking is an option, integrate that into the linker script. Before linking, do "nm" on all object files, extract the global symbols, and generate a C file containing a (preferably sorted/hashed) mapping of all symbol names to symbol values:
struct symbol { const char *name; void *value; } symbols[] = {
    {"foo", foo},
    {"bar", bar},
    ...
    {0, 0}
};
If you want to be selective in what you expose, it might be easiest to implement a naming scheme, e.g. prefixing all functions/methods with Lua_.
Alternatively, you can create a trivial macro,
#define ForLua(X) X
and then grep the sources for ForLua, to select the symbols that you want to incorporate.
You could just generate a mapfile and use that instead, no?