When I want to set a breakpoint using !bpmd, I need the fully qualified method name.
According to the MSDN documentation (http://msdn.microsoft.com/en-us/library/bb190764.aspx), I can find type names using ILDASM.
(So I can find fully qualified method names using ILDASM.)
However, this means I need a separate tool just for searching method names.
The background of this question: I know I can use the 'x' command for native apps.
Since 'x' shows symbols, I can find function names easily.
So if there were something like the 'x' command for .NET apps, I wouldn't need a separate tool.
I usually vaguely remember method names, but I don’t remember fully qualified method names.
As 'x' is to native code, !sosex.mx is to managed code.
Download sosex for free from stevestechspot.com. You can pass the wildcards ? and * in your search. If you just want to set a breakpoint, though, you can simply use !sosex.mbm (or !mbp, if you want to break on a source file and line number).
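For example, a typical session might look like this (the module and method names here are hypothetical; .load, !mx and !mbm are the actual commands):

0:000> .load sosex
0:000> !mx *Fetch*
0:000> !mbm MyApp.Data.Repository.FetchAll

!mx lists managed symbols matching the filter, so a vaguely remembered name plus wildcards is usually enough to recover the fully qualified name for !bpmd or !mbm.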
I've been trying to find a solution for this, but I cannot find one.
What I'm trying to do is use the "bind by reference" ability, but with ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method: I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
To illustrate the question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I list the exported procedures, in a source member with the same name, SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/********************************************************************/
/* *MODULE SNILOG INIGREDI 04/10/21 15:25:30                        */
/********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As some of the procedures are written with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with `dftactgrp(*no)`, of course!
And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec `bnddir('SNI_BNDDIR')`, it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it still works fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to specify the SRVPGM whose procedures the program is going to call.
But I cannot find any similar option in CRTSQLRPGI command.
Nor in the ctl-opt keywords (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level 6.
Thanks in advance!
The use of `bnddir('SNI_BNDDIR')` is the way to bind either by reference or by copy.
The key is what your BNDDIR looks like.
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM; it includes the modules (and maybe a utility *SRVPGM or two) needed for building that specific *SRVPGM.
Then one or more *BNDDIRs that include just the *SRVPGM objects used to build the programs that call those *SRVPGMs.
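For example, a binding directory meant for bind by reference contains the service program object itself, not modules (the library name MYLIB is hypothetical; CRTBNDDIR and ADDBNDDIRE are standard CL commands):

CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))

With the *SRVPGM entry in place, compiling the caller with `bnddir('SNI_BNDDIR')` should bind to SNILOG by reference, and DSPPGM on the caller with DETAIL(*SRVPGM) should then list SNILOG as a bound service program.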
Let us assume I want to use an external Matlab library with a structure like this:
folderName
play.m
run.m
open.m
If I simply add folderName to my Matlab path variable, it will easily yield name conflicts. I don't want to rename the files, so that I can still obtain new releases of the library (the package concept is not used in the library). Renaming would also require modifying the code, if there are calls from one library function to another.
How do I write local wrappers, which wrap the functions from that example library? My wrappers could then have my desired names and input parameters.
Clarification: How do I use an external library (toolbox) without name conflicts, without renaming and without modifying each function?
Rename files: Makes it hard to update the external library.
Simply put them in a package folder: This will break internal library function calls.
You want to use a package, which will establish a namespace, such that things in the package are then qualified with the package name. You can find more information here: http://www.mathworks.com/help/matlab/matlab_oop/scoping-classes-with-packages.html
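A minimal sketch, assuming the library folder is moved into a package directory named +extlib on your path (the package name is made up; note that, as you point out, calls between the library's own functions would then also need qualification, which remains the caveat of this approach):

% Layout: the + prefix makes the folder a package
%   +extlib/play.m
%   +extlib/run.m
%   +extlib/open.m

extlib.play();    % fully qualified call; no conflict with any other play.m
import extlib.*   % or import the package within a function's scope
play();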
I have a Fortran project with some name conflicts (from doxygen's point of view). Sometimes a local variable in a procedure may have the same name as a subroutine or function. For compilation/linking there are no problems, as the different definitions live separate lives; for instance:
progA/main.f defines and uses the variable delta.
libB/delta.f defines a function named delta.
progB/main.f uses the function delta defined in libB.
progB is linked with libB, progA is not linked with libB.
In this case, when generating call/caller graphs, or linked source code, the variable delta in progA/main.f will be identified as the function delta. Is there some combination of doxygen settings I can use to inform it that progA is not supposed to use definitions in libB, or something similar?
Another issue is that I may have functions/subroutines with the same name in different subdirectories. Again, as long as they are not linked together this does not represent a problem for compilation, but doxygen cannot identify which of them is meant in links, calls, etc. Is there some way around this (without renaming procedures, that is)?
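The only workaround I can think of so far is to generate the documentation separately per link unit, using the standard INPUT and EXCLUDE options (directory names as in the example above), at the cost of losing a single combined documentation set:

# Doxyfile.progA: document progA without seeing libB's definitions
INPUT   = .
EXCLUDE = libB progB

# and a separate Doxyfile.progB with EXCLUDE = progA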
EDIT: I completely re-wrote the question since it seems like I was not clear enough in my first two versions. Thanks for the suggestions so far.
I would like to internationalize the source code for a tutorial project (please note, not the runtime application). Here is an example (in Java):
/** A comment */
public void doSomething() {
    System.out.println("Something was done successfully");
}
in English, and then have the French version be something like:
/** Un commentaire */
public void faitQuelqueChose() {
    System.out.println("Quelque chose a été fait avec succès.");
}
and so on. And then have something like a properties file somewhere to edit these translations with usual tools, such as:
com.foo.class.comment1=A comment
com.foo.class.method1=doSomething
com.foo.class.string1=Something was done successfully
and for other languages:
com.foo.class.comment1=Un commentaire
com.foo.class.method1=faitQuelqueChose
com.foo.class.string1=Quelque chose a été fait avec succès.
I am trying to find the easiest, most efficient and unobtrusive way to do this, with the least amount of manual grunt work (other than, obviously, translating the actual text). Preferably working under Eclipse. For example, the original code would be written in English, then externalized (to properties, preferably leaving the original source untouched), translated (by a human) and then re-generated (as a separate source file / project).
Some leads I have found (other than what AlexS suggested):
ANTLR, a language parser/generator. There seems to be a supporting Eclipse plugin.
Using Eclipse's AST (Abstract Syntax Tree) and I guess building some kind of plugin.
I am just surprised there isn't a tool out there that does this already.
I'd use unique strings as method names (or as anything else you want to be replaced by localized versions).
public void m37hod_1() {
    System.out.println(m355a6e_1);
}
Then I'd define a property file for each language, like this:
m37hod_1=doSomething
m355a6e_1="Something was done successfully"
And then I'd write a small program that parses the source files and replaces the strings, so everything happens just outside Eclipse.
Or I'd use the Ant task Replace together with the property files, instead of a standalone translation program.
Something like that:
<replace dir="${src}"
         includes="**/*.java"
         value="defaultvalue"
         propertyFile="${language}.properties">
    <replacefilter
        token="m37hod_1"
        property="m37hod_1"/>
    <replacefilter
        token="m355a6e_1"
        property="m355a6e_1"/>
</replace>
Using one of these methods you won't have to explain anything about localization in your tutorials (unless you want to), but can concentrate on your real topic.
What you want is a massive code change engine.
ANTLR won't do the trick; ASTs are necessary but not sufficient. See my essay on Life After Parsing. Eclipse's "AST" may be better, if the Eclipse package provides some support for name and type resolution; otherwise you'll never be able to figure out how to replace each "doSomething" (might be overloaded or local), unless you are willing to replace them all identically (and you likely can't do that, because some symbols refer to Java library elements).
Our DMS Software Reengineering Toolkit could be used to accomplish your task. DMS can parse Java to ASTs (including comment capture), traverse the ASTs in arbitrary ways, analyze/change the ASTs, and then export the modified ASTs as valid source code (including the comments).
Basically you want to enumerate all comments, strings, and declarations of identifiers, export them to an external "database" to be mapped (manually? by Google Translate?) to an equivalent. In each case you want to note not only the item of interest, but its precise location (source file, line, even column) because items that are spelled identically in the original text may need different spellings in the modified text.
Enumeration of strings is pretty easy if you have the AST; simply crawl the tree and look for tree nodes containing string literals. (ANTLR and Eclipse can surely do this, too).
Enumeration of comments is also straightforward if the parser you have captures comments. DMS does. I'm not quite sure if ANTLR's Java grammar does, or the Eclipse AST engine; I suspect they are both capable.
Enumeration of declarations (classes, methods, fields, locals) is relatively straightforward; there are rather more cases to worry about (e.g., anonymous classes containing extensions to base classes). You can code a procedure to walk the AST and match the tree structures, but here's where DMS starts to make a difference: you can write surface-syntax patterns that look like the source code you want to match. For instance:
pattern local_for_loop_index(i: IDENTIFIER, t: type, e: expression, e2: expression, e3: expression): for_loop_header
    = "for (\t \i = \e; \e2; \e3)"
will match declarations of local for-loop variables, and return subtrees for the IDENTIFIER, the type, and the various expressions; you'd want to capture just the identifier (and its location, easily done by taking it from the source position information that DMS stamps on every tree node). You'd probably need 10-20 such patterns to cover the cases of all the different kinds of identifiers.
With the capture step completed, something needs to translate all the captured entities into your target language. I'll leave that to you; what's left is to put the translated entities back.
The key to this is the precise source location. A line number isn't good enough in practice; you may have several translated entities on the same line, in the worst case some with different scopes (imagine nested for loops, for example). The replacement process for comments, strings and declarations is straightforward: rescan the tree for nodes that match any of the identified locations, and replace the entity found there with its translation. (You can do this with DMS and ANTLR. I think the Eclipse AST requires you to generate a "patch", but I guess that would work.)
The fun part comes in replacing the identifier uses. For this, you need to know two things:
for any use of an identifier, which declaration it uses; if you know this, you can replace it with the new name for the declaration (DMS provides full name and type resolution as well as a usage list, making this pretty easy), and
whether renamed identifiers shadow one another in scopes differently than the originals. This is harder to check in general. However, for the Java language, we have a "shadowing" check, so you can at least detect after renaming that you have an issue. (There's even a renaming procedure that can be used to resolve such shadowing conflicts.)
After patching the trees, you simply rewrite the patched tree back out as a source file using DMS's built-in prettyprinter. I think Eclipse AST can write out its tree plus patches. I'm not sure ANTLR provides any facilities for regenerating source code from ASTs, although somebody may have coded one for the Java grammar. This is harder to do than it sounds, because of all the picky detail. YMMV.
Given your goal, I'm a little surprised that you don't want a source file "foo.java" containing "class foo { ... }" to get renamed to match the translated class name. This would require not only writing the transformed tree to the translated file name (pretty easy) but perhaps even reconstructing the directory tree (DMS provides facilities for doing directory construction and file copies, too).
If you want to do this for many languages, you'd need to run the process once per language. If you wanted to do this just for strings (the classic internationalization case), you'd replace each string (that needs changing, not all of them do) by a call on a resource access with a unique resource id; a runtime table would hold the various strings.
One approach would be to finish the code in one language, then translate to others.
You could use Eclipse to help you.
Copy the finished code to language-specific projects.
Then:
Identifiers: In the Outline view (Window>Show View>Outline), select each item and Refactor>Rename (Alt+Shift+R). This takes care of renaming the identifier wherever it's used.
Comments: Use Search>File to find all instances of "/*" or "//". Click on each and modify.
Strings:
Use Source>Externalize strings to find all of the literal strings (see the sketch after this list for what the result looks like).
Search>File for "Messages.getString()".
Click on each result and modify.
On each file, Edit>Find/Replace (with regular expressions enabled), replacing "//\$NON-NLS-.*\$" with the empty string.
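After the Externalize strings step, each externalized literal typically ends up looking something like this (the Messages accessor class and the key are generated by Eclipse, so the exact names will differ):

System.out.println(Messages.getString("Tutorial.0")); //$NON-NLS-1$

The //$NON-NLS-1$ marker is what the Find/Replace step above strips out afterwards.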
For the printed/logged strings, Java has some internationalization functionality built in, namely ResourceBundle. There is a tutorial about this on the Oracle site.
Eclipse also has a feature for this ("Externalize Strings", as I recall).
For the method names, I don't think there is anything out there, since that would require you to maintain the source code in many versions...
Regards
Use a .properties file, like this:
Locale locale = new Locale(language, country);
ResourceBundle captions = ResourceBundle.getBundle("Messages", locale);
This way, Java picks the Messages.properties file according to the current locale (which is acquired from the operating system or the Java locale settings).
The file should be on the classpath, called Messages.properties (the default one), or Messages_de.properties for German, etc.
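A minimal, self-contained sketch (the bundle key "greeting" is made up):

import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {
    public static void main(String[] args) {
        Locale locale = new Locale("fr", "FR");
        // Looks for Messages_fr_FR.properties, then Messages_fr.properties,
        // then falls back to Messages.properties on the classpath.
        ResourceBundle captions = ResourceBundle.getBundle("Messages", locale);
        System.out.println(captions.getString("greeting"));
    }
}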
See this for a complete tutorial:
http://docs.oracle.com/javase/tutorial/i18n/intro/steps.html
As far as the source code goes, I'd strongly recommend staying with English. Method names like getUnternehmen() are worse for the average developer than plain English ones.
If you need to familiarize foreign developers to your code, write a proper developer documentation in their language.
If you'd like to have Javadoc in both English and other languages, see this SO thread.
You could write your code using FreeMarker templates (or another templating language such as Velocity).
doSomething.tml
/** ${lang['doSomething.comment']} */
public void ${lang['doSomething.methodName']}() {
    System.out.println("${lang['doSomething.message']}");
}
lang_en.prop
doSomething.comment=A comment
doSomething.methodName=doSomething
doSomething.message=Something was done successfully
And then merge the template with each language prop file during your build (using Ant / Gradle / Maven etc.)
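That merge step might look like this with FreeMarker's Java API (a sketch assuming FreeMarker 2.3.x; the directory and output paths are made up, and a java.util.Properties object serves directly as the `lang` hash in the template):

import freemarker.template.Configuration;
import freemarker.template.Template;
import java.io.*;
import java.util.*;

public class MergeTemplates {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        // Load the per-language property file and expose it as "lang".
        Properties lang = new Properties();
        try (InputStream in = new FileInputStream("lang_en.prop")) {
            lang.load(in);
        }
        Map<String, Object> model = new HashMap<>();
        model.put("lang", lang);
        // Merge the template and write the generated Java source.
        Template t = cfg.getTemplate("doSomething.tml");
        try (Writer out = new FileWriter("src-en/DoSomething.java")) {
            t.process(model, out);
        }
    }
}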
This is a little convoluted, but let's try:
I'm integrating Lua scripting into my game engine, and I've done this in the past on win32 in an elegant way. On win32 all I did was mark all of the functions I wanted to expose to Lua as export functions. Then, to integrate them into Lua, I'd parse the PE header of the executable, unmangle the names, parse the parameters and such, then register them with my Lua runtime. This allowed me to avoid manually registering every function individually just to expose it to Lua.
Now, flash forward to today where I'm working on the iPhone. I've looked through some Unix stuff and I've gotten very close to taking a similar approach, however I'm not sure it will actually work.
I'm not entirely familiar with Unix, but here is what I have so far on iPhone:
Step 1: Query for the executable path through Objective-C and get the path of my app.
Step 2: Use dlopen to get a handle to my app using: `dlopen(path, RTLD_NOW)`
Step 3: Use `dlsym( libraryHandle, objectName )` to attempt to get the address of a known symbol.
Even the above steps won't get me all the way to where I want to be, but so far even that much doesn't work. Does anyone have any experience doing this type of thing on Unix? Are there any headers or functions I can google to put me on the right track?
Thanks;)
The iPhone does not support dynamic linking after the initial application launch. While what you want to do does not actually require linking in any new application TEXT, it would not shock me to find out that some of the dl* functions do not behave as expected.
You may be able to write some platform-specific code, but I recommend using a technique developed by the various BSDs called linker sets. Basically, you annotate the functions you want to do something with (just as you currently mark them for export). Through some preprocessor magic the annotations are stored, sometimes in an extra segment in the binary image; then code at runtime grabs that data and enumerates it. So you simply add all the functions you want into the linker set, then walk through the linker set and register all the functions in it with Lua.
I know people have gotten this stuff up and running on Windows and Linux, and I have used it on Mac OS X and various *BSDs. I am linking to the FreeBSD linker_set implementation, but I have not personally seen the Windows implementation.
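A minimal sketch of the idea in C (the section and macro names are invented; this uses the __start_/__stop_ bracketing symbols that GNU toolchains generate for ELF sections, so on iPhone/Mach-O you would instead place the entries in a named __DATA section and locate it with getsectiondata):

#include <stdio.h>

typedef struct {
    const char *name;
    void (*fn)(void);
} lua_export_t;

/* Drop an entry into a dedicated section; the linker concatenates them all. */
#define FOR_LUA(f) \
    static const lua_export_t lua_export_##f \
        __attribute__((used, section("lua_exports"))) = { #f, f }

static void hello(void) { puts("hello"); }
FOR_LUA(hello);

/* GNU ld generates these bracketing symbols for the section. */
extern const lua_export_t __start_lua_exports[];
extern const lua_export_t __stop_lua_exports[];

int main(void) {
    for (const lua_export_t *e = __start_lua_exports; e < __stop_lua_exports; ++e) {
        printf("would register %s with Lua at %p\n", e->name, (void *)e->fn);
    }
    return 0;
}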
You need to pass --export-dynamic to the linker (via -Wl,--export-dynamic).
Note: This is for Linux, but could be a starting point for your search.
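A quick sketch of the effect on Linux (the function name is made up); without --export-dynamic, dlsym cannot see symbols in the main executable:

/* Build: cc -Wl,--export-dynamic main.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

void my_exported(void) { puts("found me"); }

int main(void) {
    void *self = dlopen(NULL, RTLD_NOW);  /* handle to the main program */
    void (*fn)(void) = (void (*)(void))dlsym(self, "my_exported");
    if (fn) fn();
    return 0;
}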
References:
http://sourceware.org/binutils/docs/ld/Options.html
If static linking is an option, integrate this into the build. Before linking, run "nm" on all object files, extract the global symbols, and generate a C file containing a (preferably sorted/hashed) mapping of all symbol names to symbol values:
struct symbol { const char *name; void *value; } symbols[] = {
    {"foo", (void *)foo},
    {"bar", (void *)bar},
    ...
    {0, 0}};
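The generation step could be a one-liner in the build (assuming GNU nm; the awk program keeps only defined global text symbols and emits one table entry per function):

nm -g --defined-only *.o | awk '$2 == "T" { printf "    {\"%s\", (void *)%s},\n", $3, $3 }'

You would still need to emit a matching extern declaration for each symbol above the table.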
If you want to be selective in what you expose, it might be easiest to implement a naming scheme, e.g. prefixing all functions/methods to be exported with Lua_.
Alternatively, you can create a trivial macro,
#define ForLua(X) X
and then grep the sources for ForLua, to select the symbols that you want to incorporate.
You could just generate a mapfile and use that instead, no?