I am trying to wrap some UniData subroutines as a SOAP web service. I'm planning to use C# and the UODOTNET library (IBM U2 Data Management Interface for .NET). I'm also looking to create an engine that reads all the available subroutines from the data server, reads all the required parameters for each one, and dynamically generates the code required for the web service.
My code would be something like this:
var session = UniObjects.OpenSession(
"192.168.0.1",
"user",
"password",
"account"
);
var cmd = session.CreateUniCommand();
cmd.Command = "LIST SUBURB.INDEX"; // ?????
cmd.Execute();
var res = cmd.Response;
Question 1: Is there any command that I can use to retrieve the list of all available subroutines?
Question 2: Is there any command that I can use to retrieve a list of all parameters for a specific subroutine?
Cheers
The short answer is no.
The longer answer is yes, but with a lot of work.
Since you are asking this question, I'm going to assume you are missing a lot of general knowledge about the platform. Hence, to be able to do this, you'll need to:
Learn how the VOC works, specifically how executable code can be catalogued there.
Learn about the CATALOG and how cataloguing programs globally, locally and directly differs.
Understand how your system in particular is designed. Some places have everything directly catalogued in the VOC, others are a mix. If it's the former, your task will be easier.
Once you understand the above, you'll know how to get a list of all executable programs from the VOC, the local catalog and the global catalog. For instance, a simplified query against the VOC is the UniQuery command LIST VOC WITH F1="C".
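As a sketch, that query can be run through the same UniCommand API shown in the question (this assumes session is already open as above; the layout of the response text is an assumption, so the parsing is deliberately naive):

var vocCmd = session.CreateUniCommand();
vocCmd.Command = "LIST VOC WITH F1=\"C\"";
vocCmd.Execute();
// The response comes back as a single block of text; split it into
// lines and inspect each VOC entry.
var lines = vocCmd.Response.Split(
    new[] { '\r', '\n' },
    StringSplitOptions.RemoveEmptyEntries);
foreach (var line in lines)
{
    Console.WriteLine(line);
}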
The hard part is getting the parameter list, for which there isn't any system command. To do this you have two options:
Reverse engineer the byte code of every subroutine and tease out the number of parameters.
If you have access to the related source code, parse it to generate the list.
Just wanted to add a comment on this one: in UniData there is a MAKE.MAP.FILE command that will identify programs and subroutines (and the number of parameters), putting that information in the '_MAP_' file. This does not tell you what the parameters are used for, but it helps.
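A hypothetical follow-up sketch in the same vein, reusing the question's session: run MAKE.MAP.FILE and then dump the generated '_MAP_' file with a plain LIST. Whether MAKE.MAP.FILE needs arguments on your system, and the exact record layout of _MAP_, are assumptions here, so the response is printed raw:

var mapCmd = session.CreateUniCommand();
mapCmd.Command = "MAKE.MAP.FILE"; // populates the _MAP_ file
mapCmd.Execute();

var listCmd = session.CreateUniCommand();
listCmd.Command = "LIST _MAP_"; // programs/subroutines with their parameter counts
listCmd.Execute();
Console.WriteLine(listCmd.Response);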
I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the below, but haven't found a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do it in a Puppet file?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts, documented in the Facter 3 manual (similar applies to earlier versions). Do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
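A minimal sketch of such a custom fact; custom facts are written in Ruby and shipped inside a module (the fact name, file path and directory are all illustrative):

# lib/facter/java_dirs.rb -- runs on the agent, so Dir.entries sees that machine
Facter.add(:java_dirs) do
  confine kernel: 'windows'
  setcode do
    dir = 'C:\Program Files\Java'
    File.directory?(dir) ? Dir.entries(dir) - ['.', '..'] : []
  end
end

In the manifest the result is then available as $facts['java_dirs'] (or $::java_dirs on older Puppet), e.g. $dirs = $facts['java_dirs'].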
On the other hand, information about the machine providing catalog-building services -- the master in a master / agent setup -- can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.
I've been having this problem for a while now, and Google has its limits.
I'm writing a PowerShell file that contains several generic functions.
I use the functions in various scripts, and now I want to let other personnel at my work use them as well.
The problem is that, due to sensitive operations, I want to lock and protect the script (compile it to a DLL, EXE, etc.).
How do I create a PowerShell library like a C# DLL?
One option I tried, but could not work out how to continue with, was to compile the script to an executable file (.exe) using PowerGUI, but then I cannot access the functions in it, let alone pass parameters to them.
Hope you understood me :)
Thank you.
You don't. Rather than trying to obscure this information (if you compile them, they can be decompiled and your "protected" resources will no longer be), remove it entirely and make it a parameter of your functions. This both protects your "sensitive" data and makes the code much more reusable.
You can then package your functions into a module (a .psm1 file).
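A minimal sketch of that approach as a module; the function name, parameters and "secret" are all illustrative:

# MyTools.psm1
function Invoke-SensitiveOperation {
    param(
        # The caller supplies the secret; it never lives in the shared code.
        [Parameter(Mandatory = $true)]
        [string]$ApiKey,

        [string]$Target = 'production'
    )

    # ... do the sensitive work with $ApiKey against $Target ...
    Write-Output "Ran against $Target"
}

Export-ModuleMember -Function Invoke-SensitiveOperation

Consumers then run Import-Module .\MyTools.psm1 and call Invoke-SensitiveOperation -ApiKey $key, so the logic is shared while each user provides their own credentials.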
I feel a bit ashamed to ask this question, but I am curious. I recently wrote a script (without organizing the code in modules) that reads the log files of a store and saves the info to the database.
For example, I wrote something like this (via Richard Huxton):
my %item_id_sold_times;
my %item_id_owner_map;

while (<$infile>) {
    # count how many times each item id appears as sold
    if (/item_id:(\d+)\s*,\s*sold/) {
        my $item_id = $1;
        $item_id_sold_times{$item_id}++;
    }
}

my @matched_item_ids = keys %item_id_sold_times;

# look up the owner of each sold item and count sales per owner
my $owner_ids =
    Store::Model::Map::ItemOwnerMap->fetch_by_keys( \@matched_item_ids )
        ->entry();

for my $owner_id (@$owner_ids) {
    $item_id_owner_map{$owner_id}++;
}
Say this script is called script.pl. When testing it, I made a file script.t and had to repeat some blocks of script.pl in script.t. After copy-pasting the relevant code sections, I do confirmations like:
is( $item_id_sold_times{1}, 1, "Number of sold items of item 1" );
is( $item_id_owner_map{3}, 8, "Number of sold items for owner 3" );
And so on and so on.
But some have pointed out that what I wrote is not a test. It is a confirmation script. A good test would involve writing the code in modules, writing a script that exercises the methods in the modules, and writing a test for the modules.
This has made me think about which definition of a test is most widely used in software engineering. Perhaps some of you who have even tested Perl core functions can give me a hand. Can a script (not modularized) not be properly tested?
Regards
An important distinction is: If someone were to edit (bugfix or new feature) your 'script.pl' file, would running your 'script.t' file in its current state give any useful information on the state of the new 'script.pl'? For this to be the case you must include or use the .pl file instead of selectively copy/pasting excerpts.
In the best case you design your software modularly and plan for testing. If you want to test a monolithic script after the fact, I suppose you could write a test script that includes your main program after an exit() and at least have the subroutines to test...
You can test a program just like a module if you set it up as a modulino. I've written about this all over the place, but most notably in Mastering Perl. You can read Chapter 18 online for free right now, since I'm working on the second edition in public. :)
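A minimal sketch of the modulino pattern (the package and sub names are illustrative, with the counting logic borrowed from the question):

#!/usr/bin/perl
package Script;
use strict;
use warnings;

# Run main() only when the file is executed directly. When script.t loads
# this file with require(), caller() is true and main() is skipped, so the
# test can call the subs below without running the whole program.
__PACKAGE__->main(@ARGV) unless caller();

sub main {
    my ( $class, @args ) = @_;
    open my $infile, '<', $args[0] or die "open $args[0]: $!";
    my %sold = count_sold(<$infile>);
    print "$_: $sold{$_}\n" for sort keys %sold;
}

# The logic under test, pulled out of the main loop.
sub count_sold {
    my %item_id_sold_times;
    for (@_) {
        $item_id_sold_times{$1}++ if /item_id:(\d+)\s*,\s*sold/;
    }
    return %item_id_sold_times;
}

1;

script.t can then load it with require './script.pl'; and call Script::count_sold() directly, asserting on the result with is() or is_deeply().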
Without getting into verification versus validation, etc., a test is any activity that verifies the behavior of something else: that the behaviors match requirements and that certain undesirable behaviors don't occur. The scope of a test can be limited (only verifying a few of the desired behaviors) or it can be comprehensive (verifying most or all of the required or desired behaviors), probably with lots of steps or components to the test to make it so. In my mind, the term "confirmation" in your context is another way of saying "limited-scope test". Certainly it doesn't completely verify the behavior of what you're testing, but it does something. In direct answer to the question at hand: I think calling what you did a "confirmation" is just a matter of semantics.
I have a script that I would like to convert to a module and call its functions from a Perl script, as I do with CPAN modules now. My question is how others would design the module; I'd like to get some idea of how to proceed, as I have never written a module before. The script as written now does this:
1) Sets up logging to a db using an in-house module
2) Establishes a connection to the db using DBI
3) Fetches a file (or files) from a remote server using Net::SFTP::Foreign
4) Processes the users in each file and adds the data to the DB
The script currently takes command line options to override defaults using Getopt::Long.
Each file is a pipe-delimited collection of user data, ~4000 lines, which goes into the db, provided the user has an entry in our LDAP directory.
So more to the point:
How should I design my module? Should everything my script currently does be moved into the module, or are there some things that are best left in the script? For example, I was thinking of designing my module so it would be called like this:
use MyModule;
my $obj = MyModule->new;    # possibly pass some arguments
$obj->setupLogging;
$obj->connectToDB;
$obj->fetchFiles;
$obj->processUsers;
That would keep the script nice and clean, but is this the best idea for the module? I was thinking of having the script retrieve the files and then just pass the paths to the module for processing.
Thanks
I think the most useful question is "What of this code is going to be usable by multiple scripts?" Surely the whole thing won't. I rarely create a Perl module until I find myself writing the same subroutine more than once.
At that stage I generally dump it into something called "Utility.pm" until that gathers enough code that patterns (not really in the Gang of Four sense) start to suggest what belongs in a module with well-defined responsibilities.
The other thing you'll want to think about is whether your module is going to represent an object class or if it's going to be a collection of library subroutines.
In your example I could see Logging and Database connection management (though I'd likely use DBI) belonging in external modules.
But putting everything in there just leaves you with a five line script that calls "MyModule::DoStuff" and that's less than useful.
Most of the time ;)
"is this the best idea for the module? I was thinking of having the
script retrieve the files and then just pass the paths to the module
for processing."
It looks like a decent design to me. I also agree that you should not hardcode paths (or URLs) in the module, but I'm a little confused by what "having the script retrieve the files and then just pass the paths to the module for processing" means: do you mean the script would write them to disk? Why do that? Instead, if it makes sense, have fetchFile($url) retrieve a single file and return a reference/handle/object suitable for submission to processUsers, so the logic is like:
my @FileUrls = ( ... );
foreach (@FileUrls) {
    my $file = $obj->fetchFile($_);
    $obj->processUsers($file);
}
Although, if the purpose of fetchFile is just to get some raw text, you might want to make that a function independent of the database class -- either something that is part of the script, or another module.
If your reason for wanting to write to disk is that you don't want an entire file loaded into memory at once, you might want to adjust everything to process chunks of a file, such that you could have one kind of object for fetching files via a socket, passing chunks as they are read to another kind of object (the one that adds them to the database). In that case:
my $db       = DBmodule->new(...);
my $netfiles = NetworkFileReader->new(...);

foreach (@FileUrls) {
    $netfiles->connect($_);
    $db->newfile();    # initialize/reset
    while ( my $chunk = $netfiles->more() ) {
        $db->process($chunk);
    }
    $db->complete();
    $netfiles->close();
}
Or incorporate the chunk reading into the db class if you think it is more appropriate and unlikely to serve a generic purpose.
This is a little convoluted, but let's try:
I'm integrating Lua scripting into my game engine, and I've done this in the past on Win32 in an elegant way. On Win32, all I did was mark all of the functions I wanted to expose to Lua as export functions. Then, to integrate them into Lua, I'd parse the PE header of the executable, unmangle the names, parse the parameters and such, then register them with my Lua runtime. This allowed me to avoid manually registering every function individually just to expose it to Lua.
Now, flash forward to today where I'm working on the iPhone. I've looked through some Unix stuff and I've gotten very close to taking a similar approach, however I'm not sure it will actually work.
I'm not entirely familiar with Unix, but here is what I have so far on iPhone:
Step 1: Query for the executable path through objective-C and get the path of my app
Step 2: Use dlopen to get a handle to my app using: `dlopen(path, RTLD_NOW)`
Step 3: Use `dlsym( libraryHandle, objectName )` to attempt to get the address of a known symbol.
The above steps won't actually get me to where I want to be, but even that doesn't work. Does anyone have any experience doing this type of thing on Unix? Are there any headers or functions I can google to put me on the right track?
Thanks;)
The iPhone does not support dynamic linking after the initial application launch. While what you want to do does not actually require linking in any new application TEXT, it would not shock me to find out that some of the dl* functions do not behave as expected.
You may be able to write some platform-specific code, but I recommend using a technique developed by the various BSDs called linker sets. Basically, you annotate the functions you want to do something with (just like you currently mark them for export). Through some preprocessor magic the annotations are stored, sometimes in an extra segment in the binary image, and code then grabs that data and enumerates it. So you simply add all the functions you want into the linker set, then walk through the linker set and register all the functions in it with Lua.
I know people have gotten this stuff up and running on Windows and Linux, and I have used it on Mac OS X and various *BSDs. The FreeBSD linker_set implementation (sys/sys/linker_set.h) is a good reference, though I have not personally seen the Windows implementation.
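To make the idea concrete, here is a minimal linker-set sketch for an ELF/GNU toolchain (the macro, section and function names are illustrative; on Mach-O, and hence the iPhone, the bookkeeping differs: the section attribute takes a "segment,section" pair and the boundaries come from getsectiondata() rather than __start/__stop symbols):

#include <stdio.h>

typedef struct {
    const char *name;
    void (*fn)(void);
} lua_export_t;

/* Annotating a function drops a {name, pointer} record into the
 * "lua_exports" section; "used" keeps the compiler from discarding it. */
#define FOR_LUA(f)                                              \
    static const lua_export_t lua_export_##f                    \
        __attribute__((used, section("lua_exports"))) = { #f, f }

static void greet(void) { puts("hello from greet"); }
FOR_LUA(greet);

/* GNU ld synthesizes these boundary symbols for the section. */
extern const lua_export_t __start_lua_exports[];
extern const lua_export_t __stop_lua_exports[];

int main(void) {
    /* Walk the linker set; a real engine would register each entry
     * with the Lua runtime instead of printing it. */
    for (const lua_export_t *e = __start_lua_exports; e < __stop_lua_exports; ++e) {
        printf("registering %s\n", e->name);
        e->fn();
    }
    return 0;
}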
You need to pass --export-dynamic to the linker (via -Wl,--export-dynamic).
Note: This is for Linux, but could be a starting point for your search.
References:
http://sourceware.org/binutils/docs/ld/Options.html
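To illustrate why the flag matters, a minimal Linux sketch in which an executable looks up one of its own symbols (build with something like gcc main.c -Wl,--export-dynamic -ldl; the function name is illustrative):

#include <dlfcn.h>
#include <stdio.h>

/* Without -Wl,--export-dynamic this global is absent from the dynamic
 * symbol table and the dlsym() lookup below fails. */
void exported_fn(void) { puts("found myself"); }

int main(void) {
    void *self = dlopen(NULL, RTLD_NOW); /* handle to the main program */
    void (*fn)(void) = (void (*)(void))dlsym(self, "exported_fn");
    if (fn) {
        fn();
    } else {
        fprintf(stderr, "dlsym: %s\n", dlerror());
    }
    return 0;
}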
If static linking is an option, integrate this step into your build instead. Before linking, run "nm" on all the object files, extract the global symbols, and generate a C file containing a (preferably sorted/hashed) mapping of all symbol names to symbol values:
struct symbol { const char *name; void *value; } symbols[] = {
    { "foo", (void *)foo },
    { "bar", (void *)bar },
    ...
    { 0, 0 }
};
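A hypothetical lookup over that generated table, appended to the same generated file (linear scan for brevity; a sorted table could use bsearch() instead):

#include <string.h>
#include <stddef.h>

/* Scan the table for a name; the {0, 0} entry terminates it. */
void *find_symbol(const char *name) {
    for (struct symbol *s = symbols; s->name; ++s) {
        if (strcmp(s->name, name) == 0)
            return s->value;
    }
    return NULL;
}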
If you want to be selective in what you expose, it might be easiest to implement a naming scheme, e.g. prefixing all functions/methods with Lua_.
Alternatively, you can create a trivial macro,
#define ForLua(X) X
and then grep the sources for ForLua, to select the symbols that you want to incorporate.
You could just generate a mapfile and use that instead, no?