I'm using Frontier::Daemon to build a test library server for the Robot Framework test automation framework. I got the test library server working when executing code locally, but I run into problems when it executes over XML-RPC. Part of the issue might be that I'm using Perl reflection to execute test commands.
RPC::XML might be a better fit, but at the time I was developing the server, Frontier::Daemon seemed easier to start with.
The Perl reflection code was borrowed from threads posted on this site as well as Wikipedia's page on code reflection (Perl section).
The code is hosted at Google Code; you can browse it or check it out for review. The issue is described in more detail on the project site.
I was hoping the Perl developer community could give me some pointers on the source of the problem and how to fix it.
Thanks,
Dave
There are a couple of things you are missing. First, Frontier::Daemon calls the "methods" you provide it as plain subroutine calls, but your two methods expect to be invoked as methods on your remote server object. Change your code to this:
my $svr = Frontier::Daemon->new(
    methods => {
        get_keyword_names => sub { $self->get_keyword_names(@_) },
        run_keyword       => sub { $self->run_keyword(@_) },
    },
    ...
to call your methods as they seem to expect.
Second, your get_keyword_names tries to return an array, but the interface you are using seems to allow only a single return value and calls the methods in scalar context, so get_keyword_names ends up returning the count of elements in the array. You probably want to return a reference to the array instead:
return \@methods;
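For context, here is a minimal sketch of what a get_keyword_names built on symbol-table reflection might look like; the class layout is an assumption, since the actual code isn't shown here:

sub get_keyword_names {
    my ($self) = @_;
    my $class = ref $self;
    no strict 'refs';
    # hypothetical: treat every sub defined in the class as a keyword
    my @methods = grep { defined &{"${class}::$_"} }
                  keys %{"${class}::"};
    return \@methods;    # one scalar (a reference), not a list
}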
I would like to make an application similar to CodeFights, in that a user is given a set of exercises for which he has to code solutions. I figured out how to take a user's JavaScript code and run it on a Node.js server using, for example, the Function constructor:
try {
    var solver = new Function('arg1', 'arg2', bodyAsString);
} catch(e) {
    console.log('Function cannot be parsed');
}
The good thing about this approach is that the solver(..., ...) function has access only to the global scope, not the scope it was created in. Even better, one could combine this with a sandbox module.
But what is the best approach if I want to support multiple languages - say JavaScript, Python, and C++? How do I go about this if I want to give the user a choice of language? Do I write his solution to a file and execute it via the command line, also in some kind of sandbox mode? Can this even be done with Node.js as the backend?
I feel a bit ashamed to ask this question, but I am curious. I recently wrote a script (without organizing the code into modules) that reads a store's log files and saves the info to the database.
For example, I wrote something like this (via Richard Huxton):
while (<$infile>) {
    if (/item_id:(\d+)\s*,\s*sold/) {
        my $item_id = $1;
        $item_id_sold_times{$item_id}++;
    }
}
my @matched_item_ids = keys %item_id_sold_times;
my $owner_ids =
    Store::Model::Map::ItemOwnerMap->fetch_by_keys( \@matched_item_ids )
        ->entry();
for my $owner_id (@$owner_ids) {
    $item_id_owner_map{$owner_id}++;
}
Say this script is called script.pl. When testing it, I made a file script.t and had to repeat some blocks of script.pl in it. After copy-pasting the relevant code sections, I do confirmations like:
is( $item_id_sold_times{1}, 1, "Number of sold items of item 1" );
is( $item_id_owner_map{3}, 8, "Number of sold items for owner 3" );
And so on and so on.
But some have pointed out that what I wrote is not a test; it is a confirmation script. A good test would involve organizing the code into modules, writing a script that exercises the methods in those modules, and writing a test for each module.
This has made me wonder which definition of a test is most widely used in software engineering. Perhaps some of you who have even tested Perl core functions can give me a hand. Can a script (not organized into modules) not be properly tested?
Regards
An important distinction is: if someone were to edit (bugfix or new feature) your script.pl file, would running your script.t file in its current state give any useful information about the state of the new script.pl? For that to be the case, the test must include or use the .pl file itself instead of selectively copy-pasting excerpts from it.
In the best case you design your software modularly and plan for testing. If you want to test a monolithic script after the fact, I suppose you could write a test script that includes your main program after an exit() and at least have the subroutines available to test...
You can test a program just like a module if you set it up as a modulino. I've written about this all over the place, but most notably in Mastering Perl. You can read Chapter 18 online for free right now, since I'm working on the second edition in public. :)
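The heart of the modulino pattern is small enough to sketch here; the sub names are placeholders, not from the original script:

#!/usr/bin/perl
# script.pl as a modulino: behaves as a program when run directly,
# but only defines its subs when loaded with require.
use strict;
use warnings;

main() unless caller;    # caller() is false when run from the command line

sub main {
    # ... the former top-level code, now callable and testable ...
}

sub parse_line {
    # ... a piece of logic worth testing on its own ...
}

1;    # true value so require succeeds

A script.t can then do require './script.pl'; and call parse_line() directly with Test::More assertions.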
Without getting into verification versus validation, etc., a test is any activity that verifies the behavior of something else: that the behaviors match requirements and that certain undesirable behaviors don't occur. The scope of a test can be limited (verifying only a few of the desired behaviors) or comprehensive (verifying most or all of the required or desired behaviors), probably with lots of steps or components to make it so. In my mind, the term "confirmation" in your context is another way of saying "limited-scope test". It certainly doesn't completely verify the behavior of what you're testing, but it does something. In direct answer to the question at hand: I think calling what you did a "confirmation" rather than a "test" is just a matter of semantics.
I'm getting ready to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem is that I'm not sure our Makefile.PL has been kept up to date as this application has passed through a few different developers.
Is there any way to automatically parse my source, or to analyze a few full runs of my program, to determine exactly which versions of which modules my application depends on? On top of that, is there any way to filter the result by CPAN distribution, so that I depend only on Moose instead of every single module that ships with Moose?
A third, related question: if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start shipping entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of the modules an app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the program with ALL the possible command-line values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Printing %INC at every possible exit point in the code, as slu's answer said. This should be done in an END {} block as well as in a __DIE__ handler to cover all exit points, and even then it may not be fully comprehensive in the general case, e.g. if your __DIE__ handler gets overwritten somewhere within the program. (A minimal END-block sketch follows this list.)
Devel::Modlist (also mentioned in slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate results across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow runtime analysis based on arbitrary command-line arguments (at first glance it only lets you execute the program with no arguments); if that's true, it is inferior to the three methods above for any code that may load modules dynamically.
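Here is that END-block sketch; the path-to-name conversion and the VERSION call are stock Perl idioms, not specific to any one app:

# Dump every loaded module, its version, and its source file at exit.
END {
    for my $file (sort keys %INC) {
        (my $mod = $file) =~ s{/}{::}g;    # Foo/Bar.pm -> Foo::Bar.pm
        $mod =~ s{\.pm\z}{};               # Foo::Bar.pm -> Foo::Bar
        my $version = eval { $mod->VERSION } || 'unknown';
        print STDERR "$mod $version ($INC{$file})\n";
    }
}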
Module::ScanDeps - Recursively scan Perl code for dependencies
It does both static and runtime scanning, but just modules - I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC - see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
I would definitely use Devel::TraceUse, which also shows a tree of the modules, making it easy to see where they are being loaded.
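For reference, Devel::TraceUse hooks in through the debugger interface, so a typical invocation looks like:

perl -d:TraceUse ./foo.pl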
I've been assigned to pick up a web application written in some old legacy Perl code, get it working on our server, and later extend it. The code was written 10 years ago by a solitary self-taught developer...
The code has weird stuff going on - they are not afraid to do "lib-param.pl" on line one, and later in the file do "/lib-pl/lib-param.pl" - which is of course a different file.
Including a.pl with subs b() and c(), and later including d.pl with subs c() and e(), seems to be quite popular too... Packages appear to be unknown, so you'll just find &c() somewhere in the code later.
Interesting questions:
Is there a tool that can draw the relations between Perl files - show a list of which files are used by each other file?
The same for MySQL databases and tables: can it show which schemas/tables are used by which files?
Is there an IDE that knows which c() is called - the one in a.pl or the one in d.pl?
How would you start trying to understand the code?
I'm inclined to go through each file and refactor it, but am not allowed to do that - only the strict minimum to get the code working. (But since the code never uses strict, I don't know if I'm gonna...)
Not using strict is a mistake - don't continue it. Move the stuff in d.pl to D.pm (or perhaps a better name altogether), and if the code is procedural, use Sub::Exporter to get those subs back into the calling package. strict is lexical, so you can turn it on for just that one new package D. To find out which code is being called, use Devel::SimpleTrace:
perl -MDevel::SimpleTrace ./foo.pl
Now any warnings will be accompanied by a full stack trace - sprinkle warnings around the code and run it.
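A minimal sketch of the suggested d.pl-to-D.pm move (the sub names are taken from the question; the Sub::Exporter setup shown is the module's standard -setup form):

package D;
use strict;
use warnings;
use Sub::Exporter -setup => { exports => [qw(c e)] };

sub c {
    # ... body moved verbatim from d.pl ...
}

sub e {
    # ... body moved verbatim from d.pl ...
}

1;

Callers then replace their include of d.pl with use D qw(c e); and keep calling c() and e() unchanged.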
I think the MySQL question should be removed from this one. Schema/table mappings have nothing to do with Perl; it seems an out-of-place distraction in this question.
I would write a utility to build a complete list of all subs and which file they live in; then I would write a utility to give me a list of all function calls and which files they come from.
By the way, it is not terribly hard to write a fairly mindless static analysis tool to generate a call graph.
For many cases, in well-written code, that will be enough to help me out...
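For instance, here is a deliberately mindless scanner along those lines - it only looks for sub declarations and &-style calls, so it will miss string evals, method calls, and much more:

#!/usr/bin/perl
# Usage: perl callgraph.pl *.pl
use strict;
use warnings;

my (%defined_in, %called_from);

for my $file (@ARGV) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $line = <$fh>) {
        # "sub foo" marks a definition
        while ($line =~ /\bsub\s+(\w+)/g) {
            $defined_in{$1} = $file;
        }
        # "&foo" marks an old-style call
        while ($line =~ /&(\w+)/g) {
            $called_from{$1}{$file}++;
        }
    }
}

for my $sub (sort keys %called_from) {
    my $home = $defined_in{$sub} || '(not found)';
    print "$sub defined in $home, called from: ",
        join(', ', sort keys %{ $called_from{$sub} }), "\n";
}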
In my project I'm currently preparing a step-by-step move from legacy code to new, properly designed and tested modules. Since not every fellow programmer follows closely what I do, I would like to emit warnings when old code is used. I would also strongly prefer to be able to output recommendations on how to port the old code.
I've found two ways of doing it:
Attribute::Deprecated, which is fine for functions but rather cumbersome if a complete module is deprecated. Also, it gives no additional information beyond the warnings.
Perl::Critic::Policy::Modules::ProhibitEvilModules for modules, or maybe a custom Perl::Critic rule for finer-grained deprecation at the function or method level. This method is fine, but it's not immediately obvious from the code itself that it's deprecated.
Any other suggestions or tricks for doing this properly and easily?
For methods and functions, you can just replace the body of the function with a warning and a call to the preferred function.
The perllexwarn documentation gives the following example:
package MyMod::Abc;

sub open {
    warnings::warnif("deprecated",
        "open is deprecated, use new instead");
    new(@_);
}

sub new {
    # ...
}

1;
If you are deprecating a whole module, put the warning in a BEGIN block in the module.
You can also put the warning in the import method (e.g. Win32::GUI::import); it all depends on exactly what you want to do.
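A minimal sketch of both module-level variants (the module names here are hypothetical):

package MyMod::Old;
use strict;
use warnings;

# Variant 1: fires once, as soon as the module is compiled.
BEGIN {
    warnings::warnif("deprecated",
        "MyMod::Old is deprecated, use MyMod::New instead");
}

# Variant 2: fires on every use()/import, closer to each caller.
sub import {
    warnings::warnif("deprecated",
        "MyMod::Old is deprecated, use MyMod::New instead");
}

1;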