In this question I saw two different answers on how to directly call functions written in C++:
Inline::CPP (and there are more, like Inline::C, Inline::Lua, etc.)
SWIG
Handwritten (as daxim said, the majority of modules are handwritten)
I just browsed nearly all questions on SO tagged [perl][swig] trying to find answers to the following questions:
What are the main differences when choosing between SWIG, Inline::CPP, and handwritten bindings?
When is the "good practice" - recommented to use Inline::CPP (or Inline:C) and when is recommented to use SWIG or Handwritten?
As I think about it, SWIG seems more universal for other uses, as asked in this question, while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
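For illustration, here's a minimal sketch of what that looks like (the function and values are arbitrary; the heredoc form is the documented Inline idiom):

    use strict;
    use warnings;

    # C++ source is compiled on first run and cached; later runs reuse the binary.
    use Inline CPP => <<'END_CPP';
    int add(int a, int b) {
        return a + b;
    }
    END_CPP

    print add(2, 3), "\n";   # prints 5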
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers, and those last 6% almost always boil down to the installation process failing to correctly detect which C++ compiler and libraries to use; of those, it usually comes down to the libraries.
Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into the same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (which will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive specifying a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
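A sketch of how those pieces fit together (the module name and version strings are hypothetical):

    # Makefile.PL -- note Inline::MakeMaker in place of ExtUtils::MakeMaker
    use Inline::MakeMaker;

    WriteMakefile(
        NAME               => 'Acme::Example',
        VERSION_FROM       => 'lib/Acme/Example.pm',
        CONFIGURE_REQUIRES => {
            'ExtUtils::MakeMaker' => '6.52',
            'Inline::CPP'         => '0',
        },
    );

    # lib/Acme/Example.pm -- the VERSION and NAME directives mentioned above
    package Acme::Example;
    our $VERSION = '0.01';
    use Inline CPP     => 'DATA',
               NAME    => 'Acme::Example',
               VERSION => '0.01';
    1;

    __DATA__
    __CPP__
    int double_it(int n) { return 2 * n; }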
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL-typed parameters and hope that Inline::CPP will know how to convert them; it deals with POD (basic data types), not STL stuff. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know what type to apply to the template at compile time. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods, not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
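For example, here's a sketch of giving Inline::CPP a concrete face over an STL container (all names are made up for illustration):

    use strict;
    use warnings;

    use Inline CPP => <<'END_CPP';
    #include <vector>

    // The template instantiation stays internal; Inline::CPP only ever sees
    // plain methods with basic (POD) parameter and return types.
    class IntStack {
      private:
        std::vector<int> v;
      public:
        IntStack() { }
        void push(int n) { v.push_back(n); }
        int  pop()       { int n = v.back(); v.pop_back(); return n; }
        int  size()      { return (int)v.size(); }
    };
    END_CPP

    my $stack = IntStack->new;   # Inline::CPP binds the class into Perl
    $stack->push($_) for 1 .. 3;
    print $stack->pop, "\n";     # 3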
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages to Inline::CPP over SWIG is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home already. With SWIG, you'll have to learn the SWIG way of doing things first, and then figure out how to apply that to Perl, and possibly, how to do it in a way that is CPAN-distributable.
Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used SWIG with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with SWIG.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and we do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof of concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer and author of InlineX::CPP2XS, has actually included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.
What would be the best way to interact with Coq from an external program? For example, let's say I want to programmatically generate programs / proofs in some language other than Coq and I just want to call Coq to typecheck them. Is there a standard way to do something like that?
You have a couple of options.
1. Construct .v files, invoke coqc, check the return code, and parse the output of coqc.
This is, in some sense, the most stable way to interact with Coq. It has the most inter-version stability. It's also the most inflexible; you create a .v file, and check it all in one go.
For an example of this method, see my Coq bug minimizer (specifically get_coq_output in diagnose_error.py), which repeatedly makes small alterations to a .v file and checks to see that the alterations don't change the error message given by coqc.
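A minimal sketch of this approach, written here in Perl to match the rest of this page (the file name and vernacular content are arbitrary):

    use strict;
    use warnings;

    # Write out a .v file, run coqc on it, and inspect exit status and output.
    open my $fh, '>', 'check_me.v' or die "can't write check_me.v: $!";
    print {$fh} "Definition id (A : Type) (x : A) : A := x.\nCheck id.\n";
    close $fh;

    my $output = `coqc check_me.v 2>&1`;   # capture stdout and stderr together
    if ($? == 0) {
        print "typechecked OK\n";
    } else {
        printf "coqc failed (exit %d):\n%s", $? >> 8, $output;
    }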
2. Use the XML protocol to communicate with coqtop
This is the method used by CoqIDE and by upcoming versions of Proof General. Logitext invokes, from Haskell, a custom patched version of coqtop speaking the PGIP protocol, which was an earlier attempt at a more standardized way of communicating with the prover (see this issue for more details).
This is becoming more stable, and gives more fine-grained control over what you want checked. For example, it allows you to check multiple proofs within a single session, which is important if you depend on a large library that takes time to load, and need to check many small proofs.
3. Write a custom OCaml toplevel wrapper for the interface to Coq that you want
The main example of this that I'm aware of is PIDEtop, which is used in the Coqoon Eclipse plugin. I suspect that some of the other entries in the GUI section of Related Tools use this method.
Note that coqtop is itself a toplevel wrapper in this style; the files in the toplevel/ folder of the Coq project are likely to be informative.
This gives you the most flexibility and reusability, at the cost of having to design your own protocol, or implement an existing protocol.
4. Write your external program in OCaml and link with Coq
Much like (3), this method gives you as much flexibility as you want. In fact, the only difference between this and (3) is that in (3), you separate out the communication with Coq into its own binary, whereas here, you fuse communication with Coq with the other functionality of your program. I'm not aware of programs in this style, though I believe coqchk may qualify, as I think it shares a couple of files with the Coq kernel (see the checker/ folder in the Coq codebase).
Regardless of which way you choose, I think that modeling off of existing projects will be more fruitful than relying on the (as-yet incomplete) documentation on the various APIs and protocols. The API has been undergoing a lot of revision recently, in an attempt to get it into a reasonable and stable state, and the XML protocol has also seen recent improvements; @ejgallego has been the driving force behind many of these improvements.
I have been on a "cleaning spree" lately at work, doing a lot of touch-up stuff that should have been done awhile ago. One thing I have been doing is deleted modules that were imported into files and never used, or they were used at one point but not anymore. To do this I have just been deleting an import and running the program's test file. Which gets really, really tedious.
Is there any programmatic way of doing this? Short of me writing a program myself to do it.
Short answer: you can't.
Longer, possibly more useful answer: you won't find a general-purpose tool that will tell you with 100% certainty whether the module you're purging is actually used. But you may be able to build a special-purpose tool to help with the manual search you're currently doing on your codebase. For example, try a wrapper around your test suite that removes the use statements for you and ignores all error messages except those like Undefined subroutine &__PACKAGE__::foo that occur when a missing feature of a module is accessed. The wrapper could then perform a dumb source scan on the codebase of the module being purged to see whether the missing subroutine foo (or other feature) might be defined in the unwanted module.
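A rough sketch of that wrapper idea (the file path, module name, and prove invocation are all assumptions about your setup):

    use strict;
    use warnings;
    use File::Copy qw(copy move);

    my ($file, $module) = ('lib/MyApp.pm', 'Some::Unused::Module');  # hypothetical

    # Comment out the module's 'use' line, run the tests, then restore the file.
    copy($file, "$file.bak") or die "backup failed: $!";
    my $source = do { local (@ARGV, $/) = ($file); <> };   # slurp
    $source =~ s/^(\s*use\s+\Q$module\E\b[^;]*;)/# purged: $1/mg;
    open my $out, '>', $file or die $!;
    print {$out} $source;
    close $out;

    my $test_output = `prove -l t/ 2>&1`;
    if ($test_output =~ /Undefined subroutine &(\S+)/) {
        print "something still wants $1 -- scan $module for it\n";
    } else {
        print "no missing-subroutine errors without $module\n";
    }

    move("$file.bak", $file) or die "restore failed: $!";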
You can supplement this with Devel::Cover to determine which parts of your code don't have tests so you can manually inspect those areas and maybe get insight into whether they are using code from the module you're trying to purge.
Due to the halting problem you can't statically determine whether any program, of sufficient complexity, will exit or not. This applies to your problem because the "last" instruction of your program might be the one that uses the module you're purging. And since it is impossible to determine what the last instruction is, or whether it will ever be executed, it is impossible to statically determine if that module will be used. Further, in a dynamic language, which can extend the program during its run, analysis of the source or even of the post-compile symbol tables would only tell you what was calling the unwanted module just before run time (whatever that means).
Because of this you won't find a general purpose tool that works for all programs. However, if you are positive that your code doesn't use certain run-time features of Perl you might be able to write a tool suited to your program that can determine if code from the module you're purging will actually be executed.
You might create alternative versions of the modules in question, containing only an AUTOLOAD method (and an import method). Make the AUTOLOAD method croak when called. Put this module first into the include path.
You might refine this method by making AUTOLOAD only log the usage, then load the real module and forward the original function call. You could also put a hook first in @INC which creates the fake module on the fly if necessary.
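A minimal sketch of the basic decoy (substitute the package you're purging; the DESTROY guard keeps object teardown from triggering it):

    # Save as Some/Unused/Module.pm in a directory that comes first in @INC.
    package Some::Unused::Module;
    use strict;
    use warnings;
    use Carp ();

    sub import { }   # swallow 'use Some::Unused::Module ...' without complaint

    our $AUTOLOAD;
    sub AUTOLOAD {
        return if $AUTOLOAD =~ /::DESTROY$/;
        Carp::croak("purged module still in use: $AUTOLOAD");
    }

    1;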
Of course you need good test coverage to detect even rare uses.
This concept is definitely not perfect, but it might work with lots of modules and simplify the testing.
I have been delivering training on Programming Practices and on Writing Quality Code to participants who have been working in Java for some time. Object-Oriented Analysis and Design is the base, and I cover the S.O.L.I.D. principles and excerpts from books like Clean Code, Code Complete 2, and so on.
I am scheduled to deliver training to Perl programmers (with less than one year of experience in Perl) in two days, and they do not use Moose (an extension of the Perl 5 object system which brings modern object-oriented language features).
I am now confused about how to structure my training, as they don't follow OOP.
Any suggestions?
Even without Moose, object-oriented programming in Perl is quite possible, and very common. Many CPAN modules offer their functionality through an object-oriented API, even if many of these also offer a non–object-oriented API. (A good example of this duality is IO::Compress::Zip.) Obviously the norms of object-oriented design in Perl are somewhat different from those in some languages — encapsulation is not enforced by the language, for example — but the overall principles and practices are the same.
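For reference, here is a minimal sketch of that classic (pre-Moose) Perl 5 OO style, with a made-up class:

    package Counter;
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        my $self = { count => $args{start} || 0 };  # encapsulation by convention
        return bless $self, $class;
    }

    sub increment { my $self = shift; return ++$self->{count} }
    sub count     { my $self = shift; return $self->{count}   }

    package main;
    my $c = Counter->new(start => 5);
    $c->increment;
    print $c->count, "\n";   # 6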
And even without any sort of object-oriented programming, Moosish or otherwise, there's plenty to talk about in terms of laying out packages, organizing code into functions/subroutines/modules, structuring data, taking advantage of use warnings (or -w) and use strict and -T and CPAN modules, and so on.
I'd also recommend Mark Jason Dominus's book Higher-Order Perl, which he has made available for free download. I don't know to what extent you can race through the whole book in a day and put together something useful in time for your presentation — functional programming is a bit of a paradigm-shift for someone who's not used to it (be it you, or the programmers you're presenting to!) — but you may find some useful things in there that you can use.
A lot of the answers here are about teaching OOP to Perl programmers who don't use it, but your question sounds like you're stymied on how to teach a course on code quality given that your Perl programmers do not use OOP, not that you specifically want to force non-OO programmers into that paradigm.
That leaves us with two other paradigms of programming which Perl supports well enough:
Good ol' fashioned structured programming, along with modular programming
Functional programming support in Perl (also Higher-Order Perl)
I use both of these, combined with a healthy dose of objects as well. I use objects for the same reason that I use good structure, modules, and functional pipelines: they are tools that bring order and sanity to the programming process. For example, object-oriented programming is the main vehicle for polymorphism, but OOP is not polymorphism itself. Thus if you are writing idioms that assist in polymorphism, they assist in polymorphism; they don't have to be stuck in some ad-hoc library "class" and called like UtilClass->meta_operator( $object ), which has little polymorphism itself.
Moose is a great object language, but you don't call Moose->has( attribute => is => 'rw', isa => 'object' ); you call the operator has. The power of Moose lies in a library of objects that encapsulate the meta-operations on classes, but also in simple, expressive operators that the rather open syntax of Perl allows. I would call that an appreciation of solving, with objects, the problems that OOP solves.
Also, I guess I have a problem with your problem, because "not OOP" is a big field. It can range from everything-in-the-mainline coding to not-strictly-OOP (where the process of programming is not simply OOP analysis). So I think you have to know your audience and know what it is they use to keep that code structured and sane. I can't imagine a modern Perl audience that isn't at least object-users.
From there, Perl Best Practices (often abbreviated PBP) can help you. But so would learning that
simply because OOP is one of the best supports for polymorphism, it isn't polymorphism in itself
simply because OOP is one of the best supports for encapsulation, it isn't encapsulation in itself
OOP has been assisted by structured and modular programming, and is not by itself those things; some of its power is simply those disciplines
In addition, as big an object author and consumer as I am, OOP is not the way I think. Reusability is the way I think: what have I done before that I do not want to write again? What have I written that is similar? How can I make my current task just an adapter of what has been written before? (And often: how can I sneak my behavior branch into an established module in a single line?)
As a result, a number of my constructs would fail the pedestrian goal of OOP. To give you a better view, I divide code into two "domains": highly abstract and polymorphic library code, and the scripting that I need to do to deliver the particular functionality required in the current project. (This is essentially what "application" means, but I don't think it would be as clear.) As a result, polymorphism is mainly instrumental in providing adaptability, but the adaptation itself is whatever takes the fewest lines of code. My optimal system would be a library that allows scripting/adaptation at any juncture between library behavior and a set of configurations or scripts that address a particular problem. Again, if I had my druthers, configuration would be injected from the scripted domain, and no library code would say "I need a properties file" by itself, unless it were a library module encapsulating the algorithm of configuration instanced in properties files. It would just know that it needs "policies" (or decisions from the application domain) in order to fulfil its function.
Thus, my ideal application contains special-purpose "objects" which conform to "roles" but where classes are useless overhead, except insofar as the classes perform the behavior which allows injectable data and behavior. So some of my Perl "objects" violate OOP analysis, because they are simply encapsulations of one-off solutions, kind of like push-pin (expando) JavaScript objects.
I will often (later) revise a special-purpose object and push it further back into the library domain as I find that I need to write something like it again. All objects in the library domain sit somewhere on a spectrum of specified behavior. Also, I arrange "data networks" where there is a Sourced type of class that simply encapsulates the behavior of accessing data either in the object itself or in another source object. This speeds up my solutions immensely, but I've never seen it addressed in any duck-cat-dog-car-truck OOP primer. Also, templating (especially when combined with "data networks") is immensely useful in coding solutions in a half-dozen lines or a half-day of work.
So I guess I'm saying: to the extent that you only know OOP as a way of structuring programs, you won't be able to appreciate how much some older, sound practices or other paradigms do for you, or how things that qualify as OOP can promote mediocre adaptability. (Besides, components are far more current than "objects".) Encapsulation solves many problems, but it also promotes the lack of data where you need it. The idea is to get data where you need it, so that your canned behavior can realize the specifics of the problem and operate on them.
Reread some stuff on structured programming
Read some stuff on functional programming (assuming that you're not already familiar with it).
Also, it's possible that even an established, "productive" Perl team is writing... crap. If they are not OOP programmers because they are simply writing crap code, then by all means teach them OOP, and if they lack even structured programming, *shove both of them down their throats* (I have a hard time considering the label "professional" here).
Take a good look at 'Perl Best Practices' by Damian Conway. It has lots of solid material in it, and you won't go far wrong taking his advice.
Be aware, though, that Getopt::Clade is only available as a placeholder package - it is vapourware, in other words.
You might want to look at what's covered in the "Modern Perl" book too:
http://onyxneon.com/books/modern_perl/
As the others say - plenty to cover without Moose.
Setting up modules/distros
Testing and TAP
Deployment with cpanm / cpan / local::lib
Important changes: 5.8 vs 5.10 vs 5.12 vs 5.14, autodie, etc.
Perl programmers must know about Perl's weakly functional features, like list contexts, map, grep, etc. A little functional style makes Perl infinitely more readable.
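For instance, a small illustration of that list-oriented style (the data is made up):

    use strict;
    use warnings;

    my @lines = ('alpha 3', 'beta 1', 'gamma 2');

    # grep/map/sort pipeline: keep lines containing a digit,
    # order them by the number, and keep just the word.
    my @words = map  { $_->[0] }
                sort { $a->[1] <=> $b->[1] }
                map  { [ split ' ' ] }
                grep { /\d/ } @lines;

    print "@words\n";   # beta gamma alpha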
Perl programmers must also understand Perl's traditional OO features, especially modules, bless, and tie. Make them write an object or maybe tie a Cache::Memcached object around a query or something.
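And a minimal sketch of tie, here logging every read through a tied scalar (the class is made up):

    package Logged;
    use strict;
    use warnings;

    sub TIESCALAR { my ($class, $val) = @_; return bless \$val, $class }
    sub FETCH     { my $self = shift; print STDERR "read: $$self\n"; return $$self }
    sub STORE     { my ($self, $val) = @_; $$self = $val; return $val }

    package main;
    tie my $x, 'Logged', 42;
    my $y = $x;   # prints "read: 42" to STDERR
    $x = 7;       # goes through STORE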
I am taking a compilers course at college, and we must generate code for our invented language for any platform we want. I think the simplest cases are generating code for the Java JVM or the .NET CLR. Any suggestion on which one to choose, and which APIs out there can help me with this task? I already have all the semantic analysis done; I just need to generate code for a given program.
From what I know, at a higher level the two VMs are actually quite similar: both are classic stack-based machines with largely high-level operations (e.g. virtual method dispatch is an opcode). That said, the CLR lets you get down to the metal if you want, as it has raw data pointers with arithmetic, raw function pointers, unions, etc. It also has proper tail calls. So, if the implementation of the language needs any of the above (e.g. the Scheme spec mandates tail calls), or if it is significantly advantaged by having those features, then you would probably want to go the CLR way.
The other advantage there is that you get a stock API to emit bytecode, System.Reflection.Emit; even though it is somewhat limited for full-fledged compiler scenarios, it is still generally enough for a simple compiler.
With the JVM, the two main advantages you get are better portability, and the fact that the bytecode itself is arguably simpler (because it has fewer features).
Another option I came across is a library called RunSharp, which can generate MSIL code at runtime using Emit, but in a nicer, more user-friendly way that reads more like C#. The latest version of the library can be found here:
http://code.google.com/p/runsharp/
In .NET you can use the System.Reflection.Emit namespace to generate MSIL code.
See the MSDN documentation: http://msdn.microsoft.com/en-us/library/3y322t50.aspx
I'm investigating using DbC in our Perl projects, and I'm trying to find the best way to verify contracts in the source (e.g. checking pre/post conditions, invariants, etc.)
Class::Contract was written by Damian Conway and is now maintained by C. Garret Goebel, but it looks like it hasn't been touched in over 8 years.
It looks like what I want to use is Moose, as it seems as though it might offer functionality that could be used for DbC, but I was wondering if anyone had any resources (articles, etc.) on how to go about this, or if there are any helpful modules out there that I haven't been able to find.
Is anyone doing DbC with Perl? Should I just "jump in" to Moose and see what I can get it to do for me?
Moose gives you a lot of the tools (if not all the sugar) to do DbC. Specifically, you can use the before, after, and around method hooks (here are some examples) to perform whatever assertions you might want to make on arguments and return values.
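A minimal sketch of that approach (the class and its conditions are made up):

    package BankAccount;
    use Moose;
    use Carp qw(confess);

    has balance => (is => 'rw', isa => 'Num', default => 0);

    sub withdraw {
        my ($self, $amount) = @_;
        $self->balance($self->balance - $amount);
    }

    # precondition: never withdraw more than the balance
    before withdraw => sub {
        my ($self, $amount) = @_;
        confess 'precondition failed: amount exceeds balance'
            if $amount > $self->balance;
    };

    # postcondition (a crude invariant): balance stays non-negative
    after withdraw => sub {
        my ($self) = @_;
        confess 'invariant failed: negative balance' if $self->balance < 0;
    };

    package main;
    my $acct = BankAccount->new(balance => 100);
    $acct->withdraw(30);
    print $acct->balance, "\n";   # 70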
As an alternative to "roll your own DbC" you could use a module like MooseX::Method::Signatures or MooseX::Method to take care of validating parameters passed to a subroutine. These modules don't handle the "post" or "invariant" validations that DbC typically provides, however.
EDIT: Motivated by this question, I've hacked together MooseX::Contract and uploaded it to the CPAN. I'd be curious to get feedback on the API as I've never really used DbC first-hand.
Moose is an excellent OO system for Perl, and I heartily recommend it to anyone coding objects in Perl. You can specify "subtypes" for your class members that will be enforced when set by accessors or constructors (the same system can be used with the Moose::Methods package for functions). If you are coding more than one-liners, use Moose;
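A short sketch of such a subtype constraint (names are illustrative):

    package Point;
    use Moose;
    use Moose::Util::TypeConstraints;

    subtype 'NonNegativeNum',
        as 'Num',
        where { $_ >= 0 };

    has x => (is => 'rw', isa => 'NonNegativeNum', required => 1);

    package main;
    my $p = Point->new(x => 3);
    $p->x(4);    # fine
    $p->x(-1);   # dies: type constraint rejected the value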
As for doing DbC, well, it might not be the best fit for Perl 5. It's going to be hard in a language that offers you very few guarantees. Personally, in a lot of dynamic languages, but especially Perl, I tend to make DRY and test-driven development my guiding philosophy.
I would also recommend using Moose.
However as an "alternative" take a look at Sub::Contract.
To quote the author:
Sub::Contract offers a pragmatic way to implement parts of the programming by contract paradigm in Perl.
Sub::Contract is not a design-by-contract framework.
Sub::Contract aims at making it very easy to constrain subroutines input arguments and return values in order to emulate strong typing at runtime.
If you don't need class invariants, I've found the following Perl Hacks book recommendation to be a good solution for some programs. See Smart::Comments.
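A small sketch of how that reads, assuming the assertion-style smart comments described in the module's documentation:

    use strict;
    use warnings;
    use Smart::Comments;

    sub divide {
        my ($num, $den) = @_;
        ### require: $den != 0
        my $q = $num / $den;
        ### ensure: defined $q
        return $q;
    }

    print divide(10, 2), "\n";   # 5; divide(1, 0) would die on the requirement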