What are codepad.org's Perl runner limitations?

Sometimes I see people use http://codepad.org as a way to quickly run/test their Perl snippets (it supports doing that with a wide variety of languages, from C to Scheme to Perl).
It's pretty obvious that there must be some limitations as to what code/features can be tested with codepad - does anyone know what those limitations are for the Perl runner?
I'll get the ball rolling with my own observation: not every CPAN module is available :(

Mostly based on their "about" page:
codepad only supports Perl 5.8.0
Presumably, like any Perl install, not every module (CPAN or otherwise) is present.
As a specific example, List::MoreUtils is missing.
As a sub-limitation, they seem to run on Linux. So any Windows-specific modules would certainly be out.
It's in a chroot jail with system call restrictions. Among other things this seems to prevent file creation (my snippets creating files in the current directory or /tmp both errored out, as did File::Temp calls).
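For illustration, even a minimal, otherwise-correct snippet like this one (File::Temp is a core module) fails there, because the jail blocks the underlying file-creation calls:
use strict;
use warnings;
use File::Temp qw(tempfile);
# On codepad this errors out: the chroot jail denies file creation.
my ($fh, $filename) = tempfile();
print $fh "hello\n";
print "created $filename\n";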
codepad code is executed on a virtual machine. Behind firewalls. And buried in a bunker. So certain functionality is probably disabled - especially anything networking/internet related. The exact "about" quote is:
The supervisor processes run on virtual machines, which are firewalled such that they are incapable of making outgoing connections.
The machines that run the virtual machines are also heavily firewalled, and restored from their source images periodically.

It's easier to just run Perl code locally. It's easy to install multiple versions of Perl and to track separate module repositories. It's also not hard to run just about any operating system you want in a virtual machine. Why you'd need anyone else's service to do what you can do better yourself is beyond me.

Related

How to use CPAN modules without access to CPAN?

I am greatly disappointed every time I am forced to retrieve a module from CPAN. In most environments I work in, internet access is severely restricted or completely denied. All compilers have been removed during the OS hardening process. And all digital storage media is scanned by a security team before entering or exiting the site. Mind you, I understand security, and all of this is OK with me, but...
What is the recommended or best practice for accessing code that only CPAN modules provide?
If I only need a snippet, a function, or a single piece of functionality a module provides, how can I extract "just what I need/want" without installing the entire module? Keep in mind that I may literally be printing out, writing down, and typing in code to transfer it from an off-site location with internet access.
When you can't access CPAN, how can you still use it? You can either:
when you have internet access, you can install modules
if you haven't, you can bring a minicpan on a USB stick or the like
if this is not possible because of security policy, you can only use e.g. your mobile phone to view the source on CPAN and retype the needed parts manually
if neither is possible, you can print the source code on your t-shirt at home and retype it at work :)
or you simply must program everything yourself, by learning at home
or find a better job :)
EDIT: More serious approach
First, it is strange to have a company that develops with Perl but doesn't allow the use of CPAN. Of course, I understand that direct access to tons of unscanned software is not acceptable for many companies, but in that case there should exist some "company policies" for how to allow access.
Here are the key questions:
is the hardening at the technical level only (firewalls etc.), so you can bring e.g. a USB stick, CD, or any other medium inside, or
is it at the policy level, where the policy does not allow using any external source?
If it is at the policy level, IMHO you're out of luck. When using any external source is simply NOT ALLOWED, you can really only use the "print & retype" method.
Here are some possibilities:
establish a company-wide local CPAN repository
Create a local CPAN (minicpan) server with "trusted" modules. This repository can function as a repository for locally developed modules too. In this case there must exist some "auditing authority" (policies & procedures) for how modules get into the local repo. IMHO, this can be the most useful way when the company uses Perl on a regular basis.
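A minimal sketch of building such a mirror with CPAN::Mini's minicpan tool (the local path is an assumption; point -r at whichever upstream mirror your policy permits):
$ minicpan -l /srv/local-cpan -r http://www.cpan.org/
Clients can then be pointed at the mirror (e.g. via cpan's urllist option) instead of the public internet.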
Of course, mixing system-wide (default) Perl modules with CPAN modules is not the best idea. Therefore it is possible to set up:
local::lib or build-prefix
local::lib - create and use a local lib/ for Perl modules with PERL5LIB. Google for perl "local::lib" or something similar. Also read some other SO questions:
How can I install a CPAN module into a local directory?
How can I use a new Perl module without install permissions?
How can I install CPAN modules locally without root access (DynaLoader.pm line 229 error)?
and others
Using local::lib is a nice solution because it doesn't break system-wide Perl modules. Of course, again, you will need some "auditing process" for how modules get in.
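A minimal sketch of the documented local::lib workflow (the module name is just an example; assumes local::lib is already bootstrapped into ~/perl5 and a bourne-style shell):
$ eval $(perl -Mlocal::lib)
$ perl -MCPAN -Mlocal::lib -e 'CPAN::install("List::MoreUtils")'
The eval line exports PERL5LIB and the related environment variables so that subsequent perl invocations pick up the ~/perl5 area.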
perlbrew
Using your own built perl via perlbrew is a more general solution if the system doesn't have perl installed. You don't need root access to build your own perl. Of course, there are still some problems here (besides the auditing), e.g. the "missing compiler" problem.
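For reference, the basic perlbrew workflow looks like this (the version number is just an example):
$ perlbrew install perl-5.14.2
$ perlbrew switch perl-5.14.2
$ perlbrew list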
virtual machine
You can try setting up a virtual machine (or an isolated physical machine) for development with full CPAN access and develop there. When you finish the development, you can forward your work with all required modules to the "auditing process".
other
If you only need to extract a function or part of a module from CPAN, do it on an external machine. Extracting a function or some part is not a technical problem (when you know Perl); it is more a license problem: when using part of a module in your work, you need to credit the author.
For this you need to fetch all needed functions and their dependencies. Google for "perl functions dependencies" or something similar, or see:
How can I determine CPAN dependencies before I deploy a Perl project?
http://metacpan.org/pod/Module::Extract::Use
http://metacpan.org/pod/Module::ScanDeps
Perl: CPAN - Module modifying and adding functionality
Maybe you will find this discussion interesting too...
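As a concrete starting point, Module::ScanDeps ships a scandeps.pl script that reports what a given script pulls in (the filename is just an example):
$ scandeps.pl my_snippet.pl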

How should I improve my perl application deployment process?

I develop and maintain a bioinformatics application suite of 50+ scripts and its deployment process is a mess:
Entire suite is in one big git repository. It has lots of CPAN dependencies, and dozens of internal modules as well.
Development platform is Linux.
Deployment platforms are Windows (20+ users), Mac (10+), Linux (2-3). Most are not 'power users'.
For Windows, I have one installer (made with NSIS) for Strawberry Perl + required modules (i.e., I installed Strawberry on a Windows box, installed all the modules, and zipped C:\strawberry), and another installer for the suite - I did this because the suite is updated a lot more often than the list of required modules.
For Mac, I bundle Perl 5.14, all required CPAN modules, and the application suite into a double-clickable installer. I don't use the system perl because it tends to be out of date. I bundle everything together, unlike on Windows, because I suck at Mac.
For Linux, I handle their installations manually since there are only a few of them, and they use different distros.
This is obviously a mess that grew organically over several generations of developers. Ideally I would like to create cpan-installable distributions out of the internal libraries and various groups of related scripts, and use module dependencies to let cpan install them for me.
But I'm not sure what the best approach for this is, because I'd still need to distribute perl itself, would have to write some sort of non-command-line interface to CPAN, control the exact versions of third-party CPAN modules, point it by default to my "DarkPAN" where I would store our modules, work out how I would push updates, etc.
I don't think I can use PerlApp or PAR since AFAIK those are for bundling single scripts, not an entire suite of them.
Any advice highly appreciated.
Besides the 3 platforms mentioned (more, if you count the Linux variants), you really have a couple of different problems:
Deployment of a standard known-good Perl executable and libraries (CPAN modules).
Deployment of your Perl scripts and modules.
Once upon a time, I supported a large Solaris Perl installation. I tried for a while to stand up a Linux Perl installation "side-by-side", re-using the same CPAN modules. It didn't work. The big problem for me is that a fair number of Perl modules require compilation, which means they target a specific platform. I ended up with just 2 installs, and always remembered to install a new CPAN module in both areas.
We're now 100% Windows, so I don't have the same issue. However, we do run Perl off a shared network drive. All the users map this drive, and run a Registry script that associates .PL files with the network install of Perl. (See my answer to this other Perl question.)
So, besides the mapped drive and the Registry script, users don't need to install anything. Even the CPAN modules are picked up from the network. This solves item #1 (for Windows-only users).
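As a sketch, the association that the Registry script sets up amounts to the equivalent of these built-in Windows commands (the N: drive letter and path are assumptions):
C:\> assoc .pl=PerlScript
C:\> ftype PerlScript="N:\perl\bin\perl.exe" "%1" %*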
Same thing holds true for item #2: the scripts are stored on a network drive (same one) and the users run another Registry script to include the scripts folder in their search PATH. We edit our scripts in one area, and have a "Check-In 'n Release" ("CINR") that we use to, well, check-in and release scripts to the area the users point to. The users can double-click the scripts in Explorer, run them in DOS, or even better yet get them included in the contextual-menu in Explorer, etc. (Actually, we use a .NET application to map the drive and make all these settings for the user, but it can be done much simpler.)
So, how does this help with the other platforms, Linux and Mac? As I ran into in my Solaris/Linux experiment, I think you're stuck with a different Perl installation for each of the 3 platforms, although you should be able to reach the same network drive for your Perl scripts and modules.
The Perl installation might even be OK on a network drive for the Linux users. It's probably easier for them than the Windows users. Mac users are tough. I administer a home Mac network, and I think network drives are very difficult to do in Mac OS X compared to other OSes. It should be as easy as in Linux since so much is the same, but there are very strange problems (for me) mapping NFS and SMB drives. AFP drives are a little easier for the user to map manually, but not so easy to map programmatically.
My Mac recommendation is to try using Platypus. It's definitely great at bundling scripts into a double-clickable application, although your interface options are limited to output only (no user input allowed during execution, as far as I can tell). I'm not sure whether you can put an entire Perl installation into the Platypus app, but if you could get the paths figured out, you might be able to.
Good luck!
You may wish to check out Cava Packager. It can deal with multiple scripts in a single package.

How to run multiple Perl installs on one machine?

Is it possible to run multiple installs of Perl (in "containers") on one machine?
Reason is that I have different Perl-based server side web applications and wish to schedule updates to them independently.
For example, bugzilla upgrades seem to me to be very invasive, downloading all manner of module updates, and lengthy too (thereby increasing the chance of unpredictable behavior in other applications that depend on those modules during the time that the upgrade is still partial).
I think it should be possible to run multiple independent server-side CGI Perl applications on one server; I'd rather not be told to separate them onto different machines - I think that's wasteful, and I don't have that resource anyway.
Investigate PerlBrew and cpanm:
http://qa.celogeek.com/programming/perl/for/developer/overview
Edit, more info:
http://www.bryanesmith.com/documents/a2pm/perlbrew-june-14-2011.pdf
http://www.dagolden.com/index.php/1384/parallel-make-for-perlbrew/
http://www.perlbrew.pl/
It's easy to install and manage multiple perls. Simply install them in different places and use each perl's tools. I talk about this in The Effective Perler.
Some people suggest perlbrew without realizing that it doesn't really give you any benefit. It can download a perl, configure and install it, and switch around symbolic links to make one of those the default. It doesn't do anything magical, though.
Downloading and installing aren't a problem though. You've never needed root or sudo to do that, and if you do, you'll still need it for perlbrew. You can always install into any directory where you have permission. perlbrew doesn't get around that at all. From the source directory, you have two simple commands to run:
$ ./Configure -des -Dprefix=/where/you/want/to/install
$ make install
For you, that might mean Bugzilla gets its own perl:
$ ./Configure -des -Dprefix=/where/you/want/to/install/bugzilla-perl
$ make install
From there, you have a completely self-contained perl installation. When it matters to me which perl I use, I give the program the full path to it:
#!/where/you/want/to/install/bugzilla-perl/bin/perl
It's much easier to make these per-application installations without perlbrew, which wants to do as much as it can for you, including deciding the directory name, which it would prefer you not know at all.
perlbrew's main advantage is not the compilation and installation, but its switch feature, which lets you make one perl the default. You probably don't want that feature, though, because you want bugzilla, CGI programs, and so on using only the perl you want them to use, not whatever default perl you last specified.
When you want to update the bugzilla-perl, just use its tools, which already have adjusted shebang lines to find the right perl:
$ /where/you/want/to/install/bugzilla-perl/bin/cpan ...
I don't like all of those long paths, though, which is why I make links to them all. Then I can just call them with whatever naming scheme I decide, which might be:
$ bugzilla-cpan ...
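Creating such a link is a one-liner, for instance (assuming ~/bin is on your PATH):
$ ln -s /where/you/want/to/install/bugzilla-perl/bin/cpan ~/bin/bugzilla-cpan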
There's never a question about which tool or version I'm using.

Why can't I simply copy installed Perl modules to other machines?

Being very new to Perl but not to dynamic languages, I'm a bit surprised at how far from straightforward the management of modules is.
Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment).
At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable
At home (Mac OS X) it does work
In the external environment (Linux CentOS) it worked, but only after hours, because I don't have root access and had to configure cpan to operate as a non-root user
I've also tried another server where I have access. Whereas the previous external environment is a VPS, so I have shell access, this other one is cheap shared hosting where I have no way to install new modules beyond the pre-installed ones
At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there.
Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working, for example, with C, where I had to compile all the libraries for every platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works.
So, my question: if Perl's modules are written in Perl, why doesn't the copy/paste approach work? If some modules (or some parts of them) have to be compiled, how can I see on CPAN whether a module is Perl-only or relies upon compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? That would solve my problem under Windows.
Now, Perl is a dynamic language, but that doesn't imply that everything that people write is portable across platforms. That's not the fault of the language. It's not even the fault of the programmer. Some things, like Win32::OLE shouldn't work on Unix. :)
Other dynamic languages will have some of the same problems. If you have to compile C code, you won't be able to merely copy files to another machine. Some distributions configure the code slightly differently depending on your operating system, etc.
Even if you could copy files, you have to ensure that you copy all of the files that you need. Do you know everything that you need for a particular module? Remember, many of them have dependencies.
Most of the problems you're having aren't anything to do with the language. You're having trouble with the tools. If you want a zero conf CPAN tool that makes all the decisions for you, try cpanminus. It's mostly the same thing that you'd get out of cpan (although different code), but it makes all of the decisions for you. It doesn't run any of the distribution tests, and it installs into your user directory. When you need something that gives you control, come back to cpan.
In the external environment (Linux CentOS) it worked, but only after hours, because I don't have root access and had to configure cpan to operate as a non-root user
This is one of those times when it helps to know The Trick. In this case local::lib, which lets you configure a non-root install area and all the ENV variables in about three minutes.
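To give a feel for what local::lib does, running perl -Mlocal::lib prints shell code along these lines (paths assume the default ~/perl5 install area), which you eval in your shell profile:
$ perl -Mlocal::lib
PATH="/home/you/perl5/bin${PATH:+:${PATH}}"; export PATH;
PERL5LIB="/home/you/perl5/lib/perl5"; export PERL5LIB;
PERL_LOCAL_LIB_ROOT="/home/you/perl5"; export PERL_LOCAL_LIB_ROOT;
PERL_MB_OPT="--install_base \"/home/you/perl5\""; export PERL_MB_OPT;
PERL_MM_OPT="INSTALL_BASE=/home/you/perl5"; export PERL_MM_OPT;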
if Perl's modules are written in Perl, why doesn't the copy/paste approach work?
Some are written in Pure Perl, but many are written partially in C (using Perl's XS API) for efficiency.
Sometimes you end up with situations like JSON::XS and JSON::PP, plus JSON::Any to autoselect the best one that is installed.
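The autoselect idea itself is plain Perl; a minimal sketch (assuming at least one of the two backends is installed):
use strict;
use warnings;
# Prefer the fast XS backend; fall back to the pure-Perl one.
my $json_class = eval { require JSON::XS; 'JSON::XS' }
              || do { require JSON::PP; 'JSON::PP' };
print "Using $json_class\n";
print $json_class->new->encode({ ok => 1 }), "\n";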
Isn't there a way to download the module (tar, zip...) and use cpan to deploy it?
The cpan program is all about getting things from the Internet. You can download the package (there will be a link along the lines of "Download: CGI.pm-3.49.tar.gz" on the right hand side of the CPAN page), untar it, then
perl Makefile.PL
make
make install
You would probably be better off configuring your cpan installation to use only HTTP sources (in the urllist config option), possibly going so far as to create a mini-CPAN mirror inside your network.
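Adjusting urllist is done from the cpan shell; for instance (the mirror URL is illustrative):
cpan> o conf urllist http://www.cpan.org/
cpan> o conf commit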

Deploying Perl to a share nothing cluster

Anyone have suggestions for deployment methods for Perl modules to a share nothing cluster?
Our current method is very manual.
Take down half the cluster
Copy Perl modules (CPAN-style modules) to the downed cluster members
ssh to each member and run perl Makefile.PL; make; make install on each module to be installed
Confirm deployment
Bring the newly deployed cluster members into service, take the old cluster members out of service, and repeat steps 2 -> 4
This is obviously far from optimal. Does anyone have, or know of, good tool chains for deploying Perl modules to a share-nothing cluster?
Take one node offline, install Perl, and then use it to reimage the other nodes.
At least, that's how I imagine you'd want to install software in a shared-nothing cluster. Perl is just the application you happen to be installing.
Assuming all the machines are identical, you should be able to keep one canonical installation and use rsync or something similar to keep the others updated.
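A sketch of that sync step (the paths and hostname are assumptions; --delete prevents removed modules from lingering):
$ rsync -az --delete /opt/perl/ node02:/opt/perl/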
I have, in the past, developed a Perl program which used the Expect module (from CPAN) to automate basically the process you described, automatically sshing to each host, copying any necessary files, and performing the installations. Unfortunately, this was developed on-site for a client, so I do not have access to the code to share. If you're familiar with Expect, it shouldn't be too difficult to set up, though.
We currently have a clustered Perl application that does data processing. We also have numerous CPAN modules and modules that we've developed that the software depends on. When you say 'shared nothing', I'm assuming you're referring to things like NFS mounts.
If the machines have identical configurations, then you may be able to build your entire application into a single directory structure (e.g. /opt/my-app), tar it up, and that could become the only thing you need to push to the boxes.
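For instance (paths and hostname are assumptions):
$ tar czf my-app.tar.gz -C /opt my-app
$ scp my-app.tar.gz node02:/tmp/
$ ssh node02 'tar xzf /tmp/my-app.tar.gz -C /opt'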
As far as deploying it to the boxes, you might be able to use Capistrano. We developed a couple of our own cluster utilities that piggybacked off of ssh - I've released one form of that utility: parallel-jobs. Its README shows an example of executing multiple parallel ssh commands. It's a small step to extend that program to be able to know about your cluster and then be able to execute the same command across the cluster (as opposed to a series of different commands).
If you are using a Debian or Ubuntu OS you could package your Perl modules - I have open-sourced some code to help with this: Perl module builder. It's still very rough but does work, and it can be made to work on your own code as well as CPAN modules; this then makes deployment much easier.
There is also a project to get Red Hat RPMs for all of CPAN; Dave Cross gave a talk, Perl in RPM-Land, which may be of use.
If you are on some other system which doesn't have packaging, then the rsync option (install on one machine and then rsync to the others) should work as well; note that you can mount a Windows share and rsync to it from Unix if needed.
Using a central manager like Puppet makes creating and maintaining machines in a cluster a lot easier, from installing code to managing users and email configuration. There is also a Perl project in the pipeline to do something similar, but this has not been made public yet.
Capistrano is a tool that allows you to run commands on a group of servers; it is perfectly suited to making your task considerably easier.
Further down the line of automation, but also complexity, is Puppet, which allows you to define a group of servers, give them roles, and then push out sets of code to every machine subscribing to a certain role.
I am not sure exactly what a share-nothing cluster is, but if it uses some base *nix system like Fedora, Mandriva, or Ubuntu, many of the Perl modules are precompiled for specific architectures, and you can easily run these.
If these systems are of the same arch, you can do as someone else said and just copy the compiled modules from system to system; just make sure you have all of the dependencies on the recipient system as well.