I need to allow users to upload a zip file via a web form. The server is running Linux with an Apache web server. Are there advantages to using a module like Archive::Zip to extract this archive or should I just execute a system call to unzip with backticks?
According to the Archive::Zip documentation you'd be better off using Archive::Extract:
If you are just going to be extracting zips (and/or other archives) you are recommended to look at using Archive::Extract instead, as it is much easier to use and factors out archive-specific functionality.
That's interesting because Archive::Extract will try Archive::Zip first and then fall back to the unzip binary if it fails. So it does seem that Archive::Zip is the preferred option.
Archive::Zip uses Compress::Raw::Zlib, a low-level interface to the system zlib library, so it's not a pure-Perl implementation and its performance should be comparable to unzip's. In other words, from a performance perspective there's no reason to pick unzip ahead of Archive::Zip.
If you execute the unzip binary, your process will fork/exec and:
- instantiate a new process
- consume more memory (for the duration of the spawned process)
You'll also have to configure the correct path to the unzip binary. Given all this, I would strongly prefer the library approach.
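To illustrate, here is a minimal sketch of the library approach (the paths are hypothetical placeholders for wherever your form handler saves the upload):

    use strict;
    use warnings;
    use Archive::Zip qw( :ERROR_CODES );

    my $upload_path = '/tmp/upload.zip';   # hypothetical: saved upload
    my $dest_dir    = '/tmp/extracted';    # hypothetical: extraction target

    my $zip = Archive::Zip->new();
    $zip->read($upload_path) == AZ_OK
        or die "Cannot read zip archive: $upload_path";

    # extractTree( $root, $dest ) unpacks every member under $dest_dir
    $zip->extractTree( '', "$dest_dir/" ) == AZ_OK
        or die "Extraction failed";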
One concern is memory. We found out the hard way (a production web server crashed) that Archive::Tar had a memory leak. So while using a module instead of shelling out to an external command is generally a good idea (see the other responses for the reasoning), you need to make sure the module has no gotchas.
For years I have been using the following at the top of my scripts:
use lib '/var/www/vhosts/example.com/demo.example.com/cgi-bin/library';
That works fine when the lib is within the same domain space as the calling script.
However, I want to call in from a centralised library so I will have just one place to set db credentials.
However, if I adjust that line to call in from another account on the same server, it cannot find the library:
use lib '/var/www/vhosts/example2.com/demo.example2.com/cgi-bin/library';
This is running on Plesk, if that makes a difference; it used to run on cPanel and I had no issues.
I would appreciate a pointer; I have already read some docs and I am confused.
Only someone with access to the configuration of your web server can answer this for sure, but I'd guess that each of your vhosts is running as a different user and the users can only read files from their own web space.
This approach won't work. If you want to have a centralised module library then either install the modules that you want in the system module library (i.e. where cpan will install them by default) or create your own new centralised library somewhere that isn't under one of the vhost directories (perhaps under /opt).
However, it's worth noting that best practices for deployment of applications are moving in completely the opposite direction. It's generally considered a good idea for each application to have its own set of dependencies installed in its own module library. Using a cpanfile to record the exact versions of the dependencies that you're using makes this simple.
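For example, a minimal cpanfile might look like this (module names and versions are illustrative):

    # cpanfile -- one line per dependency
    requires 'DBI',        '1.643';
    requires 'DBD::mysql', '4.050';
    requires 'CGI',        '4.51';

You can then install those dependencies into a per-application directory with a tool such as Carton (carton install) or cpanm --installdeps . --local-lib local.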
We are getting ready to deploy a Tcl application, but I'm having trouble figuring out how to do it. Currently, I'm experimenting with tclkit and sdx.kit. I can pack a single Tcl file and run it, but the whole application contains folders, images, and C files that work together with the Tcl code: I have two folders holding a bunch of C files, Tcl files, and other resources. How would I go about wrapping the whole thing? What tool do you recommend other than tclkit, and why?
The main way that you're recommended to distribute applications is as a tclkit. There are a few alternatives (e.g., TOBE, ActiveState's commercial tooling) but they're pretty similar as they all build on top of Tcl's virtual filesystem layer. (NB. This isn't the same as the Linux VFS stuff; this is a VFS in a single application.) Indeed, the ActiveState tooling actually is a rebadged tclkit (plus some other stuff like code obfuscation). I believe that TOBE uses ZIP archives instead of metakit databases.
The advantage of using a VFS-based solution is that it means that lots of things work inside, particularly including both source (for getting another .tcl file in) and load (for getting a binary library). In fact, you can put your application, the packages it depends on, and the resources (images, etc.) inside the VFS and be fairly sure that things will work. About the only things that we know run into real problems are where you want to exec something in the archive (the VFS mount is process-local; you have to copy the subsidiary file out if you want it to be seen in subprocesses) and if you're wanting to load certificates or private keys with the tls package (because the underlying OpenSSL library doesn't delegate to Tcl to handle that part of its I/O for some reason, AIUI).
When you're building these things, effectively you make a directory (and its subdirectories) that have everything laid out right. Then you run the packager (sdx for tclkits) and it builds the overall application for you. Attach the result to a runtime (the standard tclkit) and you're ready to test and deploy.
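As a sketch, the usual sdx invocations look something like this (myapp is an illustrative name; myapp.vfs is the directory you laid out):

    # pack the myapp.vfs directory into a starkit
    tclkit sdx.kit wrap myapp.kit

    # or attach a runtime to get a standalone starpack; the runtime must be
    # a copy of tclkit, not the one currently running sdx
    tclkit sdx.kit wrap myapp -runtime tclkit-copy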
We don't generally do tool recommendations here on Stack Overflow, but the ActiveState Tcl Dev Kit is actually rather widely used. Many other people use sdx/tclkit. TOBE is quite a lot rarer. (There are other packaging techniques, but I wouldn't recommend them these days; a packaged VFS works very well indeed.)
I need help: is there a way to fetch a file from a remote server using only core modules of Perl 5.8.8? File::Fetch only became a core module in 5.9.
This comes up all the time. Take a look at the classic "Yes, even you can use CPAN". If you have the ability to create and run a Perl script, then you also have the ability to put a module in your local directory and use it. The requirement to use only core modules is entirely artificial.
In your case, LWP::Simple's getstore() function will do what you want. While it is technically not core, LWP::Simple is included by default with many Perl distributions. You may well already have it.
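For instance (the URL and destination path are placeholders):

    use strict;
    use warnings;
    use LWP::Simple qw(getstore);
    use HTTP::Status qw(is_success);

    my $url  = 'http://example.com/file.tar.gz';   # hypothetical URL
    my $dest = '/tmp/file.tar.gz';

    # getstore() fetches $url and writes it to $dest, returning the HTTP status
    my $status = getstore($url, $dest);
    die "Download failed with HTTP status $status" unless is_success($status);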
Update: so you want to do this on 1000 servers? There's no need to manually install the module on each one. Use CPAN programmatically to download and install the module(s) you need (some tweaking will be needed to get CPAN to install them locally rather than in the root module library). Leon Timmermans's suggestion of fatpacking the module is another option.
If you really don't want to do it this way, then basically the answer is no: there is no simple way to fetch a remote file over HTTP without either the appropriate modules or a system command. (I don't consider writing your own HTTP client a simple method, but that's fine if it works for you.)
The only other potential solution I see would be a different approach to your problem, such as:
- Using a script in a single location to get the file, then distributing it to all 1000 servers via FTP.
- Putting the file on an FTP server, then using a simple Perl script on each server to fetch it via FTP.
As Dan already said, yes, even you can use CPAN. One approach his link doesn't mention is writing it as a normal CPAN-using distribution and then fatpacking it. App::FatPacker combines a script with all its (pure-Perl) dependencies, creating a single, easy-to-distribute file.
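With recent versions of App::FatPacker this can be a one-liner (the script name is illustrative):

    # bundle myscript.pl plus its pure-Perl dependencies into one file
    fatpack pack myscript.pl > myscript.packed.pl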
You could use:
my $wgetoutput = `wget "$myFileToGet"`;
Stuff in backticks (`) will be given to the default shell, so you can call whatever you want (and are allowed) there.
Caveat: $myFileToGet could contain something like "&& rm -rf *", so don't forget to sanitize it!
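Alternatively, you can sidestep the shell entirely with the list form of system(), which passes each argument directly to the command (the output file name is illustrative):

    # list form: no shell is involved, so metacharacters in
    # $myFileToGet arrive as literal argument text
    system('wget', '-O', '/tmp/download', $myFileToGet) == 0
        or die "wget failed: exit status $?";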
I want to run a Perl script online, but I don't know how.
In PHP you need to start with <?php, so do you have to start with something like that in Perl?
And does Apache automatically recognize Perl? Or do I have to upload Perl and point to it using #!/path/to/perl?
Can I use print() to display HTML?
In PHP you need to start with <?php, so do you have to start with something like that in Perl?
There are frameworks (such as Mason) which work like that, but it is more typical to have a standard Perl program which outputs the page.
And does Apache automatically recognize Perl?
Apache doesn't automatically recognise any kind of server side programming.
Or do I have to upload Perl and point to it using #!/path/to/perl?
You would need to have Perl installed on the server. You would generally start a script that way (but not necessarily, e.g. if you were using mod_perl), but would have to configure the server to recognise it as an executable and run it (just as you have to configure the server to recognise files ending with .php as scripts to run with PHP).
Can I use print() to display HTML?
Yes.
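For example, a minimal CGI script might look like this (the interpreter path may differ on your server):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # a CGI script must print an HTTP header before any content
    print "Content-Type: text/html\n\n";
    print "<html><body><h1>Hello from Perl</h1></body></html>\n";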
You should probably start by looking at the question Web Programming For The Non-Web Programmer (in Perl).
Must you use Apache? If not, here is an alternative to consider.
I have found that the built-in servers and templating engine in the Mojolicious framework work very naturally for inline Perl within HTML. The tags are of the form <%== but work the same way. Also it has good documentation and examples to get you going.
Edit: It seems that there are ways to use Apache with Mojolicious too; see http://search.cpan.org/perldoc?Mojolicious::Guides::Cookbook. That said, the built-in servers have worked well for me, with far less (i.e. no) configuration.
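To give a taste, a minimal Mojolicious::Lite app with an inline template might look like this (route and template names are illustrative):

    #!/usr/bin/env perl
    use Mojolicious::Lite;

    # render the 'index' template from the DATA section below
    get '/' => sub {
        my $c = shift;
        $c->render(template => 'index', name => 'world');
    };

    app->start;

    __DATA__

    @@ index.html.ep
    <html><body><h1>Hello, <%= $name %>!</h1></body></html>

Run it with the built-in development server via morbo app.pl (or perl app.pl daemon) and visit http://localhost:3000.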
Apache httpd doesn't automatically understand Perl, or PHP for that matter. For PHP to work, you have to have an Apache httpd module called something like mod_php.so or libphp5.so installed. However, since many websites use PHP in this manner, this module is normally installed.
Just as you need mod_php in order to use PHP in Apache's httpd web server, you need to make sure your web server is using mod_perl if you want to use Perl in a similar manner.
You'll need to build and install mod_perl which can be tricky -- especially if you don't control the machine the server is on.
The other way to use Perl is to use what's known as CGI-Perl. This is much easier to setup, but it is also much more dangerous since it can lead to someone being able to run unauthorized programs on your Apache httpd server.
In this case, you need to set up a cgi-bin directory and configure Apache httpd; this is fairly simple. Once you do that, you put all of your Perl scripts into the cgi-bin directory. Your Perl scripts will then have to handle all of the communication between your web server and the web client and produce all of the output. Fortunately, that's not too difficult, since Perl gives you the basic modules to do this.
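A sketch of the relevant httpd configuration (all paths are illustrative; Require all granted is the Apache 2.4 syntax):

    ScriptAlias /cgi-bin/ /var/www/cgi-bin/
    <Directory "/var/www/cgi-bin">
        Options +ExecCGI
        AddHandler cgi-script .cgi .pl
        Require all granted
    </Directory>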
I've written a web CGI application in Perl and, before I start to distribute it to clients, I'd like to provide an option for future updates.
I would like to know what are the standard approaches for that using free Linux tools.
It is OK for the server to be stopped during updating.
Thank you,
Spasski
If you have separated code from configuration and data, then the easiest way is to tar/zip the new files and unpack them onto the existing installation. If you need to update the data files, then you could include a script that makes the necessary changes.
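A sketch of that flow (names and paths are illustrative):

    # on the build machine: archive the new code, leaving config/data out
    tar czf myapp-1.1.tar.gz -C build/myapp .

    # on the server, with the service stopped: unpack over the installation
    tar xzf myapp-1.1.tar.gz -C /var/www/cgi-bin/myapp
    perl update_data.pl   # hypothetical script applying any data-file changes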
Take a look at the Bugzilla upgrade guide. I've used this process many times without a hitch.
http://www.bugzilla.org/docs/tip/en/html/upgrade.html