How to insert a column between other columns (Perl Spreadsheet::WriteExcel)

Let's say I have the following spreadsheet, which I can parse in Perl:
I want to insert a column between Column1 and Column2, so the end result looks like this:
It doesn't look like there is a built-in method for this in Spreadsheet::WriteExcel.
Does anyone know an easy way of doing this in Perl?
Many thanks in advance!

The only(*) thing that Spreadsheet::WriteExcel can do is write spreadsheets. It doesn't have any facilities for reading an existing spreadsheet. Using it to modify an existing spreadsheet would involve first reading it with some other method (like Spreadsheet::ParseExcel) and then writing a brand new spreadsheet with the data the way you want it. Note that if you try this, you will lose macros, graphs, and any other feature that Spreadsheet::WriteExcel doesn't support.
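A minimal sketch of that read-then-rewrite approach might look like this; the file names, sheet index, and insertion point are all assumptions you would adapt, and only cell values are copied (formatting is lost):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Spreadsheet::ParseExcel;
    use Spreadsheet::WriteExcel;

    my $parser   = Spreadsheet::ParseExcel->new;
    my $workbook = $parser->parse('input.xls') or die $parser->error;  # hypothetical file
    my $sheet    = $workbook->worksheet(0);

    my $out    = Spreadsheet::WriteExcel->new('output.xls');
    my $osheet = $out->add_worksheet();

    my $insert_at = 1;  # new column goes between column 0 (Column1) and column 1 (Column2)

    my ($row_min, $row_max) = $sheet->row_range;
    my ($col_min, $col_max) = $sheet->col_range;

    for my $row ($row_min .. $row_max) {
        for my $col ($col_min .. $col_max) {
            my $cell = $sheet->get_cell($row, $col);
            next unless defined $cell;
            # Shift every cell at or past the insertion point one column right.
            my $dest = $col >= $insert_at ? $col + 1 : $col;
            $osheet->write($row, $dest, $cell->value);
        }
        # Column $insert_at is left empty here; write your new data into it as needed.
    }
    $out->close;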
The documentation for Spreadsheet::WriteExcel goes over a lot of the alternatives; see the WRITING EXCEL FILES and MODIFYING AND REWRITING EXCEL FILES sections. Win32::OLE, for instance, gives you full access to Excel's internals, with all the power and ease of use you'd expect from a Microsoft API. I'll leave it to you to judge whether any of these approaches qualify as "easy".
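For instance, a rough Win32::OLE sketch that inserts the column in place, which keeps macros, charts, and formatting intact (the paths are hypothetical, and it requires Windows with Excel installed):

    use strict;
    use warnings;
    use Win32::OLE;

    my $excel = Win32::OLE->new('Excel.Application') or die Win32::OLE->LastError;
    $excel->{DisplayAlerts} = 0;  # don't prompt when overwriting the output file

    my $book = $excel->Workbooks->Open('C:\\data\\input.xls');  # hypothetical path
    $book->Worksheets(1)->Columns('B:B')->Insert;  # insert a column before column B
    $book->SaveAs('C:\\data\\output.xls');
    $book->Close;
    $excel->Quit;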
(*) - I don't mean that in a bad way. A Perl module that can write spreadsheets is pretty freaking cool.

Related

Sorting file contents idiomatically with spring-batch

I have a CSV file with a number of fields. What is an idiomatic way to read the file, sort it using a subset of fields, and then write another CSV as output?
Should I even attempt to do this in spring-batch? I understand that *nix-based OSes have the sort utility to do this, but I'd like to contain all my work within spring batch if possible.
The Batch Processing Strategies section of the documentation seems to suggest that there might be standard utility steps to accomplish this:
In addition to the main building blocks, each application may use one or more of standard utility steps, such as:
Sort: A program that reads an input file and produces an output file where records have been re-sequenced according to a sort key field in the records. Sorts are usually performed by standard system utilities.
But I am not able to locate this. Any pointers most welcome!
Thanks very much!
Unless you really must do it inside Spring Batch, I would suggest you do it with OS-based commands.
That said, your point is correct: adding intermediate Steps to your Jobs to sort, filter, or even clean data is a mainstream pattern in batch processing and ETL jobs.
Hope this helps.
I found out that Spring Batch provides a SystemCommandTasklet that is meant to run OS commands. It can be used for things like sorting, finding unique items, etc.

Light weight data store in Perl

My requirement is to maintain a simple data store with some rows (~1000) & columns (6)
Over a period of time (2 years), I expect the data to grow to 1000-1500 lines/rows.
I would like to query, insert & update the data store.
I need this data store because the data needs to be processed by another script.
I am using Perl for programming.
I have seen some threads on Stack Overflow about this (e.g., "looking for light-weight data persistence solution in perl"), but I cannot make a decision.
Is anyone using a lightweight data store in Perl with query, insert & update capabilities?
Go for SQLite. It is powerful, tunable and lightweight.
You've already accepted an answer, but in your case I might just go with hashes and use Storable to write my structure to disk. That is, if you don't have multiple people using the data at once.
The advantage is that it's all standard Perl, so it will work with almost any Perl installation. Can't get any lighter weight than this.
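If you go that route, a minimal sketch with Storable (which ships with core Perl) might look like this; the file name and record layout are invented:

    use strict;
    use warnings;
    use Storable qw(store retrieve);

    # Rows keyed by id, each row a hash of your columns.
    my %rows = (
        1 => { name => 'alpha', count => 10 },
        2 => { name => 'beta',  count => 20 },
    );
    store \%rows, 'data.sto';         # write the whole structure to disk

    my $data = retrieve('data.sto');  # read it back
    $data->{2}{count} = 25;                          # update
    $data->{3} = { name => 'gamma', count => 30 };   # insert
    store $data, 'data.sto';

    # "Query" is just plain Perl over the structure:
    my @big = grep { $_->{count} > 15 } values %$data;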
Probably the simplest lightweight solution would be to use DBI with DBD::SQLite.
If your data is relational and you are comfortable with SQL, then I vote DBD::SQLite.
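A hedged sketch of the DBI + DBD::SQLite approach, with an invented table standing in for your six columns:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=store.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS items (
            id INTEGER PRIMARY KEY,
            c1 TEXT, c2 TEXT, c3 TEXT, c4 TEXT, c5 TEXT
        )
    });

    # Insert and update with placeholders.
    $dbh->do('INSERT INTO items (c1, c2, c3, c4, c5) VALUES (?, ?, ?, ?, ?)',
             undef, 'a', 'b', 'c', 'd', 'e');
    $dbh->do('UPDATE items SET c2 = ? WHERE id = ?', undef, 'B', 1);

    # Query into an array of hashrefs.
    my $rows = $dbh->selectall_arrayref('SELECT * FROM items WHERE c1 = ?',
                                        { Slice => {} }, 'a');
    print "$_->{id}: $_->{c2}\n" for @$rows;

    $dbh->disconnect;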
However, if your data is more like documents (each entry's data is self-contained), or if you are not comfortable with SQL, then I recommend DBM::Deep. Its interface is as easy to use as regular Perl variables.
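A tiny sketch of what that looks like with DBM::Deep; the file name and fields are invented:

    use strict;
    use warnings;
    use DBM::Deep;

    my $db = DBM::Deep->new('store.db');  # file is created on first use

    $db->{42} = { name => 'widget', qty => 7 };  # insert
    $db->{42}{qty} = 9;                          # update, written straight to disk
    print $db->{42}{name}, "\n";                 # query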
Finally, if you want to be really modern, MongoDB is very easy to install and the new Mango Perl module is very cool, just saying :-)

How to handle large columnar data files in Octave WITH headers?

I have a .dat file that is space/tab delimited with one line of headers. There are about 60 columns of data in this file. There are other files with different numbers of columns.
How can I read in the headers (as a vector, perhaps?) such that I can index into the appropriate column of the data-matrix without having to count my way manually to the correct column?
I seem to recall MATLAB being able to create cell arrays with headers as indexes. Is anything like that remotely possible in Octave?
So far, all I can get is the actual data according to this:
a = dlmread('Core.dat', ' ', 1, 0);  % skip the single header row, start at column 0
Any and all help is much appreciated! Thanks!
I've been looking for an easy way to do this using just the standard packages, also, but there doesn't seem to be one.
However, it does look like the dataframe package might let you do this sort of thing.
It does seem like something simpler should be built into the language, though.

Office development - Word

I have a Word document [template] with some placeholders in it. I need to populate the placeholders with some data. I also need to generate a table at runtime; I can't have the table designed at design time, since the number of rows and columns varies.
I see a lot of posts online: WordprocessingML, OpenXML. Which path should I take? Do I even have to use the template, or should I just generate the entire doc at runtime? I am confused...
As the comments mention, the question is a bit broad, but in general, there are a few alternatives.
1) If you can deal only with the newer-format DOCX files, then Plutext's OpenDoPE is a good possible solution.
2) If you have to deal with older-format DOC files, you may find that Word COM Automation is about the only decent solution, but that has other issues, such as speed and the much greater difficulty of using it in a server environment. (A sketch of the COM approach follows this list.)
3) There are some third-party Word libraries out there that let you manipulate DOC files for mail merge, but most give you barely more functionality than the default Word mail merge. Windward Reports is one solution I came very close to using at one point; it's not cheap, but it is quite powerful. Aspose is another one, though its merge is pretty basic.
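To make option 2 concrete, here is a rough sketch of Word COM automation, shown in Perl via Win32::OLE to match the rest of this page (the same object model applies from any COM-capable language); the template path and the {{CUSTOMER}} placeholder convention are assumptions:

    use strict;
    use warnings;
    use Win32::OLE;
    use Win32::OLE::Const 'Microsoft Word';  # imports wdReplaceAll, wdCollapseEnd, ...

    my $word = Win32::OLE->new('Word.Application') or die Win32::OLE->LastError;
    my $doc  = $word->Documents->Open('C:\\templates\\report.doc');  # hypothetical

    # Fill a placeholder.
    $doc->Content->Find->Execute({
        FindText    => '{{CUSTOMER}}',
        ReplaceWith => 'Acme Corp',
        Replace     => wdReplaceAll,
    });

    # Build a table whose size is decided at runtime.
    my $range = $doc->Content;
    $range->Collapse(wdCollapseEnd);
    my $table = $doc->Tables->Add($range, 3, 4);  # rows, cols computed at runtime
    $table->Cell(1, 1)->Range->{Text} = 'Header';

    $doc->SaveAs('C:\\out\\report.doc');
    $doc->Close;
    $word->Quit;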

How can I create a web page that shows aggregate data from Sawtooth surveys?

I'm guessing this won't apply to 99.99% of anyone who sees this. I've been doing some Sawtooth survey programming at work, and I need to create a web page that shows some aggregate data from the completed surveys. I was just wondering if anyone else has done this using the flat files that Sawtooth generates, and how you went about doing it. I only know very basic Perl, and the server I use does not have PHP, so I'm somewhat at a loss for solutions. Anything you've got would be helpful.
Edit: The problem with offering example files is that it's more complicated than that: it's not a single file, and the data occasionally gets moved to a different file with a different format. Those added complexities are why I'm asking this question.
Doesn't Sawtooth export to CSV format? There are many Perl parsers for CSV files; just about every language has a CSV parser or two (or twelve). MS Excel can open them directly, and they're still plain text, so you can look at them in any text editor.
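For example, a minimal tally over a hypothetical CSV export using Text::CSV (the file name and column position are made up):

    use strict;
    use warnings;
    use Text::CSV;

    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });

    open my $fh, '<', 'export.csv' or die "export.csv: $!";
    my $header = $csv->getline($fh);   # first row: column names

    my %count;
    while (my $row = $csv->getline($fh)) {
        $count{ $row->[2] }++;         # e.g. tally responses in the third column
    }
    close $fh;

    printf "%s: %d\n", $_, $count{$_} for sort keys %count;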
I know our version of Sawtooth at work (which is admittedly very old) exports Sawtooth data into SPSS format, which can then be exported into various spreadsheet formats including CSV, if all else fails.
If you have a flat (fixed-width field) file, you can easily parse it in Perl using regular expressions, or just by taking substrings of each line one at a time, assuming you know the width of the fields. Your question is too general to give much better advice, sorry.
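As an illustration, unpack is the idiomatic Perl tool for fixed-width records; the field widths below are invented and would have to match your file's actual layout:

    use strict;
    use warnings;

    # Assumed layout: respondent id (8 chars), question code (6), answer (4).
    open my $fh, '<', 'survey.dat' or die "survey.dat: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($id, $question, $answer) = unpack 'A8 A6 A4', $line;
        print "$id $question $answer\n";   # aggregate instead of printing, as needed
    }
    close $fh;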
Matching the values up from a plaintext file with meta-data (variable names and labels, value labels etc.) is more complicated unless you already have the meta-data in some script-readable format. Making all of that stuff available on a web page is more complicated still. I've done it and it can be a bit of a lengthy project to roll your own. There are packages you can buy, like SDA, which will help you build a website where people can browse and download your survey data and view your codebooks.
Honestly, though, the easiest thing to do if you're posting statistical data on a website is to get the data into SPSS or SAS or another statistics package format and post those files for download directly. Then you don't have to worry about it.