How to parse .chm files in Perl?

How do you parse .chm files in Perl? Which module can be used for it?

How about Archive::Chm? From its description: it performs some read-only operations on HTML Help (.chm) files. The range of operations includes enumerating the contents, extracting the contents, and getting information about one particular part of the archive.
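A minimal sketch of what using it might look like; the method names here (new, extract_all) are assumptions based on the description above, so check the Archive::Chm documentation before relying on them:

use strict;
use warnings;
use Archive::Chm;

# Assumed constructor/method names - verify against the module's docs.
# 'manual.chm' is just a placeholder file name.
my $chm = Archive::Chm->new('manual.chm');
$chm->extract_all('./chm_contents');    # dump the HTML pages to a directory
# The extracted .html/.hhc files can then be parsed with e.g. HTML::TreeBuilder.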

Related

Perl: parse an .xlsm file without using Excel

I have an input .xlsm file from which I have to parse some values.
Currently I am using Win32::OLE, which for certain reasons I need to stop using.
Is there a way to parse that file without using Excel processes? My searches on Google led me to the Spreadsheet::ParseXLSX module and Excel::Writer::XLSX (with some problems), but I don't know whether they require Excel or not.
Thank you!
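For reference, Spreadsheet::ParseXLSX is pure Perl (it reads the zip/XML directly), so it does not need Excel or any OLE process. A minimal sketch, with 'input.xlsm' as a placeholder file name:

use strict;
use warnings;
use Spreadsheet::ParseXLSX;

my $parser   = Spreadsheet::ParseXLSX->new;
my $workbook = $parser->parse('input.xlsm')
    or die "Could not parse input.xlsm\n";

# Walk every sheet and print each non-empty cell's value.
for my $sheet ($workbook->worksheets) {
    my ($row_min, $row_max) = $sheet->row_range;
    my ($col_min, $col_max) = $sheet->col_range;
    for my $row ($row_min .. $row_max) {
        for my $col ($col_min .. $col_max) {
            my $cell = $sheet->get_cell($row, $col) or next;
            printf "%s (%d,%d): %s\n", $sheet->get_name, $row, $col, $cell->value;
        }
    }
}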

Getting a list of all files inside a zip/rar/7z file with Scala

Is there a way to get a list of all the files inside a compressed file without decompressing it?
I don't mind using a Java library, but all the solutions I found actually performed a decompression.
Also, if it is relevant, I know that the compressed file has subdirectories in it, and I want to get the files from them as well.

Extracting file names from an online data server in Matlab

I am trying to write a script that will allow me to download numerous (1000s of) data files from a data server (e.g., http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/). Unfortunately, the names of the files in each directory are not formatted in a similar way (the time at which they were created was appended to the end of the file name). I need to be able to specify the file name to subset the data (I have a special tool for these data types) and download it. I cannot find a function in MATLAB that will extract the file names.
I have looked at URLREAD, but it downloads everything including html code.
Thanks for your help!
You can easily parse the page at that link.
x = urlread(url)
links = regexp(x, '<a href=''([^>]+)''>', 'tokens')
This reads every link; you then have to filter out the ones you don't want.
For example, this gets all .grb files:
a = regexp(x, '<a href=''([^>]+\.grb)''>', 'tokens')

iPhone - reading .epub files

I am preparing an application for reading .epub files on the iPhone. Where can I find reference or sample applications for unzipping and parsing these files? Can anyone point me to a good link? Thank you in advance.
An .epub file is just a .zip file. It contains a few directory files in XML format and the actual book content is usually XHTML. You can use Objective-Zip to unzip the .epub file and then use NSXMLParser to parse the XML files.
More info: Epub Format Construction Guide
On top of Ole's answer (that's a pretty good how-to guide), it's definitely worth reading the specification for the Open Container Format (OCF) - sorry, it's a Word file. It's the formal specification for the zip structure used.
In brief, you parse the file by the steps below (sketched in code after this answer):
1. Checking that it is plausibly valid by looking for the text 'mimetype' starting at byte 30 and the text 'application/epub+zip' starting at byte 38.
2. Extracting the file META-INF/container.xml from the zip.
3. Parsing that file and extracting the value of the full-path attribute of the first rootfile element in it.
4. Loading the referenced file (the full-path attribute is a URL relative to the root of the zip file).
5. Parsing that file. It contains all the metadata required to reference all the other content (mostly XHTML/CSS/images). In particular, you want to read the contents of the spine element, which lists all content files in reading order.
If you want to do it right, you should probably handle DTBook content as well, and you will need to read and understand the Open Packaging Format (OPF) and Open Publication Structure (OPS) specifications too.
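The steps above are language-agnostic (a zip file plus some XML). Just to illustrate the flow, here is a rough sketch in Perl (matching the rest of this page) using Archive::Zip; the regexes stand in for a proper XML parser, and 'book.epub' is a placeholder:

use strict;
use warnings;
use Archive::Zip qw(AZ_OK);

my $zip = Archive::Zip->new();
$zip->read('book.epub') == AZ_OK or die "Cannot read book.epub\n";

# Steps 2-3: pull out container.xml and find the rootfile's full-path attribute.
my $container = $zip->contents('META-INF/container.xml')
    or die "No META-INF/container.xml\n";
my ($opf_path) = $container =~ /<rootfile[^>]*full-path="([^"]+)"/
    or die "No rootfile element found\n";

# Steps 4-5: load the OPF package file and list the spine in reading order.
# (Naive regexes: a real reader should parse the XML properly.)
my $opf = $zip->contents($opf_path) or die "Missing $opf_path\n";
my @spine = $opf =~ /<itemref[^>]*idref="([^"]+)"/g;
print "Reading order (manifest ids): @spine\n";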

How do you compare the content of two archive files programmatically?

I'm doing some testing to ensure that the all-in-one zip file I created with a script produces the same output as the contents of a few zip files that I must manually click and create via a web interface. As a result, the zips have different folder structures.
Of course I could manually extract them and use my powerful eyeball technique to scan them, or, even lazier, write a script to do that, but before I invest more time and get accused by my boss of stealing company time, I'm asking if there's a better way to do this.
I'm using a Perl LAMP stack, by the way.
Thanks.
You can use Perl's Archive::Zip or Python's zipfile to extract the file names, sizes, and CRC checksums of the files in the archives. Create a file which contains the results sorted by file name (ignore the path).
For your smaller ZIPs, merge the results of the script (cat list1 list2 list3 | sort).
Now, you can use diff to compare the results.
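A rough sketch of the Perl side of that with Archive::Zip (the exact output format does not matter; it just has to be the same for both archives so the sorted lists can be diffed):

use strict;
use warnings;
use Archive::Zip qw(AZ_OK);

my $zipfile = shift @ARGV or die "Usage: $0 archive.zip\n";
my $zip = Archive::Zip->new();
$zip->read($zipfile) == AZ_OK or die "Cannot read $zipfile\n";

# One line per file: basename, uncompressed size, CRC-32 (path deliberately ignored).
my @lines;
for my $member ($zip->members) {
    next if $member->isDirectory;
    (my $name = $member->fileName) =~ s{.*/}{};
    push @lines, sprintf "%s %d %s\n", $name, $member->uncompressedSize, $member->crc32String;
}
print sort @lines;

Run it once for each archive, redirect the output to A.txt and B.txt, and diff the two files.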
I can wholeheartedly recommend Beyond Compare. Unless you're really getting underpaid, it's the biggest bang for your (boss's) buck.
[Edit] I seem to have skimmed over the different folder structure, sorry about that. Beyond Compare can compare all files in folders with the same folder structure. It does not have (I believe) the intelligence to go searching for matches in files in different folders.
Regards,
Lieven
Create a CRC checksum for your files.
If the checksum is the same for the original files and the unzipped files, you can be sure the files are the same. This even works for non-text data.
A checksum can easily be created with an external program such as "SFV Checker" or programmatically (.NET/Java, for example, include libraries to do this).
Taking a cue from Carra's answer... if A.zip is your single big archive and B.zip is the archive generated through the web interface, then use the following algorithm (a small Perl sketch of these steps follows at the end of this answer):
1. Extract all files from A.zip and recursively (with respect to folders) compute the checksum of each file in the directory where the contents were extracted (using cksum, md5sum, etc.), then sort this information (pipe it through sort) and save it to a file (say A.txt).
2. Do the same for B.zip and generate B.txt.
3. Compare A.txt with B.txt; they should be exactly the same.
OR
Use unzip -l to get the file/directory lists for both (zip) archives, then flatten the hierarchy of the user-generated zip file and compare it with the contents of your script-generated zip file using something like diff. By flattening the hierarchy I mean you may need to do some kind of pre-processing on one or both lists before you can do a meaningful comparison with diff.
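A small Perl equivalent of steps 1-3, using File::Find and Digest::MD5 instead of shelling out to md5sum; the directory argument is wherever one archive was extracted, and only basenames are recorded so the differing folder structures do not matter:

use strict;
use warnings;
use File::Find;
use Digest::MD5;

my $dir = shift @ARGV or die "Usage: $0 extracted_dir\n";

my @lines;
find(sub {
    return unless -f;
    open my $fh, '<:raw', $_ or die "Cannot open $File::Find::name: $!\n";
    my $md5 = Digest::MD5->new->addfile($fh)->hexdigest;
    push @lines, "$md5  $_\n";   # $_ is the basename inside find()'s callback
}, $dir);

print sort @lines;

Run it once per extracted archive, save the outputs, and diff the two files.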