Using OLE in Perl to traverse Outlook folders

I've got a script which will happily export messages from a folder in Outlook to RFC 822 files, fine.
But I want to traverse/iterate/recurse through the entire list of folders in Outlook to extract copies of everything.
I'm thwarted by days of unsuccessful web searches.
Point me to TFM that I may R it.

Mail::Outlook and its all_folders() method?
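If you go the OLE route the title mentions, here is a minimal sketch using Win32::OLE that walks every folder in the default MAPI profile and prints an item count per folder; the export step itself is left as a comment, and all names here are illustrative rather than taken from your script:
use strict;
use warnings;
use Win32::OLE;

# Attach to a running Outlook instance, or start one.
my $outlook = Win32::OLE->GetActiveObject('Outlook.Application')
           || Win32::OLE->new('Outlook.Application')
    or die "Cannot start Outlook: ", Win32::OLE->LastError;

my $namespace = $outlook->GetNamespace('MAPI');

sub walk_folders {
    my ($folder, $path) = @_;
    my $name = "$path/" . $folder->Name;
    printf "%s (%d items)\n", $name, $folder->Items->Count;

    # $folder->Items holds the messages for this folder; this is where
    # your existing per-folder export code would plug in.

    my $subfolders = $folder->Folders;
    for my $i (1 .. $subfolders->Count) {      # OLE collections are 1-based
        walk_folders($subfolders->Item($i), $name);
    }
}

my $roots = $namespace->Folders;               # top-level stores/mailboxes
for my $i (1 .. $roots->Count) {
    walk_folders($roots->Item($i), '');
}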

Related

Rename a group of .txt files based on content that appears after a specific string in the text file?

I have a folder with hundreds of text files in it. The file names have zero meaning. I am trying to extract a variable number that appears after a specific string in each file and rename the files using it. I'm somewhat of a PowerShell newbie (an old timer from the DOS world) but I have written many useful scripts with it. I've searched high and low, and this one has me stumped.
Any and all suggestions are welcome, or I'd happily pay someone for the snippet of code. -Ed

Is it possible to extract metadata such as Content Created date from files - I can't get this with PowerShell

I need to extract the "Content Created" date from thousands of files, but haven't been able to find a way to do this using PowerShell or another command-line utility.
Does someone out there know a way to obtain this metadata? If so, please can you advise me. Thanks.
I've looked at various resources online, including this site, but haven't been successful thus far.
Here's a screenshot explaining what I'm trying to do.
I've been unable to find a native PowerShell cmdlet which does what you want. However, I found this article: Use PowerShell to Find Metadata from Photograph Files and the script it uses: get file meta data function.
The article talks about image files, but the function is not specific to image files.
I tested it out on a folder containing a Word and an Excel file, and the returned metadata from the Word file contains the Content Created date. The Excel file does not contain/return that value. This is not unexpected, as the Details tab of the Excel file's properties does not contain a Content Created value, so it seems to be specific to Word files, and maybe some other file or document types.
Update:
You write that you need to extract this info from thousands of files, but if those files are anything but Word files you probably won't be able to do that.
As far as I can tell this should work with any file type that exposes the kind of metadata you want. However, it seems that the ContentCreated property is unique to Word. I tried adding a text file (.txt), an Acrobat PDF (.pdf), an MS Access database (.mdb), an Excel workbook (.xlsx) and a Word document (.docx) to my test folder, and the only one that has/returns that metadata property is the Word file.
You should also be aware that the script seems to return metadata localized, so for me to programmatically get the info I wanted I had to pipe the output of the script to Select-Object -Property Name,'Innehåll skapat' (which is the Swedish name for Content created). So if you're running on a non-English system you may need to check what the output looks like before creating your Select-Object statement.
PowerQuery in Excel 2013 or later (Data tab): Connect to Data > Folder.

Splitting Emails with MIME::Parser

I got handed 4GB of emails concatenated into a single file and the suggestion that MIME::Parser could split the individual emails back out again. All my attempts to date end up with the parser just copying the original file without extracting any of the emails. So: Is this even something that MIME::Parser can handle? My code is very basic:
use Fcntl qw(O_RDONLY);
use IO::File;
use MIME::Parser;

my $file   = IO::File->new("somefile", O_RDONLY);   # the concatenated mail file
my $parser = MIME::Parser->new;
$parser->output_dir("somedir");                      # where extracted parts go
my $entity = $parser->parse($file);
$file->close;
Below is a link to sample data that some have requested. These are all spam and phishing emails. DO NOT CLICK ANY OF THE LINKS. Enjoy: Pastebin of 4KB of emails.
MIME::Parser is for reading a single mail to get at the attachments etc. It can be used to extract mails which are attached inside another mail as message/rfc822, but it is not intended to extract mails from some kind of archive with lots of mails concatenated in it.
It is not clear what format your single file of mails has, but if it comes from a UNIX system or from a Thunderbird installation it might simply be in the classical mbox format, and there are several tools to split mbox files into separate messages. Apart from several Perl modules, there are also other tools like git-mailsplit which help you extract the mails from the mbox format.
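If it does turn out to be an mbox file, a minimal sketch along those lines (file names are illustrative, and this ignores mbox subtleties like ">From " quoting) could look like this:
use strict;
use warnings;

# Split a classic mbox file into one .eml file per message.
# Each message in mbox format starts with a "From " separator at column 0.
my $mbox = 'somefile';
open my $in, '<', $mbox or die "Cannot open $mbox: $!";

my ($out, $count) = (undef, 0);
while (my $line = <$in>) {
    if ($line =~ /^From /) {                 # start of the next message
        close $out if $out;
        open $out, '>', sprintf("msg-%05d.eml", ++$count)
            or die "Cannot write message $count: $!";
        next;                                # drop the mbox separator line itself
    }
    print {$out} $line if $out;
}
close $out if $out;
print "Wrote $count messages\n";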

Getting spreadsheets from Google-Drive using Perl module Net::Google::Drive::Simple

In my script I'm using the Perl module Net::Google::Drive::Simple. I can see/download all my imported/shared files on my Google Drive, but I can't see/download any spreadsheets which I created or which are shared with me.
Am I using the wrong module for this, or are there special methods for handling spreadsheets?
Thank you in advance.
Only documents like PDF or PNG can be downloaded directly. Google Drive documents like spreadsheets or (text) documents need to be exported into one of the available formats. Check for "exportLinks" on a given file.
Source
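A rough sketch of that with Net::Google::Drive::Simple might look like the following. It assumes OAuth is already configured for the module and that the items returned by children() expose the Drive API fields (title, downloadUrl, exportLinks) as accessors; the folder path and the chosen export MIME type are illustrative:
use strict;
use warnings;
use Net::Google::Drive::Simple;

my $gd = Net::Google::Drive::Simple->new();

# children() returns an arrayref of file items for a folder path.
my $children = $gd->children('/some/folder');

for my $item (@$children) {
    # Native Google spreadsheets/documents have no downloadUrl;
    # they carry an exportLinks hash (MIME type => URL) instead.
    my $export = $item->exportLinks();
    if ( $export && $export->{'text/csv'} ) {
        $gd->download( $export->{'text/csv'}, $item->title() . '.csv' );
    }
    elsif ( $item->downloadUrl() ) {
        # Ordinary uploaded files can be fetched directly.
        $gd->download( $item, $item->title() );
    }
}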

Pipe multiple files into a zip file

I have several files in a GridFS document store, and what I'd like to do is pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all these files.
Now my question is: how can I give the files a valid filename inside the zip file? I think I need to emulate/fake a file header containing the filename?
Any help is appreciated!
Thanks
I had problems when writing zip files with Node.js not long ago. I ended up doing something similar to what is described in Zip archives in node.js
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even though the description says it lets you create zip files, the moment I read the source code (since documentation is nonexistent) I realized that's just a lie. It only exposes methods for reading.
Using zip by spawning a process, as recommended in the linked answer, seems to be the best way. Something that would work is copying the files to a local folder with whatever names you desire, calling the zip command, and then deleting the files afterwards.
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, although it's better to just install it via npm). The reason I'm not keen on using it is that it isn't very flexible; it seems to have been written in a day and hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
I swear, the day I have an entire free weekend with no work I will write a module that does this as completely as possible. It's silly that there isn't one, and it shouldn't be this much of a struggle. Blah blah, rant over.
Edit:
Not sure if it was there before, but now I've been using the node-compress module (also using gzippo). It works fine.