I need to send bulk email using VBS, but I cannot test with more than a few emails, and hence do not get a good sense of the speed of operation.
I have several plain text files containing email addresses for different groups of people.
I am using Set objMessage = CreateObject("CDO.Message") as the mechanism to send.
My query is which of the following will be the quickest to process, and therefore take the least amount of time to complete:
Do a loop to read all the email addresses one by one and add to objMessage.Bcc variable using the following:
' Build one semicolon-separated Bcc string from the address list
For Each line In listLines
    bccline = bccline & line & ";"
Next
objMessage.Bcc = bccline
Do a loop to read one email address, send the email, and so on until the end of the text file.
I have coded both ways and both work great, but as stated, I have no way of really finding out what is the quickest.
I would appreciate any feedback/suggestions on this.
Regards.
Option #1 will definitely be faster. Each time you execute the send command, the program has to pass the complete message to the mail server/service to process. The fewer times you execute the send command, the sooner the entire program will complete.
Option 1 would be optimal in terms of rapid delivery because you're sending one message to an MTA and the MTA handles delivery to the rest.
Another option
For example, if you needed a specially tailored message for specific individuals or groups of individuals based on the text file, the fastest way to send those out is to fire off an email script with arguments (or named arguments), spawning multiple processes via shell execute without waiting for each one to finish before starting the next.
Named arguments are something like this: /to:user@suchandsuch.com. The tricky part of using that is avoiding characters that make CMD cranky, so build in a function to escape characters that might cause CMD to interpret them differently. I'm all for scale, so this is the method I might use. But if it's just a general broadcast, BCC is your best option.
Imagine I have a program written in whatever language and compiled to run interactively using just a command line interface. Let's imagine this one just for the sake of simplifying the question:
The program first asks the user for their name.
Then, based on some business logic, it may ask for the user's age OR the user's email. Only one of those.
After that it finishes with success or error.
Now imagine that I want to write a script in PowerShell that fills in all that data automatically.
How can I achieve this? How can I run this program, read its questions (outputs) and then provide the correct answer (input)?
If you don't know the questions it will ask ahead of time, this would be tough.
PowerShell scripts are normally linear. Once you start the program from within PowerShell, it will wait for the program to finish before continuing. There are ways to do things in parallel, but they don't interact like that.
Although if you're dealing with something like a website, making the first call gives a response (completing the command). You could match the response to select the proper value.
Or if the program is local and allows command line parameters, you could do that.
I'm gradually working my way up the perl learning curve (with thanks to contributors to this REALLY helpful site), but am struggling with how to approach this particular issue.
I'm building a Perl utility which utilises three (C++) third-party programmes. Normally these are run: A $file_list | B -args | C $file_out
where process A reads multiple files, process B modifies each individual file and process C collects all input files in the pipe and produces a single output file, with a null input file signifying the end of the input stream.
The input files are large(ish) at around 100 MB and around 10 in number. The processes are CPU intensive and the whole process needs to be applied to thousands of groups of files each day, so the simple solution of reading and writing intermediate files to disk is simply too inefficient. In addition, the process above is only part of a processing sequence, where the input files are already in memory and the output file also needs to be in memory for further processing.
There are a number of solutions to this already well documented and I have a prototype version utilising IPC::Open3(). So far, so good. :)
However - when piping each file from process A through process B, I need to modify the arguments to process B for each input file without interrupting the forward flow to process C. This is where I come unstuck and am looking for some suggestions.
As further background:
Running in Ubuntu 16.04 LTS (currently within VirtualBox) and Perl v5.22.1
The programme will run on (and within) a single machine by one user (me !), i.e. no external network communication or multi user or public requirement - so simplicity of programming is preferred over strong security.
Since the process must run repeatedly without interruption, robust/reliable I/O handling is required.
I have access to the source code of each process, so that could be modified (although I'd prefer not to).
My apologies for the lack of "code to date", but I thought the question is more one of "How do I approach this?" rather than "How do I get my code to work?".
Any pointers or help would be very much appreciated.
You need a fourth program (call it D) that determines what the arguments to B should be and executes B with those arguments and with D's stdin and stdout connected to B's stdin and stdout. You can then replace B with D in your pipeline.
What language you use for D is up to you.
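For illustration, here is a minimal Perl sketch of what such a wrapper D could look like; decide_args is just a placeholder for whatever logic picks B's options, and exec keeps D's stdin/stdout so B slots straight into the existing pipeline.

#!/usr/bin/perl
# Sketch of wrapper "D": choose B's arguments, then replace this process with B.
# Because exec() keeps the current stdin and stdout, B still reads from and
# writes to whatever D was connected to in the pipeline.
use strict;
use warnings;

my @b_args = decide_args();            # placeholder for the per-run logic
exec 'B', @b_args or die "could not exec B: $!";

sub decide_args {
    # e.g. inspect D's own @ARGV, an environment variable, or a config file
    return @ARGV ? @ARGV : ('-args');
}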
If you're looking to feed output from different programs into the pipes, I'd suggest what you want to look at is ... well, pipe.
This lets you set up a pipe - one that works much like the ones you get from IPC::Open3, but you have a bit more control over what you read/write into it.
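As a rough sketch (assuming the child program is the B from the question), pipe plus fork/exec looks something like this; the parent keeps the write end and feeds it whatever data it already holds in memory:

use strict;
use warnings;

pipe(my $reader, my $writer) or die "pipe failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                    # child: run B with its stdin on the pipe
    close $writer;
    open STDIN, '<&', $reader or die "could not dup pipe onto STDIN: $!";
    exec 'B', '-args' or die "could not exec B: $!";
}

close $reader;                      # parent keeps only the write end
print {$writer} "data for B\n";     # feed whatever is already in memory
close $writer;                      # closing signals EOF to B
waitpid $pid, 0;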
I'm on a Red Hat installation.
What I'm running into is that:
The Perl script looks into the mailbox using modules to look for message #0 or the delivered mail, but it isn't there yet.
If I make a COPY of the mail using the c flag, I still get the same response: it does not deliver it to the mailbox.
So what I need to know is a procmail recipe which delivers it to the mailbox then fires the script to process the delivered email.
Thanks
Rob
As I noted in a comment above, this seems like a bad way to do this. But, you
should be able to use something like:
:0c:
* Whatever condition
/path/to/mbox
:0ahi
| /path/to/perl/script
or equivalently
:0
* whatever condition
{
:0c:
/path/to/mbox
:0ahi
| /path/to/perl/script
}
The first recipe will cause the message to be delivered to the mbox file, but
because the c flag is used processing will continue after that recipe. The
a flag on the following recipe specifies that it will only be used if the
preceding recipe was used and completed successfully.
The h flag on that recipe specifies that only the headers should be sent to
the perl script. This probably won't affect it, since you say that it's
getting the message from the mbox file rather than from the pipe; but it does
reduce the amount of data that needs to be sent over the pipe.
The i flag specifies that procmail shouldn't complain if it can't send
everything to the script. Since the script likely isn't reading from its
standard input, it's possible that the pipe buffers would fill up causing
procmail to receive a write error; although this is very unlikely to happen
when sending only the headers of the message.
If you really need to use the Mail::Box family of modules for processing the
messages, rather than something that could parse a message from the standard
input, I'd suggest that you at least use a Maildir mail box rather than mbox.
There is no real specification for the mbox format, and there are many
different interpretations of how it should work. The differences tend to be
subtle, so things could seem to be working fine until you receive a message
which happens to trigger an incompatibility between different implementations
(such as having a line starting with From). That's not even getting into
the issues with locking of mbox files.
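If you do go the Maildir route, a minimal Mail::Box sketch looks like this (the folder path is only an assumption; adjust it to wherever procmail delivers):

use strict;
use warnings;
use Mail::Box::Manager;

my $mgr    = Mail::Box::Manager->new;
my $folder = $mgr->open(folder => '/home/rob/Maildir', access => 'r');  # assumed path

for my $msg ($folder->messages) {
    print $msg->subject, "\n";      # or whatever processing the script does
}

$folder->close;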
So I was able to come up with a simple, although probably not the best, answer. Since I have control over when the emails are coming in, I decided to remove the lock on the process and it worked fine.
So without the second colon and the "c" option it now runs the script and can see the email in the mailbox.
Whew...what a pain...two days wasted on a simple solution.
I've set up a Perl script to process incoming emails through qmail. However, I find that email messages are hitting the script over and over again, every few minutes for several hours.
Am I supposed to set a return value in the script or do something else to indicate to the sender that the email has been received successfully?
Depends on the MTA but yes, generally any nonzero exit code signals an error. Qmail uses different codes than Sendmail and compatibles, in the 100-something range.
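As a rough sketch of how that usually looks in a qmail delivery script (following the qmail-command convention of exit 0 for success, 111 for a temporary failure that should be retried, and 100 for a permanent failure):

#!/usr/bin/perl
# qmail pipes the message to this script's STDIN and reads the exit code:
#   0 = delivered, 111 = temporary error (retry later), 100 = permanent error.
use strict;
use warnings;

my $delivered = eval {
    my $message = do { local $/; <STDIN> };   # slurp the whole message
    die "empty message\n" unless defined $message and length $message;
    # ... actual processing goes here ...
    1;
};

exit 0 if $delivered;   # success: qmail stops redelivering
exit 111;               # otherwise ask qmail to queue it and try again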
I have a simple web form with a little JS script that sends form values to a text box. This combined value becomes a database query.
This will be sent to dsmadmc (the TSM administrative command line).
How can I use perl to keep the dsmadmc process open for consecutive input/output without the dsmadmc process closing between each input command sent?
And how can I capture the output - this is to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
Probably IPC::Open2 could help. It allows you to read from and write to both the input and output of an external process.
Beware of deadlocks though (i.e. situations where both your code and the app wait for their counterpart). You might want to use IO::Select to handle that.
P.S. I don't know how these modules behave on windows (.exe?..), but from a quick google search it looks like they are compatible.
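For example, a rough IPC::Open2 sketch that keeps a single dsmadmc session open across several commands; the -id/-password options, the "quit" command, and the two-second read window are assumptions you would adapt to your setup:

use strict;
use warnings;
use IPC::Open2;
use IO::Select;

# Start the admin CLI once and keep it running for the whole exchange.
my $pid = open2(my $from_cmd, my $to_cmd,
                'dsmadmc', '-id=admin', '-password=secret');

my $sel = IO::Select->new($from_cmd);

sub send_command {
    my ($cmd) = @_;
    print {$to_cmd} "$cmd\n";
    my $reply = '';
    # Collect whatever output arrives within a short window; a real version
    # would watch for the dsmadmc prompt or an end-of-output marker instead.
    while ($sel->can_read(2)) {
        my $read = sysread $from_cmd, my $buf, 4096;
        last unless $read;
        $reply .= $buf;
    }
    return $reply;
}

my $output = send_command('query session');   # example admin command
print $output;                                 # hand this back to the web page

print {$to_cmd} "quit\n";                      # assumed exit command
close $to_cmd;
waitpid $pid, 0;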