PROCMAIL: How do I get a Perl script to execute AFTER the mail has been delivered to the mbox?

I'm on a Red Hat installation. What I'm running into is this:
The Perl script looks into the mailbox, using Perl modules, for message #0 (the delivered mail), but it isn't there yet.
If I make a copy of the mail using the c flag, I get the same result: the mail has not been delivered to the mailbox.
So what I need is a procmail recipe that delivers the mail to the mailbox and then fires the script to process the delivered email.
Thanks,
Rob

As I noted in a comment above, this seems like a bad way to do this. But you should be able to use something like:
:0c:
* whatever condition
/path/to/mbox

:0ahi
| /path/to/perl/script
or equivalently
:0
* whatever condition
{
  :0c:
  /path/to/mbox

  :0ahi
  | /path/to/perl/script
}
The first recipe will cause the message to be delivered to the mbox file, but because the c flag is used, processing will continue after that recipe. The a flag on the following recipe specifies that it will only be used if the preceding recipe matched and completed successfully.
The h flag on that recipe specifies that only the headers should be sent to the Perl script. This probably won't affect your script, since you say it's getting the message from the mbox file rather than from the pipe, but it does reduce the amount of data that needs to be sent over the pipe.
The i flag specifies that procmail shouldn't complain if it can't send everything to the script. Since the script likely isn't reading from its standard input, it's possible that the pipe buffers would fill up, causing procmail to receive a write error, although this is very unlikely to happen when sending only the headers of the message.
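To make that concrete, here is a minimal sketch of a receiving script that works only from the piped-in headers; the header parsing is standard, but what you do with the Message-ID afterwards is up to your application:
#!/usr/bin/perl
use strict;
use warnings;

# With procmail's h flag, STDIN carries only the header block:
# everything up to the first blank line.
my (%hdr, $last);
while (my $line = <STDIN>) {
    last if $line =~ /^\s*$/;
    if ($line =~ /^(\S+):\s*(.*)/) {
        $last = lc $1;
        $hdr{$last} = $2;
    }
    elsif ($line =~ /^\s+(.*)/ && defined $last) {
        $hdr{$last} .= " $1";    # folded (continuation) header line
    }
}

# The Message-ID can then be used to find the just-delivered message
# in the mailbox, rather than assuming it is message #0.
my $id = $hdr{'message-id'} || '';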
If you really need to use the Mail::Box family of modules for processing the
messages, rather than something that could parse a message from the standard
input, I'd suggest that you at least use a Maildir mail box rather than mbox.
There is no real specification for the mbox format, and there are many different interpretations of how it should work. The differences tend to be subtle, so things can seem to be working fine until you receive a message that happens to trigger an incompatibility between implementations (such as a body line starting with "From "). That's not even getting into the issues with locking of mbox files.
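As a rough sketch of what the Maildir route could look like with Mail::Box (the folder path is a placeholder, and the option names should be checked against the Mail::Box documentation):
use strict;
use warnings;
use Mail::Box::Manager;

my $mgr    = Mail::Box::Manager->new;
my $folder = $mgr->open(folder => '/home/rob/Maildir', access => 'r')
    or die "cannot open folder";

# In a Maildir, each message is its own file, so there are no mbox
# locking games; the newest message is simply the last in the folder.
my @messages = $folder->messages;
my $newest   = $messages[-1];
print $newest->subject, "\n";

$folder->close;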

So I was able to come up with a simple, though probably not ideal, answer. Since I have control over when the emails are coming in, I decided to remove the lock on the process, and it worked fine.
Without the second colon and the "c" option, it now runs the script, and the script can see the email in the mailbox.
Whew... what a pain. Two days wasted on a simple solution.

Related

Bulk email using VBS

I need to send bulk email using VBS, but I cannot test with more than a few emails, so I can't get a good measure of the speed of operation.
I have several plain text files containing email addresses for different groups of people.
I am using Set objMessage = CreateObject("CDO.Message") as the mechanism to send.
My query is which of the following will be the quickest to process, and therefore take the least amount of time to complete:
Do a loop to read all the email addresses one by one and add them to objMessage.Bcc using the following:
For Each line In listLines
    bccline = bccline & line & ";"
Next
objMessage.Bcc = bccline
Do a loop to read one email address, send the email, and so on until the end of the text file.
I have coded both ways and both work great, but as stated, I have no way of really finding out what is the quickest.
I would appreciate any feedback/suggestions on this.
Regards.
Option #1 will definitely be faster. Each time you execute the send command, the program has to pass the complete message data to the mail server/service for processing. The fewer times you execute the send command, the faster the entire program will complete.
Option 1 would be optimal in terms of rapid delivery because you're sending one message to an MTA and the MTA handles delivery to the rest.
Another option:
If you need a specially tailored message for specific individuals or groups of individuals based on the text file, the fastest way to send those out is to spawn multiple processes with ShellExecute, without waiting for one to finish before starting the next, each firing off an email script with arguments or named arguments.
Named arguments look something like this: /to:user@suchandsuch.com. The tricky part of using them is avoiding characters that make CMD cranky, so build in a function to escape characters that CMD might interpret differently. I'm all for scale, so this is the method I might use. But if it's just a general broadcast, BCC is your best option.

BFX: field too large for a data item; increase -S

I am getting the above error when trying to run a script to produce a report. It is a pre-existing script that has run successfully many times before. Research has told me that it is something to do with the stack size. I'm running 10.2B02 in WRQ Reflections. Can anyone tell me what this statement means and how I can look up the value of my -S?
Thanks,
Paul
-s is a client startup parameter. You mention "Reflections", so you are probably using a character terminal session. The -s parameter is on the command line used to start Progress (which might be inside a script). If there is a -pf somefile.pf on the command line, then it is inside that "parameter file". If it is not specified, the default value is 40. The maximum value is limited by available memory, but setting it in the hundreds or even the thousands is not unheard of.
You can also get the startup values by sending a SIGUSR1 to the _progres process that the session is running, i.e. kill -USR1 <pid>. That will (safely) create a protrace.<pid> file that includes the startup parameters and a 4GL stack trace. The file will appear in either the current directory, the home directory, or the temp-file directory (I forget which; just look for protrace*).
This error usually means that your code is manipulating a field that is too large. (Like the error says.) That might be for a lot of reasons.
One common possibility is string concatenation in a loop.
Or you might be calling lots of sub-procedures and passing parameters around.
If "nothing has changed" in the code then it probably just means that some data structure has grown slightly larger over time and increasing -s is really no big deal so long as it solves the problem.
If you keep having to increase it then it is more likely that you have some sort of coding issue. Maybe you're passing things by value that ought to be passed by reference, or maybe you have runaway recursion. Or something else. You'd need to provide a lot more detail to say for sure.
It is also possible (but unlikely) that you have a corrupt data record that appears to have a field in it that is too large. You could run "proutil dbName -C dbanalys" as an initial step to see if that is true.
Part of the error message is non-standard -- I'm not certain which log file it is coming from or how it got there (applications can write their own messages), but it seems that it might have something to do with trying to send an e-mail. So I'd be suspicious that either the list of recipients got too long or the body of the e-mail is too large.

How can I use Perl for bi-directional communication with dsmadmc.exe?

I have a simple web form with a little JS script that sends the form values to a text box. This combined value becomes a database query.
This will be sent to dsmadmc (the TSM administrative command line).
How can I use Perl to keep the dsmadmc process open for consecutive input/output, without the dsmadmc process closing between each input command sent?
And how can I capture the output? This is to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
IPC::Open2 could probably help. It allows you to read from and write to both the input and the output of an external process.
Beware of deadlocks, though (i.e. situations where both your code and the app are waiting for their counterpart). You might want to use IO::Select to handle that.
P.S. I don't know how these modules behave on Windows (.exe?..), but from a quick Google search it looks like they are compatible.
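A minimal sketch of the IPC::Open2 approach; the dsmadmc options and the prompt pattern are assumptions you would need to adapt to your TSM setup:
use strict;
use warnings;
use IPC::Open2;

# Keep a single dsmadmc session open; open2 croaks if the command
# cannot be started. The -id/-password options are assumptions here.
my $pid = open2(my $from_dsm, my $to_dsm,
                'dsmadmc', '-id=admin', '-password=secret');

sub run_command {
    my ($cmd) = @_;
    print $to_dsm "$cmd\n";    # open2 enables autoflush on this handle
    my @output;
    while (my $line = <$from_dsm>) {
        # Assumed prompt pattern. If dsmadmc's prompt is not
        # newline-terminated, this read will block; that is the
        # deadlock case where IO::Select and sysread come in.
        last if $line =~ /^tsm:/;
        push @output, $line;
    }
    return @output;
}

my @result = run_command('query session');
print @result;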

How can I control an interactive Unix application programmatically through Perl?

I have inherited a 20-year-old interactive command-line unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record has a very predictable pattern of keying in commands, reading responses, keying in further commands, etc. However, some record creation operations will result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT set to named pipes, then sends the target application the sequence of commands to create a record with the required parameters, and finally instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Another application. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find, though, are expect, which does not seem to be maintained, and chat, which I recall from ages ago and which seems to do more or less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is: what approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read it seems like there are a lot of potential problems if I go down this path.
Thanks in advance.
I'd definitely stick with Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named-pipe approach.
Note also that with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the section "Bidirectional Communication with Another Process" in perlipc for more details.
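Something along these lines, where the application path, the prompts, and the commands are all placeholders for whatever your legacy application actually says:
use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('/path/to/legacy-app')
    or die "Cannot spawn: $!";

# Wait up to 10 seconds for a (hypothetical) prompt. Expect runs the
# program under a pseudo-tty, so VT100 escape sequences may appear in
# the output; match loosely or strip them before matching.
$exp->expect(10, '-re', 'Record ID:')
    or die "never saw the record prompt";
$exp->send("ABC123\r");

$exp->expect(10,
    [ qr/already exists/ => sub { my $self = shift; $self->send("cancel\r"); } ],
    [ qr/record created/i => sub { } ],
);

$exp->soft_close;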
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option, wanting to have a full and familiar programming language for managing the process and having confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, in either its Tcl or Perl implementation, would be my first attempt. If you are seeing odd sequences in the output because the application is doing odd terminal things, just filter those from the output before you do your matching.
With named pipes, you're going to end up reinventing Expect anyway.

Do Perl CGI programs have a buffer overflow or script vulnerability for HTML contact forms?

My hosting company says it is possible to fill an HTML form text input field with just the right amount of garbage bytes to cause a buffer overflow/resource problem when used with an Apache HTTP POST to a CGI-bin Perl script (such as NMS FormMail).
They say a core dump occurs, at which point an arbitrary script (stored as part of the input field text) can be run on the server, which can compromise the site. They say this isn't something they can protect against in their Apache/Perl configuration; it's up to the Perl script to prevent it by limiting the number of characters in the posted fields. But it seems like the core dump could occur before the script can limit field sizes.
This type of contact form and method is in wide use by thousands of sites, so I'm wondering if what they say is true. Can you security experts out there enlighten me: is this true? I'm also wondering if the same thing can happen with a PHP script. What do you recommend for a safe site contact script/method?
I am not sure about the buffer overflow, but in any case it can't hurt to limit the POST size. Just add the following at the top of your script:
use CGI qw/:standard/;
$CGI::POST_MAX=1024 * 100; # max 100K posts
$CGI::DISABLE_UPLOADS = 1; # no uploads
Ask them to provide you with a specific reference to the vulnerability. I am sure there are versions of Apache where it is possible to cause buffer overflows by specially crafted POST requests, but I don't know any specific to NMS FormMail.
You definitely should ask for specifics from your hosting company. There are a lot of unrelated statements in there.
A "buffer overflow" and a "resource problem" are completely different things. A buffer overflow suggests that you will crash perl or mod_perl or httpd themselves. If this is the case, then there is a bug in one of these components, and they should reference the bug in question and provide a timeline for when they will be applying the security update. Such a bug would certainly make Bugtraq.
A resource problem, on the other hand, is a completely different thing. If I send you many megabytes in my POST, then I could eat an arbitrary amount of memory. This is resolvable by configuring the LimitRequestBody directive in httpd.conf; for example, LimitRequestBody 102400 caps request bodies at 100 KB. The default is unlimited. This has to be set by the hosting provider.
As for the claim that "a core dump occurs at which point an arbitrary script (stored as part of the input field text) can be run on the server":
Again, if this is creating a core dump in httpd (or mod_perl), then it represents a bug in httpd (or mod_perl). Perl's dynamic and garbage-collected memory management is not subject to buffer overflows or bad pointers in principle. This is not to say that a bug in perl itself cannot cause this, just that the perl language itself does not have the language features required to cause core dumps this way.
By the time your script has access to the data, it is far too late to prevent any of the things described here. Your script of course has its own security concerns, and there are many ways to trick perl scripts into running arbitrary commands. There just aren't many ways to get them to jump to arbitrary memory locations in the way that's being described here.
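To illustrate the "running arbitrary commands" point with the classic example (the parameter name is made up): Perl's two-argument open will happily treat user input as a shell pipeline, while the three-argument form will not:
use strict;
use warnings;
use CGI;

my $q    = CGI->new;
my $name = $q->param('attachment');    # hypothetical field name

# UNSAFE: with two-argument open, input such as "rm -rf * |" is
# treated as a command to run, not as a filename:
# open(my $bad, $name);

# SAFE: three-argument open treats $name strictly as a filename.
open(my $fh, '<', $name) or die "cannot open '$name': $!";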
FormMail has been vulnerable to such attacks in the past, so I believe your ISP was using it to illustrate the point. Bad practices in any Perl script could lead to such woe.
I recommend ensuring the Perl script verifies all user input where possible. Otherwise, only use trusted scripts and keep them updated.
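As a sketch of what that verification might look like in a contact script (the field name, pattern, and limits are illustrative; taint mode via -T is the standard extra safety net):
#!/usr/bin/perl -T
use strict;
use warnings;
use CGI;

$CGI::POST_MAX        = 100 * 1024;   # refuse oversized posts outright
$CGI::DISABLE_UPLOADS = 1;            # no uploads

my $q     = CGI->new;
my $email = $q->param('email') || '';

# Whitelist-validate and capture to untaint. The pattern is
# deliberately strict; loosen it only as far as you actually need.
my ($clean_email) = $email =~ /\A([\w.+-]+\@[\w.-]+\.\w+)\z/
    or die "invalid email address";

# $clean_email is now untainted and safe to pass onward.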