How to stop postfix MAILER-DAEMON emails

I am running Ubuntu 12.04 with Postfix
Late yesterday, I added a package (ispconfig3) that modified my postfix configuration and also added an entry to the root crontab that was invoking a script.
At around 11 PM last night, I uninstalled that package and went to bed. The uninstall deleted the script and its directory OK, but it did not clean up the crontab entry.
Since cron had trouble invoking the script, it sent root@xx.org an email. But ispconfig3 had modified my postfix configuration, so there was no mail transport capability, and a MAILER-DAEMON email was placed in the mail queue.
Overnight (I'm guessing here!), cron wakes up every minute and tries to do the same thing. So by 7:00 AM there are now 1100+ emails in the mail queue. But since postfix is messed up, I can't see them.
At around 8:00ish I realize that something is wrong with my email setup. I check the postfix configuration, back out the changes, and now I can get emails OK. I can send them, receive them, etc.
Then the flurry of emails starts. Every minute or so, I get around 30 MAILER-DAEMON emails indicating that cron couldn't invoke the script. I check:
sudo crontab -l
I see the stale command for the non-existent script. I clear it out:
sudo crontab -e
I expect the emails to stop.
They don't.
In fact, every minute they seem to be increasing in number. I then spend a few hours looking at a ton of configuration files to try to figure out what is going on. By 11:00ish or so, it's up to 50+ emails coming in every minute.
I finally realized that this stream of emails was occurring because of the failures the night before, and that it was going to go on for 7 days. The "7d" comes from a postfix configuration setting. (BTW, I changed that to "2d", i.e. only a couple of days.)
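For reference, a hedged sketch of where that lifetime setting presumably lives (the exact parameter names are my assumption based on the standard postfix queue settings; check your own main.cf):
# Shorten how long undeliverable mail (and bounces) sit in the queue.
sudo postconf -e 'maximal_queue_lifetime = 2d' 'bounce_queue_lifetime = 2d'
sudo postfix reload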
In any case, I solved it. I'm adding this post so others can save themselves some time. See below.

I finally hit on the idea of looking at the mail queue.
A bit of googling and I found this site:
https://www.garron.me/en/linux/delete-purge-flush-mail-queue-postfix.html
I tried
postqueue -p
which listed all of the "(mail transport unavailable)" emails:
... snip ...
-- 1104 Kbytes in 1185 Requests.
I then did:
postqueue -f # this flushes the mail queue
postqueue -p
Mail queue is empty
And all of a sudden the email flurry ended.
Note: the website above said to use:
postfix -f
That did not work for me. A bit more googling turned up the postqueue command.
Another note: I was worried there were emails in that mail queue that were not "mail transport unavailable", so I double-checked all 1185 emails to ensure it was OK to purge them.
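If you are in the same situation but do have legitimate mail queued, a hedged alternative is to delete only the bounce messages rather than flushing everything. This is adapted from the selective-deletion example in the postsuper(1) man page; the sender field position and the MAILER-DAEMON match are assumptions to verify against your own postqueue -p output:
# Delete only queued messages whose sender is MAILER-DAEMON (i.e. bounces).
mailq | tail -n +2 | grep -v '^ *(' \
  | awk 'BEGIN { RS = "" } { if ($7 == "MAILER-DAEMON") print $1 }' \
  | tr -d '*!' \
  | sudo postsuper -d -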

Related

Magento Cart, Newsletter

Magento ver. 1.7.0.0
We are currently using Magento and want to send newsletters out. It has this built in. We have created templates, but when a newsletter gets moved into the queue it sticks there and does not get sent out at the appropriate time. There is no other way of sending the newsletter out immediately, and there are no error messages to give you a clue as to why this is happening.
Has anyone come across this and found a fix?
You need to enable cron for scheduled tasks in Magento. Log in to your server over SSH (PuTTY) and open your crontab:
crontab -e
and add this line to it:
*/5 * * * * php -f /path/to/magento/shell/cron.php
Obviously change /path/to/magento/ to the actual path on the server.
You can now also enable log cleaning in the admin, which clears out a lot of DB records that slow Magento down.
Hope this helps
T

conditionally sending cron job emails

I'm not even sure if what I want is possible, but I'd like to run a Cron job where an email is only sent in certain conditions. I know that you can prevent mail from being sent at all by setting MAILTO to an empty string in the crontab file, but I've searched in several different ways, and can't find anything about sending email conditionally. My end goal is to run a Cron job that periodically checks whether the webserver is running, and if not, restart it. I only want an email if the webserver has to be restarted. I'm writing my Cron jobs in Perl. Is there a Perl command I can use within the job script that will disable the email in certain cases? Thanks for any help you can give me.
Cron jobs will send emails if the command you are running generates output. If you write your script to only send output to STDERR/STDOUT when you want an email, that should accomplish your goal.
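A hedged sketch of that idea as a crontab entry (the service name, interval, and restart command are illustrative assumptions; the point is that nothing is printed unless a restart actually happened, so cron only sends mail then, regardless of what language the job itself is written in):
# Check every 5 minutes; stay silent unless the web server had to be restarted.
*/5 * * * * pgrep -x apache2 > /dev/null || { service apache2 restart && echo "apache2 was down; restarted at $(date)"; }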
There are two ways to send mail from cron jobs:
From the program that the cron daemon has started,
From the UNIX/Linux mechanism that sends mail if a program started as a cron job has written anything to STDOUT or STDERR.
I don't recommend the second approach. It is inflexible: you can't send mail to different recipients depending on which alert happened.
Using the second way is also rather bad design. Cron jobs should redirect all their stdout and stderr to a separate log file per job, for troubleshooting.
Perl has good facilities for sending mail, e.g. the MIME::Lite module.
That module is not a core one, so you might need to ask your sysadmin to install it if it's not available.
If you use the first way, your issue is easy to solve with Perl logic: just send the required mail from your Perl program after it has restarted the web server.

Wait for system to sync time before performing another task

I'm using a Raspberry Pi, and upon startup it sends an e-mail with the time and an IP address. The problem is that the time is not correct; it's the time from when the system was last shut down. When I log in through ssh and run the date command, I get the correct time. In other words, the e-mail is sent before the system has updated its time.
I was thinking of automatically running ntpdate on boot, but after reading up on it, that seems like a bad idea due to the many risks of error.
So, can I somehow wait until the time has been updated before continuing in a script?
There is a tool included in the ntp reference implementation for this very purpose. The utility has a rather cryptic name: ntp-wait. Five minutes with the man page and you will be all set.
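For example, a hedged sketch of how it might be used in the boot-time script (the retry count, sleep interval, recipient address, and mail command are illustrative assumptions; check the ntp-wait man page for the exact options on your version):
# Wait up to ~5 minutes (60 tries, 5 s apart) for a synchronized clock, then send.
ntp-wait -v -n 60 -s 5 && \
  echo "Booted $(date), IP: $(hostname -I)" | mail -s "Pi is up" you@example.com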

KSH script won't email when nohup

I have a unique issue: I am in a UNIX environment and have a ksh script that ssh's to multiple sites, executes some code, gets back a response, and then emails that response to an email address.
The script works perfectly when I run it, but since it must run for several hours I wish to nohup the script.
Here is where the problem is: when I nohup the script, the email is not sent. I have scoured the boards looking for a reason or solution, to no avail. If someone could point me in the right direction I would greatly appreciate it.
Here is my mail portion of the script:
mail -s "subject" email#address.com < /usr/etc/bin/mydir/infofile.out &&
rm -f infofile.out
exit;
EDIT: my environment is AIX 6.1.7.1
I finally figured out the answer, and even though I was being dumb, I feel I have a responsibility to answer anyway, just in case someone else runs across this issue.
It turns out that when I nohup my script it DOES send the email correctly. It's just that by nohup-ing and logging out, the email is sent from the UNIX mail utility's default email address, and in my environment that address sends out hundreds of useless alerts, most of which I have filtered in Outlook to go to a trash folder. The email I was sending ended up in that trash folder.
Thanks to those who responded, especially shellter; your recommendation to use shell debugging is what let me know that it was being sent from that default mail account.
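For anyone chasing something similar, a hedged sketch of the shell-debugging approach mentioned above (the script name and trace file are illustrative assumptions):
# Re-run under nohup with ksh tracing so every command, including the mail
# invocation, is visible in the trace file.
nohup ksh -x ./myscript.ksh > myscript.trace 2>&1 &
tail -f myscript.trace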

Best way to send email when PHP process dies

I wrote a quick PHP page to handle 502 requests. Nginx will redirect to this page when a 502 is encountered, and an email is fired off.
The problem is that most of the time the 502 is encountered because PHP has died, so writing to the DB and sending an email using PHP is no longer possible. Tweaks to PHP-FPM settings have done a lot to help (restarting PHP, etc.), but I'd still like a fall-back.
There are numerous ways to send an email outside of PHP, but I am curious what others out there are doing with good success? I'd like to keep it simple for configuration (i.e. not have yet another complex dependency to worry about on the servers) and reliability reasons.
Googling and searching SO didn't turn up much, probably because "dies" and "fail" bring back a lot of false positives for my scenario.
What about using a cronjob (bash based) to parse the error_log file periodically (every x hours) and send an email (mutt/mail) when it finds something like "resuming normal operations" in the last period (x hours)? I think it is simple and effective...
[Thu Dec 27 14:37:52 2012] [notice] caught SIGTERM, shutting down
[Thu Dec 27 14:37:53 2012] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.6-2~precise+1 configured -- resuming normal operations
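A hedged sketch of that cron-driven check (the log path, state file, match string, and recipient address are illustrative assumptions; the answer above does not include an actual script):
#!/bin/sh
# Scan only the log lines added since the previous run and mail any restart
# markers found. Assumes the log has not been rotated between runs.
LOG=/var/log/apache2/error.log
STATE=/var/tmp/php_restart_check.pos
LAST=$(cat "$STATE" 2>/dev/null || echo 0)
NEW=$(tail -n +"$((LAST + 1))" "$LOG" | grep 'resuming normal operations')
wc -l < "$LOG" > "$STATE"
if [ -n "$NEW" ]; then
    echo "$NEW" | mail -s "web server restarted" admin@example.com
fi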
UPDATE:
@Brian: As @takeshin says, cron jobs can run even every second if you want, but some sysadmins could bite you... :|
Here is what I've ended up doing. I've not rolled it out to our prod servers yet, but all testing thus far looks good.
Nginx does not support CGI natively, so you need another means to do it. thttpd fit the bill nicely. There is a good write-up on the nginx wiki showing how to use it.
I configured thttpd with the following:
dir=/var/www/htdocs
user=thttpd
logfile=/var/log/thttpd.log
pidfile=/var/run/thttpd.pid
port=8000
cgipat=**.cgi
And added this to my nginx config:
error_page 502 @thttpd;
location @thttpd {
include proxy.include;
proxy_pass http://127.0.0.1:8000;
}
Finally, I created a basic CGI script that calls PHP on the command line and passed in my already-written PHP script. This was an ideal solution for me because the script was already set up to log to our alerts table and fire off an email. This is also real-time, as the script will execute as soon as nginx returns a 502 code (subsequent 502s will not hammer me with emails, per the logic of the script).
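The CGI wrapper itself can be tiny. A hedged sketch (the file locations, PHP binary path, and placeholder HTML are illustrative assumptions, not the author's actual script):
#!/bin/sh
# 502.cgi -- return a minimal page and run the existing PHP alert script.
echo "Content-Type: text/html"
echo ""
echo "<h1>We are experiencing a problem and have been notified.</h1>"
/usr/bin/php -f /var/www/htdocs/alert_502.php > /dev/null 2>&1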
I was able to run some simulation tests by forcing nginx to return a 502 (see more here).
I'm going to continue tweaking this, but I'm pretty happy with the relative ease of deploying it and that I could re-use existing code.
We have a dual solution.
We use a shell script to send out email notifications if PHP dies. In the shell script we check whether the PHP service is running, and if it is not, we fire off a shell command to send an email.
This is all a few lines of shell script. Not too hard.
Of course, set it up in cron.
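A hedged sketch of that kind of check, run from cron (the process name, message, and recipient address are illustrative assumptions; the answer does not show its actual script):
#!/bin/sh
# Mail a notification if no php-fpm process is running.
if ! pgrep php-fpm > /dev/null; then
    echo "php-fpm is not running on $(hostname)" | mail -s "PHP is down" admin@example.com
fi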