How can I fix "/etc/cron.daily/logrotate: gzip: stdin: file size changed while zipping"? - email

In the last few days I have been getting a daily mail from cron's logrotate task:
/etc/cron.daily/logrotate:
gzip: stdin: file size changed while zipping
How can I fix it?
Thanks,
Gian Marco.

Here's a blog post in French which gives a solution.
In English, you can read this bug report.
To summarise:
First, add the --verbose option to the script /etc/cron.daily/logrotate so that the next run gives more information and you can identify which log rotation causes the problem.
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate --verbose /etc/logrotate.conf
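If you don't want to wait for the next cron run, you can also invoke the cron script by hand as root and watch the verbose output directly:
sudo /etc/cron.daily/logrotate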
Next, add the delaycompress option to the logrotate configuration.
For example, here is the nginx logrotate configuration in /etc/logrotate.d/nginx with delaycompress added:
/var/log/nginx/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
...
}
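You can then do a dry run against just this snippet to check that the new options are picked up, without actually rotating anything:
sudo /usr/sbin/logrotate --debug /etc/logrotate.d/nginx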

upstart will close (and reopen) its log file when it notices that the file is deleted. However, if you look at what gzip does, you see that it doesn't delete the input file until after it's done writing the output file. That means there is always a race condition in which log lines written while gzip is running might be lost.
You can disable the warning using gzip --quiet, but that doesn't hide the fact that you might still lose log lines.
This means that delaycompress is not a generic fix for this. It's a specific fix to a specific problem.
The real solution is probably a combination of delaycompress and being able to send a signal to the process. It will make the race condition go away in practice (unless you rotate multiple times per second :) ).
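As a rough illustration, a stanza along these lines combines the two. This is only a sketch: it assumes a daemon such as nginx that reopens its log files on SIGUSR1, and the pid file path is an assumption.
/var/log/nginx/*.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # ask the daemon to reopen its logs after the rename
        [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)" || true
    endscript
}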

Related

How can I disable the logging of MongoDB?

My MongoDB Log file is over 16GB, the log file is not important to me. How can I disable the logging of MongoDB and if I do, will I encounter any problems?
It would not be a smart idea to turn off logging. Use Rotate Log Files to rotate them and keep them small.
logrotate is a standard utility on Linux.
The simplest way to rotate the log file is: kill -USR1 $(/usr/sbin/pidof mongod)
My logrotate.conf file looks like this:
missingok
compress
delaycompress
notifempty
create
/var/log/mongodb/mongod.log {
size 10M
rotate 9
sharedscripts
postrotate
kill -USR1 $(/usr/sbin/pidof mongod)
endscript
}
When the log file reaches 10 MB it is rotated. Up to 9 rotated files are kept. logrotate is executed by a daily cron job.
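If you want to test this without waiting for the log to reach 10 MB, you can force a run (assuming the stanza above lives in /etc/logrotate.conf):
sudo logrotate --force --verbose /etc/logrotate.conf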
Though you can disable logging, it is really not recommended.

`entr`: How is an update identified? noatime troubles? And why doesn't -r work with -d?

I have a script regularly appending to a log file. When I use entr (discovered here) to monitor that log file, and I then touch the log, everything works fine, but when the script appends to the file, entr fails. This may be because I have noatime set in my fstab - but that only stops the updating of the access time not the modify time, so this confuses me.
I've checked and while atime is not updating, ctime (ls -lc) definitely is. Could entr really be depending on atime? I use noatime because I have an SSD. So what should I do? I just stumbled on lazytime. Would that solve the problem?
Since monitoring the log file was not working, I tried entr -cdr on the directory of files that are updated (a new file is created) at the same time as the log (the log is in a different directory). entr recognizes when the directory contents change, but the -r does not work. The entr process just ends, saying "entr: directory altered".
Any idea how to fix this or whether I should just go back to inotify, would be appreciated.
Edit: I have written it with inotify now, and the event reported when the log file is written to is, sensibly enough, "MODIFY."
It turns out that entr does not respond to IN_MODIFY events, but only to these (in Linux):
IN_CLOSE_WRITE|IN_DELETE_SELF|IN_MOVE_SELF|IN_CREATE
Also, IN_ATTRIB, but only if the file-mode or inode numbers change.
In BSD/OSX, it's:
NOTE_DELETE|NOTE_WRITE|NOTE_RENAME|NOTE_TRUNCATE|NOTE_ATTRIB
Also, the option -r has no effect in the context of the -d option. It only works when entr is monitoring files.
See the developer's comments. Also, more info on entr.
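If you do go back to inotify, as mentioned in the edit above, a small wrapper around inotifywait will react to plain IN_MODIFY (append) events. This is a minimal sketch, assuming inotify-tools is installed; the log path and the command to run are placeholders:
#!/bin/sh
# Re-run a command every time the log file is modified (appended to)
LOG=/path/to/your.log    # placeholder path
while inotifywait -qq -e modify "$LOG"; do
    ./your-command       # placeholder for whatever entr was supposed to run
done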

logrotate problem with rotating unusual logs

I have a problem with logrotate. The application itself produces the following logs:
xxx.log
and at 23:59 the application changes the log to:
xxx.log.2019-01-05
and so on. Right now I am getting the following in the log directory:
xxx.log
xxx.log.2019-01-01
xxx.log.2019-01-02
etc.
What I want is to rotate the logs that get created at 23:59 and not touch the xxx.log file itself.
I have tried with following logrotate rule:
/var/log/xxx/xxx/xxx.log.* {
daily
missingok
rotate 30
compress
notifempty
copytruncate
nosharedscripts
prerotate
bash -c "[[ ! $1 =~ *.gz ]]"
endscript
}
But logrotate does not compress the log that was created last, and it also adds a .1.gz extension to previously compressed files.
logrotate does not compress the log that was created last
Do you have "delaycompress" defined in /etc/logrotate.conf? Per the logrotate man page:
delaycompress
Postpone compression of the previous log file to the next rotation cycle.
it also adds .1.gz extension
While you're at the aforementioned man page, you should check out what the "extension" option does:
extension ext
Log files with ext extension can keep it after the rotation.
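As a quick check for the first point, you can grep for compression options in both the main configuration and the drop-in directory (a simple check, not part of the man page excerpt above):
grep -rn compress /etc/logrotate.conf /etc/logrotate.d/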

Logrotate not generating all files after run

Hello people
It's my first time using logrotate and I don't know if I'm configuring it the right way. I'm using it with the loggerhead log file on Ubuntu 11.04.
The log is under
/log/loggerhead/loggerheadd.log
My configuration file looks like this
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
}
Then I ran a forced rotation:
logrotate -f /etc/logrotate.d/loggerhead
and that changed the name of the log file to
/log/loggerhead/loggerheadd.log.1
It didn't create the original file (loggerheadd.log) again, so I couldn't run a new forced rotation, because "the file doesn't exist".
So the application is supposed to write its entries to "loggerheadd.log", but when logrotate runs the file gets renamed, so where will the log entries be written? Am I missing something?
Hope you can help me
By default logrotate will just rename your files, so your old file name will be gone.
You can either use the create option, which creates a new, empty log file after the old one has been renamed (the application then needs to reopen it, for example from a postrotate script or by being restarted), or copytruncate, which copies the original file to the rotated name and then truncates the original in place. Either option will do what you're asking for (more details on the man page here).
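For the configuration in the question, that could look like this (a sketch; create with no arguments recreates the log file with the same mode, owner, and group as the rotated one):
/log/loggerhead/loggerheadd.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    create
}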

MongoDB log file growth

Currently my log file sits at 32 meg. Did I miss an option that would split the log file as it grows?
You can use logrotate to do this job for you.
Put this in /etc/logrotate.d/mongod (assuming you use Linux and have logrotate installed):
/var/log/mongo/*.log {
daily
rotate 30
compress
dateext
missingok
notifempty
sharedscripts
copytruncate
postrotate
/bin/kill -SIGUSR1 `cat /var/lib/mongo/mongod.lock 2> /dev/null` 2> /dev/null || true
endscript
}
If you think that 32 megs is too large for a log file, you may also want to look inside to what it contains.
If the logs seem mostly harmless ("open connection", "close connection"), then you may want to start mongod with the --quiet switch. This will reduce some of the more verbose logging.
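For example, you could start the daemon like this (the configuration file path is an assumption):
mongod --quiet -f /etc/mongod.conf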
Rotate the logs yourself
http://www.mongodb.org/display/DOCS/Logging
or use 'logrotate' with an appropriate configuration.
Using logrotate is a good option. However, it will generate two log files, as fmchan commented, and you will have to follow Brett's suggestion to "add a line to your postrotate script to delete all mongod style rotated logs".
Also, copytruncate is not the best option: there is always a window between the copy and the truncate, so some mongod log lines may get lost. You can check the logrotate man page or refer to this copytruncate discussion.
To provide one more option: you could write a script that sends the rotate signal to mongod and removes the old log files. mongologrotate.sh is a simple reference script that I have written. You could set up a simple cron job to call it periodically, for example every 30 minutes.
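The referenced script is not reproduced here, but the idea is roughly the following (a sketch only, not the actual mongologrotate.sh; the log directory and the retention period are assumptions):
#!/bin/sh
# Ask mongod to rotate its log; mongod renames the current log with a timestamp suffix
kill -USR1 "$(/usr/sbin/pidof mongod)"
# Remove rotated logs older than 7 days
find /var/log/mongodb -name 'mongod.log.*' -mtime +7 -delete
A crontab entry such as */30 * * * * /path/to/mongologrotate.sh would then call it every 30 minutes, as suggested above.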