It seems that my log rotation is not working. If I execute logrotate manually, it works as it should. I know logrotate is being run, because I can see it in the logs. Here's what I have:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
/home/www/logs/access_log {
    sharedscripts
    delaycompress
    size 2G
    compress
    dateext
    maxage 30
    postrotate
        /usr/local/bin/apachectl -k graceful
    endscript
}
Any clue?
It's an old question, but someone might find this helpful.
The default rotation settings indicate weekly rotation, but your configuration for the access log below specifies rotation by size.
These settings will not work at the same time; it's either time or size.
For access logs, I would advise daily rotation.
If your logs grow over 2GB during the day, then you will need to run logrotate hourly. This ensures that logrotate will check the size of your log and rotate accordingly.
This, however, implies that you have to add a timestamp to your logs, because you would want to have multiple logs for the same day, right?
There is also a maxsize parameter for logrotate, and it's supposed to work together with time-based rotation (daily, weekly, etc.), but I'm not sure whether it works. You will need to experiment.
Related
I'm trying to rotate logs daily, EXCEPT when my logfile is over 500M, in which case it should be rotated as soon as possible. So I've copied my /etc/cron.daily/logrotate into /etc/cron.hourly, and my config file looks like this:
compress
missingok

/path/to/my/logfile.log {
    daily
    maxsize 500M
    ifempty
    copytruncate
    olddir /path/to/my/archived/logs/
    dateext
}
The dateext is there to make sure the log rotation works the same way as the old log rotation tool (several dedicated custom scripts).
As it is, when logfile.log is over 500M a rotation happens, which creates logfile.log.YYYYMMDD.gz. But if logfile.log reaches 500M again on the same day (which is more than likely to happen), then I have an issue, since logfile.log.YYYYMMDD.gz already exists.
So my question is: is there a way to append the log content to an already existing rotated log file, in order to have only one archived log per day?
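(Not from the original thread, just a sketch of one possible workaround: concatenating gzip files yields a valid gzip stream, so a small script run from cron right after each hourly logrotate pass could fold every freshly rotated fragment into one per-day archive. This assumes switching to dateformat .%s so each rotation first gets a unique name; the script path and the dash in the per-day name are my own choices.)
#!/bin/bash
# /usr/local/bin/merge-daily.sh (hypothetical): fold rotated fragments
# logfile.log.<epoch>.gz into a single per-day archive. Concatenated gzip
# members form a valid gzip stream, so zcat reads the result as one log.
ARCHIVE_DIR=/path/to/my/archived/logs
DAY=$(date +%Y%m%d)

for frag in "$ARCHIVE_DIR"/logfile.log.*.gz; do
    [ -e "$frag" ] || continue                        # no fragments yet
    # the dash in the per-day name keeps it out of the fragment glob above
    cat "$frag" >> "$ARCHIVE_DIR/logfile.log-$DAY.gz"
    rm -f "$frag"
done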
I want logrotate to delete log files that are older than 7 days. I have tried maxage 7, but by not mentioning rotate (count), only one log is kept after rotation. I have also learned that logrotate will only do other things (such as removing old files) if it is able to rotate at least one file.
I have tried something like this:
directory path/log.json {
    size 1M
    compress
    maxage 5
    missingok
}
Any help would be appreciated.
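(A sketch of one common way to get age-based cleanup, not an authoritative answer: keep maxage for the 7-day limit, but also set rotate to a count high enough that the count limit never kicks in before the age limit. The path is the one from the question; note that, as observed above, maxage only removes old files when at least one rotation actually happens.)
directory path/log.json {
    size 1M
    rotate 100        # high count, so the rotate limit never fires first
    maxage 7          # delete rotated logs older than 7 days
    compress
    missingok
}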
I am using logrotate. It works fine with dateformat %s and generates files like somefile.log.1555267419.gz. But I need the extension in a milliseconds date format (somefile.log.1555267419789.gz).
I checked the man page and, as far as I understand, it says it does not support a milliseconds specifier.
Is there any way to add milliseconds to the extension and still be able to rotate the old log files?
/var/log/somelog/*.log {
    compress
    notifempty
    daily
    copytruncate
    size 15M
    dateext
    dateformat .%s
    rotate 20
}
As you have already mentioned, the logrotate docs mention no support for milliseconds. You can get the epoch time using %s, assuming the system clock is set past Sep 9th 2001.
dateformat format_string
Specify the extension for dateext using the notation similar to strftime(3) function. Only %Y %m %d and %s specifiers are allowed. The default value is -%Y%m%d. Note that also the character separating log name from the extension is part of the dateformat string. The system clock must be set past Sep 9th 2001 for %s to work correctly. Note that the datestamps generated by this format must be lexically sortable (i.e., first the year, then the month then the day. e.g., 2001/12/01 is ok, but 01/12/2001 is not, since 01/11/2002 would sort lower while it is later). This is because when using the rotate option, logrotate sorts all rotated filenames to find out which logfiles are older and should be removed.
However, you could try to set an epoch with millisecond granularity by renaming the rotated file manually from a logrotate script. Here is an example where you call a custom script located at /my/script, which takes the rotated filename as its parameter. Using $1 in prerotate/postrotate, you get the absolute path of the log file being rotated. You also want to use the nosharedscripts setting (which is the default) to make sure the prerotate/postrotate scripts are run once per log file matching your *.log pattern.
/var/log/somelog/*.log {
    compress
    notifempty
    daily
    copytruncate
    size 15M
    dateext
    dateformat .%s
    rotate 20
    nosharedscripts
    postrotate
        /my/bash/script $1 > /dev/null
    endscript
}
In your custom script, you can then add milliseconds to the file name by renaming it. One way to get the epoch with milliseconds on a Linux machine is to run date "+%s%3N".
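(A minimal sketch of what that script could look like; the path /my/bash/script comes from the config above, but the body is my assumption. Note that postrotate runs before compression, so with dateformat .%s the fresh rotated copy is still the uncompressed $1.<epoch-seconds> at this point, and the milliseconds are taken at rename time rather than at the exact moment of rotation.)
#!/bin/bash
# /my/bash/script (sketch): append millisecond granularity to the
# epoch suffix produced by "dateformat .%s".
# $1 is the log file that was just rotated, e.g.
# /var/log/somelog/somefile.log; its rotated copy is $1.<epoch-seconds>.
log="$1"
ms=$(date +%3N)                                 # current milliseconds, 000-999

# The newest entry matching <log>.<digits...> is the copy from this rotation.
rotated=$(ls -1t "$log".[0-9]* 2>/dev/null | head -n 1)
[ -n "$rotated" ] || exit 0                     # nothing was rotated

mv "$rotated" "$rotated$ms"                     # ...log.1555267419 -> ...log.1555267419789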
What is meant by "rotate 13" in the /etc/logrotate.conf file? Can this be changed to "rotate 4"?
$ cat /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 13
The comment is not correct.
This statement keeps 13 weeks' worth of logs.
'rotate' means that logrotate keeps the old log files for as many cycles as specified. (A cycle is, in your case, a week.)
From the documentation at https://linux.die.net/man/8/logrotate
rotate count
Log files are rotated count times before being removed or mailed to the address specified in a mail directive. If count is 0, old versions are removed rather than rotated.
The rotate directive determines how many archived logs are retained before logrotate starts deleting the oldest ones.
rotate 13 means logrotate keeps up to 13 rotated log files. On each rotation the files are shifted: logfile_1 becomes logfile_2, logfile_2 becomes logfile_3, and so on. Once the count is reached, the oldest file, logfile_13, is discarded, and a new logfile_1 is created.
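(For a concrete picture, here is a small hypothetical demo you can run as an unprivileged user, assuming logrotate is installed; it uses rotate 3 instead of 13 to keep the output short, but the mechanism is identical.)
#!/bin/bash
# Demo: watch numbered rotation shift files and discard the oldest.
set -e
dir=$(mktemp -d)

cat > "$dir/demo.conf" <<EOF
$dir/app.log {
    rotate 3
    missingok
}
EOF

for i in 1 2 3 4 5; do
    echo "entry $i" > "$dir/app.log"
    logrotate --force -s "$dir/state" "$dir/demo.conf"
done

ls "$dir"               # app.log.1, app.log.2, app.log.3 (plus demo.conf, state)
cat "$dir/app.log.3"    # "entry 3": entries 1 and 2 have been discarded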
I am trying to copy files from a directory that is in constant use by a security-cam program. To archive these .jpg files to another HD, I first need to copy them. The problem is that the directory is being filled as the copying proceeds, at a rate of about 10 .jpgs per second. I have the option of stopping the program, doing the copy, and then starting it again, which is not what I want to do for many reasons. Or I could take the find/mtime approach. I have tried the following:
find /var/cache/zm/events/* -mmin +5 -exec cp -r {} /media/events_cache/ \;
Under normal circumstances this would work, but it seems the directories are also changing their timestamps and branching off in different directions, so it never comes out logically. For some reason each directory is very deep, like /var/cache/zm/events/../../../../../../../001.jpg x 3000. All I want to do is copy the files and directories via cron, with a simple command line if possible. With the directories constantly changing, is there a way to make this copy without stopping the program?
Any insight would be appreciated.
rsync should be a better option in this case, but you will need to try it out. Try setting it up at off-peak hours, when the traffic is not that high.
Another option would be to put the directory on a volume that uses, say, mirroring or RAID 5; that way you do not have to worry about losing data (if that is indeed your concern).
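(A minimal sketch of such an rsync cron job, using the paths from the question; the ten-minute schedule and the exact flags are my assumptions, not the only reasonable choice.)
# /etc/cron.d entry (hypothetical): every 10 minutes, copy anything new.
# --archive          recurse and preserve timestamps, permissions, structure
# --ignore-existing  skip files already present in the destination
# rsync only transfers what the destination does not have yet, so files
# written while a run is in progress are simply picked up by the next run.
*/10 * * * * root rsync --archive --ignore-existing /var/cache/zm/events/ /media/events_cache/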