What is "rotate 13" in "/etc/logrotate.conf" file? - logrotate

What is meant by "rotate 13" in the "/etc/logrotate.conf" file? Can this be changed to "rotate 4"?
$ cat /etc/logrotate.conf
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 13

The comment is not correct.
This statement keeps 13 weeks worth of logs.
'rotate' means that the old logfiles are kept for as many cycles as specified (a cycle, in your case, is a week).
From the documentation at https://linux.die.net/man/8/logrotate
rotate count
Log files are rotated count times before being removed or mailed to the address specified in a mail directive. If count is 0, old versions are removed rather than rotated.

The rotate directive determines how many archived logs are retained before logrotate starts deleting the oldest ones.

rotate 13 means logrotate keeps 13 rotated log files. On each rotation the files are shifted: logfile.1 becomes logfile.2, logfile.2 becomes logfile.3, and so on; the oldest file, logfile.13, is discarded and a fresh logfile.1 is created from the current log.
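So yes, you can simply change that line. A minimal sketch of how the relevant part of /etc/logrotate.conf would then read, if you really only want four weeks of rotated logs kept (with the comment now matching the directive):
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4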

Related

I want my logrotate to delete log files which are older than 7 days

I want logrotate to delete log files which are older than 7 days. I have tried maxage 7, but without specifying a rotate count only one log is kept after rotation. I have also learned that logrotate will only do the other steps if it is able to rotate at least one file.
I have tried with something like this:
directory path/log.json {
    size 1M
    compress
    maxage 5
    missingok
}
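Something like the following is the end result I am hoping for (a sketch only; the daily schedule and rotate 7 are my guesses at what is needed, and the path is a placeholder):
/path/to/log.json {
    daily
    rotate 7
    maxage 7
    compress
    missingok
}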
Any help for the same would be appreciated.

How exactly does unlink work?

I'm having a little difficulty understanding how exactly this works.
It seems that unlink() will remove the inode which refers to the file's data, but won't actually delete the data. If this is the case,
a) what happens to the data? Presumably it doesn't stick around forever, or people would be running out of disk space all the time. Does something else eventually get around to deleting data without associated inodes, or what?
b) if nothing happens to the data: how can I actually delete it? If something automatically happens to it: how can I make that happen on command?
(Auxiliary question: if the shell commands rm and unlink do essentially the same thing, as I've read on other questions here, and Perl unlink is just another call to that, then what's the point of a module like File::Remove, which seems to do exactly the same thing again? I realize "there's more than one way to do it", but this seems to be a case of "more than one way to say it", with "it" always referring to the same operation.)
In short: can I make sure deleting a file actually results in its disk space being freed up immediately?
Each inode on your disk has a reference count - it knows how many places refer to it. A directory entry is a reference, and multiple references to the same inode can exist. unlink removes a reference. When the reference count is zero, the inode is no longer in use and may be deleted. This is how many things work, such as hard linking and snapshots.
In particular - an open file handle is a reference. So you can open a file, unlink it, and continue to use it - it'll only be actually removed after the file handle is closed (provided the reference count drops to zero, and it's not open/hard linked anywhere else).
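A quick way to see this from a shell, sketched here with an arbitrary file under /tmp:
$ echo hello > /tmp/unlink-demo
$ exec 3< /tmp/unlink-demo   # open a file descriptor on it: one more reference
$ rm /tmp/unlink-demo        # unlink the only name; the inode survives while fd 3 is open
$ cat <&3                    # the data is still readable through the open descriptor
hello
$ exec 3<&-                  # close the descriptor; the count hits zero and the space is freed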
unlink() removes a link (a name, if you want, but technically a record in some directory file) to the data (referred to by an inode). Once there are no more links to the data, the system automatically frees the associated space. The number of links to an inode is tracked in the inode itself. You can observe the number of links to a file with ls -li, for example:
789994 drwxr-xr-x+ 29 john staff 986 11 nov 2010 SCANS
23453 -rw-r--r--+ 1 erik staff 460 19 mar 2011 SQL.java
This means that the inode 789994 has 29 links to it and that inode 23453 has only 1.
SQL.java is an entry in the current directory which points to inode 23453. If you remove that record from the directory (system call unlink or command rm), the count goes to 0 and the system frees the corresponding space, because a count of 0 means there is no longer any link/name through which to access the data, so it can be freed.
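To watch the link count itself change, something along these lines works (the file names are just examples):
$ touch data.txt
$ ln data.txt alias.txt       # add a second directory entry for the same inode
$ ls -li data.txt alias.txt   # same inode number, link count is now 2
$ rm data.txt                 # one name removed; the inode and data remain
$ rm alias.txt                # link count drops to 0 and the space is freed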
Removing the link just means the space is no longer reserved for a certain file name. The data will be erased/destroyed when something else is allocated to that space. This is why people write zeroes or random data to a drive after deleting sensitive data like financial records.
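If that is your goal, the usual approach is to overwrite before unlinking, for example with GNU coreutils shred (the file name here is just an example, and note that on journaling, copy-on-write or SSD-backed storage the overwrite is not guaranteed to hit the original blocks):
$ shred -u -n 3 financial-records.csv   # overwrite 3 times, then truncate and unlink the file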

Logrotate doesn't work automatically (using size limit)

It seems that my rotation is not working. If I execute logrotate manually, it works like it should. Logrotate is run, because I can see it in the logs. Here's what I have:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
/home/www/logs/access_log {
    sharedscripts
    delaycompress
    size 2G
    compress
    dateext
    maxage 30
    postrotate
        /usr/local/bin/apachectl -k graceful
    endscript
}
Any clue?
It's an old question, but someone might find this helpful.
The default settings at the top specify weekly rotation, but your configuration for the access log specifies rotation by size.
These settings will not work at the same time; it's either time or size.
For access logs, I would advise daily rotation.
If your logs grow past 2 GB during the day, then you will need to run logrotate hourly, so that it checks the size of your log often enough and rotates accordingly.
This, however, implies that you have to add a timestamp (not just a date) to the rotated file names, because you would want to keep multiple rotated logs for the same day, right?
There is also the maxsize parameter for logrotate, which is supposed to work together with time-based rotation (daily, weekly, etc.), but I'm not sure whether it does. You will need to experiment.
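To make the idea concrete, here is a sketch of the hourly-cron plus maxsize combination (the paths, the 2G threshold and the %s date format are assumptions; check the man page of your logrotate version before relying on them):
# /etc/cron.d/logrotate-access: run logrotate hourly so the size check happens often enough
0 * * * * root /usr/sbin/logrotate /etc/logrotate.d/access_log

# /etc/logrotate.d/access_log: rotate daily, or sooner once the file exceeds 2G
/home/www/logs/access_log {
    daily
    maxsize 2G
    rotate 30
    compress
    delaycompress
    dateext
    # %s (seconds since the epoch) keeps several rotations per day distinct
    dateformat -%Y%m%d-%s
    sharedscripts
    postrotate
        /usr/local/bin/apachectl -k graceful
    endscript
}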

Copying constantly changing directory

I am trying to copy files from a directory that is in constant use by a security cam program. To archive these .jpg files to another HD, I first need to copy them. The problem is that the directory is being filled as the copying proceeds, at a rate of about 10 .jpgs per second. I have the option of stopping the program, doing the copy, and then starting it again, which is not what I want to do for many reasons. Or I could do the find/mtime approach. I have tried the following:
find /var/cache/zm/events/* -mmin +5 -exec cp -r {} /media/events_cache/ \;
Which under normal circumstances would work. But it seems the directories are also changing their timestamps and branch off in different directions, so it never comes out logically, and for some reason each directory is very deep, like /var/cache/zm/events/../../../../../../../001.jpg x 3000. All I want to do is copy the files and directories via cron with a simple command line if possible. With the directories constantly changing, is there a way to make this copy without stopping the program?
Any insight would be appreciated.
rsync should be a better option in this case, but you will need to try it out. Try setting it up at off-peak hours when the traffic is not that high.
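A sketch of what that could look like from cron (the hourly schedule and the flags are assumptions, adjust to taste):
# copy anything new from the camera spool to the archive once an hour;
# --ignore-existing skips files that were already copied on an earlier run
0 * * * * rsync -a --ignore-existing /var/cache/zm/events/ /media/events_cache/
Files that are still being written when rsync runs may be copied incomplete, so you may still want to combine this with an age check along the lines of the find -mmin approach from your question.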
Another option would be setting up the directory on a volume which uses mirroring or RAID 5; this way you do not have to worry about losing data (if that indeed is your concern).

How do I strip initial offsets from OGG files?

=== BACKGROUND ===
Some time ago I ripped a lot of music from an internet radio station. Unfortunately something seems to have gone wrong, since the length of most files is displayed as several hours, but they start playing at the correct position.
Example: If a file is really 3 minutes long and it would be displayed as 3 hours, playback would start at 2 hours and 57 minutes.
Before I upgraded my system, gstreamer was an older version whose behaviour was as described above, so I didn't pay too much attention. Now I have a new version of gstreamer which cannot handle these files correctly: it "plays" the whole initial offset.
=== /BACKGROUND ===
So here is my question: how is it possible to modify an OGG/Vorbis file in order to get rid of these useless initial offsets? Although I tried several tag-edit programs, none of them would let me edit these values. (Interestingly enough, easytag displays both times, but writes back the wrong one...)
I finally found a solution! Although it wasn't quite what I expected...
After trying several other options I ended up with the following code:
#!/bin/sh
cd "${1}"
OUTDIR="../`basename "${1}"`.new"
IFS="
"
find . -wholename '*.ogg' | while read filepath; do
    # Create destination directory
    mkdir -p "${OUTDIR}/`dirname "${filepath}"`"
    # Convert OGG to OGG (re-encode)
    avconv -i "${filepath}" -f ogg -acodec libvorbis -vn "${OUTDIR}/${filepath}"
    # Copy tags
    vorbiscomment -el "${filepath}" | vorbiscomment -ew "${OUTDIR}/${filepath}"
done
This code recursively reencodes all OGG files and then copies all vorbis comments. It's not a very efficient solution, but it works nevertheless...
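For reference, a sketch of how it is invoked (the script name and music path are placeholders):
$ sh reencode-oggs.sh "/home/user/Music/RadioRips"
# the re-encoded tree ends up next to the original, in /home/user/Music/RadioRips.new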
As for what the problem was: I guess it has something to do with this warning in the output of ogginfo:
...
New logical stream (#1, serial: 74a4ca90): type vorbis
WARNING: Vorbis stream 1 does not have headers correctly framed. Terminal header page contains additional packets or has non-zero granulepos
Vorbis headers parsed for stream 1, information follows...
Version: 0
Vendor: Xiph.Org libVorbis I 20101101 (Schaufenugget)
...
That warning disappears after reencoding the file...
At the rate at which I'm currently encoding, it will probably take several hours until my whole media library is completely reencoded... but at least I have verified with several samples that it works :)