For some reason our tomcat logs are being appended with a .1 when logrotate runs, e.g.
the file "tcl-2013-08-16.0.log" becomes "tcl-2013-08-16.0.1.log". I am struggling to find which setting is adding the ".1" before the ".log" part of the file name. Below is a copy of the settings file from /etc/logrotate.d/:
extension .log
rotate 52
daily
nocreate
nodateext
missingok
notifempty
compress
delaycompress
Below is the config in /etc/logrotate.conf:
weekly
rotate 52
create
dateext
compress
delaycompress
include /etc/logrotate.d
What am I missing here?
Thanks, Nath
If you look at the copy of the settings file from /etc/logrotate.d/, you will find this option: extension .log.
This option tells logrotate to keep the .log extension on rotated files, which is why the rotation count (.1) is inserted before .log rather than appended after it.
If you remove this option and run logrotate again, you will find that by default the rotated files are named *.log.1, *.log.2, and so on.
In a nutshell, remove the extension .log option, because it is what makes the rotated files keep .log as their final extension.
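For illustration, a minimal rule without that line might look like this (the file path is an assumption, since the question does not show one):
/var/log/tomcat/tcl-*.log {
    rotate 52
    daily
    nocreate
    nodateext
    missingok
    notifempty
    compress
    delaycompress
}
With that, tcl-2013-08-16.0.log gets rotated to tcl-2013-08-16.0.log.1 instead of tcl-2013-08-16.0.1.log.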
I have a problem with logrotate. The application itself produces following logs:
xxx.log
and at 23:59 the application changes the log to:
xxx.log.2019-01-05
and so on. Right now I am getting following in the log directory:
xxx.log
xxx.log.2019-01-01
xxx.log.2019-01-02
etc.
What I need is to rotate the logs that get created at 23:59 and not touch the xxx.log file itself.
I have tried with following logrotate rule:
/var/log/xxx/xxx/xxx.log.* {
daily
missingok
rotate 30
compress
notifempty
copytruncate
nosharedscripts
prerotate
bash -c "[[ ! $1 =~ *.gz ]]"
endscript
}
But, first of all, logrotate does not compress the log that was created last, and it also adds a .1.gz extension to previously compressed files.
logrotate does not compress the log that was created last
Do you have "delaycompress" defined in /etc/logrotate.conf? Per logrotate man:
delaycompress
Postpone compression of the previous log file to the next rotation cycle.
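For instance, with delaycompress and the default numeric scheme, rotation proceeds roughly like this (the names are illustrative, not taken from your setup):
first run:   app.log    ->  app.log.1       (left uncompressed for one cycle)
second run:  app.log.1  ->  app.log.2.gz    (compressed now)
             app.log    ->  app.log.1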
it also adds .1.gz extension
While you're at the aforementioned man page, you should check out what the "extension" option does:
extension ext
Log files with ext extension can keep it after the rotation.
My custom.conf file for logrotate is only renaming the old files, not disposing of them after my maxage days. I can see it rotating the files. The custom.conf file is saved to the /etc/logrotate.d/ directory. Can someone please tell me if I am missing something here?
It just keeps appending previous dates to the end of my log file name, e.g. *.log-20180428-20180430-20180502-20180504.
Here is the custom.conf file (Note: directory_name path is a mounted drive.)
/directory_name/*/*/*.log*
/directory_name/*/*.log*
{
daily
compress
delaycompress
rotate 4
ifempty
maxage 4
nocreate
missingok
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/syslogd-ng.pid 2> /dev/null` 2> /dev/null || true
endscript
}
The /etc/logrotate.conf file should not start and end with { and } like that.
And you probably don't need both maxage and rotate (I always just use rotate, never maxage).
Before the { there should be a file name, like this:
/var/log/mylog {
...
}
You should probably add and change files in the /etc/logrotate.d folder instead of changing the /etc/logrotate.conf file. (It protects better against automatic changes during system upgrades, I think, and it is cleaner.)
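To check a rule without actually rotating anything, a dry run helps (the file name here is an assumption):
logrotate -d /etc/logrotate.d/custom
The -d flag makes logrotate print what it would do without touching any files.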
I see no way to set the destination directory or file here: http://www.cs.cmu.edu/~quake/triangle.switch.html
Actually, the program places the result files in the same directory as the input, even if the current directory is different.
Why? Is it possible to change this?
The output files for the program are generated from the input file names. You can see this in the source code at line 3586:
strcpy(b->outnodefilename, b->innodefilename);
...
strcat(b->outnodefilename, ".node");
strcat(b->outelefilename, ".ele");
...
Because of that, I don't think there is a way to set the output directory as an option. It seems you will need to manually copy the output files to a different directory:
cp output.node your/output/dir/output.node && rm output.node
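Equivalently, a single mv does the copy-and-remove in one step (the destination directory is just a placeholder):
mv output.node output.ele your/output/dir/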
Hello people
It's my first time using logrotate and I don't know if I'm configuring it the right way. I'm using it with the loggerhead log file on Ubuntu 11.04.
The log is under
/log/loggerhead/loggerheadd.log
My configuration file looks like this
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
}
Then I run a forced rotation:
logrotate -f /etc/logrotate.d/loggerhead
and that changes the name of the log file to
/log/loggerhead/loggerheadd.log.1
And it didn't create the original file (loggerheadd.log) again, so I couldn't run a new forced rotation, because "the file doesn't exist".
So, the application is supposed to write entries to "loggerheadd.log", but when logrotate runs, the file gets renamed, so where will the log entries be written? Am I missing something?
Hope you can help me
By default logrotate will just rename your files, so your old file will be gone.
You can either use the create option to create a new file after the old one is rotated, or copytruncate to copy the original file to one with a new name and then truncate the original. Either option will do what you're asking for (more details on the man page here).
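For example, a sketch of the rule from the question with copytruncate added:
/log/loggerhead/loggerheadd.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    copytruncate
}
Note that copytruncate lets loggerhead keep writing to the same open file, at the cost of possibly losing a few lines written between the copy and the truncate.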
I'm currently stuck with this problem where my .gz file is "some_name.txt.gz" (the .gz is not visible, but can be recognized with File::Type functions),
and inside the .gz file, there is a FOLDER with the name "some_name.txt", which contains other files and folders.
However, when I call the extract function from Archive::Extract, I am not able to extract the archive the way I would manually (where the folder named "some_name.txt" is extracted along with its contents), because it just extracts the "some_name.txt" folder as a .txt file.
I've been searching the web for answers, but none are correct solutions. Is there a way around this?
From the official Archive::Extract documentation:
"Since .gz files never hold a directory, but only a single file;"
I would recommend tarring the folder and then gzipping it.
That way you can use Archive::Tar to easily extract a specific file (a short sketch follows the excerpt below):
Example from official docs:
$tar->extract_file( $file, [$extract_path] )
Write an entry, whose name is equivalent to the file name provided to disk. Optionally takes a second parameter, which is the full native path (including filename) the entry will be written to.
For example:
$tar->extract_file( 'name/in/archive', 'name/i/want/to/give/it' );
$tar->extract_file( $at_file_object, 'name/i/want/to/give/it' );
Returns true on success, false on failure.
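Putting it together, a minimal Perl sketch of the pack-and-extract round trip (all file and folder names here are assumptions, and gzip support requires IO::Zlib):
use strict;
use warnings;
use Archive::Tar;

# Pack the folder into a gzip-compressed tar instead of a bare .gz
my $tar = Archive::Tar->new;
$tar->add_files( glob 'some_name.txt/*' );         # add_files does not recurse into subfolders
$tar->write( 'some_name.tar.gz', COMPRESS_GZIP );

# Read the archive back and extract one specific entry to a path of your choice
my $reader = Archive::Tar->new('some_name.tar.gz');
$reader->extract_file( 'some_name.txt/inner_file.dat', 'extracted/inner_file.dat' )
    or die 'extract failed: ' . Archive::Tar->error;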
Hope this helps.
Maybe you can identify these files with File::Type, rename them with a .gz extension instead of .txt, and then try Archive::Extract on them?
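If you go that route, a small sketch of the check-and-rename step (the gzip MIME match is an assumption; check what File::Type actually reports for your file):
use strict;
use warnings;
use File::Type;

my $name = 'some_name.txt';                          # the misnamed archive
my $type = File::Type->new->checktype_filename($name);
if ( defined $type && $type =~ /gzip/ ) {            # e.g. application/x-gzip
    rename $name, "$name.gz" or die "rename failed: $!";
}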
A gzip file can only contain a single file. If you have an archive file that contains a folder plus multiple other files and folders, then you may have a gzip file that contains a tar file. Alternatively you may have a zip file.
Can you give more details on how the archive file was created and a listing of its contents?