Logrotate not generating all files after run - logrotate

Hello people
It's my first time using logrotate and I don't know if I'm configuring it correctly. I'm using it with the Loggerhead log file on Ubuntu 11.04.
The log is under
/log/loggerhead/loggerheadd.log
My configuration file looks like this
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
}
Then I ran a forced rotation:
logrotate -f /etc/logrotate.d/loggerhead
and that changed the name of the log file to
/log/loggerhead/loggerheadd.log.1
But it didn't create the original file (loggerheadd.log) again, so I couldn't run another forced rotation, because "the file doesn't exist".
The application is supposed to keep writing entries to "loggerheadd.log", but when logrotate runs the file gets renamed, so where will the log entries be written? Am I missing something?
Hope you can help me

By default logrotate will just rename your files, so your old file will be gone.
You can either use the create option to create a new file after the old one is rotated, or copytruncate to copy the original file to one with a new name and then truncate the original. Either option will do what you're asking for (more details on the man page here).
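For example, a minimal sketch of the create approach, in the same style as your config (the mode, owner, and group shown are assumptions; use whatever your application runs as):
/log/loggerhead/loggerheadd.log {
daily
rotate 7
compress
delaycompress
missingok
create 0640 loggerhead loggerhead
}
With create, logrotate recreates an empty loggerheadd.log right after renaming the old one. copytruncate avoids the rename entirely, but it can lose lines written between the copy and the truncate.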

Related

Moodle: PDFs are empty

Many PDFs from different courses appear to have been corrupted or something. We first noticed when trying to view them in Chrome and got the error "Failed to load PDF document." In Internet Explorer the page just shows up empty.
When viewing the file in the "Updating file in" area, it says the following: "Either the file does not exist or there is a permission problem." It has a file size, but when I click on Download, the file is 0 kb.
Where are the files saved? Why are they corrupted?
Update: I've narrowed it down to /moodledata/filedir having lost all the references. The folders are there, as well as the files. Is there any way to fix this without having to reupload all PDFs?
I am on version 3.6.3 on Windows
The content/path hash is stored in the mdl_files table - maybe have a look in there to see if you can match up the files. The hash should match the folder/file name.
SELECT *
FROM mdl_files
WHERE filename LIKE '%pdf%'
OR mimetype LIKE '%pdf%'
OR source LIKE '%pdf%'
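Once you have a row, you can check whether the file is still on disk. Moodle names the stored file after its contenthash (the SHA-1 of the file content) and nests it by the first characters of that hash; a quick sketch with a made-up hash:
# Hypothetical contenthash copied from the mdl_files row
hash=2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
# Moodle stores it as filedir/<first 2 chars>/<next 2 chars>/<full hash>
ls -l "/pathto/moodledata/filedir/${hash:0:2}/${hash:2:2}/${hash}"
If the file exists but no row's contenthash points at it, that would explain the lost references.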
Also, check the file permissions. I don't use Windows, so I'm not sure how it works there, but on Linux the web server should have access to the data folder.
Something like:
sudo chown -R www-data:www-data /pathto/moodledata/
sudo chmod -R 02777 /pathto/moodledata/
see https://docs.moodle.org/38/en/Security_recommendations#Most_secure.2Fparanoid_file_permissions

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring BLAST+ on my Mac (OS Sierra) and am having trouble configuring my nr and nt databases that I also downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly", but pointing to where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 section of the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database files from the tar I unzipped, not to the specific nt.00 itself; in my case that would just be "blastdb/" (as opposed to "blastdb/nt.00/", which then contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against nr, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB added to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above while swapping out "nt" for "nt.00", and have tried these commands with the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his executions not checking the entire nt.00 folder; that was different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders, as they result from unzipping their respective .tar.gz's, in the same parent folder, when instead their contents should be in the same parent folder. I simply deleted the folders they came in and copied the contents over to my new, single parent. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains all of the contents of every database I plan on using, including nt, nr, and refseq.
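In other words, something like this sketch (the paths are the ones from the question; the per-volume folder names match what the unzipping produced):
cd "$HOME/Documents/Luke/Research/Pedulla 17-18/blast/blastdb"
# Merge the per-volume folders' contents into blastdb/ itself,
# then remove the now-empty folders
mv nt.00/* nr.00/* .
rmdir nt.00 nr.00
export BLASTDB="$HOME/Documents/Luke/Research/Pedulla 17-18/blast/blastdb"
# The nt.nal alias file is what lets -db nt search across all nt.* volumes
blastn -query test_query.fa -db nt -outfmt "7 qseqid sseqid evalue bitscore"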

How does nxlog track the line number?

In nxlog config I have these params set:
SavePos True
ReadFromLast True
When removing lines from a log file (this should never happen) nxlog ships the entire log file. Is this related to how nxlog tracks the line number?
To recreate:
1. I stop the nxlog service.
2. I delete the nxlog cache (just to make sure I'm starting fresh).
3. Right now the log folder I've configured nxlog to watch is empty.
4. I add a new log file to the folder.
5. nxlog ships the log file.
6. I open the log file and add a few lines.
7. nxlog ships those lines.
8. I delete the new lines I just added.
9. nxlog ships the entire log file.
NXLog, and log shippers generally, are designed to deal with append-only log files.
When you delete lines from the log file, NXLog sees that the file size has shrunk. Under the append-only assumption this can only mean that the file was replaced/rotated and that the current file is a new one which needs to be fully read.
Also note that when you edit a log file in a text editor, the editor will usually replace the file with a new one, even if you only append data to the end. This is not equivalent to echo test >> test.log.
If you want to transfer all kinds of changes to files, you should use rsync or a similar tool.
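You can see the editor-versus-append difference by watching the file's inode (a quick illustration; exact editor behavior varies):
ls -i test.log           # note the inode number
echo test >> test.log    # appends in place: same inode, only new bytes to ship
ls -i test.log
vi test.log              # most editors save by writing a new file and renaming it
ls -i test.log           # the inode usually changes, so it looks like a new file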

How can I fix "/etc/cron.daily/logrotate: gzip: stdin: file size changed while zipping"?

In the last few days I have been getting a daily mail from cron's logrotate task:
/etc/cron.daily/logrotate:
gzip: stdin: file size changed while zipping
How can I fix it?
Thanks,
Gian Marco.
Here's a blog post in French which gives a solution.
In English, you can read this bug report.
To summarise:
First, add the --verbose option in the script /etc/cron.daily/logrotate so you get more information the next time it runs and can identify which log rotation causes the problem.
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate --verbose /etc/logrotate.conf
Next, add the delaycompress option to the relevant logrotate configuration.
For example, I added it to nginx's logrotate configuration in /etc/logrotate.d/nginx:
/var/log/nginx/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
...
}
upstart will close (and reopen) its log file when it notices that the file is deleted. However, if you look at what gzip does, you see that it doesn't delete the input file until after it's done writing the output file. That means there is always a race condition in which log lines written while gzip is running might be lost.
You can silence the warning using gzip --quiet, but that doesn't change the fact that you might still lose log lines.
This means that delaycompress is not a generic fix to this. It's a specific fix to a specific problem.
The real solution for this is probably a combination of delaycompress and being able to send the process a signal so it reopens its log file. That makes the race condition go away in practice (unless you rotate multiple times per second :) ).
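For a daemon that reopens its logs on a signal (nginx does this on USR1, for instance), the combination looks roughly like this sketch; the pid file location is an assumption:
/var/log/nginx/*.log {
daily
rotate 14
compress
delaycompress
sharedscripts
postrotate
# assumed pid file path; USR1 asks nginx to reopen its log files
[ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)" || true
endscript
}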

Are variables supported in logrotate configuration files?

I looked at logrotate.conf examples and at everything in my /etc/logrotate.d directory, but nowhere was I able to find documentation on variables in these files.
I am trying to create a config file for rotating the logs of an application we are writing. I want to set the directory where logs are stored once, and then use it as a variable, like so:
my_app_log_dir=/where/it/is/deployed/logs
${my_app_log_dir}/*.log ${my_app_log_dir}/some_sub_dir/*.log {
missingok
# and so on
# ...
}
Is that possible?
You can achieve what you are looking for by implementing this kludge:
my-logrotate.conf (NOTE: the double quotes are mandatory; also note that the file names don't have to appear on the same line):
"*.log"
"some_sub_dir/*.log"
{
missingok
# and so on
# ...
}
Then the actual logrotate script, my-logrotate.sh:
#!/bin/sh
set -eu
# my_app_log_dir must be set in the environment; set -u aborts if it isn't
cd "${my_app_log_dir}"
exec logrotate /path/to/my-logrotate.conf
Now you can add my-logrotate.sh to your crontab.
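A crontab entry might look like this (the schedule and paths here are assumptions; the variable has to be set on the command line because the script uses set -u):
# m h dom mon dow  command
30 0 * * * my_app_log_dir=/where/it/is/deployed/logs /path/to/my-logrotate.sh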
You can use a bash "here document" to create a suitable config file on the fly, either at installation time or before running logrotate.
A bash script might look like this:
cat >rt.conf <<.
"${my_app_log_dir}/*.log" {
rotate 5
size 15k
missingok
}
.
logrotate rt.conf
Directly in the config file, no (as far as my knowledge of logrotate goes).
Another solution:
Use the include option to pull parts of the configuration in from a directory. This can help if you have a package for your application: the package can drop a file in that directory containing only the entries for your app.
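On most distributions the stock configuration already does this, so a sketch of the idea (the snippet file name is made up):
# /etc/logrotate.conf typically already contains:
include /etc/logrotate.d
Your package then installs its own file, e.g. /etc/logrotate.d/my_app, containing only the stanza for its logs.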
With logrotate 3.8.7, a test reveals that you can set and use variables in the prerotate and postrotate script sections.
I tried this for a particular service log file.
postrotate
pid_file="/run/some_service/some_serviced.pid"
test -e "${pid_file}" && kill -s HUP $(cat "${pid_file}") || true
touch "${pid_file}.WAS_USED"
endscript
After running logrotate in force mode to ensure the log file was rotated and the postrotate script executed, looking in /run/some_service showed an additional file, some_serviced.pid.WAS_USED, proving that the use of the variable worked.