logrotate working but ignoring size - logrotate

CentOS v.7
Logrotate v.3.8.6
I set logrotate to rotate when the file reaches 5M, but it ignores the size. If I add "daily" it rotates daily regardless of size. I tried "size", "minsize" and "maxsize" with the same result; the only difference is that with "size" the output doesn't even mention it. Here is my config and the output of logrotate -vdf /etc/logrotate.d/maillog
(the actual log file size when running the following tests was 45K)
(the conf file is the same for all tests; only the size parameter changed)
/var/log/maillog {
    size 5M
    rotate 50
    create 644 root root
    dateext
    dateformat -%Y-%m-%d_%H_%s
    notifempty
    postrotate
        systemctl restart rsyslog
        systemctl restart postfix
    endscript
}
SIZE:
logrotate -vdf /etc/logrotate.d/maillog
reading config file /etc/logrotate.d/maillog
Allocating hash table for state file, size 15360 B
Handling 1 logs
rotating pattern: /var/log/maillog forced from command line (50 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/maillog
log needs rotating
rotating log /var/log/maillog, log->rotateCount is 50
Converted ' -%Y-%m-%d_%H_%s' -> '-%Y-%m-%d_%H_%s'
dateext suffix '-2017-12-19_13_1513689486'
glob pattern '-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
fscreate context set to unconfined_u:object_r:var_log_t:s0
renaming /var/log/maillog to /var/log/maillog-2017-12-19_13_1513689486
creating new /var/log/maillog mode = 0644 uid = 0 gid = 0
running postrotate script
running script with arg /var/log/maillog: "
systemctl restart rsyslog
systemctl restart postfix
"
No reason for "log needs rotating" is given.
MINSIZE:
logrotate -vdf /etc/logrotate.d/maillog
reading config file /etc/logrotate.d/maillog
Allocating hash table for state file, size 15360 B
Handling 1 logs
rotating pattern: /var/log/maillog forced from command line (50 rotations)
empty log files are not rotated, only log files >= 5242880 bytes are rotated, old logs are removed
considering log /var/log/maillog
log needs rotating
rotating log /var/log/maillog, log->rotateCount is 50
Converted ' -%Y-%m-%d_%H_%s' -> '-%Y-%m-%d_%H_%s'
dateext suffix '-2017-12-19_13_1513689869'
glob pattern '-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
fscreate context set to unconfined_u:object_r:var_log_t:s0
renaming /var/log/maillog to /var/log/maillog-2017-12-19_13_1513689869
creating new /var/log/maillog mode = 0644 uid = 0 gid = 0
running postrotate script
running script with arg /var/log/maillog: "
systemctl restart rsyslog
systemctl restart postfix
"
Here it shows "only log files >= 5242880 bytes are rotated", but still no reason for "log needs rotating" is given.
MAXSIZE:
reading config file /etc/logrotate.d/maillog
Allocating hash table for state file, size 15360 B
Handling 1 logs
rotating pattern: /var/log/maillog forced from command line (50 rotations)
empty log files are not rotated, log files >= 5242880 are rotated earlier, old logs are removed
considering log /var/log/maillog
log needs rotating
rotating log /var/log/maillog, log->rotateCount is 50
Converted ' -%Y-%m-%d_%H_%s' -> '-%Y-%m-%d_%H_%s'
dateext suffix '-2017-12-19_13_1513690859'
glob pattern '-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
fscreate context set to unconfined_u:object_r:var_log_t:s0
renaming /var/log/maillog to /var/log/maillog-2017-12-19_13_1513690859
creating new /var/log/maillog mode = 0644 uid = 0 gid = 0
running postrotate script
running script with arg /var/log/maillog: "
systemctl restart rsyslog
systemctl restart postfix
"
Here it shows "log files >= 5242880 are rotated earlier", but again no reason for "log needs rotating" is given.
Why is it ignoring file size when rotating?

You're running logrotate with -f; in that scenario it will always force a rotation, regardless of your other options:
https://manpages.debian.org/jessie/logrotate/logrotate.8.en.html
-f, --force
Tells logrotate to force the rotation, even if it doesn't think this is necessary.
It gives you no reason because you in fact were the reason-giver.
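To see the size test actually being applied, drop -f and run a plain debug pass; with the 45K file and "size 5M" logrotate should then report that the log does not need rotating (in debug mode nothing is changed on disk):
logrotate -vd /etc/logrotate.d/maillog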

Related

Supervisord not releasing log file during rotation

I've got a pair of CentOS 7 servers running Supervisord with one program. Supervisord is not set to handle any log rotation: in supervisord.conf the lines related to max log size and backups are both set to 0.
supervisord.conf
[unix_http_server]
file=/var/run/dir1/supervisor.sock ; the path to the socket file
[supervisord]
logfile=/var/log/dir1/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=0
; logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=0
;logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=trace ; log level; default info; others: debug,warn,trace
pidfile=/var/run/dir1/supervisord.pid ; supervisord pidfile; default supervisord.pid
nodaemon=false ; start in foreground if true; default false
minfds=786068
minprocs=200 ; min. avail process descriptors;default 200
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/dir1/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisord.d/*.conf
program1.conf
[program:jobengine]
; Set full path to Job Engine program if using virtualenv
command=python /opt/program1/jobengine/jobengine.pyc --namespace=worker%(process_num)02d
environment=PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:
directory=/opt/program1
user=root
numprocs=4
process_name=%(process_num)01d
stdout_logfile=/var/log/dir1/jobengine.log
stderr_logfile=/var/log/dir1/jobengine.log
autostart=true
autorestart=true
startsecs=10
startretries=999
; Below lines added to ensure supervisord does not perform any log handling, in favor of logrotate
stdout_logfile_maxbytes=0
stderr_logfile_maxbytes=0
stdout_logfile_backups=0
stderr_logfile_backups=0
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; Causes supervisor to send the termination signal (SIGTERM) to the whole process group.
stopasgroup=false
; default (999)
priority=999
logrotate/program1.conf
# Track main logs, jobengine logs, etc.
/var/log/dir1/*log {
# Run as the Apache user
su apache apache
# CloudBolt will create the log file
missingok
# Do not rotate empty logs
notifempty
# Do not compress logs
nocompress
# Use `*.log.1` naming instead of `*.log-YYYYMMDD` format
nodateext
# Keep 5 archived logs
rotate 5
# Rotate files larger than this size
size 5M
}
So what happens is that logrotate kicks off at its appropriate time, but supervisord keeps writing to the old (rotated) file instead of the new one. I'm used to logrotate configs having a postrotate script that sends a restart signal, but I'm 100% new to supervisord, so I've been trying to work out how to gracefully restart supervisord and its child processes. As I understand it, 'stopwaitsecs' should force supervisord to wait the specified time if a child is busy, yet when I test, the 4 child pids are killed via SIGTERM immediately.
What I need is for supervisord to restart only once the child pids have completed whatever they're doing.
I also tried, on a whim, sending supervisord a SIGUSR2, and also a supervisorctl reload; everything seems to kill the processes immediately.
What am I doing wrong here?
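For reference, the kind of postrotate hook I had in mind is a plain signal to supervisord (a sketch only: the pidfile path is the one from supervisord.conf above, and whether SIGUSR2 actually makes supervisord reopen the child log files without killing the children is exactly what I can't confirm):
postrotate
    kill -USR2 `cat /var/run/dir1/supervisord.pid 2> /dev/null` 2> /dev/null || true
endscript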

CentOS EPEL fail2ban not processing systemd journal for tomcat

I've installed fail2ban 0.10.5-2.el7 from EPEL on CentOS 7.8. I'm trying to get it to work with systemd for processing a Tomcat log (also systemd).
In jail.local I added:
[guacamole]
enabled = true
port = http,https
backend = systemd
In filter.d/guacamole.conf:
[Definition]
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
journalmatch = _SYSTEMD_UNIT=tomcat.service + _COMM=java
If I run journalctl -u tomcat.service I see all the log lines. The ones I am interested in look like this:
May 18 13:58:26 myhost catalina.sh[42065]: 13:58:26.485 [http-nio-8080-exec-6] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 1.2.3.4 for user "test" failed.
If I redirect journalctl -u tomcat.service to a log file, and process it with fail2ban-regex then it works exactly the way I want it to work, finding all the lines it needs.
% fail2ban-regex /tmp/j9 /etc/fail2ban/filter.d/guacamole.conf
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use log file : /tmp/j9
Use encoding : UTF-8
Results
=======
Failregex: 47 total
|- #) [# of hits] regular expression
| 1) [47] Authentication attempt from <HOST> for user "[^"]*" failed\.$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [1] ExYear(?P<_sep>[-/.])Month(?P=_sep)Day(?:T| ?)24hour:Minute:Second(?:[.,]Microseconds)?(?:\s*Zone offset)?
| [570] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 571 lines, 0 ignored, 47 matched, 524 missed
[processed in 0.12 sec]
However, if fail2ban reads the journal directly then it does not work:
fail2ban-regex systemd-journal /etc/fail2ban/filter.d/guacamole.conf
It comes back right away, and processes 0 lines!
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Use journal match : _SYSTEMD_UNIT=tomcat.service + _COMM=java
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 0 lines, 0 ignored, 0 matched, 0 missed
[processed in 0.00 sec]
I've tried to remove _COMM=java. It doesn't make a difference.
If I leave out the journal match line altogether, it at least processes all the lines from the journal, but does not find any matches (even though, as I mentioned, it processes a dump of the log file fine):
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 202271 lines, 0 ignored, 0 matched, 202271 missed
[processed in 34.54 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 202271 lines
Either this is a bug, or I'm missing a small detail.
Thanks for any help you can provide.
To make sure the filter definition is properly initialised, it would be good to include the common definition. Your filter definition (/etc/fail2ban/filter.d/guacamole.conf) would therefore look like:
[INCLUDES]
before = common.conf
[Definition]
journalmatch = _SYSTEMD_UNIT='tomcat.service'
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
A small note: given that your issue only occurs with systemd and not with flat files, could you try the same pattern without the $ at the end? Maybe there is an issue with the end of line when the message is read from the journal.
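That is, keeping everything else the same, the unanchored pattern would simply be:
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.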
In your jail definition (/etc/fail2ban/jail.d/guacamole.conf), remember to define the ban time/find time/retries if they haven't already been defined in the default configuration:
[guacamole]
enabled = true
port = http,https
maxretry = 3
findtime = 1h
bantime = 1d
# "backend" specifies the backend used to get files modification.
# systemd: uses systemd python library to access the systemd journal.
# Specifying "logpath" is not valid for this backend.
# See "journalmatch" in the jails associated filter config
backend = systemd
Remember to restart the fail2ban service after doing such changes.
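For example, on CentOS 7 with systemd, that would be something like the following, after which fail2ban-client can confirm the jail is active (the jail name matches the [guacamole] section above):
systemctl restart fail2ban
fail2ban-client status guacamole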

How do I roll sensu logs without restarting?

Sensu logs can fill up with large amounts of data. You can set up outside infrastructure with logrotate that restarts the Sensu software periodically to release open file handles, but we would prefer not to restart.
Is there a way to roll the logs to a set number of backups with a set disk usage? I'm looking for configuration similar to how you can configure a Java application's logging with log4j and rolling file appenders/loggers. I cannot find anything on the sensu website.
Update: In my case, it turned out that the PID files /var/run/sensu/sensu-.*.pid were missing, which seems to be because we're managing the Sensu processes via /opt/sensu/embedded/bin/sensu-ctl. I ended up fixing it by applying this patch to logrotate.d/sensu:
diff --git a/sensu_configs/logrotate.d/sensu b/sensu_configs/logrotate.d/sensu
index 8457e29..42a80f9 100644
--- a/sensu_configs/logrotate.d/sensu
+++ b/sensu_configs/logrotate.d/sensu
@@ -6,7 +6,7 @@
sharedscripts
compress
postrotate
- kill -USR2 `cat /var/run/sensu/sensu-client.pid 2> /dev/null` 2> /dev/null || true
+ /opt/sensu/embedded/bin/sensu-ctl sensu-client 2
endscript
}
@@ -18,7 +18,7 @@
sharedscripts
compress
postrotate
- kill -USR2 `cat /var/run/sensu/sensu-server.pid 2> /dev/null` 2> /dev/null || true
+ /opt/sensu/embedded/bin/sensu-ctl sensu-server 2
endscript
}
@@ -30,6 +30,6 @@
sharedscripts
compress
postrotate
- kill -USR2 `cat /var/run/sensu/sensu-api.pid 2> /dev/null` 2> /dev/null || true
+ /opt/sensu/embedded/bin/sensu-ctl sensu-api 2
endscript
}
I am leaving the original answer below, in case somebody finds it useful.
I think logrotate.d/sensu should do what you need, by sending the -USR2 signal to Sensu when rotating logs. You might need to apply this patch to it, though:
diff --git a/sensu.logrotate b/sensu.logrotate
index 8457e29..a5178fa 100644
--- a/sensu.logrotate
+++ b/sensu.logrotate
@@ -1,4 +1,5 @@
/var/log/sensu/sensu-client.log {
+ su sensu sensu
rotate 7
daily
missingok
@@ -11,6 +12,7 @@
}
/var/log/sensu/sensu-server.log {
+ su sensu sensu
rotate 7
daily
missingok
@@ -23,6 +25,7 @@
}
/var/log/sensu/sensu-api.log {
+ su sensu sensu
rotate 7
daily
missingok
Please let me know if you ever get a chance to test it out.
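If you get a chance, the config can be exercised without waiting for cron by forcing a run against just the Sensu file (assuming it is installed as /etc/logrotate.d/sensu) and then checking that the services are writing to the freshly created logs:
logrotate -vf /etc/logrotate.d/sensu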

Configuring two master nodes when running KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

I'm trying to set up a Kubernetes cluster with 2 master nodes, 10.0.11.108 and 10.0.11.97 (Ubuntu), with the roles configured as "ai ai" in the cluster/ubuntu/config-default.sh file.
When I run
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh,
it runs the deploy script on node 10.0.11.97 twice and fails with this error:
[sudo] password to copy files and start node:
cp: cannot create regular file ‘/opt/bin/etcd’: Text file busy
cp: cannot create regular file ‘/opt/bin/kube-apiserver’: Text file busy
cp: cannot create regular file ‘/opt/bin/kube-controller-manager’: Text file busy
cp: cannot create regular file ‘/opt/bin/kube-scheduler’: Text file busy
start: Job is already running: etcd
I set up a Kubernetes cluster with 1 master, ran kube-up.sh twice, and encountered the same error message.
I modified ubuntu/util.sh and got 'Cluster validation succeeded'.
First, add -f to all cp commands, like this: cp ~/kube/default/* /etc/default/ ===> cp -f ~/kube/default/* /etc/default/
Then rerun ./kube-up.sh; you will still encounter "start: Job is already running: etcd".
Then, again in ubuntu/util.sh, change all 'service XXX start' calls to 'service XXX restart', run kube-up.sh again, and you'll get the success message.
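For instance, using the same notation as above, the second change for the etcd job would be: service etcd start ===> service etcd restart (and likewise for the other services started in ubuntu/util.sh).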

logrotate fails to rotate syslog occasionally

Can someone please tell me why syslog occasionally does not rotate and keeps logging to the same file?
The occasional syslog rotation failure is observed after the following customizations to the default setup:
The syslog path is changed from the default /var/log/syslog to /opt/vortex/log/syslog (changed in /etc/rsyslog.conf as shown below):
$template ATMFormat,"%$YEAR% %timegenerated%::%syslogtag%:%msg:::$%\n"
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/opt/vortex/log/syslog;ATMFormat
The syslog rotation rule is changed from the default weekly to daily and set to retain the last 30 files (changed in /etc/logrotate.d/rsyslog).
The relevant configuration files and scripts are given below for reference.
logrotate is scheduled daily through cron by placing the logrotate script in the /etc/cron.daily directory. The system crontab has the following definition:
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
0 0 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#
The /etc/cron.daily/logrotate script contains the following:
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf
The contents of /etc/logrotate.conf are given below:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
missingok
monthly
create 0664 root utmp
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0660 root utmp
rotate 1
}
# system-specific logs may be configured here
The logrotate configuration for the system log is specified in /etc/logrotate.d/rsyslog as given below:
/opt/vortex/log/syslog
{
rotate 30
daily
missingok
notifempty
delaycompress
compress
postrotate
invoke-rc.d rsyslog reload > /dev/null
endscript
}
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
invoke-rc.d rsyslog reload > /dev/null
endscript
}
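For debugging, the daily run can be reproduced by hand in debug mode; this is only a diagnostic sketch (-d makes no changes on disk, and /var/lib/logrotate/status is the Debian/Ubuntu state-file default, which records when each log was last rotated):
/usr/sbin/logrotate -dv /etc/logrotate.conf
grep syslog /var/lib/logrotate/status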
Thanks for your help