TYPO3 var/log - how to auto remove entries after x days

In TYPO3 10, logs are stored, among other places, in var/log. The files there grow over time. Is there a way to keep the directory clean and automatically keep only the entries from the last x days?

That would eventually be quite an operational task to handle inside TYPO3 itself. Normally, log rotation is a job for the OS, so you can use logrotate or a similar tool to rotate the logs.
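As a rough sketch, a logrotate drop-in along these lines would rotate the TYPO3 logs daily and discard anything older than two weeks (the web root path and the retention are assumptions; adjust them to your installation):

    # /etc/logrotate.d/typo3 -- example only, adjust path and retention
    /var/www/my-site/var/log/*.log {
        daily
        rotate 14
        maxage 14
        missingok
        notifempty
        compress
        delaycompress
        copytruncate
    }

copytruncate lets PHP keep appending to the same file handle, so nothing needs to be reloaded; logrotate itself is normally run daily from cron, so once the file is in place there is nothing else to schedule.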

Related

Tracking Postgres Autovacuums

We've been experimenting with tweaking the autovacuum thresholds on some of our larger tables, because otherwise autovacuum never runs on them, yet they build up tens of thousands of dead tuples. Using a query I found somewhere on SO that looks at the pg_stat_user_tables view, I'm able to see the last run time and the number of autovacuum runs, but I can't seem to find a history of the events. We're trying to keep track of how often they are running to get some idea of where a good threshold is, so that sort of information would be useful. Is there another table available for this?
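For reference, the kind of query the asker describes might look something like this (a minimal sketch; the database name is a placeholder):

    psql -d mydb -c "
        SELECT relname, last_autovacuum, autovacuum_count, n_dead_tup
        FROM pg_stat_user_tables
        ORDER BY last_autovacuum DESC NULLS LAST;"

This only shows the most recent run and a running counter per table, which is exactly the limitation the answer below addresses.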
There is no table with that history (unless you have created one yourself, of course, or deployed a monitoring system that records it, but I don't know of one). You can set log_autovacuum_min_duration to zero; from then on you will have a record of every autovacuum run in your log files.
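A minimal sketch of how that could be applied on a reasonably recent PostgreSQL (on older versions, edit postgresql.conf instead of using ALTER SYSTEM):

    psql -U postgres -c "ALTER SYSTEM SET log_autovacuum_min_duration = 0;"
    psql -U postgres -c "SELECT pg_reload_conf();"

With a value of 0 every autovacuum and autoanalyze action is logged; a positive value (in milliseconds) logs only the actions that ran at least that long, and -1 disables the logging.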

How MongoDB repair works (options, logging, progress)

A MongoDB instance was accidentally broken and is now being repaired (WiredTiger storage engine, old version 3.6).
In my case the repair is really only needed for some of the instances, so if there is an option for it I would like to use it, and would first skip the less essential (but more numerous, and probably erroneous) indexes.
(But the job probably has to process all of the data in each instance, especially with WiredTiger, where the data is stored in a sort of interleaved fashion, so there may be no way to prioritize.)
Second, the repair presumably runs longer for instances with more indexes, and longer for more data (even with fewer indexes);
in any case, the progress log messages go to standard output, and estimating the remaining time seems difficult (it just keeps going, sometimes with no log output for four hours). The instances range in size from under 100 GB to over 1 TB.
Can the log output shown on screen be saved to a file?
If a particular instance has problems (e.g. complexity, poor structure, etc. that ultimately caused the crash), could the repair leave some of the data broken while rescuing the rest?
And, practically speaking, is there no other way to recover an instance if the repair finally fails?
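On the log question specifically: the output that normally scrolls by on screen can be captured by starting the repair with a log path, or by plain shell redirection (a sketch only; the paths are placeholders, and it is worth testing against your 3.6 binaries first):

    mongod --dbpath /data/db --repair --logpath /var/log/mongodb/repair.log --logappend
    # or simply redirect standard output and error:
    mongod --dbpath /data/db --repair > /var/log/mongodb/repair.log 2>&1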

Handling backups of a large table (>1 TB) in Postgres?

I have a 1 TB table (X) that is a pain to back up.
The table X contains historical log data that is not often updated after creation. We usually only access a single row at a time, so performance is still very good.
We currently make nightly full logical backups, and exclude X for the sake of backup time and space. We do not need historical backups of X, since the log files from which it is populated are themselves backed up. However, recovering X by re-processing the log files would take an unnecessarily long time.
I'd like to include X in our backup strategy so that our recovery time can be much faster. It doesn't seem feasible to include X in the nightly logical backup.
Ideally, I'd like a single full backup for X that is updated incrementally (purely to save time).
I lack the experience to investigate solutions on my own, so I'm wondering: what are my options?
Barman for incremental updates? Partition X? Both?
After doing some more reading, I'm inclined to partition the table and write a nightly script to perform logical backups only on the changed table partitions (and replace the previous backups). However, this strategy may still take a long time during recovery with a pg_restore... Thoughts?
Thanks!
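For completeness, the per-partition logical dump the asker sketches would look roughly like this (database, schema and partition names are made up):

    pg_dump -d mydb --table=public.x_2020_08 -Fc -f /backups/x_2020_08.dump

As the answer below argues, though, a physical backup tool may be the less painful route at this size.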
I think using barman with the rsync/SSH + WAL streaming option and performing incremental backups is the best option in your case. Going this way makes your nightly backups easier and less costly, since you don't have to implement much logic yourself once barman is configured. I will update this shortly with a link to my blog post detailing the steps.
Logical backups may not be the right approach for periodic backups when dealing with large databases. With physical backups, even though the backup size is large, that is more than compensated for by the lower acquisition and restore cost (performance, speed and simplicity).
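A minimal sketch of what that barman setup might look like (server name, hosts and users are placeholders; check the barman documentation for your version):

    # /etc/barman.d/pg-main.conf -- rsync-over-SSH backups plus WAL streaming
    [pg-main]
    description = "Main PostgreSQL server"
    ssh_command = ssh postgres@db-host
    conninfo = host=db-host user=barman dbname=postgres
    streaming_conninfo = host=db-host user=streaming_barman dbname=postgres
    backup_method = rsync
    # reuse_backup = link is what makes the nightly runs incremental
    reuse_backup = link
    streaming_archiver = on
    slot_name = barman

    barman check pg-main
    barman backup pg-main    # schedule this nightly from cron

With reuse_backup = link, files that have not changed since the previous backup are hard-linked rather than copied, which is what keeps the nightly run fast and small for a mostly append-only table.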
Thanks
UPDATE (2020-08-27):
Below is a git repo with an end-to-end demonstration. There are many implementations of this out there already, but if you want to do something from scratch and keep the image simple (avoiding unnecessary dependencies), please take a look at this implementation:
https://github.com/softwarebrahma/PostgreSQL-Disaster-Recovery-With-Barman
Thanks

Most efficient way to check time difference

I want to check an item in my database every x minutes for y minutes after its creation.
I've come up with two ways to do this and I'm not sure which will result in better efficiency/speed.
The first is to store a Date field in the model and do something like
Model.find({ time_created: { $gt: current_time - y } })
inside of a cron job every x minutes.
The second is to keep a times_to_check field that keeps track of how many more times, based on x and y, the object should be checked.
Model.find({ times_to_check: { $gt: 0 } })
My thinking on why these two might be comparable: the Date comparison in the first approach would take longer, but the second requires a write to the database after each object has been checked.
Either way you are going to have to check the database periodically to see if it is time to query your collection. In your second solution you still do not have a way to run your background process; you are only describing how you determine which documents need checking.
Stick with running your Unix cron job, but make sure it is fault tolerant and have controls ensuring it is actually running while your application is up. Below is a pretty good answer for how to handle that.
How do I write a bash script to restart a process if it dies?
Based on that, I would ask: how does your application react if your cron job has not run for x minutes, hours or days? How will your application recover if that does happen?
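A minimal crontab sketch along those lines (the interval, script path and lock file are made up; flock skips a run if the previous one is still going):

    # check every 5 minutes, never overlapping, and keep a log you can alert on
    */5 * * * * flock -n /tmp/check_items.lock /usr/local/bin/check_items.sh >> /var/log/check_items.log 2>&1

Logging each run also gives you something to watch, so you notice when the job has stopped running.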

WTMP (RHEL 5/6) log maintenance - need to keep a rolling log rather than rotate

We have a policy requirement to use tools that rely on wtmp, such as the 'last' command or GDM's last-login details. We've discovered that their output has gaps depending on when wtmp was last rotated, and we need to work around this.
Because these gaps have been deemed unacceptable, and keeping wtmp data in a single active logfile forever without splitting the old data off into archives is not really viable, I'm looking for a way to roll over / age out old wtmp entries while still keeping the more recent ones.
From some initial research I've seen this problem addressed in the Unix (AIX, SunOS) world with 'fwtmp' and some pre/post logrotate scripts. Has this been addressed in the Linux world and I've just missed it?
As far as I can tell, 'fwtmp' is a Unix utility that has not made it into RHEL 5 and 6, based on searching the RHEL customer portal and running some 'yum whatprovides' searches on my test boxes.
Many thanks in advance!
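For what it's worth, the closest Linux counterpart to the fwtmp round trip appears to be utmpdump, which converts wtmp to text; newer util-linux builds can also convert the text back to binary with -r. A rough sketch of the age-out idea, assuming -r is available on your RHEL build (worth verifying before relying on it):

    # dump the binary wtmp to text
    utmpdump /var/log/wtmp > /tmp/wtmp.txt
    # ... filter /tmp/wtmp.txt here, keeping only entries newer than the cutoff ...
    # rebuild a binary wtmp from the kept entries
    utmpdump -r < /tmp/wtmp.keep > /var/log/wtmp.new
    # then swap wtmp.new into place, preserving ownership and permissions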