Set up rsyslog to send logs to a remote syslog server but not to messages/syslog - elastic-stack

I am running an ELK stack as a central syslog server, and I set up rsyslog to send log files to it that are not written to /var/log/messages by default.
The setup is working very well, but since I made that configuration the external log files also show up in the messages file, which bloats it and makes debugging normal system logs difficult.
I want the logs to be sent to the syslog server, but not into the messages file.
This is my current configuration:
111-elk-syslog.conf:
*.* @@IP_OF_THE_SYSLOGSERVER:514
101-external-log.conf:
$ModLoad imfile
$InputFileName PATH_TO_LOGFILE
$InputFileTag FILE_TAG
$InputFileStateFile FILE_TAG
$InputFileFacility local3
$InputRunFileMonitor
I know I could work around this by using Filebeat, but rsyslog is working very well in my environment, and this application is the only one logging so much that this is an actual problem.

I don't understand your setup in detail, but it may help to know that, in general, if you don't want rsyslogd to do any further handling of a message you have matched, you simply repeat the filter on the next line with & and use the action stop. For example, you might try:
*.* @@IP_OF_THE_SYSLOGSERVER:514
& stop
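
To tie this back to the question's two files, here is a minimal sketch using the same placeholders (PATH_TO_LOGFILE, FILE_TAG, IP_OF_THE_SYSLOGSERVER): the file read by imfile is assigned facility local3, a local3 filter forwards it over TCP (@@) and then discards it with & stop, so it never reaches the rule that writes /var/log/messages.

# 101-external-log.conf - read the external log file into facility local3
$ModLoad imfile
$InputFileName PATH_TO_LOGFILE
$InputFileTag FILE_TAG
$InputFileStateFile FILE_TAG
$InputFileFacility local3
$InputRunFileMonitor

# 111-elk-syslog.conf - forward local3 to the ELK server, then stop handling it
local3.* @@IP_OF_THE_SYSLOGSERVER:514
& stop
# everything else is still forwarded and still reaches the local files
*.* @@IP_OF_THE_SYSLOGSERVER:514

Ordering matters here: the local3 filter and its & stop must run before the distribution's default rules that write the messages file, which is normally the case because the /etc/rsyslog.d/ fragments are included early in rsyslog.conf on most distributions.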

May I damage software by killing the process?

Here is my situation: I have an authenticated proxy at work. I use proxyswitcher.net to change/remove the proxy according to the network identification. I also made a PowerShell script that kills Dropbox, changes the config file to the one with/without manual proxy configuration, and restarts it.
From what I read, the risk of doing this is that some file may be corrupted while Dropbox is writing it. I don't think this is a problem, as the script runs at the moment the network is identified. Also, Dropbox is very good at handling this type of error.
But is there a better way, or are there other risks I'm not aware of?
I think that instead of using kill, you could try Stop-Process. For example:
Stop-Process -Name "dropbox"
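
If you want the shutdown to be a little gentler before falling back to a forced stop, a rough sketch along these lines may help. The executable path is a hypothetical placeholder, and CloseMainWindow only does anything if the process has a main window to close:

# Ask Dropbox to exit politely, then force-stop it if it is still running
$procs = Get-Process -Name "Dropbox" -ErrorAction SilentlyContinue
foreach ($p in $procs) {
    $p.CloseMainWindow() | Out-Null       # polite shutdown request
    if (-not $p.WaitForExit(5000)) {      # wait up to 5 seconds
        Stop-Process -Id $p.Id -Force
    }
}
# ... swap the config file here ...
Start-Process "PATH_TO_DROPBOX_EXE"       # placeholder path

A clean exit gives Dropbox a chance to flush whatever it is writing before any hard kill happens.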

Send logs to syslog

I need to send the ContextBroker log to syslog, but the only method I could find is via imfile, which is very inaccurate in timing.
Is there any way to send ContextBroker logs directly to syslog?
The Orion Context Broker only outputs its log information to its log file (which defaults to /tmp/contextBroker.log), and currently we don't have any intention of changing this behavior, sorry.
You could always implement a process that reads from the log file, parses the log contents and forwards the log message to syslog. Doesn't seem too difficult ...
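
A minimal sketch of that idea, assuming the default log path mentioned above and the standard logger utility (the tag and facility are arbitrary choices):

tail -F /tmp/contextBroker.log | logger -t contextBroker -p local3.info

tail -F keeps following the file across rotations, and logger submits each line as it is read, so the syslog timestamps stay close to when the lines were written - usually tighter than a polled imfile input.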

error 404 on hitting http://<host>:8080/cs/REST/

I had successfully configured Oracle WebCenter on one of my VMs.
To access it from my local machine I made some changes to the firewall settings.
After that, the home page is not accessible and I get a 404 error.
i.e., http://<host>:8080/cs/REST/ is not accessible, whereas some other REST URLs are accessible, such as:
http://<host>:8080/cs/REST/types/
http://<host>:8080/cs/REST/sites/
http://<host>:8080/cs/REST/sites/FirstSiteII/
I think something is wrong with my asset type configuration. How can I resolve this?
Any idea would help.
You should be looking at log files which can be generated by the content server.
The “View Server Output” menu provides access to the most recent server output logs.
IIRC, you can set different levels of tracing, and you should select the option(s) relevant to your issue - otherwise the trace log file will contain a huge amount of text, much of it irrelevant to you and making it particularly hard to read.
The log file is timestamped, but it is easier to work with if you have a single user make a single attempt to land on your URL(s).
Server output also contains tracing output if enabled. Tracing is typically enabled while debugging errors. If server output is being captured in a file, the file could grow large if tracing options are enabled. Consider disabling all server tracing options (especially if the “verbose” option is checked) to keep the server output file size in check.
I don't believe that there's anything served at /cs/REST/ - what would you expect to see?

Apache2 reload config from inside the CGI

I am working on a simple Perl app that copies another Perl app and builds all the required Apache config files.
The thing I can't seem to figure out is how to reload the Apache config on the fly. I know I could do a system call and reload Apache there, but that would mean this app would need root access, and that is a little scary.
Is there a way to ask Apache to reload its config files from within the CGI container?
-------------------------Additional info------------------------------
I have done some more research, and the problem is that Apache must be run with elevated privileges to bind to port 80. So one solution would be to set Apache to run on another port and forward that port to 80 via iptables. (This may be a last resort, but it is a very messy solution.)
Here is what gets me: Apache should be able to maintain its current port bindings and recheck its config files; all I am doing is adding another script alias.
Is there any way to add a new script alias without a reload?
You also have these options to reload the config:
/etc/init.d/httpd reload
or
apachectl -k graceful
But unfortunately, those need root as well. A graceful restart differs from a normal restart in that currently open connections are not aborted. A side effect is that old log files will not be closed immediately, which means that if it is used in a log rotation script, a substantial delay may be necessary to ensure that the old log files are closed before processing them.
Also, if running Apache with daemontools you can do this by:
svc -h /service/apache
Sorry to ask a question and then not give someone else the opportunity to answer, but I figured out a solution and I hope it may help someone else.
What I had to do was leave the config alone; it is not possible to reload it in the manner I required without root privileges or some fancy port forwarding (which would make this application less portable than I would like).
So the only thing that Apache appears to load dynamically is the file system.
What I have done is use mod_rewrite to redirect the script requests, and simply put the scripts in /var/www/appname/copyname/cgi-bin/
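
A sketch of what that could look like, assuming the directory layout above (appname and copyname are placeholders) and Apache 2.4 access syntax. Everything is configured once, so adding a new copy needs no reload:

# Map /appname/<copy>/cgi-bin/... onto that copy's cgi-bin directory
RewriteEngine On
RewriteRule ^/appname/([^/]+)/cgi-bin/(.+)$ /var/www/appname/$1/cgi-bin/$2 [L]

# Allow CGI execution under every copy (also configured once, up front)
<Directory "/var/www/appname">
    Options +ExecCGI
    AddHandler cgi-script .cgi .pl
    Require all granted
</Directory>

Because the pattern matches any copy name, deploying a new copy is purely a filesystem operation - which is exactly the one thing Apache evaluates per request rather than at startup.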

Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application's STDERR is supposed to be logging. It isn't reaching nginx (I suppose that makes sense), but I can't find much documentation on where Starman may be logging it - if anywhere. I did have a look at Plack's Middleware modules, but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log -- either one will allow you to send the logging output somewhere other than STDERR (see the sketch after this list).
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this; it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
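
For the first option, a minimal, untested sketch of such a subclass - MyApp and the log path are hypothetical placeholders:

package MyApp::Log;
use strict;
use warnings;
use base 'Catalyst::Log';

# Catalyst::Log ultimately writes its buffered output via _send_to_log,
# which defaults to printing on STDERR; append to a file instead.
sub _send_to_log {
    my $self = shift;
    open my $fh, '>>', '/var/log/myapp/error.log' or return;
    print {$fh} @_;
    close $fh;
}

1;

You would then install it in your application class with something like __PACKAGE__->log( MyApp::Log->new ) before setup() is called.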
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you will discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and redirects them to /dev/null, AND that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as names of files to use instead.
So if you start your Catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
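
For example, using the --background flag mentioned above (paths and app name are placeholders):

MX_DAEMON_STDERR=/var/log/myapp/stderr.log starman --background --port 5000 myapp.psgi

Anything the app prints to STDERR then lands in that file instead of /dev/null.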
These days Starman has an --error-log command-line option which allows you to redirect error messages to a file.
See documentation of starman:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
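
For example (paths and app name are placeholders):

starman --daemonize --error-log /var/log/myapp/error.log myapp.psgi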