Rsyslog logging duplicate lines (exact duplicates) under messages (CentOS - Amazon AMI) - centos

I am using rsyslog v5 to centralize logs to a server. I see exactly duplicated logs under /var/log/messages on my log server, although I do not see duplicate lines in the logs on the distributed servers.
I am using the Amazon CentOS AMI.

I figured it out: I was monitoring 2 files on each server and was forwarding them via *.* @@<%= @log_servers %>:514 twice.
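For context, a minimal sketch of the broken setup and the fix, in rsyslog v5 legacy syntax (the file names and the server address are illustrative placeholders, not the ones from my setup):

```
$ModLoad imfile

# first monitored file
$InputFileName /var/log/app1.log
$InputFileTag app1:
$InputFileStateFile stat-app1
$InputRunFileMonitor

# second monitored file
$InputFileName /var/log/app2.log
$InputFileTag app2:
$InputFileStateFile stat-app2
$InputRunFileMonitor

# Forward everything to the log server exactly ONCE.
# Having this forwarding line twice (e.g. pasted into two config
# snippets) delivers every message twice to the central server.
*.* @@logserver.example.com:514
```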

Related

How do I debug a problem with rsyslog to a remote server?

I have about 15 servers that send their syslog to a common log server. All servers are RedHat 7.7, so rsyslog is a recent version (8.24).
The normal logs get transferred to the common server OK, but there are also some application logs that are watched with the imfile module, and my problem is that only some of them are transferred correctly; others are not transferred at all, or there are days where logs are missing for hours and then are OK for the rest of the day.
How can I debug this problem? I can show a small portion of the custom log definition:
$InputFileName /var/log/log-compression.log
$InputFileTag log-compression-log
$InputFileStateFile stat-log-compression-log
$InputFileSeverity info
$InputFileFacility local2
$InputRunFileMonitor
There are about 20 such sections; some of them work and others do not, even though the syntax is exactly the same. It is deployed with Ansible.
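One hedged way to debug this is to enable rsyslog's own instrumentation. The impstats module periodically writes per-input and per-queue counters, which shows whether each imfile input is reading lines at all and whether the forwarding queue is dropping messages; the legacy $DebugFile/$DebugLevel directives produce a full trace. (One common cause of exactly this symptom, worth ruling out first, is two inputs that accidentally share the same $InputFileStateFile name.) The file paths below are illustrative:

```
# /etc/rsyslog.d/00-debug.conf (illustrative path)
# Periodic internal statistics: look at the imfile and action
# queue counters in the output file
module(load="impstats" interval="60" severity="7"
       log.syslog="off" log.file="/var/log/rsyslog-stats.log")

# Full debug trace (very verbose; enable only temporarily)
$DebugFile /var/log/rsyslog-debug.log
$DebugLevel 2
```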

rsyslog filter for postgresql messages

I've told PostgreSQL to log to syslogd and to tag its messages with "postgresql". These messages appear in /var/log/syslog with that tag.
Now I'd like those messages not to appear in /var/log/syslog but in /var/log/example/postgresql/. I thought I could do that by creating the file /etc/rsyslog.d/25-postgresql.conf with this single line (plus some comments, in real life):
:msg,contains,"postgresql" /var/log/example/postgresql/postgresql.log
Restarting rsyslog doesn't result in postgresql messages going to that file, however: they still go to /var/log/syslog.
Any suggestions as to what I've done wrong?
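In case it helps, two details commonly cause exactly this. First, routing a message to another file does not remove it from /var/log/syslog; you have to discard it afterwards with "& stop" (or "& ~" on older rsyslog). Second, matching on the syslog tag is usually more reliable than matching on the message body, because "postgresql" is in the tag and not necessarily in every message text. A sketch (the destination path is the one from the question, and the directory must exist and be writable by the syslog user):

```
# /etc/rsyslog.d/25-postgresql.conf
# Match on the tag PostgreSQL sets, write to the dedicated file,
# then discard so the message does not also land in /var/log/syslog
:syslogtag, startswith, "postgresql" /var/log/example/postgresql/postgresql.log
& stop
```

Note that the snippet also has to sort before the distribution's default rules file (e.g. 50-default.conf), which the 25- prefix already ensures.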

Load properties into application based on Weblogic managed server

I have this requirement - I have several managed servers running on my Weblogic (version 12.x). There are multiple machines as well.
Machine 1: Managed server 1, 2
Machine 2: Managed server 3, 4
I have a Spring Boot based application (war) that is deployed across all managed servers. It has both an MDB (to read messages from a JMS queue) and a SOAP web service.
The queue that it reads messages from is, however, targeted/deployed only on a few managed servers: 1 and 3.
Now, I don't want my application to fail or start complaining when it doesn't find the queue on managed servers 2 and 4. Hence, I wish to load my MDB based on a property/configuration specific to each managed server.
Is there any way to achieve this?
You could add a custom system property to the server start parameters of servers 2 and 4 in the admin console, e.g. "-DignoreMDB=1", and read it using a System.getProperty("ignoreMDB") != null call. Note that you need to restart the node manager first and your managed servers second for modifications to the server start parameters to take effect.

NFS v4 with FreeBSD as both client and server mounts OK, but reads and writes on the filesystem fail with Input/output error

I have successfully mounted and used NFS version 4 with a Solaris server and a FreeBSD client.
The problem occurs with a FreeBSD server and a FreeBSD client at version 4. Version 3 works excellently.
I have used the FreeBSD NFS server since FreeBSD version 4.5 (back then with IBM AIX clients).
The problem:
the mount succeeds, but no principals appear in the Kerberos cache, and when trying to read or write on the mounted filesystem I get the error: Input/output error
The nfs/server-fqdn@REALM and nfs/client-fqdn@REALM principals are created on the Kerberos server and stored properly in keytab files on both sides.
I obtain TGTs from the KDC using the above principals for root's Kerberos cache on both sides.
I start services properly:
file /etc/rc.conf
rpcbind_enable="YES"
gssd_enable="YES"
rpc_statd_enable="YES"
rpc_lockd_enable="YES"
mountd_enable="YES"
nfsuserd_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
then I start the services:
at the client: rpcbind, gssd, nfsuserd;
at the server: all of the above, with the following exports file:
V4: /marble/nfs -sec=krb5:krb5i:krb5p -network 10.20.30.0 -mask 255.255.255.0
I mount:
# mount_nfs -o nfsv4 servername:/ /my/mounted/nfs
#
# mkdir /my/mounted/nfs/e
# mkdir: /my/mounted/nfs/e: Input/output error
#
The same result even for an ls command.
klist does not show any new principals in root's cache, or in any other cache.
I love the amazing performance of version 3, but I need the local file locking feature of NFSv4.
The second reason is security: I need Kerberised RPC calls (-sec=krb5p).
If anyone of you has achieved this using a FreeBSD server for NFS version 4, please give feedback on this question; I'll be glad if you do.
Comments are not good for giving code examples, so here is the setup of a FreeBSD client and FreeBSD server that works for me. I don't use Kerberos, but if you get it working with this minimal configuration then you can add Kerberos afterwards (I believe).
Server rc.conf:
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_flags="-r"
Server /etc/exports:
/parent/path1 -mapall=1001:1001 192.168.2.200
/parent/path2 -mapall=1001:1001 192.168.2.200
... (more shares)
V4: /parent/ -sec=sys 192.168.2.200
Client rc.conf:
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Client fstab:
192.168.2.100:/path1/ /mnt/path1/ nfs rw,bg,late,failok,nfsv4 0 0
192.168.2.100:/path2/ /mnt/path2/ nfs rw,bg,late,failok,nfsv4 0 0
... (more shares)
As you can see, the client mounts only what comes after the /parent/ path specified in the V4 line on the server. 192.168.2.100 is the server IP and 192.168.2.200 is the client IP. This setup allows only that one client to connect to the server.
I hope I haven't missed anything. BTW, please raise questions like this on Super User or Server Fault rather than Stack Overflow. I am surprised this question hasn't been closed yet because of that ;)

Implementing a distributed grep

I'm trying to implement a distributed grep. How can I access the log files from different systems? I know I need to use the network, but I don't know whether to use ssh, telnet, or something else. What information do I need to know about the machines I am going to connect to from my machine? I want to be able to connect to different Linux machines, read their log files, and pipe them back to my machine.
Your system contains a number of Linux machines which produce log data (SERVERs), and one machine which you operate (CLIENT). Right?
Issue 1) file to be accessed.
In general, a log file is locked by the software that produces the log data, because that software has to be able to write data into the log file at any time.
To access the log file from other software, you need to prepare an unlocked copy of the log data.
This may require some modification of the software's setup and/or of the software (program) itself.
Issue 2) program to serve log files.
To get log data from a SERVER, each SERVER has to run some server program.
For remote shell access, rshd (remote shell daemon) is needed (ssh is a combination of rsh and secure communication).
For FTP access, ftpd (file transfer protocol daemon) is needed.
Which software is needed depends on how the CLIENT accesses the SERVERs.
Issue 3) distributed grep.
You use the words 'distributed grep'. What do you mean by them?
What is distributed in your 'distributed grep'?
Many scenarios come to mind:
a) Log files are distributed across the SERVERs. All log data is collected on the CLIENT, and the grep program works on the collected log data at the CLIENT.
b) Log files are distributed across the SERVERs. The grep function is also implemented on each SERVER. The CLIENT sends a request to each SERVER to get the result of grep applied to its log data, and the results are collected at the CLIENT.
etc.
What is your plan?
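Scenario (b) above can be sketched in a few lines of Python: each SERVER runs grep on its own log file over ssh, and the CLIENT collects the matches. The hostnames and log path are placeholders, and the command runner is injectable so the collection logic can be exercised without real ssh access (key-based ssh authentication is assumed for the real runner):

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor


def ssh_runner(host, command):
    """Run `command` on `host` via ssh and return its stdout.
    Assumes key-based authentication is already set up."""
    result = subprocess.run(["ssh", host, command],
                            capture_output=True, text=True)
    return result.stdout


def distributed_grep(hosts, pattern, path, run=ssh_runner):
    """Scenario (b): each SERVER greps its own log file;
    the CLIENT collects (host, matching_line) pairs."""
    # Quote pattern and path so they survive the remote shell
    command = "grep -h %s %s" % (shlex.quote(pattern), shlex.quote(path))
    matches = []
    with ThreadPoolExecutor() as pool:
        # map() preserves host order in the results
        results = pool.map(lambda h: (h, run(h, command)), hosts)
        for host, output in results:
            for line in output.splitlines():
                matches.append((host, line))
    return matches
```

With real servers you would call something like distributed_grep(["server1", "server2"], "ERROR", "/var/log/app.log"). Running grep on the SERVER side keeps the network traffic down to just the matching lines, which is the main advantage of scenario (b) over (a).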
Issue 4) access to SERVERs.
The necessity of secure communication depends on the locations of the machines and the networks between them.
If all machines are in one room/house, and the networks between them are not connected to the Internet, secure communication is not necessary.
If the log data is top secret, you may need to encrypt the data before sending it over the network.
How important is your log data?
At a very early stage of development, you should determine the things described above.
This is my advice.