rsyslog 5.8 imfile outside /var/log not picking up log files - redhat

I would like to pick up logs of different types from various locations other than /var/log and send them to a central location.
Using RHEL 6.6 and rsyslog 5.8, the configuration works fine when the path is within /var/log. If I use another path, such as /opt/appname/log/file.log, the rsyslog client does not pick up the log. I do not see any error or message when running rsyslogd in debug mode.
Example:
Client:
...
$InputFileName /opt/appname/test.log
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1
$InputFileSeverity info
$InputFilePersistStateInterval 200
$InputFileFacility local3 # also tried with other local facilities
$InputRunFileMonitor
...
Server:
...
$template HostAudit, "/opt/logs/%HOSTNAME%/test.log" # tried different paths
$template auditFormat, "%msg%\n"
local3.* ?HostAudit;auditFormat
...
Any recommendations? I appreciate your help!
Bill

I would first try these:
Verify that the state file names are unique
Verify that every $InputFileName points to an existing regular file
Remove some of the monitored files from the configuration. There could be a problem with just one of them, which would make rsyslog ignore the rest.
I had this with "$InputFileStateFile tomcat-log" for each of the individual tomcat logs. Each state file name needs to be unique. For me it worked by changing it to instances of:
"$InputFileStateFile tomcat-manager"
"$InputFileStateFile tomcat-localhost"
etc...
Another option is to just add numbers to the end of the state file names:
"$InputFileStateFile tomcat-log1"
"$InputFileStateFile tomcat-log2"

Related

mosquitto.db file does not get created

In the process of testing mosquitto Persistence, I have removed mosquitto.db from Persistence location to enable a fresh start. But, to my chagrin, the file does not get created even after I restart the broker.
Did I get it wrong that the broker creates the .db file as per the config? Any pointers on how to get a fresh mosquitto.db file would be appreciated.
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
max_inflight_messages 1
persistence true
persistence_file mosquitto.db
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
password_file /etc/mosquitto/passwd
allow_anonymous false
max_queued_messages 1000000
autosave_interval 30
# autosave_on_changes false
If you delete the file while the broker is running it is likely to not get recreated because the broker will already hold an open file handle.
Deleting a file while it's open by a process does not actually remove the file, just its entry in the directory; the process will continue to read/write the file until the handle is closed.
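You can confirm this on Linux with lsof, which marks files that are deleted but still held open, e.g.:
lsof -p $(pidof mosquitto) | grep deleted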
If you restart mosquitto after deleting the file, it won't write the file until it actually has some data to write to it, e.g.:
have a subscribed client (at QoS 1 or 2)
send some messages
disconnect the subscriber
send more messages
shutdown mosquitto
The file should now be written containing the messages that were published while the client was disconnected.
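As a rough sketch of that sequence with the stock command-line clients (the topic, client id, and credentials are placeholders; the config above uses a password file, hence -u/-P):
# subscribe at QoS 1 with a durable session (-c disables clean session), then stop with Ctrl+C
mosquitto_sub -t test/topic -q 1 -i sub1 -c -u user -P pass
# publish while the subscriber is disconnected
mosquitto_pub -t test/topic -q 1 -m "queued" -u user -P pass
# now stop the broker; mosquitto.db should be written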

How to configure an Openfire server with HttpUploadComponent for offline file transfer?

I use Openfire with Conversations and would like to implement offline file transfer with HttpUploadComponent. I copied the httpupload folder inside the openfire folder and made the corresponding configurations in Openfire.
I also installed Python and configured the config.yml file in the httpupload folder like this:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I ran the HttpUpload server as well.
After starting the Python server, if you go to Openfire > Server Settings > External Components > View the external components, the first line shows whether the session is created or not.
After all of this, when I try to send a file from the Android client, it fails and gives me an error ending in 403.
Where is my problem? Thanks.
The error ends with 403, which indicates the problem is authorization on the HttpUploadComponent end.
I checked the code of this component: on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it reads the variable "storage_path" from the configuration to decide in which directory to place the file.
As mentioned in your question, you have set storage_path : ./var/lib/httpupload/
But you are on a Windows machine, and this path is invalid there.
Try giving a valid Windows OS path.
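For example, something like this in config.yml (the directory is only an illustration; any folder the component can write to should do, and forward slashes are generally safe in Python on Windows):
storage_path : C:/httpupload/var/lib/httpupload/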

cannot attach to service manager-error

I am new to Firebird and would like to trace my Firebird database activities, hence I am trying to use the Audit/Trace Services.
My Firebird database is on server 10.7.105.8.
I am running this command in my cmd:
C:\Program Files\Firebird\Firebird_2_5\bin>fbtracemgr -se 10.7.105.8:3050:service_mgr -user SYSDBA -password masterkey -start -name "User Trace 1" -config "fbtrace.conf" > C:\Users\Babak\Desktop\trace.out
but I get this error:
Can not attach to service manager
Service 3050 : Service_mgr is not defined
What should I do to solve this problem?
thank you so much
EDIT
thank you for your hints. I think my trace process works fine, but I can't find the information I need in my trace.out file.
When I start the trace, my command prompt shows the session starting. If at this point I take a look in my trace.out, I can only see this:
Trace Session ID 3 Started
I run some select queries in my Firebird database and then finish the trace with Ctrl+C; the only things I can then see in my trace.out are something like this:
Trace session ID 3 started
2015-07-08 10:49:59.868874 ***** loading fbclient.dll proc=4116 64Bit DLL Preload
2015-07-08 10:49:59.869066 GetDllDirectoryA=""
2015-07-08 10:49:59.869075 GetModuleFileNameA="C:\Program Files\Firebird\Firebird_2_5\bin\fbclient.dll"
2015-07-08 10:49:59.869086 Log-Level is set to 0
2015-07-08 10:49:59.869096 fbclient.dll loaded by: C:\Program Files\Firebird\Firebird_2_5\bin\fbtracemgr.exe
2015-07-08 10:49:59.869113 ***** dimensio integration successfully fbclient.dll
2015-07-08 10:58:10.091330 ***** cleanup unload fbclientorg.dll proc=4116
and no more info about the queries I have run.
Could you please tell me what I have done wrong, or what else I should do?
As Mark says, check file "fbtrace.conf". This is a text file and you will see something like this:
# default database section
#
<database>
# Do we trace database events or not
enabled false
# Operations log file name. For use by system audit trace only
#log_filename
....
....
# Put transaction start/end records
log_transactions false <-- TO TEST, SET THIS TO TRUE
# Put sql statement prepare records
log_statement_prepare false <-- TO TEST, SET THIS TO TRUE
Set to true what you need to trace, save the file and check the result.
Firebird connection strings are of the format:
host/port:database
Where /port is optional and defaults to 3050, and database is either the alias or path of a database, or the name of a service. Replace :3050 with /3050 (or leave it off entirely).
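Applied to the command from the question, that would be:
fbtracemgr -se 10.7.105.8/3050:service_mgr -user SYSDBA -password masterkey -start -name "User Trace 1" -config "fbtrace.conf" > C:\Users\Babak\Desktop\trace.out
or simply 10.7.105.8:service_mgr, letting the port default to 3050.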
The following worked for me:
Open start menu
Search for services and open it
Search for Firebird Guardian in the services list.
Start Firebird Guardian if it is stopped, or restart it if it is running.
Now try to connect to your server. It should work.
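If you prefer the command line, restarting the Guardian from an elevated prompt should also work (FirebirdGuardianDefaultInstance is the default service name for Firebird 2.5; check yours with sc query if it differs):
net stop FirebirdGuardianDefaultInstance
net start FirebirdGuardianDefaultInstance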

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run pig locally, installed using homebrew, to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: Just checked a direct download (vs. homebrew) and it doesn't seem to work either
You should check that your Hadoop configuration files have correct configuration data.
Have a look in your hadoop/conf directory.
Have a look inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
Finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated xml files. Once the temporary directory was discovered, it all fell quickly into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
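If you suspect the same issue, a quick way to scan a generated job.xml for the control character is something like this (grep -P is GNU grep; the perl variant works on OS X too, and the staging path follows the example above):
grep -P '\x02' /tmp/hadoop-*/mapred/staging/*/.staging/*/job.xml
# or
perl -ne 'print "$.: $_" if /\x02/' job.xml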

Zookeeper - three nodes and nothing but errors

I have three ZooKeeper nodes. All ports are open. The IP addresses are correct. Below is my config file. All nodes were booted by Chef, and all have the same install and config file.
# The number of milliseconds of each tick
tickTime=3000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Here is the error from one of the nodes. So... I am rather confused about how I could get an error, since the config is rather vanilla. All three nodes are doing the same thing.
2012-07-16 05:16:57,558 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2012-07-16 05:16:57,567 - INFO [main:QuorumPeerConfig#310] - Defaulting to majority quorums
2012-07-16 05:16:57,572 - FATAL [main:QuorumPeerMain#83] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /etc/zookeeper/conf/zoo.cfg
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: serverid replace this text with the cluster-unique zookeeper's instance id (1-255) is not a number
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:333)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:106)
... 2 more
You need to create a file named myid and put it into the zookeeper var directory, one per server. It consists of a single line containing only the text of that machine's id: the myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
see more at http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
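For example, on server 1 (using the dataDir from the config above):
echo 1 > /var/lib/zookeeper/myid
and likewise echo 2 and echo 3 on the other two servers.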
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
These are your servers and IPs.
Then create a myid file on each of the nodes, with value 1 on 111.111.111, 2 on 111.111.112, and 3 on 111.111.113, under the directory given by dataDir (/var/lib/zookeeper).
If you put the value in quotes ("1") in the myid file, you will get a NumberFormatException, and "Invalid config, exiting abnormally" if the myid file is created with any extension.
Therefore, create the myid file without any extension and place the integer values 1, 2, 3 on the corresponding servers, without double quotes.