mosquitto.db file does not get created - persistence

In the process of testing mosquitto persistence, I removed mosquitto.db from the persistence location to enable a fresh start. But, to my chagrin, the file does not get created even after I restart the broker.
Did I get it wrong that the broker creates the .db file as per the config? Any pointers on how to get a fresh mosquitto.db file would be appreciated.
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
pid_file /var/run/mosquitto.pid
max_inflight_messages 1
persistence true
persistence_file mosquitto.db
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
password_file /etc/mosquitto/passwd
allow_anonymous false
max_queued_messages 1000000
autosave_interval 30
# autosave_on_changes false

If you delete the file while the broker is running, it is likely not to be recreated, because the broker will already hold an open file handle.
Deleting a file while it's open by a process does not actually remove the file, just its entry in the directory; the process will continue to read/write the file until the handle is closed.
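You can confirm that the broker still holds the deleted file open with something like this (a quick check, assuming a Linux host where the broker process is named mosquitto):
# file descriptors the broker holds that point at deleted files
lsof -p $(pidof mosquitto) | grep -i deleted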
If you restart mosquitto after deleting the file, it won't write a new file until it actually has some data to persist, e.g.:
have a subscribed client (at QoS 1 or 2)
send some messages
disconnect the subscriber
send more messages
shut down mosquitto
The file should now be written containing the messages that were published while the client was disconnected.
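A minimal way to walk through those steps with the stock command-line clients (the topic, client id, and credentials here are placeholders for whatever your password_file defines):
# subscribe with a persistent session (QoS 1, clean session disabled)
mosquitto_sub -t test/persist -q 1 -c -i sub1 -u user -P pass
# stop the subscriber (Ctrl+C), then publish while it is offline
mosquitto_pub -t test/persist -q 1 -m "queued while offline" -u user -P pass
# stop the broker; mosquitto.db should now be written out on shutdown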

Related

fluentbit writes to /var/log/messages

I'm running fluentbit (td-agent-bit) on a CentOS system in order to ship all logs to a centralized system. Every time fluentbit pushes a record to the remote location, it adds a record to /var/log/messages as well, leading to a huge log file.
Jul 21 08:48:53 hostname td-agent-bit: [2020/07/21 08:48:53] [ info] [out_azure] customer_id=XXXXXXXXXXXXXXXXXXXXXXXX, HTTP status=200
Any idea how I can stop a service (td-agent-bit) from writing to /var/log/messages? I couldn't find any configuration parameter (e.g. verbose) in the fluentbit documentation. Thanks!
Your log_level is "info", which includes a lot of pipeline messages. You can decrease the log level inside the output section of the plugin to "error" only, e.g.:
[OUTPUT]
name azure
match *
log_level error
Note: you can also decrease the general log_level in the main [SERVICE] section.
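That would look something like this (a sketch; the [SERVICE] section normally sits at the top of the main configuration file):
[SERVICE]
    log_level error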

Too many empty chk-* directories with Flink checkpointing using RocksDb as state backend

Too many empty chk-* directories exist in the location where I have set up RocksDB as the state backend.
I am using FlinkKafkaConsumer to get data from a Kafka topic, RocksDB as the state backend, and I am just printing the messages received from Kafka.
Following are the properties I have set to configure the state backend:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(100); // checkpoint interval, in milliseconds
env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(50); // milliseconds
env.getCheckpointConfig().setCheckpointTimeout(60); // timeout, in milliseconds
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
env.getCheckpointConfig().enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
StateBackend rdb = new RocksDBStateBackend("file:///Users/user/Documents/telemetry/flinkbackends10", true);
env.setStateBackend(rdb);
env.execute("Flink kafka");
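For context, the part of the job the question summarizes (consume from Kafka and print) would look roughly like this (a sketch; the topic name, group id, and bootstrap servers are placeholders):
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "telemetry");
// consume the topic as plain strings and print each record
FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>("telemetry-topic", new SimpleStringSchema(), props);
env.addSource(consumer).print();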
In flink-conf.yaml I have also set this property:
state.checkpoints.num-retained: 3
I am using a simple 1-node flink cluster (using ./start-cluster.sh). I started the job and kept it running for 1 hour, and I see too many chk-* directories created under the /Users/user/Documents/telemetry/flinkbackends10 location:
chk-10 chk-12667 chk-18263 chk-20998 chk-25790 chk-26348 chk-26408 chk-3 chk-3333 chk-38650 chk-4588 chk-8 chk-96
chk-10397 chk-13 chk-18472 chk-21754 chk-25861 chk-26351 chk-26409 chk-30592 chk-34872 chk-39405 chk-5 chk-8127 chk-97
chk-10649 chk-13172 chk-18479 chk-22259 chk-26216 chk-26357 chk-26411 chk-31097 chk-35123 chk-39656 chk-5093 chk-8379 chk-98
chk-1087 chk-14183 chk-18548 chk-22512 chk-26307 chk-26360 chk-27055 chk-31601 chk-35627 chk-4 chk-5348 chk-8883 chk-9892
chk-10902 chk-15444 chk-18576 chk-22764 chk-26315 chk-26377 chk-28064 chk-31853 chk-36382 chk-40412 chk-5687 chk-9 chk-99
chk-11153 chk-15696 chk-18978 chk-23016 chk-26317 chk-26380 chk-28491 chk-32356 chk-36885 chk-41168 chk-6 chk-9135 shared
chk-11658 chk-16201 chk-19736 chk-23521 chk-26320 chk-26396 chk-28571 chk-32607 chk-37389 chk-41666 chk-6611 chk-9388 taskowned
chk-11910 chk-17210 chk-2 chk-24277 chk-26325 chk-26405 chk-29076 chk-32859 chk-37642 chk-41667 chk-7 chk-94
chk-12162 chk-17462 chk-20746 chk-25538 chk-26337 chk-26407 chk-29581 chk-33111 chk-38398 chk-41668 chk-7116 chk-95
out of which only chk-41668, chk-41667, chk-41666 have data.
The rest of the directories are empty.
Is this expected behavior? How can I delete those empty directories? Is there some configuration for deleting empty directories?
Answering my own question here:
In the UI I was seeing a 'checkpoint expired before completing' error in the checkpointing section, and found out that to resolve the error we need to increase the checkpoint timeout.
I increased the timeout from 60 to 500 and it started deleting the empty chk-* directories.
env.getCheckpointConfig().setCheckpointTimeout(500);
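Note that both the checkpoint interval and the timeout passed to these methods are in milliseconds, so the original settings gave each checkpoint only 60 ms to finish while triggering one every 100 ms. A more forgiving sketch (the values are illustrative, not tuned):
// checkpoint every second, and allow each checkpoint up to a minute to complete
env.enableCheckpointing(1000);
env.getCheckpointConfig().setCheckpointTimeout(60000);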

rsyslog 5.8 imfile outside /var/log not picking up log files

I would like to pick up logs of different types from various locations other than /var/log and send them to a central location.
Using RH 6.6 and rsyslog 5.8, the configuration works fine when using a path within /var/log. If I use another path, like /opt/appname/log/file.log, the rsyslog client does not pick up the log. I do not see any error or message when running rsyslogd in debug mode.
Example:
Client:
...
$InputFileName /opt/appname/test.log
$InputFileTag APPNAME1
$InputFileStateFile stat-APPNAME1
$InputFileSeverity info
$InputFilePersistStateInterval 200
$InputFileFacility local3 # also tried with other local facilities
$InputRunFileMonitor
...
Server:
...
$template HostAudit, "/opt/logs/%HOSTNAME%/test.log" # tried different paths
$template auditFormat, "%msg%\n"
local3.* ?HostAudit;auditFormat
...
Any recommendations? I appreciate your help!
Bill
I would first try these:
Verify that the state file names are unique
Verify that every $InputFileName points to an existing regular file
Remove some of the files that you want to be monitored from the configuration. It could be that there is a problem with only one of the monitored files, which would make rsyslog ignore the rest of them.
I had this with "$InputFileStateFile tomcat-log" for each of the individual tomcat logs. Each state file name needs to be unique. For me it worked by changing it to instances of:
"$InputFileStateFile tomcat-manager"
"$InputFileStateFile tomcat-localhost"
etc...
Another option is to just add numbers to the end of the state file names.
"$InputFileStateFile tomcat-log1"
"$InputFileStateFile tomcat-log2"

How to configure open-fire server with HttpUploadComponent for offline file transferring?

I use Openfire with Conversations and would like to implement offline file transferring with HttpUploadComponent. I have copied the httpupload folder inside the openfire folder as in the screenshot below:
Then I did the below configurations in Openfire:
I also installed Python and configured the config.yml file in the httpupload folder like below:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I ran the HttpUpload server as well:
After starting the Python server, if you go to Openfire > Server Settings > External Components and view the external components [in the first line], you'll see whether the session is created or not:
After all of this, when I want to send a file from the Android client, it fails and gives me this error:
Where is my problem? Thanks.
In the attached error screenshot, the last word is 403, which indicates that the problem is authorization-related on the HttpUploadComponent end.
I started to check the code of this component, and on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it picks up the variable "storage_path" from the configuration to decide the directory in which to place the file.
Now, as mentioned in your question, you have set storage_path : ./var/lib/httpupload/
But you are on a Windows machine, and this path is invalid there.
Try giving a valid Windows OS path.
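For example, something like this in config.yml (the exact directory is an assumption; any folder the component can write to will do):
storage_path: C:/httpupload/storage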

Zookeeper - three nodes and nothing but errors

I have three zookeeper nodes. All ports are open. The IP addresses are correct. Below is my config file. All nodes were booted by chef and all have the same install and config file.
# The number of milliseconds of each tick
tickTime=3000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Here is the error for one of the nodes. So... I am rather confused about how I could get an error, since the config is rather vanilla. All three nodes are doing the same thing.
2012-07-16 05:16:57,558 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2012-07-16 05:16:57,567 - INFO [main:QuorumPeerConfig#310] - Defaulting to majority quorums
2012-07-16 05:16:57,572 - FATAL [main:QuorumPeerMain#83] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /etc/zookeeper/conf/zoo.cfg
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: serverid replace this text with the cluster-unique zookeeper's instance id (1-255) is not a number
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:333)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:106)
... 2 more
You need to create a file named myid and put it into the zookeeper data directory, one for each server. It consists of a single line containing only the text of that machine's id, so the myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
See more at http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
If
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
are your servers and IPs, then create a myid file on each of the nodes, with value 1 on 111.111.111, 2 on 111.111.112, and 3 on 111.111.113, under the data directory (dataDir=/var/lib/zookeeper).
If you place the value "1" (with quotes) in the myid file you will get a NumberFormatException, and "Invalid config, exiting abnormally" if the myid file is created with any extension.
Therefore just create the myid file without any extension and place the integer values 1, 2, 3 in the corresponding servers, without double quotes.
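On each node that can be a one-liner (assuming dataDir=/var/lib/zookeeper as in the config above):
echo 1 > /var/lib/zookeeper/myid   # on 111.111.111
echo 2 > /var/lib/zookeeper/myid   # on 111.111.112
echo 3 > /var/lib/zookeeper/myid   # on 111.111.113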