Telegraf inputs.tail with zimbra.log - grafana

I have some questions: how can I set up the telegraf.conf file to collect logs from the "zimbra.conf" file?
I have tried this config, but it does not work :(((
I want to send these logs to grafana.
One of the lines of "zimbra.conf", for example:
Oct 1 10:20:46 webmail postfix/smtp[7677]: BD5BAE9999: to=user#mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)
And I do not understand exactly how "grok_patterns =" works.
[[inputs.tail]]
files = ["/var/log/zimbra.log"]
from_beginning = false
grok_patterns = ['%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
name_override = "zimbra_access_log"
grok_custom_pattern_files = []
grok_custom_patterns = '''
TS_UNIX %{MONTH}%{SPACE}%{MONTHDAY}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}
TS_CUSTOM %{MONTH}%{SPACE}%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
'''
grok_timezone = "Local"
data_format = "grok"

I have copied your example line into a log file called Prueba.txt which contains the following lines:
Oct 3 00:52:32 webmail postfix/smtp[7677]: BD5BAE9999: to=user#mail.com, relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, dsn=2.0.0, status=sent (250 2.0$
Oct 13 06:25:01 webmail systemd-logind[949]: New session 229478 of user zimbra.
Oct 13 06:25:02 webmail zmconfigd[27437]: Shutting down. Received signal 15
Oct 13 06:25:02 webmail systemd-logind[949]: Removed session c296.
Oct 13 06:25:03 webmail sshd[28005]: Failed password for invalid user julianne from 120.131.2.210 port 10570 ssh2
I have been able to parse the data with this configuration of the tail.input plugin:
[[inputs.tail]]
files = ["Prueba.txt"]
from_beginning = true
data_format = "grok"
grok_patterns = ['%{TIMESTAMP_ZIMBRA} %{GREEDYDATA:source} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}']
grok_custom_patterns = '''
TIMESTAMP_ZIMBRA (\w{3} \d{1,2} \d{2}:\d{2}:\d{2})
'''
name_override = "log_frames"
You need to match the input string with regular expressions. For that there are some predefined patterns such as GREEDYDATA = .* that you can use to match your input (other examples would be NUMBER = (?:%{BASE10NUM}) and BASE16NUM = (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))). You can also define your own patterns in grok_custom_patterns. Take a look at this website with some patterns: https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Apx-GrokPatterns/GrokPatterns_title.html
In this case I defined a TIMESTAMP_ZIMBRA pattern for matching Oct 3 00:52:32, Oct 03 00:52:33 and similar inputs.
Here is the metric collected by Prometheus:
# HELP log_frames_delay Telegraf collected metric
# TYPE log_frames_delay untyped
log_frames_delay{delays="0.09/0.01/0.58/0.19",dsn="2.0.0",host="localhost.localdomain",message="BD5BAE9999:",path="Prueba.txt",program="postfix/smtp",relay="mo94.cloud.mail.com[92.97.907.14]:25",source="webmail",status="sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)",to="user#mail.com"} 0.73
P.S.: Ensure that telegraf has access to the log files.
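As an added example (not from the original post and not Telegraf code), here is the answer's grok expression written as a plain Python regex, run over the question's postfix line and one of the answer's sshd lines; it also splits the postfix key=value pairs into the tags and the numeric delay field that the Prometheus metric above shows. Assumptions: a padded day ("Oct  3") is accepted, the host is matched with \S+ rather than GREEDYDATA, and the year (absent from syslog timestamps) is supplied by the caller.

```python
import re
from datetime import datetime

# Regex form of the grok expression from the answer above:
#   %{TIMESTAMP_ZIMBRA} %{GREEDYDATA:source} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}
# Assumptions of this example, not in the original post: the day may be padded with a second space
# ("Oct  3"), as BSD syslog writes it, and the host is \S+ rather than GREEDYDATA, because a greedy
# .* could otherwise swallow "host program[pid]:" when the message contains another ": ".
LINE_RE = re.compile(
    r"^(?P<timestamp>[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<source>\S+) "
    r"(?P<program>[^\[: ]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

# Postfix delivery messages look like "<queue id>: key=value, key=value, ..."
POSTFIX_RE = re.compile(r"^(?P<queue_id>[0-9A-Za-z]+): (?P<pairs>\w+=.*)$")

def parse_line(line, year=None):
    """Return the grok-style captures for one syslog line, or None if it does not match."""
    m = LINE_RE.match(line.rstrip("\n"))
    if not m:
        return None
    rec = m.groupdict()
    # Syslog timestamps carry no year, so one has to be supplied (Telegraf assumes the current one).
    year = datetime.now().year if year is None else year
    stamp = " ".join(rec["timestamp"].split())  # "Oct  3 00:52:32" -> "Oct 3 00:52:32"
    rec["time"] = datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S")
    return rec

def postfix_fields(message):
    """Split a postfix message into the tag/field shape of the Prometheus metric in the answer:
    numeric values (delay) become fields, everything else (to, relay, dsn, status, ...) a tag."""
    m = POSTFIX_RE.match(message)
    if not m:
        return None
    tags, fields = {}, {}
    for pair in m.group("pairs").split(", "):
        key, _, value = pair.partition("=")
        try:
            fields[key] = float(value)      # "delay=0.73"
        except ValueError:
            tags[key] = value               # "dsn=2.0.0", "delays=0.09/0.01/0.58/0.19", ...
    return {"queue_id": m.group("queue_id"), "tags": tags, "fields": fields}

# Constructed sample: the question's postfix line plus one of the answer's sshd lines.
SAMPLE_LINES = [
    "Oct 1 10:20:46 webmail postfix/smtp[7677]: BD5BAE9999: to=user#mail.com, "
    "relay=mo94.cloud.mail.com[92.97.907.14]:25, delay=0.73, delays=0.09/0.01/0.58/0.19, "
    "dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 4C25fk2pjFz32N5)",
    "Oct 13 06:25:03 webmail sshd[28005]: Failed password for invalid user julianne "
    "from 120.131.2.210 port 10570 ssh2",
]

if __name__ == "__main__":
    for line in SAMPLE_LINES:
        rec = parse_line(line)
        print(rec["time"], rec["source"], rec["program"], rec["pid"])
        print("   ", postfix_fields(rec["message"]) or rec["message"])
```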

Related

flume-ng throws Kafka topic must be specified

I'm trying to pull data off my kafka topic and write it to HDFS, and appear to have my flume conf identical to what I've seen in several examples, but I can't seem to get around the below error. I can consume from the topic through python, so I know I'm ok there. I'm on flume version 1.6.0 and java 9.0.1. What am I doing wrong to make it not accept the kafka topic?
09 Jul 2018 17:17:26,973 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:145) -Creating channels
09 Jul 2018 17:17:26,984 INFO [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:42) - Creating instance of channel kafka_hdfs_channel type memory
09 Jul 2018 17:17:26,989 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:200) - Created channel kafka_hdfs_channel
09 Jul 2018 17:17:26,989 INFO [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:41) - Creating instance of source kafka_source, type org.apache.flume.source.kafka.KafkaSource
09 Jul 2018 17:17:26,993 ERROR [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadSources:361) - Source kafka_source has been removed due to an error during configuration
org.apache.flume.conf.ConfigurationException: Kafka topic must be specified.
at org.apache.flume.source.kafka.KafkaSource.configure(KafkaSource.java:180)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:326)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:97)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)}
And here is my flume config:
agentCDIS.sources = kafka_source
agentCDIS.channels = kafka_hdfs_channel
agentCDIS.sinks = hdfs_sink
agentCDIS.sources.kafka_source.type = org.apache.flume.source.kafka.KafkaSource
agentCDIS.sources.kafka_source.kafka.bootstrap.servers = 10.4.3.61:9092, 10.4.3.62:9092, 10.4.3.63:9092
agentCDIS.sources.kafka_source.kafka.topic = test
agentCDIS.sources.kafka_source.kafka.consumer.group.id = cn_flume_group
agentCDIS.sources.kafka_source.channels = kafka_hdfs_channel
agentCDIS.sources.kafka_source.interceptors = i1
agentCDIS.sources.kafka_source.interceptors.i1.type = timestamp
agentCDIS.sources.kafka_source.kafka.consumer.timeout.ms = 1000
agentCDIS.channels.kafka_hdfs_channel.type = memory
agentCDIS.channels.kafka_hdfs_channel.capacity = 10000
agentCDIS.channels.kafka_hdfs_channel.transactionCapacity = 1000
agentCDIS.sinks.hdfs_sink.type = hdfs
agentCDIS.sinks.hdfs_sink.hdfs.path = hdfs://10.4.16.16:8020/user/cnelson/kafka/%{topic}/%y-%m-%d
agentCDIS.sinks.hdfs_sink.hdfs.rollInterval = 5
agentCDIS.sinks.hdfs_sink.hdfs.rollSize = 0
agentCDIS.sinks.hdfs_sink.fileType = DataStream
agentCDIS.sinks.hdfs_sink.channel = kafka_hdfs_channel
agentCDIS.sinks.loggerSink.type = logger
agentCDIS.sinks.loggerSink.kafka_hdfs_channel = memoryChannel
agentCDIS.channels.memoryChannel.type = memory
agentCDIS.channels.memoryChannel.capacity = 100
I went through the post and the config a few times and noticed - you've mentioned that you are using Flume's version 1.6 and as per the documentation, the properties are slightly different. Could you please try the following:
Instead of agentCDIS.sources.kafka_source.kafka.bootstrap.servers => try agentCDIS.sources.kafka_source.zookeeperConnect - the value for this property would be the zookeeper URI used by your Kafka cluster.
Instead of agentCDIS.sources.kafka_source.kafka.topic = test => try agentCDIS.sources.kafka_source.topic = test
Instead of agentCDIS.sources.kafka_source.kafka.consumer.group.id = cn_flume_group => try agentCDIS.sources.kafka_source.groupId = cn_flume_group
The above 3 properties that you've used in your config file were introduced in version 1.7.
I hope this helps!

Simply distributed index: precached 0 indexes

I have two simple indexes:
First, 01.conf:
searchd
{
listen = 9301
listen = 9401:mysql41
pid_file = /var/run/sphinxsearch/searchd01.pid
log = /var/log/sphinxsearch/searchd01.log
query_log = /var/log/sphinxsearch/query01.log
binlog_path = /var/lib/sphinxsearch/data/test/01
}
source base
{
type = mysql
sql_host = localhost
sql_db = test
sql_user = root
sql_pass = toor
sql_query_pre = SET NAMES utf8
sql_attr_uint = group_id
}
source test : base
{
sql_query = \
SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \
FROM documents WHERE id % 2 = 0
}
index test
{
source = test
path = /var/lib/sphinxsearch/data/test/01
}
The second looks like the first but with "02" instead of "01" in the filename and inside.
And distributed index in 00.conf:
searchd
{
listen = 9305
listen = 9405:mysql41
pid_file = /var/run/sphinxsearch/searchd00.pid
log = /var/log/sphinxsearch/searchd00.log
query_log = /var/log/sphinxsearch/query00.log
binlog_path = /var/lib/sphinxsearch/data/test
}
index test
{
type = distributed
agent = 127.0.0.1:9301:test
agent = 127.0.0.1:9302:test
}
And I try to use distributed index:
sudo searchd --config /etc/sphinxsearch/d/00.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf --stop
sudo searchd --config /etc/sphinxsearch/d/02.conf --stop
sudo searchd --config /etc/sphinxsearch/d/01.conf
sudo searchd --config /etc/sphinxsearch/d/02.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/01.conf
sudo indexer --all --rotate --config /etc/sphinxsearch/d/02.conf
sudo searchd --config /etc/sphinxsearch/d/00.conf
Unfortunately I get the following output:
...
using config file '/etc/sphinxsearch/d/00.conf'...
listening on all interfaces, port=9305
listening on all interfaces, port=9405
precached 0 indexes in 0.000 sec
Why?
And when I try to search something with the distributed index (9305):
no enabled local indexes to search.
And the mysql indexes work perfectly if I use them with ports 9301 and 9302 respectively. But searching in the distributed index returns nothing.
UPDATE
# tail /var/log/sphinxsearch/searchd00.log
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying /var/lib/sphinxsearch/data/test/binlog.001; 0.0 MB in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] binlog: finished replaying total 4 in 0.000 sec
[Thu Sep 29 23:43:04.599 2016] [ 2353] accepting connections
[Thu Sep 29 23:43:24.336 2016] [ 2353] caught SIGTERM, shutting down
[Thu Sep 29 23:43:24.472 2016] [ 2353] shutdown complete
[Thu Sep 29 23:43:24.473 2016] [ 2352] watchdog: main process 2353 exited cleanly (exit code 0), shutting down
[Thu Sep 29 23:43:24.634 2016] [ 2404] watchdog: main process 2405 forked ok
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9305
[Thu Sep 29 23:43:24.635 2016] [ 2405] listening on all interfaces, port=9405
[Thu Sep 29 23:43:24.636 2016] [ 2405] accepting connections
UPDATE2
Hmm... It seems the problem is in querying data from Sphinx. Also I renamed the distributed index to test1. The following code works well.
# mysql -h 127.0.0.1 -P 9405
mysql> select * from test1 where match ('one|two');
+------+----------+
| id | group_id |
+------+----------+
| 1 | 1 |
| 2 | 1 |
+------+----------+
2 rows in set (0,00 sec)
I think the problem was in the old version of sphinxapi.php that I used.
precached 0 indexes in 0.000 sec
Well, that in itself is normal. There are no local indexes to 'precache'. A distributed index has no index files to 'load' or (pre)cache.
... but searchd should still be running at the end of that. I think searchd should start up OK.
Try also checking /var/log/sphinxsearch/searchd00.log, which might have some more.
Although I suppose it's possible sphinx will not start up without any real indexes (i.e. you can't have JUST a distributed index), so you could just add a fake index to that config.

perl Download email headers from gmail for parsing

I am writing an Icinga plugin to check if the smtp server we have contracted with a third party gets blacklisted.
The service uses an unknown number of smtp relays. I need to download all the "Received" sections of the headers, and parse them to get the different IPs of the SMTP relays.
I am trying to use Mail::IMAPClient, and I can perform some operations on the account (login, choose folder, search the messages, etc), but I haven't found a way to get the whole header nor the sections of it I need.
I don't mind using a different module if needed.
You could try using the parse_headers function. According to the example in the documentation, you can use it like this:
$hashref = $imap->parse_headers(1,"Date","Received","Subject","To");
And then you get a hash reference that maps field names to references to arrays of values, like this:
$hashref = {
    "Date" => [ "Thu, 09 Sep 1999 09:49:04 -0400" ] ,
    "Received" => [ q/
from mailhub ([111.11.111.111]) by mailhost.bigco.com
(Netscape Messaging Server 3.6) with ESMTP id AAA527D for
<bigshot#bigco.com>; Fri, 18 Jun 1999 16:29:07 +0000
/, q/
from directory-daemon by mailhub.bigco.com (PMDF V5.2-31 #38473)
id <0FDJ0010174HF7#mailhub.bigco.com> for bigshot#bigco.com
(ORCPT rfc822;big.shot#bigco.com); Fri, 18 Jun 1999 16:29:05 +0000 (GMT)
/, q/
from someplace ([999.9.99.99]) by smtp-relay.bigco.com (PMDF V5.2-31 #38473)
with ESMTP id <0FDJ0000P74H0W#smtp-relay.bigco.com> for big.shot#bigco.com; Fri,
18 Jun 1999 16:29:05 +0000 (GMT)
/ ] ,
    "Subject" => [ "Help! I've fallen and I can't get up!" ] ,
    "To" => [ "Big Shot <big.shot#bigco.com>" ] ,
};
That should give you all the Received headers in a single array.

Why do smtpd_recipient_limit and other email related options in postfix not work?

I have a server that runs apache, and I am sending emails from that server. I want to set the number of recipients that each outgoing email can be sent to. I am following this tutorial and this manual; it seems to be as easy as adding smtpd_recipient_limit=2 to master.cf like below, reloading postfix, and running a test with 3 recipients. Each recipient still gets the email, with no error messages in the /var/log/syslog file below. What is missing?
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# limit number of emails that can be sent. Each outgoing mail is separated by
# a 1 second delay. Number of recipients of each message is limited to 10.
#smtp_destination_rate_delay = 1s
#smtp_extra_recipient_limit = 10
#smtpd_client_message_rate_limit=2
smtpd_recipient_limit=2
smtpd_recipient_overshoot_limit=0
# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.
myhostname = xxx
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
# (sorry have to smear out the domain name)
mydestination = xxx, localhost.xxx, , localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
#Added
home_mailbox = Maildir/
virtual_alias_maps = hash:/etc/postfix/virtual
Log file below
Apr 10 22:35:54 xxx postfix/pickup[7448]: 16151400BC: uid=33 from=<www-data>
Apr 10 22:35:54 xxx postfix/cleanup[7455]: 16151400BC: message-id=<597dd15203e984495188a846c186772e#xxx>
Apr 10 22:35:54 xxx postfix/qmgr[7447]: 16151400BC: from=<www-data#xxx>, size=674, nrcpt=3 (queue active)
Apr 10 22:35:55 xxx postfix/smtp[7457]: 16151400BC: to=<yyy#gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.196.27]:25, delay=1.5, delays=0.01/0/0.15/1.4, dsn=2.0.0, status=sent (250 2.0.0 OK 1397183738 v62si6606269yhp.5 - gsmtp)
Apr 10 22:35:55 xxx postfix/smtp[7457]: 16151400BC: to=<zzz#gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.196.27]:25, delay=1.5, delays=0.01/0/0.15/1.4, dsn=2.0.0, status=sent (250 2.0.0 OK 1397183738 v62si6606269yhp.5 - gsmtp)
Apr 10 22:35:55 xxx postfix/smtp[7457]: 16151400BC: to=<ttt#gmail.com>, relay=gmail-smtp-in.l.google.com[74.125.196.27]:25, delay=1.5, delays=0.01/0/0.15/1.4, dsn=2.0.0, status=sent (250 2.0.0 OK 1397183738 v62si6606269yhp.5 - gsmtp)
Apr 10 22:35:55 xxx postfix/qmgr[7447]: 16151400BC: removed
It is because smtpd_recipient_limit only applies to mails received by the smtpd daemon through an SMTP transaction. Mails submitted using the sendmail command are queued in the maildrop queue by the postdrop command, and are picked up by pickup and fed to cleanup directly.
You can't restrict the recipient count for mails submitted through the sendmail command.
The only solution to this problem is to force your applications to send mail only through an SMTP transaction.
You need to use smtp_destination_recipient_limit instead.

Zookeeper CLI - Read Znode with space

Is it possible to read a znode with a space in it through Zookeeper CLI?
I've 2 values under regions ('us-west 1' and 'us-east')
[zk: localhost:2181(CONNECTED) 11] get /regions/
us-west 1 us-east
I can read 'us-east'.
[zk: localhost:2181(CONNECTED) 11] get /regions/us-east
null
cZxid = 0xa
ctime = Tue Jul 10 12:41:49 IST 2012
mZxid = 0xa
mtime = Tue Jul 10 12:41:49 IST 2012
pZxid = 0x1b
cversion = 9
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 9
But not 'us-west 1'
[zk: localhost:2181(CONNECTED) 11] get /regions/us-west 1
Node does not exist: /regions/us-west
I tried options like '%20', '\ ' , '+' etc.. for the space, but nothing worked.
Please try zookeepercli, as follows:
$ zookeepercli --servers srv1,srv2,srv3 -c create "/demo_only 1" "the value"
$ zookeepercli --servers srv1,srv2,srv3 -c get "/demo_only 1"
the value
zookeepercli is free and open source.
Disclaimer: I'm author of this tool.
It looks like you will not be able to do that from the ZK command-line client. The Zookeeper Java client (which is the one you are using, probably) separates commands (e.g. get) from their parameters (e.g. /regions/us-west 1 in your case) by parsing whitespace characters, as you can see in the code provided with the client (e.g. zookeeper-3.3.5\src\java\main\org\apache\zookeeper\ZooKeeperMain.java):
public boolean parseCommand( String cmdstring ) {
    String[] args = cmdstring.split(" ");
    if (args.length == 0){
        return false;
    }
    command = args[0];
    cmdArgs = Arrays.asList(args);
    return true;
}
Since they are splitting by a " ", unless you discover a way to overcome that split call by some sort of escaping when calling the get command, you won't be able to retrieve those nodes using this client. The code above interprets your 1 in a get call as if it were the watch parameter, as you can see in the get command syntax:
get path [watch]
I recommend using a different character, like '_' for example, instead of whitespace in the znode names. If that is not an option, you will either need to modify the ZK Java client yourself or use another client.
There's an open issue in JIRA for this feature, but a workaround is to pass a command on the command-line instead of using the interactive console:
$ zkCli.sh -c get "/regions/us-west 1"