ZooKeeper Timeout - Cannot find myid file. Windows Server 2016 - apache-zookeeper

I have a very simple setup. I am trying to start ZooKeeper (apache-zookeeper-3.6.1-bin) on two machines. I get the following error when I run zkServer.sh status:
/cygdrive/c/ZooKeeper/apache-zookeeper-3.6.1-bin/apache-zookeeper-3.6.1-bin
$ bin/zkServer.sh restart
ZooKeeper JMX enabled by default
Using config: C:\ZooKeeper\apache-zookeeper-3.6.1-bin\apache-zookeeper-3.6.1-bin\conf\zoo.cfg
ZooKeeper JMX enabled by default
Using config: C:\ZooKeeper\apache-zookeeper-3.6.1-bin\apache-zookeeper-3.6.1-bin\conf\zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: C:\ZooKeeper\apache-zookeeper-3.6.1-bin\apache-zookeeper-3.6.1-bin\conf\zoo.cfg
Starting zookeeper ... STARTED
kalsa@CO01EAP00000027 /cygdrive/c/ZooKeeper/apache-zookeeper-3.6.1-bin/apache-zookeeper-3.6.1-bin
$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: C:\ZooKeeper\apache-zookeeper-3.6.1-bin\apache-zookeeper-3.6.1-bin\conf\zoo.cfg
cat: '/tmp/zookeeper/'$'\r''/myid': No such file or directory
clientPort not found and myid could not be determined. Terminating.
My zoo.cfg:
tickTime=5000
dataDir=/tmp/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=XYZ:2888:3888
server.2=ABC:2888:3888
I have the proper IPs in place of XYZ and ABC.
I have created the myid file as well. Can someone let me know if I am missing anything obvious?
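(A myid file is just a single line containing the server's id; with the dataDir and server ids from the config above, it would be created roughly like this on each machine:)
echo 1 > /tmp/zookeeper/myid    # on the server.1 host (XYZ)
echo 2 > /tmp/zookeeper/myid    # on the server.2 host (ABC)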

The owner of the dataDir should be the zookeeper user. If not, change the owner.
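For example, on Linux (a minimal sketch; the zookeeper user/group name is an assumption, and /tmp/zookeeper is the dataDir from the config above):
ls -ld /tmp/zookeeper                          # check the current owner of the data directory
chown -R zookeeper:zookeeper /tmp/zookeeper    # hand it, including the myid file, to the zookeeper user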

Related

Kafka is not ingesting logs

I created a Kafka service on a Kubernetes cluster based on the Bitnami chart. The deployment goes well.
Next, I installed Filebeat to send logs to that service. It seems to me that Filebeat communicates with the cluster but does not ingest the logs. Indeed, after starting the Filebeat service, I find a topic called "logs-topic" which was created by Filebeat. However, this topic remains empty. My configuration is given below:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/test.log
  fields:
    level: debug
    review: 1
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
output.kafka:
  hosts: ["ip-172-31-26-181:30092"]
  topic: "logs-topic"
  codec.json:
    pretty: false
The Kafka topic is present:
I have no name!@kafka-release-client:/$ kafka-topics.sh --list --bootstrap-server kafka-release.default.svc.cluster.local:9092
.......
logs-topic
Syslog output:
Dec 30 08:28:45 ip-172-31-23-248 filebeat[29968]: 2021-12-30T08:28:45.928Z#011INFO#011[file_watcher]#011filestream/fswatch.go:137#011Start next scan
Dec 30 08:28:53 ip-172-31-23-248 filebeat[29968]: 2021-12-30T08:28:53.186Z#011INFO#011[publisher]#011pipeline/retry.go:219#011retryer: send unwait signal to consumer
Dec 30 08:28:53 ip-172-31-23-248 filebeat[29968]: 2021-12-30T08:28:53.186Z#011INFO#011[publisher]#011pipeline/retry.go:223#011 done
Dec 30 08:28:55 ip-172-31-23-248 filebeat[29968]: 2021-12-30T08:28:55.927Z#011INFO#011[file_watcher]#011filestream/fswatch.go:137#011Start next scan
I've found a workaround for this issue and maybe it will help others.
My problem was the same as described in the post above, and I struggled for a long time because I didn't find any logs from Filebeat that indicated the problem. I also couldn't find a way to increase the logging level beyond the debug option, like:
filebeat -e -c filebeat.yml -v -d "publisher"
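(Side note: the -d flag takes a comma-separated list of debug selectors, and "*" enables all of them:)
filebeat -e -c filebeat.yml -d "*"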
The Kafka broker logs pointed to some SSL handshake failures at INFO level:
journalctl -f -u kafka
Jan 22 18:18:22 kafka sh[15893]: [2022-01-22 18:18:22,604] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Failed authentication with /172.20.3.10 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
Even with DEBUG level configured, I couldn't see the actual problem.
I decided to produce some data with a Python script and was able to reproduce the problem:
from kafka import KafkaConsumer, KafkaProducer
import logging

logging.basicConfig(level=logging.DEBUG)

try:
    topic = "test"
    sasl_mechanism = "PLAIN"
    username = "test"
    password = "fake"
    security_protocol = "SASL_SSL"
    producer = KafkaProducer(bootstrap_servers='kafka.domain.com:9092',
                             security_protocol=security_protocol,
                             ssl_check_hostname=True,
                             ssl_cafile='ca.crt',
                             sasl_mechanism=sasl_mechanism,
                             sasl_plain_username=username,
                             sasl_plain_password=password)
    producer.send(topic, "test".encode('utf-8'))
    producer.flush()
    print("Succeed")
except Exception as e:
    print("Exception:\n\n")
    print(e)
Output:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'kafka'. (_ssl.c:1129)
DEBUG:kafka.producer.sender:Node 1 not ready; delaying produce of accumulated batch
WARNING:kafka.conn:SSL connection closed by server during handshake.
INFO:kafka.conn:<BrokerConnection node_id=1 host=kafka:9092 <handshake> [IPv4 ('198.168.1.11', 9092)]>: Closing connection. KafkaConnectionError: SSL connection closed by server during handshake
=> The hostname cannot be successfully verified against the certificate. I changed and added a lot of parameters (listeners / advertised.listeners / host.name) in Kafka's server.properties and was not able to configure another/full domain name to be returned to the client in the metadata. It always returns "kafka:9092", and that is not the domain name indicated in the certificate.
Workaround:
Disable hostname verification on both sides (server/client).
kafka-server: ssl.endpoint.identification.algorithm=
python-client: ssl_check_hostname=False
Filebeat can do this too, but it's not really clear:
output.kafka:
  ssl.verification_mode: certificate
certificate: Verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
Source: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html

Kafka listening on multiple interfaces

I have a requirement as below:
Kafka needs to listen on multiple interfaces, one external and one internal. All other components within the system will connect to Kafka on the internal interface.
At installation time the internal IPs of the other hosts are not reachable; some configuration has to be done later to make them reachable, and we do not have control over that. So, assume that when Kafka is coming up, the internal IPs on the nodes are not reachable from each other.
Scenario:
I have two nodes in cluster:
node1 (External IP: 10.10.10.4, Internal IP: 5.5.5.4)
node2 (External IP: 10.10.10.5, Internal IP: 5.5.5.5)
Now, during installation, 10.10.10.4 can ping 10.10.10.5 and vice versa, but 5.5.5.4 cannot reach 5.5.5.5. That only happens once the Kafka installation is done and someone does some configuration to make them reachable, so before the Kafka installation we cannot make them reachable.
Now the requirement is that the Kafka brokers exchange messages on the 10.10.10.x interface, so that the cluster can form, while clients send messages on the 5.5.5.x interface.
What I tried was as below:
listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://5.5.5.5:9093
Where 5.5.5.5 is the internal IP address.
But with this, while restarting Kafka, I see the logs below:
{"log":"[2020-06-23 19:05:34,923] INFO Creating /brokers/ids/2 (is it secure? false) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.923403973Z"}
{"log":"[2020-06-23 19:05:34,925] INFO Result of znode creation at /brokers/ids/2 is: OK (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.925237419Z"}
{"log":"[2020-06-23 19:05:34,926] INFO Registered broker 2 at path /brokers/ids/2 with addresses: ArrayBuffer(EndPoint(5.5.5.5,9092,ListenerName(USERS),PLAINTEXT), EndPoint(5.5.5.5,9093,ListenerName(REPLICATION),PLAINTEXT)) (kafka.zk.KafkaZkClient)\n","stream":"stdout","time":"2020-06-23T19:05:34.926127438Z"}
.....
{"log":"[2020-06-23 19:05:35,078] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078444509Z"}
{"log":"[2020-06-23 19:05:35,078] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)\n","stream":"stdout","time":"2020-06-23T19:05:35.078471358Z"}
{"log":"[2020-06-23 19:05:35,079] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)\n","stream":"stdout","time":"2020-06-23T19:05:35.079436798Z"}
{"log":"[2020-06-23 19:05:35,136] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.136792119Z"}
And after that, this message keeps coming up:
{"log":"[2020-06-23 19:05:35,166] ERROR [KafkaApi-2] Number of alive brokers '0' does not meet the required replication factor '2' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)\n","stream":"stdout","time":"2020-06-23T19:05:35.166895344Z"}
Is there any way we can achieve that?
With regards,
-M-
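One direction that might work (a sketch only, not verified against this setup): keep both listeners bound to all interfaces, but advertise the external IP on the REPLICATION listener and tell the brokers to use it for inter-broker traffic, e.g. on node2 (external 10.10.10.5, internal 5.5.5.5):
listeners=USERS://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
advertised.listeners=USERS://5.5.5.5:9092,REPLICATION://10.10.10.5:9093
inter.broker.listener.name=REPLICATION
That way the cluster would form over the 10.10.10.x addresses while clients are still pointed at 5.5.5.x.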

Unable to set up Apache ZooKeeper and Kafka on a Windows machine

I am very new to Apache ZooKeeper and Kafka. I've downloaded the below on a Windows machine.
Apache Zookeeper - http://zookeeper.apache.org/releases.html
Kafka - https://kafka.apache.org/downloads.html
I am not very clear on what to execute next or where to make the necessary changes.
I went to C:\apache-zookeeper-3.6.0\bin and executed the zkServer.bat file:
C:\apache-zookeeper-3.6.0\bin>call "C:\Program Files\Java\jdk1.8.0_151"\bin\java "-Dzookeeper.log.dir=C:\apache-zookeeper-3.6.0\bin\..\logs" "-Dzookeeper.root.logger=INFO,CONSOLE" "-Dzookeeper.log.file=zookeeper-pc-server-DESKTOP-NQ639DU.log" "-XX:+HeapDumpOnOutOfMemoryError" "-XX:OnOutOfMemoryError=cmd /c taskkill /pid %%p /t /f" -cp "C:\apache-zookeeper-3.6.0\bin\..\build\classes;C:\apache-zookeeper-3.6.0\bin\..\build\lib\*;C:\apache-zookeeper-3.6.0\bin\..\*;C:\apache-zookeeper-3.6.0\bin\..\lib\*;C:\apache-zookeeper-3.6.0\bin\..\conf" org.apache.zookeeper.server.quorum.QuorumPeerMain "C:\apache-zookeeper-3.6.0\bin\..\conf\zoo.cfg"
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
C:\apache-zookeeper-3.6.0\bin>endlocal
And Kafka from the location: C:\kafka_2.11-2.3.1\bin\windows
C:\kafka_2.11-2.3.1\bin\windows>kafka-server-start.bat
USAGE: kafka-server-start.bat server.properties
C:\kafka_2.11-2.3.1\bin\windows>
I've set ZOOKEEPER_HOME=C:\apache-zookeeper-3.6.0 and added C:\apache-zookeeper-3.6.0\bin to PATH.
I went through this video https://www.youtube.com/watch?v=TTsOoQ6_QB0 and it solved my issue.
Apache ZooKeeper:
Create a zookeeper_data folder (you can choose any name) inside C:\kafka_2.11-2.3.1, where my Kafka distribution is kept.
Then go to C:\kafka_2.11-2.3.1\config, edit the zookeeper.properties file, and use dataDir=C:\kafka_2.11-2.3.1\zookeeper_data
Step to start ZooKeeper:
zookeeper-server-start.bat C:\kafka_2.11-2.3.1\config\zookeeper.properties
Apache Kafka:
Create a kafka-logs folder under C:\kafka_2.11-2.3.1, and then
go to C:\kafka_2.11-2.3.1\config, edit the server.properties file, and use the below:
# A comma separated list of directories under which to store log files
log.dirs=C:\kafka_2.11-2.3.1\kafka-logs
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.num.partitions=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
min.insync.replicas=1
default.replication.factor=1
Step to start Apache Kafka:
kafka-server-start.bat C:\kafka_2.11-2.3.1\config\server.properties
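Afterwards, a quick sanity check from the same bin\windows directory (a sketch; "test" is just an example topic name):
kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
kafka-topics.bat --list --bootstrap-server localhost:9092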

While I'm trying to run Apache Atlas, I'm facing an HBase error (I'm using embedded HBase and Solr)

My Apache Atlas server is started, but I found errors in my application.log file.
The UI for Apache Atlas is also not running.
I've followed each and every step from the Apache website; all went well.
I gave all permissions in the atlas-env.sh and application-properties files.
Can anyone help me figure out how to fix it?
2019-10-25 12:25:49,366 WARN - [main:] ~ Running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:186)
2019-10-25 12:25:50,104 WARN - [main:] ~ Retrieve cluster id failed (ConnectionImplementation:551)
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbaseid
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:549)
at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)
at org.janusgraph.diskstorage.hbase2.HBaseCompat2_0.createConnection(HBaseCompat2_0.java:46)
at org.janusgraph.diskstorage.hbase2.HBaseStoreManager.<init>(HBaseStoreManager.java:314)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:476)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:408)
at org.janusgraph.graphdb.configuration.GraphDatabaseC
When HBase starts, the HBase Master node creates the node "/hbase/hbaseid" in ZooKeeper.
1. Check the processes.
Check whether HBase and ZooKeeper are running with 'jps -m'.
If you configured HBase to manage ZooKeeper internally, you will not see the ZooKeeper process with the jps command; instead, check its port with 'netstat -nt | grep ZK_PORT', which is normally 2181.
netstat -nt | grep 2181
2. Check the ZooKeeper node
If you run the ZooKeeper cluster independently, you can check the node "/hbase/hbaseid" with the ZooKeeper CLI like this:
ZOOKEEPER/bin/zkCli.sh
[zk: ...] ls /
[zk: ...] get /hbase/hbaseid
I hope this helps.
Install Atlas
You can download the source code of Atlas v2.0.0 or the master branch from here and build it:
$ export MAVEN_OPTS="-Xms2g -Xmx2g"
$ mvn clean install
$ mvn clean package -Pdist
If you build the master branch, you can find the server package from /SOURCE_CODE/distro/target/apache-atlas-3.0.0-SNAPSHOT-server/apache-atlas-3.0.0-SNAPSHOT.
You should configure the server prior to running it.
Here are the minimum settings. Please find the atlas-application.properties file in the conf directory.
atlas.graph.storage.hostname=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
atlas.graph.index.search.backend=[solr or elasticsearch] => choose one you want to use.
atlas.graph.index.hostname=xxx.xxx.xxx.xxx => solr or elasticsearch server's addr
atlas.kafka.zookeeper.connect=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for Kafka
atlas.kafka.bootstrap.servers=xxx.xxx.xxx.xxx:xxxx => kafka addr
atlas.audit.hbase.zookeeper.quorum=xxx.xxx.xxx.xxx:xxxx => zookeeper addr and port for hbase
To run the server,
$ bin/atlas_start.py
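Once the server is up, a quick sanity check (a sketch; port 21000 and admin/admin are the defaults, adjust if you changed them):
$ curl -u admin:admin http://localhost:21000/api/atlas/admin/version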
Install ZooKeeper
Actually, to install ZooKeeper there is almost nothing to do; just follow the steps.
In this case, you should change your HBase environment in hbase-env.sh:
export HBASE_MANAGES_ZK=false
If you see some warnings in the HBase log file like 'Could not start ZK at requested port of 2181.', then please check the hbase-site.xml file and set hbase.cluster.distributed to true.
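For reference, that property in hbase-site.xml looks like this:
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>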

How to enable remote JMX on Kafka brokers (for JmxTool)?

I enabled JMX on Kafka brokers by adding
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=<server_IP>
-Djava.net.preferIPv4Stack=true"
However, when I use kafka.tools.JmxTool to get the JMX metrics, it outputs Unix timestamps only. Why?
./bin/kafka-run-class.sh kafka.tools.JmxTool \
--object-name 'kafka.server:type=BrokerTopicMetrics,name=AllTopicsMessagesInPerSec' \
--jmx-url "service:jmx:rmi:///jndi/rmi://<server_IP>:9111/jmxrmi"
How can I have it print out the metrics?
Edit bin/kafka-run-class.sh and set the KAFKA_JMX_OPTS variable:
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=your.kafka.broker.hostname -Djava.net.preferIPv4Stack=true"
Update bin/kafka-server-start.sh and add the below line:
export JMX_PORT=PORT
You must set the 'JMX_PORT' variable, or add the following line to bin/kafka-server-start.sh:
export JMX_PORT=${JMX_PORT:-9999}
Then you will be able to connect to the Kafka JMX metrics. I use the jconsole tool and the 'localhost:9999' address.
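For example, from the broker host itself:
jconsole localhost:9999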
Setting JMX_PORT inside bin/kafka-run-class.sh will clash with ZooKeeper if you are running ZooKeeper on the same node.
It is best to set the JMX port individually inside the corresponding server-start scripts:
Insert the line "export JMX_PORT=${JMX_PORT:-9998}" before the last line in the $KAFKA_HOME/bin/zookeeper-server-start.sh file.
Restart the ZooKeeper server.
Repeat steps 1 and 2 for all ZooKeeper nodes in the cluster.
Insert the line "export JMX_PORT=${JMX_PORT:-9999}" before the last line in the $KAFKA_HOME/bin/kafka-server-start.sh file.
Restart the Kafka broker.
Repeat steps 4 and 5 for all brokers in the cluster.
If you're running via systemd:
edit /etc/systemd/system/multi-user.target.wants/kafka.service
in the "[service]" section add a line:
Environment=JMX_PORT=9989
reload: systemctl daemon-reload
restart: systemctl restart kafka
enjoy the beans: echo 'beans' | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:9989 -n 2>&1
This is Kafka 2.3.0.
Using jconsole for available MBeans
You should use jconsole first to find the names of the available MBeans.
The proper name of the MBean you want to query metrics of is kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec (the AllTopics prefix was used in older versions). Thanks AndyTheEntity.
Enabling Remote JMX (with no authentication or SSL)
As described in Monitoring and Management Using JMX Technology, you should set certain system properties when you start the Java VM of a Kafka broker.
Kafka's bin/kafka-run-class.sh shell script makes the configuration painless as it does the basics for you and sets KAFKA_JMX_OPTS.
# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
fi
For remote JMX you should set com.sun.management.jmxremote.port, which Kafka's bin/kafka-run-class.sh shell script sets using the JMX_PORT environment variable.
# JMX port to use
if [ $JMX_PORT ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi
With that, enabling remote JMX is as simple as the following command:
JMX_PORT=9999 ./bin/kafka-server-start.sh config/server.properties
Using JmxTool
With the above, run the JmxTool:
$ ./bin/kafka-run-class.sh kafka.tools.JmxTool \
--object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec'
Trying to connect to JMX url: service:jmx:rmi:///jndi/rmi://:9999/jmxrmi.
"time","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:Count","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:EventType","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:FifteenMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:FiveMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:MeanRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:OneMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:RateUnit"
1567586728595,0,messages,0.0,0.0,0.0,0.0,SECONDS
1567586730597,0,messages,0.0,0.0,0.0,0.0,SECONDS
...
You could use the --one-time option to print the JMX metrics just once.
$ ./bin/kafka-run-class.sh kafka.tools.JmxTool \
--object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec' \
--one-time true
Trying to connect to JMX url: service:jmx:rmi:///jndi/rmi://:9999/jmxrmi.
"time","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:Count","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:EventType","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:FifteenMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:FiveMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:MeanRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:OneMinuteRate","kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec:RateUnit"
1567586898459,0,messages,0.0,0.0,0.0,0.0,SECONDS
vim kafka_2.11-0.10.1.1/bin/kafka-run-class.sh
Then add the first two lines below and comment out the others as I have done. (Note: after doing this, the Kafka scripts cannot be used for client operations such as listing topics; for client operations you need separate scripts, i.e. download the distribution again in a different location and use that.)
export JMX_PORT=9096
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<ipaddress> -Dcom.sun.management.jmxremote.port=$JMX_PORT -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT"
# JMX settings
#if [ -z "$KAFKA_JMX_OPTS" ]; then
# KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false "
#fi
# JMX port to use
#if [ $JMX_PORT ]; then
# KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
#fi
Use kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec
The AllTopics prefix was used in older versions. You can specify a topic using kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<topic-name>
src: http://grokbase.com/t/kafka/users/164ksnhff0/enable-jmx-on-kafka-brokers
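For example, combined with the JmxTool invocation from the question (the topic name and port here are placeholders):
./bin/kafka-run-class.sh kafka.tools.JmxTool \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=<topic-name>' \
  --jmx-url "service:jmx:rmi:///jndi/rmi://<server_IP>:<JMX_PORT>/jmxrmi"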
Kafka has provided all you need. When you start your server, activate the KAFKA_JMX_OPTS arg by using this command:
$KAFKA_JMX_OPTS JMX_PORT=[your_port_number] ./kafka-server-start.sh -daemon ../config/server.properties
Using this command, you activate JMX Remote and the related port. Then you can connect with JConsole or another monitoring tool.
Just before calling kafka-server-start.sh, add the following exports. It worked like a charm in my case. You can set the desired port for JMX_PORT, and you should set the broker's IP for the $BROKER_IP part.
export JMX_PORT=9900
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=$BROKER_IP -Djava.net.preferIPv4Stack=true"
This is the standard Kafka start procedure:
bin/kafka-server-start.sh config/server.properties
This is the Kafka start procedure with JMX:
JMX_PORT=8004 bin/kafka-server-start.sh config/server.properties
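To confirm it took effect, the jmxterm one-liner shown earlier works against this port as well (a sketch, assuming the same jar):
echo 'beans' | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:8004 -n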