I'm trying to start a consumer using Apache Kafka. It used to work well, but I had to format my PC and reinstall everything, and now when I try to run this:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
this is what i'm getting:
Missing required argument "[zookeeper]"
Option Description
------ -----------
--blacklist <blacklist> Blacklist of topics to exclude from
consumption.
--bootstrap-server <server to connect
to>
--consumer.config <config file> Consumer config properties file.
--csv-reporter-enabled If set, the CSV metrics reporter will
be enabled
--delete-consumer-offsets If specified, the consumer path in
zookeeper is deleted when starting up
--formatter <class> The name of a class to use for
formatting kafka messages for
display. (default: kafka.tools.
DefaultMessageFormatter)
--from-beginning If the consumer does not already have
an established offset to consume
from, start with the earliest
message present in the log rather
than the latest message.
--key-deserializer <deserializer for
key>
--max-messages <Integer: num_messages> The maximum number of messages to
consume before exiting. If not set,
consumption is continual.
--metrics-dir <metrics directory> If csv-reporter-enable is set, and
this parameter isset, the csv
metrics will be outputed here
--new-consumer Use the new consumer implementation.
--property <prop>
--skip-message-on-error If there is an error when processing a
message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms> If specified, exit if no message is
available for consumption for the
specified interval.
--topic <topic> The topic id to consume on.
--value-deserializer <deserializer for
values>
--whitelist <whitelist> Whitelist of topics to include for
consumption.
--zookeeper <urls> REQUIRED: The connection string for
the zookeeper connection in the form
host:port. Multiple URLS can be
given to allow fail-over.
My guess is that there's some kind of problem with the ZooKeeper connection port, because it's telling me to specify the port ZooKeeper uses to connect to Kafka. I'm not sure about this, though, and I don't know how to figure out which port to specify if that is the problem. Any suggestions?
Thanks in advance for the help
It looks like you are using an old version of the Kafka tools that requires you to set --new-consumer if you want to connect directly to the brokers.
I'd recommend picking a recent version of Kafka so you only need to specify --bootstrap-server as in your example: http://kafka.apache.org/downloads
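In the meantime, the equivalent command with the old tool should be (a sketch, assuming the broker is still on localhost:9092 as in your example):
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test --from-beginning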
I am unable to send a large file to the broker. I have edited the configuration files and the settings inside my client program, but it didn't work.
Error Message
The request included a message larger than the max message size the server will accept.
Server properties
message.max.bytes=900000000
replica.fetch.max.bytes=900000000
Producer
max.request.size=100000000
Consumer
max.partition.fetch.bytes=100000000
fetch.message.max.bytes=100000000
Client app
props.put("max.request.size",96214400);
props.put("message.max.bytes",96214400);
Moving comment to answer...
message.max.bytes is a cluster-wide setting, and you need to make sure that the corresponding max.message.bytes configuration is also set at the topic level. Since the topic-level configuration defaults to 1048588 bytes, you are seeing this error.
kafka-topics.bat --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test --config max.message.bytes=9000000
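If the topic already exists, the same topic-level setting can be changed with the kafka-configs tool instead (a sketch, assuming the same local ZooKeeper and topic as above):
kafka-configs.bat --zookeeper 127.0.0.1:2181 --entity-type topics --entity-name test --alter --add-config max.message.bytes=9000000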
I have been using the command below to get the latest offsets from a Kafka queue which has the plain-text port open:
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list server:9092 --topic sample_topic --time -1
But now we only have the SSL port open, so I tried passing the SSL details as a property file:
kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list server:9093 --topic sample_topic --time -1 --consumer-config /path/to/file
I'm getting the error below:
Exception in thread "main" joptsimple.UnrecognizedOptionException: consumer-config is not a recognized option
How do I pass the SSL details to this command? These are all the available arguments for kafka-run-class.sh kafka.tools.GetOffsetShell:
--broker-list <String: hostname:and port,...,hostname:port>
--max-wait-ms <Integer: ms>
--offsets <Integer: count>
--partitions <String: partition ids>
--time <Long: timestamp/-1(latest)/-2 (earliest)>
--topic <String: topic>
Unfortunately, kafka.tools.GetOffsetShell only supports PLAINTEXT connections. This tool is not used a lot, and nobody has bothered to update it.
Depending on your use case, you have a few options:
Use the kafka-consumer-groups.sh tool: Assuming you have a consumer group consuming from that topic, this tool displays the log-end offsets of each partition
Patch kafka.tools.GetOffsetShell: It's relatively easy to add support for secured connections by reusing logic from the other tools. If you do so, consider sending a patch to Kafka =)
Write a tiny tool that calls Consumer.endOffsets() (see the sketch after this list)
Use kafka.tools.DumpLogSegments: As a last resort this tool can also be used to find the last offset
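For the third option, here is a minimal sketch of such a tool in Java, assuming an SSL listener on server:9093 and hypothetical truststore details (take the real values from your SSL property file):

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class EndOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "server:9093");
        props.put("security.protocol", "SSL");
        // Hypothetical truststore details, replace with your own
        props.put("ssl.truststore.location", "/path/to/truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Collect all partitions of the topic, then print their log-end offsets
            List<TopicPartition> partitions = new ArrayList<>();
            consumer.partitionsFor("sample_topic").forEach(
                p -> partitions.add(new TopicPartition(p.topic(), p.partition())));
            consumer.endOffsets(partitions).forEach(
                (tp, offset) -> System.out.println(tp + " -> " + offset));
        }
    }
}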
I am trying to set log.retention.hours as a broker-level configuration for Kafka 0.10.2.x, but I am getting the error below for this command:
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type brokers --entity-name 0 --alter --add-config log.retention.hours=-1
Error while executing config command requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
java.lang.IllegalArgumentException: requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
at scala.Predef$.require(Predef.scala:277)
at kafka.server.DynamicConfig$.$anonfun$validate$1(DynamicConfig.scala:101)
at kafka.server.DynamicConfig$.$anonfun$validate$1$adapted(DynamicConfig.scala:100)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
at kafka.server.DynamicConfig$.kafka$server$DynamicConfig$$validate(DynamicConfig.scala:100)
at kafka.server.DynamicConfig$Broker$.validate(DynamicConfig.scala:59)
at kafka.admin.AdminUtils$.changeBrokerConfig(AdminUtils.scala:555)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:105)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
As the error says, that property is not dynamic (it cannot be modified while the broker is running).
Plus, dynamic broker configuration isn't possible at all in your version:
From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker
You can set retention at the topic level; otherwise, you need to edit the server.properties file of every broker and gracefully restart them.
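For example, time-based retention can be set (or disabled with -1) on a single topic like this (a sketch, using your ZooKeeper address and a hypothetical topic name):
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type topics --entity-name test --alter --add-config retention.ms=-1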
I'm sure you have a good reason for "disabling" retention, but I would suggest trying compacted topics first
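For example, a compacted topic can be created like this (hypothetical topic name):
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic compacted_topic --config cleanup.policy=compact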
log.retention.hours is a read-only property at the broker level, so it can't be changed dynamically using kafka-configs.sh.
Change it in server.properties and restart the brokers.
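For example, in server.properties (log.retention.ms=-1 is the documented way to disable time-based retention broker-wide, which matches the intent of the command above):
log.retention.ms=-1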
Here are the details on read-only and dynamic broker configs:
https://kafka.apache.org/documentation/#dynamicbrokerconfigs
I am running Kafka 0.10.0 on CDH 5.9; the cluster is kerberized.
What I am trying to do is to write messages from a remote machine to my Kafka broker.
The cluster (where Kafka is installed) has internal as well as external IP addresses.
The machines' hostnames within the cluster resolve to the private IPs; the remote machine resolves the same hostnames to the public IP addresses.
I opened the necessary port 9092 (I am using the SASL_PLAINTEXT protocol) from the remote machine to the Kafka broker, and verified that using telnet.
First Step - in addition to the standard properties for the Kafka Broker, I configured the following:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<hostname>:9092
I am able to start the console consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
I am able to use my custom producer from another machine within the cluster.
Relevant excerpt of producer properties:
security.protocol=SASL_PLAINTEXT
bootstrap.servers=<hostname>:9092
I am not able to use my custom producer from the remote machine:
Exception org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for <topicname>-<partition>
using the same producer properties. I am able to telnet to the Kafka broker from that machine, and /etc/hosts includes the hostnames and public IPs.
Second Step - I modified server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerInternalIP>:9092
consumer & producer within the same cluster still run fine (bootstrap servers are now the internal IP with port 9092)
as expected, the remote producer fails (but that is obvious given that it is not aware of the internal IP addresses)
Third Step - where it gets hairy :(
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
gives me a warning, but I don't think this is right...
WARN clients.NetworkClient: Error while fetching metadata with correlation id 1 : {<topicname>=LEADER_NOT_AVAILABLE}
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <KafkaBrokerPublicIP>:9092 --consumer.config consumer.properties
just hangs after those log messages:
INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
INFO utils.AppInfoParser: Kafka commitId : unknown
It seems like it cannot find a coordinator; in the normal flow, this would be the next log line:
INFO internals.AbstractCoordinator: Discovered coordinator <hostname>:9092 (id: <someNumber> rack: null) for group console-consumer-<someNumber>.
Starting the producer on a cluster node with bootstrap.servers=<hostname>:9092, I observe the same as with the consumer:
WARN NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
Starting the producer on a cluster node with bootstrap.servers=<kafkaBrokerPublicIP>:9092, I get:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Starting the producer on my remote machine with either bootstrap.servers=<hostname>:9092 or bootstrap.servers=<kafkaBrokerPublicIP>:9092, I get:
NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
I have been struggling for the past three days to get this to work, but I am out of ideas. My understanding is that advertised.listeners serves exactly this purpose; however, either I am doing something wrong, or there is something wrong in the machine setup.
Any hints are very much appreciated!
I met this issue recently.
In my case, I had enabled Kafka ACLs, and after disabling them by commenting out these two configurations, the problem was worked around:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
And here is a thread that may help you, I think:
https://gist.github.com/jorisdevrede/a7933a99251452bb1867
What is mentioned in it at the end:
If you only use a SASL_PLAINTEXT listener on the Kafka Broker, you
have to make sure that you have set the
security.inter.broker.protocol=SASL_PLAINTEXT too, otherwise you will
get a LEADER_NOT_AVAILABLE error in the client.
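For completeness, a minimal sketch of the relevant server.properties entries for such a setup (placeholder values as in the question; GSSAPI is an assumption based on the cluster being kerberized):
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI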