Setting log.retention.hours for a broker in Kafka 0.10.2.x

I am trying to set log.retention.hours as a broker-level configuration in Kafka 0.10.2.x, but I am getting the following error for the command below.
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type brokers --entity-name 0 --alter --add-config log.retention.hours=-1
Error while executing config command requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
java.lang.IllegalArgumentException: requirement failed: Unknown Dynamic Configuration 'log.retention.hours'.
at scala.Predef$.require(Predef.scala:277)
at kafka.server.DynamicConfig$.$anonfun$validate$1(DynamicConfig.scala:101)
at kafka.server.DynamicConfig$.$anonfun$validate$1$adapted(DynamicConfig.scala:100)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
at kafka.server.DynamicConfig$.kafka$server$DynamicConfig$$validate(DynamicConfig.scala:100)
at kafka.server.DynamicConfig$Broker$.validate(DynamicConfig.scala:59)
at kafka.admin.AdminUtils$.changeBrokerConfig(AdminUtils.scala:555)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:105)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)

As the error says, that property is not dynamic (it cannot be modified while the broker is running).
Moreover, that feature isn't available in your version at all: only from Kafka 1.1 onwards can some of the broker configs be updated without restarting the broker.
You can set retention at the topic level instead, as shown below; otherwise, you need to edit the server.properties file of every broker and gracefully restart them.
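For example, to disable time-based retention for a single topic you could set a topic-level override (my-topic is a hypothetical topic name; retention.ms=-1 means no time limit):
kafka-configs.sh --zookeeper zookeeper:2181 --entity-type topics --entity-name my-topic --alter --add-config retention.ms=-1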
I'm sure you have a good reason for "disabling" retention, but I would suggest trying compacted topics first, for example:
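A compacted topic keeps the latest record per key instead of deleting data by age. A minimal sketch of creating one (the topic name, partition and replication counts are placeholders):
kafka-topics.sh --zookeeper zookeeper:2181 --create --topic my-compacted-topic --partitions 1 --replication-factor 1 --config cleanup.policy=compact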

log.retention.hours is a read-only property at the broker level, so it can't be changed dynamically using kafka-configs.sh.
Change it in server.properties and restart the brokers.
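A minimal sketch of that static change, assuming you want the "never delete" behavior the question is after (log.retention.ms takes precedence over log.retention.hours, and -1 means no time limit); apply it on every broker, then do a rolling restart:
# server.properties (excerpt)
log.retention.ms=-1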
Here are the details on which broker configs are read-only and which are dynamic:
https://kafka.apache.org/documentation/#dynamicbrokerconfigs

Related

Not Able to Increase Replication Factor of Kafka Topic

We have a 3-node Kafka/ZooKeeper cluster set up, with Kafka and ZooKeeper communicating over SSL.
We are currently using Apache Kafka 2.5 and ZooKeeper 3.5.7. We are trying to increase the replication factor of Kafka topics using the method below:
To increase the number of replicas for a given topic you have to:
Specify the extra replicas in a custom reassignment JSON file. For example, you could create increase-replication-factor.json and put this content in it:
{"version":1,
"partitions":[
{"topic":"signals","partition":0,"replicas":[0,1,2]},
{"topic":"signals","partition":1,"replicas":[0,1,2]},
{"topic":"signals","partition":2,"replicas":[0,1,2]}
]}
Use the file with the --execute option of the kafka-reassign-partitions tool
For example:
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --execute --command-config zookeeper_client.properties
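Once the reassignment has been kicked off, the same file can be used to check progress by rerunning the tool with --verify in place of --execute:
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --verify --command-config zookeeper_client.properties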
But we are facing a problem while running kafka-reassign-partitions: the connection to ZooKeeper fails with the error below:
Save this to use as the --reassignment-json-file option during rollback
Partitions reassignment failed due to KeeperErrorCode = NoAuth for
/admin/reassign_partitions
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
for /admin/reassign_partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1644)
at kafka.zk.KafkaZkClient.createPartitionReassignment(KafkaZkClient.scala:871)
We don't have any ACLs created on the topic, yet we still get this issue.

Error while fetching metadata kafka {test=LEADER_NOT_AVAILABLE}?

I get the error after running these simple commands:
I started the ZooKeeper and Kafka servers, then I executed the command:
./kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
and execute the command:
./kafka-console-producer --broker-list localhost:9092 --topic test
I obtain a list of WARN like:
[2019-12-08 21:36:13,024] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 37 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What did I do wrong?
Thanks
If your broker has auto.create.topics.enable set to true, then this error will be transient and you should be able to produce messages without any further errors.
It happens because the producer asks for metadata about the topic it wants to write to, but that topic doesn't exist in the cluster yet, so the partition leader (where the producer wants to write) doesn't exist either.
If you retry, the broker will create the topic and the command will work fine.
If the above configuration is set to false, the broker doesn't create the topic automatically on the first request from a client, so you have to create it upfront.
Finally, though it's not your case, the above error can even happen when the topic exists but, for example, the broker that is leader for the specific topic partition is down and a new leader election is in progress.
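In that last case, a quick way to check whether the topic exists and every partition has a leader is to describe it (assuming the same localhost setup as in the question):
./kafka-topics --describe --zookeeper localhost:2181 --topic test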
I wanted to add a comment, but it seems I can't. Just go through this link: someone had a similar problem, and it seems the cause may not be what you did but something different.
Link: https://grokbase.com/t/kafka/users/134qvay38q/leadernotavailable-exception

Missing required argument "[zookeeper]"

I'm trying to start a consumer using Apache Kafka. It used to work well, but I had to format my PC and reinstall everything, and now when trying to run this:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
this is what I'm getting:
Missing required argument "[zookeeper]"
Option Description
------ -----------
--blacklist <blacklist> Blacklist of topics to exclude from
consumption.
--bootstrap-server <server to connect
to>
--consumer.config <config file> Consumer config properties file.
--csv-reporter-enabled If set, the CSV metrics reporter will
be enabled
--delete-consumer-offsets If specified, the consumer path in
zookeeper is deleted when starting up
--formatter <class> The name of a class to use for
formatting kafka messages for
display. (default: kafka.tools.
DefaultMessageFormatter)
--from-beginning If the consumer does not already have
an established offset to consume
from, start with the earliest
message present in the log rather
than the latest message.
--key-deserializer <deserializer for
key>
--max-messages <Integer: num_messages> The maximum number of messages to
consume before exiting. If not set,
consumption is continual.
--metrics-dir <metrics directory> If csv-reporter-enable is set, and
this parameter isset, the csv
metrics will be outputed here
--new-consumer Use the new consumer implementation.
--property <prop>
--skip-message-on-error If there is an error when processing a
message, skip it instead of halt.
--timeout-ms <Integer: timeout_ms> If specified, exit if no message is
available for consumption for the
specified interval.
--topic <topic> The topic id to consume on.
--value-deserializer <deserializer for
values>
--whitelist <whitelist> Whitelist of topics to include for
consumption.
--zookeeper <urls> REQUIRED: The connection string for
the zookeeper connection in the form
host:port. Multiple URLS can be
given to allow fail-over.
My guess is that there's some kind of problem with the ZooKeeper connection port, because it's telling me to specify the port ZooKeeper has to use to connect to Kafka. I'm not sure about this, though, and don't know how to figure out which port to specify if that is the problem. Any suggestions?
Thanks in advance for the help.
It looks like you are using an old version of the Kafka tools, which requires setting --new-consumer if you want to connect directly to the brokers.
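With the version you have, adding that flag to your original command should be enough (a sketch, assuming the rest of your setup is unchanged):
bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test --from-beginning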
I'd recommend picking a recent version of Kafka so you only need to specify --bootstrap-server like in your example: http://kafka.apache.org/downloads

How can you set the max.message.bytes of a state store changelog topic?

I have a Kafka Streams application with messages up to 10MiB. I want to persist these messages in a state store, but Kafka Streams fails to produce to the internal changelog topic:
2017-11-17 08:36:19,792 ERROR RecordCollectorImpl - task [4_5] Error sending record to topic appid-statestorename-state-store-changelog. No more offsets will be recorded for this task and the exception will eventually be thrown
org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
2017-11-17 08:36:20,583 ERROR StreamThread - stream-thread [StreamThread-1] Failed while executing StreamTask 4_5 due to flush state:
Judging by some added logging, it looks like the default max.message.bytes setting of an internal topic is 1MiB.
The default max.message.bytes for the cluster is set to 50MiB.
Is it possible to tweak the configuration of internal topics of Kafka Streams applications?
A work-around is to start the streams application, let it create the topics, and afterwards alter the topic config. But this feels like a dirty hack.
./kafka-topics.sh --zookeeper ... \
--alter --topic appid-statestorename-state-store-changelog \
--config max.message.bytes=10485760
Kafka 1.0 allows you to specify custom topic properties for internal topics via StreamsConfig.
You prefix those configs with "topic." and can use any of the configs defined in TopicConfig, as in the sketch below.
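A minimal sketch, assuming the application loads its StreamsConfig from a properties file (the names and values here are placeholders):
# streams.properties (hypothetical excerpt)
application.id=appid
bootstrap.servers=broker:9092
# the "topic." prefix applies this TopicConfig key to the app's internal topics
topic.max.message.bytes=10485760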
See the original KIP for more details:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-173%3A+Add+prefix+to+StreamsConfig+to+enable+setting+default+internal+topic+configs

Kafka Remote Producer - advertised.listeners

I am running Kafka 0.10.0 on CDH 5.9; the cluster is kerberized.
What I am trying to do is to write messages from a remote machine to my Kafka broker.
The cluster (where Kafka is installed) has internal as well as external IP addresses.
The machines' hostnames within the cluster resolve to the private IPs; the remote machine resolves the same hostnames to the public IP addresses.
I opened the necessary port 9092 (I am using the SASL_PLAINTEXT protocol) from the remote machine to the Kafka broker and verified that using telnet.
First Step - in addition to the standard properties for the Kafka Broker, I configured the following:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<hostname>:9092
I am able to start the console consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
I am able to use my custom producer from another machine within the cluster.
Relevant excerpt of producer properties:
security.protocol=SASL_PLAINTEXT
bootstrap.servers=<hostname>:9092
I am not able to use my custom producer from the remote machine:
Exception org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for <topicname>-<partition>
using the same producer properties. I am able to telnet to the Kafka broker from that machine, and /etc/hosts includes the hostnames and public IPs.
Second Step - I modified server.properties:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerInternalIP>:9092
consumer & producer within the same cluster still run fine (bootstrap
servers are now the internal IP with port 9092)
as expected remote producer fails (but that is obvious given that it
is not aware of the internal IP addresses)
Third Step - where it gets hairy :(
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <hostname>:9092 --consumer.config consumer.properties
gives me a warning, but I don't think this is right...
WARN clients.NetworkClient: Error while fetching metadata with correlation id 1 : {<topicname>=LEADER_NOT_AVAILABLE}
starting my consumer with
kafka-console-consumer --new-consumer --topic <topicname> --from-beginning --bootstrap-server <KafkaBrokerPublicIP>:9092 --consumer.config consumer.properties
just hangs after those log messages:
INFO utils.AppInfoParser: Kafka version : 0.10.0-kafka-2.1.0
INFO utils.AppInfoParser: Kafka commitId : unknown
It seems like it cannot find a coordinator; in the normal flow this would be the next log line:
INFO internals.AbstractCoordinator: Discovered coordinator <hostname>:9092 (id: <someNumber> rack: null) for group console-consumer-<someNumber>.
starting the producer on a cluster node with bootstrap.servers=<hostname>:9092
I observe the same as with the consumer:
WARN NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
starting the producer on a cluster node with bootstrap.servers=<kafkaBrokerPublicIP>:9092 I get
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
starting the producer on my remote machine with either bootstrap.servers=<hostname>:9092 or bootstrap.servers=<kafkaBrokerPublicIP>:9092 I get
NetworkClient:600 - Error while fetching metadata with correlation id 0 : {<topicname>=LEADER_NOT_AVAILABLE}
I have been struggling for the past three days to get this to work, but I am out of ideas :/ My understanding is that advertised.listeners serves exactly this purpose, but either I am doing something wrong or there is something wrong in the machine setup.
Any hints are very much appreciated!
I ran into this issue recently.
In my case, I had enabled Kafka ACLs; after disabling them by commenting out these two configurations, the problem was worked around:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
And here is a thread that may help you, I think:
https://gist.github.com/jorisdevrede/a7933a99251452bb1867
What is mentioned in it at the end:
If you only use a SASL_PLAINTEXT listener on the Kafka Broker, you have to make sure that you have set security.inter.broker.protocol=SASL_PLAINTEXT too, otherwise you will get a LEADER_NOT_AVAILABLE error in the client.
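Applied to the third step above, a minimal server.properties sketch for that single-listener setup would then be (the public IP is a placeholder, and this combination is an assumption on my part, not a verified fix):
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://<kafkaBrokerPublicIP>:9092
# per the quoted thread: inter-broker traffic must use the same SASL listener
security.inter.broker.protocol=SASL_PLAINTEXT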