I am trying to print avro messages on a kafka topic using kafka-avro-console-consumer in a log4j format.
For that I use the following kafka-avro-console-consumer command:
bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic avro-test --property print.key=true --formatter kafka.tools.LoggingMessageFormatter
I have exported KAFKA_OPTS via the following command:
export $KAFKA_OPTS= -Dlog4j.configuration=file:/path/to/file/kafka-console-consumer-log4j.properties
Now, if I run the regular kafka-console-consumer using the following command:
bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic avro-test --property print.key=true --formatter kafka.tools.LoggingMessageFormatter
I am able to produce log4j-formatted output:
[2018-07-17 19:09:40,514] INFO [Consumer clientId=consumer-1, groupId=console-consumer-10597] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-07-17 19:09:40,522] INFO [Consumer clientId=consumer-1, groupId=console-consumer-10597] Successfully joined group with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator)
[2018-07-17 19:09:40,523] INFO [Consumer clientId=consumer-1, groupId=console-consumer-10597] Setting newly assigned partitions [avro-test-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2018-07-17 19:09:40,531] INFO [Consumer clientId=consumer-1, groupId=console-consumer-10597] Resetting offset for partition avro-test-0 to offset 23. (org.apache.kafka.clients.consumer.internals.Fetcher)
However, this formatting option does not kick in if I use the Avro consumer via the following command:
bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic avro-test --property print.key=true --formatter kafka.tools.LoggingMessageFormatter
It just falls back to the default formatter.
Is there something I may be missing here?
I think if you override --formatter, you won't get Avro messages anymore, as kafka.tools.LoggingMessageFormatter doesn't understand how to deserialize Avro.
Ref - source code:
DEFAULT_AVRO_FORMATTER="--formatter io.confluent.kafka.formatter.AvroMessageFormatter"
...
for OPTION in "$@"
do
  case $OPTION in
    --formatter)
      DEFAULT_AVRO_FORMATTER=""
...
exec $(dirname $0)/schema-registry-run-class kafka.tools.ConsoleConsumer $DEFAULT_AVRO_FORMATTER ...
So it should run kafka.tools.ConsoleConsumer --formatter kafka.tools.LoggingMessageFormatter, as expected, because the default is being unassigned, and schema-registry-run-class is picking up KAFKA_OPTS. But you must not have spaces or a dollar sign on your export line:
export KAFKA_OPTS='-Dlog4j.configuration=file:/path/to/file/kafka-console-consumer-log4j.properties'
bin/kafka-avro-console-consumer ...
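For reference, a minimal sketch of what kafka-console-consumer-log4j.properties could contain (assumed contents, not taken from the question) to yield the [timestamp] LEVEL message (logger) lines shown above:

# kafka-console-consumer-log4j.properties -- minimal sketch (assumed)
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n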
Related
I got this error when my Apache Beam application connects to my Kafka cluster with ACL enabled. Please help me fix this issue.
Caused by: java.io.IOException: Reader-4: Timeout while initializing partition 'test-1'. Kafka client may not be able to connect to servers.
org.apache.beam.sdk.io.kafka.KafkaUnboundedReader.start(KafkaUnboundedReader.java:128)
org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:779)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:76)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
I have a Kafka cluster with 3 nodes on GKE. I created a topic with replication factor 3 and 5 partitions.
kafka-topics --create --zookeeper zookeeper:2181 \
--replication-factor 3 --partitions 5 --topic topic
I set read permission on the topic test for the test_consumer_group consumer group.
kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
--add --allow-principal User:CN=myuser.test.io --consumer \
--topic test --group 'test_consumer_group'
In my Apache Beam application, I set the configuration group.id=test_consumer_group.
Testing with the console consumer does not work either:
$ docker run --rm -v `pwd`:/cert confluentinc/cp-kafka:5.1.0 \
kafka-console-consumer --bootstrap-server kafka.xx.xx:19092 \
--topic topic --consumer.config /cert/client-ssl.properties
[2019-03-08 05:43:07,246] WARN [Consumer clientId=consumer-1, groupId=test_consumer_group] Received unknown topic or partition error in ListOffset request for partition test-3 (org.apache.kafka.clients.consumer.internals.Fetcher)
Seems like a communication issue between your Kafka readers and the brokers: "Kafka client may not be able to connect to servers".
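It is also worth verifying which ACLs actually landed; a hedged sketch using the same authorizer CLI as in the question:

kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181 \
  --list --topic test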
I have 3 ZooKeeper instances and 3 Kafka instances.
The configuration is all fine and I can see they are properly connected to each other.
Then I create a new topic using the command below:
kafka-topics.sh --create --zookeeper 10.XXX.XXX.XX:2181,10.XXX.XXX.XX:2182,10.XXX.XXX.XX:2183 --replication-factor 3 --partitions 1 --topic <topicName>
But as soon as I create a consumer using:
kafka-console-consumer.sh --bootstrap-server 10.XXX.XXX.XX:9094,10.XXX.XXX.XX:9095,10.XXX.XXX.XX:9096 \
  --topic ankit108 --consumer-property group.id=test1
I get the errors below:
ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-32 to broker 2:
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
I am getting the error below while trying to send messages through the Kafka console producer and consumer.
The commands I entered:
Console 1 (producer):
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-topics.sh --create --zookeeper ip-172-31-20-58.ec2.internal:2181 --replication-factor 1 --partitions 1 --topic testuday1234
kafka-console-producer.sh --broker-list ip-172-31-20-58.ec2.internal:6667 --topic testuday1234
Console 2 (consumer):
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin
kafka-console-consumer.sh --zookeeper localhost:2181 --topic testuday1234 --from-beginning
Please help me resolve these errors.
Error I am getting in the producer console:
[udaychitukula6587@ip-172-31-38-183 ~]$ kafka-console-producer.sh --broker-list ip-172-31-20-58.ec2.internal:6667 --topic testuday1234
hi
[2018-05-28 15:27:36,761] WARN Error while fetching metadata [{TopicMetadata for topic testuday1234 -> No partition metadata for topic testuday1234 due to kafka.common.LeaderNotAvailableException}] for topic [testuday1234]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
[2018-05-28 15:27:36,774] WARN Error while fetching metadata [{TopicMetadata for topic testuday1234 -> No partition metadata for topic testuday1234 due to kafka.common.LeaderNotAvailableException}] for topic [testuday1234]: class kafka.common.LeaderNotAvailableException (kafka.producer.BrokerPartitionInfo)
Error I am getting in the consumer console:
[udaychitukula6587@ip-172-31-38-183 ~]$ kafka-console-consumer.sh --zookeeper localhost:2181 --topic testuday123 --from-beginning
{metadata.broker.list=ip-172-31-20-58.ec2.internal:6667,ip-172-31-53-48.ec2.internal:6667,ip-172-31-60-179.ec2.internal:6667, request.timeout.ms=30000, client.id=console-consumer-63526, security.protocol=PLAINTEXT}
{metadata.broker.list=ip-172-31-20-58.ec2.internal:6667,ip-172-31-53-48.ec2.internal:6667,ip-172-31-60-179.ec2.internal:6667, request.timeout.ms=30000, client.id=console-consumer-63526, security.protocol=PLAINTEXT}
There are a couple of things I've noted here.
First, in newer versions of Kafka (I think from 0.10.1), the console consumer needs the --bootstrap-server option rather than --zookeeper. Could you please confirm the version you're using, and also try running the consumer with --bootstrap-server?
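For example, a sketch using the broker host and port from your producer command (the broker must be reachable on 6667):

kafka-console-consumer.sh --bootstrap-server ip-172-31-20-58.ec2.internal:6667 \
  --topic testuday1234 --from-beginning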
Second, for the producer in such a scenario, I would recommend checking three things to confirm where the issue might be:
The controller of a Kafka cluster is elected via ZooKeeper, so it might be worth running a zookeeper-client shell to check that each broker is registered (under the znode path /brokers/ids/[brokerId]) and that there is an active controller; see the sketch after this list.
Try running kafka-topics --describe --topic <topic> to see whether the topic has an active leader for each partition, i.e. the Leader column in the output should NOT be None (also shown in the sketch below). I've run into this myself before.
The last one is the broker's port number: please check and confirm that the broker is actually listening on that port. You'll find this information (listeners and advertised.listeners) in the server.properties file on the broker. I found a post you might find useful where the user had a problem with port 6667.
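A hedged sketch of the first two checks, reusing the ZooKeeper host from the question (assumes the standard Kafka CLI tools are on your PATH):

# check broker registration and the active controller in ZooKeeper
zookeeper-shell.sh ip-172-31-20-58.ec2.internal:2181
# then, at the shell prompt:
#   ls /brokers/ids    -> should list every broker id
#   get /controller    -> should name the active controller

# check that every partition of the topic has a leader (Leader must not be None)
kafka-topics.sh --zookeeper ip-172-31-20-58.ec2.internal:2181 \
  --describe --topic testuday1234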
I hope this helps!
Kafka version: kafka_2.11-0.11.0.1
When I try to send messages from producer to consumer in a multi-node Kafka setup, the message gets sent and broadcast by the Kafka server! These messages can be read by a Kafka consumer via ZooKeeper [which is going to be deprecated in a following version!], and also via kafka-simple-consumer-shell.sh:
./bin/kafka-simple-consumer-shell.sh --broker-list <broker-ip>:9092 --topic myTopic
But they cannot be read by a consumer via --bootstrap-server!
Call to the consumer via the ZooKeeper server:
./bin/kafka-console-consumer.sh --topic myTopic --zookeeper <broker-ip>:2181 --from-beginning
Call to the consumer via the bootstrap server:
./kafka-console-consumer.sh --topic myTopic --bootstrap-server <broker-ip>:9092 --consumer.config ../config/consumer.properties
My config/producer.properties contains:
bootstrap.servers= {broker-ip}:9092
compression.type=none
My config/consumer.properties contains almost nothing (only the group id is active):
# zookeeper.connect={private-ip-1}:2181,{private-ip-2}:2181
# timeout in ms for connecting to zookeeper
# zookeeper.connection.timeout.ms=6000
#consumer group id
group.id=test-consumer-group
#consumer timeout
# consumer.timeout.ms=7000
# consumer.timeout.ms=-1
My config/server.properties contains:
listeners=PLAINTEXT://{private-ip-1}:9092
advertised.host.name={private-ip-1}
advertised.listeners=PLAINTEXT://{private-ip-1}:9092
advertised.port=9092
host.name={private-ip-1}
zookeeper.connect={private-ip-1}:2181
I'm working on setting up simple Kafka authentication using SASL plain text and adding ACL authorization. But I have an issue when I try to consume data:
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : b8642491e78c5a13
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 1 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 2 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 3 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 4 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 5 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 6 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 7 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 8 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 9 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
[main] WARN org.apache.kafka.clients.NetworkClient - Error while fetching metadata with correlation id 10 : {test-topic=TOPIC_AUTHORIZATION_FAILED}
Next, you can see my configuration files.
server.properties
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none
consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-secret";
};
Environment variable:
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_server_jaas.conf"
Commands
Set ACL:
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation All --group test-consumer-group --topic test-topic
Start the Kafka server:
./bin/kafka-server-start.sh config/server.properties
Start Producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
Start Consumer:
bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
When I try to start the consumer, I get the issue described above. Also, in the Kafka logs, I have this:
[2016-10-22 20:17:14,091] ERROR [KafkaApi-0] Error when handling request {group_id=test-consumer-group} (kafka.server.KafkaApis)
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:117)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:403)
at kafka.server.KafkaApis.kafka$server$KafkaApis$$createTopic(KafkaApis.scala:629)
at kafka.server.KafkaApis.kafka$server$KafkaApis$$createGroupMetadataTopic(KafkaApis.scala:651)
at kafka.server.KafkaApis$$anonfun$getOrCreateGroupMetadataTopic$1.apply(KafkaApis.scala:657)
at kafka.server.KafkaApis$$anonfun$getOrCreateGroupMetadataTopic$1.apply(KafkaApis.scala:657)
at scala.Option.getOrElse(Option.scala:121)
at kafka.server.KafkaApis.getOrCreateGroupMetadataTopic(KafkaApis.scala:657)
at kafka.server.KafkaApis.handleGroupCoordinatorRequest(KafkaApis.scala:818)
at kafka.server.KafkaApis.handle(KafkaApis.scala:86)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
How can I fix this?
Issue fixed by separating the client JAAS file from the server JAAS file.
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-secret";
};
In one terminal, export the server JAAS config file and start the Kafka broker:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_server_jaas.conf"
$ ./bin/kafka-server-start.sh config/server.properties
In a client terminal, export the client JAAS config file and start the consumer:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
If you also want to produce, do this in another terminal window:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/user/kafka_2.10-0.10.0.1/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties
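As an aside, with the new consumer the --zookeeper flag should not be needed; a simplified sketch of the consumer invocation (an untested assumption for this version):

$ ./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 \
    --topic test-topic --from-beginning --consumer.config=config/consumer.properties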
I have faced a similar issue using ACLs in Kafka v0.10. I found this discussion helpful, especially enabling the authorization log in order to check the incoming username on each request against what is specified in your ACLs.
First, check whether the server principal admin is granted all the authorization it needs. The server principal needs to be allowed to perform all types of operations on all topics and groups, as well as on the cluster. It's best to declare admin among the super users in the server.properties file. If this doesn't resolve the issue, you can enable the authorization log to find out which principal is being denied which operation.
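A sketch of that server.properties addition, assuming the admin principal from the JAAS file above:

# server.properties
super.users=User:admin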
The authorization log can be enabled by modifying log4j.properties in the config folder: change WARN to DEBUG on the following line and restart the Kafka servers.
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
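For context, that logger is wired to a dedicated appender in the stock config/log4j.properties; the surrounding section looks roughly like this (exact contents may vary by version):

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false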
This helped me sort out my issue. Hope that helps!
PS: The authorization logs generated will be very lengthy and consume a lot of space, so remember to turn this off when you're done debugging.
It seems you have created a topic with a replication factor of 3, but you only have 1 broker running. Try creating the topic with --replication-factor 1. You might also want to change the default replication factor to 1 (default.replication.factor in config/server.properties) if you are creating topics automatically.
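A sketch of the relevant single-broker settings; note that the stack trace above shows the broker auto-creating the group metadata topic (__consumer_offsets), whose replication factor is governed by a separate property (an assumption worth verifying for your Kafka version):

# config/server.properties -- single-broker sketch
default.replication.factor=1
offsets.topic.replication.factor=1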