I’m trying to get Kafka in KRaft mode up and running with SASL_PLAINTEXT.
I’ve got a functioning Kafka broker/controller running locally without SASL, using this server.properties:
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
advertised.listeners=PLAINTEXT://:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
I’ve bound port 9092 in the Kafka Docker container to port 9092 on the host, and these commands work:
kafka-topics.sh --list --bootstrap-server localhost:9092
kafka-topics.sh --bootstrap-server localhost:9092 --topic test --create --partitions 2 --replication-factor 1
Works like a charm, and I can produce and consume. Docker container logs looks good as well.
I need some users to handle ACLs on our topics, so I thought it would be easy to just replace all the PLAINTEXT fields with SASL_PLAINTEXT. I was wrong!
We handle encryption at another layer, so SASL_PLAINTEXT is sufficient; we don't need SASL_SSL.
This is the config/kraft/sasl_server.properties I've been trying so far, with no luck.
I constructed this properties file by reading https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_plain.html
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9094
listeners=SASL_PLAINTEXT://:9092,CONTROLLER://:9094
advertised.listeners=SASL_PLAINTEXT://:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:SASL_PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
listener.name.controller.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret" \
user_admin="admin-secret";
plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret";
I’m getting this error:
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'controller.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
What am I doing wrong here?
process.roles=$KAFKA_PROCESS_ROLES
node.id=$KAFKA_NODE_ID
controller.quorum.voters=$KAFKA_CONTROLLER_QUORUM_VOTERS
listeners=BROKER://:9092,CONTROLLER://:9093
advertised.listeners=BROKER://:9092
listener.security.protocol.map=BROKER:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT
inter.broker.listener.name=BROKER
controller.listener.names=CONTROLLER
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.controller.protocol=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
listener.name.broker.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="$KAFKA_ADMIN_PASSWORD" \
user_admin="$KAFKA_ADMIN_PASSWORD";
listener.name.controller.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="$KAFKA_ADMIN_PASSWORD" \
user_admin="$KAFKA_ADMIN_PASSWORD";
The configuration above is a working one.
Setting sasl.mechanism.controller.protocol=PLAIN was the important part.
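Once SASL is enabled on the broker, the CLI tools also need client-side SASL settings or they will fail to connect. A minimal client config sketch, assuming the admin/admin-secret credentials from the broker config above (the filename client.properties is just illustrative):

```properties
# client.properties - client-side SASL settings for the CLI tools
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret";
```

Then pass it to the tools with --command-config, e.g. kafka-topics.sh --list --bootstrap-server localhost:9092 --command-config client.properties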
I have a Kafka topic that receives a very large number of messages. Many of them have the same key, and I'm interested only in the latest message per key. This topic seems perfect for the config log.cleanup.policy=compact.
Can I add/change log.cleanup.policy=compact on the existing Kafka topic, or do I have to create a new topic?
I tried to create a new topic with:
bin/kafka-topics.sh --create --topic compactedPricing \
--partitions 1 --replication-factor 1 \
--config cleanup.policy=compact \
--config min.cleanable.dirty.ratio=0.001 \
--config segment.ms=5000 \
--bootstrap-server localhost:9094
but I received the following error:
[2022-10-25 16:01:40,114] ERROR org.apache.kafka.common.errors.TopicAuthorizationException: Authorization failed.
So, is a particular authorization necessary?
And, in any case, how can I set it up?
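For reference, topic configs can usually be changed in place with kafka-configs.sh rather than recreating the topic. A sketch, assuming the broker at localhost:9094 and the topic name compactedPricing from the attempt above:

```shell
# alter an existing topic's configs in place (no recreation needed)
bin/kafka-configs.sh --bootstrap-server localhost:9094 \
  --alter --entity-type topics --entity-name compactedPricing \
  --add-config cleanup.policy=compact,min.cleanable.dirty.ratio=0.001,segment.ms=5000
```

Note that with an authorizer enabled, both creating a topic and altering its configs require the principal to hold the corresponding ACLs (Create on the cluster/topic, AlterConfigs on the topic), which is the likely cause of the TopicAuthorizationException.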
Using the kafka-avro-console-producer CLI
When trying the following command:
kafka-avro-console-producer \
--broker-list <broker-list> \
--topic <topic> \
--property schema.registry.url=http://localhost:8081 \
--property value.schema.id=419
I have this error
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema {...}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Internal Server Error; error code: 500
I can’t understand why it is trying to register the schema, since the schema already exists and I’m referencing it by its ID in the registry.
Note: my schema registry is in READ_ONLY mode, but as I said, that should not be an issue, right?
Basically I needed to tell the producer not to try to auto-register the schema, using this property:
--property auto.register.schemas=false
Found the answer here: Use kafka-avro-console-producer without autoregister the schema
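Putting it together, the full invocation looks like this (same placeholders as in the question; auto.register.schemas is a Confluent Avro serializer setting passed through --property):

```shell
kafka-avro-console-producer \
  --broker-list <broker-list> \
  --topic <topic> \
  --property schema.registry.url=http://localhost:8081 \
  --property value.schema.id=419 \
  --property auto.register.schemas=false
```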
I am trying to set producer and consumer quotas in Kafka. I have started ZooKeeper and the Kafka server and am trying to change the Kafka config:
`bin\windows\kafka-configs.bat --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default`
It shows me:
Only quota configs can be added for 'clients' using --bootstrap-server. Unexpected config names: Set('producer_byte_rate)
When using the Kafka binaries for Windows, you need to run the command without the single quotes:
bin\windows\kafka-configs.bat --bootstrap-server localhost:9092 --alter --add-config producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200 --entity-type clients --entity-default
Completed updating default config for clients in the cluster.
or with double quotes:
kafka-configs.bat --bootstrap-server localhost:9092 --alter --add-config "producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200" --entity-type clients --entity-default
Completed updating default config for clients in the cluster.
I want to configure SASL/SCRAM in Kafka. I found this resource [1] (slide 52, "Create SCRAM Users") on SlideShare, and when creating users I get the following error.
Error while executing config command requirement failed: Invalid
entity config: all configs to be added must be in the format
"key=val".
java.lang.IllegalArgumentException: requirement failed: Invalid entity config: all configs to be added must be in the format "key=val".
at scala.Predef$.require(Predef.scala:233)
at kafka.admin.ConfigCommand$.parseConfigsToBeAdded(ConfigCommand.scala:128)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:78)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:65)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Here is the configuration that I run.
#!/usr/bin/env bash
SCRAM_CONFIG='SCRAM-SHA-256=[iterations=8192,password=kafka123]'
SCRAM_CONFIG="$SCRAM_CONFIG,SCRAM-SHA-512=[password=kafka123]"
./kafka-configs.sh --alter --add-config "$SCRAM_CONFIG" --entity-type users --entity-name stocks_consumer --zookeeper localhost:2181
./kafka-configs.sh --alter --add-config "$SCRAM_CONFIG" --entity-type users --entity-name stocks_producer --zookeeper localhost:2181
./kafka-configs.sh --alter --add-config "$SCRAM_CONFIG" --entity-type users --entity-name admin --zookeeper localhost:2181
I could not find any solution for this, so I would appreciate a hint to get this working.
EDIT: I'm using Kafka 0.10.1.0 (kafka_2.10-0.10.1.0)
Thanks
[1] https://www.slideshare.net/JeanPaulAzar1/kafka-tutorial-kafka-security
I was able to solve the problem.
Support for SASL/SCRAM was added in Kafka 0.10.2 [1].
I ran the same commands on that version and they worked properly.
Thanks.
[1] https://archive.apache.org/dist/kafka/0.10.2.0/RELEASE_NOTES.html
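On recent Kafka versions (2.7+, where SCRAM credential management over the wire was added and the --zookeeper flag was later removed), the equivalent goes through --bootstrap-server. A sketch, with broker address and password purely illustrative:

```shell
# create SCRAM credentials for a user via the broker instead of ZooKeeper
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=kafka123],SCRAM-SHA-512=[password=kafka123]' \
  --entity-type users --entity-name admin
```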
With Kafka 0.8.1.1, how do I change the log retention time while it's running? The documentation says the property is log.retention.hours, but trying to change it using kafka-topics.sh returns this error
$ bin/kafka-topics.sh --zookeeper zk.yoursite.com --alter --topic as-access --config topic.log.retention.hours=24
Error while executing topic command requirement failed: Unknown configuration "topic.log.retention.hours".
java.lang.IllegalArgumentException: requirement failed: Unknown configuration "topic.log.retention.hours".
at scala.Predef$.require(Predef.scala:145)
at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:138)
at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:631)
at scala.collection.JavaConversions$JEnumerationWrapper.foreach(JavaConversions.scala:479)
at kafka.log.LogConfig$.validateNames(LogConfig.scala:137)
at kafka.log.LogConfig$.validate(LogConfig.scala:145)
at kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:171)
at kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:95)
at kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:93)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:93)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:52)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
log.retention.hours is a property of a broker which is used as a default value when a topic is created. When you change configurations of currently running topic using kafka-topics.sh, you should specify a topic-level property.
A topic-level property for log retention time is retention.ms.
From Topic-level configuration in Kafka 0.8.1 documentation:
Property: retention.ms
Default: 7 days
Server Default Property: log.retention.minutes
Description: This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.
So the correct command depends on your Kafka version. Up to 0.8.2 (although the docs still show its use up to 0.10.1), use kafka-topics.sh --alter; from 0.10.2 onward (or perhaps from 0.9.0), use kafka-configs.sh --alter:
$ bin/kafka-topics.sh --zookeeper zk.yoursite.com --alter --topic as-access --config retention.ms=86400000
You can check whether the configuration is properly applied with the following command.
$ bin/kafka-topics.sh --describe --zookeeper zk.yoursite.com --topic as-access
Then you will see something like below.
Topic:as-access PartitionCount:3 ReplicationFactor:3 Configs:retention.ms=86400000
The following is the right way to alter topic config as of Kafka 0.10.2.0:
bin/kafka-configs.sh --zookeeper <zk_host> --alter --entity-type topics --entity-name test_topic --add-config retention.ms=86400000
Topic config alter operations have been deprecated for bin/kafka-topics.sh.
WARNING: Altering topic configuration from this script has been deprecated and may be removed in future releases.
Going forward, please use kafka-configs.sh for this functionality
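On recent Kafka versions, where the --zookeeper flag has been removed from kafka-configs.sh entirely, the same alter goes through --bootstrap-server. A sketch, assuming a broker listening at localhost:9092:

```shell
# set a 24h retention (86400000 ms) on an existing topic via the broker
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name test_topic \
  --add-config retention.ms=86400000
```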
The correct config key is retention.ms
$ bin/kafka-topics.sh --zookeeper zk.prod.yoursite.com --alter --topic as-access --config retention.ms=86400000
Updated config for topic "my-topic".
I tested and used this command with Confluent 4.0.0 and Apache Kafka 1.0.0 and 1.0.1:
/opt/kafka/confluent-4.0.0/bin/kafka-configs --zookeeper XX.XX.XX.XX:2181 --entity-type topics --entity-name test --alter --add-config retention.ms=55000
test is the topic name.
I think it works well in other versions too.